Topology optimization (TO) is a popular and powerful computational approach for designing novel structures, materials, and devices. Two computational challenges have limited its applicability to many industrial applications. First, a TO problem often involves a large number of design variables to guarantee sufficient expressive power. Second, many TO problems require a large number of expensive physical model simulations, and those simulations cannot be parallelized. To address these challenges, we propose a general, scalable, deep-learning (DL) based TO framework, referred to as SDL-TO, which utilizes parallel CPU+GPU schemes to accelerate the TO process for designing additively manufactured (AM) materials. Unlike existing studies of DL for TO, our framework accelerates TO by learning from the iterative history data, simultaneously training on the mapping between a given design and its gradient. The surrogate gradient is learned using parallel computing on multiple CPUs combined with distributed DL training on multiple GPUs, and it enables a fast online update scheme in place of the expensive simulation-based update. By using a local sampling strategy, we reduce the intrinsic high dimensionality of the design space and improve both the training accuracy and the scalability of the SDL-TO framework. The method is demonstrated on benchmark examples and on AM materials design for heat conduction; it achieves performance competitive with the baseline methods while significantly reducing the computational cost, with a speedup of 8.6x over the standard TO implementation.
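Below is a minimal PyTorch sketch of the core idea described in the abstract: a surrogate network is fit on (design, gradient) pairs collected from earlier TO iterations, then used for fast online design updates in place of an expensive simulation-based sensitivity analysis. All names, dimensions, and the architecture (`GradientSurrogate`, `train_surrogate`, a plain MLP) are illustrative assumptions, not the authors' implementation; the distributed multi-CPU/multi-GPU training and the local sampling strategy are omitted here.

```python
import torch
import torch.nn as nn

# Hypothetical surrogate mapping a flattened design field to its
# sensitivity (gradient) field. Architecture and sizes are illustrative.
class GradientSurrogate(nn.Module):
    def __init__(self, n_design: int, hidden: int = 256):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(n_design, hidden), nn.ReLU(),
            nn.Linear(hidden, hidden), nn.ReLU(),
            nn.Linear(hidden, n_design),
        )

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        return self.net(x)


def train_surrogate(designs, gradients, epochs=100, lr=1e-3):
    """Fit the surrogate on (design, gradient) pairs collected from
    earlier TO iterations (the iterative history data)."""
    model = GradientSurrogate(designs.shape[1])
    opt = torch.optim.Adam(model.parameters(), lr=lr)
    loss_fn = nn.MSELoss()
    for _ in range(epochs):
        opt.zero_grad()
        loss = loss_fn(model(designs), gradients)
        loss.backward()
        opt.step()
    return model


# Toy usage: 64 history samples of a 1,000-variable design.
if __name__ == "__main__":
    n_hist, n_design = 64, 1000
    designs = torch.rand(n_hist, n_design)     # densities in [0, 1]
    gradients = torch.randn(n_hist, n_design)  # stand-in sensitivities
    surrogate = train_surrogate(designs, gradients)

    # Fast online update: query the learned gradient instead of running
    # an expensive physical simulation, then project back to [0, 1].
    x = torch.rand(n_design)
    with torch.no_grad():
        g_hat = surrogate(x.unsqueeze(0)).squeeze(0)
    step = 0.05
    x_new = torch.clamp(x - step * g_hat, 0.0, 1.0)
```

In a full pipeline, the history buffer would be refreshed as new simulations complete on the CPU workers, and the surrogate retrained on the GPUs; the sketch above only shows the single-process training and update steps.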
Author Information
Sirui Bi (Johns Hopkins University)
Jiaxin Zhang (Oak Ridge National Laboratory)
I am a research staff member in the Machine Learning and Data Analytics Group, Computer Science and Mathematics Division, at Oak Ridge National Laboratory (ORNL). My current research focuses on Artificial Intelligence for Science and Engineering (AISE). My broader interests include robust machine learning, uncertainty quantification, inverse problems, and numerical optimization.
Guannan Zhang (Oak Ridge National Laboratory)
More from the Same Authors
- 2020 : A Nonlocal-Gradient Descent Method for Inverse Design in Nanophotonics
  Sirui Bi · Jiaxin Zhang · Guannan Zhang
- 2021 : Self-Supervised Anomaly Detection via Neural Autoregressive Flows with Active Learning
  Jiaxin Zhang · Kyle Saleeby · Thomas Feldhausen · Sirui Bi · Alex Plotkowski · David Womble
- 2021 Poster: On the Stochastic Stability of Deep Markov Models
  Jan Drgona · Sayak Mukherjee · Jiaxin Zhang · Frank Liu · Mahantesh Halappanavar
- 2020 : 9 - Thermodynamic Consistent Neural Networks for Learning Material Interfacial Mechanics
  Jiaxin Zhang
- 2019 Poster: Learning nonlinear level sets for dimensionality reduction in function approximation
  Guannan Zhang · Jiaxin Zhang · Jacob Hinkle