"Engineers often start modeling components of their system using first principles. The real value of a first-principles model is that its results typically have a clear, explainable physical meaning, and its behaviors can often be parameterized. However, large-scale, high-fidelity nonlinear models can take hours or even days to simulate, and system analysis and design may require thousands or even hundreds of thousands of simulations to obtain meaningful results, posing a significant computational challenge for many engineering teams. Moreover, linearizing complex models can produce high-order linear models whose states do not contribute to the dynamics of interest in your application. In such situations, you can use reduced-order models to significantly speed up simulation and analysis of large-scale, higher-order systems.
In this talk, you will learn how to speed up the simulation of a complex model (a vehicle engine) by replacing a high-fidelity component with an AI-based reduced-order model (ROM). These models may be trained in the AI framework of your choice, including PyTorch, TensorFlow, or MATLAB. By performing a thorough design of experiments (DoE), you can obtain input-output data from the original high-fidelity first-principles model and use it to construct an AI-based ROM that accurately represents the underlying system. You will see how different modeling approaches can be explored, such as stacked LSTMs, neural ODEs, or nonlinear ARX models. If your goal is to test the design and performance of the other components in your system, you may want to run the components you are designing on the target hardware and run the AI model, instead of the original high-fidelity model, on a real-time computer. Once developed, the AI model is modular and reusable: your colleagues, whether local or at other sites, can use it in their own simulations and component tests, potentially accelerating parallel design and development of the overall system."
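To make the workflow concrete, here is a minimal sketch of the nonlinear ARX variant mentioned above: a stand-in "plant" function (playing the role of the high-fidelity model that DoE runs would normally exercise) generates input-output data, and a quadratic-feature least-squares regression fits a ROM that predicts the next output from lagged outputs and inputs. The plant dynamics, lag structure, and feature set are all hypothetical choices for illustration, not the method used in the talk.

```python
import numpy as np

def plant_step(y1, y2, u1):
    # Toy nonlinear system standing in for the first-principles model
    return 0.6 * y1 - 0.1 * y2 + 0.5 * u1 - 0.05 * y1 * y1

def make_features(y1, y2, u1, u2):
    # NARX regressors: constant, lagged signals, and their quadratic products
    base = np.column_stack([y1, y2, u1, u2])
    quad = np.column_stack([base[:, i] * base[:, j]
                            for i in range(4) for j in range(i, 4)])
    return np.column_stack([np.ones(len(y1)), base, quad])

rng = np.random.default_rng(0)
u = rng.uniform(-1.0, 1.0, 500)          # DoE-style excitation signal
y = np.zeros_like(u)
for k in range(2, len(u)):               # simulate the "high-fidelity" plant
    y[k] = plant_step(y[k - 1], y[k - 2], u[k - 1])

# Fit the ROM: regress y[k] on lagged outputs and inputs
X = make_features(y[1:-1], y[:-2], u[1:-1], u[:-2])
theta, *_ = np.linalg.lstsq(X, y[2:], rcond=None)

# One-step-ahead predictions from the fitted ROM
y_hat = X @ theta
rmse = float(np.sqrt(np.mean((y_hat - y[2:]) ** 2)))
print(f"one-step RMSE: {rmse:.2e}")
```

Because the toy plant lies in the span of the chosen features, the fit is essentially exact here; for a real engine model, one would compare held-out DoE runs and weigh accuracy against the ROM's evaluation cost.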