

Poster in Workshop: Machine Learning with New Compute Paradigms

Real-Time FJ/MAC PDE Solvers via Tensorized, Back-Propagation-Free Optical PINN Training

Yequan Zhao · Xian Xiao · Xinling Yu · Ziyue Liu · Zhixiong Chen · Geza Kurczveil · Raymond Beausoleil · Zheng Zhang


Abstract: Numerically solving partial differential equations (PDEs) often demands large amounts of computing time, energy, and hardware, which limits the use of PDE solvers in scenarios (e.g., autonomous systems, supersonic flows) that have a tight energy budget and require near real-time response. Leveraging optical/photonic computing, this paper develops an on-chip training framework for physics-informed neural networks (PINNs), aiming to solve high-dimensional PDEs with fJ/MAC power consumption and ultra-low latency. Despite the ultra-high speed of optical neural networks, training a PINN on an optical chip is hard due to (1) the large size of photonic devices and (2) the lack of scalable optical memory devices to store the intermediate results of back-propagation (BP). To enable realistic optical PINN training, this paper presents a BP-free method that avoids the BP process entirely. We also employ a tensor-compressed approach to improve the convergence and scalability of our optical PINN training. The framework combines tensorized optical neural networks (TONN) for scalable inference acceleration with MZI phase-domain tuning for in-situ optimization. Our simulation results on a 20-dimensional HJB PDE show that our photonic accelerator reduces the number of MZIs by a factor of 1.17 × 10³ and solves the equation with only 1.36 J of energy in 1.15 s. This is the first real-size optical PINN training framework that can be applied to solve high-dimensional PDEs.
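
The abstract does not specify which BP-free estimator is used, so the sketch below only illustrates the general idea with a simultaneous-perturbation (SPSA-style) zeroth-order update on a toy 1D Poisson problem: the PINN loss and the PDE derivatives are evaluated with forward passes and finite differences only, so no intermediate activations have to be stored for back-propagation. The network, loss, and toy PDE here are illustrative assumptions, not the paper's method.

```python
# Minimal sketch of back-propagation-free PINN training (assumed SPSA-style
# zeroth-order updates) on a toy problem: u'' + sin(x) = 0, u(0) = u(pi) = 0.
import numpy as np

rng = np.random.default_rng(0)
HIDDEN = 16

def mlp(theta, x):
    """Tiny 1-hidden-layer network; theta is a flat parameter vector."""
    w1 = theta[:HIDDEN].reshape(1, HIDDEN)
    b1 = theta[HIDDEN:2 * HIDDEN]
    w2 = theta[2 * HIDDEN:3 * HIDDEN].reshape(HIDDEN, 1)
    b2 = theta[3 * HIDDEN]
    return np.tanh(x[:, None] @ w1 + b1) @ w2 + b2

def loss(theta, x, h=1e-3):
    """PDE residual via central finite differences: forward passes only, no BP."""
    u, up, um = mlp(theta, x), mlp(theta, x + h), mlp(theta, x - h)
    u_xx = (up - 2 * u + um) / h**2
    residual = u_xx[:, 0] + np.sin(x)                 # interior residual
    bc = mlp(theta, np.array([0.0, np.pi]))[:, 0]     # boundary penalty
    return np.mean(residual**2) + np.mean(bc**2)

theta = 0.1 * rng.standard_normal(3 * HIDDEN + 1)
x = np.linspace(0.0, np.pi, 64)
for step in range(2000):
    delta = rng.choice([-1.0, 1.0], size=theta.shape)  # Rademacher perturbation
    mu = 1e-3
    # Single-sample zeroth-order gradient estimate from two loss evaluations;
    # averaging over several perturbations would reduce the estimator variance.
    g = (loss(theta + mu * delta, x) - loss(theta - mu * delta, x)) / (2 * mu) * delta
    theta -= 1e-3 * g                                  # BP-free descent step
```

Because the update uses only loss values from perturbed forward passes, it maps naturally onto hardware that can run fast inference but cannot store the activation traces that BP requires.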
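The tensor-compressed approach is likewise only named in the abstract. One common way to realize it is a tensor-train (TT) factorization of each weight matrix, so that only small cores, rather than a full dense matrix, need to be mapped onto MZI meshes, which is consistent with the MZI-count reduction the abstract reports. The sketch below reconstructs a dense matrix from made-up TT cores and compares parameter counts; the mode sizes and ranks are illustrative assumptions, not the paper's exact TONN factorization.

```python
# Toy tensor-train (TT) weight compression: a 256x256 layer stored as four
# small cores instead of one dense matrix (assumed illustrative shapes).
import numpy as np

def tt_to_matrix(cores):
    """Expand TT-matrix cores of shape (r_prev, I_k, J_k, r_next) into a dense matrix."""
    res = cores[0][0]                        # leading rank is 1 -> (I1, J1, r1)
    rows, cols = res.shape[0], res.shape[1]
    for core in cores[1:]:
        _, i_k, j_k, r_next = core.shape
        # Contract the shared rank, then merge row and column mode indices.
        res = np.einsum('pqr,rijs->piqjs', res, core)
        rows, cols = rows * i_k, cols * j_k
        res = res.reshape(rows, cols, r_next)
    return res[:, :, 0]                      # trailing rank is 1

# Factorize a 256x256 layer into modes 4x4x4x4 with TT-ranks of 8.
in_modes, out_modes, ranks = [4, 4, 4, 4], [4, 4, 4, 4], [1, 8, 8, 8, 1]
rng = np.random.default_rng(0)
cores = [rng.standard_normal((ranks[k], in_modes[k], out_modes[k], ranks[k + 1]))
         for k in range(4)]

W = tt_to_matrix(cores)                      # dense weight for reference
dense_params = W.size                        # 65,536 entries to map onto MZIs
tt_params = sum(c.size for c in cores)       # 2,304 entries in the compressed form
print(W.shape, dense_params / tt_params)     # ~28x fewer parameters in this toy case
```

In a photonic implementation, each small core (or its unitary factors) would be realized by a correspondingly small MZI mesh, which is where a tensorized network saves devices relative to implementing the full dense matrix.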
