Ph.D. Proposal
Tomoki Koike
(Advisor: Prof. Elizabeth Qian)
"Data-Driven Nonlinear Reduced Operator Learning with Stability Guarantees and Large-Scale Adaptation"
Monday, December 9
10:00 a.m.
Montgomery Knight Building 317
Abstract
Reduced model learning methods seek to infer reduced models (low-dimensional, computationally efficient models) for dynamical systems by leveraging experimental and simulation data together with prior knowledge of the system's model structure. These reduced models retain an accurate physical representation of nonlinear, high-dimensional dynamical systems and enable tasks that require many repeated simulations, such as optimization, uncertainty quantification, control, and design. However, these methods face two key challenges. First, the learned reduced models often fail to inherit essential physical principles such as energy conservation and stability, leading to unphysical and inaccurate long-term predictions. Second, because these methods require storing and processing all data simultaneously, they become impractical for large-scale problems: states with massive degrees of freedom and large numbers of snapshots lead to excessive storage and memory requirements.

This thesis proposes methods to overcome these limitations. To guarantee stability in the learned surrogate models, we propose a stable operator learning method that imposes structure derived from Lyapunov stability theory on reduced models learned via Operator Inference (OpInf). We also develop a new method to analyze the domain on which stability is ensured, providing quantitative stability guarantees. These approaches enhance the reliability, robustness, and long-term predictive capability of the surrogate models by preserving essential physical principles.

To address the challenges of large-scale settings, we propose a streaming model reduction method termed Streaming-OpInf. By integrating incremental singular value decomposition (iSVD) and recursive least squares (RLS) algorithms into the OpInf framework, this method incrementally constructs the reduced basis and computes the reduced operators from incoming data streams, eliminating the need to store all data simultaneously. This significantly reduces memory requirements and computational overhead, making model reduction feasible for large-scale systems. By ensuring stability and enabling the processing of extreme-scale data, the proposed methods extend the applicability of data-driven model reduction to a wider range of complex systems in science and engineering.

Committee
• Prof. Elizabeth Qian – School of Aerospace Engineering, Georgia Tech (advisor)
• Prof. Keegan Moore – School of Aerospace Engineering, Georgia Tech
• Prof. Benjamin Peherstorfer – Courant Institute of Mathematical Sciences, New York University
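To give a concrete sense of how Lyapunov-type structure can guarantee stability by construction, the sketch below parameterizes a learned linear reduced operator as a skew-symmetric part minus a positive definite part. This is a minimal, generic illustration of the idea (a common parameterization in the stable operator learning literature), not necessarily the formulation in this proposal; all variable names are illustrative.

import numpy as np

rng = np.random.default_rng(1)
r = 5  # reduced dimension

# Free parameters; in a learning method these would be fit to data.
W = rng.standard_normal((r, r))
L = rng.standard_normal((r, r))

J = 0.5 * (W - W.T)             # skew-symmetric: conservative part
R = L @ L.T + 1e-3 * np.eye(r)  # symmetric positive definite: dissipation
A_hat = J - R                   # reduced operator, stable by construction

# With V(x) = ||x||^2 / 2, dV/dt = x^T A_hat x = -x^T R x < 0 for x != 0,
# so V is a Lyapunov function and the reduced model is asymptotically stable.
x = rng.standard_normal(r)
print(x @ (A_hat @ x) < 0)                        # True: V decreases
print(np.max(np.linalg.eigvals(A_hat).real) < 0)  # True: A_hat is Hurwitz

Because stability holds for any values of the free parameters W and L, an optimizer can fit them to data without ever leaving the stable set.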
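The streaming idea can likewise be illustrated at toy scale: the batch least-squares solve at the heart of OpInf can be replaced by a recursive least-squares update that touches one snapshot at a time and never stores the full data matrices. The sketch below assumes a linear reduced model and a fixed reduced basis (a full Streaming-OpInf would also update the basis incrementally via iSVD); it is a hypothetical illustration, not the author's implementation.

import numpy as np

rng = np.random.default_rng(0)
r, n_snapshots = 4, 200

# Synthetic reduced states X (rows) and time derivatives Y (rows).
A_true = -np.eye(r) + 0.1 * rng.standard_normal((r, r))
X = rng.standard_normal((n_snapshots, r))
Y = X @ A_true.T

# Batch OpInf-style fit: one least-squares solve over all stored data,
# giving Theta with Y ≈ X @ Theta (the reduced operator, transposed).
Theta_batch, *_ = np.linalg.lstsq(X, Y, rcond=None)

# Streaming alternative: recursive least squares, one snapshot at a time.
P = 1e6 * np.eye(r)        # inverse-Gram estimate, large initialization
Theta = np.zeros((r, r))   # running operator estimate
for x, y in zip(X, Y):
    x = x[:, None]                           # column vector, shape (r, 1)
    g = P @ x / (1.0 + x.T @ P @ x)          # RLS gain
    Theta += g @ (y[None, :] - x.T @ Theta)  # correct with the new residual
    P -= g @ x.T @ P                         # rank-one update of the inverse

print(np.max(np.abs(Theta - Theta_batch)))   # tiny: matches the batch solve

The memory footprint of the update is set by P and Theta and is independent of the number of snapshots processed, which is what makes a streaming formulation attractive for large-scale data.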