
Poster

Neural Krylov Iteration for Accelerating Linear System Solving

Jian Luo · Jie Wang · Hong Wang · Huanshuo Dong · Zijie Geng · Hanzhu Chen · Yufei Kuang


Abstract:

Solving large-scale sparse linear systems is essential in mathematics, science, and engineering. Traditional numerical solvers, mostly built on Krylov subspace iteration, suffer from low efficiency, which primarily arises from the slow convergence of the iteration. To tackle this problem, we propose a novel method, Neural Krylov Iteration (NeurKItt), for accelerating linear system solving. Specifically, NeurKItt employs a neural operator to predict the invariant subspace of the linear system and then leverages the predicted subspace to accelerate the solve. To enhance subspace prediction accuracy, we apply QR decomposition to the neural operator outputs and introduce a novel projection loss function for training. NeurKItt uses the predicted subspace to guide the iteration process, which significantly reduces the number of iterations. We provide extensive experiments and comprehensive theoretical analyses to demonstrate the feasibility and efficiency of NeurKItt. In our main experiments, NeurKItt accelerates linear system solving across various settings and datasets, achieving up to a 5.5× speedup in computation time and a 16.1× speedup in the number of iterations.
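The two training ingredients mentioned above, QR decomposition of the network outputs and a projection-based loss, can be illustrated with a minimal NumPy sketch. The exact loss used by NeurKItt is defined in the paper; the `projection_loss` below is an illustrative stand-in that measures how far a candidate subspace is from being invariant under the system matrix `A` (it vanishes exactly when `span(Q)` is `A`-invariant). The function names are hypothetical, not from the paper's code.

```python
import numpy as np

def orthonormalize(pred):
    # QR decomposition turns the raw neural-operator output
    # (an n x k matrix) into an orthonormal basis Q of the same span.
    Q, _ = np.linalg.qr(pred)
    return Q

def projection_loss(Q, A):
    # Illustrative projection loss: norm of the component of A @ Q
    # that leaves span(Q). Zero iff span(Q) is an invariant subspace of A.
    AQ = A @ Q
    residual = AQ - Q @ (Q.T @ AQ)   # (I - Q Q^T) A Q
    return np.linalg.norm(residual)

np.random.seed(0)
A = np.diag([1.0, 2.0, 3.0, 4.0])

# A raw network prediction is generally not orthonormal; QR fixes that.
pred = np.random.randn(4, 2)
Q = orthonormalize(pred)

# An exact invariant subspace (spanned by two eigenvectors of A)
# drives the loss to zero, which is what training pushes toward.
exact = np.eye(4)[:, :2]
print(projection_loss(exact, A))  # ~0
print(projection_loss(Q, A))      # > 0 for a random subspace
```

Minimizing such a loss over training instances is what lets the predicted subspace later guide (e.g., deflate) the Krylov iteration, cutting the iteration count.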
