Statistical Learning and Inverse Problems: A Stochastic Gradient Approach

Yuri Fonseca · Yuri Saporito

Hall J #535

Keywords: [ Statistical Learning ] [ Inverse Problems ] [ Stochastic Gradient Descent ]

[ Abstract ]
[ Paper ] [ OpenReview ]
Thu 1 Dec 2 p.m. PST — 4 p.m. PST


Inverse problems are paramount in Science and Engineering. In this paper, we consider the setting of Statistical Inverse Problems (SIPs) and demonstrate how Stochastic Gradient Descent (SGD) algorithms can be used to solve linear SIPs. We provide consistency and finite-sample bounds for the excess risk. We also propose a modification of the SGD algorithm in which we leverage machine learning methods to smooth the stochastic gradients and improve empirical performance. We exemplify the algorithm in a setting of great current interest: the Functional Linear Regression model. In this setting, we consider a synthetic-data example and a classification problem of predicting the main activity of bitcoin addresses based on their balances.
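The abstract describes running SGD on a linear SIP, illustrated with Functional Linear Regression, where stochastic gradients are smoothed before each update. The following is a minimal illustrative sketch, not the paper's implementation: covariate curves are discretized on a grid, the stochastic gradient is the standard L² gradient of the squared loss, and a simple moving-average filter stands in for the paper's machine-learning-based gradient smoothing. All function choices (the true slope, the covariate process, the step-size schedule) are assumptions made for the demo.

```python
import numpy as np

rng = np.random.default_rng(0)

# Discretize [0, 1] on a grid of m points
m = 100
t = np.linspace(0, 1, m)
dt = t[1] - t[0]

# True slope function beta*(t) (illustrative choice)
beta_star = np.sin(2 * np.pi * t)

def sample_curves(n):
    # Random Fourier curves as functional covariates (assumption)
    X = np.zeros((n, m))
    for k in range(1, 6):
        X += rng.normal(size=(n, 1)) / k * np.sin(k * np.pi * t)
        X += rng.normal(size=(n, 1)) / k * np.cos(k * np.pi * t)
    return X

# Synthetic data: y_i = <X_i, beta*> + noise
n = 2000
X = sample_curves(n)
y = X @ beta_star * dt + 0.1 * rng.normal(size=n)

# SGD on the discretized L2 gradient, with a moving-average
# smoother standing in for an ML-based gradient smoother
beta = np.zeros(m)
kernel = np.ones(7) / 7.0
for i in range(n):
    pred = X[i] @ beta * dt                        # <X_i, beta>
    grad = (pred - y[i]) * X[i]                    # stochastic L2 gradient
    grad = np.convolve(grad, kernel, mode="same")  # smoothing step
    eta = 0.5 / (1 + i) ** 0.6                     # decaying step size
    beta -= eta * grad

# Estimation error of the recovered slope function
mse = np.mean((beta - beta_star) ** 2)
print(round(mse, 4))
```

The smoothing step regularizes each noisy gradient toward a smooth function before the update, which is the role the paper assigns to the learned smoother; the moving average is only the simplest such choice.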
