Learning the optimal Tikhonov regularizer for inverse problems

Alberti G.; De Vito E.; Ratti L.; Santacesaria M.
2021-01-01

Abstract

In this work, we consider the linear inverse problem y = Ax + ε, where A: X → Y is a known linear operator between the separable Hilbert spaces X and Y, x is a random variable in X, and ε is a zero-mean random process in Y. This setting covers several inverse problems in imaging, including denoising, deblurring, and X-ray tomography. Within the classical framework of regularization, we focus on the case where the regularization functional is not given a priori but is learned from data. Our first result is a characterization of the optimal generalized Tikhonov regularizer with respect to the mean squared error. We find that it is completely independent of the forward operator A and depends only on the mean and covariance of x. Then, we consider the problem of learning the regularizer from a finite training set in two different frameworks: one supervised, based on samples of both x and y, and one unsupervised, based only on samples of x. In both cases, we prove generalization bounds under weak assumptions on the distributions of x and ε, including the case of sub-Gaussian variables. Our bounds hold in infinite-dimensional spaces, thereby showing that finer and finer discretizations do not make this learning problem harder. The results are validated through numerical simulations.
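The following is a minimal finite-dimensional sketch, not taken from the paper, of the unsupervised framework described in the abstract: the mean and covariance of x are estimated from samples of x alone, and a quadratic (generalized Tikhonov) regularizer is built from them. The forward operator A, the covariance model, the noise-level weighting sigma^2, and all variable names are illustrative assumptions, consistent with the abstract's claim that the optimal regularizer depends only on the mean and covariance of x; see the paper for the precise statement and conditions.

```python
import numpy as np

rng = np.random.default_rng(0)
d, n_train, sigma = 50, 2000, 0.1

# Hypothetical forward operator A: a simple tridiagonal blurring matrix.
A = np.eye(d) + 0.4 * np.eye(d, k=1) + 0.4 * np.eye(d, k=-1)

# Hypothetical distribution of x: smooth Gaussian signals with a
# squared-exponential covariance (jitter added for numerical stability).
t = np.linspace(0, 1, d)
C_true = np.exp(-(t[:, None] - t[None, :]) ** 2 / 0.02) + 1e-8 * np.eye(d)
m_true = np.sin(2 * np.pi * t)
X_train = rng.multivariate_normal(m_true, C_true, size=n_train)

# Unsupervised learning step: estimate mean and covariance from samples of x only.
m_hat = X_train.mean(axis=0)
C_hat = np.cov(X_train, rowvar=False) + 1e-6 * np.eye(d)  # jitter for invertibility

# A new noisy observation y = A x + eps.
x_star = rng.multivariate_normal(m_true, C_true)
y = A @ x_star + sigma * rng.standard_normal(d)

# Generalized Tikhonov reconstruction with the learned quadratic regularizer:
#   x_hat = argmin_x ||A x - y||^2 + sigma^2 (x - m_hat)^T C_hat^{-1} (x - m_hat);
# setting the gradient to zero gives the closed-form normal equations below.
C_inv = np.linalg.inv(C_hat)
x_hat = np.linalg.solve(A.T @ A + sigma**2 * C_inv,
                        A.T @ y + sigma**2 * C_inv @ m_hat)

print("relative error:", np.linalg.norm(x_hat - x_star) / np.linalg.norm(x_star))
```

In the Gaussian case sketched here, this closed form coincides with the posterior mean (the affine MMSE estimator), which is one way to see why an optimal quadratic regularizer can depend only on the mean and covariance of x and not on A.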

Use this identifier to cite or link to this item: https://hdl.handle.net/11567/1090973

Citations
  • PMC: n/a
  • Scopus: 13
  • Web of Science: 7