Is There Sparsity Beyond Additive Models?
Lorenzo Rosasco, Alessandro Verri, S. Villa
2012
Abstract
In this work we are interested in the problems of supervised learning and variable selection when the input-output dependence is described by a nonlinear function depending on a few variables. Our goal is to devise a sparse nonparametric model, without resorting to linear or additive models. The key intuition is to measure the importance of each variable in the model by making use of partial derivatives. Based on this idea we propose and study a new regularizer and a corresponding least squares regularization scheme. Using concepts and results from the theory of reproducing kernel Hilbert spaces and proximal methods, we show that the proposed learning algorithm leads to a minimization problem which can be provably solved by an iterative procedure. The consistency properties of the obtained estimator are studied both in terms of prediction and selection performance.
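The following is a minimal sketch of the derivative-based importance idea, not the algorithm studied in the paper: it fits plain kernel ridge regression with a Gaussian kernel (an assumption; the paper works in a general reproducing kernel Hilbert space) and then scores each variable by the empirical norm of the corresponding partial derivative of the estimated function. All names, helper functions, and parameter values below are illustrative.

import numpy as np

def gaussian_kernel(X1, X2, sigma=1.0):
    # K[i, l] = exp(-||X1[i] - X2[l]||^2 / (2 sigma^2))
    sq = ((X1[:, None, :] - X2[None, :, :]) ** 2).sum(-1)
    return np.exp(-sq / (2 * sigma ** 2))

def kernel_gradients(X, sigma=1.0):
    # Z[j][i, l] = d K(x, X[l]) / d x_j evaluated at x = X[i].
    # For the Gaussian kernel, dK/dx_j = -(x_j - x'_j) / sigma^2 * K(x, x'),
    # so for f(x) = sum_l c_l K(x, X[l]) the i-th sample value of
    # d f / d x_j is (Z[j] @ c)[i].
    K = gaussian_kernel(X, X, sigma)
    diffs = X[:, None, :] - X[None, :, :]           # shape (n, n, d)
    return [-(diffs[:, :, j] / sigma ** 2) * K for j in range(X.shape[1])]

def importance_scores(c, Z):
    # Empirical norm of each partial derivative: sqrt(mean_i (df/dx_j)(X[i])^2)
    return np.array([np.sqrt(np.mean((Zj @ c) ** 2)) for Zj in Z])

rng = np.random.default_rng(0)

# Toy data: y depends only on the first 2 of 6 input variables.
n, d = 100, 6
X = rng.uniform(-1, 1, size=(n, d))
y = np.sin(np.pi * X[:, 0]) + X[:, 1] ** 2 + 0.05 * rng.standard_normal(n)

sigma, lam = 0.7, 1e-3
K = gaussian_kernel(X, X, sigma)
c = np.linalg.solve(K + n * lam * np.eye(n), y)     # kernel ridge regression

scores = importance_scores(c, kernel_gradients(X, sigma))
print(scores)   # larger scores flag the relevant variables (here, 0 and 1)

In the scheme proposed in the paper these derivative norms enter the penalty itself, yielding a sparsity-inducing regularizer whose minimizer is computed by a proximal iterative procedure; the sketch above only uses them post hoc, as importance scores for a fixed estimator.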