Optimal rates for regularized least squares algorithm
DE VITO, ERNESTO
2007-01-01
Abstract
We develop a theoretical analysis of the generalization performance of the regularized least-squares algorithm on a reproducing kernel Hilbert space in the supervised learning setting. The presented results hold in the general framework of vector-valued functions and can therefore be applied to multi-task problems. In particular, we observe that the concept of effective dimension plays a central role in the definition of a criterion for the choice of the regularization parameter as a function of the number of samples. Moreover, a complete minimax analysis of the problem is described, showing that the convergence rates obtained by regularized least-squares estimators are indeed optimal over a suitable class of priors defined by the considered kernel. Finally, we give an improved lower-rate result describing the worst asymptotic behavior on individual probability measures rather than over classes of priors.
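As an illustrative aside, the two central objects of the abstract — the regularized least-squares estimator on an RKHS and the effective dimension governing the choice of the regularization parameter — can be sketched numerically. The sketch below is our own minimal illustration, not code from the paper: the choice of a Gaussian kernel, the function names, and the empirical approximation of the effective dimension from the Gram matrix are all assumptions made here for concreteness.

```python
import numpy as np

def gaussian_kernel(X, Y, sigma=1.0):
    # Gram matrix of a Gaussian (RBF) kernel between point sets X and Y.
    d2 = ((X[:, None, :] - Y[None, :, :]) ** 2).sum(-1)
    return np.exp(-d2 / (2 * sigma ** 2))

def rls_fit(X, y, lam, sigma=1.0):
    # Regularized least squares in the RKHS of the kernel: by the
    # representer theorem the estimator is f(x) = sum_i c_i k(x_i, x),
    # where the coefficients solve (K + n * lam * I) c = y.
    n = len(X)
    K = gaussian_kernel(X, X, sigma)
    c = np.linalg.solve(K + n * lam * np.eye(n), y)
    return c, K

def effective_dimension(K, lam):
    # N(lam) = trace(T (T + lam)^-1), with the kernel integral operator T
    # approximated here by the normalized empirical Gram matrix K / n.
    # N(lam) decreases from n toward 0 as lam grows.
    n = K.shape[0]
    T = K / n
    return np.trace(np.linalg.solve(T + lam * np.eye(n), T))
```

For small `lam` the estimator nearly interpolates the training data and the effective dimension is close to the sample size; increasing `lam` shrinks both, which is the trade-off behind choosing the regularization parameter as a function of the number of samples.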