Maximal-Discrepancy bounds for margin-maximizing classifiers
GASTALDO, PAOLO;ZUNINO, RODOLFO
2009-01-01
Abstract
Regularized classifiers such as SVM or RLS are among the most widely used and successful classifiers in machine learning. The theory and the empirical evaluation of the associated generalization bounds are of paramount importance; bounds based on the Maximal-Discrepancy approach have proved quite effective. The paper presents an efficient, iterative procedure to evaluate Maximal-Discrepancy bounds for this class of classifiers. Empirical results on UCI datasets show that this approach can attain tighter bounds on the run-time classification error.
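The Maximal-Discrepancy estimate referred to in the abstract is typically computed by splitting a sample of size 2n into two halves, flipping the labels on one half, retraining the classifier on the modified sample, and measuring the gap between the two half-sample error rates. The sketch below illustrates this procedure under stated assumptions: it uses a closed-form RLS classifier as the learning machine, while the paper's own iterative procedure is not reproduced here; all function names (`rls_fit`, `maximal_discrepancy`) are illustrative, not from the paper.

```python
import numpy as np

def rls_fit(X, y, lam=1.0):
    # Regularized Least Squares: w = (X^T X + lam*I)^{-1} X^T y
    d = X.shape[1]
    return np.linalg.solve(X.T @ X + lam * np.eye(d), X.T @ y)

def error_rate(X, y, w):
    # 0/1 error of the linear classifier sign(Xw) against labels in {-1,+1}
    return np.mean(np.sign(X @ w) != y)

def maximal_discrepancy(X, y, lam=1.0):
    """Empirical maximal-discrepancy estimate (illustrative sketch).

    Flipping the labels of the second half and minimizing the training
    error on the modified sample approximately maximizes the discrepancy
    e2(f) - e1(f) between the two half-sample error rates.
    """
    n = len(y) // 2
    y_flipped = y.copy()
    y_flipped[n:] = -y_flipped[n:]
    w = rls_fit(X, y_flipped, lam)      # surrogate minimizer on flipped sample
    e1 = error_rate(X[:n], y[:n], w)    # error on first half, true labels
    e2 = error_rate(X[n:], y[n:], w)    # error on second half, true labels
    return e2 - e1
```

The returned discrepancy lies in [-1, 1] by construction; in a generalization bound it is added to the empirical error as a data-dependent complexity penalty.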