Maximal-Discrepancy bounds for margin-maximizing classifiers

Gastaldo, Paolo; Zunino, Rodolfo
2009-01-01

Abstract

Regularized classifiers such as SVM or RLS are among the most widely used and successful classifiers in machine learning. The theory and the empirical evaluation of the associated generalization bounds are of paramount importance; bounds based on the Maximal-Discrepancy approach have proved quite effective. This paper presents an efficient, iterative procedure to evaluate Maximal-Discrepancy bounds for this class of classifiers. Empirical results on UCI datasets show that the approach can attain tighter bounds on the run-time classification error.
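To make the abstract concrete, the following is a minimal sketch of the general label-flipping construction commonly used to estimate a Maximal-Discrepancy term, not the paper's specific iterative procedure (which is not reproduced here). The choice of classifier (scikit-learn's LinearSVC), the function name maximal_discrepancy_bound, and the constant in the confidence term are assumptions made for illustration only.

import numpy as np
from sklearn.svm import LinearSVC

def maximal_discrepancy_bound(X, y, delta=0.05, C=1.0):
    """Sketch of an MD-style generalization bound for a margin classifier.

    X : (2n, d) feature matrix; y : (2n,) labels in {-1, +1}.
    Returns (training_error, discrepancy_estimate, bound).
    """
    X, y = np.asarray(X), np.asarray(y)
    n2 = len(y) - len(y) % 2          # use an even number of samples
    X, y = X[:n2], y[:n2]
    half = n2 // 2

    # 1) Train on the original data and record the empirical error.
    clf = LinearSVC(C=C).fit(X, y)
    train_err = np.mean(clf.predict(X) != y)

    # 2) Flip the labels of the first half and retrain: under the 0/1 loss,
    #    minimizing the error on this modified set maximizes the discrepancy
    #    between the errors measured on the two halves.
    y_flip = y.copy()
    y_flip[:half] = -y_flip[:half]
    clf_md = LinearSVC(C=C).fit(X, y_flip)
    err_flip = np.mean(clf_md.predict(X) != y_flip)
    discrepancy = max(0.0, 1.0 - 2.0 * err_flip)

    # 3) Confidence term from a standard concentration argument
    #    (constant chosen for illustration; the paper's exact bound may differ).
    confidence = 3.0 * np.sqrt(np.log(2.0 / delta) / n2)

    return train_err, discrepancy, train_err + discrepancy + confidence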
2009
978-1-4244-3549-4
Files for this item:
There are no files associated with this item.

Documents in IRIS are protected by copyright and all rights are reserved, unless otherwise indicated.

Use this identifier to cite or link to this item: https://hdl.handle.net/11567/376386
Citations
  • PMC: ND
  • Scopus: 2
  • Web of Science (ISI): 0