
K–Fold Cross Validation for Error Rate Estimate in Support Vector Machines

Anguita, Davide; Ghio, Alessandro; Ridella, Sandro
2009-01-01

Abstract

In this paper, we review the k-Fold Cross Validation (KCV) technique, applied to the Support Vector Machine (SVM) classification algorithm. We compare several variations on the KCV technique: some of them are often used by practitioners, but without any theoretical justification, while others are less used but more rigorous in finding a correct classifier. The latter allow us to establish an upper bound on the error rate of the SVM, which represents a way to guarantee, in a statistical sense, the reliability of the classifier and, therefore, turns out to be quite important in many real-world applications. Some experimental results on well-known benchmarking datasets allow us to perform the comparison and support our claims.
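As context for the abstract, the basic KCV error estimate it builds on can be sketched as follows: the data are split into k folds, the classifier is trained on k-1 folds and tested on the held-out one, and the misclassifications are averaged. This is a minimal, self-contained sketch of that generic procedure, not the paper's SVM-specific bound; a simple threshold ("stump") classifier stands in for the SVM, and all names (`kfold_error`, `train_stump`, `predict_stump`) are illustrative.

```python
import random

def kfold_error(X, y, train_fn, predict_fn, k=5, seed=0):
    """Estimate a classifier's error rate via k-fold cross validation."""
    idx = list(range(len(X)))
    random.Random(seed).shuffle(idx)
    folds = [idx[i::k] for i in range(k)]  # k roughly equal folds
    errors = 0
    for i in range(k):
        test = folds[i]
        train = [j for f in range(k) if f != i for j in folds[f]]
        model = train_fn([X[j] for j in train], [y[j] for j in train])
        errors += sum(predict_fn(model, X[j]) != y[j] for j in test)
    return errors / len(X)

# Toy 1-D data; the base learner thresholds at the midpoint of the class means.
def train_stump(X, y):
    m0 = sum(x for x, t in zip(X, y) if t == 0) / sum(1 for t in y if t == 0)
    m1 = sum(x for x, t in zip(X, y) if t == 1) / sum(1 for t in y if t == 1)
    return (m0 + m1) / 2

def predict_stump(threshold, x):
    return 1 if x > threshold else 0

X = [0.1, 0.2, 0.3, 0.4, 0.9, 1.0, 1.1, 1.2, 0.15, 1.05]
y = [0, 0, 0, 0, 1, 1, 1, 1, 0, 1]
print(kfold_error(X, y, train_stump, predict_stump, k=5))  # 0.0 on separable data
```

The averaged fold error is the quantity whose concentration around the true error rate the paper's rigorous KCV variants bound; in practice an SVM (e.g. `sklearn.svm.SVC`) would replace the stump as the base learner.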


Use this identifier to cite or link to this document: https://hdl.handle.net/11567/315563