The K-fold Cross Validation (KCV) technique is one of the approaches most widely used by practitioners for model selection and error estimation of classifiers. KCV consists of splitting a dataset into k subsets; then, iteratively, some of them are used to learn the model, while the others are exploited to assess its performance. However, in spite of KCV's success, only practical rule-of-thumb methods exist for choosing the number and the cardinality of the subsets. We propose here an approach that allows the number of subsets in KCV to be tuned in a data-dependent way, so as to obtain a reliable, tight, and rigorous estimate of the probability of misclassification of the chosen model.
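To make the procedure concrete, here is a minimal sketch of the standard KCV loop with a fixed, rule-of-thumb k = 5, i.e. the quantity the paper proposes to tune in a data-dependent way instead. The scikit-learn utilities, the iris dataset, and the linear SVM are illustrative assumptions, not part of the paper.

```python
# Minimal sketch of standard K-fold cross validation (illustrative only;
# this is NOT the paper's data-dependent procedure for choosing k).
import numpy as np
from sklearn.datasets import load_iris
from sklearn.model_selection import KFold
from sklearn.svm import SVC

X, y = load_iris(return_X_y=True)  # assumed example dataset
k = 5  # rule-of-thumb choice; the paper's contribution is tuning this value

kf = KFold(n_splits=k, shuffle=True, random_state=0)
fold_errors = []
for train_idx, test_idx in kf.split(X):
    # Learn the model on k-1 subsets...
    clf = SVC(kernel="linear").fit(X[train_idx], y[train_idx])
    # ...and assess its performance on the held-out subset.
    fold_errors.append(1.0 - clf.score(X[test_idx], y[test_idx]))

# Average per-fold error rate: the usual KCV estimate of the
# probability of misclassification.
print(f"estimated misclassification probability: {np.mean(fold_errors):.3f}")
```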
Title: The ‘K’ in K-fold Cross Validation
Authors:
Publication date: 2012
Handle: http://hdl.handle.net/11567/539164
ISBN: 9782874190490
Document type: 04.01 - Contribution in conference proceedings