K-Fold Generalization Capability Assessment for Support Vector Classifiers
Anguita, Davide; Ridella, Sandro
2005-01-01
Abstract
The problem of how to effectively implement k-fold cross-validation for support vector machines is considered. Although this selection criterion is widely used, thanks to its reasonable computational requirements and its good ability to identify a well-performing model, it is not clear how the committee of classifiers obtained from the k folds should be employed for on-line classification. Three methods are described and tested here, based respectively on averaging, random choice, and majority voting. Each of these methods is evaluated on a wide range of datasets and for different fold settings.
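The following is a minimal sketch, not the authors' implementation, of the three committee strategies named in the abstract: averaging the real-valued SVM outputs, delegating each query to a randomly chosen fold classifier, and majority voting over the fold predictions. The dataset, kernel, and fold count are illustrative assumptions, using scikit-learn's SVC as the base classifier.

```python
# Illustrative sketch of the three committee strategies (averaging, random
# choice, majority voting); all concrete settings below are assumptions.
import numpy as np
from sklearn.datasets import make_classification
from sklearn.model_selection import KFold
from sklearn.svm import SVC

X, y = make_classification(n_samples=300, n_features=10, random_state=0)
y = 2 * y - 1  # encode labels as {-1, +1} so votes and signs line up

k = 5
rng = np.random.default_rng(0)
classifiers = []
for train_idx, _ in KFold(n_splits=k, shuffle=True, random_state=0).split(X):
    # Train one SVM per fold on that fold's training portion
    classifiers.append(SVC(kernel="rbf", C=1.0).fit(X[train_idx], y[train_idx]))

def predict_averaging(x):
    # Average the real-valued SVM outputs of the k classifiers, then take the sign
    return np.sign(np.mean([clf.decision_function(x) for clf in classifiers], axis=0))

def predict_random_choice(x):
    # Delegate each query to one fold classifier picked uniformly at random
    return classifiers[rng.integers(k)].predict(x)

def predict_majority_voting(x):
    # Each classifier casts a {-1, +1} vote; the sign of the vote sum decides
    return np.sign(np.sum([clf.predict(x) for clf in classifiers], axis=0))

x_new = X[:5]
print(predict_averaging(x_new))
print(predict_random_choice(x_new))
print(predict_majority_voting(x_new))
```

With an odd number of folds, as assumed here, majority voting cannot tie; averaging additionally weights each fold classifier by the magnitude of its decision value rather than treating all votes equally.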