In this paper, we propose a novel approach to comparing the performance of binary classification models, with an application to a real credit-risk data set provided by Unicredit bank. Starting from the probability of default estimated by each predictive model under comparison, the idea is to derive an uncertainty interval by comparing the predictions with the observed target variable. A model is considered to perform well if its associated uncertainty interval is small. The shape of the uncertainty interval also provides information about the model's classification errors, namely false positives and false negatives. The uncertainty interval allows different models to be compared without selecting a binarization threshold, and it applies to both parametric and non-parametric predictive models.
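The abstract does not spell out how the uncertainty interval is constructed, so the following is only a minimal sketch of the general idea under an assumed construction: a bootstrap confidence interval for the mean discrepancy between observed defaults and predicted default probabilities, where a narrower interval closer to zero indicates a better-performing model. The data, the discrepancy measure, and the `bootstrap_interval` helper are all illustrative assumptions, not the paper's actual method or data.

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic stand-in data (NOT the paper's Unicredit data): binary default
# labels and predicted default probabilities from two hypothetical models.
y = rng.integers(0, 2, size=500).astype(float)
p_good = np.clip(y * 0.8 + rng.normal(0, 0.1, 500), 0, 1)  # tracks y closely
p_poor = rng.uniform(0, 1, 500)                            # uninformative

def bootstrap_interval(y, p, n_boot=2000, alpha=0.05):
    """Bootstrap (1 - alpha) interval for the mean discrepancy |y - p|.

    This discrepancy-based construction is an assumption made for
    illustration; the paper derives its interval differently.
    """
    n = len(y)
    stats = np.empty(n_boot)
    for b in range(n_boot):
        idx = rng.integers(0, n, n)           # resample with replacement
        stats[b] = np.abs(y[idx] - p[idx]).mean()
    lo, hi = np.quantile(stats, [alpha / 2, 1 - alpha / 2])
    return lo, hi

lo_g, hi_g = bootstrap_interval(y, p_good)
lo_p, hi_p = bootstrap_interval(y, p_poor)

# The better model yields a tighter interval located nearer to zero.
print(f"good model interval: [{lo_g:.3f}, {hi_g:.3f}]")
print(f"poor model interval: [{lo_p:.3f}, {hi_p:.3f}]")
```

Note that comparing interval width and location in this way needs no binarization threshold, which mirrors the threshold-free comparison the abstract emphasizes.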
|Title:||UNCERTAINTY INTERVAL TO ASSESS PERFORMANCES OF CREDIT RISK MODELS|
|Publication date:||2019|
|Publication type:||01.01 - Journal article|