Many companies, such as credit granting companies, must decide daily whether to grant or deny customer or invoice loans. Increasingly, machine learning is used to learn probability-of-default models from previously granted cases, for which the outcome is known, i.e. whether the client paid back or defaulted. However, since the outcome can only be observed for the granted cases, the data inherently suffers from sample selection bias, and caution should be taken when applying the probability-of-default model to the full through-the-door population. Reject inference studies this problem by asking whether the unlabeled rejected instances can help improve a classifier that is trained only on granted instances, e.g. using semi-supervised learning. In contrast, we investigate under what circumstances a model trained on the granted instances, with known outcome, can be used on all possible instances. For this, we believe a model should indicate when it cannot reliably predict the outcome; that is, it should refrain from making predictions on instances unlike those on which it was trained. Otherwise, the credit granting company would expose itself to great risk, and experts could lose their trust in the predictions. We discuss similarities and differences between this problem and novelty detection, classification with a reject option, and reject inference. We compare a number of methods that combine novelty detection with classification, obtaining decent results even for two-stage methods, especially when using data of existing instances with unknown outcome.
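A two-stage method of the kind compared in the abstract can be sketched as follows. This is an illustrative example, not the paper's exact method: a novelty detector (here scikit-learn's `IsolationForest`, an assumed choice) is fit on the granted instances, and a separate probability-of-default classifier abstains on instances the detector flags as unlike the training data. All data and parameter choices below are synthetic assumptions for illustration.

```python
import numpy as np
from sklearn.ensemble import IsolationForest
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)

# Synthetic "granted" population with known outcomes
# (0 = paid back, 1 = defaulted); purely illustrative data.
X_granted = rng.normal(loc=0.0, scale=1.0, size=(500, 2))
y_granted = (X_granted[:, 0] + X_granted[:, 1] > 0).astype(int)

# Stage 1: novelty detector fit on the granted instances only.
detector = IsolationForest(random_state=0).fit(X_granted)

# Stage 2: classifier fit on the same labeled instances.
clf = LogisticRegression().fit(X_granted, y_granted)

def predict_with_abstention(X):
    """Predict a class, or return -1 (abstain) for instances the
    detector considers unlike the training data."""
    preds = clf.predict(X)
    is_inlier = detector.predict(X) == 1  # +1 = inlier, -1 = novel
    return np.where(is_inlier, preds, -1)

# In-distribution instances get predictions; a far-away instance
# (here at (10, 10)) should trigger abstention.
X_new = np.vstack([rng.normal(size=(5, 2)), [[10.0, 10.0]]])
print(predict_with_abstention(X_new))
```

The design choice illustrated here is exactly the risk argument in the abstract: rather than silently extrapolating, the model refuses to score applicants that fall outside the region covered by the granted population.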
Status: Published - Oct 2020
Event: International Conference on Data Science and Advanced Analytics - Sydney, Australia
Duration: 6 Oct 2020 → 9 Oct 2020