Rescale Confidences (Logistic) (Model Simulator)

Synopsis

Creates a model wrapper for rescaling classification confidences based on a Platt scaling approach. This operator learns a logistic regression model for each of the classes based on the confidence values for that class.

Description

Most confidence values do not resemble probabilities and can sometimes be severely skewed. This is a problem commonly seen for most learning schemes, especially for methods like Support Vector Machines. The literature suggests rescaling the confidences produced by models so that they behave more like probabilities, and one of the most commonly used methods for this is Platt scaling.

The original Platt paper suggested a simplified version of a logistic regression model that uses the confidence values as inputs and the true outcomes as the target. However, we found that this simplified approach had robustness issues on some data sets and only works for binary classification problems.
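For reference, standard Platt scaling fits a sigmoid with two scalar parameters A and B to the raw model scores f, which amounts to a one-dimensional logistic regression on those scores:

$$ P(y = 1 \mid f) = \frac{1}{1 + \exp(A f + B)} $$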

This operator overcomes both issues by explicitly calculating a full logistic regression model for each class, which also makes it support multiclass problems. While this is a bit slower than the usual Platt scaling, it is more robust and more universally applicable. In particular, you should consider rescaling confidences if you optimize for measures other than AUC, use cost-sensitive learning, or if your classes are highly imbalanced.

The operator requires the model, the data it was built on, and a labeled calibration data set on which the model has not been trained. The model is applied to the calibration data, and for each class a new calibration training set is built, with the confidence for that class (taken from the model) as input and a target that is True if the data point belongs to the class of interest and False otherwise. A logistic regression model is trained on each of these calibration sets. These calibration models are stored, and when the calibrated model is applied to new data they are used to predict confidences based on the base model's confidences. While there is still no guarantee that these are perfect probabilities, the behavior is much more consistent and matches human expectations more closely, e.g. that a natural cut-off threshold for binary classification should be close to 0.5, which is not necessarily the case for uncalibrated confidences.
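Purely as an illustration of this idea (and not RapidMiner's actual implementation), the following Python sketch builds one logistic calibration model per class from a base classifier's confidences; the data set, base model, and all names are assumptions chosen for the example.

```python
# Illustrative sketch only -- not RapidMiner's implementation.
import numpy as np
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split
from sklearn.naive_bayes import GaussianNB

# Hypothetical data split into training, calibration, and test parts.
X, y = make_classification(n_samples=600, n_classes=3, n_informative=6, random_state=0)
X_train, X_rest, y_train, y_rest = train_test_split(X, y, test_size=0.5, random_state=0)
X_calib, X_test, y_calib, y_test = train_test_split(X_rest, y_rest, test_size=0.5, random_state=0)

# Base model whose confidences should be rescaled (often poorly calibrated).
base_model = GaussianNB().fit(X_train, y_train)

# Build one calibration set per class: input = base confidence for that class,
# target = True if the example belongs to that class, False otherwise.
calib_conf = base_model.predict_proba(X_calib)          # shape (n_examples, n_classes)
calibrators = {}
for k, cls in enumerate(base_model.classes_):
    inputs = calib_conf[:, k].reshape(-1, 1)
    targets = (y_calib == cls)
    calibrators[cls] = LogisticRegression().fit(inputs, targets)

# Applying the wrapper: rescale each class confidence with its calibrator.
# Renormalizing so confidences sum to 1 per example is an assumption of this sketch.
test_conf = base_model.predict_proba(X_test)
rescaled = np.column_stack([
    calibrators[cls].predict_proba(test_conf[:, k].reshape(-1, 1))[:, 1]
    for k, cls in enumerate(base_model.classes_)
])
rescaled /= rescaled.sum(axis=1, keepdims=True)
```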

Input

  • model (Model)

    This port expects the model whose confidence values should be calibrated. It has to be a classification model.

  • training set (Data Table)

    This port expects an ExampleSet, the same ExampleSet that was used to create the model. It has to have a nominal label.

  • test set (Data Table)

    This port expects an ExampleSet which will be used for creating the calibration models for each class. It has to have a nominal label.

Output

  • model (Model)

    The calibrated model, i.e. it will now produce calibrated confidences.

  • training set (Data Table)

    The input training set.

Tutorial Processes

Calibrate Confidences for Sonar

This process trains a Fast Large Margin model on the Sonar data. Without confidence rescaling, the confidences do not spread across the full spectrum from 0 to 1 but hover closer to the center. We use a part of the data to calibrate the model with the Rescale Confidences (Logistic) operator.

If we apply the calibrated model to a third part of the data, we can see that the confidences now cover the full spectrum between 0 and 1. Overall, they behave more like probabilities.
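The following Python snippet is a hypothetical analogue of this tutorial process, not the RapidMiner process itself: it trains a linear SVM on synthetic binary data standing in for Sonar, compares naively squashed decision values with Platt-style (sigmoid) calibrated probabilities from scikit-learn, and prints the spread of both.

```python
# Hypothetical analogue of the tutorial process; data and model choices are assumptions.
import numpy as np
from sklearn.calibration import CalibratedClassifierCV
from sklearn.datasets import make_classification
from sklearn.model_selection import train_test_split
from sklearn.svm import LinearSVC

# Stand-in for the Sonar data (binary classification, 60 attributes).
X, y = make_classification(n_samples=400, n_features=60, n_informative=10, random_state=7)
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.3, random_state=7)

# Uncalibrated "confidences": naively squash the SVM decision values into [0, 1].
svm = LinearSVC().fit(X_train, y_train)
raw_conf = 1.0 / (1.0 + np.exp(-svm.decision_function(X_test)))

# Platt-style calibration: fit a sigmoid on held-out folds of the training data.
calibrated = CalibratedClassifierCV(LinearSVC(), method="sigmoid", cv=3).fit(X_train, y_train)
cal_conf = calibrated.predict_proba(X_test)[:, 1]

# Compare how widely the two kinds of confidences spread over [0, 1].
print("uncalibrated range:", raw_conf.min().round(2), "-", raw_conf.max().round(2))
print("calibrated range:  ", cal_conf.min().round(2), "-", cal_conf.max().round(2))
```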