
Fast Large Margin (RapidMiner Studio Core)

Synopsis

This operator applies a fast learning method for large margin optimizations.

Description

The Fast Large Margin operator applies a fast margin learner based on the linear support vector learning scheme proposed by R.E. Fan, K.W. Chang, C.J. Hsieh, X.R. Wang, and C.J. Lin. Although the results are similar to those delivered by classical SVM or logistic regression implementations, this linear classifier is able to work on data sets with millions of examples and attributes.
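
As a point of reference outside RapidMiner, the following is a minimal sketch of the same kind of LIBLINEAR-style linear large-margin learner, using scikit-learn's LinearSVC (which wraps the LIBLINEAR library). The synthetic data set and parameter values are assumptions chosen for illustration, not part of this operator's implementation.

# A minimal sketch of a LIBLINEAR-style linear large-margin classifier,
# using scikit-learn's LinearSVC (which wraps LIBLINEAR). The data set and
# parameter values below are illustrative assumptions, not the operator itself.
from sklearn.datasets import make_classification
from sklearn.svm import LinearSVC

# Generate a synthetic binary classification problem with purely numeric attributes.
X, y = make_classification(n_samples=10_000, n_features=100, random_state=42)

# Squared hinge loss in the dual corresponds to an L2 SVM (Dual) solver;
# C is the cost parameter, i.e. the penalty of the error term.
model = LinearSVC(C=1.0, loss="squared_hinge", dual=True, tol=1e-3)
model.fit(X, y)

print(model.score(X, y))  # training accuracy of the linear margin classifier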

Here is a basic description of SVM. The standard SVM takes a set of input data and predicts, for each given input, which of two possible classes the input belongs to, making the SVM a non-probabilistic binary linear classifier. Given a set of training examples, each marked as belonging to one of two categories, an SVM training algorithm builds a model that assigns new examples to one category or the other. An SVM model is a representation of the examples as points in space, mapped so that the examples of the separate categories are divided by a clear gap that is as wide as possible. New examples are then mapped into that same space and predicted to belong to a category based on which side of the gap they fall on.

More formally, a support vector machine constructs a hyperplane or set of hyperplanes in a high- or infinite-dimensional space, which can be used for classification, regression, or other tasks. Intuitively, a good separation is achieved by the hyperplane that has the largest distance to the nearest training data points of any class (the so-called functional margin), since in general the larger the margin, the lower the generalization error of the classifier. Whereas the original problem may be stated in a finite-dimensional space, it often happens that the sets to discriminate are not linearly separable in that space. For this reason, it was proposed to map the original finite-dimensional space into a much higher-dimensional space, presumably making the separation easier in that space. To keep the computational load reasonable, the mappings used by SVM schemes are designed to ensure that dot products can be computed easily in terms of the variables in the original space, by defining them in terms of a kernel function K(x,y) selected to suit the problem. The hyperplanes in the higher-dimensional space are defined as the set of points whose inner product with a vector in that space is constant.
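
The margin maximization described above is commonly written as the following soft-margin optimization problem. This is a standard textbook formulation shown for illustration, where the x_i are the training examples, y_i in {-1, +1} their labels, the xi_i slack variables, and C the cost parameter listed under Parameters below:

\min_{w,\, b,\, \xi} \;\; \frac{1}{2}\lVert w \rVert^2 \;+\; C \sum_{i=1}^{n} \xi_i
\quad \text{subject to} \quad y_i \left( w^\top x_i + b \right) \ge 1 - \xi_i, \qquad \xi_i \ge 0.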

Input

  • training set (Data Table)

    This input port expects an ExampleSet. This operator cannot handle nominal attributes; it can only be applied to data sets with numeric attributes. Therefore you often have to use the Nominal to Numerical operator before applying this operator (see the encoding sketch below).
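
As a hedged illustration of the Nominal to Numerical preprocessing step mentioned above, the following sketch shows an equivalent one-hot (dummy) encoding in pandas; the attribute names and values are made up for the example.

# A minimal sketch of converting nominal attributes to numeric ones before
# training, analogous to applying the Nominal to Numerical operator.
# The column names and values are made up for illustration.
import pandas as pd

data = pd.DataFrame({
    "color": ["red", "green", "blue", "green"],   # nominal attribute
    "size": [1.0, 2.5, 3.2, 0.7],                 # numeric attribute
})

# One-hot (dummy) encoding turns each nominal value into a 0/1 indicator column.
numeric_data = pd.get_dummies(data, columns=["color"])
print(numeric_data)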

Output

  • model (Model)

    The classification/regression model is delivered from this output port. This model can now be applied on unseen data sets.

  • example set (Data Table)

    The ExampleSet that was given as input is passed through this port without any changes. This is usually used to reuse the same ExampleSet in further operators or to view the ExampleSet in the Results Workspace.

Parameters

  • solver: This parameter specifies the solver type for this fast margin method. The following options are available: L2 SVM Dual, L2 SVM Primal, L2 Logistic Regression and L1 SVM Dual. Range: selection
  • C: This parameter specifies the cost parameter C, i.e. the penalty parameter of the error term. Range: real
  • epsilon: This parameter specifies the tolerance of the termination criterion. Range: real
  • class_weights: This is an expert parameter. It specifies the weights 'w' for all classes. The Edit List button opens a new window with two columns: the first column specifies the class name and the second column specifies the weight for that class. The effective cost for a class is its weight multiplied by C; if the weight of a class is not specified, that class is assigned a weight of 1 (see the sketch after this list). Range: list
  • use_bias: This parameter indicates if an intercept value should be calculated. Range: boolean
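
To make the interplay of C, epsilon, class_weights, and use_bias concrete, the following sketch sets the analogous options on scikit-learn's LinearSVC (also backed by LIBLINEAR). The class names and weight values are assumptions chosen for the example, not defaults of this operator: with C = 1.0 and a weight of 4.0 for 'Mine', the effective cost for 'Mine' examples is 4.0 * 1.0 = 4.0, while unlisted classes keep weight 1.

# Illustrative only: per-class weights scale the cost parameter C.
from sklearn.svm import LinearSVC

model = LinearSVC(
    C=1.0,                                    # cost parameter C
    tol=1e-3,                                 # epsilon: termination tolerance
    class_weight={"Rock": 1.0, "Mine": 4.0},  # class_weights: weights 'w' per class
    fit_intercept=True,                       # use_bias: estimate an intercept value
)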

Tutorial Processes

Introduction to the Fast Large Margin operator

The 'Sonar' data set is loaded using the Retrieve operator. A breakpoint is inserted here so that you can have a look at the ExampleSet. The Split Validation operator is applied on this ExampleSet to assist in training and testing a model. Have a look at the subprocesses of this operator. The Fast Large Margin operator is applied in the Training subprocess for training a model. The resultant model is applied in the Testing subprocess on the testing data set using the Apply Model operator. The performance of the model is measured using the Performance (Classification) operator. The final model and its performance vector are connected to the output and can be seen in the Results Workspace. A code-level analogue of this validation workflow is sketched below.
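
For readers who prefer to reproduce a comparable workflow in code, the following sketch mimics the split-validation process above with scikit-learn (train/test split, linear large-margin model, accuracy measurement). The synthetic data stands in for the 'Sonar' ExampleSet and is an assumption of this example.

# A minimal, code-level analogue of the tutorial process: split the data,
# train a LIBLINEAR-backed linear classifier, apply it to the held-out split,
# and measure classification performance. Synthetic data replaces 'Sonar'.
from sklearn.datasets import make_classification
from sklearn.model_selection import train_test_split
from sklearn.svm import LinearSVC
from sklearn.metrics import accuracy_score

X, y = make_classification(n_samples=208, n_features=60, random_state=0)

# Split Validation analogue: 70% training data, 30% testing data.
X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.3, random_state=0
)

model = LinearSVC(C=1.0, tol=1e-3)            # Fast Large Margin analogue
model.fit(X_train, y_train)                   # Training subprocess
predictions = model.predict(X_test)           # Apply Model analogue

print("accuracy:", accuracy_score(y_test, predictions))  # Performance analogue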