Class MLP
- All Implemented Interfaces:
Serializable, ToDoubleFunction&lt;double[]&gt;, ToIntFunction&lt;double[]&gt;, Classifier&lt;double[]&gt;
The representational capabilities of an MLP are determined by the range of mappings it may implement through weight variation. Single-layer perceptrons are capable of solving only linearly separable problems. With the sigmoid as the activation function, a single-layer network is identical to the logistic regression model.
The universal approximation theorem for neural networks states that every continuous function that maps intervals of real numbers to some output interval of real numbers can be approximated arbitrarily closely by a multi-layer perceptron with just one hidden layer. The result holds only for restricted classes of activation functions, which can be extremely complex and non-smooth, whereas smoothness is important for gradient descent learning. Moreover, the proof is not constructive regarding the number of neurons required or the settings of the weights. In practice, therefore, networks for complex problems use more layers of neurons, often with more input and output neurons as well.
The most popular algorithm to train MLPs is back-propagation, which is a gradient descent method. Based on the chain rule, the algorithm propagates the error back through the network and adjusts the weights of each connection in order to reduce the value of the error function by some small amount. For this reason, back-propagation can only be applied to networks with differentiable activation functions.
During error back-propagation, we usually multiply the gradient by a small number η, called the learning rate, which is carefully selected to ensure that the network converges to a local minimum of the error function fast enough, without producing oscillations. One way to avoid oscillation at large η is to make the change in weight dependent on the past weight change by adding a momentum term.
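As a sketch of this update rule (plain Java for illustration, not the Smile API; eta, alpha, gradient, and delta are illustrative names), each weight change combines the scaled negative gradient with a fraction of the previous change:

    // One gradient-descent step with momentum: delta holds the previous weight changes.
    static void step(double[] weights, double[] gradient, double[] delta, double eta, double alpha) {
        for (int i = 0; i < weights.length; i++) {
            delta[i] = -eta * gradient[i] + alpha * delta[i]; // new change
            weights[i] += delta[i];                           // apply it
        }
    }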
Although the back-propagation algorithm may perform gradient descent on the total error of all instances in batch mode, the learning rule is often applied to each instance separately in an online (stochastic) manner. There is empirical evidence that the stochastic approach results in faster convergence.
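As a rough usage sketch of the two styles (assuming an already constructed MLP named model and training arrays x and y with at least 32 instances; the update methods are documented below), the online style feeds one instance per call while the mini-batch style passes a block of instances:

    // Online / stochastic style: update on each instance separately.
    for (int i = 0; i < x.length; i++) {
        model.update(x[i], y[i]);
    }

    // Mini-batch style: update on a block of instances at once.
    model.update(java.util.Arrays.copyOfRange(x, 0, 32),
                 java.util.Arrays.copyOfRange(y, 0, 32));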
A problem that often emerges in practice is over-fitting, which arises in convoluted or over-specified models whose capacity significantly exceeds the number of free parameters actually needed. There are two general approaches for avoiding this problem: The first is to use cross-validation and similar techniques to check for the presence of over-fitting and to select hyper-parameters that minimize the generalization error. The second is to use some form of regularization, which emerges naturally in a Bayesian framework, where it can be performed by placing a larger prior probability on simpler models; it also appears in statistical learning theory, where the goal is to minimize two quantities: the "empirical risk" and the "structural risk".
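In this class, the weight-decay form of regularization can be sketched with the inherited setWeightDecay setter (the value below is an arbitrary illustration; check the parameter type and a suitable magnitude against the installed Smile version):

    // Weight decay penalizes large weights, one simple regularization against over-fitting.
    model.setWeightDecay(0.01); // illustrative value only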
For neural networks, the input patterns should usually be scaled/standardized. Commonly, each input variable is scaled into the interval [0, 1] or standardized to have mean 0 and standard deviation 1.
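A minimal standardization sketch in plain Java (not the Smile preprocessing API), rescaling each input column to mean 0 and standard deviation 1:

    // Standardize each column of x in place to mean 0 and standard deviation 1.
    static void standardize(double[][] x) {
        int p = x[0].length;
        for (int j = 0; j < p; j++) {
            double mean = 0.0, var = 0.0;
            for (double[] xi : x) mean += xi[j];
            mean /= x.length;
            for (double[] xi : x) var += (xi[j] - mean) * (xi[j] - mean);
            double sd = Math.sqrt(var / x.length);
            if (sd == 0.0) sd = 1.0; // guard against constant columns
            for (double[] xi : x) xi[j] = (xi[j] - mean) / sd;
        }
    }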
For penalty functions and output units, the following natural pairings are recommended:
- linear output units with a least squares penalty function.
- logistic output units with a two-class cross-entropy penalty function.
- softmax output units with a multi-class cross-entropy penalty function.
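As a construction sketch following these pairings (the layer builder factories Layer.input, Layer.sigmoid, and Layer.mle, and the OutputFunction.SOFTMAX constant, are assumptions about the smile.base.mlp API and may differ between Smile versions), a k-class network with a softmax output layer paired with multi-class cross-entropy could be assembled from layer builders:

    import smile.base.mlp.Layer;
    import smile.base.mlp.OutputFunction;
    import smile.classification.MLP;

    // p input variables, one sigmoid hidden layer of 50 units,
    // softmax output over k classes trained with cross-entropy.
    MLP model = new MLP(Layer.input(p),
                        Layer.sigmoid(50),
                        Layer.mle(k, OutputFunction.SOFTMAX));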
Nested Class Summary
Nested classes/interfaces inherited from interface smile.classification.Classifier
Classifier.Trainer&lt;T, M extends Classifier&lt;T&gt;&gt;
Constructor Summary
MLP(LayerBuilder... builders)
Constructor.
MLP(IntSet classes, LayerBuilder... builders)
Constructor.
Method Summary
int[] classes()
Returns the class labels.
static MLP fit(double[][] x, int[] y, Properties params)
Fits an MLP model.
int numClasses()
Returns the number of classes.
boolean online()
Returns true if this is an online learner.
int predict(double[] x)
Predicts the class label of an instance.
int predict(double[] x, double[] posteriori)
Predicts the class label of an instance and also calculates the posteriori probabilities.
boolean soft()
Returns true if this is a soft classifier that can estimate the posteriori probabilities of classification.
void update(double[][] x, int[] y)
Updates the model with a mini-batch.
void update(double[] x, int y)
Updates the model with a single sample.
Methods inherited from class smile.base.mlp.MultilayerPerceptron
backpropagate, getClipNorm, getClipValue, getLearningRate, getMomentum, getWeightDecay, propagate, setClipNorm, setClipValue, setLearningRate, setMomentum, setParameters, setRMSProp, setWeightDecay, toString, update
Methods inherited from class java.lang.Object
clone, equals, finalize, getClass, hashCode, notify, notifyAll, wait, wait, wait
Methods inherited from interface smile.classification.Classifier
applyAsDouble, applyAsInt, predict, predict, predict, predict, predict, predict, score, update
-
Constructor Details
-
MLP
public MLP(LayerBuilder... builders)
Constructor.
- Parameters:
builders - the builders of layers from bottom to top.
-
MLP
public MLP(IntSet classes, LayerBuilder... builders)
Constructor.
- Parameters:
classes - the class labels.
builders - the builders of layers from bottom to top.
-
-
Method Details
-
numClasses
public int numClasses()
Description copied from interface: Classifier
Returns the number of classes.
- Specified by:
numClasses in interface Classifier&lt;double[]&gt;
- Returns:
- the number of classes.
-
classes
public int[] classes()
Description copied from interface: Classifier
Returns the class labels.
- Specified by:
classes in interface Classifier&lt;double[]&gt;
- Returns:
- the class labels.
-
predict
public int predict(double[] x, double[] posteriori)
Description copied from interface: Classifier
Predicts the class label of an instance and also calculates the posteriori probabilities. Classifiers may NOT support this method since not all classification algorithms are able to calculate such posteriori probabilities.
- Specified by:
predict in interface Classifier&lt;double[]&gt;
- Parameters:
x - an instance to be classified.
posteriori - a posteriori probabilities on output.
- Returns:
- the predicted class label.
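A short usage sketch (assuming a fitted model, an instance x, and k classes):

    double[] posteriori = new double[k];
    int label = model.predict(x, posteriori);
    // posteriori now holds the estimated class probabilities of x.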
-
predict
public int predict(double[] x)
Description copied from interface: Classifier
Predicts the class label of an instance.
- Specified by:
predict in interface Classifier&lt;double[]&gt;
- Parameters:
x - the instance to be classified.
- Returns:
- the predicted class label.
-
soft
public boolean soft()
Description copied from interface: Classifier
Returns true if this is a soft classifier that can estimate the posteriori probabilities of classification.
- Specified by:
soft in interface Classifier&lt;double[]&gt;
- Returns:
- true if soft classifier.
-
online
public boolean online()
Description copied from interface: Classifier
Returns true if this is an online learner.
- Specified by:
online in interface Classifier&lt;double[]&gt;
- Returns:
- true if online learner.
-
update
public void update(double[] x, int y)
Updates the model with a single sample. RMSProp is not applied.
- Specified by:
update in interface Classifier&lt;double[]&gt;
- Parameters:
x - the training instance.
y - the training label.
-
update
public void update(double[][] x, int[] y)
Updates the model with a mini-batch. RMSProp is applied if rho > 0.
- Specified by:
update in interface Classifier&lt;double[]&gt;
- Parameters:
x - the training instances.
y - the training labels.
-
fit
public static MLP fit(double[][] x, int[] y, Properties params)
Fits an MLP model.
- Parameters:
x - the training dataset.
y - the training labels.
params - the hyper-parameters.
- Returns:
- the model.
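A hedged usage sketch; the property keys below are hypothetical illustrations of hyper-parameter names and should be checked against the Smile documentation for the installed version:

    java.util.Properties params = new java.util.Properties();
    params.setProperty("smile.mlp.layers", "Sigmoid(50)"); // hypothetical key and value
    params.setProperty("smile.mlp.epochs", "10");          // hypothetical key
    MLP model = MLP.fit(x, y, params);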
-