Class RandomForest

java.lang.Object
smile.regression.RandomForest
All Implemented Interfaces:
Serializable, ToDoubleFunction<Tuple>, SHAP<Tuple>, TreeSHAP, DataFrameRegression, Regression<Tuple>

public class RandomForest extends Object implements DataFrameRegression, TreeSHAP
Random forest for regression. Random forest is an ensemble method that consists of many regression trees and outputs the average of the individual trees' predictions. The method combines the bagging idea with random feature selection.

Each tree is constructed using the following algorithm:

  1. If the number of cases in the training set is N, randomly sample N cases with replacement from the original data. This sample will be the training set for growing the tree.
  2. If there are M input variables, a number m << M is specified such that at each node, m variables are selected at random out of the M and the best split on these m is used to split the node. The value of m is held constant while the forest is grown.
  3. Each tree is grown to the largest extent possible. There is no pruning.
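The per-tree sampling in steps 1 and 2 can be sketched in plain Java. This is an illustrative sketch only, not Smile's internal implementation:

```java
import java.util.Arrays;
import java.util.Random;

class BaggingSketch {
    // Step 1: a bootstrap sample of N row indices, drawn with replacement.
    static int[] bootstrap(int n, Random rng) {
        int[] idx = new int[n];
        for (int i = 0; i < n; i++) idx[i] = rng.nextInt(n);
        return idx;
    }

    // Step 2: choose m of M feature indices uniformly at random.
    // A partial Fisher-Yates shuffle keeps each subset equally likely.
    static int[] sampleFeatures(int M, int m, Random rng) {
        int[] f = new int[M];
        for (int i = 0; i < M; i++) f[i] = i;
        for (int i = 0; i < m; i++) {
            int j = i + rng.nextInt(M - i);
            int t = f[i]; f[i] = f[j]; f[j] = t;
        }
        return Arrays.copyOf(f, m);
    }
}
```

A fresh bootstrap sample and a fresh feature subset are drawn for every tree and every split, respectively, which is what de-correlates the trees in the ensemble.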
The advantages of random forest are:
  • For many data sets, it produces a highly accurate model.
  • It runs efficiently on large data sets.
  • It can handle thousands of input variables without variable deletion.
  • It gives estimates of which variables are important in the prediction.
  • It generates an internal unbiased estimate of the generalization error as the forest building progresses.
  • It has an effective method for estimating missing data and maintains accuracy when a large proportion of the data are missing.
The disadvantages are:
  • Random forests are prone to over-fitting for some datasets. This is even more pronounced in noisy classification/regression tasks.
  • For data including categorical variables with different numbers of levels, random forests are biased in favor of attributes with more levels. Therefore, the variable importance scores from random forests are not reliable for this type of data.
  • Constructor Details

    • RandomForest

      public RandomForest(Formula formula, RandomForest.Model[] models, RegressionMetrics metrics, double[] importance)
      Constructor.
      Parameters:
      formula - a symbolic description of the model to be fitted.
      models - the base models.
      metrics - the overall out-of-bag metric estimations.
      importance - the feature importance.
  • Method Details

    • fit

      public static RandomForest fit(Formula formula, DataFrame data)
      Fits a random forest for regression.
      Parameters:
      formula - a symbolic description of the model to be fitted.
      data - the data frame of the explanatory and response variables.
      Returns:
      the model.
    • fit

      public static RandomForest fit(Formula formula, DataFrame data, Properties params)
      Fits a random forest for regression.
      Parameters:
      formula - a symbolic description of the model to be fitted.
      data - the data frame of the explanatory and response variables.
      params - the hyper-parameters.
      Returns:
      the model.
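      The `Properties`-based overload reads hyper-parameters by key. The key names below follow Smile's `smile.random.forest.*` naming convention, but they are stated here as assumptions; verify the exact keys against your Smile version:

      ```java
      import java.util.Properties;

      // Hyper-parameter keys are assumptions following Smile's
      // "smile.random.forest.*" convention; check your Smile version.
      Properties params = new Properties();
      params.setProperty("smile.random.forest.trees", "500");
      params.setProperty("smile.random.forest.node.size", "5");
      params.setProperty("smile.random.forest.sample.rate", "1.0");
      // RandomForest model = RandomForest.fit(formula, data, params);
      ```

      Any key not set falls back to the method's defaults, so only the hyper-parameters being tuned need to appear.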
    • fit

      public static RandomForest fit(Formula formula, DataFrame data, int ntrees, int mtry, int maxDepth, int maxNodes, int nodeSize, double subsample)
      Fits a random forest for regression.
      Parameters:
      formula - a symbolic description of the model to be fitted.
      data - the data frame of the explanatory and response variables.
      ntrees - the number of trees.
      mtry - the number of input variables to be used to determine the decision at a node of the tree. p/3 generally gives good performance, where p is the number of variables.
      maxDepth - the maximum depth of the tree.
      maxNodes - the maximum number of leaf nodes in the tree.
      nodeSize - the number of instances in a node below which the tree will not split; nodeSize = 5 generally gives good results.
      subsample - the sampling rate for training each tree. 1.0 means sampling with replacement; < 1.0 means sampling without replacement.
      Returns:
      the model.
    • fit

      public static RandomForest fit(Formula formula, DataFrame data, int ntrees, int mtry, int maxDepth, int maxNodes, int nodeSize, double subsample, LongStream seeds)
      Fits a random forest for regression.
      Parameters:
      formula - a symbolic description of the model to be fitted.
      data - the data frame of the explanatory and response variables.
      ntrees - the number of trees.
      mtry - the number of input variables to be used to determine the decision at a node of the tree. p/3 generally gives good performance, where p is the number of variables.
      maxDepth - the maximum depth of the tree.
      maxNodes - the maximum number of leaf nodes in the tree.
      nodeSize - the number of instances in a node below which the tree will not split; nodeSize = 5 generally gives good results.
      subsample - the sampling rate for training each tree. 1.0 means sampling with replacement; < 1.0 means sampling without replacement.
      seeds - optional RNG seeds for each regression tree.
      Returns:
      the model.
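      Supplying one seed per tree makes each tree's random sampling reproducible across runs. The seed stream can be built with standard `java.util.stream` tools; the commented `fit` call is a hypothetical sketch (formula and data not shown):

      ```java
      import java.util.Random;
      import java.util.stream.LongStream;

      // One deterministic seed per tree, derived from a single master seed.
      long[] seeds = new Random(12345L).longs(500).toArray();
      LongStream seedStream = LongStream.of(seeds);
      // Hypothetical call; argument values are illustrative:
      // RandomForest model = RandomForest.fit(formula, data, 500, 0, 20, 500, 5, 1.0, seedStream);
      ```

      Rebuilding the stream from the same master seed yields an identical forest, which is useful for debugging and for comparing hyper-parameter settings fairly.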
    • formula

      public Formula formula()
      Description copied from interface: DataFrameRegression
      Returns the model formula.
      Specified by:
      formula in interface DataFrameRegression
      Specified by:
      formula in interface TreeSHAP
      Returns:
      the model formula.
    • schema

      public StructType schema()
      Description copied from interface: DataFrameRegression
      Returns the schema of predictors.
      Specified by:
      schema in interface DataFrameRegression
      Returns:
      the schema of predictors.
    • metrics

      public RegressionMetrics metrics()
      Returns the overall out-of-bag metric estimations. The OOB estimate is quite accurate given that enough trees have been grown. Otherwise, the OOB error estimate can be biased upward.
      Returns:
      the overall out-of-bag metric estimations.
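      The reason OOB estimation works: under bootstrap sampling, each observation is left out of a given tree's sample with probability (1 - 1/n)^n, which approaches e^(-1) ≈ 0.368. The following self-contained simulation (not Smile code) checks that roughly a third of the rows are out-of-bag for one tree:

      ```java
      import java.util.Random;

      // Simulate one bootstrap sample and count the rows never drawn.
      int n = 100_000;
      Random rng = new Random(1);
      boolean[] inBag = new boolean[n];
      for (int i = 0; i < n; i++) inBag[rng.nextInt(n)] = true;
      int oob = 0;
      for (boolean b : inBag) if (!b) oob++;
      double frac = (double) oob / n;  // close to 1/e ~ 0.368
      ```

      Each tree is therefore evaluated on about 37% of the data it never saw during training, and averaging these evaluations over many trees yields the overall OOB metrics.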
    • importance

      public double[] importance()
      Returns the variable importance. Every time a node is split on a variable, the impurity criterion for the two descendant nodes is less than that of the parent node. Adding up the decreases for each variable over all trees in the forest gives a fast measure of variable importance that is often very consistent with the permutation importance measure.
      Returns:
      the variable importance
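      The raw scores are summed impurity decreases, so only their relative magnitudes are meaningful. A common post-processing step, sketched here with hypothetical values in place of a real `importance()` result, is to rescale them to percentages:

      ```java
      // Hypothetical raw scores standing in for model.importance().
      double[] importance = {12.0, 3.0, 5.0};
      double total = 0.0;
      for (double v : importance) total += v;
      double[] pct = new double[importance.length];
      for (int i = 0; i < importance.length; i++) {
          pct[i] = 100.0 * importance[i] / total;
      }
      // pct is {60.0, 15.0, 25.0}: the first variable accounts for
      // 60% of the total impurity decrease across the forest.
      ```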
    • size

      public int size()
      Returns the number of trees in the model.
      Returns:
      the number of trees in the model
    • models

      public RandomForest.Model[] models()
      Returns the base models.
      Returns:
      the base models.
    • trees

      public RegressionTree[] trees()
      Description copied from interface: TreeSHAP
      Returns the decision trees.
      Specified by:
      trees in interface TreeSHAP
      Returns:
      the decision trees.
    • trim

      public RandomForest trim(int ntrees)
      Trims the tree model set to a smaller size in case of over-fitting. If the extra decision trees do not improve performance, removing them reduces the model size and speeds up prediction.
      Parameters:
      ntrees - the new (smaller) size of the tree model set.
      Returns:
      the trimmed model.
    • merge

      public RandomForest merge(RandomForest other)
      Merges two random forests.
      Parameters:
      other - the model to merge with.
      Returns:
      the merged model.
    • predict

      public double predict(Tuple x)
      Description copied from interface: Regression
      Predicts the dependent variable of an instance.
      Specified by:
      predict in interface Regression<Tuple>
      Parameters:
      x - an instance.
      Returns:
      the predicted value of dependent variable.
    • test

      public double[][] test(DataFrame data)
      Tests the model on a validation dataset.
      Parameters:
      data - the test data set.
      Returns:
      the predictions with the first 1, 2, ..., size() regression trees.
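      Because a forest's prediction is the average of its trees, the predictions with the first k trees are running means of the per-tree outputs. This self-contained sketch (hypothetical per-tree values, a single instance) shows the shape of one column of the returned array:

      ```java
      // Hypothetical per-tree predictions for one instance.
      double[] treePred = {2.0, 4.0, 6.0};
      double[] cumAvg = new double[treePred.length];
      double sum = 0.0;
      for (int k = 0; k < treePred.length; k++) {
          sum += treePred[k];
          cumAvg[k] = sum / (k + 1);  // average of the first k+1 trees
      }
      // cumAvg is {2.0, 3.0, 4.0}
      ```

      Plotting such running means against the number of trees is a simple way to check whether the forest has enough trees or could be trim()-med without losing accuracy.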