
Advances in Gradient Boosting: The Power of Post-Processing

Learn how TreeNet stochastic gradient boosting can be improved by post-processing techniques such as the Generalized Path Seeker (GPS), RuleLearner, and ISLE (Importance Sampled Learning Ensembles).

Course Outline:

I. Gradient Boosting and Post-Processing

  • What is missing from Gradient Boosting?
  • Why are post-processing techniques used?

II. Applications Benefiting from Post-Processing: Examples from a variety of industries.

  • Financial Services
  • Biomedical
  • Environmental
  • Manufacturing
  • Ad Serving

III. Typical Post-Processing Steps

 

IV. Techniques

  • Generalized Path Seeker (GPS): modern high-speed LASSO-style regularized regression
  • Importance Sampled Learning Ensembles (ISLE): identify and reweight the most influential trees (a minimal sketch follows this outline)
  • RuleLearner: ISLE on “steroids”; identify the most influential nodes and rules

V. Case Study Example

  • Output/Results without Post-Processing
  • Output/Results with Post-Processing
  • Demo
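To make the ISLE idea concrete, here is a minimal sketch, assuming scikit-learn's GradientBoostingRegressor as a stand-in for TreeNet and LassoCV as a stand-in for GPS's regularized regression: each tree's contribution becomes a column in a new design matrix, and the LASSO reweights the trees, zeroing out the least useful ones. The dataset and every parameter value are illustrative, not SPM settings.

```python
# Minimal ISLE-style post-processing sketch (scikit-learn stand-ins for
# TreeNet and GPS; all parameters illustrative).
import numpy as np
from sklearn.datasets import make_regression
from sklearn.ensemble import GradientBoostingRegressor
from sklearn.linear_model import LassoCV
from sklearn.model_selection import train_test_split

X, y = make_regression(n_samples=1000, n_features=20, noise=10.0, random_state=0)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

# Step 1: grow the boosted ensemble.
gbm = GradientBoostingRegressor(n_estimators=200, learning_rate=0.1,
                                max_depth=3, random_state=0)
gbm.fit(X_train, y_train)

def tree_basis(model, X):
    # staged_predict yields cumulative predictions after each tree;
    # differencing the stages recovers each tree's own contribution.
    staged = np.column_stack(list(model.staged_predict(X)))
    return np.diff(np.column_stack([np.zeros(len(X)), staged]), axis=1)

# Step 2: per-tree contributions become the columns of a new design matrix.
B_train, B_test = tree_basis(gbm, X_train), tree_basis(gbm, X_test)

# Step 3: LASSO reweights the trees; coefficients shrunk to zero drop
# their trees from the compressed ensemble.
lasso = LassoCV(cv=5).fit(B_train, y_train)
print(f"trees kept: {np.sum(lasso.coef_ != 0)} of {B_train.shape[1]}")
print("post-processed test R^2:", round(lasso.score(B_test, y_test), 3))
```

Trees whose coefficients survive can be kept and rescaled, which is how ISLE compresses a large ensemble, often with little or no loss of accuracy.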



SPM Components and Features: What's New

  • CART (Classification and Regression Trees): User-defined linear combination lists for splitting; constraints on trees; automatic addition of missing value indicators; enhanced GUI reporting; user-controlled cross validation; out-of-bag performance stats and predictions; profiling of terminal nodes based on user-supplied variables; comparison of train vs. test consistency across nodes; RandomForests-style variable importance
  • MARS (Automated Nonlinear Regression): Updated GUI interface; model performance based on an independent test sample or cross validation; support for time series models
  • TreeNet (Gradient Boosting, Boosted Trees): One-Tree TreeNet (CART alternative); RandomForests via TreeNet (RandomForests regression alternative); Interaction Control Language (ICL); interaction strength reporting; enhanced partial dependency plots; RandomForests-style randomized splits
  • RandomForests (Bagging Trees): RandomForests regression; saving of out-of-bag scores; speed enhancements
  • High-Dimensional Multivariate Pattern Discovery: Battery Target to identify mutual dependencies in the data
  • Unsupervised Learning (Breiman's Column Scrambler): New
  • Text Mining: New
  • Model Compression and Rule Extraction: New: ISLE; RuleLearner; hybrid compression
  • Automation: 56 pre-packaged scenarios based on years of high-end consulting
  • Parallel Processing: New: automatic support of multiple cores via multithreading
  • Interaction Detection
  • Hotspot Detection: Segment extraction (Battery Priors)
  • Missing Value Handling and Imputation
  • Outlier Detection: New: GUI reports, tables, and graphs
  • Linear Methods for Regression, Recent Advances and Discoveries: New: OLS regression; regularized regression, including LAR/LASSO, Ridge, and Elastic Net/Generalized Path Seeker
  • Linear Methods for Classification, Recent Advances and Discoveries: New: LOGIT; LAR/LASSO; Ridge; Elastic Net/Generalized Path Seeker
  • Model Assessment and Selection: Unified reporting of various performance measures across different models
  • Ensemble Learning: New: Battery Bootstrap; Battery Model
  • Time Series Modeling: New
  • Model Simplification Methods
  • Data Preparation: New: Battery Bin for automatic binning of a user-selected set of variables, with a large number of options
  • Large Data Handling: 64-bit support; large memory capacity limited only by your hardware
  • Model Translation (SAS, C, Java, PMML, Classic): Java
  • Data Access (all popular statistical formats supported): Updated Stat Transfer drivers, including R workspaces
  • Model Scoring: Score Ensemble (combines multiple models into a powerful predictive machine)


AutoDiscovery of Predictors in SPM

Autodiscovery leverages the stability advantages of multiple trees to rank variables by importance and thus select a subset of predictors for modeling. In SPM® v8.2 and earlier, Autodiscovery runs a very simple, training-data-only TreeNet model grown out to 200 trees. The variable importance ranking generated by this model is then used to reduce the list of all available predictors to the top performers in this background model. Autodiscovery is fast and easy, as there are no control parameters to set, but it is just a mechanism for quickly testing whether a substantial reduction in the number of predictors can improve model performance.
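For readers who want to try the idea outside SPM, here is a minimal sketch of Autodiscovery-style screening using scikit-learn gradient boosting in place of TreeNet. The 200-tree background model follows the description above; the top-10 cutoff and the dataset are illustrative assumptions, not SPM settings.

```python
# Minimal Autodiscovery-style predictor screening sketch
# (scikit-learn stand-in for TreeNet; cutoff is illustrative).
import numpy as np
from sklearn.datasets import make_classification
from sklearn.ensemble import GradientBoostingClassifier

X, y = make_classification(n_samples=2000, n_features=100,
                           n_informative=8, random_state=0)

# Background model: simple, training data only, grown out to 200 trees.
background = GradientBoostingClassifier(n_estimators=200, random_state=0)
background.fit(X, y)

# Rank predictors by variable importance and keep the top performers.
ranking = np.argsort(background.feature_importances_)[::-1]
top_predictors = ranking[:10]
print("selected predictor columns:", sorted(top_predictors))

# Refit on the reduced predictor list and compare performance downstream.
reduced = GradientBoostingClassifier(n_estimators=200, random_state=0)
reduced.fit(X[:, top_predictors], y)
```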

How can MARS models be implemented for predictive purposes?

A MARS predictive model can be implemented in two ways. First, new databases can be scored directly by identifying the MARS model and the data to be scored. MARS will perform all the required data transformations and calculations automatically and output the predicted scores. Second, the MARS predictive equation can be exported as ready-to-run C and SAS®-compatible code that can be deployed in the user's application framework.
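As a rough illustration of the first route (direct scoring), here is a minimal sketch using the open-source py-earth package, an independent MARS implementation rather than Salford's MARS; the data and settings are made up for the example, and the second route (exported C/SAS code) is not shown.

```python
# Minimal MARS scoring sketch using py-earth (an open-source MARS,
# not Salford's; data and settings are illustrative).
import numpy as np
from pyearth import Earth

rng = np.random.default_rng(0)
X_train = rng.uniform(0, 10, size=(500, 3))
y_train = np.maximum(0, X_train[:, 0] - 4) + rng.normal(0, 0.2, size=500)

# Fit the MARS model; the basis-function transformations are learned here.
model = Earth()
model.fit(X_train, y_train)

# Scoring: hand the fitted model new data; all required transformations
# and calculations are applied automatically.
X_new = rng.uniform(0, 10, size=(5, 3))
print(model.predict(X_new))
```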

How does MARS construct its models?

MARS starts from the premise that most relevant variables affect the outcome in a complex way. Therefore, when MARS considers whether to add a variable, it simultaneously searches for appropriate break points, known as knots. Models are constructed in a two-phase procedure: Phase I tests variables and potential knots, resulting in an overfit model; Phase II eliminates redundant factors and components that do not stand up to testing.
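For orientation, the textbook MARS model form (following Friedman's original formulation, not Salford-specific notation) is a weighted sum of hinge basis functions, with each knot t contributing a mirrored pair:

```latex
% Textbook MARS model: intercept plus weighted hinge basis functions.
% Each knot t on variable x_j yields the mirrored pair below; Phase I
% adds such terms (and their products) greedily, Phase II prunes them.
\[
\hat{f}(x) = \beta_0 + \sum_{m=1}^{M} \beta_m B_m(x),
\qquad
B_m(x) \in \bigl\{ \max(0,\, x_j - t),\ \max(0,\, t - x_j) \bigr\}
\]
```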

How does MARS differ from conventional regression?

Conventional regression models typically fit straight lines to data. MARS approaches model construction more flexibly, allowing for bends, thresholds, and other departures from a straight-line fit. MARS builds its model by piecing together a series of line segments, each with its own slope, which lets it trace out any pattern detected in the data.

How does MARS ensure that a model will perform as claimed on future data?

Almost all modeling technologies can fit training data accurately; the real question is performance on new data. MARS protects users from misleading results through its two-stage modeling process: it deliberately overfits at first and then prunes away every component that would not hold up on new data. MARS provides assessments through one of two built-in testing regimens, cross validation or reference to independent test data, and uses these tests to determine the degree of accuracy that can be expected from the best predictive model.
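Here is a minimal sketch of the two testing regimens, again using py-earth as a scikit-learn-compatible stand-in for Salford's MARS; all data and settings are illustrative.

```python
# Minimal sketch of the two testing regimens: cross validation vs.
# an independent test sample (py-earth stand-in; data illustrative).
import numpy as np
from pyearth import Earth
from sklearn.model_selection import cross_val_score, train_test_split

rng = np.random.default_rng(1)
X = rng.uniform(0, 10, size=(600, 4))
y = np.maximum(0, X[:, 1] - 5) + rng.normal(0, 0.3, size=600)

# Regimen 1: cross validation on the training data.
cv_scores = cross_val_score(Earth(), X, y, cv=5)
print("5-fold CV R^2:", cv_scores.mean())

# Regimen 2: reference to an independent test sample.
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=1)
model = Earth().fit(X_train, y_train)
print("independent test R^2:", model.score(X_test, y_test))
```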

How does MARS handle missing values?

MARS automatically creates a missing value indicator, a dummy variable that becomes one of the available predictors. These dummy variables record the presence or absence of data for the predictor variable in question.
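The mechanics are easy to mimic by hand. Here is a minimal pandas sketch of a missing value indicator; the column names are invented for the example, and MARS performs this step automatically.

```python
# Minimal missing-value-indicator sketch in pandas (column names are
# illustrative; MARS creates such indicators automatically).
import numpy as np
import pandas as pd

df = pd.DataFrame({"income": [52000.0, np.nan, 48500.0, np.nan]})

# Dummy variable: 1 where the predictor is missing, 0 where present.
df["income_missing"] = df["income"].isna().astype(int)

# The indicator joins the predictor list; the original column can then
# be imputed (here, with the observed mean).
df["income"] = df["income"].fillna(df["income"].mean())
print(df)
```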
