Salford Predictive Modeler® Software Suite
SPM® 8
- Brainpower: 70+ pre-packaged automation scenarios inspired by the way leading model analysts structure their work.
- Efficiencies: tools to relieve gruntwork, allowing the analyst to focus on the creative aspects of model development.
- Enhanced Algorithms: Regression, Classification, and Logistic Regression enhanced to support massive datasets.
- Improvements: new features for our core tools, based on user feedback and advances in data science.
- Bridging the Gap: between the leading-edge academic thinking of Jerome Friedman and Leo Breiman and real-world applications.
Because Accuracy Matters
The Salford Predictive Modeler® (SPM) software suite is a highly accurate and ultra-fast platform for developing predictive, descriptive, and analytical models from databases and datasets of any size, complexity, or organization.
The SPM software suite's data mining technologies span classification, regression, survival analysis, missing value analysis, data binning, and clustering/segmentation. SPM's algorithms are considered essential in sophisticated data science circles.
The SPM software suite's automation accelerates model building by conducting substantial portions of the model exploration and refinement process for the analyst. While the analyst remains in full control, SPM can optionally anticipate the analyst's next best steps and package a complete set of results from alternative modeling strategies for easy review.
Automation
SPM includes 70+ pre-packaged scenarios, essentially structured experiments, inspired by how leading model analysts organize their work. We call them "Automates." Each Automate builds multiple models automatically so that the analyst can easily compare the alternatives.
Example 1: Banking Applications
Automate Shaving
Automate Shaving helps identify informative subsets of variables within large datasets containing correlated predictors, such as account data. With automation, you can achieve significant model reduction with minimal (if any) sacrifice in model accuracy. For example, start with the complete list of variables and run automated shaving from the top to eliminate variables that look promising on the learn sample but fail to generalize. You can then run shaving from the bottom to automatically eliminate the bulk of redundant and unnecessary predictors, and follow up with "shaving error" to quickly zero in on the most informative subset of features.
Unlike typical data mining tools, Automate Shaving offers more than a single variable importance list: the analyst is given the full set of candidate variable subsets and their performance, making it easy to select the final variable list and eliminating the burden of repetitive testing. Expert modelers typically devote considerable time and effort to optimizing their variable list; Automate Shaving automates this process.
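The shaving loop is easy to picture outside SPM. Below is a minimal Python/scikit-learn sketch of "shaving from the bottom" on synthetic data; it is a generic analogue for illustration, not SPM's Automate Shaving implementation.

```python
# Illustrative sketch: repeatedly drop the least important predictor and
# track how test accuracy responds as the variable list shrinks.
import numpy as np
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split

X, y = make_classification(n_samples=2000, n_features=30, n_informative=8,
                           random_state=0)
X_learn, X_test, y_learn, y_test = train_test_split(X, y, random_state=0)

features = list(range(X.shape[1]))
while len(features) > 1:
    model = RandomForestClassifier(n_estimators=100, random_state=0)
    model.fit(X_learn[:, features], y_learn)
    acc = model.score(X_test[:, features], y_test)
    print(f"{len(features):2d} variables -> test accuracy {acc:.3f}")
    # Shave: remove the variable the current model finds least important.
    features.pop(int(np.argmin(model.feature_importances_)))
```

Plotting accuracy against the number of remaining variables makes the most informative subset easy to spot, which is essentially what the Automate's packaged results let the analyst review at a glance.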
Example 2: Fraud Detection
Automate Priors
In typical fraud detection applications the analyst is concerned with identifying different sets of rules leading to a varying probability of fraud. Decision trees and TreeNet gradient boosting technology are typically used to build classification rules for detecting fraud. Any classification tree is constructed based on a specific user-supplied set of prior probabilities.
One set of priors will force trees to search for rules with high concentrations of fraud, while other sets will produce trees with somewhat relaxed assumptions. To gain the most benefit from tree-based rule searching, analysts try a large number of different configurations of prior probabilities. Automate Priors fully automates this process. The result is a large collection of rules, ranging from extremely high-confidence fraud segments with low support to moderate indications of fraud with very wide support. For example, you can identify small segments with 100% fraud, or a large segment with a lower probability of fraud, and everything in between.
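The spirit of this sweep can be shown with generic tools. The sketch below approximates varying priors with class weights on a scikit-learn decision tree over synthetic data; the weights, data, and thresholds are illustrative assumptions, not SPM's Automate Priors.

```python
# Illustrative sketch: each class-weight setting plays the role of one
# prior-probability configuration in a fraud classification tree.
from sklearn.datasets import make_classification
from sklearn.model_selection import train_test_split
from sklearn.tree import DecisionTreeClassifier

X, y = make_classification(n_samples=5000, weights=[0.95, 0.05],
                           random_state=0)  # class 1 = rare "fraud"
X_learn, X_test, y_learn, y_test = train_test_split(X, y, random_state=0)

for fraud_prior in (0.05, 0.25, 0.50, 0.75, 0.95):
    weights = {0: 1.0 - fraud_prior, 1: fraud_prior}
    tree = DecisionTreeClassifier(max_depth=4, class_weight=weights,
                                  random_state=0).fit(X_learn, y_learn)
    pred = tree.predict(X_test)
    flagged = int(pred.sum())
    precision = (y_test[pred == 1] == 1).mean() if flagged else float("nan")
    print(f"fraud prior {fraud_prior:.2f}: {flagged:4d} cases flagged, "
          f"precision {precision:.2f}")
```

Higher fraud priors yield wider, lower-confidence segments; lower priors concentrate on small, high-confidence pockets of fraud, mirroring the trade-off described above.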
Example 3: Market Research - Surveys
Automate MVI (Missing Value Indicators)
In any survey, a large fraction of the information may be missing: respondents often leave questions unanswered, either because they don't want to answer or because they are unable to. In addition to Salford Systems' expertise in handling missing values, a new automation feature allows the analyst to automatically generate multiple models, including: 1) a model predicting response based solely on the pattern of missing values; 2) a model that automatically creates dummy missing value indicators in addition to the original set of predictors; and/or 3) a model that relies on engine-specific internal handling of missing values.
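The three variants can be sketched generically. Below is an illustrative pandas/scikit-learn analogue (scikit-learn's histogram gradient boosting handles missing values natively); the synthetic survey data and model choices are assumptions, not SPM's implementation.

```python
# Illustrative sketch of the three Automate MVI model variants.
import numpy as np
import pandas as pd
from sklearn.ensemble import HistGradientBoostingClassifier
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(0)
X = pd.DataFrame(rng.normal(size=(1000, 5)),
                 columns=[f"q{i}" for i in range(5)])
X[X > 1.5] = np.nan                        # simulate unanswered questions
# Synthetic response that depends on the missingness pattern, plus noise.
y = ((X.isna().sum(axis=1) >= 1) ^ (rng.random(1000) < 0.2)).astype(int)

indicators = X.isna().astype(int).add_suffix("_missing")
variants = {
    "1) missingness pattern only": indicators,
    "2) predictors + indicators": pd.concat([X, indicators], axis=1),
    "3) engine-internal handling": X,      # NaN handled by the engine
}
for name, data in variants.items():
    score = cross_val_score(HistGradientBoostingClassifier(random_state=0),
                            data, y, cv=5).mean()
    print(f"{name}: CV accuracy {score:.3f}")
```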
Example 4: Engineering Application
Automate Target
In a modern engineering application, as part of the experimental design, a large collection of sampled points may be gathered under different operating conditions. It can be challenging to identify mutual dependencies among the different parameters. For example, temperatures could be perfectly dependent on each other, or could be some unknown function of other operating conditions like pressure and/or revolutions. Automate Target gives you a powerful means to automatically explore and extract all mutual dependencies among predictors. By "dependencies" we mean potentially nonlinear multivariate relationships that go well beyond the simplicity of conventional correlations. Furthermore, as a powerful side effect, this Automate provides a general means of missing value imputation, which is extremely useful for modeling engines that do not directly handle missing values.
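A generic version of this idea is to cycle through the variables, treating each in turn as the target and the rest as predictors. The sketch below does this with scikit-learn on synthetic operating data; the variable names and model are illustrative assumptions, not SPM's Automate Target.

```python
# Illustrative sketch: predict each variable from all the others and use
# cross-validated R^2 to flag (possibly nonlinear) mutual dependencies.
import numpy as np
import pandas as pd
from sklearn.ensemble import RandomForestRegressor
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(0)
df = pd.DataFrame({"pressure": rng.uniform(1, 10, 500),
                   "rpm": rng.uniform(100, 900, 500)})
# Temperature is an unknown nonlinear function of the other conditions.
df["temperature"] = (np.sqrt(df["pressure"]) * np.log(df["rpm"])
                     + rng.normal(0, 0.05, 500))

for target in df.columns:
    r2 = cross_val_score(RandomForestRegressor(random_state=0),
                         df.drop(columns=target), df[target],
                         cv=5, scoring="r2").mean()
    print(f"{target} predicted from the rest: R^2 = {r2:.2f}")
```

A high score flags a dependency (here, temperature), and the same fitted models can be used to impute that variable when it is missing.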
Example 5: Web Advertising
Automate Sample
In an online ad placement application, one has to balance the amount of data used against the time it takes to build the model. In web advertising there is a virtually unlimited amount of data, so while ideally you would wish to use all available data, there is always a limit on how much can be used for real-time deployment. Automate Sample allows the analyst to automatically explore the impact of learn sample size on model accuracy. For example, you may discover that using 200,000,000 transactions provides no additional benefit in model accuracy compared to 100,000,000 transactions.
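The underlying experiment is a learning curve over the learn-sample size. The sketch below is a scaled-down scikit-learn analogue on synthetic data (tens of thousands of rows rather than hundreds of millions of transactions); it is illustrative, not SPM's Automate Sample.

```python
# Illustrative sketch: grow the learn sample and watch test accuracy.
from sklearn.datasets import make_classification
from sklearn.ensemble import HistGradientBoostingClassifier
from sklearn.model_selection import train_test_split

X, y = make_classification(n_samples=60_000, n_features=20, random_state=0)
X_pool, X_test, y_pool, y_test = train_test_split(X, y, test_size=10_000,
                                                  random_state=0)

for n in (1_000, 5_000, 10_000, 25_000, 50_000):
    model = HistGradientBoostingClassifier(random_state=0)
    model.fit(X_pool[:n], y_pool[:n])
    print(f"learn sample {n:>6}: test accuracy "
          f"{model.score(X_test, y_test):.3f}")
```

Once the curve flattens, additional data buys little accuracy, which is exactly the diagnosis described above.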
Example 6: Microarray Application
Automate TARGETSHUFFLE
Microarray research datasets are characterized by an extremely large number of predictors (genes) and a very limited number of records (patients). This opens up a vast area of ambiguity: even a random subset of predictors may produce a seemingly good model. Automate TARGETSHUFFLE allows you to determine whether model performance is as genuine as it appears. It automatically constructs a large number of auxiliary models based on randomly shuffled target variables; by comparing the actual model's performance with this reference distribution (models with no real dependency), a final judgment on model performance can be made. This technology could challenge some currently published microarray research: if a dataset with its target dependency deliberately destroyed can still yield a model with good accuracy, then relying on the original model becomes rather dubious.
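Target shuffling is simple to sketch with generic tools: refit the model on randomly permuted targets and compare the real score with the resulting null distribution. The Python example below uses scikit-learn on synthetic microarray-shaped data and is illustrative, not SPM's implementation.

```python
# Illustrative sketch: build a null distribution of scores by shuffling
# the target, then compare the real model's score against it.
import numpy as np
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import cross_val_score

# Microarray-like shape: few records (patients), many predictors (genes).
X, y = make_classification(n_samples=60, n_features=2000, n_informative=10,
                           random_state=0)
model = RandomForestClassifier(n_estimators=200, random_state=0)
real_score = cross_val_score(model, X, y, cv=5).mean()

rng = np.random.default_rng(0)
null_scores = [cross_val_score(model, X, rng.permutation(y), cv=5).mean()
               for _ in range(20)]

print(f"real model CV accuracy:   {real_score:.3f}")
print(f"shuffled-target accuracy: {np.mean(null_scores):.3f} "
      f"+/- {np.std(null_scores):.3f}")
```

If the real score does not clearly beat the shuffled-target distribution, the apparent performance is an artifact of the predictor-to-record ratio.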
Features List
Salford Predictive Modeler® 8 General Features:
- Modeling Engine: CART® decision trees
- Modeling Engine: TreeNet® gradient boosting
- Modeling Engine: Random Forests® tree ensemble
- Modeling Engine: MARS® nonlinear regression splines
- Modeling Engine: GPS regularized regression (LASSO, Elastic Net, Ridge, etc.)
- Modeling Engine: RuleLearner, incorporating TreeNet’s accuracy plus the interpretability of regression
- Modeling Engine: ISLE model compression
- 70+ pre-packaged automation routines for enhanced model building and experimentation
- Tools to relieve gruntwork, allowing the analyst to focus on the creative aspects of model development.
- Open Minitab Worksheet (.MTW) functionality
CART® Features:
- Hotspot detection to discover the most important parts of the tree and the corresponding tree rules
- Variable importance measures to understand the most important variables in the tree
- Deploy the model and generate predictions in real-time or otherwise
- User defined splits at any point in the tree
- Differential lift (also called “uplift” or “incremental response”) modeling for assessing the efficacy of a treatment
- Automation tools for model tuning and other experiments, including:
- Automatic recursive feature elimination for advanced variable selection
- Experiment with the prior probabilities to obtain a model that achieves better accuracy rates for the more important class
- Perform repeated cross validation
- Build CART models on bootstrap samples
- Build two linked models, where the first predicts a binary event and the second predicts a numeric value: for example, predicting whether someone will buy and how much they will spend (see the sketch after this list)
- Discover the impact of different learning and testing partitions
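As a rough illustration of the linked-model item above, the sketch below chains a scikit-learn classifier (will they buy?) with a regressor fit only on buyers (how much?); it is a generic two-stage analogue on synthetic data, not SPM's feature.

```python
# Illustrative sketch of two linked models: binary event, then amount.
import numpy as np
from sklearn.ensemble import RandomForestClassifier, RandomForestRegressor

rng = np.random.default_rng(0)
X = rng.normal(size=(5000, 10))
buys = (X[:, 0] + rng.normal(size=5000)) > 1            # binary event
spend = np.where(buys, 50 + 20 * X[:, 1] + rng.normal(size=5000), 0.0)

clf = RandomForestClassifier(random_state=0).fit(X, buys)
reg = RandomForestRegressor(random_state=0).fit(X[buys], spend[buys])

# Expected spend = P(buy) * predicted amount given a purchase.
expected = clf.predict_proba(X)[:, 1] * reg.predict(X)
print(f"mean expected spend: {expected.mean():.2f} "
      f"(actual mean: {spend.mean():.2f})")
```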
MARS® Features:
- Graphically understand how variables affect the model response
- Determine the importance of a variable or set of interacting variables
- Deploy the model and generate predictions in real-time or otherwise
- Automation tools for model tuning and other experiments, including:
- Automatic recursive feature elimination for advanced variable selection
- Automatically assess the impact of allowing interactions in the model
- Easily find the best minimum span value
- Perform repeated cross validation
- Discover the impact of different learning and testing partitions
TreeNet® Features:
- Graphically understand how variables affect the model response with partial dependency plots
- Regression loss functions: least squares, least absolute deviation, quantile, Huber-M, Cox survival, Gamma, Negative Binomial, Poisson, and Tweedie
- Classification loss functions: binary or multinomial
- Differential lift (also called “uplift” or “incremental response”) modeling
- Column subsampling to improve model performance and speed up runtime
- Regularized Gradient Boosting (RGBOOST) to increase accuracy
- RuleLearner: build interpretable regression models by combining TreeNet gradient boosting and regularized regression (LASSO, Elastic Net, Ridge etc.)
- ISLE: Build smaller, more efficient gradient boosting models using regularized regression (LASSO, Elastic Net, Ridge, etc.)
- Variable Interaction Discovery Control
- Determine definitively whether or not interactions of any degree need to be included
- Control the interactions allowed or disallowed in the model with Minitab’s patented interaction control language
- Discover the most important interactions in the model
- Calibration tools for rare-event modeling
- Automation tools for model tuning and other experiments, including:
- Automatic recursive feature elimination for advanced variable selection
- Experiment with different learn rates automatically
- Control the extent of interactions occurring in the model
- Build two linked models, where the first predicts a binary event and the second predicts a numeric value: for example, predicting whether someone will buy and how much they will spend
- Find the best parameters in your regularized gradient boosting model
- Perform a stochastic search for the core gradient boosting parameters
- Discover the impact of different learning and testing partitions
Random Forests® Features:
- Use for classification, regression, or clustering
- Outlier detection
- Proximity heat map and multi-dimensional scaling for graphically determining clusters in classification problems (binary or multinomial)
- Parallel Coordinates Plot for a better understanding of what levels of predictor values lead to a particular class assignment
- Advanced missing value imputation
- Unsupervised learning: Random Forests builds the proximity matrix, and hierarchical clustering techniques are then applied (see the sketch after this list)
- Variable importance measures to understand the most important variables in the model
- Deploy the model and generate predictions in real-time or otherwise
- Automation tools for model tuning and other experiments, including:
- Automatic recursive feature elimination for advanced variable selection
- Easily fine tune the random subset size taken at each split in each tree
- Assess the impact of different bootstrap sample sizes
- Discover the impact of different learning and testing partitions
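As a rough illustration of the proximity-based unsupervised learning item above, the sketch below builds an unsupervised forest, computes pairwise proximities as the fraction of trees in which two cases share a terminal node, and applies hierarchical clustering to the resulting distances; it is a generic scikit-learn/SciPy analogue, not SPM's Random Forests implementation.

```python
# Illustrative sketch: forest proximity matrix + hierarchical clustering.
import numpy as np
from scipy.cluster.hierarchy import fcluster, linkage
from scipy.spatial.distance import squareform
from sklearn.datasets import make_blobs
from sklearn.ensemble import RandomTreesEmbedding

X, _ = make_blobs(n_samples=200, centers=3, random_state=0)

# Unsupervised forest: map each case to the leaf it reaches in each tree.
forest = RandomTreesEmbedding(n_estimators=200, random_state=0).fit(X)
leaves = forest.apply(X)                  # shape (n_samples, n_trees)

# Proximity = fraction of trees in which two cases share a leaf.
proximity = (leaves[:, None, :] == leaves[None, :, :]).mean(axis=2)
distance = 1.0 - proximity
np.fill_diagonal(distance, 0.0)

labels = fcluster(linkage(squareform(distance), method="average"),
                  t=3, criterion="maxclust")
print("cluster sizes:", np.bincount(labels)[1:])
```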
Webinars
The Evolution of Regression Modeling: from Classical Linear Regression to Modern Ensembles
Date/Time: Fridays, March 1, 15, 29, and April 12, 2013, 10am–11am PST
Course Description: Regression is one of the most popular modeling methods, but the classical approach has significant problems. This webinar series addresses those problems. Are you working with large datasets? Is your data challenging? Does it include missing values, nonlinear relationships, local patterns, and interactions? Then this webinar series is for you! We will cover improvements to conventional and logistic regression, including a discussion of classical, regularized, and nonlinear regression, as well as modern ensemble and data mining approaches. This series will be of value to any classically trained statistician or modeler.
Part 1: Regression methods discussed (download slides)
- Classical Regression
- Logistic Regression
- Regularized Regression: GPS Generalized Path Seeker
- Nonlinear Regression: MARS Regression Splines
Part 2: Step-by-step demonstration
- Datasets and software available for download
- Instructions for reproducing demo at your leisure
- For the dedicated student: apply these methods to your own data (optional)
Part 3: Regression methods discussed (download slides)
*Part 1 is a recommended prerequisite
- Nonlinear Ensemble Approaches: TreeNet Gradient Boosting; Random Forests; Gradient Boosting incorporating RF
- Ensemble Post-Processing: ISLE; RuleLearner
Part 4: Hands-on demonstration of concepts discussed in Part 3 (download slides)
- Step-by-step demonstration
- Datasets and software available for download
- Instructions for reproducing demo at your leisure
- For the dedicated student: apply these methods to your own data (optional)
Advances in TreeNet Gradient Boosting
Advances in Gradient Boosting: The Power of Post-Processing
Learn how TreeNet stochastic gradient boosting can be improved by post-processing techniques such as GPS Generalized Path Seeker, RuleLearner, and ISLE.
Course Outline:
I. Gradient Boosting and Post-Processing:
- What is missing from Gradient Boosting?
- Why are post-processing techniques used?
II. Applications Benefiting from Post-Processing: Examples from a variety of industries.
- Financial Services
- Biomedical
- Environmental
- Manufacturing
- Ad serving
III. Typical Post-Processing Steps
IV. Techniques
- Generalized Path Seeker (GPS): Modern high-speed LASSO-style regularized regression
- Importance Sampled Learning Ensembles (ISLE): identify and reweight the most influential trees (see the sketch after this outline)
- RuleLearner: ISLE on “steroids.” Identify the most influential nodes and rules
V. Case Study Example
- Output/Results without Post-Processing
- Output/Results with Post-Processing
- Demo
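As a rough illustration of the ISLE idea named in the Techniques list above, the sketch below treats each boosted tree's predictions as a column and lets a LASSO reweight and prune the trees; it is a generic scikit-learn analogue, not SPM's GPS/ISLE implementation.

```python
# Illustrative sketch of ISLE-style post-processing: LASSO over the
# per-tree predictions of a gradient boosting model.
import numpy as np
from sklearn.datasets import make_regression
from sklearn.ensemble import GradientBoostingRegressor
from sklearn.linear_model import LassoCV

X, y = make_regression(n_samples=2000, n_features=20, noise=10,
                       random_state=0)
gbm = GradientBoostingRegressor(n_estimators=300, max_depth=3,
                                random_state=0).fit(X, y)

# One column per boosted tree: that tree's contribution as a feature.
tree_preds = np.column_stack([est[0].predict(X)
                              for est in gbm.estimators_])

# The LASSO reweights influential trees and zeroes out the rest.
lasso = LassoCV(cv=5, random_state=0).fit(tree_preds, y)
kept = int(np.sum(lasso.coef_ != 0))
print(f"trees kept after LASSO reweighting: {kept} of "
      f"{len(gbm.estimators_)}")
```

In practice the reweighting would be validated on held-out data; the point here is only the mechanics of post-processing an ensemble with regularized regression.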
Watch the Video
Combining CART and TreeNet
Combining CART decision trees with TreeNet stochastic gradient boosting: A winning combination.
Learn how you can combine the best of both tools in this one-hour webinar.
Course Outline
I. Classification and Regression Trees Pros/Cons
II. Stochastic Gradient Boosting: a promising way to overcome the shortcomings of a single tree
III. Introducing Stochastic Gradient Boosting, a powerful modern ensemble of boosted trees
- Methodology
- Reporting
- Interpretability
- Post-Processing
- Interaction Detection
IV. Advantages of using both Classification and Regression Trees and Tree Ensembles
Watch the Video
Requirements
Windows System Requirements
- Operating System: Windows 7 SP1 or later, Windows 8 or 8.1, or Windows 10.
- RAM: 2 GB.
- Processor: Intel® Pentium® 4 or AMD Athlon™ Dual Core, with SSE2 technology.
- Hard Disk Space: 2 GB (minimum) free space available.
- Screen Resolution: 1024 x 768 or higher.
Linux System Requirements
- Operating System: Ubuntu 14.04 or 16.04, CentOS 6.9 or 7.5, RHEL 6.9 or 7.5.
- RAM: 2 GB.
- Processor: Intel® Pentium® 4 or AMD Athlon™ Dual Core, with SSE2 technology.
- Hard Disk Space: 2 GB (minimum) free space available.
Tags: v8, Salford Predictive Modeler, SPM, Salford-Systems