SPM® for Windows has long been able to read tables in relational databases through the ODBC interface. This capability was also recently added to the command-line version on Windows, and it is planned for UNIX platforms (including Mac OS X). The purpose of this article is to describe how to access MySQL databases specifically, but the same principles apply to data stored in other relational database systems; in most cases, the only difference will be the ODBC driver used.
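As an illustration of the driver-specific part, an ODBC connection string for MySQL typically looks like the fragment below. The driver name must match what is registered on your system, and the server, database, and credential values here are placeholders, not values from this article:

```text
DRIVER={MySQL ODBC 8.0 Unicode Driver};SERVER=localhost;PORT=3306;DATABASE=mydb;UID=myuser;PWD=mypassword;
```

Switching to another database system generally means changing only the DRIVER value (and any driver-specific options), which is why the rest of the workflow carries over.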
There are a variety of ways to represent dates in data files, and there is no standard, which can make life difficult when using date variables in a predictive model. Two of the more common representations are the Microsoft date format (used in Excel and other Microsoft products), which is the number of days since December 30, 1899, and the SAS date format, which is the number of days since January 1, 1960. For the sake of consistency, the data access library used by SPM® converts all date variables to Microsoft dates. The advantage of doing so is that one does not have to guess how dates are represented in the input dataset, and Microsoft products are common; the disadvantage is that the converted values may be confusing if you use non-Microsoft products (such as SAS) to manage your data.
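The relationship between the two formats is a fixed offset between their epochs. A minimal sketch of the conversion (the function name `sas_to_ms` is ours, for illustration only; SPM performs this conversion internally):

```python
from datetime import date

# Epochs used by the two representations
MS_EPOCH = date(1899, 12, 30)   # Microsoft serial date 0
SAS_EPOCH = date(1960, 1, 1)    # SAS date 0

def sas_to_ms(sas_days: int) -> int:
    """Convert a SAS date (days since 1960-01-01) to a Microsoft serial date."""
    return sas_days + (SAS_EPOCH - MS_EPOCH).days

# The fixed offset between the two epochs is 21916 days.
print(sas_to_ms(0))   # 21916
```

So a SAS date value of 0 (January 1, 1960) becomes the Microsoft serial date 21916; every other value shifts by the same constant.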
SPM 6.6 (TreeNet TN 6.4) or greater supports data access to Microsoft SQL Server, Oracle, MySQL, and other RDBMSs via the ODBC interface.
Since SQL queries cannot be entered through the standard Windows ODBC data source selection dialog, one must use the command line to open data directly from SQL Server.
Autodiscovery leverages the stability advantages of multiple trees to rank variables by importance and thus select a subset of predictors for modeling. In SPM® v8.2 and earlier, Autodiscovery runs a very simple, training-data-only TreeNet model grown out to 200 trees. The variable importance ranking generated from this model is then used to reduce the list of all available predictors down to the top-performing predictors in this background model. Autodiscovery is fast and easy, as there are no control parameters to set, but it is just a mechanism for quickly testing whether a substantial reduction in the number of predictors can improve model performance.
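The screening idea can be sketched in miniature. The toy code below is not TreeNet: it grows 200 single-split "stumps" on bootstrap samples (a crude stand-in for a tree ensemble), accumulates a per-column importance score, and keeps the top-ranked predictors; only the 200-tree count comes from the article, everything else is illustrative:

```python
import random

def stump_importance(X, y, n_trees=200, seed=0):
    """Rank predictors by accumulating, over bootstrap samples, the class
    separation achieved by the best single-split 'stump' in each sample.
    A toy stand-in for a tree-ensemble variable importance ranking."""
    rng = random.Random(seed)
    n, p = len(X), len(X[0])
    scores = [0.0] * p
    for _ in range(n_trees):
        idx = [rng.randrange(n) for _ in range(n)]       # bootstrap sample
        best_j, best_gain = None, -1.0
        for j in range(p):
            thr = X[rng.choice(idx)][j]                  # random split point
            left = [y[i] for i in idx if X[i][j] <= thr]
            right = [y[i] for i in idx if X[i][j] > thr]
            if not left or not right:
                continue
            # gain: difference in class mix between the two sides
            gain = abs(sum(left) / len(left) - sum(right) / len(right))
            if gain > best_gain:
                best_j, best_gain = j, gain
        if best_j is not None:
            scores[best_j] += best_gain
    return scores

# Toy data: column 0 drives the outcome, columns 1-2 are noise.
rng = random.Random(1)
X = [[rng.random(), rng.random(), rng.random()] for _ in range(200)]
y = [1 if row[0] > 0.5 else 0 for row in X]
scores = stump_importance(X, y)
top = max(range(3), key=lambda j: scores[j])
print(top)   # column 0, the informative predictor, should rank first
```

The subsequent, full-scale model would then be refit using only the top-ranked predictors, which is the "substantial refinement" being tested.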
CART®, MARS®, and TreeNet® were originally developed to analyze cross-sectional data, where each observation or record in the data is independent of all other records and no explicit accommodation is made for either time or censoring. Fortunately, research in statistics has shown us how to adapt our tools, as well as classical statistical tools such as logistic regression, to the analysis of time-series cross-sectional data and survival analysis data. This brief note outlines the topic, sometimes known as "discrete time survival analysis," showing you how to set up your data to estimate survival or failure time models. The methods discussed here also apply to the analysis of web logs and other sequentially-structured data. A collection of useful references is provided below.
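The core of the discrete-time setup is expanding the data into "person-period" format: each subject contributes one row per period at risk, with a binary indicator that is 1 only in the period of failure. A minimal sketch, with hypothetical field names of our choosing:

```python
# Expand (subject, survival time, event flag) records into person-period rows.
# event == 1 means the subject failed at `time`; event == 0 means censored.
def person_period(records):
    rows = []
    for subject_id, time, event in records:
        for period in range(1, time + 1):
            failed = 1 if (event == 1 and period == time) else 0
            rows.append((subject_id, period, failed))
    return rows

data = [("A", 3, 1),   # subject A fails in period 3
        ("B", 2, 0)]   # subject B is censored after period 2
print(person_period(data))
# [('A', 1, 0), ('A', 2, 0), ('A', 3, 1), ('B', 1, 0), ('B', 2, 0)]
```

Once the data are in this form, the binary `failed` column can be modeled with any classifier, with `period` (and its interactions) available as a predictor, which is what lets cross-sectional tools handle survival data.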
Like many programs, the Salford Predictive Modeler® software suite reads, writes, and otherwise manages temporary files in the course of its work. These are written to a particular directory on your computer called a "scratch directory". SPM also writes a command log to the scratch directory. The GUI version of SPM allows the location of this directory to be set as an option (with a sensible default), but non-GUI versions determine where to write temporary files by means of environment variables. Presently, SPM searches for the following environment variables and uses the value of the first one defined as its scratch directory:
A user's license sets a limit on the amount of learn sample data that can be analyzed. The learn sample is the data used to build the model; note that there is no limit on the number of test sample data points that may be analyzed. In other words, the limit applies to the rows-by-columns product of the observations and variables used to build the model. Variables not used in the model do not count, and observations reserved for testing, or excluded for other reasons, do not count.
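A hypothetical worked example of this accounting (the specific counts below are invented for illustration, and whether the target variable counts toward the column total is our assumption, not stated in the article):

```python
# Hypothetical license accounting: only variables used in the model and
# observations in the learn sample count toward the limit.
learn_rows = 5000          # observations used to build the model
model_vars = 12            # variables actually used in the model (assumed to include the target)
test_rows = 100_000        # test observations: unlimited, never counted

learn_data_points = learn_rows * model_vars
print(learn_data_points)   # 60000 data points counted against the license
```

Dropping unused variables or moving observations into the test sample therefore directly reduces the counted total.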
The following table describes the current set of "sizes" available. Please note that the minimum required RAM is not the same as the learn sample limit.