After a long time of inactivity, I started reading books again. This time my choice is "Introduction to Statistical Learning with Applications in R" (ISLR) by Gareth James et al. I have developed a habit of noting down important points from the book as well as my insights while reading it. These are my notes for Chapter 2.
Terminologies
Statistical learning investigates the relationship between a set of features $X = (X_1, X_2, \dots, X_p)$ and a particular output $Y$. The very general formula is

$$Y = f(X) + \epsilon$$
There are many names for $X$ and $Y$:
- $X$: input variable, predictor, (independent) variable, feature
- $Y$: output variable, dependent variable, response
$\epsilon$ is called the error term and it is random, independent of $X$, irreducible, and has mean zero. This is due to the fact that $Y$ may be dependent on some features other than $X$ and the values of such features are not reflected in the data set.
$f$ represents the systematic information that $X$ provides about $Y$. Oftentimes, we cannot obtain the exact form of $f$, but we can get a good estimate of $f$ through a number of methods.
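To make this concrete, here is a minimal R sketch that simulates data from the formula above, assuming a made-up $f(X) = \sin(X)$ and normally distributed noise:

```r
# Minimal simulation of Y = f(X) + epsilon with a made-up f (here, sin)
set.seed(1)
n   <- 100
X   <- runif(n, 0, 2 * pi)            # predictor
f   <- function(x) sin(x)             # the (normally unknown) true f
eps <- rnorm(n, mean = 0, sd = 0.3)   # error term: mean zero, independent of X
Y   <- f(X) + eps                     # observed response
plot(X, Y)                            # data scatter...
curve(f, add = TRUE, col = "red")     # ...around the true f
```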
Two purposes of statistical learning
Prediction
Suppose that our estimate for $f$ is $\hat{f}$; then we have $\hat{Y} = \hat{f}(X)$, which is called a prediction. In some methods, such as neural networks, we can treat $\hat{f}$ as a black box, i.e. we don't care about the formula of $\hat{f}$ as long as we have a function in the program to calculate $\hat{Y}$ given $X$.
In general, $\hat{f}$ is not a perfect estimate of $f$; the error it introduces is called the reducible error. We can show that the difference between the prediction and the true response depends on two quantities, the reducible error and the irreducible error. Assume for a moment that both $\hat{f}$ and $X$ are fixed. Then

$$E(Y - \hat{Y})^2 = E[f(X) + \epsilon - \hat{f}(X)]^2 = [f(X) - \hat{f}(X)]^2 + \text{Var}(\epsilon)$$

where $[f(X) - \hat{f}(X)]^2$ is reducible (by making a better estimate $\hat{f}$) and $\text{Var}(\epsilon)$ is irreducible.
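One quick way to see this split is to simulate data where the true $f$ is known (a made-up $\sin$ again), so that even the perfect estimate $\hat{f} = f$ cannot beat the irreducible $\text{Var}(\epsilon)$:

```r
# Reducible vs. irreducible error on simulated data where the true f is known
set.seed(1)
X   <- runif(1000, 0, 2 * pi)
eps <- rnorm(1000, sd = 0.3)   # Var(eps) = 0.09 is the irreducible floor
Y   <- sin(X) + eps            # true f(X) = sin(X)
mean((Y - sin(X))^2)           # perfect estimate: ~0.09, irreducible error only
mean((Y - 0.5 * X)^2)          # crude linear guess: reducible + irreducible error
```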
Inference
Sometimes we want to investigate the formula of $\hat{f}$ (which is assumed to be a good estimate of $f$) to understand the relationship between the response and each predictor. Thus we cannot treat $\hat{f}$ as a black box. By looking at the exact form of $\hat{f}$, we may be able to eliminate some unimportant predictors and simplify the problem.
Some modeling is conducted for prediction, some for inference, some for both.
Estimation Methods
Parametric Methods
There are generally two steps:
- Choose model: Make an assumption about the formula of $f$ (e.g. that it is linear, $f(X) = \beta_0 + \beta_1 X_1 + \dots + \beta_p X_p$), leaving the coefficients (parameters) unknown.
- Train model: Choose the coefficients such that $\hat{f}(x_i) \approx y_i$ for every training observation $(x_i, y_i)$, i.e. every observation in the training data set.
The advantage of this class of methods is its simplicity: instead of estimating an entirely arbitrary function $f$, we restrict our work to a particular formula (e.g. a linear formula) and estimate only its coefficients. However, this approach is often inaccurate because the real-life problem may be more complicated than, say, what a linear model suggests.
Of course, we can always increase the complexity of our model in step 1, but we then face a dilemma: the more complex our formula is, the more prone it is to noise. Data is not always perfect either; overfitting the data means that our model follows the errors.
Parametric methods are suitable for inference purposes, because the formula is simple (easy to interpret) and inflexible (changing a few observations does not affect the formula significantly).
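As a rough sketch of the two steps above, assuming simulated data that really does come from a linear $f$, `lm()` in R performs step 2 (choosing the coefficients) by least squares:

```r
# Parametric approach: (1) assume f is linear, (2) estimate the coefficients
set.seed(2)
train   <- data.frame(x = runif(50, 0, 10))
train$y <- 3 + 2 * train$x + rnorm(50)      # data generated from a linear f plus noise
fit <- lm(y ~ x, data = train)              # step 2: least squares picks beta_0, beta_1
coef(fit)                                   # estimated coefficients, close to (3, 2)
predict(fit, newdata = data.frame(x = 4))   # the prediction f_hat(4)
```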
Non-parametric Methods
In this class, we do not make any assumption about the formula of $f$, but seek an estimate that gets as close to the data points as possible without being too rough or wiggly.
The advantage of this class of methods is that the set of possible shapes to fit the observations is larger. However, estimating $f$ may require estimating far more parameters than parametric methods do. Thus we need a very large number of observations in order to obtain an accurate estimate of $f$.
Non-parametric methods are suitable for prediction purposes, because they yield a closer fit at the cost of a complicated formula, which is not of our interest.
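As a sketch of this approach on simulated data, `loess()` in R makes no assumption about the formula of $f$ and fits the response locally instead; its `span` argument controls how rough or wiggly the fit is allowed to be:

```r
# Non-parametric approach: no formula assumed, loess() follows the data locally
set.seed(3)
train   <- data.frame(x = runif(200, 0, 2 * pi))
train$y <- sin(train$x) + rnorm(200, sd = 0.3)
fit <- loess(y ~ x, data = train, span = 0.5)   # smaller span = more flexible fit
predict(fit, newdata = data.frame(x = 1.5))     # close to sin(1.5) given enough data
```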
Throughout this series, I will use the term method to refer to a schema (e.g. linear regression, kNN) from which a particular configuration (model) of the schema will be derived (e.g. what the coefficients are, what the value of k in kNN is).
Supervised vs. Unsupervised Learning
Supervised methods are applied to a training data set where each set of predictors $x_i$ is associated with a response $y_i$, and we need to come up with an estimate function $\hat{f}$.
Unsupervised methods are applied to a training data set where we do not know the responses of the observations. In this case, we seek to understand the relationships between the variables or the observations.
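For instance, `kmeans()` in R (a sketch on made-up data with no response attached) groups observations purely by how close they are to each other:

```r
# Unsupervised sketch: no response y; k-means looks for structure among observations
set.seed(4)
obs <- rbind(matrix(rnorm(100, mean = 0), ncol = 2),   # one cloud of points
             matrix(rnorm(100, mean = 3), ncol = 2))   # another cloud of points
km  <- kmeans(obs, centers = 2)                        # group the observations into 2 clusters
table(km$cluster)                                      # roughly 50 observations per cluster
```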
Regression vs. Classification Problems
When the response $Y$ is quantitative, the problem is called a regression problem. Otherwise ($Y$ is qualitative), it is called a classification problem. Some methods suit one class of problems better than the other.
Whether the predictors are quantitative or qualitative is less important because if they are qualitative we usually encode the categories with numerical values.
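For example, `model.matrix()` in R turns a qualitative predictor into 0/1 dummy variables (a hypothetical toy data frame below):

```r
# Encoding a qualitative predictor as numbers (dummy variables)
df <- data.frame(color = factor(c("red", "green", "blue", "green")))
model.matrix(~ color, data = df)   # one 0/1 column per category, minus a baseline level
```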
Assessing model accuracy
For regression models
In order to evaluate the performance of a method on a given data set, we need to quantify the difference between the predicted response and the true response. The most commonly used measure in regression problems is the mean squared error (MSE), given by

$$\text{MSE} = \frac{1}{n}\sum_{i=1}^{n}\left(y_i - \hat{f}(x_i)\right)^2$$

where $(x_i, y_i)$ is a training observation and $\hat{f}$ is the estimated model.
We also distinguish the training MSE (the formula above) from the testing MSE, where the difference is that $(x_0, y_0)$ is a test observation, i.e. an observation that did not participate in training the model. We want to choose the model with the lowest testing MSE, not the lowest training MSE.
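A minimal sketch of the two quantities on simulated data, using a 5th-degree polynomial model chosen purely for illustration:

```r
# Training MSE vs. testing MSE for one model on simulated data
set.seed(5)
sim <- function(n) {
  x <- runif(n, 0, 2 * pi)
  data.frame(x = x, y = sin(x) + rnorm(n, sd = 0.3))
}
train <- sim(100)
test  <- sim(100)                                  # observations not used for fitting
fit   <- lm(y ~ poly(x, 5), data = train)          # a moderately flexible polynomial model
mean((train$y - predict(fit))^2)                   # training MSE
mean((test$y  - predict(fit, newdata = test))^2)   # testing MSE: the one we care about
```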
There is not a close relationship between the training MSE and the testing MSE. Some models may have a very low training MSE (by overfitting the data) but a poor testing MSE. As the flexibility (i.e. complexity) increases, the training MSE decreases monotonically, while the testing MSE initially decreases to a certain point and then increases again (a U-shaped curve). This is a fundamental property of statistical learning, regardless of the data set and regardless of the method being used.
We also consider two properties of a learning method:
- Variance: The amount by which the model would change if we estimated it using a different training data set. High variance means that a small change in the training data may result in a completely different shape of the model.
- (squared) Bias: The (squared) difference between the true $f(x_0)$ and the model's average prediction $E[\hat{f}(x_0)]$ over many training sets. High bias means that a model may not follow the data points closely.
Example: The linear regression model is a straight line, which has low variance and high bias
We can show that the expected test MSE of a method depends on its variance and its bias. In the so-called variance-bias decomposition below, assume that $x_0$ and $\epsilon$ are fixed, and that $E(y_0 - \hat{f}(x_0))^2$ is the average test MSE we would obtain if we estimated $f$ with the method using many different training data sets.

$$E\left(y_0 - \hat{f}(x_0)\right)^2 = \text{Var}(\hat{f}(x_0)) + \left[\text{Bias}(\hat{f}(x_0))\right]^2 + \text{Var}(\epsilon) = \text{Var}(\hat{f}(x_0)) + \left[\text{Bias}(\hat{f}(x_0))\right]^2$$

Note that we omit the term $\text{Var}(\epsilon)$ at the last step because $\epsilon$ is fixed by assumption and thus its variance is zero.
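The decomposition can be checked by simulation: refit the same method (here a deliberately inflexible linear model) on many simulated training sets and look at the spread and the average of $\hat{f}(x_0)$ at a fixed $x_0$. The true $f$ and all numbers below are made up for illustration:

```r
# Variance and (squared) bias at a fixed point x0, estimated by refitting
# the same method on many different simulated training sets
set.seed(6)
x0 <- 2
f  <- function(x) sin(x)                       # made-up true f
fits_at_x0 <- replicate(500, {
  x <- runif(50, 0, 2 * pi)
  y <- f(x) + rnorm(50, sd = 0.3)
  fit <- lm(y ~ x)                             # an inflexible (high-bias) method
  predict(fit, newdata = data.frame(x = x0))   # f_hat(x0) for this training set
})
var(fits_at_x0)                # variance of the method at x0
(mean(fits_at_x0) - f(x0))^2   # squared bias at x0
# Their sum is the expected test MSE at x0 under the assumption above (epsilon fixed).
```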
In general, the more flexible a method is, the lower its bias and the higher its variance. We want to choose a moderate level of flexibility such that the method has both a small bias and a small variance, which results in the least expected test MSE. The opposing behavior of bias and variance is called the Bias-Variance Tradeoff.
Flexibility | low | moderate | high
---|---|---|---
Expected Test MSE | high | lowest | high (U-shaped)
Bias | high | moderate | low (toward 0)
Variance | low (toward 0) | moderate | high
For classification models
In a classification setting, the response and the predictions are qualitative, thus we need a different approach from what we did in a regression setting. We can compute the error rate, i.e. the proportion of wrong predictions when compared to the responses:

$$\frac{1}{n}\sum_{i=1}^{n} I(y_i \ne \hat{y}_i)$$

where the indicator variable $I(y_i \ne \hat{y}_i)$ equals 1 if the prediction is wrong and 0 otherwise.
We also distinguish the training error rate and the testing error rate, which are computed against the training data set and the testing data set respectively. A good classifier (classification model) is one with the smallest testing error rate.
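Computing an error rate in R is just the proportion of mismatches (hypothetical toy classes below):

```r
# Error rate: the proportion of wrong class predictions
truth <- factor(c("A", "A", "B", "B", "A"))   # hypothetical true classes
pred  <- factor(c("A", "B", "B", "B", "A"))   # hypothetical predicted classes
mean(pred != truth)                           # error rate = 1/5 = 0.2
```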
Bayes classifier
(Not to be confused with Naïve Bayes Classifier)
This is a very simple and effective classifier. It relies on the conditional probability $\Pr(Y = j \mid X = x_0)$, i.e. the probability of the class being $j$ given the predictor value $x_0$. Basically, we assign each observation with predictor $x_0$ to the class $j$ for which this probability is the largest:

$$\hat{y}_0 = \arg\max_j \Pr(Y = j \mid X = x_0)$$
We also define the Bayes error rate, which differs from the standard error rate in the sense that the indicator variable $I(y_i \ne \hat{y}_i)$ is either 1 or 0, while with the Bayes error rate we replace that term with the probability that the prediction is wrong, $1 - \max_j \Pr(Y = j \mid X = x_i)$.
However, as you have probably realized, this model is just too good to be true. Indeed, in real-life applications we do not know the probability distribution of the classes given the predictors, thus applying the Bayes classifier is impossible. Many methods have nevertheless attempted to approximate this conditional probability of $Y$ given $X$.
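On simulated data, however, we can choose the conditional probabilities ourselves and apply the Bayes classifier directly; the logistic-shaped $p(x)$ below is a made-up example:

```r
# Bayes classifier on simulated data where P(Y = 1 | X = x) = p(x) is known:
# assign class 1 whenever p(x) > 0.5
set.seed(7)
p <- function(x) 1 / (1 + exp(-(x - 2)))   # made-up known conditional probability
x <- runif(1000, 0, 4)
y <- rbinom(1000, size = 1, prob = p(x))   # classes drawn from that distribution
bayes_pred <- ifelse(p(x) > 0.5, 1, 0)     # Bayes classifier: pick the likelier class
mean(bayes_pred != y)                      # empirical estimate of the Bayes error rate
```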
K-Nearest Neighbor (KNN)
KNN is one of the methods that use the idea of the Bayes classifier by approximating the conditional probability of $Y$ given $X$. For an observation $x_0$, it looks for the $K$ nearest training observations, gathers the classes of these neighbors, and assigns to $x_0$ the most popular class in its neighborhood. The larger $K$ is, the less flexible the method is. When $K = 1$, the method becomes so flexible that it overfits the data: while the training error rate is 0, the test error rate is very likely to be high.
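A short sketch with `knn()` from the `class` package (which ships with standard R installations), on made-up noisy two-dimensional data, comparing a very flexible $K = 1$ fit with a smoother $K = 15$ fit:

```r
# KNN: classify a point by majority vote among its K nearest training neighbors
library(class)
set.seed(8)
make_data <- function(n) {
  x <- matrix(rnorm(2 * n), ncol = 2)                               # two numeric predictors
  y <- ifelse(x[, 1] + x[, 2] + rnorm(n, sd = 0.5) > 0, "A", "B")   # noisy class labels
  list(x = x, y = factor(y))
}
train <- make_data(200)
test  <- make_data(200)
pred_k1  <- knn(train$x, test$x, cl = train$y, k = 1)    # K = 1: maximally flexible
pred_k15 <- knn(train$x, test$x, cl = train$y, k = 15)   # larger K: smoother, less flexible
mean(pred_k1  != test$y)                                 # test error rate for K = 1
mean(pred_k15 != test$y)                                 # test error rate for K = 15
```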