01. Introduction (Review) and Several Preliminary Statistical Methods
Some basics of statistical learning; least squares and k nearest neighbors; statistical decision theory; local methods in high dimensions
Review
Skeleton notes
Abbreviations
- MSE: mean squared error
- EPE: expected prediction error
- RSS: residual sum of squares
Notations
Fonts:
- Vectors and scalars are denoted by an italic math font, e.g. $X$.
- Components of a vector are denoted by subscripts, e.g. $X_i$.
- Matrices are denoted by a bold math font, e.g. $\mathbf X$.
Symbols
- $X$ for input variables;
- $Y$ for quantitative output;
- $G$ for qualitative output;
- a hat $\hat{\ }$ for predicted values, e.g. $\hat Y$.
Least Squares and Nearest Neighbors
Least Squares
Least-squares (linear) model:
$$ \hat Y = X^T \hat \beta. $$
Residual sum of squares (RSS):
$$ \mathrm{RSS}(\beta) = (\mathbf y - \mathbf X \beta)^{\mathrm T} (\mathbf y - \mathbf X \beta). $$
The parameters we want are those that minimize the RSS, which requires
$$ \frac{\partial\, \mathrm{RSS}}{\partial \beta} = -2\, \mathbf X^{\mathrm T} (\mathbf y - \mathbf X \beta) = 0. $$
If $\mathbf X^{\mathrm T} \mathbf X$ is nonsingular, this is solved uniquely by $\hat\beta = (\mathbf X^{\mathrm T} \mathbf X)^{-1} \mathbf X^{\mathrm T} \mathbf y$.
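A minimal sketch of this fit in Python (numpy only; the data, noise level, and intercept column are my own choices for illustration):

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic data: y = 1 + 2*x + noise (all constants are arbitrary choices)
n = 100
x = rng.uniform(-1.0, 1.0, size=n)
y = 1.0 + 2.0 * x + rng.normal(scale=0.3, size=n)

# Design matrix with a column of ones for the intercept
X = np.column_stack([np.ones(n), x])

# Normal equations: (X^T X) beta = X^T y
# (np.linalg.lstsq would be the numerically safer choice in practice)
beta_hat = np.linalg.solve(X.T @ X, X.T @ y)

y_hat = X @ beta_hat                   # fitted values
rss = float(np.sum((y - y_hat) ** 2))  # residual sum of squares
print(beta_hat, rss)
```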
Nearest-Neighbor
- For an input point $x$, compute the Euclidean distance from $x$ to each training input $x_j$.
- Choose the $k$ nearest neighbors based on these distances.
- The prediction is the average of the outputs of the selected neighbors: $$ \hat Y(x) = \frac{1}{k} \sum_{x_i \in N_k(x)} y_i. $$
Computing distances requires choosing a metric. The book uses the Euclidean metric in its examples. Another metric that can be inspiring is the one on hyperbolic space; I talked about this in our reading club. {: .notes--info}
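A minimal sketch of the $k$-nearest-neighbor average in Python (numpy only; the function name, `k`, the training arrays, and the query point are all invented for illustration):

```python
import numpy as np

def knn_predict(x_query, X_train, y_train, k=5):
    """Average the outputs of the k training points closest to x_query."""
    # Euclidean distances from the query point to every training input
    dists = np.linalg.norm(X_train - x_query, axis=1)
    # Indices of the k nearest neighbors
    nearest = np.argsort(dists)[:k]
    return y_train[nearest].mean()

# Toy usage with synthetic 2-D inputs
rng = np.random.default_rng(1)
X_train = rng.normal(size=(200, 2))
y_train = X_train[:, 0] + 0.5 * X_train[:, 1] + rng.normal(scale=0.1, size=200)
print(knn_predict(np.array([0.2, -0.3]), X_train, y_train, k=15))
```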
For Which Scenario
- Least squares: suited to data where each class comes from a single, roughly Gaussian distribution;
- Nearest-Neighbor: suited to data generated from a mixture of Gaussians.
A mixture of Gaussians can be described by a generative model. I am not really sure what that means; it seems that the data are generated by first randomly picking one of several Gaussians (whose parameters may themselves be drawn at random) and then sampling a point from it. {: .notes--warning}
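A minimal sketch of such a generative process, loosely following the book's simulation (component means drawn from a Gaussian, then each observation drawn from a tighter Gaussian centered at a randomly chosen mean; the specific constants are my own choices):

```python
import numpy as np

rng = np.random.default_rng(2)

# Step 1: draw 10 component means from N((1, 0), I)
means = rng.normal(loc=[1.0, 0.0], scale=1.0, size=(10, 2))

# Step 2: for each observation, pick a mean at random,
# then sample from a Gaussian centered there with smaller variance
n = 100
picks = rng.integers(0, 10, size=n)
X = means[picks] + rng.normal(scale=np.sqrt(1 / 5), size=(n, 2))

print(X.shape)  # (100, 2) points drawn from the mixture
```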
Statistical Decision Theory
- Given input $X$ and output $Y$;
- Following a joint distribution $\mathrm{Pr}(X,Y)$;
- We look for a function $f$ that predicts the output from the input, i.e., $\hat Y = f(X)$;
- The quality of a prediction is measured by a loss function, e.g. the squared error loss $L(Y,\hat Y) = (Y-\hat Y)^2$.
- With the joint distribution, we can compute the expected prediction error (EPE), $$ \mathrm{EPE}(f) = E[ ( Y - f(X) )^2 ] = \int (y - f(x))^2 \, \mathrm{Pr}(dx, dy). $$
- The book derived that the best prediction is $f(x) = E(Y\vert X=x)$.
- Different loss functions lead to different EPE’s.
Question: Can we simply work out the probability distribution and read off the prediction function from it? The conclusion says the best prediction of $Y$ is the conditional mean; is that effectively solving for $Y$ from the probability distribution? {: .notes--warning}
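A short sketch of the book's argument that the conditional mean is the minimizer: condition on $X$ and minimize pointwise,

$$ \mathrm{EPE}(f) = E_X\, E_{Y\vert X}\!\left[ (Y - f(X))^2 \,\vert\, X \right], $$

so for each $x$ it suffices to choose the constant $c$ that minimizes $E_{Y\vert X}[(Y - c)^2 \vert X = x]$; setting the derivative with respect to $c$ to zero gives $c = E(Y \vert X = x)$, the conditional mean.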
Nearest-Neighbor
- The best prediction under the squared-error EPE is the conditional mean, Eq. 2.13;
- Both $k$-nearest-neighbor and linear regression fit into this framework;
- Additive models: replace the linear term $x^{\mathrm T}\beta$ with a sum of coordinate-wise functions $\sum_j f_j(X_j)$; the additive structure is retained.
- With the absolute-error loss $E\lvert Y - f(X)\rvert$ instead, the best prediction is the conditional median.
- A categorical output $G$ also follows the same paradigm, but with a different loss function.
- For the categorical case the loss can be represented by a $K \times K$ matrix, where $K$ is the number of categories: it has to be a two-index array because we must specify a penalty for each predicted class given each true class. The entry $L(k, \ell)$ is the price paid for predicting class $\ell$ when the truth is class $k$, and the diagonal is zero. (A small numerical sketch is given after this list.)
- Using no neighbors beyond the point itself classifies the sample data exactly, but no expectation is actually taken at each point, since a given training set usually contains only one value there;
- $k$-nearest-neighbor assumes that the expectation over a small neighborhood of a point approximates the expectation at the point itself, under the corresponding distribution.
- In the Monte Carlo method, the calculation of a volume in high dimensions converges very slowly, because the number of sampling points required grows multiplicatively with the dimension. The same thing may happen here: $k$-nearest-neighbor is basically an averaging procedure over a local volume, and it requires a large number of sample points to perform a fairly accurate average.
- Linear regression basically assumes a first-order (linear) expansion of the approximator, $f(x) \approx x^{\mathrm T}\beta$.
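A small numerical sketch of the loss-matrix idea (the class probabilities and loss values below are invented for illustration): given estimated class probabilities at a point $x$, the best prediction is the class that minimizes the expected loss.

```python
import numpy as np

# Hypothetical posterior probabilities Pr(G = k | X = x) for K = 3 classes
probs = np.array([0.5, 0.3, 0.2])

# Loss matrix L[k, l]: price of predicting class l when the truth is class k.
# Zero on the diagonal; here misclassifying class 2 is penalized more heavily.
L = np.array([
    [0.0, 1.0, 1.0],
    [1.0, 0.0, 1.0],
    [5.0, 5.0, 0.0],
])

# Expected loss of predicting class l: sum_k Pr(k | x) * L[k, l]
expected_loss = probs @ L
print(expected_loss, int(np.argmin(expected_loss)))

# With the zero-one loss, this reduces to picking the most probable class
zero_one = 1.0 - np.eye(3)
print(int(np.argmin(probs @ zero_one)), int(np.argmax(probs)))  # these agree
```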
Local Methods in High Dimensions
- Curse of dimensionality: to capture a fraction $r$ of the unit-cube volume in $p$ dimensions, a sub-cube needs edge length $e_p(r) = r^{1/p}$ (see the sketch after this list). An extreme example: in $p = 10$ dimensions, capturing a volume fraction of only $10^{-10}$ already requires edge length $(10^{-10})^{1/10} = 0.1$.
- Keeping the neighborhood volume small means averaging over very few observations, which leads to high variance.
- Homogeneous (uniform) sampling doesn't help in high dimensions, since most points fall near the boundary of the sample space.
- A reasonably dense sample therefore requires a huge number of points in high dimensions.
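A minimal sketch of how quickly the required edge length $e_p(r)$ grows with the dimension $p$ (the dimensions and fractions tabulated here are my own choices):

```python
def edge_length(r, p):
    """Edge length of a sub-cube covering a fraction r of the unit cube in p dimensions."""
    return r ** (1.0 / p)

for p in (1, 2, 10, 100):
    row = ", ".join(f"e_{p}({r}) = {edge_length(r, p):.3f}" for r in (0.01, 0.1))
    print(f"p = {p:>3}: {row}")
```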
Statistical Models, Supervised Learning and Function Approximation
Joint Distribution
I didn't quite get the point of this subsection. It seems the authors are discussing whether it is proper to assume that the relation between input and output is deterministic, as opposed to the additive-error model $Y = f(X) + \varepsilon$. {: .notes--warning}
Supervised Learning
- learn by example.
Function Approximation
- Linear model;
- Basis-function expansion (Eq. 2.30): $f_\theta(x) = \sum_{k=1}^K h_k(x)\theta_k$;
- Examples of such basis functions are polynomial and trigonometric (Fourier) expansions, sigmoid transformations, etc.
- Learning through minimizing sum of squares (RSS), or maximum likelihood estimation, etc.
- Maximum likelihood estimation:
- Log-likelihood: $L(\theta) = \sum_{i=1}^N \log \mathrm{Pr}_\theta (y_i)$;
- Maximize it (choose $\theta$ so that the "probability of the observed sample is largest").
- Under an additive-error model with Gaussian errors, minimizing RSS is equivalent to maximum likelihood estimation (Eq. 2.35); a short sketch follows the list.
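A short sketch of that equivalence, assuming the additive-error model $Y = f_\theta(X) + \varepsilon$ with $\varepsilon \sim N(0, \sigma^2)$: the log-likelihood of the sample is

$$ L(\theta) = \sum_{i=1}^N \log\!\left[ \frac{1}{\sqrt{2\pi}\,\sigma} \exp\!\left( -\frac{(y_i - f_\theta(x_i))^2}{2\sigma^2} \right) \right] = -\frac{N}{2}\log(2\pi\sigma^2) - \frac{1}{2\sigma^2} \sum_{i=1}^N \big(y_i - f_\theta(x_i)\big)^2, $$

so maximizing $L(\theta)$ over $\theta$ is the same as minimizing $\mathrm{RSS}(\theta) = \sum_i (y_i - f_\theta(x_i))^2$.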