Cross-Validation: Compares different types of Machine Learning methods and gives insight into how each will actually perform on unseen data.
In Machine Learning, we will:
1. Train the Machine Learning model on one portion of the data
2. Test the Machine Learning model on the remaining, held-out portion
Cross-validation repeats this train/test split several times so that every sample is used for both training and testing, giving a more reliable estimate of performance than a single split.
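The train/test comparison above can be sketched with scikit-learn's `cross_val_score` (the library, the synthetic data, and the choice of 5 folds are all assumptions for illustration, not part of the original notes):

```python
# Sketch: k-fold cross-validation to compare two model types.
# Assumes scikit-learn is installed; the data here is synthetic.
import numpy as np
from sklearn.linear_model import LinearRegression, Ridge
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(0)
X = rng.normal(size=(100, 5))          # n = 100 samples, p = 5 features
y = X @ np.array([1.0, 0.5, 0.0, 0.0, 0.0]) + rng.normal(scale=0.5, size=100)

# Each model is trained and tested on 5 different train/test splits;
# the 5 R^2 scores summarize how the model performs on unseen folds.
for model in (LinearRegression(), Ridge(alpha=1.0)):
    scores = cross_val_score(model, X, y, cv=5, scoring="r2")
    print(type(model).__name__, scores.mean())
```

Comparing the mean scores is the "insight regarding actual performance": whichever model scores better across the held-out folds is expected to generalize better.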
Ridge Regression deliberately introduces a small amount of bias into multiple regression by shrinking the coefficients, trading a little bias for a large reduction in variance to create a better-performing model on new data.
n = # of samples
p = # of features
λ = tuning parameter/penalty (also called alpha, the regularization parameter, or the penalty term)
The ridge cost function asks for the coefficients β that minimize the quantity in parentheses:

J(β) = (1/n) Σᵢ (yᵢ − ŷᵢ)² + λ Σⱼ βⱼ²

Note that we do not solve for λ here; λ is a hyperparameter, usually chosen by cross-validation. As λ increases, the fitted coefficients shrink toward zero, so the regression line flattens. Thus λ (the penalty) controls how strongly the model is regularized.
Penalty λ = 0: J (the Ridge Regression cost function) reduces to MSE (the ordinary Linear Regression cost function)
Penalty λ → ∞: the fitted line becomes horizontal at the mean of the data (all weights → 0; only the unpenalized intercept remains)
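Both limiting cases can be checked numerically. In this sketch (scikit-learn and the synthetic data are assumptions), a tiny alpha behaves like plain linear regression, while a huge alpha drives every weight toward 0 so predictions collapse to roughly the mean of y:

```python
# Sketch: ridge weights shrink toward 0 as the penalty lambda (alpha) grows,
# and predictions flatten toward the mean of y. Assumes scikit-learn.
import numpy as np
from sklearn.linear_model import Ridge

rng = np.random.default_rng(1)
X = rng.normal(size=(50, 3))
y = X @ np.array([2.0, -1.0, 0.5]) + rng.normal(scale=0.3, size=50)

for alpha in (0.01, 10.0, 1e6):
    model = Ridge(alpha=alpha).fit(X, y)
    # With a huge alpha, every coefficient is pushed near 0 and
    # predictions collapse to roughly the intercept (the mean of y).
    print(alpha, np.abs(model.coef_).max())
```

The intercept is not penalized, which is why the λ → ∞ limit is a horizontal line through the mean rather than a line through the origin.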