RIDGE REGRESSION

Ridge regression is a model tuning technique used to analyze data that suffer from multicollinearity; it performs L2 regularization. When multicollinearity is a problem, the least-squares estimates are unbiased, but their variances are large, so the predicted values can end up far from the actual values.
The cost function for ridge regression:
min(||Y - Xθ||^2 + λ||θ||^2)
Lambda (λ) is the penalty term. In the Ridge function, its value is supplied through the alpha parameter, so by adjusting alpha we can control the strength of the penalty. The greater the value of alpha, the greater the penalty, and the more the magnitudes of the coefficients are reduced.
It shrinks the parameters, which is why it is used to mitigate multicollinearity.
It reduces model complexity by shrinking the coefficients.
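As a quick illustration of this shrinking effect, here is a minimal sketch, assuming scikit-learn and a small synthetic dataset with two nearly collinear features (all names and values below are illustrative):

import numpy as np
from sklearn.linear_model import Ridge

# Two highly correlated (collinear) features.
rng = np.random.RandomState(0)
x1 = rng.normal(size=100)
x2 = x1 + rng.normal(scale=0.01, size=100)  # almost identical to x1
X = np.column_stack([x1, x2])
y = 3 * x1 + rng.normal(scale=0.1, size=100)

# Larger alpha -> larger penalty -> smaller coefficient magnitudes.
for alpha in [0.01, 1.0, 10.0, 100.0]:
    model = Ridge(alpha=alpha).fit(X, y)
    print(alpha, model.coef_)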
For any regression-based machine learning model, the usual regression equation forms the base, which is written as:
Y = XB + e
where Y is the dependent variable, X represents the independent variables, B is the vector of regression coefficients to be estimated, and e represents the residual errors.
When we add the lambda penalty to this equation, we account for variance that is not captured by the general model. Once the data have been prepared and found suitable for L2 regularization, the steps below can be followed.
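Concretely, minimizing the cost function above yields the closed-form solution B = (X^T X + λI)^(-1) X^T Y, which is the ordinary least-squares normal equation with λ added to the diagonal. The following is a minimal sketch of this, assuming NumPy and scikit-learn (fit_intercept=False is used only so the two results match, and the data here are illustrative):

import numpy as np
from sklearn.linear_model import Ridge

rng = np.random.RandomState(42)
X = rng.normal(size=(50, 3))
y = X @ np.array([2.0, -1.0, 0.5]) + rng.normal(scale=0.1, size=50)

lam = 1.0
# Closed-form ridge solution: B = (X^T X + lambda * I)^(-1) X^T y
B = np.linalg.solve(X.T @ X + lam * np.eye(X.shape[1]), X.T @ y)

# scikit-learn's Ridge without an intercept matches the formula above.
model = Ridge(alpha=lam, fit_intercept=False).fit(X, y)

print(B)            # coefficients from the closed form
print(model.coef_)  # should agree closely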
THE MAIN CODE OF RIDGE REGRESSION
Full Code Of Implementing Ridge Regression
Ridge regression makes the same assumptions as linear regression: linearity, constant variance, and independence. However, because ridge regression does not provide confidence limits, the errors need not be assumed to be normally distributed. Let's look at an example of a linear regression problem and see how ridge regression can help us reduce the error.
from sklearn.linear_model import Ridge
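from sklearn.datasets import load_diabetes
from sklearn.model_selection import train_test_split
from sklearn.metrics import mean_squared_error

# Minimal sketch of a full workflow; the dataset and split used here are
# illustrative assumptions, not necessarily the article's original data.
X, y = load_diabetes(return_X_y=True)
X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.3, random_state=0)

# alpha corresponds to the penalty term lambda in the cost function above.
ridge = Ridge(alpha=1.0)
ridge.fit(X_train, y_train)

y_pred = ridge.predict(X_test)
print("Coefficients:", ridge.coef_)
print("Test MSE:", mean_squared_error(y_test, y_pred))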