- L1 normalization is the Manhattan-norm technique. We use it to normalize the data.
- It transforms the data so that the sum of the absolute values of a vector (such as a column in a dataset) equals 1.
- L1 normalization is useful when we deal with sparse data (data with many zeros).
- It helps preserve the sparsity of the data, which is often desirable in high-dimensional scenarios such as text analysis or image processing.
- Because it works with absolute values rather than squared values, L1 normalization is less sensitive to outliers; when features contain many outliers, it prevents certain features from dominating.
- L1 normalization is often used in conjunction with techniques like Lasso regression.
- Lasso stands for Least Absolute Shrinkage and Selection Operator. It incorporates an L1 penalty.
- This penalty shrinks the coefficients of some features exactly to zero, which automatically removes those features from the model.
- Because this feature selection is built into the model fitting itself, L1 regularization is called an embedded method.
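As a sketch of the coefficient-zeroing behaviour described above, here is a small Lasso fit on synthetic data (scikit-learn's `Lasso`; the `alpha` value and the data are illustrative, not from the house-price example):

```python
import numpy as np
from sklearn.linear_model import Lasso

rng = np.random.default_rng(0)
X = rng.normal(size=(100, 5))
# Only the first two features actually drive the response.
y = 3 * X[:, 0] - 2 * X[:, 1] + rng.normal(scale=0.1, size=100)

# The L1 penalty shrinks the coefficients of the three irrelevant
# features exactly to zero, dropping them from the model.
model = Lasso(alpha=0.5).fit(X, y)
print(model.coef_)
```

The surviving coefficients are also shrunk toward zero by roughly `alpha`, which is the "shrinkage" part of the acronym.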
Formula used:

x_normalized = x / (|x1| + |x2| + ... + |xn|)

Each value in the vector is divided by the sum of the absolute values of that vector, so the normalized values sum to 1 in absolute value.
Let us open the houseprice.csv data and check how the features are normalized.
Load libraries and read data
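A minimal sketch of this step. In the notebook the frame would come from `pd.read_csv('houseprice.csv')`; here a tiny inline frame with illustrative column names and values stands in so the snippet runs on its own:

```python
import pandas as pd

# In the notebook: df = pd.read_csv('houseprice.csv')
# Stand-in frame (illustrative column names and values):
df = pd.DataFrame({
    'sqft_living': [1180, 2570, 770, 1960, 1680, 5420],
    'bedrooms':    [3, 3, 2, 4, 3, 4],
    'bathrooms':   [1.00, 2.25, 1.00, 3.00, 2.00, 4.50],
    'price':       [221900, 538000, 180000, 604000, 510000, 1225000],
})
print(df.head())
print(df.shape)
```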

The dataset contains 9 explanatory features and one dependent variable, price. The total number of records is 21,613.
Data Info:
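A sketch of this step (the stand-in frame from above replaces the real houseprice.csv data):

```python
import pandas as pd

# Stand-in for the loaded houseprice.csv frame (illustrative values)
df = pd.DataFrame({
    'sqft_living': [1180, 2570, 770],
    'price':       [221900, 538000, 180000],
})
df.info()  # column dtypes, non-null counts, memory usage
```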

Descriptive Statistics
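A sketch of the descriptive-statistics step on the same stand-in data:

```python
import pandas as pd

# Stand-in for the loaded houseprice.csv frame (illustrative values)
df = pd.DataFrame({
    'sqft_living': [1180, 2570, 770],
    'price':       [221900, 538000, 180000],
})
stats = df.describe()  # count, mean, std, min, quartiles, max per column
print(stats)
```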

Check if any Null value is present
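A sketch of the null-value check (again on stand-in data):

```python
import pandas as pd

# Stand-in for the loaded houseprice.csv frame (illustrative values)
df = pd.DataFrame({
    'sqft_living': [1180, 2570, 770],
    'price':       [221900, 538000, 180000],
})
nulls = df.isnull().sum()  # per-column count of missing values
print(nulls)
```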

Separate the response feature and explanatory features
Draw regression plots for each feature against the response feature
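These two steps can be sketched as follows, with a stand-in frame in place of houseprice.csv and seaborn's `regplot` for the per-feature regression plots:

```python
import matplotlib
matplotlib.use('Agg')  # headless backend so the sketch runs without a display
import matplotlib.pyplot as plt
import pandas as pd
import seaborn as sns

# Stand-in for houseprice.csv (illustrative values)
df = pd.DataFrame({
    'sqft_living': [1180, 2570, 770, 1960, 1680, 5420],
    'bedrooms':    [3, 3, 2, 4, 3, 4],
    'price':       [221900, 538000, 180000, 604000, 510000, 1225000],
})

y = df['price']               # response feature
X = df.drop(columns='price')  # explanatory features

# One regression plot per explanatory feature against price
fig, axes = plt.subplots(1, len(X.columns), figsize=(10, 3))
for ax, col in zip(axes, X.columns):
    sns.regplot(x=X[col], y=y, ax=ax)
fig.tight_layout()
```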




Correlation heatmap
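A sketch of the heatmap step, drawing the pairwise Pearson correlations of the stand-in frame:

```python
import matplotlib
matplotlib.use('Agg')  # headless backend for the sketch
import matplotlib.pyplot as plt
import pandas as pd
import seaborn as sns

# Stand-in for houseprice.csv (illustrative values)
df = pd.DataFrame({
    'sqft_living': [1180, 2570, 770, 1960],
    'bedrooms':    [3, 3, 2, 4],
    'price':       [221900, 538000, 180000, 604000],
})

# Pairwise Pearson correlations, drawn as an annotated heatmap
ax = sns.heatmap(df.corr(), annot=True, cmap='coolwarm', vmin=-1, vmax=1)
plt.tight_layout()
```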


Linear Regression using statsmodels.api



Normalize the data using the L1 normalization technique
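A sketch using scikit-learn's `normalize`. Note that `axis=0` is needed to match the column-wise description above; sklearn's default (`axis=1`) normalizes each row instead:

```python
import numpy as np
import pandas as pd
from sklearn.preprocessing import normalize

# Stand-in for houseprice.csv (illustrative values)
df = pd.DataFrame({
    'sqft_living': [1180, 2570, 770, 1960],
    'bedrooms':    [3, 3, 2, 4],
    'price':       [221900, 538000, 180000, 604000],
})

# axis=0 -> each column is divided by the sum of its absolute values,
# so every column's absolute values sum to 1 after scaling.
df_norm = pd.DataFrame(normalize(df, norm='l1', axis=0), columns=df.columns)
print(df_norm.abs().sum())  # 1.0 for every column
```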



Separate the explanatory features and response feature of the normalized data
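A sketch of this step; the normalization from the previous step is repeated inline so the snippet stands on its own:

```python
import pandas as pd
from sklearn.preprocessing import normalize

# Stand-in for houseprice.csv (illustrative values)
df = pd.DataFrame({
    'sqft_living': [1180, 2570, 770, 1960],
    'bedrooms':    [3, 3, 2, 4],
    'price':       [221900, 538000, 180000, 604000],
})
df_norm = pd.DataFrame(normalize(df, norm='l1', axis=0), columns=df.columns)

y_norm = df_norm['price']               # response feature (normalized)
X_norm = df_norm.drop(columns='price')  # explanatory features (normalized)
print(X_norm.shape, y_norm.shape)
```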

Draw regression plots for each feature against the response feature of the normalized data




Correlation Heatmap for the normalized data
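Worth noting: Pearson correlation is invariant under positive per-column scaling, and column-wise L1 normalization is exactly such a scaling when the data are non-negative (the divisor, the column's absolute sum, is positive). So this heatmap comes out identical to the one for the raw data. A sketch verifying that on stand-in data:

```python
import numpy as np
import pandas as pd
from sklearn.preprocessing import normalize

# Stand-in for houseprice.csv (illustrative values)
df = pd.DataFrame({
    'sqft_living': [1180, 2570, 770, 1960],
    'bedrooms':    [3, 3, 2, 4],
    'price':       [221900, 538000, 180000, 604000],
})
df_norm = pd.DataFrame(normalize(df, norm='l1', axis=0), columns=df.columns)

# Correlations are unchanged by per-column positive scaling
print(np.allclose(df.corr(), df_norm.corr()))  # True
```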


Linear Regression using OLS




