How to prevent overfitting in Machine Learning

YoungJoon Suh · February 22, 2023

Detecting overfitting is useful, but it doesn't solve the problem. Fortunately, you have several options to try.
Here are a few of the most popular solutions for overfitting.
1. Cross-validation
Cross-validation is a powerful preventative measure against overfitting.
The idea is clever: Use your initial training data to generate multiple mini train-test splits. Use these splits to tune your model.
In standard k-fold cross-validation, we partition the data into k subsets, called folds. Then, we iteratively train the algorithm on k-1 folds while using the remaining fold as the test set (called the "holdout fold").
Cross-validation allows you to tune hyperparameters with only your original training set. This allows you to keep your test set as a truly unseen dataset for selecting your final model.
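The k-fold procedure described above can be sketched with scikit-learn (the `KFold` and `cross_val_score` helpers and the iris dataset are illustrative choices, not part of this post):

```python
# Minimal k-fold cross-validation sketch with scikit-learn.
from sklearn.datasets import load_iris
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import KFold, cross_val_score

X, y = load_iris(return_X_y=True)
model = LogisticRegression(max_iter=1000)

# Partition the data into k = 5 folds; each fold serves once
# as the holdout fold while the other 4 are used for training.
cv = KFold(n_splits=5, shuffle=True, random_state=0)
scores = cross_val_score(model, X, y, cv=cv)

print(scores)         # one accuracy score per holdout fold
print(scores.mean())  # average generalization estimate
```

Because hyperparameters are tuned against these fold scores, the real test set stays untouched until final model selection.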
2. Train with more data: Collecting more training examples helps the model separate signal from noise, although adding more noisy or irrelevant data will not help.
3. Remove features: Dropping irrelevant or redundant input features reduces model complexity and gives the model fewer opportunities to fit noise.
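One simple way to drop weak features is univariate selection; a sketch using scikit-learn's `SelectKBest` (an illustrative choice, not the only approach):

```python
# Keep only the k features with the strongest univariate
# relationship to the target, discarding the rest.
from sklearn.datasets import load_iris
from sklearn.feature_selection import SelectKBest, f_classif

X, y = load_iris(return_X_y=True)          # iris: 150 samples, 4 features
selector = SelectKBest(f_classif, k=2)     # keep the 2 best features
X_reduced = selector.fit_transform(X, y)

print(X.shape, X_reduced.shape)            # feature count drops from 4 to 2
```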
4. Early stopping: Halt training as soon as performance on a validation set stops improving, before the model begins to memorize the training data. This method is generally used in deep learning, where models are trained iteratively.
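A minimal sketch of the patience-based early stopping rule described above, in plain Python (the function name and loss values are hypothetical):

```python
def early_stopping_epoch(val_losses, patience=3):
    """Return the epoch at which training stops: either when the
    validation loss has not improved for `patience` epochs, or at
    the last epoch if no such point is reached."""
    best = float("inf")
    best_epoch = 0
    for epoch, loss in enumerate(val_losses):
        if loss < best:
            best, best_epoch = loss, epoch      # new best: keep training
        elif epoch - best_epoch >= patience:
            return epoch                        # no improvement: stop early
    return len(val_losses) - 1

# Validation loss improves until epoch 2, then drifts upward,
# so training is cut off rather than continuing to overfit.
losses = [1.0, 0.8, 0.7, 0.72, 0.71, 0.73, 0.74]
print(early_stopping_epoch(losses))  # → 5
```

In practice frameworks provide this as a callback (e.g. an early-stopping hook that monitors validation loss), but the logic is the same.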
5. Regularization: Add a penalty on model complexity (for example, the L1 or L2 norm of the weights) to the loss function, discouraging the model from fitting noise. This is a standard technique across machine learning.
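A small sketch of the L2 (ridge) penalty shrinking coefficients, using scikit-learn on synthetic data (the data and `alpha` value are illustrative assumptions):

```python
import numpy as np
from sklearn.linear_model import LinearRegression, Ridge

# Few samples, many features: a setting where plain least
# squares tends to overfit with large coefficients.
rng = np.random.default_rng(0)
X = rng.normal(size=(30, 20))
y = X[:, 0] + 0.1 * rng.normal(size=30)   # only feature 0 matters

ols = LinearRegression().fit(X, y)
ridge = Ridge(alpha=10.0).fit(X, y)       # alpha controls penalty strength

# The L2 penalty shrinks the coefficient vector toward zero.
print(np.abs(ols.coef_).sum(), np.abs(ridge.coef_).sum())
```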
6. Ensembling: Combine the predictions of several models (for example, by bagging or boosting) so that their individual errors tend to cancel out.
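As a sketch of bagging, a random forest (an ensemble of decision trees) can be compared against a single tree; the dataset and hyperparameters here are illustrative choices:

```python
# Compare one overfitting-prone decision tree against a bagged
# ensemble of trees (random forest) via cross-validation.
from sklearn.datasets import load_breast_cancer
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import cross_val_score
from sklearn.tree import DecisionTreeClassifier

X, y = load_breast_cancer(return_X_y=True)

tree_acc = cross_val_score(
    DecisionTreeClassifier(random_state=0), X, y, cv=5).mean()
forest_acc = cross_val_score(
    RandomForestClassifier(n_estimators=100, random_state=0),
    X, y, cv=5).mean()

print(tree_acc, forest_acc)
```

Averaging over many trees trained on bootstrap samples reduces the variance that makes a single deep tree overfit.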
