[TIL_200919] Optimization Algorithms - EWMA, Momentum, RMSprop, Adam
What I learned today
Improving Deep Neural Networks, Week 2
Optimization Algorithms
- Mini-batch Gradient Descent (sketch below)
    - Batch Gradient Descent: mini-batch size = m
    - Stochastic Gradient Descent: mini-batch size = 1
- EWMA: Exponentially Weighted Moving Average (sketch below)
    - Bias Correction
- Gradient Descent with Momentum (sketch below)
- RMSprop: Root Mean Square Propagation (sketch below)
- Adam Optimization Algorithm (sketch below)
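
A minimal sketch of the mini-batch split, assuming the column-per-example layout used in the course (X is (n_x, m), Y is (1, m)); the name `make_mini_batches` and its parameters are mine, not from the course code. Passing `batch_size=m` recovers batch gradient descent, and `batch_size=1` recovers stochastic gradient descent.

```python
import numpy as np

def make_mini_batches(X, Y, batch_size=64, seed=0):
    """Shuffle the training set, then cut it into mini-batches."""
    rng = np.random.default_rng(seed)
    m = X.shape[1]                      # number of examples (columns)
    perm = rng.permutation(m)
    X_shuf, Y_shuf = X[:, perm], Y[:, perm]
    return [
        (X_shuf[:, i:i + batch_size], Y_shuf[:, i:i + batch_size])
        for i in range(0, m, batch_size)  # last batch may be smaller
    ]
```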
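
A sketch of the EWMA recurrence v_t = beta * v_{t-1} + (1 - beta) * theta_t with optional bias correction; `ewma` and its arguments are illustrative names. With beta = 0.9 the average effectively spans about 1 / (1 - beta) = 10 recent values.

```python
import numpy as np

def ewma(values, beta=0.9, bias_correction=True):
    """Exponentially weighted moving average with v_0 = 0.

    Bias correction divides v_t by (1 - beta**t), offsetting the zero
    initialization that drags the first few averages toward 0.
    """
    v, out = 0.0, []
    for t, theta in enumerate(values, start=1):
        v = beta * v + (1 - beta) * theta
        out.append(v / (1 - beta ** t) if bias_correction else v)
    return np.array(out)
```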
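
One momentum update step as a sketch, using the (1 - beta) scaling convention from the lecture; `momentum_step` and the hyperparameter defaults are mine.

```python
def momentum_step(w, dw, v, beta=0.9, lr=0.01):
    """Gradient descent with momentum: move along an EWMA of the
    gradients (v) instead of the raw gradient, damping oscillations."""
    v = beta * v + (1 - beta) * dw
    w = w - lr * v
    return w, v
```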
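
An RMSprop step under the same conventions; the names and defaults are illustrative. Here the EWMA is of the squared gradients, so directions with consistently large gradients get proportionally smaller steps.

```python
import numpy as np

def rmsprop_step(w, dw, s, beta=0.9, lr=0.001, eps=1e-8):
    """RMSprop: divide the update by the root of an EWMA of squared
    gradients; eps guards against division by zero."""
    s = beta * s + (1 - beta) * dw ** 2
    w = w - lr * dw / (np.sqrt(s) + eps)
    return w, s
```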
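
Adam combines the two previous sketches: momentum's first moment (v) and RMSprop's second moment (s), each with bias correction. The defaults beta1 = 0.9, beta2 = 0.999, eps = 1e-8 are the commonly recommended values; the function name and the 1-based step counter t are my own framing.

```python
import numpy as np

def adam_step(w, dw, v, s, t, lr=0.001, beta1=0.9, beta2=0.999, eps=1e-8):
    """One Adam update; t is the 1-based count of steps taken so far."""
    v = beta1 * v + (1 - beta1) * dw        # momentum (1st moment)
    s = beta2 * s + (1 - beta2) * dw ** 2   # RMSprop (2nd moment)
    v_hat = v / (1 - beta1 ** t)            # bias correction
    s_hat = s / (1 - beta2 ** t)
    w = w - lr * v_hat / (np.sqrt(s_hat) + eps)
    return w, v, s
```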
What I'll learn tomorrow
Improving Deep Neural Networks, Week 2
- Learning Rate Decay
- Local Optima Problem