
The purpose of this project is to learn and get hands-on experience with different parts of deep learning. The following are the areas I will be exploring:

I plan to use a BERT-based Transformer model for this task. As I am working on a laptop, I used the EarlyStopping callback to train as efficiently as possible.
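Early stopping halts training once the validation loss stops improving, which saves a lot of laptop time. Below is a minimal sketch of the logic that Keras's EarlyStopping callback implements (the `patience` parameter name follows the Keras API; the function itself is my own illustration, not library code):

```python
def early_stop_epoch(val_losses, patience=2):
    # Mimics EarlyStopping(monitor="val_loss", patience=patience):
    # stop once val loss has not improved for `patience` epochs in a row.
    best = float("inf")
    wait = 0
    for epoch, loss in enumerate(val_losses):
        if loss < best:
            best, wait = loss, 0  # new best: reset the patience counter
        else:
            wait += 1
            if wait >= patience:
                return epoch  # stop here; best weights were at the minimum
    return len(val_losses) - 1  # patience never ran out

losses = [0.9, 0.7, 0.6, 0.65, 0.66, 0.5]
print(early_stop_epoch(losses, patience=2))  # → 4 (stops before seeing 0.5)
```

With `restore_best_weights=True`, Keras additionally rolls the model back to the weights from the best epoch rather than keeping the last ones.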

For Masked Language Modeling, I referred to Keras's official documentation: keras_masked_language_modeling. I got the dataset from: [Kaggle] A Million News
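The core of MLM training is the masking scheme BERT introduced: roughly 15% of tokens become prediction targets, and of those, 80% are replaced with [MASK], 10% with a random token, and 10% are left unchanged. A NumPy sketch of that scheme (the `MASK_ID` and vocabulary size are assumed typical BERT values, not taken from my actual tokenizer):

```python
import numpy as np

MASK_ID, VOCAB = 103, 30522  # assumed: standard BERT [MASK] id and vocab size

def mask_tokens(token_ids, mask_prob=0.15, rng=None):
    """BERT-style MLM masking: select ~15% of positions as targets;
    80% of targets -> [MASK], 10% -> random token, 10% unchanged."""
    rng = rng or np.random.default_rng(0)
    ids = np.array(token_ids)
    labels = np.full_like(ids, -100)            # -100 = ignored by the loss
    targets = rng.random(ids.shape) < mask_prob
    labels[targets] = ids[targets]              # remember the true tokens
    roll = rng.random(ids.shape)
    ids[targets & (roll < 0.8)] = MASK_ID                     # 80% -> [MASK]
    rand = targets & (roll >= 0.8) & (roll < 0.9)
    ids[rand] = rng.integers(0, VOCAB, ids.shape)[rand]       # 10% -> random
    return ids, labels                          # remaining 10% stay as-is
```

The model is then trained to predict the original token at every position where `labels != -100`.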

When I was training my model with a large dataset (1,200,000 x 1) on my Mac, it took 19 hours for the model to finish one epoch, so I knew I needed to train on Google Colab instead.
My current BERT model is generating the same output for any input, and I wasn't able to find the reason behind it. So I started working on my LSTM model in the meantime.
Last time, my model overfit because I did not handle the noise within the data. This time, I was inspired by an article on denoising stock price data.
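The article's exact denoising method isn't reproduced here; as a simple stand-in illustration of the general idea, smoothing a price series with a centered moving average already removes much of the high-frequency noise before it reaches the model:

```python
import numpy as np

def rolling_mean(prices, window=5):
    """Smooth a price series with a centered moving average.
    A simple stand-in for a denoising step, not the article's method."""
    kernel = np.ones(window) / window
    # mode="same" keeps the output length; the few edge values are
    # biased because only a partial window overlaps there
    return np.convolve(prices, kernel, mode="same")
```

More sophisticated denoisers (e.g. wavelet thresholding) follow the same pattern: transform, suppress the noisy components, and feed the cleaned series to the model.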
I finally found out why my Sentiment Classification BERT model kept overfitting to the dataset: the problem was in the Transformer block I had built myself.

There were a lot of improvements to this model. The biggest change is that I added more feature columns to the dataset. It now includes:
- RSI
- MACD
- Bollinger Bands
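These three indicators can all be derived from the closing price with pandas. A sketch using the standard textbook parameters, RSI(14), MACD(12, 26, 9), and 20-period Bollinger Bands at 2 standard deviations (the column names and the `price_col` parameter are my own choices, not from the original code):

```python
import pandas as pd

def add_indicators(df, price_col="Close"):
    """Append RSI, MACD, and Bollinger Band columns computed from price_col."""
    p = df[price_col]
    # RSI(14): 100 - 100 / (1 + avg gain / avg loss) over 14 periods
    delta = p.diff()
    gain = delta.clip(lower=0).rolling(14).mean()
    loss = (-delta.clip(upper=0)).rolling(14).mean()
    df["RSI"] = 100 - 100 / (1 + gain / loss)
    # MACD(12, 26, 9): fast EMA minus slow EMA, plus a 9-period signal line
    ema12 = p.ewm(span=12, adjust=False).mean()
    ema26 = p.ewm(span=26, adjust=False).mean()
    df["MACD"] = ema12 - ema26
    df["MACD_signal"] = df["MACD"].ewm(span=9, adjust=False).mean()
    # Bollinger Bands: 20-period mean +/- 2 standard deviations
    mid = p.rolling(20).mean()
    std = p.rolling(20).std()
    df["BB_upper"] = mid + 2 * std
    df["BB_lower"] = mid - 2 * std
    return df
```

The first rows are NaN until each rolling window fills, so they should be dropped (or the windows warmed up) before training.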