⬇️ Main Note
The email and password are sent from the browser to the backend in the login API request.
Inside the database there is a table that holds the users' login data. The backend looks up the matching row there.
--> (ex) Josh / 123#1 / firstname.lastname@example.org
When the matching login data is found, the backend saves it in memory in a variable called a "Session". A session is memory-based data.
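The login flow above can be sketched as a toy in Python. Everything here is made up for illustration: `USERS` stands in for the database table, and a plain dict `SESSIONS` stands in for the backend's memory-based session store.

```python
import secrets

# Tiny stand-in for the database's user table (illustrative values only).
USERS = {"firstname.lastname@example.org": {"name": "Josh", "password": "123#1"}}

# Memory-based session store: session number -> logged-in user data.
SESSIONS = {}

def login(email, password):
    """Look up the login data; on a match, save a session in backend memory."""
    user = USERS.get(email)
    if user is None or user["password"] != password:
        return None  # login failed
    session_id = secrets.token_hex(16)  # random session number
    SESSIONS[session_id] = {"email": email, "name": user["name"]}
    return session_id

sid = login("firstname.lastname@example.org", "123#1")
print(sid in SESSIONS)  # True: the session now lives in backend memory
```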
Since the user is now logged in, every request the user sends includes the session number.
--> For example, for a payment, the backend must know who (which user/ID) is trying to pay.
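A minimal sketch of that payment example, assuming the same dict-based session store as above (the session ID "a1b2" and the handler name are invented):

```python
# Every later request carries the session number; the backend resolves it to a user.
SESSIONS = {"a1b2": {"email": "firstname.lastname@example.org", "name": "Josh"}}

def handle_payment(session_id, amount):
    """Figure out who is paying by looking the session number up in memory."""
    user = SESSIONS.get(session_id)
    if user is None:
        return "401 not logged in"
    return f"charging {amount} to {user['name']}"

print(handle_payment("a1b2", 50))  # charging 50 to Josh
print(handle_payment("zzzz", 50))  # 401 not logged in
```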
When traffic gets higher, i.e. when lots of people join at once, the backend cannot handle all the requests simultaneously. It responds one at a time.
A faster CPU helps the backend computer get through all the requests in a shorter time.
--> Memory holds what is still waiting: queued users, data being sent, etc.
--> Scale-up => upgrading the CPU (and memory) of one machine
Even if there are a bunch of backend computers, the API stays the same. So it is possible to expand by adding backend computers.
--> Scale-out => expanding by copy-pasting identical backend computers
But it is still hard to split users across those copied backend computers.
What if Josh's request goes to a different backend computer because the previous one was full? (10/10 users)
--> The new computer isn't the one where Josh logged in, so his session is missing there and plain scale-out doesn't work.
Stateful => each backend computer holding its own state!
To solve this problem, the login session should be saved inside the database.
--> The login data literally lives in the database, not in any one backend's memory.
--> For the database it doesn't matter how far the backend is expanded: every copy reads the same place.
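Reworking the two-server sketch so the session lives in a shared store instead of per-server memory. Here an in-memory SQLite table stands in for the real database; both server copies read and write the same connection (an assumption for the demo, not a production setup):

```python
import sqlite3

# Shared store standing in for the database; all backend copies use it.
db = sqlite3.connect(":memory:")
db.execute("CREATE TABLE sessions (session_id TEXT PRIMARY KEY, user TEXT)")

class Backend:
    def __init__(self, shared_db):
        self.db = shared_db  # no per-server session state: stateless backend

    def login(self, session_id, user):
        self.db.execute("INSERT INTO sessions VALUES (?, ?)", (session_id, user))

    def whoami(self, session_id):
        row = self.db.execute(
            "SELECT user FROM sessions WHERE session_id = ?", (session_id,)
        ).fetchone()
        return row[0] if row else None

server_a, server_b = Backend(db), Backend(db)
server_a.login("sid-1", "Josh")   # Josh logs in on server A
print(server_b.whoami("sid-1"))   # Josh: now ANY copy can serve him
```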
Then what's the difference between expanding the backend computers and expanding the database? The backend copies are now stateless, but the database holds all the state, so it can't simply be copy-pasted.
Q. How to solve these problems?
➡️ 🧩 Data Partitioning
Just think of it as dividing the table into pieces.
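A toy sketch of that idea: hash-partitioning one user table into pieces, where each piece could live on its own database server. The partition count and key function are arbitrary choices for the demo.

```python
import zlib

NUM_PARTITIONS = 3
# Each dict stands in for one piece of the table on its own DB server.
partitions = [dict() for _ in range(NUM_PARTITIONS)]

def partition_for(key):
    # Deterministic hash, so the same key always maps to the same piece.
    return zlib.crc32(key.encode()) % NUM_PARTITIONS

def put(key, value):
    partitions[partition_for(key)][key] = value

def get(key):
    return partitions[partition_for(key)].get(key)

for email in ["a@x.org", "b@x.org", "c@x.org", "d@x.org"]:
    put(email, {"email": email})

print(get("a@x.org"))  # found by checking exactly one partition
```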
// "Scraping the DB" = pulling data out of the DB from disk
<JSON Web Token>
2 ways to encode =>
--> Encoding (reversible): abcd -> 1234, and 1234 can be decoded back to abcd.
--> Hashing (one-way): 273719 -> 7 7 9; the original cannot be recovered from 7 7 9.
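Assuming the two arrows above mean reversible encoding versus one-way hashing (my reading of the note), the difference can be sketched with base64 and SHA-256 as stand-ins:

```python
import base64
import hashlib

# Way 1: encoding is reversible -- you can get the original back.
encoded = base64.b64encode(b"abcd").decode()
decoded = base64.b64decode(encoded)
print(encoded, decoded)  # YWJjZA== b'abcd'

# Way 2: hashing is one-way -- information is lost, so there is no way back.
digest = hashlib.sha256(b"273719").hexdigest()
print(digest[:8])  # a fixed-size fingerprint; "273719" cannot be recovered
```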