Deep Learning 1 (DSC3032) - Lecture 3

김진주 · March 15, 2022

Worn down by this wretched pandemic, in a room thick with the smell of coffee and my dog's signature toasty scent, it's 1 a.m. before I finally turn the lecture on ...
The quiet of our house (we live on a main road, so car noise lies underneath like white noise) is broken by the stiff lecturing of a professor from England.
The professor's voice ... the clatter of my laptop keys ...
Now and then, when something won't sink in, I pause the lecture and the quiet returns.
The harder the material, the deeper the quiet grows ...
Quiet is difficulty itself, and at the same time the sound of growing ...
Wednesday, March 16, while listening to the Week 3 lecture of Deep Learning 1


Lecture 3 - Intro to Deep Learning : Machine Learning Recap & Neural Network Basics


1. Machine Learning

1) ML : Basic Paradigm

Machine learning's core idea - 'learning' from data

  1. training data : the set of examples you observe and learn from
  2. test data : unseen data you evaluate the model on

First, you observe a certain set of examples called training data.
Then, you try to infer something out of them.
Last, you use this for predictions.

Machine learning basic paradigm

data - machine learning is about learning from data

First, observe a set of examples called training data.
Data, in a raw sense, is a spreadsheet with lots of numbers that are hard to make sense of.

What you want to do is infer something about the process that generated the data - find a pattern.
Are these variables related? - "fit a line or curve", "fit a model" using linear regression - roughly, this means finding a regularity.

The whole point is using this for predictions - stock markets, weather forecasts, ...

Then test it on data that hasn't been seen before - unseen data, i.e. the test data.
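To make the fit-then-predict loop concrete, here is a minimal sketch (the data and numbers are made up for illustration, not from the lecture): fit a line to training data with NumPy, then predict on unseen test inputs.

```python
import numpy as np

# Hypothetical training data: noisy samples from the "true" process y = 2x + 1
rng = np.random.default_rng(0)
x_train = rng.uniform(0, 10, size=50)
y_train = 2.0 * x_train + 1.0 + rng.normal(0, 0.5, size=50)

# "Fit a line": infer the pattern (slope, intercept) from the training data
slope, intercept = np.polyfit(x_train, y_train, deg=1)

# Use the fitted model for predictions on unseen (test) inputs
x_test = np.array([2.5, 7.0])
y_pred = slope * x_test + intercept
print(y_pred)  # close to 2*x + 1 for each test input
```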

2) Supervised Learning

Supervised learning : given a set of 'input-label' pairs, find a rule that predicts the label associated with an unseen input

The correct, ideal answer (label) is given along with the data (input)
e.g. old data about houses -> the price of each house

Learn to predict some output y when given an input vector x.
Goal : learn a function that best approximates the relationship between input x and output y in a given sample of labelled data (correct outputs given).

Regression : target output is a real number

Classification : target output is a class label (ex. Yes or No)

Using training data (inputs and outputs), a model is created during the training phase

  • the phase where the machine learning model learns how the inputs are mapped to outputs

How? Parameters.

Learning involves adjusting the parameters to reduce the discrepancy (the loss) between the actual output and the predicted output produced by the model.

Once the parameters have been optimized, the model can be tested on new, unseen data to check its performance.
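A minimal sketch of that adjust-parameters-to-reduce-loss loop (gradient descent on a toy linear model; the data and learning rate here are made up for illustration):

```python
import numpy as np

# Toy supervised data (made up): the true rule is y = 3x - 2
x = np.linspace(-1, 1, 100)
y = 3 * x - 2

# Model: y_hat = w*x + b, parameters initialised randomly
rng = np.random.default_rng(0)
w, b = rng.normal(), rng.normal()
lr = 0.1  # learning rate

for step in range(200):
    y_hat = w * x + b                      # predicted output
    loss = np.mean((y_hat - y) ** 2)       # discrepancy between prediction and label
    grad_w = np.mean(2 * (y_hat - y) * x)  # gradient of the loss w.r.t. w
    grad_b = np.mean(2 * (y_hat - y))      # gradient of the loss w.r.t. b
    w -= lr * grad_w                       # adjust parameters to reduce the loss
    b -= lr * grad_b

print(w, b)  # approaches (3, -2)
```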

Machine learning techniques : linear/logistic regression, decision trees, naive Bayes, support vector machines, neural networks

Example machine learning task

binary classification : only 0 or 1, no or yes - two classes
e.g. a binary image classification task

3) Unsupervised Machine Learning

Here you don't actually have a target answer or variable that you want - the thing called a 'label'.

Unsupervised ML : given a set of input data (without labels), find patterns in the data
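The lecture stops at the definition, but a minimal sketch of "finding patterns without labels" could look like k-means clustering (the two-blob data here is made up for illustration):

```python
import numpy as np

# Unlabeled inputs: two blobs of points - no target 'label' exists anywhere
rng = np.random.default_rng(0)
data = np.vstack([rng.normal(0, 0.5, (50, 2)), rng.normal(3, 0.5, (50, 2))])

# k-means: find k cluster centers that summarise the pattern in the data
k = 2
centers = data[rng.choice(len(data), k, replace=False)]
for _ in range(20):
    # assign each point to its nearest center
    assign = np.argmin(((data[:, None] - centers) ** 2).sum(-1), axis=1)
    # move each center to the mean of its assigned points
    centers = np.array([data[assign == j].mean(axis=0) for j in range(k)])

print(centers)  # roughly the two blob centers, near (0, 0) and (3, 3)
```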

price can be any real value - it is continuous data, so predicting it is regression

chance of purchase - continuous -> regression
a probability - continuous -> regression

image captioning (vision + NLP) : image pixels -> text

Neural Networks

Biologically inspired - a collection of simple, trainable mathematical units, organised in layers, that work together to solve complex tasks.
NNs are inspired by the structure and function of the brain's cells, called neurons.

The human brain is the best system we know - it can carry out large computations over short periods of time using little energy (simply looking outside and understanding everything).

Neurons have weights that can excite or inhibit an activity (the way your brain lights up and focuses when you see something fun or fascinating).

Cortex exhibits rapid parallel computation plus flexibility

The brain has 100 billion neurons and 100 trillion connections, and operates on 20 watts! Unbeatable efficiency.

The biggest NN has 175 billion parameters and runs on 285,000 CPU cores and 10,000 GPUs (190 million watts).

Artificial NNs vs Human Brain

Human perception is fast & effortless

  • neurons fire at most 100 times per second
  • humans solve perception in 0.1 s (no perceptible delay)
    <-> so within that window each neuron can fire at most ~10 times - roughly 10 sequential steps

Anything a human can do in 0.1 s, a big 10-layer NN can do too!

Deep learning systems approach (and sometimes surpass) human-level benchmarks on a wide array of tasks, but they also fail on carefully chosen inputs that cause bizarre misclassifications.

Neuron - what NNs are trying to mimic

a neuron : a graph data structure with nodes and edges
each input has a 'weight' associated with it (like human neurons do)
the neuron takes the weighted sum of the inputs and also adds a bias

input vector -> sum of (input * weight) + bias -> activation function -> output
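In code, that pipeline for a single neuron might look like this (a sketch; sigmoid is just one possible choice of activation):

```python
import numpy as np

def sigmoid(z):
    # squashes any real number into (0, 1)
    return 1.0 / (1.0 + np.exp(-z))

def neuron(x, w, b):
    # weighted sum of the inputs, plus bias, through the activation
    z = np.dot(x, w) + b
    return sigmoid(z)

x = np.array([0.5, -1.0, 2.0])   # input vector
w = np.array([0.1, 0.4, -0.2])   # one weight per input
b = 0.3                          # bias
print(neuron(x, w, b))           # the neuron's output
```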

The Perceptron (1958)

1st generation of NNs, where it all began - a single neuron (late 50s and 60s)

Output

dot product : input x with w (multiplied element-wise, then summed), plus a non-linearity (activation)

?? What does the activation do to the dot product? -> the dot product xW + b is just a single number z; the activation is a non-linear function g applied to z, so the output is ŷ = g(xW + b).

Matrix form : with several neurons, stack their weight vectors into a matrix W, so the whole layer is computed at once as g(xW + b).

activation functions

e.g. sigmoid : sigmoid(z) = 1 / (1 + e^(-z)), which squashes any real z into (0, 1)

Why non-linearities?

Linear functions only produce linear decisions (e.g. straight lines) no matter the network size - you're only multiplying inputs by weights.

This does not help you do fancy things -> the problem with linear models for things like image processing.

Non-linearities let the network represent complex things.

b : a scalar (the bias)

Anyway, xW + b gets put into an activation function (non-linear) - see the check sketched below.
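A tiny numerical check of the claim above (a sketch; ReLU is used as the example non-linearity): stacking two linear layers with no activation in between collapses into a single linear layer, so depth buys nothing until you insert a non-linearity.

```python
import numpy as np

rng = np.random.default_rng(0)
W1 = rng.normal(size=(3, 4))
W2 = rng.normal(size=(4, 2))
x = rng.normal(size=3)

# two linear layers with no activation in between...
out_linear = (x @ W1) @ W2
# ...are exactly ONE linear layer with weights W1 @ W2
print(np.allclose(out_linear, x @ (W1 @ W2)))  # True: still a linear decision

# with a non-linearity in between, this collapse no longer happens
out_nonlinear = np.maximum(0, x @ W1) @ W2  # ReLU between the layers
```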

perceptron limitations

  • lacked a mechanism for learning
  • limited (only 1 output), but laid the foundation - for a problem like XOR, there is no single line you can draw to separate the classes
  • a single perceptron could not solve such linearly non-separable problems

Now : lots of neurons stacked in layers -> stacked layers of perceptrons

neural network = running several logistic regressions at the same time, so that it is no longer just a linear classifier (see the XOR sketch below)
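A hand-wired sketch of that idea (the weights here are picked by hand for illustration, not learned): one hard-threshold perceptron per "logistic regression", stacked in two layers, computes XOR - something no single perceptron can do.

```python
import numpy as np

# XOR: no single line separates the 1s from the 0s, so one perceptron fails
X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]])

def step(z):
    # hard-threshold activation, as in the classic perceptron
    return (z > 0).astype(int)

# hidden layer (hand-picked weights): neuron 1 computes OR, neuron 2 computes AND
W1 = np.array([[1.0, 1.0],
               [1.0, 1.0]])
b1 = np.array([-0.5, -1.5])
h = step(X @ W1 + b1)

# output layer: OR AND NOT(AND) = XOR - a second layer on top of the first
W2 = np.array([1.0, -2.0])
b2 = -0.5
print(step(h @ W2 + b2))  # [0 1 1 0] - the XOR truth table
```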

multiple-output perceptrons

every input has to be connected to every neuron (a dense, fully-connected layer)

Single Hidden Layer

Deep Neural Network (now)

(diagram: many neurons, layer after layer, each layer feeding the next)
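Structurally, that stacking is all a deep network is - the same layer pattern repeated. A sketch of the forward pass (the layer sizes are invented for illustration):

```python
import numpy as np

def relu(z):
    return np.maximum(0, z)

def forward(x, layers):
    # a deep network is just this pattern repeated: h = g(h @ W + b)
    h = x
    for W, b in layers:
        h = relu(h @ W + b)  # each layer re-represents the previous one
    return h

# hypothetical layer sizes 4 -> 8 -> 8 -> 2 (made up for illustration)
rng = np.random.default_rng(0)
sizes = [4, 8, 8, 2]
layers = [(rng.normal(size=(m, n)), np.zeros(n)) for m, n in zip(sizes, sizes[1:])]
print(forward(rng.normal(size=4), layers))
```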

features & hidden layers

traditional ML - the features had to be given by hand : curves, lines, parts like noses or eyes...
deep learning & hidden layers - the network figures out the features by itself; the hidden layers extract them.


https://daebaq27.tistory.com/60
https://anweh.tistory.com/21
(to finish writing later, using these as references)

0개의 댓글