Basics: Artificial Neurons

Austin Jiuk Kim · March 22, 2022
Series: Deep Learning (3/10)

Affine Functions

Step #1. Weighted Sum:

$$z = w_1x_1 + \dots + w_lx_l = (\overrightarrow{x})^T\overrightarrow{w}$$

Step #2. Affine Transformation:

$$z = w_1x_1 + \dots + w_lx_l + b = (\overrightarrow{x})^T\overrightarrow{w} + b$$

Step #3. Affine Functions with l Features:

With all $l$ features written in vector form, the affine function takes a row vector of inputs and parameters $\overrightarrow{w}, b$, and produces a scalar:

$$(\overrightarrow{x})^T = (x_1 \; \dots \; x_l) \in \mathbb{R}^{1 \times l}, \quad \overrightarrow{w} = (w_1 \; \dots \; w_l)^T \in \mathbb{R}^{l \times 1}, \quad b \in \mathbb{R}$$

$$z = f(\,(\overrightarrow{x})^T; \, \overrightarrow{w}, \, b\,) = (\overrightarrow{x})^T\overrightarrow{w} + b \in \mathbb{R}$$
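As a shape check, here is a minimal NumPy sketch of this affine function (the names `affine`, `x_T`, `w`, and `b` are illustrative, not from the post):

```python
import numpy as np

# Affine function: z = x^T w + b
# x_T: (1, l) row vector of features, w: (l, 1) column vector of weights, b: scalar.
def affine(x_T, w, b):
    return x_T @ w + b

x_T = np.array([[1.0, 2.0, 3.0]])    # (1, 3)
w = np.array([[0.1], [0.2], [0.3]])  # (3, 1)
b = 0.5
z = affine(x_T, w, b)                # (1, 1); here z = 1.9
```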

Activation Functions

Sigmoid (logit -> probability)

$$g(x) = \sigma(x) = \frac{1}{1 + e^{-x}}$$

Tanh

$$g(x) = \tanh(x) = \frac{e^{x} - e^{-x}}{e^{x} + e^{-x}}$$

ReLU

$$g(x) = \mathrm{ReLU}(x) = \max(0, \, x)$$
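All three activations are one-liners in NumPy; a minimal sketch (the function names are illustrative):

```python
import numpy as np

def sigmoid(x):
    # Squashes a logit into (0, 1), interpretable as a probability.
    return 1.0 / (1.0 + np.exp(-x))

def tanh(x):
    # Equivalent to np.tanh(x); squashes into (-1, 1).
    return (np.exp(x) - np.exp(-x)) / (np.exp(x) + np.exp(-x))

def relu(x):
    # Zeroes out negative inputs, passes positive inputs through unchanged.
    return np.maximum(0.0, x)
```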
Feeding the affine output into an activation gives the full per-sample flow:

$$(\overrightarrow{x})^T = (x_1 \; \dots \; x_l) \in \mathbb{R}^{1 \times l}, \quad \overrightarrow{w} = (w_1 \; \dots \; w_l)^T \in \mathbb{R}^{l \times 1}, \quad b \in \mathbb{R}$$

$$a = g(\,f(\,(\overrightarrow{x})^T; \, \overrightarrow{w}, \, b\,)\,) \in \mathbb{R}$$

Artificial Neurons

An artificial neuron is exactly this composition: the activation $g$ applied to the output of the affine function $f$. Note that only $f$ carries the parameters $\overrightarrow{w}, b$; $g$ has none of its own:

$$\mathrm{neuron}(\overrightarrow{x}) = g(\,f(\,(\overrightarrow{x})^T; \, \overrightarrow{w}, \, b\,)\,) \in \mathbb{R}$$
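A minimal sketch of one neuron with a sigmoid activation, reusing the shapes above (the helper name `neuron` is illustrative):

```python
import numpy as np

def neuron(x_T, w, b):
    # Affine part: z = x^T w + b, then sigmoid activation a = g(z).
    z = x_T @ w + b
    return 1.0 / (1.0 + np.exp(-z))

x_T = np.array([[1.0, 2.0, 3.0]])    # (1, 3)
w = np.array([[0.1], [0.2], [0.3]])  # (3, 1)
b = 0.5
a = neuron(x_T, w, b)                # (1, 1) activation in (0, 1)
```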

Mini-batch in Artificial Neurons

For a mini-batch, the $N$ samples are stacked row-wise into a design matrix, and the same parameters $\overrightarrow{w}, b$ are shared across all rows:

$$X^T \in \mathbb{R}^{N \times l}, \quad \overrightarrow{w} = (w_1 \; \dots \; w_l)^T \in \mathbb{R}^{l \times 1}, \quad b \in \mathbb{R}$$

$$Z = f(\,X^T; \, \overrightarrow{w}, \, b\,) = X^T\overrightarrow{w} + b \in \mathbb{R}^{N \times 1}$$

$$A = g(Z) \in \mathbb{R}^{N \times 1}$$
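The mini-batch version is the same computation vectorized over rows; a sketch (the seed and shapes are arbitrary):

```python
import numpy as np

def neuron_batch(X_T, w, b):
    # Z = X^T w + b broadcasts the shared (w, b) across all N rows.
    Z = X_T @ w + b                  # (N, 1)
    A = 1.0 / (1.0 + np.exp(-Z))     # (N, 1), sigmoid applied element-wise
    return A

rng = np.random.default_rng(0)
N, l = 4, 3
X_T = rng.normal(size=(N, l))        # (4, 3) mini-batch, one sample per row
w = rng.normal(size=(l, 1))          # (3, 1)
b = 0.0
A = neuron_batch(X_T, w, b)          # (4, 1)
```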