tmp

Min Jae Cho · June 11, 2025

Activation Functions

- Sigmoid: σ(x) = 1/(1+e^(-x)), range [0,1], derivative σ'(x) = σ(x)(1-σ(x)); problems: vanishing gradients, not zero-centered, exp() is costly
- Tanh: tanh(x) = (e^x-e^(-x))/(e^x+e^(-x)) = 2σ(2x)-1, range [-1,1], derivative 1-tanh²(x); zero-centered, but still suffers vanishing gradients
- ReLU: f(x) = max(0,x), derivative 1 if x>0 else 0; pros: ~6× faster convergence, cheap, no saturation for x>0; problems: dying neurons, not zero-centered
- Leaky ReLU: f(x) = max(αx, x) with α = 0.01–0.1; PReLU learns α; fixes dying neurons
- ELU: f(x) = x if x>0, α(e^x-1) if x≤0, default α = 1; near-zero mean activations, robust to noise, exp() cost
- SELU: self-normalizing, λ = 1.0507, α = 1.6733; deep nets without BatchNorm
- GELU: x·Φ(x) = x·(1+erf(x/√2))/2 ≈ x·σ(1.702x); widely used in Transformers; stochastic-masking interpretation
- Swish: x·σ(βx); equals x·σ(x) when β = 1; infinitely differentiable
- Mish: x·tanh(ln(1+e^x)); smooth
- Recommendation: use ReLU by default; try Leaky ReLU/ELU/GELU/Swish when fine-tuning; avoid sigmoid/tanh in hidden layers
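
To make the formulas above concrete, here is a minimal NumPy sketch (the helper names are mine, and SciPy is assumed for erf) comparing the exact GELU x·Φ(x) against the x·σ(1.702x) approximation quoted above.

```python
import numpy as np
from scipy.special import erf  # exact GELU uses the Gaussian CDF

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def relu(x):
    return np.maximum(0.0, x)

def leaky_relu(x, alpha=0.01):
    return np.where(x > 0, x, alpha * x)

def elu(x, alpha=1.0):
    return np.where(x > 0, x, alpha * (np.exp(x) - 1.0))

def gelu_exact(x):
    # x * Phi(x), Phi = standard normal CDF
    return x * 0.5 * (1.0 + erf(x / np.sqrt(2.0)))

def gelu_sigmoid_approx(x):
    # the x * sigmoid(1.702 x) approximation mentioned above
    return x * sigmoid(1.702 * x)

x = np.linspace(-4, 4, 9)
print(np.max(np.abs(gelu_exact(x) - gelu_sigmoid_approx(x))))  # ~1e-2: close, but not exact
```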

Weight Initialization

- Zero init: W = 0 → every neuron receives the same gradient → no learning (symmetry is never broken)
- Small random values: N(0, 0.01²) → activations collapse toward 0 in deep nets → vanishing gradients
- Large random values: N(0, 1²) → activations saturate → vanishing gradients
- Xavier/Glorot: std = 1/√D_in or √(2/(D_in+D_out)); for tanh/sigmoid; based on variance preservation
- Kaiming/MSRA (He): std = √(2/D_in); for ReLU; compensates for ReLU killing half the neurons
- LeCun: std = √(1/D_in); for SELU
- Uniform Xavier: U(-√(6/(D_in+D_out)), √(6/(D_in+D_out)))
- ResNet: first conv = MSRA, second conv = 0, so that Var(F(x)+x) = Var(x)
- Transformer: N(0, d_model^(-0.5))
- LSTM: forget-gate bias = 1, other biases = 0
- BatchNorm: γ = 1, β = 0
- Conv layers: fan_out mode is also an option
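
A small NumPy experiment (forward_stats is my own helper) that illustrates the variance-preservation argument: Xavier keeps tanh activations healthy, lets ReLU activations shrink layer by layer, while Kaiming keeps ReLU activations near unit scale.

```python
import numpy as np

def forward_stats(init_std_fn, depth=20, width=512, n=1000, nonlinearity=np.tanh):
    """Push random data through a deep net and report the std of the final activations."""
    rng = np.random.default_rng(0)
    x = rng.standard_normal((n, width))
    for _ in range(depth):
        W = rng.standard_normal((width, width)) * init_std_fn(width, width)
        x = nonlinearity(x @ W)
    return x.std()

relu = lambda z: np.maximum(0.0, z)
print("tanh + Xavier :", forward_stats(lambda fi, fo: np.sqrt(2.0 / (fi + fo)), nonlinearity=np.tanh))
print("ReLU + Xavier :", forward_stats(lambda fi, fo: np.sqrt(2.0 / (fi + fo)), nonlinearity=relu))
print("ReLU + Kaiming:", forward_stats(lambda fi, fo: np.sqrt(2.0 / fi), nonlinearity=relu))
```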

Data Preprocessing

- Standardization: X = (X-μ)/σ, μ = E[X], σ = √Var(X)
- Min-Max: X = (X-min)/(max-min) → [0,1]
- Robust: X = (X-median)/IQR, robust to outliers
- Images: subtract the mean image (AlexNet), subtract per-channel means (VGG), per-channel mean/std (ResNet)
- ImageNet statistics: mean = [0.485, 0.456, 0.406], std = [0.229, 0.224, 0.225]
- Why zero-centering matters: with h_t = W_x·x_t + W_h·h_(t-1), if every x_t > 0 then all entries of ∂L/∂W get the same sign → zig-zag updates
- PCA: removes correlations, components ordered by eigenvalue
- Whitening: covariance = I; not recommended for images (destroys spatial structure)
- Z-score clipping: clip when |z| > 3
- Quantile: quantile-based transform
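
A minimal NumPy sketch of the three scalers above on skewed synthetic data; the variable names are mine.

```python
import numpy as np

rng = np.random.default_rng(0)
X = rng.lognormal(mean=0.0, sigma=1.0, size=(1000, 3))  # skewed data with a long tail

# Standardization (z-score)
X_std = (X - X.mean(axis=0)) / X.std(axis=0)

# Min-max scaling to [0, 1]
X_mm = (X - X.min(axis=0)) / (X.max(axis=0) - X.min(axis=0))

# Robust scaling: median / IQR, less sensitive to the tail
q1, med, q3 = np.percentile(X, [25, 50, 75], axis=0)
X_rob = (X - med) / (q3 - q1)

for name, Z in [("z-score", X_std), ("min-max", X_mm), ("robust", X_rob)]:
    print(f"{name:8s} mean={Z.mean(axis=0).round(2)} max={Z.max(axis=0).round(2)}")
```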

Regularization Techniques

- L2: Loss + λ||W||², weight decay, λ = 1e-4 to 1e-5, penalizes large weights
- L1: Loss + λ|W|, sparse weights, feature-selection effect
- Elastic Net: α·L1 + (1-α)·L2, combines both advantages
- Dropout: drop units with probability p during training; at test time keep everything and scale by p; ensemble effect; mainly applied to FC layers
- Inverted Dropout: scale by 1/(1-p) during training; no change at test time
- DropConnect: zero out weights stochastically
- DropPath: drop skip-connection branches stochastically
- Cutout: mask image patches
- Mixup: λ(x₁,y₁) + (1-λ)(x₂,y₂)
- CutMix: swap image regions + mix labels
- Batch Norm: batch statistics μ_B, σ²_B during training, running averages at test time; mitigates internal covariate shift
- Layer Norm: per-layer normalization, for RNNs/Transformers
- Instance Norm: per-instance, for style transfer
- Group Norm: per channel group, independent of batch size
- Weight Norm: W = g·v/||v||, separates magnitude and direction
- Spectral Norm: normalize by the largest singular value, stabilizes GANs
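
A sketch of inverted dropout as described above (NumPy, my own helper name): scaling by 1/(1-p) at training time keeps the expected activation unchanged, so inference needs no rescaling.

```python
import numpy as np

def inverted_dropout(x, p_drop=0.5, training=True, rng=None):
    """Inverted dropout: scale by 1/(1-p) at train time so test time needs no change."""
    if not training or p_drop == 0.0:
        return x
    rng = rng or np.random.default_rng()
    mask = (rng.random(x.shape) >= p_drop).astype(x.dtype)
    return x * mask / (1.0 - p_drop)

x = np.ones((4, 8))
out = inverted_dropout(x, p_drop=0.5, rng=np.random.default_rng(0))
print(out.mean())  # stays close to 1.0 in expectation
```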

Learning Rate Scheduling

- Step: ×0.1 at epochs 30/60/90 (ResNet); sharp drops at fixed points
- Multi-Step: decay at several milestones
- Exponential: α_t = α_0·γ^t, γ = 0.96–0.99
- Cosine: α_t = (1/2)α_0(1+cos(πt/T)); smooth decay; often combined with SGDR
- Cosine Annealing with restarts: T_i = T_0·T_mult^i (warm restarts)
- Linear: α_t = α_0(1-t/T); BERT/GPT
- Inverse √: α_t = α_0/√t; original Transformer paper
- Polynomial: α_t = α_0(1-t/T)^p
- Warm Restart: periodically reset to the initial value; helps escape local minima
- Warmup: linear increase over the first k steps; for large-batch training
- Plateau: decay when the metric stops improving (ReduceLROnPlateau)
- Cyclic: triangular schedule cycling between a minimum and a maximum
- One Cycle: warmup → decay; allows a larger peak learning rate
- Early Stopping: stop after n epochs (patience) without improvement
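
A hand-rolled warmup-plus-cosine schedule as a plain Python function (lr_at_step and its defaults are mine, not from any particular library), matching the cosine formula above.

```python
import math

def lr_at_step(step, total_steps, base_lr=3e-4, warmup_steps=500, min_lr=0.0):
    """Linear warmup followed by cosine decay."""
    if step < warmup_steps:
        return base_lr * (step + 1) / warmup_steps
    progress = (step - warmup_steps) / max(1, total_steps - warmup_steps)
    return min_lr + 0.5 * (base_lr - min_lr) * (1.0 + math.cos(math.pi * progress))

for s in (0, 250, 500, 5000, 10000):
    print(s, round(lr_at_step(s, total_steps=10000), 6))
```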

Hyperparameter Optimization

- Systematic recipe: 1) check the initial loss (≈ log(C) for C classes) 2) overfit a small sample (5–10 batches → 100% accuracy) 3) learning-rate search (1e-1, 1e-2, 1e-3, 1e-4) 4) coarse grid (1–5 epochs) 5) fine grid (10–20 epochs) 6) analyze the curves
- Grid Search: try every combination; curse of dimensionality
- Random Search: random sampling; explores the important parameters better
- Bayesian: Gaussian processes, adaptive search
- Hyperband: multi-armed bandit with early termination
- BOHB: Bayesian + Hyperband
- Population Based Training: evolutionary, adjusts hyperparameters on the fly
- Learning-rate range: 1e-6 to 1e-1, sampled on a log scale
- Batch size: powers of 2, within memory limits
- Regularization strength: 1e-6 to 1e-1
- Dropout rate: 0.1–0.9, 0.5 for FC layers
- Validation strategy: k-fold, holdout, time-series split
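
A minimal example of random-search sampling with log-uniform ranges for learning rate and regularization strength, as recommended above (the config keys are illustrative).

```python
import numpy as np

rng = np.random.default_rng(0)

def sample_config():
    """Random-search sampling: log-uniform for LR / weight decay, discrete choice for batch size."""
    return {
        "lr": 10 ** rng.uniform(-6, -1),            # 1e-6 .. 1e-1 on a log scale
        "weight_decay": 10 ** rng.uniform(-6, -1),
        "batch_size": int(rng.choice([32, 64, 128, 256])),
        "dropout": float(rng.uniform(0.1, 0.9)),
    }

for _ in range(3):
    print(sample_config())
```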

RNN Family

- Vanilla RNN: h_t = tanh(W_x·x_t + W_h·h_(t-1) + b); problem: vanishing/exploding gradients
- BPTT: ∂L/∂h_(t-1) = W_h^T·diag(1-tanh²(·))·∂L/∂h_t; the repeated products cause vanishing gradients
- Truncated BPTT: backpropagate only k steps; saves memory
- Gradient clipping: if ||g|| > θ then g ← g·θ/||g||
- LSTM in detail: f_t = σ(W_f[h_(t-1), x_t] + b_f), i_t = σ(W_i[h_(t-1), x_t] + b_i), g_t = tanh(W_g[h_(t-1), x_t] + b_g), o_t = σ(W_o[h_(t-1), x_t] + b_o), c_t = f_t⊙c_(t-1) + i_t⊙g_t, h_t = o_t⊙tanh(c_t)
- GRU: r_t = σ(W_r[h_(t-1), x_t]), z_t = σ(W_z[h_(t-1), x_t]), h'_t = tanh(W[x_t, r_t⊙h_(t-1)]), h_t = (1-z_t)⊙h_(t-1) + z_t⊙h'_t
- Parameter count: LSTM > GRU > RNN; performance: LSTM ≈ GRU > RNN
- Bidirectional: combine forward and backward information, h_t = [h→_t; h←_t]
- Multi-layer: stacking helps, but gains are limited beyond 3–4 layers
- Residual RNN: h_t = h_(t-1) + F(h_(t-1), x_t)
- Teacher Forcing: ground truth fed during training, own predictions at test time → exposure bias
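
A single LSTM step in NumPy following the gate equations above (the [f, i, g, o] weight packing and helper names are my own convention); the forget-gate bias is set to 1 as suggested in the initialization section.

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def lstm_step(x_t, h_prev, c_prev, W, b):
    """One LSTM step. W: (hidden+input, 4*hidden), b: (4*hidden,), packed as [f, i, g, o]."""
    hidden = h_prev.shape[-1]
    z = np.concatenate([h_prev, x_t], axis=-1) @ W + b
    f = sigmoid(z[..., 0*hidden:1*hidden])          # forget gate
    i = sigmoid(z[..., 1*hidden:2*hidden])          # input gate
    g = np.tanh(z[..., 2*hidden:3*hidden])          # candidate cell state
    o = sigmoid(z[..., 3*hidden:4*hidden])          # output gate
    c_t = f * c_prev + i * g
    h_t = o * np.tanh(c_t)
    return h_t, c_t

rng = np.random.default_rng(0)
H, D = 8, 4
W = rng.standard_normal((H + D, 4 * H)) * 0.1
b = np.zeros(4 * H); b[:H] = 1.0                    # forget-gate bias = 1
h, c = np.zeros(H), np.zeros(H)
for _ in range(5):
    h, c = lstm_step(rng.standard_normal(D), h, c, W, b)
print(h.shape, c.shape)
```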

Attention & Transformer

- Key-Value-Query: Q = XW_Q ∈ R^(n×d_k), K = XW_K ∈ R^(n×d_k), V = XW_V ∈ R^(n×d_v)
- Scaled Dot-Product: Attention(Q,K,V) = softmax(QK^T/√d_k)V
- Multi-head: MultiHead(Q,K,V) = Concat(head_1,...,head_h)W^O, head_i = Attention(QW^Q_i, KW^K_i, VW^V_i)
- Self-Attention: Q, K, V all come from the same input; models relations within a sequence
- Cross-Attention: Q ≠ K,V; carries information between encoder and decoder
- Masked Attention: mask future tokens with -∞ (causal masking)
- Positional Encoding: PE(pos,2i) = sin(pos/10000^(2i/d)), PE(pos,2i+1) = cos(pos/10000^(2i/d))
- Learned PE: trainable position embeddings
- Relative PE: relative position information; used by T5 and BERT-style variants
- Layer Norm: LN(x) = γ⊙(x-μ)/σ + β, μ = mean(x), σ = std(x)
- Pre-LN vs Post-LN: Pre-LN trains more stably
- Feed-Forward: FFN(x) = max(0, xW_1+b_1)W_2 + b_2
- GELU FFN: FFN(x) = GELU(xW_1+b_1)W_2 + b_2
- Transformer block (Pre-LN): x → LayerNorm → MultiHeadAttention → residual → LayerNorm → FFN → residual
- Complexity: O(n²d) for attention, O(nd²) for the FFN
- Pros: parallelizable, long-range dependencies, interpretable attention maps
- Cons: O(n²) memory, no inherent position information
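
A NumPy sketch of scaled dot-product self-attention with optional causal masking, directly following softmax(QK^T/√d_k)V above (helper names are mine; a large negative number stands in for -∞).

```python
import numpy as np

def softmax(x, axis=-1):
    x = x - x.max(axis=axis, keepdims=True)
    e = np.exp(x)
    return e / e.sum(axis=axis, keepdims=True)

def scaled_dot_product_attention(Q, K, V, causal=False):
    """softmax(Q K^T / sqrt(d_k)) V, optionally with a causal (future-masking) mask."""
    d_k = Q.shape[-1]
    scores = Q @ K.swapaxes(-1, -2) / np.sqrt(d_k)            # (n, n)
    if causal:
        n = scores.shape[-1]
        mask = np.triu(np.ones((n, n), dtype=bool), k=1)      # True above the diagonal
        scores = np.where(mask, -1e9, scores)                 # stands in for -inf
    return softmax(scores, axis=-1) @ V

rng = np.random.default_rng(0)
n, d = 5, 8
X = rng.standard_normal((n, d))
out = scaled_dot_product_attention(X, X, X, causal=True)      # self-attention
print(out.shape)  # (5, 8)
```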

CNN Architectures

- Basic operation: (f*g)(i,j) = ΣΣ f(m,n)·g(i-m, j-n)
- Output size: O = (I + 2P - K)/S + 1, with I = input size, P = padding, K = kernel size, S = stride
- Padding: VALID (no padding), SAME (output size = input size), CAUSAL (no access to the future along the time axis)
- 1×1 Conv: changes the channel dimension, reduces parameters, Network-in-Network
- Depthwise Conv: independent convolution per channel, parameter-efficient
- Separable Conv: Depthwise + Pointwise, MobileNet
- Dilated (atrous) Conv: enlarges the receptive field
- Transposed Conv: upsampling ("deconvolution"); beware checkerboard artifacts
- LeNet: 28×28 → C5-S2-C5-S2-F120-F84-F10
- AlexNet: 227×227 → C96-P3-C256-P3-C384-C384-C256-P3-F4096-F4096-F1000; ReLU, Dropout, two-GPU training
- VGG: stack small 3×3 filters deeply; VGG-16/19
- GoogLeNet: Inception module; 1×1, 3×3, 5×5 and pooling in parallel, then concat
- ResNet: skip connections, H(x) = F(x) + x, solves vanishing gradients, He initialization
- DenseNet: connections to all previous layers, feature reuse
- MobileNet: depthwise separable convolutions, optimized for mobile
- EfficientNet: compound scaling of width/depth/resolution
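
A one-line helper for the output-size formula O = (I+2P-K)/S + 1, checked against a couple of the architectures listed above.

```python
def conv_output_size(i, k, s=1, p=0):
    """O = (I + 2P - K) / S + 1 for one spatial dimension (floor division, as in most frameworks)."""
    return (i + 2 * p - k) // s + 1

print(conv_output_size(227, 11, s=4, p=0))  # 55  (AlexNet first conv, 11x11 stride 4)
print(conv_output_size(32, 3, s=1, p=1))    # 32  ("same" padding for a 3x3 kernel)
print(conv_output_size(28, 5, s=1, p=0))    # 24  (LeNet-style 5x5, valid padding)
```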

Batch Normalization

- Formula: BN(x) = γ(x-μ_B)/√(σ²_B+ε) + β
- Training: μ_B = (1/m)Σx_i, σ²_B = (1/m)Σ(x_i-μ_B)²
- Test: μ_test = E[μ_B], σ²_test = E[σ²_B], tracked with running averages
- Internal covariate shift: mitigates the input-distribution drift caused by changing parameters
- Benefits: faster convergence, higher learning rates, less sensitivity to initialization, mild regularization effect
- Placement: usually after the conv and before the activation; some place it after the activation
- γ, β: learnable parameters that preserve representational power
- ε: numerical stability, 1e-5 to 1e-8
- Inference: fixed statistics, deterministic
- Variants: LayerNorm (RNNs), InstanceNorm (style transfer), GroupNorm (batch-size independent)
- Issues: depends on batch size, hard to apply to RNNs
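
A minimal NumPy forward pass for batch norm with running statistics (my own helper; per-feature statistics over the batch axis, as in a fully connected layer).

```python
import numpy as np

def batchnorm_forward(x, gamma, beta, running_mean, running_var,
                      momentum=0.1, eps=1e-5, training=True):
    """BN(x) = gamma * (x - mu) / sqrt(var + eps) + beta, with running stats for inference."""
    if training:
        mu = x.mean(axis=0)
        var = x.var(axis=0)
        running_mean[:] = (1 - momentum) * running_mean + momentum * mu
        running_var[:]  = (1 - momentum) * running_var  + momentum * var
    else:
        mu, var = running_mean, running_var
    x_hat = (x - mu) / np.sqrt(var + eps)
    return gamma * x_hat + beta

rng = np.random.default_rng(0)
x = rng.normal(5.0, 3.0, size=(64, 16))                # batch of 64, 16 features
gamma, beta = np.ones(16), np.zeros(16)
rm, rv = np.zeros(16), np.ones(16)
y = batchnorm_forward(x, gamma, beta, rm, rv, training=True)
print(y.mean().round(3), y.std().round(3))             # ~0, ~1
```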

Optimization Algorithms

- SGD: w_(t+1) = w_t - η∇w; stochastic gradient descent
- Momentum: v_(t+1) = βv_t + ∇w, w_(t+1) = w_t - ηv_(t+1), β = 0.9; inertia effect
- Nesterov: v_(t+1) = βv_t + ∇L(w_t - ηβv_t); look-ahead effect
- AdaGrad: G_(t+1) = G_t + (∇w_t)², w_(t+1) = w_t - η∇w_t/√(G_(t+1)+ε); adaptive learning rate, but the denominator only grows
- RMSprop: G_(t+1) = βG_t + (1-β)(∇w_t)², β = 0.9–0.99; fixes AdaGrad's decay
- Adam: m_t = β₁m_(t-1) + (1-β₁)∇w, v_t = β₂v_(t-1) + (1-β₂)(∇w)², m̂_t = m_t/(1-β₁^t), v̂_t = v_t/(1-β₂^t), w_(t+1) = w_t - ηm̂_t/(√v̂_t+ε); β₁ = 0.9, β₂ = 0.999, ε = 1e-8
- AdamW: Adam with properly decoupled weight decay (λ applied separately)
- AdaBound: adaptive transition from Adam to SGD
- RAdam: Adam without the need for warmup
- Lookahead: slow weights + fast weights
- LAMB: large-batch Adam with layer-wise adaptation
- Learning rates: SGD prefers larger values, Adam smaller ones
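
A NumPy sketch of one Adam update following the equations above, applied to a toy quadratic so the bias-corrected moments are easy to trace.

```python
import numpy as np

def adam_step(w, grad, m, v, t, lr=1e-3, beta1=0.9, beta2=0.999, eps=1e-8):
    """One Adam update; returns the new (w, m, v)."""
    m = beta1 * m + (1 - beta1) * grad
    v = beta2 * v + (1 - beta2) * grad**2
    m_hat = m / (1 - beta1**t)           # bias correction of the first moment
    v_hat = v / (1 - beta2**t)           # bias correction of the second moment
    w = w - lr * m_hat / (np.sqrt(v_hat) + eps)
    return w, m, v

# Minimize f(w) = ||w||^2 as a toy check
w = np.array([3.0, -2.0])
m = np.zeros_like(w); v = np.zeros_like(w)
for t in range(1, 2001):
    grad = 2 * w
    w, m, v = adam_step(w, grad, m, v, t, lr=1e-2)
print(w.round(4))  # close to [0, 0]
```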

Loss Functions

- Classification: Cross-Entropy = -Σ y_i log(p_i); Binary CE = -y log(p) - (1-y)log(1-p); Sparse CE; Focal Loss = (1-p_t)^γ·CE, with focusing parameter γ
- Regression: MSE = (1/n)Σ(y-ŷ)², MAE = (1/n)Σ|y-ŷ|; Huber: quadratic (MSE-like) for |y-ŷ| < δ, linear (MAE-like) otherwise
- Ranking: Hinge = max(0, 1-y·f(x)); Triplet = max(0, d(a,p) - d(a,n) + margin)
- Generative: KL divergence, Wasserstein, JS divergence
- Margin/metric losses: Large Margin, Center Loss
- Multi-label: Binary CE per label, Jaccard
- Class imbalance: Weighted CE, Focal Loss, Class-Balanced Loss
- With regularization: add an L1/L2 penalty to the loss
- Custom: domain-specific loss functions
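
A binary focal-loss sketch in NumPy (my own helper); setting γ = 0 recovers plain cross-entropy, which is a convenient sanity check.

```python
import numpy as np

def focal_loss(p, y, gamma=2.0, eps=1e-7):
    """Binary focal loss: (1 - p_t)^gamma * cross-entropy, averaged over samples."""
    p = np.clip(p, eps, 1 - eps)
    p_t = np.where(y == 1, p, 1 - p)          # probability assigned to the true class
    ce = -np.log(p_t)
    return np.mean((1 - p_t) ** gamma * ce)

y = np.array([1, 1, 0, 0])
p = np.array([0.9, 0.6, 0.1, 0.4])            # predicted P(y=1)
print(focal_loss(p, y, gamma=0.0))            # gamma=0: plain cross-entropy
print(focal_loss(p, y, gamma=2.0))            # easy examples are down-weighted
```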

Evaluation Metrics

- Classification: Accuracy = (TP+TN)/(TP+TN+FP+FN), Precision = TP/(TP+FP), Recall = TP/(TP+FN), F1 = 2PR/(P+R), AUC-ROC, AUC-PR
- Multi-class: macro-avg (average over classes), micro-avg (global counts), weighted-avg
- Top-k Accuracy: fraction of samples whose true label is among the top k predictions
- Regression: MSE, RMSE, MAE, MAPE, R², correlation coefficient
- Ranking: NDCG, MAP, MRR, Hit Rate
- Generative models: Perplexity = exp(-(1/N)Σ log p(w_i)), BLEU (n-gram overlap), ROUGE (recall-based), FID (image quality), IS (Inception Score)
- Retrieval: Precision@k, Recall@k, MAP, NDCG
- Recommendation: Hit Rate, NDCG, Diversity, Coverage
- Time series: MASE, sMAPE, directional accuracy
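
A small NumPy helper computing accuracy, precision, recall and F1 from raw confusion-matrix counts, matching the formulas above.

```python
import numpy as np

def binary_metrics(y_true, y_pred):
    """Accuracy, precision, recall and F1 from confusion-matrix counts."""
    y_true, y_pred = np.asarray(y_true), np.asarray(y_pred)
    tp = np.sum((y_true == 1) & (y_pred == 1))
    tn = np.sum((y_true == 0) & (y_pred == 0))
    fp = np.sum((y_true == 0) & (y_pred == 1))
    fn = np.sum((y_true == 1) & (y_pred == 0))
    acc = (tp + tn) / (tp + tn + fp + fn)
    prec = tp / (tp + fp) if tp + fp else 0.0
    rec = tp / (tp + fn) if tp + fn else 0.0
    f1 = 2 * prec * rec / (prec + rec) if prec + rec else 0.0
    return acc, prec, rec, f1

print(binary_metrics([1, 1, 0, 0, 1, 0], [1, 0, 0, 1, 1, 0]))
```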

Language Modeling

- N-gram: P(w_n | w_1...w_(n-1)) ≈ P(w_n | w_(n-k+1)...w_(n-1))
- Neural LM: h_t = RNN(h_(t-1), e(w_t)), P(w_(t+1) | w_1...w_t) = softmax(W h_t)
- Perplexity: PP = exp(-(1/N)Σ log P(w_i)); lower is better
- Character-level: no OOV, but long sequences
- Subword: BPE, SentencePiece, WordPiece; solves the OOV problem
- Teacher Forcing: ground-truth inputs during training; exposure-bias problem
- Scheduled Sampling: stochastically feed the model's own predictions
- Beam Search: width = k, length penalty, coverage penalty
- Top-k Sampling: consider only the k most likely tokens
- Nucleus (top-p) Sampling: smallest set of tokens whose cumulative probability ≥ p
- Temperature: high T → more diverse, low T → more deterministic
- BLEU: n-gram precision with a brevity penalty; machine-translation metric
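
A minimal decoding sketch combining temperature scaling and top-k filtering (NumPy; the function name and defaults are mine). Nucleus sampling would replace the top-k cutoff with a cumulative-probability cutoff.

```python
import numpy as np

def sample_next_token(logits, temperature=1.0, top_k=None, rng=None):
    """Temperature scaling, optional top-k filtering, then one categorical draw."""
    rng = rng or np.random.default_rng()
    logits = np.asarray(logits, dtype=float) / max(temperature, 1e-8)
    if top_k is not None:
        kth = np.sort(logits)[-top_k]                 # k-th largest logit
        logits = np.where(logits < kth, -np.inf, logits)
    probs = np.exp(logits - logits.max())
    probs /= probs.sum()
    return rng.choice(len(probs), p=probs)

logits = [2.0, 1.0, 0.5, -1.0, -3.0]
print(sample_next_token(logits, temperature=0.7, top_k=3, rng=np.random.default_rng(0)))
```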

Sequence Modeling

- Seq2Seq: Encoder (input → context) → Decoder (context → output)
- Attention: c_t = Σ_i α_(t,i)·h_i, α_(t,i) = softmax(e_(t,i)), e_(t,i) = a(s_(t-1), h_i)
- Global vs Local: attend over everything vs only a window
- Content vs Location: content-based vs position-based scoring
- Monotonic: strictly sequential attention, used in speech recognition
- Coverage: prevents attending to the same positions repeatedly
- Copy Mechanism: copy directly from the input, handles UNK tokens
- Pointer Network: outputs positions of the input (e.g., TSP)
- Memory Network: external memory access, used for QA
- Transformer: self-attention only, fully parallelizable
- BERT: bidirectional, MLM + NSP pre-training
- GPT: autoregressive, unsupervised pre-training
- T5: text-to-text, every task cast as generation
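
A NumPy sketch of additive (Bahdanau-style) attention matching c_t = Σ α_(t,i)·h_i above; all weight shapes and names are illustrative.

```python
import numpy as np

def additive_attention(s_prev, H, W_s, W_h, v):
    """e_{t,i} = v^T tanh(W_s s_{t-1} + W_h h_i); alpha = softmax(e); c_t = sum_i alpha_i h_i."""
    e = np.tanh(s_prev @ W_s + H @ W_h) @ v          # (n,) attention energies
    alpha = np.exp(e - e.max()); alpha /= alpha.sum()
    return alpha @ H, alpha                          # context vector, attention weights

rng = np.random.default_rng(0)
n, d_h, d_s, d_a = 6, 8, 8, 16                       # encoder steps, hidden dims, attention dim
H = rng.standard_normal((n, d_h))                    # encoder hidden states h_i
s_prev = rng.standard_normal(d_s)                    # previous decoder state s_{t-1}
W_s = rng.standard_normal((d_s, d_a))
W_h = rng.standard_normal((d_h, d_a))
v = rng.standard_normal(d_a)
c_t, alpha = additive_attention(s_prev, H, W_s, W_h, v)
print(c_t.shape, alpha.round(3))
```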

Image Captioning

- Basic structure: CNN (ResNet, VGG) → RNN (LSTM) → caption
- Show & Tell: CNN's last FC layer → LSTM initial state
- Show, Attend & Tell: CNN feature map → spatial attention
- Bottom-up Attention: object detection + attention, Faster R-CNN
- Transformer Captioning: CNN + Transformer decoder
- Self-Critical Training: REINFORCE with a baseline, optimizes CIDEr directly
- Training: teacher forcing, cross-entropy loss
- Inference: beam search, greedy decoding, sampling
- Evaluation: BLEU-1~4, METEOR, ROUGE-L, CIDEr, SPICE
- Data: MS-COCO, Flickr30k, Visual Genome
- Preprocessing: image resize/crop, caption tokenization
- Attention visualization: show the attended region for each word

Advanced Regularization

- DropBlock: structured dropout, removes contiguous spatial regions
- Stochastic Depth: skip whole layers stochastically, for ResNets
- DropPath: stochastically remove skip-connection branches
- ShakeDrop: regularization + augmentation, for ResNeXt
- AutoAugment: learn augmentation policies with reinforcement learning
- RandAugment: simple randomized augmentation
- MixUp: λx₁+(1-λ)x₂, λy₁+(1-λ)y₂
- CutMix: swap image regions and mix the labels
- AugMix: combine several augmentations
- Manifold Mixup: mixup in hidden layers
- Label Smoothing: hard labels → soft labels, prevents overconfidence
- Entropy Regularization: constrain the entropy of the predictive distribution
- Consistency Regularization: consistency between different augmentations of the same input
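
Minimal NumPy sketches of label smoothing and MixUp as defined above (helper names are mine; the Beta(α, α) draw for λ follows the usual MixUp convention, so treat this as an illustration rather than a reference implementation).

```python
import numpy as np

rng = np.random.default_rng(0)

def label_smoothing(y_onehot, eps=0.1):
    """Hard one-hot targets -> soft targets: (1-eps) on the true class, eps spread uniformly."""
    K = y_onehot.shape[-1]
    return y_onehot * (1.0 - eps) + eps / K

def mixup(x1, y1, x2, y2, alpha=0.2):
    """MixUp: convex combination of two samples and their one-hot labels."""
    lam = rng.beta(alpha, alpha)
    return lam * x1 + (1 - lam) * x2, lam * y1 + (1 - lam) * y2

y = np.eye(5)[2]                       # one-hot label for class 2
print(label_smoothing(y, eps=0.1))     # [0.02, 0.02, 0.92, 0.02, 0.02]
x_mix, y_mix = mixup(np.zeros(4), np.eye(5)[0], np.ones(4), np.eye(5)[3])
print(y_mix.round(2))
```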

ResNet & Variants

- Basic block: F(x) + x, F(x) = Conv-BN-ReLU-Conv-BN
- Bottleneck block: 1×1 (reduce dims) → 3×3 → 1×1 (restore dims)
- Pre-activation: BN-ReLU-Conv ordering, more stable
- Identity mapping: keeps the skip connection clean for optimization
- Wide ResNet: wider and shallower, width multiplier
- ResNeXt: increase cardinality, grouped convolutions
- SE-ResNet: Squeeze-and-Excitation attention
- CBAM: Convolutional Block Attention Module, channel + spatial
- ResNeSt: Split-Attention, multi-path
- DenseNet: connections to all previous layers, feature reuse
- Highway: gated skip connections
- FractalNet: fractal structure; drop-path instead of dropout
- Initialization: zero-init the last BN of each conv branch to strengthen the skip connection
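
A basic residual block as a PyTorch sketch (assuming PyTorch is available); it follows the Conv-BN-ReLU-Conv-BN branch above and zero-initializes the last BN's γ as noted in the last bullet.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class BasicBlock(nn.Module):
    """Basic residual block: out = ReLU(F(x) + x), F = Conv-BN-ReLU-Conv-BN."""
    def __init__(self, channels):
        super().__init__()
        self.conv1 = nn.Conv2d(channels, channels, kernel_size=3, padding=1, bias=False)
        self.bn1 = nn.BatchNorm2d(channels)
        self.conv2 = nn.Conv2d(channels, channels, kernel_size=3, padding=1, bias=False)
        self.bn2 = nn.BatchNorm2d(channels)
        nn.init.zeros_(self.bn2.weight)   # zero-init the last BN gamma

    def forward(self, x):
        out = F.relu(self.bn1(self.conv1(x)))
        out = self.bn2(self.conv2(out))
        return F.relu(out + x)            # identity skip connection

x = torch.randn(2, 64, 32, 32)
print(BasicBlock(64)(x).shape)            # torch.Size([2, 64, 32, 32])
```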

Transformer in Depth

- Encoder: N = 6 layers, d_model = 512, h = 8, d_ff = 2048
- Decoder: masked self-attention + encoder-decoder attention
- Multi-head: head_i = Attention(QW^Q_i, KW^K_i, VW^V_i), MultiHead = Concat(head₁...head_h)W^O
- Query, Key, Value: d_k = d_v = d_model/h = 64
- FFN: d_ff = 4×d_model; GELU activation (ReLU in the original paper)
- Dropout: on attention weights, FFN outputs, embeddings
- Warmup: lr = d_model^(-0.5)·min(step^(-0.5), step·warmup^(-1.5))
- Training: Adam with β₁ = 0.9, β₂ = 0.98, ε = 10^(-9)
- BERT variants: RoBERTa, ALBERT, DeBERTa, ELECTRA
- GPT variants: GPT-2, GPT-3, GPT-4, ChatGPT
- Efficient Transformers: Linformer, Performer, Reformer, Longformer
- Positional encodings: sinusoidal, learned, relative, rotary
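
The warmup schedule quoted above as a plain Python function; the defaults d_model = 512 and warmup = 4000 follow the encoder numbers listed here.

```python
def transformer_lr(step, d_model=512, warmup=4000):
    """lr = d_model^{-0.5} * min(step^{-0.5}, step * warmup^{-1.5})."""
    step = max(step, 1)
    return d_model ** -0.5 * min(step ** -0.5, step * warmup ** -1.5)

for s in (1, 1000, 4000, 20000, 100000):
    print(s, round(transformer_lr(s), 6))   # rises linearly to the peak at step=warmup, then decays as 1/sqrt(step)
```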

Practical Debugging

- Diverging loss: lower the learning rate (1e-5 to 1e-3), gradient clipping, check initialization, normalize the data
- Stalled loss: experiment with higher/lower learning rates, increase model capacity, reduce regularization, check data quality
- Overfitting: validation loss rises, large train-val gap → more regularization (dropout, weight decay), data augmentation, early stopping
- Underfitting: high training loss → larger model, less regularization, train longer
- Vanishing gradients: LSTM/GRU/ResNet, check initialization (Xavier/He), BatchNorm, skip connections
- Exploding gradients: gradient clipping, lower learning rate, BatchNorm
- NaN losses: lower learning rate, lower weight decay, gradient clipping, be careful with mixed precision
- Out of memory: smaller batches, gradient accumulation, model parallelism, gradient checkpointing
- Slow training: mixed precision, large batches + linear LR scaling, GPU optimization, parallel data loading
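
A PyTorch-flavored sketch (the toy model and thresholds are mine) showing where the NaN check and gradient clipping from the checklist above would sit in a training loop; training on one fixed batch also doubles as the "overfit a small sample" sanity check.

```python
import torch
import torch.nn as nn

model = nn.Linear(10, 1)
opt = torch.optim.SGD(model.parameters(), lr=1e-2)
x, y = torch.randn(32, 10), torch.randn(32, 1)          # one fixed small batch

for step in range(100):
    opt.zero_grad()
    loss = nn.functional.mse_loss(model(x), y)
    if not torch.isfinite(loss):                         # fail fast on NaN/Inf
        raise RuntimeError(f"loss diverged at step {step}: {loss.item()}")
    loss.backward()
    torch.nn.utils.clip_grad_norm_(model.parameters(), max_norm=1.0)  # gradient clipping
    opt.step()

print(round(loss.item(), 4))                             # should shrink on this toy problem
```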

Performance Optimization

- Mixed Precision: FP16 + FP32, roughly 2× speedup, automatic mixed precision
- Gradient Accumulation: large effective batch from small micro-batches, effective_batch = micro_batch × accumulation_steps
- Model Parallelism: split the model across GPUs, for very large models
- Data Parallelism: split the batch across GPUs, DistributedDataParallel
- Pipeline Parallelism: split layers across GPUs, pipelined execution
- Gradient Checkpointing: recompute intermediate activations, less memory, slower
- Knowledge Distillation: teacher → student, compression
- Pruning: remove weights, structured/unstructured
- Quantization: INT8, INT4, faster inference
- Early Exit: per-layer classifiers, adaptive computation
- Dynamic Batching: efficient batching of variable-length inputs
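
A gradient-accumulation sketch in PyTorch (toy model and numbers are mine): gradients from several micro-batches are summed, the loss is divided by the number of accumulation steps so the summed gradients average correctly, and the optimizer steps once per effective batch.

```python
import torch
import torch.nn as nn

model = nn.Linear(10, 1)
opt = torch.optim.SGD(model.parameters(), lr=1e-2)
accum_steps = 4                                     # effective_batch = micro_batch * accum_steps

opt.zero_grad()
for micro_step in range(16):
    x, y = torch.randn(8, 10), torch.randn(8, 1)    # micro-batch of 8
    loss = nn.functional.mse_loss(model(x), y)
    (loss / accum_steps).backward()                 # gradients accumulate across micro-batches
    if (micro_step + 1) % accum_steps == 0:
        opt.step()                                  # one update per 4 micro-batches (effective batch 32)
        opt.zero_grad()
print("done")
```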

Advanced Techniques

- Self-Supervised: SimCLR, MoCo, BYOL, SwAV; learning without labels
- Few-Shot: Prototypical, Matching, Relation Networks; learning from little data
- Meta-Learning: MAML, Reptile; learning to learn
- Transfer Learning: pre-train → fine-tune, feature extraction
- Domain Adaptation: closing the gap between domains, DANN, CORAL
- Multi-Task: several tasks learned at once, shared encoder
- Continual Learning: preventing catastrophic forgetting, EWC, PackNet
- Neural Architecture Search: AutoML, DARTS, ENAS
- Hyperparameter Optimization: Bayesian, Hyperband, BOHB
- Active Learning: label the most uncertain samples first
- Curriculum Learning: from easy examples to hard ones
- Adversarial Training: improve robustness with adversarial examples

Generative Models

- VAE: p(x) = ∫p(x|z)p(z)dz, ELBO = E[log p(x|z)] - KL(q(z|x)||p(z))
- GAN: min_G max_D V(D,G) = E[log D(x)] + E[log(1-D(G(z)))]
- WGAN: Wasserstein distance, weight clipping
- WGAN-GP: gradient penalty pushing ||∇_x D(x)|| toward 1
- Progressive GAN: progressively increase resolution
- StyleGAN: style-based generation, disentangled representations
- Diffusion: gradually add/remove noise, DDPM, DDIM
- Flow: invertible transformations, RealNVP, Glow
- PixelCNN: autoregressive image generation
- Transformer LM: GPT family, autoregressive text generation
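
A NumPy sketch of the negative ELBO for a Gaussian-prior VAE: squared error as a stand-in for the reconstruction term, plus the closed-form KL(N(μ,σ²)||N(0,I)); all tensors here are synthetic placeholders.

```python
import numpy as np

def gaussian_kl(mu, logvar):
    """KL( N(mu, sigma^2) || N(0, I) ) = -0.5 * sum(1 + logvar - mu^2 - exp(logvar))."""
    return -0.5 * np.sum(1.0 + logvar - mu**2 - np.exp(logvar), axis=-1)

def negative_elbo(x, x_recon, mu, logvar):
    """-ELBO = reconstruction error + KL term (squared error as a Gaussian-likelihood proxy)."""
    recon = np.sum((x - x_recon) ** 2, axis=-1)
    return np.mean(recon + gaussian_kl(mu, logvar))

rng = np.random.default_rng(0)
x = rng.standard_normal((4, 16))                   # a batch of "data"
x_recon = x + 0.1 * rng.standard_normal((4, 16))   # pretend decoder output
mu = 0.1 * rng.standard_normal((4, 2))             # pretend encoder outputs
logvar = np.zeros((4, 2))
print(round(float(negative_elbo(x, x_recon, mu, logvar)), 4))
```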

Reinforcement Learning Connections

- Policy Gradient: REINFORCE, ∇log π(a|s), high variance
- Actor-Critic: value-function baseline
- A2C/A3C: advantage actor-critic
- PPO: proximal policy optimization
- DDPG: deep deterministic policy gradient
- TD3: twin delayed DDPG
- SAC: soft actor-critic
- Rainbow DQN: combination of several DQN improvements
- AlphaZero: MCTS + neural network
- MuZero: model-based RL
- Language models + RL: RLHF, incorporating human feedback via PPO
