Variational Auto-Encoder

JungJae Lee · May 25, 2023

Variational Auto-Encoder (VAE)

• The AE encoder directly produces a latent vector z (a single value for each attribute).
The AE decoder then takes these values and reconstructs the original input.

The goal is to obtain a compressed latent representation of the input.
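As a minimal sketch of this deterministic pipeline (a toy linear autoencoder with made-up weights, not a trained model), the encoder maps the input to one fixed latent vector and the decoder maps it back:

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy linear AE: 4-dim input -> 2-dim latent -> 4-dim reconstruction.
# Weights are random placeholders; a real AE would learn them by
# minimizing the reconstruction error ||x - x_hat||^2.
W_enc = rng.normal(size=(2, 4))
W_dec = rng.normal(size=(4, 2))

x = rng.normal(size=4)       # input
z = W_enc @ x                # encoder: one deterministic value per latent attribute
x_hat = W_dec @ z            # decoder: reconstruction of the input from z
print(z.shape, x_hat.shape)  # (2,) (4,)
```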
• The VAE encoder produces two latent vectors, the mean µ and the standard deviation σ of a Gaussian N(µ, σ²) that approximates the posterior distribution p(z|x) of each attribute (variational inference).

Then, using latent vectors sampled from these distributions, the decoder reconstructs the input.
The goal is to generate many variations of the input (probabilistic encoder + generative decoder).

The VAE encoder would like to produce the posterior p(z|x) instead of a single point z, but because p(z|x) is intractable, we approximate it with a variational posterior q_θ(z|x), a Gaussian N(µ_x, σ²_x), so that the encoder network q_θ is trained to produce µ_x and σ_x.
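A common way to realize such an encoder is to share one hidden layer and attach two output heads, one for µ_x and one for log σ_x (the log keeps σ_x positive after exponentiation). A minimal sketch with made-up, untrained weights:

```python
import numpy as np

rng = np.random.default_rng(0)

# Shared hidden layer, then two heads: mu_x and log(sigma_x).
# Sizes (4 -> 8 -> 2) and weights are illustrative placeholders.
W_h = rng.normal(size=(8, 4))
b_h = np.zeros(8)
W_mu = rng.normal(size=(2, 8))
W_logsig = rng.normal(size=(2, 8))

def encode(x):
    h = np.tanh(W_h @ x + b_h)
    mu = W_mu @ h                 # mean of q_theta(z|x)
    log_sigma = W_logsig @ h      # predicting log sigma guarantees sigma > 0
    return mu, np.exp(log_sigma)

mu, sigma = encode(rng.normal(size=4))
print(mu.shape, sigma.shape)  # (2,) (2,)
```

Predicting log σ rather than σ directly is a standard trick: the head can output any real number while σ = exp(log σ) stays strictly positive.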

Reparametrization trick
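Sampling z ~ N(µ, σ²) directly is a stochastic node, so gradients cannot flow back to µ and σ. The reparameterization trick moves the randomness into an auxiliary noise variable ε ~ N(0, I) and computes z = µ + σ·ε deterministically from µ and σ, which makes the sampling step differentiable with respect to the encoder outputs. A small numerical check (the µ and σ values here are arbitrary examples):

```python
import numpy as np

rng = np.random.default_rng(0)

mu = np.array([0.5, -1.0])
sigma = np.array([0.1, 2.0])

# Instead of drawing z ~ N(mu, sigma^2) directly (non-differentiable
# w.r.t. mu and sigma), draw eps ~ N(0, I) and shift/scale it:
eps = rng.standard_normal(size=(100_000, 2))
z = mu + sigma * eps  # z ~ N(mu, sigma^2); gradients flow through mu, sigma

print(z.mean(axis=0))  # close to mu
print(z.std(axis=0))   # close to sigma
```

The empirical mean and standard deviation of the samples match µ and σ, confirming that the shifted-and-scaled noise has the intended distribution while µ and σ stay on the differentiable path.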
