
Variational Autoencoders 3: Training, Inference and comparison with other models

Variational Autoencoders 1: Overview
Variational Autoencoders 2: Maths
Variational Autoencoders 3: Training, Inference and comparison with other models

Recall that the backbone of VAEs is the following equation:

\log P\left(X\right) - \mathcal{D}\left[Q\left(z\vert X\right)\vert\vert P\left(z\vert X\right)\right] = E_{z\sim Q}\left[\log P\left(X\vert z\right)\right] - \mathcal{D}\left[Q\left(z\vert X\right) \vert\vert P\left(z\right)\right]

In order to optimize the right-hand side with gradient descent, we need a tractable way to compute it:

  • The first part E_{z\sim Q}\left[\log P\left(X\vert z\right)\right] is tricky, because it requires passing multiple samples of z through f in order to get a good approximation of the expectation (and this is expensive). However, we can just take one sample of z, pass it through f and use it as an estimate of E_{z\sim Q}\left[\log P\left(X\vert z\right)\right]. We are doing stochastic gradient descent over different samples X from the training set anyway.
  • The second part \mathcal{D}\left[Q\left(z\vert X\right) \vert\vert P\left(z\right)\right] is even trickier. By design, we fix P\left(z\right) to be the standard normal distribution \mathcal{N}\left(0,I\right) (read part 1 to know why). Therefore, we need a way to parameterize Q\left(z\vert X\right) so that the KL divergence is tractable.

Here comes perhaps the most important approximation in VAEs. Since P\left(z\right) is a standard Gaussian, it is convenient to make Q\left(z\vert X\right) Gaussian as well. One popular way to parameterize Q is as a Gaussian with mean \mu\left(X\right) and diagonal covariance \sigma\left(X\right)I, i.e. Q\left(z\vert X\right) = \mathcal{N}\left(z;\mu\left(X\right), \sigma\left(X\right)I\right), where \mu\left(X\right) and \sigma\left(X\right) are two vectors computed by a neural network. This is the original formulation of VAEs in section 3 of this paper.
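
To make this concrete, here is a minimal sketch (in PyTorch; the layer sizes and names are purely illustrative, not from the paper) of an encoder that maps X to the two vectors. In practice the network usually predicts the log of the diagonal covariance rather than \sigma\left(X\right) itself, for numerical stability, and that is the convention assumed in the sketches below.

```python
import torch
import torch.nn as nn

class Encoder(nn.Module):
    """Maps an input X to mu(X) and the log of the diagonal covariance of Q(z|X)."""
    def __init__(self, x_dim=784, h_dim=400, z_dim=20):  # illustrative sizes
        super().__init__()
        self.hidden = nn.Linear(x_dim, h_dim)
        self.mu = nn.Linear(h_dim, z_dim)       # mean vector mu(X)
        self.log_var = nn.Linear(h_dim, z_dim)  # log of the diagonal of sigma(X)I

    def forward(self, x):
        h = torch.relu(self.hidden(x))
        return self.mu(h), self.log_var(h)
```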

This parameterization is preferred because the KL divergence now has a closed form:

\displaystyle \mathcal{D}\left[\mathcal{N}\left(\mu\left(X\right), \sigma\left(X\right)I\right)\vert\vert P\left(z\right)\right] = \frac{1}{2}\left[\mathrm{tr}\left(\sigma\left(X\right)I\right) +\left(\mu\left(X\right)\right)^T\left(\mu\left(X\right)\right) - k - \log \det \left(\sigma\left(X\right)I\right) \right]

where k is the dimensionality of z.

Although this looks like magic, it is quite natural if you apply the definition of KL divergence to two normal distributions. Doing so will teach you a bit of calculus.

So we have all the ingredients. We use a feedforward net to predict \mu\left(X\right) and \sigma\left(X\right) given an input sample X drawn from the training set. With those vectors, we can compute the KL divergence and \log P\left(X\vert z\right), which, in terms of optimization, translates into something similar to \Vert X - f\left(z\right)\Vert^2.
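
As a rough illustration, continuing the hypothetical PyTorch sketch above (the encoder outputs \mu\left(X\right) and the log of the diagonal covariance), the two terms of the objective can be computed like this:

```python
import torch
import torch.nn.functional as F

def vae_loss(x, x_hat, mu, log_var):
    """Negative ELBO: reconstruction term plus the closed-form KL divergence."""
    # E_{z~Q}[log P(X|z)], estimated with a single sample of z; under a Gaussian
    # decoder this reduces to the squared error ||X - f(z)||^2
    recon = F.mse_loss(x_hat, x, reduction='sum')
    # D[Q(z|X) || N(0, I)] = 0.5 * sum( sigma + mu^2 - 1 - log sigma ),
    # with sigma = exp(log_var) being the diagonal of the covariance
    kl = 0.5 * torch.sum(log_var.exp() + mu.pow(2) - 1.0 - log_var)
    return recon + kl
```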

It is worth pausing here for a moment to see what we just did. Basically we used a constrained Gaussian (with a diagonal covariance matrix) to parameterize Q. Moreover, by using \Vert X - f\left(z\right)\Vert^2 as one of the training criteria, we implicitly assume P\left(X\vert z\right) to be Gaussian as well. So although the maths that lead to VAEs is generic and beautiful, at the end of the day, to make things tractable, we end up with these rather severe approximations. Whether those approximations are good enough depends entirely on the practical application.

There is an important detail though. Once we have \mu\left(X\right) and \sigma\left(X\right) from the encoder, we need to sample z from the Gaussian distribution parameterized by those vectors. z is needed by the decoder to reconstruct \hat{X}, which will then be optimized to be as close to X as possible via gradient descent. Unfortunately, the “sample” step is not differentiable, so we need a trick called reparameterization: instead of sampling z directly from \mathcal{N}\left(\mu\left(X\right), \sigma\left(X\right)I\right), we first sample z' from \mathcal{N}\left(0, I\right) and then compute z = \mu\left(X\right) + \left(\sigma\left(X\right)I\right)^{1/2}z'. This makes the whole computation differentiable and we can apply gradient descent as usual.
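
Here is a minimal sketch of the reparameterization step, under the same (assumed) convention that the encoder outputs the log of the diagonal covariance:

```python
import torch

def reparameterize(mu, log_var):
    """Sample z ~ N(mu(X), sigma(X)I) in a differentiable way."""
    std = torch.exp(0.5 * log_var)  # element-wise square root of the covariance diagonal
    eps = torch.randn_like(std)     # z' ~ N(0, I)
    return mu + std * eps           # z = mu(X) + (sigma(X)I)^{1/2} z'
```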

The cool thing is that during inference, you won’t need the encoder to compute \mu\left(X\right) and \sigma\left(X\right) at all! Remember that during training we try to pull Q close to P\left(z\right) (which is standard normal), so during inference we can just inject \epsilon \sim \mathcal{N}\left(0, I\right) directly into the decoder and get a sample of X. This is how we leverage the power of “generation” in VAEs.
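
At inference time the sketch shrinks to sampling from the prior and running only the decoder (the decoder architecture below is, again, purely illustrative):

```python
import torch
import torch.nn as nn

# A toy decoder f(z); sizes are made up for illustration.
decoder = nn.Sequential(
    nn.Linear(20, 400), nn.ReLU(),
    nn.Linear(400, 784), nn.Sigmoid(),
)

with torch.no_grad():
    eps = torch.randn(16, 20)   # 16 samples of epsilon ~ N(0, I)
    samples = decoder(eps)      # generated X; the encoder is not used at all
```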

There are various extensions to VAEs, like Conditional VAEs and so on, but once you understand the basics, everything else is just nuts and bolts.

To sum up the series, here are the conceptual graphs of VAEs during training, compared with some other models. Of course, many details in those graphs are left out, but you should get a rough idea of how they work.

[Figure: conceptual graphs of VAEs and other generative models during training]

In the case of VAEs, I added the additional cost term in blue to highlight it. The cost term for the other models, except GANs, is the usual L2 norm \Vert X - \hat{X}\Vert^2.

GSN is an extension of the Denoising Autoencoder with explicit hidden variables; however, it requires forming a fairly complicated Markov chain. We may have another post for it.

With this diagram, hopefully you will see how lame GAN is. It is even simpler than the humble RBM. However, the simplicity of GANs is what makes them so powerful, while the complexity of VAEs makes them quite an effort just to understand. Moreover, VAEs make quite a few severe approximations, which might explain why samples generated from VAEs are far less realistic than those from GANs.

That’s quite enough for now. Next time we will switch to another topic I’ve been looking into recently.

Seq2Seq and recent advances

These are the slides I used for a talk I gave recently in our reading group. The slides, particularly the Attention part, were based on one of Quoc Le’s talks on the same topic. I couldn’t come up with any better visuals than his.

It has been quite a while since the last time I looked at this topic; unfortunately I never managed to fully appreciate its beauty back then. Seq2Seq is one of those simple-ideas-that-actually-work in Deep Learning, which opened up a whole lot of possibilities and enabled a lot of interesting work in the field.

A friend of mine did Variational Inference for his PhD, and he once said that Variational Inference is one of those mathematically-beautiful-but-don’t-work things in Machine Learning.

Indeed, there are things like Variational and Bayesian inference, Sum-Product Nets, etc. that come with beautiful mathematical frameworks but don’t really work at scale, and things like Convolutional nets, GANs, etc. that are a bit slippery in their mathematical foundations, often empirically discovered, but work really well in practice.

So even though many people might not really like the idea of GANs, for example, given this “empirical tradition” in the Deep Learning literature, they are probably here to stay.

NIPS 2016

So, NIPS 2016, the record-breaking NIPS with more than 6000 attendees, the massive recruiting event, the densest collection of great men with huge egos, whatever you call it.

I gotta write about this. Maybe several hundred people will also write something about NIPS, so I will start with something personal, before going into the usual march through papers and ideas, you know…

One of the cool things about this NIPS is that I got to listen directly to the very men who taught me so many things over the last several years. Hearing Nando de Freitas talk on stage, I could easily recall his voice; the accent when he says thee-ta (\theta ) was so familiar. Listening to Rajesh Rao talk, I couldn’t help recalling the joke with the adventurer hat he made in order to “moisturise” his Neuroscience lectures. Sorry professor, nice try, but the joke didn’t quite work.

And of course, Yoshua Bengio with his usual hard-to-impress style (although he hasn’t changed much since the last time we talked). Also Alex Graves, whose work has wowed me so many times.

One of the highlights of those days was Jurgen Schmidhuber who, with his deep, machine-generated voice, told a deep joke. The joke goes like this:

Three men were sentenced to death for inventing technology that causes mass unemployment in certain industries. They were a French guy named LeCun, a British guy named Hinton and a German guy named Schmidhuber.

Before the execution, Death asked them: “Any last words?”
– The French guy said: Je veux … (blah blah, in French, I couldn’t get it)
– The German guy: I want to give a final speech about the history of Deep Learning!
– The British guy: Please shoot me before Schmidhuber gives his goddamn speech!

As some of my friends put it: if he can make a joke about himself, he is probably still mentally healthy (pun intended).


Deep Generative models – part 2: GSN, GAN and Ladder nets

In the previous post, we talked briefly about the problem of Generative Modeling in Deep Learning. This post continues that discussion in a more formal way, and also goes over some approaches that have been “trendy” in the community recently. Note that Generative Modeling is still far from a solved problem, so there may be other approaches that are not covered here, and most of the approaches mentioned in this post are still under active research.

The ultimate problem in statistical machine learning is perhaps this: given a set of samples \left\{x_i \right\}_{i=1}^N drawn from an unknown probability distribution P\left(X\right), build a model that “mimics” this distribution P\left(X\right).

This problem is hard because we assume that we know nothing about P\left(X\right) except a finite set of its samples. Moreover, in many cases P\left(X\right) can be very complex, for instance the distribution of all RGB photos of natural landscapes, or of X-ray images of lungs with cancer, or a model that generates Shakespeare’s poetry, etc. In such cases, modeling P\left(X\right) directly can be very difficult.

The model that I think is the “simplest” in Deep Learning for tackling this problem is probably Sum-Product Networks (SPN). The main idea of SPNs is to design the network so that it is tractable by construction, so training an SPN does not have to worry about the partition function: computing the partition function of an SPN is always tractable (by construction). Although this idea is very nice, some empirical results suggest that, precisely because of this constraint, the class of functions SPNs can approximate may not be large enough to model the complex distributions encountered in practice.

Besides SPNs, another family of models that looks promising for this problem is Autoencoders, in particular the Denoising Autoencoder (DAE). A DAE is a very simple model with just one hidden layer. First we choose a corruption (noise) distribution \mathcal{C}\left(\tilde{X}\vert X\right). For each “clean” sample X from the distribution P\left(X\right), we apply the corruption model; for example, if X is an image we can add Gaussian noise to it to create a “corrupted” version \tilde{X}. We then feed \tilde{X} to the DAE and train it to remove the noise and recover the original sample X.

In the language of function approximation, a DAE is essentially trained to approximate the conditional distribution P\left(X \vert \tilde{X}\right) (called the reconstruction function, since this is the function that gives us the clean version X from the corrupted version \tilde{X}). It is believed that approximating P\left(X \vert \tilde{X}\right) is much easier than approximating P\left(X\right), because P\left(X \vert \tilde{X}\right) basically has fewer modes than P\left(X\right).
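
As a rough sketch of this procedure (in PyTorch, with made-up layer sizes and a simple Gaussian corruption model; real DAEs use various corruption processes), the training step looks like this:

```python
import torch
import torch.nn as nn

# A one-hidden-layer denoising autoencoder; sizes are illustrative.
dae = nn.Sequential(
    nn.Linear(784, 256), nn.ReLU(),
    nn.Linear(256, 784), nn.Sigmoid(),
)
optimizer = torch.optim.Adam(dae.parameters(), lr=1e-3)

def train_step(x_clean):
    # Corruption model C(x_tilde | x): add Gaussian noise to the clean sample
    x_tilde = x_clean + 0.3 * torch.randn_like(x_clean)
    x_recon = dae(x_tilde)
    # Train the network to reconstruct the clean X from the corrupted version
    loss = torch.mean((x_recon - x_clean) ** 2)
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
    return loss.item()
```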