In a previous post, we briefly mentioned some recent approaches for Generative Modeling. Among those, RBMs and DBMs are probably the trickiest, because estimating their gradients relies on good mixing of MCMC chains, which tends to get worse over the course of training as the model distribution gets sharper. Autoregressive models like PixelRNN, WaveNet, etc. are easier to train but have no latent variables, which makes them somewhat less powerful. Therefore, the current frontier in Generative Modeling is probably GANs and Variational Autoencoders (VAEs).

While GANs are too mainstream, I thought I could write a post or two about Variational Autoencoders, at least to clear up some confusion I have about them.

Formally, generative modeling is the area of Machine Learning that deals with models of distributions $P(X)$, defined over datapoints $X$ in some high-dimensional space $\mathcal{X}$. The whole idea is to construct models of $P(X)$ that assign high probability to data points similar to those in the training set, and low probability everywhere else. For example, a generative model of images of cows should assign low probability to images of humans.

However, computing the probability of a given example is not the most exciting thing about generative models. More often, we want to use the model to generate new samples that look like those in the training set. This “creativity” is something unique to generative models, and does not exist in, for instance, discriminative models. More formally, say we have a training set sampled from an unknown distribution $P_{\text{data}}(X)$, and we want to train a model $P(X)$ which we can sample from, such that $P(X)$ is as close as possible to $P_{\text{data}}(X)$.

Needless to say, this is a difficult problem. To make it tractable, traditional approaches in Machine Learning often have to 1) make strong assumptions about the structure of the data, or 2) make severe approximations, leading to suboptimal models, or 3) rely on expensive sampling procedures like MCMC. These limitations are what make generative modeling a long-standing problem in ML research.

Without further ado, let’s get to the point. When $\mathcal{X}$ is a high-dimensional space, modeling $P(X)$ is difficult mostly because it is tricky to handle the inter-dependencies between dimensions. For instance, if the left half of an image contains the left half of a horse, then the right half likely contains the rest of that horse.

To reduce this complexity, we introduce a latent variable $z$ in a high-dimensional space $\mathcal{Z}$ which we can easily sample from, according to a distribution $P(z)$ defined over $\mathcal{Z}$. Then say we have a family of deterministic functions $f(z;\theta)$, parameterized by a vector $\theta$ in some space $\Theta$, where $f: \mathcal{Z} \times \Theta \to \mathcal{X}$. Now $f$ is deterministic, but since $z$ is a random variable, $f(z;\theta)$ is a random variable in $\mathcal{X}$.
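To make this setup concrete, here is a minimal sketch of the generative direction (all names and sizes here are hypothetical stand-ins, not part of the original post): draw $z \sim P(z)$ and map it through a deterministic $f(z;\theta)$, here a tiny untrained two-layer net.

```python
import numpy as np

rng = np.random.default_rng(0)

def f(z, theta):
    """A tiny deterministic 'decoder': one hidden layer mapping
    latent z (dim 2) to data space (dim 4); theta = (W1, W2)."""
    w1, w2 = theta
    return np.tanh(z @ w1) @ w2

# Random (untrained) parameters, just to show the shapes involved.
theta = (rng.standard_normal((2, 8)), rng.standard_normal((8, 4)))

# Sampling from the model: draw z ~ P(z) = N(0, I), then map through f.
z = rng.standard_normal((5, 2))
samples = f(z, theta)
print(samples.shape)  # (5, 4)
```

All the randomness lives in $z$; once $z$ is fixed, `f` gives the same output every time.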

During inference, we sample $z$ from $P(z)$, and then train $\theta$ such that $f(z;\theta)$ is close to samples in the training set. Mathematically, we want to maximize the following probability for every sample $X$ in the training set:

$$P(X) = \int P(X|z;\theta)\,P(z)\,dz \qquad (1)$$

This is the good old *maximum likelihood* framework, but we replace $f(z;\theta)$ with the distribution $P(X|z;\theta)$ (called the *output distribution*) to explicitly indicate that $X$ depends on $z$, and we integrate over $z$ to make $P(X)$ a proper probability distribution.

There are a few things to note here:

**In VAEs, the choice of the output distribution is often Gaussian**, i.e. $P(X|z;\theta) = \mathcal{N}\!\left(X \mid f(z;\theta),\, \sigma^2 I\right)$, meaning it is a Gaussian distribution with mean $f(z;\theta)$ and diagonal covariance matrix $\sigma^2 I$, where $\sigma$ is a scalar hyper-parameter. This particular choice has some important motivations:

- We need the output distribution to be continuous in $\theta$, so that we can train the whole model with gradient descent. This wouldn’t be possible if we used a degenerate distribution like the Dirac delta, i.e. taking exactly the output value of $f(z;\theta)$ for $X$.
- We don’t really need to train our model such that it produces *exactly* some sample $X$ in the training set. Instead, we want it to produce samples that are *merely like* $X$. At the beginning of training, there is no way for $f(z;\theta)$ to hit samples in the training set exactly. By using a Gaussian, we allow the model to gradually (and gently) learn to produce samples that are more and more like those in the training set.
- It doesn’t have to be Gaussian, though. For instance, if $X$ is binary, we can make $P(X|z;\theta)$ a Bernoulli parameterized by $f(z;\theta)$. The important property is that $P(X|z;\theta)$ can be computed, and is continuous in $\theta$.
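As a small sketch of this Gaussian choice, here is how $\log P(X|z;\theta) = \log \mathcal{N}(X \mid f(z;\theta), \sigma^2 I)$ could be evaluated, with `f_z` standing in for the (hypothetical) decoder output $f(z;\theta)$:

```python
import numpy as np

def gaussian_log_likelihood(x, f_z, sigma=1.0):
    """log N(x | f_z, sigma^2 I) for a single datapoint.

    x, f_z : 1-D arrays of the same dimension D
    sigma  : scalar hyper-parameter (output noise level)
    """
    d = x.shape[0]
    sq_err = np.sum((x - f_z) ** 2)
    return -0.5 * (d * np.log(2 * np.pi * sigma**2) + sq_err / sigma**2)

# The closer the decoder output is to x, the higher the log-likelihood,
# but any output gets non-zero density -- unlike a Dirac delta.
x = np.array([1.0, 2.0, 3.0])
print(gaussian_log_likelihood(x, np.array([1.0, 2.0, 3.0])))  # best possible
print(gaussian_log_likelihood(x, np.array([0.0, 0.0, 0.0])))  # much lower
```

Note how the squared error between $X$ and $f(z;\theta)$ appears directly in the log-density: this is why the Gaussian output distribution gives a smooth, gradient-friendly training signal.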

**The distribution of $z$ is simply the standard normal distribution**, i.e. $P(z) = \mathcal{N}(0, I)$. Why? How is this possible? Are there any limitations to this? A related question is why we don’t have several levels of latent variables, which potentially might help model complicated processes.

All those questions can be answered by the key observation that **any distribution in $d$ dimensions can be generated by taking $d$ variables drawn from the standard normal distribution and mapping them through a sufficiently complicated function.**

Let that sink in for a moment. Readers who are interested in the mathematical details can have a look at the *conditional distribution method* described in this paper. You can also convince yourself if you remember how we can sample from any Gaussian, as described in an earlier post.
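A toy illustration of this observation (my own example, not from the post): starting from 2-D standard normal samples, even a very simple deterministic map already produces a clearly non-Gaussian, ring-shaped distribution.

```python
import numpy as np

rng = np.random.default_rng(0)

# Take 2-D samples from the standard normal distribution...
z = rng.standard_normal((5000, 2))

# ...and map them through a (here still quite simple) deterministic
# function. This particular map squashes the Gaussian blob into a ring:
norms = np.linalg.norm(z, axis=1, keepdims=True)
x = z / 10 + z / norms

# The mapped samples concentrate in a thin annulus around radius ~1.1,
# a shape no Gaussian could take on its own.
radii = np.linalg.norm(x, axis=1)
print(round(radii.mean(), 2), round(radii.std(), 2))
```

A sufficiently complicated map (e.g. a deep net) can produce far more intricate distributions than this ring; this is the whole point of the observation above.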

Now, this observation means we don’t need to go to more than one level of latent variables, on the condition that we have a *sufficiently complicated function* for $f(z;\theta)$. Since deep neural nets have been shown to be powerful function approximators, it makes a lot of sense to use a deep neural net to model $f$.

Now the *only* remaining business is to maximize (1). Using the law of large numbers, we can approximate the integral by an average over a large number of samples. So the plan would be to take a very large sample $\{z_1, \dots, z_n\}$ from $P(z)$, then compute $P(X) \approx \frac{1}{n}\sum_i P(X|z_i;\theta)$. Unfortunately this plan is infeasible, because in high-dimensional spaces $n$ needs to be extremely large in order to get a good enough approximation of $P(X)$ (imagine how many samples you would need for images, which live in a 120K-dimensional space).

Now the key thing to realize is that we don’t need to sample $z$ from all over $\mathcal{Z}$. In fact, we only need the values of $z$ for which $f(z;\theta)$ is likely to be similar to samples in the training set. Moreover, for most values of $z$, $P(X|z;\theta)$ is nearly zero, and therefore contributes very little to the estimate of $P(X)$. So the question is: **is there any way to sample values of $z$ that are likely to have generated $X$, and estimate $P(X)$ only from those?**
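One classical way to make this idea precise is importance sampling: draw $z$ from a proposal distribution $Q(z)$ that concentrates on the “good” values, and reweight each sample by $P(z)/Q(z)$ so the estimate stays unbiased. A toy sketch, with a hypothetical `decoder` and a hand-picked Gaussian proposal standing in for the machinery a VAE learns:

```python
import numpy as np

rng = np.random.default_rng(0)

def decoder(z):
    # Hypothetical stand-in for a trained f(z; theta).
    return np.tanh(z)

def log_gaussian(x, mean, sigma):
    """log N(x | mean, sigma^2 I) for a single datapoint."""
    d = x.shape[0]
    return -0.5 * (d * np.log(2 * np.pi * sigma**2)
                   + np.sum((x - mean) ** 2) / sigma**2)

def importance_p_x(x, mu_q, sigma_q, n, sigma_out=0.5):
    """Estimate P(X) = E_{z~Q}[P(X|z) P(z) / Q(z)], Q = N(mu_q, sigma_q^2 I)."""
    z = mu_q + sigma_q * rng.standard_normal((n, mu_q.shape[0]))
    log_w = [log_gaussian(x, decoder(zi), sigma_out)      # log P(X|z)
             + log_gaussian(zi, np.zeros_like(zi), 1.0)   # + log P(z)
             - log_gaussian(zi, mu_q, sigma_q)            # - log Q(z)
             for zi in z]
    return np.exp(log_w).mean()

x = np.array([0.3, -0.2])
# A proposal centred where the decoder reproduces x concentrates the
# samples on the z that matter, instead of sampling z blindly:
print(importance_p_x(x, mu_q=np.arctanh(x), sigma_q=0.7, n=5_000))
```

Here the proposal is hand-picked; the clever part of a VAE is learning a good proposal for every $X$ automatically, which is where the next post picks up.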

This is the key idea behind VAEs.

That’s quite enough for an overview. Next time we will do some maths and see how we go about maximizing (1). Hopefully I can then convince you that VAEs, GANs and GSNs are really not far away from each other, at least in their core ideas.
