0x544 Autoregressive
Neural AR models factorize the generation problem into a sequence of conditional probabilities and then use a neural network to model them. The main drawback of autoregressive models is their slow, sequential generation.
Modeling each \(p(x_d | x_{<d})\) with a separate network would require \(D\) different models, which is infeasible. Instead we use a single shared model (i.e., an autoregressive model whose parameters are shared across positions).
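To make this explicit, the joint distribution factorizes by the chain rule, and one network (writing \(\theta\) for its shared parameters) is reused for every conditional:
\[
p_{\theta}(\mathbf{x}) \;=\; \prod_{d=1}^{D} p_{\theta}(x_d \mid x_{<d}), \qquad \mathbf{x} = (x_1, \dots, x_D).
\]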
To reduce the complexity, one simple idea is to use finite memory: condition each symbol on only a fixed number of preceding symbols, e.g. a trigram model \(p(x_d | x_{d-1}, x_{d-2})\) whose conditional is parameterized by an MLP.
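A minimal sketch of this idea in PyTorch (class names and dimensions are illustrative, not from the notes): embed the two previous tokens, concatenate, and let an MLP output logits for the next token.

```python
import torch
import torch.nn as nn

class TrigramMLP(nn.Module):
    """Finite-memory AR model: p(x_d | x_{d-1}, x_{d-2}) via a small MLP."""

    def __init__(self, vocab_size=128, embed_dim=32, hidden_dim=128):
        super().__init__()
        self.embed = nn.Embedding(vocab_size, embed_dim)
        self.mlp = nn.Sequential(
            nn.Linear(2 * embed_dim, hidden_dim),
            nn.ReLU(),
            nn.Linear(hidden_dim, vocab_size),  # logits over the next symbol
        )

    def forward(self, prev2, prev1):
        # prev2, prev1: (batch,) integer ids for x_{d-2} and x_{d-1}
        h = torch.cat([self.embed(prev2), self.embed(prev1)], dim=-1)
        return self.mlp(h)  # (batch, vocab_size) logits for x_d
```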
1.1. Long-Range Memory with RNN
An RNN can be used as an autoregressive model: its hidden state carries (in principle unbounded) memory of the past.
Model (char-rnn) The character-level language model is model the character sequence \(\mathbf{x}\) with RNN as follows
Karpathy's blog shows that this model can be used to generate many different sequences such as Shakespeare, Wikipedia, XML, latex and source code.
This model can also generate non-text objects such as images by representing pixel as character.
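A minimal char-rnn sketch in PyTorch (hyperparameters and the choice of a GRU cell are assumptions for illustration):

```python
import torch
import torch.nn as nn

class CharRNN(nn.Module):
    """Character-level AR model: at step d, consume x_{d-1} and predict x_d."""

    def __init__(self, vocab_size=128, embed_dim=64, hidden_dim=256):
        super().__init__()
        self.embed = nn.Embedding(vocab_size, embed_dim)
        self.rnn = nn.GRU(embed_dim, hidden_dim, batch_first=True)
        self.head = nn.Linear(hidden_dim, vocab_size)

    def forward(self, x, h=None):
        # x: (batch, seq_len) integer characters
        out, h = self.rnn(self.embed(x), h)
        return self.head(out), h  # per-position logits for the next character

    @torch.no_grad()
    def sample(self, start, steps):
        # Autoregressive generation: feed each sampled character back in.
        # start: (batch, 1) integer characters used as the prompt.
        x, h, out = start, None, [start]
        for _ in range(steps):
            logits, h = self.forward(x[:, -1:], h)
            x = torch.multinomial(torch.softmax(logits[:, -1], dim=-1), 1)
            out.append(x)
        return torch.cat(out, dim=1)
```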
1.2. Masking-based Models
Model (Masked Autoencoder for Distribution Estimation, MADE) An MLP-based autoencoder can be turned into an autoregressive model by removing (masking) some connections, so that the \(d\)-th output depends only on the inputs \(x_{<d}\).
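A sketch of the masking trick (degree assignment simplified to one hidden layer; names are illustrative): each weight matrix is multiplied elementwise by a binary mask so that output \(d\) only receives paths from inputs \(x_{<d}\).

```python
import torch
import torch.nn as nn

class MaskedLinear(nn.Linear):
    """Linear layer whose weights are elementwise-masked by a fixed binary mask."""

    def __init__(self, in_features, out_features, mask):
        super().__init__(in_features, out_features)
        self.register_buffer("mask", mask)  # (out_features, in_features)

    def forward(self, x):
        return nn.functional.linear(x, self.weight * self.mask, self.bias)

def made_masks(d_in, d_hidden):
    # Assign each hidden unit a degree m(k) in {1, ..., d_in - 1}:
    # hidden unit k may see inputs with index <= m(k),
    # and output d may only see hidden units with degree < d.
    m_in = torch.arange(1, d_in + 1)
    m_hid = torch.randint(1, d_in, (d_hidden,))
    mask_in_hid = (m_hid[:, None] >= m_in[None, :]).float()   # input -> hidden
    mask_hid_out = (m_in[:, None] > m_hid[None, :]).float()   # hidden -> output
    return mask_in_hid, mask_hid_out
```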
Model (WaveNet) WaveNet is a 1-D convolutional AR model (originally for raw audio): stacked causal, dilated convolutions make each output depend only on past samples while covering a large receptive field.
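A sketch of the causal (and optionally dilated) 1-D convolution that WaveNet builds on, in PyTorch; the gated units and skip connections of the full architecture are omitted here:

```python
import torch
import torch.nn as nn

class CausalConv1d(nn.Module):
    """1-D convolution where output at time t only sees inputs at times <= t."""

    def __init__(self, channels, kernel_size=2, dilation=1):
        super().__init__()
        self.pad = (kernel_size - 1) * dilation
        self.conv = nn.Conv1d(channels, channels, kernel_size, dilation=dilation)

    def forward(self, x):
        # x: (batch, channels, time); pad on the left only to enforce causality
        x = nn.functional.pad(x, (self.pad, 0))
        return self.conv(x)
```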
Model (PixelCNN) PixelCNN is a 2-D convolutional AR model over images. Unlike a normal CNN, which convolves over all neighboring pixels, PixelCNN masks out the pixels that have not yet been generated (e.g. under the raster-scan ordering).
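A sketch of the masked 2-D convolution under the raster-scan ordering: kernel weights at positions "after" the centre pixel (to its right on the same row, and on all rows below) are zeroed; mask type "A" additionally hides the centre pixel (used in the first layer).

```python
import torch
import torch.nn as nn

class MaskedConv2d(nn.Conv2d):
    """Conv2d whose kernel is masked so each output only sees already-generated pixels."""

    def __init__(self, mask_type, *args, **kwargs):
        super().__init__(*args, **kwargs)
        assert mask_type in ("A", "B")
        kh, kw = self.kernel_size
        mask = torch.ones(kh, kw)
        mask[kh // 2, kw // 2 + (mask_type == "B"):] = 0  # centre (type A) and right of it
        mask[kh // 2 + 1:, :] = 0                          # all rows below the centre
        self.register_buffer("mask", mask[None, None])     # broadcast over (out, in) channels

    def forward(self, x):
        return nn.functional.conv2d(
            x, self.weight * self.mask, self.bias,
            self.stride, self.padding, self.dilation, self.groups)
```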
Model (PixelCNN++) OpenAI's implementation of PixelCNN with several improvements:
- Use a mixture of logistics (e.g. 5 components) to model the discretized pixel distribution instead of a 256-way softmax, because this
  - saves memory (far fewer output parameters per sub-pixel)
  - allows dense gradients to flow, speeding up training
- The conditioning on the colour channels within a pixel is simplified (whole-pixel conditioning instead of sub-pixel conditioning)
- Short-cut connections like the U-Net
The mixture of logistics is as follows: the (continuous) pixel intensity \(\nu\) is modeled as a mixture of \(K\) logistic distributions, \(\nu \sim \sum_{i=1}^{K} \pi_i \,\mathrm{Logistic}(\mu_i, s_i)\).
The PMF of the discretized value \(x \in \{0, \dots, 255\}\) is obtained by integrating each component over the unit-width bin around \(x\):
\[
P(x \mid \boldsymbol{\pi}, \boldsymbol{\mu}, \mathbf{s})
= \sum_{i=1}^{K} \pi_i \left[ \sigma\!\left(\tfrac{x + 0.5 - \mu_i}{s_i}\right) - \sigma\!\left(\tfrac{x - 0.5 - \mu_i}{s_i}\right) \right],
\]
where \(\sigma\) is the logistic sigmoid, and at the edges \(x - 0.5\) is replaced by \(-\infty\) for \(x = 0\) and \(x + 0.5\) by \(+\infty\) for \(x = 255\).
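A sketch of this likelihood in PyTorch (single channel, no sub-pixel conditioning; function and argument names are mine, and \(\mu\), \(s\) are assumed to live on the same 0–255 scale as \(x\)):

```python
import torch
import torch.nn.functional as F

def discretized_mix_logistic_logprob(x, logit_pi, mu, log_s, num_bins=256):
    """Log-probability of integer pixel values x under a K-component
    discretized mixture of logistics. logit_pi, mu, log_s: shape (..., K)."""
    x = x.unsqueeze(-1)                      # (..., 1), broadcast over components
    inv_s = torch.exp(-log_s)
    cdf_plus = torch.sigmoid(inv_s * (x + 0.5 - mu))
    cdf_minus = torch.sigmoid(inv_s * (x - 0.5 - mu))
    # Probability mass of the bin around x, with open-ended bins at the edges.
    prob = torch.where(x <= 0, cdf_plus,
           torch.where(x >= num_bins - 1, 1.0 - cdf_minus, cdf_plus - cdf_minus))
    log_prob = torch.log(prob.clamp(min=1e-12))
    # Mixture: log-sum-exp over components, weighted by the mixture logits.
    return torch.logsumexp(F.log_softmax(logit_pi, dim=-1) + log_prob, dim=-1)
```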