Generating images with recurrent adversarial networks
D. J. Im, C. D. Kim, H. Jiang and R. Memisevic, arXiv Dec 2016
This paper focuses on generating a new image whose image features and texture features match those of a given reference image within the layers of a pre-trained convolutional network. This is done using generative recurrent adversarial networks (GRAN), where the generator G contains a recurrent feedback loop: it takes a sequence of noise samples drawn from the prior distribution, produces an output at each of several time steps, and accumulates the update from each time step to yield the final sample.
At each time step, the noise sample drawn from the prior distribution is passed through f(.), which acts as a decoder. It receives as input the previous hidden state and the noise sample, and consists of one fully connected layer at the bottom followed by deconvolution layers (fractionally strided convolutions) above. The function g(.) acts as an encoder, with convolutional layers at the bottom and fully connected layers at the top. The full network is trained via backpropagation through time. Performance is evaluated using the Generative Adversarial Metric (GAM; see page 6 of the paper).
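The recurrent generation loop described above can be sketched as follows. This is a minimal stand-in, not the paper's implementation: tanh MLPs with random weights replace the learned conv/deconv stacks for f(.) and g(.), and the dimensions are arbitrary. What it shows is the structure: at each step, fresh noise plus the hidden state is decoded into a canvas update, the updates are accumulated, and the running canvas is re-encoded into the next hidden state.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy dimensions (assumptions for this sketch, not from the paper).
z_dim, h_dim, x_dim, T = 16, 32, 64, 5

# Random weights standing in for the learned decoder f and encoder g.
W_f = rng.standard_normal((z_dim + h_dim, x_dim)) * 0.1
W_g = rng.standard_normal((x_dim, h_dim)) * 0.1

def f(z, h):
    """Decoder: maps noise + hidden state to an image-space update
    (MLP stand-in for the paper's fully connected layer plus
    fractionally strided deconvolution layers)."""
    return np.tanh(np.concatenate([z, h]) @ W_f)

def g(x):
    """Encoder: maps the current canvas to a hidden state (stand-in
    for the paper's convolutional + fully connected stack)."""
    return np.tanh(x @ W_g)

# GRAN-style generation: accumulate the decoder's updates over T steps.
canvas = np.zeros(x_dim)            # running sum of per-step updates
h = np.zeros(h_dim)
for t in range(T):
    z = rng.standard_normal(z_dim)  # fresh noise sample each time step
    canvas = canvas + f(z, h)       # accumulate this step's update
    h = g(canvas)                   # re-encode canvas for the next step

x_final = np.tanh(canvas)           # final sample squashed to [-1, 1]
print(x_final.shape)                # (64,)
```

In the actual model the weights of f(.) and g(.) are trained jointly against the discriminator; only the accumulate-then-squash control flow is illustrated here.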
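The GAM comparison can be sketched as a small decision rule. Given two trained GANs M1 = (G1, D1) and M2 = (G2, D2), the discriminators are swapped: each is scored on the held-out test set and on the opponent's generated samples, and the two error ratios decide the winner. The error rates and the tolerance below are illustrative assumptions, not results from the paper.

```python
def gam_winner(e_d1_test, e_d2_test, e_d1_on_g2, e_d2_on_g1, tol=0.1):
    """Judge two trained GANs by the Generative Adversarial Metric.

    e_d1_test / e_d2_test: discriminator error rates on real test data.
    e_d1_on_g2 / e_d2_on_g1: error rates on the *opponent's* samples.
    tol is an assumed threshold for treating the test ratio as ~1.
    """
    r_test = e_d1_test / e_d2_test
    r_sample = e_d1_on_g2 / e_d2_on_g1
    # If r_test is far from 1, one discriminator over/under-fit the
    # data and the sample comparison is not meaningful.
    if abs(r_test - 1.0) > tol:
        return "invalid comparison"
    if r_sample < 1.0:
        return "M1 wins"   # D1 catches G2's fakes more reliably
    if r_sample > 1.0:
        return "M2 wins"
    return "tie"

# Illustrative (made-up) error rates -> "M1 wins"
print(gam_winner(0.48, 0.50, 0.10, 0.30))
```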