
Metrics for Deep Generative Models.

Authors
Nutan Chen, Alexej Klushyn, Richard Kurle, Xueyan Jiang, Justin Bayer, Patrick van der Smagt

Neural samplers such as variational autoencoders (VAEs) or generative adversarial networks (GANs) approximate distributions by transforming samples from a simple random source---the latent space---to samples from a more complex distribution represented by a dataset. While the manifold hypothesis implies that the density induced by a dataset contains large regions of low density, the training criteria of VAEs and GANs will make the latent space densely covered. Consequently, points that are separated by low-density regions in observation space will be pushed together in latent space, making stationary distances poor proxies for similarity. We transfer ideas from Riemannian geometry to this setting, letting the distance between two points be the shortest path on a Riemannian manifold induced by the transformation. The method yields a principled distance measure, provides a tool for visual inspection of deep generative models, and offers an alternative to linear interpolation in latent space. In addition, it can be applied to robot movement generalization using previously learned skills. The method is evaluated on a synthetic dataset with known ground truth; on a simulated robot arm dataset; on human motion capture data; and on a generative model of handwritten digits.
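The core idea---measuring distance as the length of the shortest path on the manifold induced by the generator, rather than the straight-line distance in latent space---can be illustrated with a minimal sketch. The toy `decode` function below stands in for a trained VAE/GAN decoder (it is an assumption for illustration, not the authors' model), and the curve length is approximated by summing Euclidean distances between consecutive decoded points along a discretized latent path:

```python
import numpy as np

def decode(z):
    """Toy stand-in for a trained decoder: maps a 1-D latent code
    onto the unit circle in 2-D observation space."""
    return np.stack([np.cos(z), np.sin(z)], axis=-1)

def curve_length(z_path):
    """Approximate the Riemannian length of a latent curve: decode
    each point on the discretized path and sum the Euclidean
    distances between consecutive decoded points."""
    x = decode(z_path)
    return np.sum(np.linalg.norm(np.diff(x, axis=0), axis=1))

# A straight latent segment from 0 to pi, split into 100 pieces.
z = np.linspace(0.0, np.pi, 101)
length = curve_length(z)  # close to pi, the arc length on the circle
```

A geodesic distance between two latent points would then be found by minimizing this length over candidate paths connecting them (e.g. by parameterizing the path and running gradient descent), which is the optimization the paper's method addresses for deep generative models.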