Gaussian Processes, Grad School, and Richard Zemel

arXiv Whitepapers

Learning the Base Distribution in Implicit Generative Models. Popular generative model learning methods such as Generative Adversarial Networks (GANs) and Variational Autoencoders (VAEs) constrain the latent representation to follow a simple distribution such as an isotropic Gaussian. In this paper, we argue that learning a complicated distribution over the latent...

A Game-Theoretic Approach to Design Secure and Resilient Distributed Support Vector Machines. Distributed Support Vector Machines (DSVM) have been developed to solve large-scale classification problems in networked systems with many sensors and control units. However, these systems become more vulnerable as detection and defense grow increasingly difficult and expensive. This work...

Random Features for Large-Scale Kernel Machines. To accelerate the training of kernel machines, we propose to map the input data to a randomized low-dimensional feature space and then apply existing fast linear methods. The features are designed so that the inner products of the transformed data are approximately equal to those in the feature...

News Articles

Shrinking data for surgical training
MIT Intelligence Quest kicks off
ML 2.0: Machine learning for many
Eric Schmidt provides support to MIT Intelligence Quest
Inventing the "Google" for predictive analytics
Artificial intelligence aids materials fabrication
Faster big-data analysis
Bug-repair system learns from example
New leadership for MIT-IBM Watson AI Lab
Artificial intelligence suggests recipes based on food photos
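The random-features construction summarized in the last abstract can be sketched in a few lines of NumPy: draw random Fourier features whose pairwise inner products approximate a shift-invariant kernel (here an RBF/Gaussian kernel, the standard example). This is an illustrative sketch, not the paper's reference implementation; the function name `rff_map` and its parameters are our own choices.

```python
import numpy as np

def rff_map(X, D=500, sigma=1.0, seed=None):
    """Map X (n, d) to a D-dimensional random Fourier feature space whose
    inner products approximate the RBF kernel k(x, y) = exp(-||x-y||^2 / (2 sigma^2)).
    Illustrative sketch of the random-features idea; names are ours."""
    rng = np.random.default_rng(seed)
    n, d = X.shape
    # Frequencies sampled from the kernel's spectral density N(0, I / sigma^2)
    W = rng.normal(0.0, 1.0 / sigma, size=(d, D))
    # Uniform random phase offsets
    b = rng.uniform(0.0, 2.0 * np.pi, size=D)
    return np.sqrt(2.0 / D) * np.cos(X @ W + b)

rng = np.random.default_rng(0)
X = rng.normal(size=(100, 5))
Z = rff_map(X, D=2000, sigma=1.0, seed=1)

# Compare against the exact RBF Gram matrix (sigma = 1)
sq_dists = ((X[:, None, :] - X[None, :, :]) ** 2).sum(axis=-1)
K_exact = np.exp(-sq_dists / 2.0)
K_approx = Z @ Z.T
max_err = np.abs(K_exact - K_approx).max()  # shrinks as D grows
```

Once the data is mapped through `rff_map`, any fast linear method (e.g. linear SVM or ridge regression on `Z`) stands in for the corresponding kernel machine, which is the acceleration the abstract describes.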