In episode fourteen of season two, we talk about Perturb-and-MAP, we take a listener question about classic artificial intelligence ideas being used in modern machine learning, plus we talk with Jake Abernethy of the University of Michigan about municipal data and his work on the Flint water crisis.
In episode thirteen of season two, we talk about t-Distributed Stochastic Neighbor Embedding (t-SNE), we take a listener question about statistical physics, plus we talk with Hal Daume of the University of Maryland (who is a great follow on Twitter).
In episode eleven of season two, we talk about the machine learning toolkit Spark, we take a listener question about the differences between NIPS and ICML conferences, plus we talk with Sinead Williamson of The University of Texas at Austin.
In episode ten of season two, we talk about Computational Learning Theory and Probably Approximately Correct (PAC) learning, originated by Professor Leslie Valiant of SEAS at Harvard, we take a listener question about generative systems, plus we talk with Aviv Regev, Chair of the Faculty and Director of the Klarman Cell Observatory and the Cell Circuits Program at the Broad Institute.
Episode seven of season two is a little different from our usual episodes: Ryan and Katherine have just returned from a conference where they got to talk with Neil Lawrence of the University of Sheffield about some of the larger issues surrounding machine learning and society. They discuss anthropomorphic intelligence, data ownership, and the ability to empathize. The entire episode is given over to this conversation in hopes that it will spur more discussion of these important issues as the field continues to grow.
In episode six of season two, we talk about how to build software for machine learning (and what the roadblocks are), we take a listener question about how to start exploring a new dataset, plus, we talk with Rob Tibshirani of Stanford University.
In episode five of season two, Ryan walks us through variational inference, and we put some listener questions about Go and how to play it to Andy Okun, president of the American Go Association (who is in Seoul, South Korea, watching the Lee Sedol/AlphaGo games). Plus, we hear from Suchi Saria of Johns Hopkins about applying machine learning to understanding health care data.
In episode four of season two, we talk about some of the major issues in AI safety (and how they’re not really that different from the questions we ask whenever we create a new tool). One place you can go for other opinions on AI safety is the Future of Life Institute. We take a listener question about time series, and we talk with Nick Patterson of the Broad Institute about everything from ancient DNA to Alan Turing. If you're as excited about AlphaGo playing Lee Sedol as Nick is, you can get details on the match on DeepMind's YouTube channel March 5th through the 15th.
In episode three of season two, Ryan walks us through the AlphaGo results and takes a listener question about using Gaussian processes for classification. Plus, we talk with Michael Littman of Brown University about his work, robots, and making music videos.
Also not to be missed: Michael’s appearance in the recent TurboTax ad!
In episode two of season two, Ryan introduces us to Gaussian processes, and we take a listener question on K-means. Plus, we talk with Ilya Sutskever, the director of research for OpenAI. (For more from Ilya, you can listen to our season one interview with him.)
In episode one of season two, we celebrate the 10th anniversary of Women in Machine Learning (WiML) with its co-founder (and our guest host for this episode) Hanna Wallach of Microsoft Research. Hanna and Jenn Wortman Vaughan, who also helped to found the event, tell us about how the 2015 event went. Lillian Lee (Cornell), Raia Hadsell (Google DeepMind), Been Kim (AI2/University of Washington), and Corinna Cortes (Google Research) gave invited talks at the 2015 event. WiML also released a directory of women in machine learning; if you’d like to be listed, want to find a collaborator, or are looking for an expert to take part in an event, it’s an excellent resource. Plus, we talk with Jenn Wortman Vaughan about the research she is doing at Microsoft Research, which examines the assumptions we make about how humans actually act and uses that to inform our thinking about how we interact with computers.
Want to learn more about the talks at WiML 2015? Here are the slides from each speaker.
In episode twenty-four, we talk with Ben Vigoda about his work in probabilistic programming (everything from his thesis to his new company), Ryan talks about TensorFlow and Autograd for Torch, some open source tools that have recently been released, and we take a listener question about the biggest thing in machine learning this year.
This is the last episode in season one. We want to thank all our wonderful listeners for supporting the show, asking us questions, and making season two possible! We’ll be back in early January with the beginning of season two!
In episode twenty-three, we talk with David Mimno of Cornell University about his work in the digital humanities (and explore what machine learning can tell us about lady zombie ghosts and huge bodies of literature), Ryan introduces us to probabilistic programming, and we take a listener question about knowledge transfer between math and machine learning.