In episode thirteen of season two, we talk about t-Distributed Stochastic Neighbor Embedding (t-SNE), we take a listener question about statistical physics, plus we talk with Hal Daume of the University of Maryland (who is a great follow on Twitter).
In episode eleven of season two, we talk about the machine learning toolkit Spark, we take a listener question about the differences between NIPS and ICML conferences, plus we talk with Sinead Williamson of The University of Texas at Austin.
In episode ten of season two, we talk about Computational Learning Theory and Probably Approximately Correct Learning originated by Professor Leslie Valiant of SEAS at Harvard, we take a listener question about generative systems, plus we talk with Aviv Regev, Chair of the Faculty and Director of the Klarman Cell Observatory and the Cell Circuits Program at the Broad Institute.
Episode seven of season two is a little different from our usual episodes: Ryan and Katherine just returned from a conference where they got to talk with Neil Lawrence of the University of Sheffield about some of the larger issues surrounding machine learning and society. They discuss anthropomorphic intelligence, data ownership, and the ability to empathize. The entire episode is given over to this conversation in hopes that it will spur more discussion of these important issues as the field continues to grow.
In episode six of season two, we talk about how to build software for machine learning (and what the roadblocks are), we take a listener question about how to start exploring a new dataset, plus, we talk with Rob Tibshirani of Stanford University.
In episode five of season two, Ryan walks us through variational inference, and we put some listener questions about Go and how to play it to Andy Okun, president of the American Go Association (who is in Seoul, South Korea, watching the Lee Sedol/AlphaGo games). Plus, we hear from Suchi Saria of Johns Hopkins about applying machine learning to understanding health care data.
In episode four of season two, we talk about some of the major issues in AI safety (and how they’re not really that different from the questions we ask whenever we create a new tool). One place you can go for other opinions on AI safety is the Future of Life Institute. We take a listener question about time series, and we talk with Nick Patterson of the Broad Institute about everything from ancient DNA to Alan Turing. If you're as excited about AlphaGo playing Lee Sedol as Nick is, you can get details on the match on DeepMind's YouTube channel March 5th through the 15th.
In episode three of season two, Ryan walks us through the AlphaGo results and takes a listener question about using Gaussian processes for classification. Plus we talk with Michael Littman of Brown University about his work, robots, and making music videos.
Also not to be missed: Michael’s appearance in the recent TurboTax ad!
In episode two of season two, Ryan introduces us to Gaussian processes, and we take a listener question on K-means. Plus, we talk with Ilya Sutskever, the director of research for OpenAI. (For more from Ilya, you can listen to our season one interview with him.)
In episode one of season two, we celebrate the 10th anniversary of Women in Machine Learning (WiML) with its co-founder (and our guest host for this episode) Hanna Wallach of Microsoft Research. Hanna and Jenn Wortman Vaughan, who also helped to found the event, tell us about how the 2015 event went. Lillian Lee (Cornell), Raia Hadsell (Google DeepMind), Been Kim (AI2/University of Washington), and Corinna Cortes (Google Research) gave invited talks at the 2015 event. WiML also released a directory of women in machine learning; if you’d like to be listed, want to find a collaborator, or are looking for an expert to take part in an event, it’s an excellent resource. Plus, we talk with Jenn Wortman Vaughan about the research she is doing at Microsoft Research, which examines the assumptions we make about how humans actually act and uses that to inform our thinking about interactions with computers.
Want to learn more about the talks at WiML 2015? Here are the slides from each speaker.
In episode twenty four we talk with Ben Vigoda about his work in probabilistic programming (everything from his thesis to his new company). Ryan talks about TensorFlow and Autograd for Torch, some open source tools that have recently been released. Plus we take a listener question about the biggest thing in machine learning this year.
This is the last episode in season one. We want to thank all our wonderful listeners for supporting the show, asking us questions, and making season two possible! We’ll be back in early January with the beginning of season two!
In episode 23 we talk with David Mimno of Cornell University about his work in the digital humanities (and explore what machine learning can tell us about lady zombie ghosts and huge bodies of literature). Ryan introduces us to probabilistic programming, and we take a listener question about knowledge transfer between math and machine learning.
In episode twenty two we talk with Adam Kalai of Microsoft Research New England about his work using crowdsourcing in Machine Learning, the language made of shapes of words, and New England Machine Learning Day. We take a look at the workshops being presented at NIPS this year, and we take a listener question about changing the number of features your data has.
In episode twenty one we talk with Quaid Morris of the University of Toronto, who is using machine learning to find a better way to treat cancers. Ryan introduces us to expectation maximization and we take a listener question about how to master machine learning.
In episode 20 we chat with Pedro Domingos of the University of Washington, who has just published a book, The Master Algorithm: How the Quest for the Ultimate Learning Machine Will Remake Our World. We get some insight into Linear Dynamical Systems, which the Datta Lab at Harvard Medical School is doing some interesting work with. Plus, we take a listener question about using video games to generate labeled data (spoiler alert: it's an awesome idea!)
I hope you've been enjoying our show so far. Ryan and I have been having a fantastic time making it for you. But we've reached a point where if we're going to keep going, we need your help.
We take pride that our podcast is professional quality, but reaching that level of quality takes a lot of time, effort, and resources. We have to pay for studio time, audio engineering, and production time.
But our greatest expense is probably the thing that you enjoy most about our show: our interviews. We're able to get interviews with the top experts in academia and industry because we're willing to go where they are. It may seem like a small thing, but it really makes a big difference. Unfortunately it's also an expensive difference: travel is not usually an expense that podcasts incur, but we've found it's essential for making ours.
We've got a few days left in our Kickstarter and we've raised a little more than half of the funds we need. We need your help now more than ever, so please lend a hand and let's continue Talking Machines!
In episode nineteen we chat with Hugo Larochelle about his work on unsupervised learning, the International Conference on Learning Representations (ICLR), and his teaching style. His YouTube courses are not to be missed, and his Twitter feed @Hugo_Larochelle is a great source for paper reviews. Ryan introduces us to autoencoders (for more, turn to the work of Richard Zemel), plus we tackle the question of what is standing in the way of strong AI.