
Generalization in Machine Learning via Analytical Learning Theory

Authors
Kenji Kawaguchi, Yoshua Bengio

This paper introduces a novel measure-theoretic learning theory to analyze generalization behaviors of practical interest. The proposed learning theory has the following abilities: 1) to utilize the qualities of each learned representation on the path from raw inputs to outputs in representation learning, 2) to guarantee good generalization errors possibly with arbitrarily rich hypothesis spaces (e.g., arbitrarily large capacity and Rademacher complexity) and non-stable/non-robust learning algorithms, and 3) to clearly distinguish each individual problem instance from each other. Our generalization bounds are relative to a representation of the data, and hold true even if the representation is learned. We discuss several consequences of our results on deep learning, one-shot learning and curriculum learning. Unlike statistical learning theory, the proposed learning theory analyzes each problem instance individually via measure theory, rather than a set of problem instances via statistics. Because of the differences in the assumptions and the objectives, the proposed learning theory is meant to be complementary to previous learning theory and is not designed to compete with it.
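To make the quantity under discussion concrete, the sketch below (not from the paper) computes the usual empirical generalization gap, i.e., test error minus training error, for a toy least-squares model. This is the standard statistical notion that the abstract contrasts with; the paper's analytical theory instead bounds such errors per problem instance, relative to a learned representation. All names and numbers here are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy regression problem: noisy linear data in d dimensions.
n_train, n_test, d = 50, 500, 5
w_true = rng.normal(size=d)

X_train = rng.normal(size=(n_train, d))
y_train = X_train @ w_true + 0.1 * rng.normal(size=n_train)
X_test = rng.normal(size=(n_test, d))
y_test = X_test @ w_true + 0.1 * rng.normal(size=n_test)

# Fit by ordinary least squares.
w_hat, *_ = np.linalg.lstsq(X_train, y_train, rcond=None)

# Empirical generalization gap: held-out error minus training error.
train_err = np.mean((X_train @ w_hat - y_train) ** 2)
test_err = np.mean((X_test @ w_hat - y_test) ** 2)
gap = test_err - train_err
print(f"train={train_err:.4f} test={test_err:.4f} gap={gap:.4f}")
```

Statistical learning theory bounds this gap in expectation or with high probability over draws of the training set; the measure-theoretic approach described above is meant to reason about a single fixed instance instead.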
