The lack of a priori distinctions between learning algorithms

DH Wolpert - Neural computation, 1996 - direct.mit.edu
Abstract
This is the first of two papers that use off-training set (OTS) error to investigate the assumption-free relationship between learning algorithms. This first paper discusses the senses in which there are no a priori distinctions between learning algorithms. (The second paper discusses the senses in which there are such distinctions.) In this first paper it is shown, loosely speaking, that for any two algorithms A and B, there are “as many” targets (or priors over targets) for which A has lower expected OTS error than B as vice versa, for loss functions like zero-one loss. In particular, this is true if A is cross-validation and B is “anti-cross-validation” (choose the learning algorithm with largest cross-validation error). This paper ends with a discussion of the implications of these results for computational learning theory. It is shown that one cannot say: if empirical misclassification rate is low, the Vapnik-Chervonenkis dimension of your generalizer is small, and the training set is large, then with high probability your OTS error is small. Other implications for “membership queries” algorithms and “punting” algorithms are also discussed.
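To make the central claim concrete, here is a minimal sketch (not from the paper) of the "as many targets" argument on a tiny Boolean domain: averaged uniformly over every possible target function, two hypothetical learners that make opposite off-training-set predictions end up with identical expected OTS zero-one error. The domain, training set, and both learners are illustrative assumptions, not constructions from the paper.

```python
from itertools import product

X = list(range(4))                             # tiny input space
train_x = [0, 1]                               # fixed training inputs
ots_x = [x for x in X if x not in train_x]     # off-training-set inputs

def algo_A(train):
    # hypothetical learner: predicts the majority training label everywhere
    labels = [y for _, y in train]
    maj = int(2 * sum(labels) >= len(labels))
    return lambda x: maj

def algo_B(train):
    # hypothetical "anti" learner: predicts the opposite of the majority label
    labels = [y for _, y in train]
    maj = int(2 * sum(labels) >= len(labels))
    return lambda x: 1 - maj

def mean_ots_error(algo):
    # average zero-one OTS error uniformly over all 2^|X| Boolean targets
    targets = list(product([0, 1], repeat=len(X)))
    total = 0.0
    for f in targets:
        train = [(x, f[x]) for x in train_x]
        h = algo(train)
        total += sum(h(x) != f[x] for x in ots_x) / len(ots_x)
    return total / len(targets)

print(mean_ots_error(algo_A))   # 0.5
print(mean_ots_error(algo_B))   # 0.5
```

Whatever either learner predicts at an off-training-set point, exactly half of the targets agree and half disagree, so both averages come out equal; this is the uniform-prior special case of the paper's more general result.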