Accession Number : ADA295639
Title : Sequential Optimal Recovery: A Paradigm for Active Learning.
Descriptive Note : Memorandum rept.
Corporate Author : MASSACHUSETTS INST OF TECH CAMBRIDGE ARTIFICIAL INTELLIGENCE LAB
Personal Author(s) : Niyogi, Partha
PDF Url : ADA295639
Report Date : JAN 1995
Pagination or Media Count : 23
Abstract : In most classical frameworks for learning from examples, it is assumed that examples are randomly drawn and presented to the learner. In this paper, we consider the possibility of a more active learner who is allowed to choose his/her own examples. Our investigations are carried out in a function approximation setting. In particular, using arguments from optimal recovery, we develop an adaptive sampling strategy (equivalent to adaptive approximation) for arbitrary approximation schemes. We provide a general formulation of the problem and show how it can be regarded as sequential optimal recovery. We demonstrate the application of this general formulation to two special cases of functions on the real line: monotonically increasing functions and functions with bounded derivative. An extensive investigation of the sample complexity of approximating these functions is conducted, yielding both theoretical and empirical results on test functions. Our theoretical results (stated in PAC-style), along with the simulations, demonstrate the superiority of our active scheme over both passive learning and classical optimal recovery. The analysis of active function approximation is conducted in a worst-case setting, in contrast with other Bayesian paradigms obtained from optimal design. (AN)
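For the monotone-function case the abstract mentions, the flavor of such an adaptive sampling strategy can be illustrated with a minimal sketch (this is an illustrative reconstruction, not the report's exact algorithm): for a monotone increasing function, the worst-case uncertainty over a bracketing interval scales with the interval's width times the function's rise across it, so the active learner repeatedly queries the midpoint of the interval where that product is largest. The function name `active_sample_monotone` and its parameters are our own illustrative choices.

```python
import bisect

def active_sample_monotone(f, a, b, n):
    """Adaptively place n samples of a monotone increasing f on [a, b].

    At each step, query the midpoint of the interval whose worst-case
    uncertainty -- proportional to (width) * (rise) for a monotone
    function -- is largest. Samples therefore concentrate where f
    changes fastest, unlike passive (uniform or random) sampling.
    """
    xs = [a, b]
    ys = [f(a), f(b)]
    for _ in range(n - 2):
        # Worst-case uncertainty score for each bracketing interval.
        scores = [(xs[i + 1] - xs[i]) * (ys[i + 1] - ys[i])
                  for i in range(len(xs) - 1)]
        i = max(range(len(scores)), key=scores.__getitem__)
        # Query the midpoint of the most uncertain interval.
        x_new = 0.5 * (xs[i] + xs[i + 1])
        j = bisect.bisect(xs, x_new)
        xs.insert(j, x_new)
        ys.insert(j, f(x_new))
    return xs, ys
```

Running this on a function such as x**3 over [0, 1] shows the qualitative behavior the abstract claims for the active scheme: sample points cluster near 1, where the function rises steeply, rather than being spread uniformly.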
Descriptors : *OPTIMIZATION, *RULE BASED SYSTEMS, *LEARNING, MATHEMATICAL MODELS, ALGORITHMS, RECOVERY, STRATEGY, DATA MANAGEMENT, PROBABILITY DISTRIBUTION FUNCTIONS, PASSIVE SYSTEMS, APPROXIMATION(MATHEMATICS), NUMERICAL INTEGRATION, SAMPLING, ADAPTIVE SYSTEMS, BAYES THEOREM, MONOTONE FUNCTIONS.
Subject Categories : Cybernetics
Distribution Statement : APPROVED FOR PUBLIC RELEASE