Uniform Glivenko-Cantelli theorems and concentration of measure in the mathematical modelling of learning

Anthony, M. (2002). Uniform Glivenko-Cantelli theorems and concentration of measure in the mathematical modelling of learning. (CDAM research report series LSE-CDAM-2002-07). Centre for Discrete and Applicable Mathematics, London School of Economics and Political Science.

This paper surveys certain developments in the use of probabilistic techniques for the modelling of generalization in machine learning. Building on ‘uniform convergence’ results in probability theory, a number of approaches to the problem of quantifying generalization have been developed in recent years. Initially these models addressed binary classification, and as such were applicable, for example, to binary-output neural networks. More recently, analysis has been extended to apply to regression problems, and to classification problems in which the classification is achieved by using real-valued functions (in which the concept of a large margin has proven useful). In order to obtain more useful and realistic bounds, and to analyse model selection, another development has been the derivation of data-dependent bounds. Here, we discuss some of the main probabilistic techniques and key results, particularly the use (and derivation) of uniform Glivenko-Cantelli theorems, and the use of concentration of measure results. Many details are omitted, the aim being to give a high-level overview of the types of approaches taken and methods used.
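As a brief sketch of the central notion (the precise formulation used in the report may differ), a class $F$ of $[0,1]$-valued functions on a domain $X$ is a uniform Glivenko-Cantelli class if empirical means converge to true means uniformly over the class and over all underlying distributions $P$:

\[
\lim_{m \to \infty} \; \sup_{P} \; \Pr\!\left\{ \sup_{f \in F} \left| \frac{1}{m} \sum_{i=1}^{m} f(X_i) - \mathbb{E}_{P} f \right| > \varepsilon \right\} = 0 \quad \text{for every } \varepsilon > 0,
\]

where $X_1, \dots, X_m$ are drawn independently from $P$. Quantitative bounds on the rate of this convergence translate into generalization bounds for learning algorithms whose hypotheses (or associated loss functions) lie in $F$.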

