
Generalisations of the Fundamental Theorem of Statistical Learning to different tasks and losses

Theoretical Computer Science Asked by user27182 on February 27, 2021

The fundamental theorem of statistical learning establishes an equivalence between uniform convergence of the empirical risk and learnability in the PAC framework.

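For concreteness, the statement I have in mind is the following chain of equivalences (this is the standard form for binary classification with the 0-1 loss; notation as in the usual textbook treatments):

```latex
% For a hypothesis class $\mathcal{H}$ of functions $\mathcal{X} \to \{0,1\}$,
% under the 0-1 loss, the following are equivalent:
\begin{enumerate}
  \item $\mathcal{H}$ has the uniform convergence property:
        $\sup_{h \in \mathcal{H}} \lvert L_S(h) - L_{\mathcal{D}}(h) \rvert$
        is small with high probability over samples $S \sim \mathcal{D}^m$,
        uniformly over distributions $\mathcal{D}$, for $m$ large enough;
  \item every ERM rule agnostically PAC learns $\mathcal{H}$;
  \item $\mathcal{H}$ is agnostically PAC learnable;
  \item $\mathcal{H}$ is PAC learnable (realizable case);
  \item $\mathrm{VCdim}(\mathcal{H}) < \infty$.
\end{enumerate}
```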
I have only seen this stated in the case of binary classification with the 0-1 loss.
Does a result of this form hold in more general settings? For example: margin-based classification rules, regression, multi-class classification, …?

Another statement of this question could be: under what circumstances does uniform convergence of the empirical risk imply PAC learning? (I am most interested in this direction of implication.)

Please provide references if you have them.

One Answer

It turns out the answer is yes: see Part 3 (e.g. Chapter 19) of Neural Network Learning: Theoretical Foundations, by Anthony and Bartlett.
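The direction the question emphasizes (uniform convergence implies that ERM succeeds) is easy to see numerically. Below is a minimal sketch, assuming a toy data model I made up for the demo: a finite class of threshold classifiers on [0,1], labels generated by the threshold 0.3 with 10% label noise, so the true 0-1 risk of each hypothesis is known in closed form. It reports the uniform deviation sup_h |empirical risk - true risk| and the excess true risk of the empirical risk minimizer; the latter is always at most twice the former, by the standard three-term decomposition.

```python
import random

random.seed(0)

# Finite hypothesis class: threshold classifiers h_t(x) = 1[x >= t].
thresholds = [i / 20 for i in range(21)]

NOISE = 0.1  # label-noise rate (an assumption of this toy model)

def sample(n):
    """x ~ Uniform[0,1]; clean label is 1[x >= 0.3], flipped with prob NOISE."""
    data = []
    for _ in range(n):
        x = random.random()
        y = 1 if x >= 0.3 else 0
        if random.random() < NOISE:
            y = 1 - y
        data.append((x, y))
    return data

def emp_risk(t, data):
    """Empirical 0-1 risk of h_t on the sample."""
    return sum(((1 if x >= t else 0) != y) for x, y in data) / len(data)

def true_risk(t):
    # Exact 0-1 risk under this model: h_t disagrees with the clean
    # labeller on an interval of probability |t - 0.3|; add label noise.
    return NOISE + (1 - 2 * NOISE) * abs(t - 0.3)

results = []
for n in [50, 500, 5000]:
    data = sample(n)
    # Uniform deviation of empirical risk over the whole class:
    dev = max(abs(emp_risk(t, data) - true_risk(t)) for t in thresholds)
    # ERM output and its excess true risk:
    erm = min(thresholds, key=lambda t: emp_risk(t, data))
    excess = true_risk(erm) - min(true_risk(t) for t in thresholds)
    results.append((n, dev, excess))
    print(f"n={n:5d}  sup-deviation={dev:.3f}  ERM excess risk={excess:.3f}")
```

The guarantee illustrated here is deterministic given the deviation: writing h* for the true-risk minimizer and h-hat for the ERM output, L(h-hat) - L(h*) = [L(h-hat) - L_S(h-hat)] + [L_S(h-hat) - L_S(h*)] + [L_S(h*) - L(h*)], and the middle term is non-positive, so the excess risk is bounded by twice the uniform deviation. The same argument works verbatim for any bounded loss once uniform convergence holds, which is why the generalizations in Anthony and Bartlett go through.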

Answered by user27182 on February 27, 2021
