**Speaker:** **Hanti Lin** (UC Davis)

**Abstract**: Learning theory is a framework for evaluating inductive methods, belief revision procedures, and learning algorithms. It places a distinctive emphasis on the pursuit of truth and convergence to the desired learning target. This tutorial will present learning theory in a way that seems to have seldom, if ever, been attempted. Specifically, the epistemological foundations will be developed and defended with a high level of philosophical rigor. Furthermore, the mathematical formalism will be developed in a general way that unifies a number of approaches to learning theory which, unfortunately, have remained largely isolated from one another. These approaches include: (i) formal learning theory in logic and philosophy of science, (ii) PAC learning theory in machine learning, (iii) a certain part of hypothesis testing and point estimation in statistics, and (iv) a certain version of Bayesian epistemology and Bayesian statistics.

**Mathematical prerequisites**: Participants are expected to be familiar with some basic mathematical tools in logic, such as sets, sequences, and quantifiers. It would be helpful, though inessential, to know some elementary probability theory, or at least to be able to say what counts as a probability distribution over a countable infinity of possible outcomes or worlds. No prior knowledge of statistics will be assumed.
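As an illustration of that last prerequisite (this example is not drawn from the tutorial itself): a probability distribution over a countably infinite set of outcomes or worlds $\Omega = \{\omega_1, \omega_2, \omega_3, \ldots\}$ is simply an assignment of nonnegative numbers that sum to one:

```latex
% A probability distribution over a countably infinite outcome space
% \Omega = \{\omega_1, \omega_2, \ldots\} is a function
%   p : \Omega \to [0, 1]
% satisfying
%   \sum_{n=1}^{\infty} p(\omega_n) = 1.
%
% A standard example is the geometric assignment:
p(\omega_n) = 2^{-n},
\qquad
\sum_{n=1}^{\infty} 2^{-n}
  = \tfrac{1}{2} + \tfrac{1}{4} + \tfrac{1}{8} + \cdots
  = 1.
```

Being able to state a definition and example at this level of detail is all that is meant here.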

**Schedule**:

**Session 1**. Pursuit of Platonic Tethering to the Truth:
Learning Theory & Its Philosophical Foundations

**Sessions 2 and 3**. Pursuit of Truth:
The Learning-Theoretic Way from Logic to Statistics and Then to Machine Learning

**Session 4**. Pursuit of Truth in the Face of Severe Underdetermination (I): A Case Study on Full Enumerative Induction

**Session 5**. Pursuit of Truth in the Face of Severe Underdetermination (II): A Case Study on Causal Discovery without Faithfulness or the Like

**CV:** Hanti Lin (林翰迪) is Assistant Professor of Philosophy at the University of California, Davis. He received his PhD at Carnegie Mellon University, and was a postdoc at the School of Philosophy at the Australian National University, before joining UC Davis in 2015. His main areas of research are philosophy of science and formal epistemology. His recent work focuses on developing a new philosophical and mathematical foundation of learning theory, with applications to justifying some hard-to-justify learning algorithms in machine learning or inductive logic.