Wed Apr 11, 2018
234 Moses Hall, 6–7:30 PM
Working Group in the History and Philosophy of Logic, Mathematics, and Science
Hanti Lin (UC Davis)
Modes of Convergence to the Truth: Steps toward a Better Epistemology of Induction
Those who engage in normative or evaluative studies of induction, such as formal epistemologists, statisticians, and computer scientists, have provided many positive results justifying (to a certain extent) various kinds of inductive inferences. But they have all said little about a very familiar kind of induction. I call it full enumerative induction, an instance of which is this: “We’ve seen this many ravens and they are all black, so all ravens are black”—without a stronger premise such as the IID assumption or a weaker conclusion such as “all the ravens observed in the future will be black”. I explain why those theorists of induction say so little about full enumerative induction. To remedy this, I propose that Bayesians be learning-theoretic Bayesians and learning theorists be truly learning-theoretic—in three steps. (i) Understand certain modes of convergence to the truth as epistemic ideals for an inquirer to achieve where possible. (ii) Adopt the norm that an inquirer ought to achieve the highest achievable epistemic ideal. (iii) See whether full enumerative induction can be justified as—that is, proved to be—a necessary means for achieving the highest epistemic ideal achievable for tackling the problem of whether all ravens are black. The answer is positive, thanks to a new theorem, whose Bayesian version is proved as well. The technical breakthrough consists in introducing a mode of convergence slightly weaker than Gold’s (1965) and Putnam’s (1965) identification in the limit; I call it almost everywhere convergence to the truth, where the conception of “almost everywhere” is borrowed from geometry and topology. The talk will not presuppose knowledge of topology or learning theory.