Berkeley Bounded Rationality Workshop

Date: May 16-17, 2024

Location: Howison Library, Philosophy Hall, Berkeley

Organizers: Geoffrey Lee & Snow Zhang


Thursday 5/16

10-10:30: Breakfast (Philosophy Hall 301)

10:30-12:30: David Thorstad (Vanderbilt), “The complexity-coherence tradeoff in cognition”

12:30-2: Lunch break

2-4: Jennifer Carr (UCSD), “Epistemic ‘Can’”

4-4:15: Coffee break (Philosophy Hall 301)

4:15-6:15: Francesca Zaffora Blando (CMU), “Modestly Certain”

7: Dinner (guest speakers and invited participants only)

Friday 5/17

9:30-10: Breakfast (Philosophy Hall 301)

10-12: Michael Rescorla (UCLA), “Mental Representation in Bayesian Cognitive Science”

12-1: Lunch break

1:15-3:15: Sara Aronowitz (Toronto), “What makes for a good division of possibilities?”

3:15-3:30: Coffee break (Philosophy Hall 301)

3:30-5:30: Thomas Icard (Stanford), “Bounded Rationality and the Normative-Descriptive Interplay”


David Thorstad: The complexity-coherence tradeoff in cognition

I argue that bounded agents face a systematic complexity-coherence tradeoff in cognition. Agents must choose whether to structure their cognition in more complex ways, or in ways more likely to promote coherence. I illustrate the complexity-coherence tradeoff by examining three types of complexity: procedural complexity, informational complexity, and state complexity. In each case, I show how feasible strategies for increasing complexity along the relevant dimension often come at the expense of a heightened vulnerability to incoherence. I discuss normative and descriptive implications of the complexity-coherence tradeoff, including a novel challenge to coherence-based theories of bounded rationality, renewed support for the rationality of heuristic cognition, and a deepening of traditional challenges to dual-process theories of cognition.

Sara Aronowitz: What makes for a good division of possibilities?

Most work on rational choice has focused on how we should decide, given a set of options. But setting up the options is often decisive in whether we act well or poorly, even assuming a perfectly rational decision-making procedure. In this talk, I’ll consider three answers to the question of what makes a division of the world into coarse-grained states and actions good. Each of these proposed measures is insufficient as a full account of what makes a division of possibilities good. I end by considering a way in which the measures might be combined. Asking this question about imperfect rationality may also give us insight into how and why animals, including humans, divide up the world in the ways we do.

Thomas Icard: Bounded Rationality and the Normative-Descriptive Interplay

Distinctive of many approaches to bounded rationality is the way they mix together descriptive facts about agents with putative normative standards for those agents. Navigating this interplay can be delicate, and the goal of the talk will be to explore how it might be done most profitably.

Francesca Zaffora Blando: Modestly Certain

Bayesian convergence-to-the-truth theorems are often criticized in the literature for their “self-congratulatory” nature: convergence to the truth is guaranteed to happen with probability one, but this probability-one qualification is always relative to the learner’s subjective prior. In this talk, I will focus on an older argument due to John Earman and a more recent one due to Gordon Belot, both of which can be understood as criticizing the Bayesian framework because it allegedly forces Bayesian learners to treat all inductive problems alike: no matter how difficult an inductive problem is, a Bayesian learner will always be certain that their beliefs will converge to the truth with increasing evidence. Contra Earman and Belot, I will argue that the Bayesian framework does have tools in its arsenal that allow one to differentiate between inductive problems of different difficulties, as well as to show that Bayesian learners can display a certain type of epistemic modesty in the face of hard inductive problems. To do so, I will appeal to some recent joint results with Simon Huttegger and Sean Walsh. I will argue that, if we focus on computationally bounded Bayesian learners, the theory of algorithmic randomness can be put to use to provide a finer-grained understanding of almost sure convergence to the truth. In particular, we will see that, as the inductive problems they face become harder and harder, computationally bounded Bayesian learners believe that the sets of data streams along which convergence to the truth occurs shrink in a systematic way that tracks the difficulty of the problem at hand.

Michael Rescorla: Mental Representation in Bayesian Cognitive Science

Bayesian decision theory is a mathematical framework that models reasoning and decision-making under uncertain conditions. The Bayesian paradigm originated as a theory of how people should operate, not a theory of how they actually operate. Nevertheless, researchers increasingly use it to describe the actual workings of the human mind. Over the past few decades, cognitive science has produced impressive Bayesian models of mental activity. The models postulate that certain mental processes conform, or approximately conform, to Bayesian norms. Bayesian models have illuminated numerous mental phenomena, such as perception, motor control, and navigation. In this talk, I will argue that Bayesian models of the mind assign a central explanatory role to representational mental states. I will then argue that representationality has significant implications for how we interpret the formal apparatus employed by Bayesian models.

Jennifer Carr: Epistemic “Can”

Updated on 2024-05-17 13:02:04 -0700 by Xueyin (Snow) Zhang