Location: Hackerman Hall B-17
Time: 10:30 am - 11:45 am
Characterizing human language processing as rational probabilistic inference has yielded a number of useful insights. For example, surprisal theory (Hale, Levy) represents an elegant formalization of incremental processing that has met with empirical success (and some challenges) in accounting for word-by-word reading times.

A theoretical challenge now facing the field is integrating rational analyses with bounded computational/cognitive mechanisms, and with task-oriented perception and action. A standard approach to such challenges (Marr and others) is to posit (bounded) mechanisms/algorithms that approximate functions specified at a rational analysis level. I discuss an alternative approach, computational rationality, that incorporates the bounds themselves in the definition of rational problems of utility maximization. This approach naturally admits of two kinds of analyses: the derivation of control strategies (policies or programs) for bounded agents that are optimal in local task settings, and the identification of processing mechanisms that are optimal across a broad range of tasks.

As an instance of the first kind of analysis, we consider the derivation of eye-movement strategies in a simple word reading task, given general assumptions about noisy lexical representations and oculomotor architecture. These analyses yield novel predictions of task and payoff effects on fixation durations that we have tested and confirmed in eye-tracking experiments. (The model can be seen as a kind of ideal-observer/actor model, and naturally extends to account for distractor-ratio and pop-out effects in visual search.)

As an instance of the second kind of analysis, we consider properties of an optimal short-term memory system for sentence parsing, given general assumptions about noisy representations of linguistic features. Such a system provides principled explanations of similarity-based interference slowdowns and certain speed-accuracy tradeoffs in sentence processing.
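The flavor of the first kind of analysis can be conveyed with a toy optimal-stopping sketch: a reader accumulates noisy evidence about which of two words is fixated, and moves the eyes on once the belief crosses a threshold chosen to maximize payoff. All names, distributions, and payoff values below are illustrative assumptions, not the actual model described in the talk.

```python
import random

def simulate(threshold, step_cost, noise_sd=1.0, reward=100.0,
             n_trials=4000, seed=0):
    """Monte Carlo estimate of expected payoff for a stopping threshold.

    Each fixation time step yields one noisy Gaussian sample about which
    of two candidate words is on the screen (means +/- 0.5). Belief is
    tracked as the log-odds of word 1 vs. word 0; the eyes move on once
    |log-odds| exceeds `threshold`, trading time costs against accuracy.
    """
    rng = random.Random(seed)
    total = 0.0
    for _ in range(n_trials):
        true_word = rng.choice([0, 1])
        mean = 0.5 if true_word == 1 else -0.5
        log_odds, t = 0.0, 0
        while abs(log_odds) < threshold:
            x = rng.gauss(mean, noise_sd)
            log_odds += x / noise_sd ** 2  # Gaussian log-likelihood ratio
            t += 1
        guess = 1 if log_odds > 0 else 0
        total += (reward if guess == true_word else 0.0) - step_cost * t
    return total / n_trials

thresholds = [0.5, 1.0, 1.5, 2.0, 2.5, 3.0]
# A payoff stressing accuracy (cheap time) favors a high threshold, i.e.
# long fixations; a payoff stressing speed (expensive time) favors a low
# threshold, i.e. short fixations -- a payoff effect on fixation duration.
best_slow = max(thresholds, key=lambda th: simulate(th, step_cost=0.2))
best_fast = max(thresholds, key=lambda th: simulate(th, step_cost=20.0))
```

In this sketch the qualitative prediction of payoff effects on fixation durations falls out of utility maximization alone: changing only the cost of time shifts the optimal stopping threshold, with no change to the perceptual or oculomotor assumptions.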
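The second kind of analysis can likewise be illustrated with a toy cue-based retrieval sketch in the spirit of similarity-based interference accounts: when a retrieval cue matches several items in memory, the competing items dilute the target's activation and slow retrieval. The feature sets, the activation-splitting rule, and the latency equation here are illustrative assumptions, not the memory model presented in the talk.

```python
import math

def retrieval_latency(items, cue, latency_factor=0.2):
    """Latency to retrieve the best-matching item given a feature cue.

    Items are sets of linguistic features. Each cue feature contributes
    activation that is split among all items sharing that feature, so
    distractors similar to the target soak up activation and lengthen
    retrieval (a fan-style interference effect).
    """
    activations = []
    for item in items:
        match = sum(1.0 / sum(1 for other in items if f in other)
                    for f in cue if f in item)
        activations.append(match)
    # Higher target activation -> faster retrieval
    return latency_factor * math.exp(-max(activations))

# Retrieving a singular noun with a dissimilar vs. a feature-overlapping
# distractor in memory: overlap in {noun, singular} slows retrieval.
target = {"noun", "singular", "animate"}
cue = {"noun", "singular"}
low = retrieval_latency([target, {"verb", "past"}], cue)
high = retrieval_latency([target, {"noun", "singular", "inanimate"}], cue)
```

Even this minimal setup yields the signature similarity-based interference slowdown: nothing about the target changes between the two conditions, only how much of the cue's activation its competitors capture.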
I conclude by sketching steps required for an integrated theory that jointly derives task-driven parsing and eye-movement strategies constrained by noisy memory and perception.
Richard Lewis is a cognitive scientist at the University of Michigan, where he is Professor of Psychology and Linguistics. He received his PhD in Computer Science at Carnegie Mellon with Allen Newell, followed by a McDonnell Fellowship in Psychology at Princeton and a position as Assistant Professor of Computer Science at Ohio State. His research interests include sentence processing, eye movements, short-term memory, cognitive architecture, reinforcement learning and intrinsic reward, and optimal control approaches to modeling human behavior. He was elected a Fellow of the Association for Psychological Science in 2010.