September 23, 2014 - Roger Levy

Probabilistic Models of Human Language Comprehension, Production, and Acquisition

Location: Hackerman Hall B-17
Time: 10:30 am - 11:55 am


Human language acquisition and use are central problems for the advancement of machine intelligence, and pose some of the deepest scientific challenges in accounting for the capabilities of the human mind. In this talk I describe several major advances we have recently made in this domain, made possible by combining leading ideas and techniques from computer science and cognitive science. Central to these advances is the use of generative probabilistic models over richly structured linguistic representations. In language comprehension, I describe how we have used these models to develop detailed theories of incremental parsing that unify the central problems of ambiguity resolution, prediction, and syntactic complexity, and that yield compelling quantitative fits to behavioral data from both controlled psycholinguistic experiments and reading of naturalistic text. I also describe noisy-channel models relating the accrual of uncertain perceptual input to sentence-level language comprehension, which account for critical puzzles outstanding for previous theories and which, when combined with reinforcement learning, yield state-of-the-art models of human eye-movement control in reading. This work on comprehension sets the stage for a theory of language production in which speakers tend toward an optimal distribution of information content across their utterances, a prediction we confirm in statistical analyses of several types of optional function-word omission. Finally, I conclude with examples of how we use nonparametric models to account for some of the most challenging problems in language acquisition, including how humans learn phonetic category inventories and acquire and rank phonological constraints.


Roger Levy is Associate Professor of Linguistics at the University of California, San Diego, where he directs the world's first Computational Psycholinguistics Laboratory. He received his B.S. from the University of Arizona and his M.S. and Ph.D. from Stanford University. He was a UK ESRC Postdoctoral Fellow at the University of Edinburgh before his current appointment. His awards include an NSF CAREER grant, an Alfred P. Sloan Research Fellowship, and a Fellowship at the Center for Advanced Study in the Behavioral Sciences. Levy's research program is devoted to theoretical and applied questions at the intersection of cognition and computation, focusing on human language processing and acquisition. Linguistic communication inherently involves the resolution of uncertainty over a potentially unbounded set of possible signals and meanings. How can a fixed set of knowledge and resources be acquired and deployed to manage this uncertainty? To address these questions, Levy uses a combination of computational modeling and psycholinguistic experimentation. This work furthers our foundational understanding of linguistic cognition, and helps lay the groundwork for future generations of intelligent machines that can communicate with humans via natural language.


Video of the seminar.