Computation in Nature, Biological Abstraction, and Lifelong Learning Machines

Hava Siegelmann, DARPA
Host: Yair Amir

Lifelong Learning encompasses computational methods that allow systems to learn at runtime and to apply previous learning to new, unanticipated situations. Because this sort of computation is found almost exclusively in nature, Lifelong Learning looks to nature for its underlying principles and mechanisms.

Biological-type learning has not been demonstrated in extant computational systems. My DARPA L2M program seeks to construct an operational Lifelong Learning system. We will discuss the requirements for such a system and the different computational concepts found in nature, including Super-Turing computation, stochastic and asynchronous communication, continual adaptivity, and interactive computation. While seemingly different, these varied computational attributes are in fact computationally equivalent, suggesting a shared underlying basis for biological learning and computational Lifelong Learning.

For the last decade, my BINDS lab has been studying neuroscience features and their translation to technology, including memory reconsolidation, oscillatory rhythms, and cognitively transparent interfaces. We will discuss one of our recent findings: a property of the human brain connectome that gives rise to the capacity for cognitive abstraction. We will describe our geometric method for massive data analysis, how we used it to parse tens of thousands of records of fMRI experiments, and the implications of our results.

Speaker Biography

Dr. Siegelmann is a program manager at DARPA's Microsystems Technology Office (MTO), developing programs to advance the fields of neural networks and machine learning. She is on leave from the University of Massachusetts, where she serves as director of the Biologically Inspired Neural and Dynamical Systems (BINDS) Laboratory. A Professor of Computer Science and a Core Member of the Neuroscience and Behavior Program, Siegelmann conducts cutting-edge, interdisciplinary research in neural networks, machine learning, computational studies of the brain, intelligence and cognition, big data, and industrial and biomedical applications.

Her research into neural processes has led to theoretical models and original algorithms capable of superior computation, and to more realistic, human-like intelligent systems. Siegelmann received the International Neural Network Society's 2016 Donald O. Hebb Award. Her work on an energy-constrained brain-activation paradigm, relating performance to diet, was among 16 awards under the White House-sponsored 2015 BRAIN Initiative.

Siegelmann’s Super-Turing theory introduced a major alternative computational model; it has become a sub-field of computation and a foundation of lifelong machine learning. Super-Turing computation opens new ways to interpret cognitive processes, as well as disease processes and their reversal. Her modeling of geometric neural clusters, with Vladimir Vapnik and colleagues, resulted in the widely used Support Vector Clustering algorithm, which specializes in the analysis of high-dimensional, large, complex data. Her neuroinformatics methods are used to identify overarching concepts of brain organization and function. A unifying theme of her research is the study of time- and space-dependent dynamical and complex systems. Her work is often interdisciplinary, combining complexity science, computational simulation, biological sciences, and healthcare, with a focus on modeling human intelligence more completely and on applications spanning medicine, the military, and energy. Recent contributions include advanced human-machine interfaces that extend human capabilities, dynamical studies of biological rhythms, and the study of brain structures that underlie abstract thought.

Dr. Siegelmann consults internationally with industry and in education. She remains very active in supporting young researchers and in encouraging minorities and women to enter and advance in STEM.