Location: Hackerman Hall B-17
Time: 10:45 am - 12:00 pm
There is an extensive literature in machine learning demonstrating an extraordinary ability to predict labels from an abundance of data, as in object and voice recognition. Multiple scientific domains are poised to go through a data revolution, in which the quantity and quality of data will increase dramatically over the next several years. One such area is neuroscience, where novel devices will collect orders of magnitude more data than current measurement technologies. In addition to posing a "big data" problem, these data are incredibly complex. Machine learning approaches can adapt to this complexity to give state-of-the-art predictions. However, for many neurological disorders we are most interested in methods that are not only good at prediction but also interpretable, so that they can be used to design causal experiments and interventions.
Toward this end, I will discuss my work using machine learning to analyze local field potentials recorded concurrently from electrodes implanted at many sites in the brain. The machine learning techniques I developed learn predictive and interpretable features that can generate data-driven hypotheses. Specifically, I first use ideas from dimensionality reduction and factor analysis to map the collected high-dimensional signals to a low-dimensional feature space. Each feature is designed as a Gaussian process with a novel kernel that captures multi-region spectral power and phase coherence, both of which have known neural correlates. In addition, these interpretable features estimate the directionality of information flow. By associating behavioral outcomes with the learned features, or brain networks, we can then generate data-driven hypotheses about how the networks should be modulated in a causal experiment. Collaborators have developed optogenetic techniques to test these hypotheses in a mouse model of depression, validating the machine learning approach. I will also discuss current efforts to incorporate additional information sources and to apply these ideas to other data types.
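The pipeline sketched above (spectral power and phase coherence features across regions, reduced to a low-dimensional space) can be illustrated with a toy example. This is only a minimal sketch of that general idea, not the speaker's actual model: it uses plain FFT-based band power and magnitude-squared coherence plus a PCA projection (a simple linear stand-in for the Gaussian-process factor model described in the abstract), and all function names, the frequency band, and the simulated data are assumptions for illustration.

```python
import numpy as np

def lfp_features(x, fs, band=(4.0, 12.0)):
    """Per-window log band power for each channel plus pairwise phase
    coherence in the given band (illustrative stand-ins for the talk's
    multi-region spectral features).

    x: array of shape (n_windows, n_channels, n_samples).
    Returns a (n_windows, n_features) matrix.
    """
    n_win, n_ch, n_samp = x.shape
    freqs = np.fft.rfftfreq(n_samp, d=1.0 / fs)
    mask = (freqs >= band[0]) & (freqs <= band[1])
    X = np.fft.rfft(x, axis=-1)                   # (n_win, n_ch, n_freq)
    power = (np.abs(X) ** 2)[..., mask].mean(-1)  # band power per channel
    feats = [np.log(power)]
    for i in range(n_ch):
        for j in range(i + 1, n_ch):
            # Magnitude-squared coherence, averaged over in-band bins;
            # Cauchy-Schwarz keeps this in [0, 1].
            sxy = (X[:, i] * np.conj(X[:, j]))[:, mask].mean(-1)
            sxx = (np.abs(X[:, i]) ** 2)[:, mask].mean(-1)
            syy = (np.abs(X[:, j]) ** 2)[:, mask].mean(-1)
            feats.append((np.abs(sxy) ** 2 / (sxx * syy))[:, None])
    return np.hstack(feats)

def low_dim(features, k=2):
    """Project standardized features onto the top-k principal components,
    a linear placeholder for the factor-analysis step."""
    z = (features - features.mean(0)) / (features.std(0) + 1e-12)
    _, _, vt = np.linalg.svd(z, full_matrices=False)
    return z @ vt[:k].T

# Simulated stand-in for windowed multi-site LFP recordings.
rng = np.random.default_rng(0)
x = rng.standard_normal((50, 3, 256))     # 50 windows, 3 sites, 1 s at 256 Hz
f = lfp_features(x, fs=256.0)             # 3 power + 3 coherence features
y = low_dim(f, k=2)                       # low-dimensional network scores
```

Each row of `y` could then be associated with a behavioral outcome, which is the spirit of the hypothesis-generation step described above; the actual work replaces the PCA step with Gaussian-process features whose kernel encodes cross-region spectral structure.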
David Carlson is currently a Postdoctoral Research Scientist at Duke University in the Department of Electrical and Computer Engineering and the Department of Psychiatry and Behavioral Sciences. From August 2015 to July 2016, he completed postdoctoral training in the Data Science Institute and the Department of Statistics at Columbia University, where he focused on neural data science. He received his Ph.D., M.S., and B.S.E. in Electrical and Computer Engineering from Duke University in 2015, 2014, and 2010, respectively. He received the Charles R. Vail Memorial Outstanding Scholarship Award in 2013 and the Charles R. Vail Memorial Outstanding Graduate Teaching Award in 2014.