Jointly Learning Representations for Low-resource Information Extraction

Nanyun Peng, Johns Hopkins University

This thesis explores low-resource information extraction (IE), where a sufficient quantity of high-quality human annotations is unavailable for fitting statistical machine learning models. This setting is increasingly common in domains where annotations are expensive to obtain, such as biomedicine, and in rapidly changing domains, such as social media, where annotations quickly become outdated. It is therefore crucial to leverage as many learning signals and as much human knowledge as possible to mitigate the problem of inadequate supervision.

In this thesis, we explore two directions to help information extraction with limited supervision: (1) learning representations and knowledge from heterogeneous sources with deep neural networks, and transferring that knowledge to the task of interest; (2) incorporating structural knowledge into the design of the models, to learn robust representations and make holistic decisions. Specifically, for the application of named entity recognition (NER), we explore transfer learning, including multi-task learning, domain adaptation, and multi-task domain adaptation, in the context of neural representation learning, to transfer knowledge learned from related tasks and domains to the problem of interest. For the applications of entity relation extraction and of joint entity recognition and relation extraction, we explore incorporating linguistic structure and domain knowledge into the design of the models, conducting joint inference and learning to make holistic decisions, and thus yielding more robust systems with less supervision.
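The multi-task representation learning described above can be illustrated with a minimal sketch: a shared encoder produces a common representation for every task, while each task keeps its own output layer, so low-resource tasks benefit from parameters updated by better-resourced ones. All names, dimensions, and the two hypothetical NER tasks below are illustrative assumptions, not the thesis's actual architecture (which uses richer neural encoders).

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical sizes for illustration only.
EMB_DIM, HIDDEN_DIM = 8, 16
TASK_TAGS = {"news_ner": 5, "twitter_ner": 9}  # tag-set size per (assumed) task

# Shared encoder parameters: updated by examples from every task.
W_shared = rng.normal(scale=0.1, size=(EMB_DIM, HIDDEN_DIM))

# Task-specific output layers: each updated only by its own task's data.
W_task = {task: rng.normal(scale=0.1, size=(HIDDEN_DIM, n_tags))
          for task, n_tags in TASK_TAGS.items()}

def forward(task, word_vecs):
    """Encode tokens with the shared layer, then score tags with the
    task-specific head; returns per-token tag probabilities."""
    hidden = np.tanh(word_vecs @ W_shared)            # shared representation
    scores = hidden @ W_task[task]                    # task-specific scores
    exp = np.exp(scores - scores.max(axis=-1, keepdims=True))
    return exp / exp.sum(axis=-1, keepdims=True)      # softmax over tags

# A toy "sentence" of 4 tokens, each an EMB_DIM-dimensional word vector.
sentence = rng.normal(size=(4, EMB_DIM))
probs_news = forward("news_ner", sentence)
probs_twitter = forward("twitter_ner", sentence)
print(probs_news.shape, probs_twitter.shape)  # one tag distribution per token
```

Training would alternate between tasks, backpropagating each task's loss through its own head and into the shared encoder, which is how knowledge transfers to the low-resource task.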

Speaker Biography

Nanyun Peng is a PhD candidate in the Department of Computer Science, affiliated with the Center for Language and Speech Processing. She is broadly interested in Natural Language Processing, Machine Learning, and Information Extraction. Her research focuses on using deep learning and joint models for low-resource information extraction. Nanyun is the recipient of the 2016 Fred Jelinek Fellowship. She was fortunate to work with researchers at IBM T.J. Watson Research Center and Microsoft Research Redmond in the summers of 2014 and 2016, respectively. She holds a master's degree in Computer Science and BAs in Computational Linguistics and Economics, all from Peking University.