General Information

Abstract

Meetings are a common way to collaborate, share information, and exchange opinions. Many available meeting transcripts, however, are lengthy and unstructured, and thus difficult to navigate; reading full transcripts to find important meeting outcomes is time-consuming for users. Consequently, automatically generated meeting summaries are of great value to individuals and businesses alike, providing quick access to the essential content of past meetings. The core objective of this research project is to automatically generate abstract-style focused meeting summaries that help users digest large amounts of meeting content with ease. The project helps the research community better understand the characteristics of the meeting domain, define the meeting summarization task more consistently, improve speech summarization evaluation metrics, and enable the wide use of speech summarization techniques in many applications, such as generating meeting minutes or lecture outlines. The broader impacts of this project include sharing insights on conversational text with social scientists, providing natural language processing research training to students, and contributing effective meeting summarization methods to the general public.

Keywords

Abstractive summarization, text generation, dialogue analysis

Funding Agency

NSF, Award Number: 1566382. Duration: September 1, 2016 - August 31, 2018 (extended to August 31, 2019).

People Involved

In addition to the PI, the following students have worked on the project:
  • Xinyu Hua
  • Lisa Fan
  • Eva Sharma
  • Luyang Huang

Publications

An Entity-Driven Framework for Abstractive Summarization. Eva Sharma, Luyang Huang, Zhe Hu, and Lu Wang. Proceedings of the Conference on Empirical Methods in Natural Language Processing (EMNLP), 2019.

Neural Keyphrase Generation via Reinforcement Learning with Adaptive Rewards. Hou Pong Chan, Wang Chen, Lu Wang, and Irwin King. Proceedings of the 57th Annual Meeting of the Association for Computational Linguistics (ACL), 2019.

BIGPATENT: A Large-Scale Dataset for Abstractive and Coherent Summarization. Eva Sharma, Chen Li, and Lu Wang. Proceedings of the 57th Annual Meeting of the Association for Computational Linguistics (ACL), short paper, 2019.

Semi-Supervised Learning for Neural Keyphrase Generation. Hai Ye and Lu Wang. Proceedings of the Conference on Empirical Methods in Natural Language Processing (EMNLP), 2018.

Robust Neural Abstractive Summarization Systems and Evaluation against Adversarial Information. Lisa Fan, Dong Yu, and Lu Wang. NeurIPS Workshop on Interpretability and Robustness in Audio, Speech, and Language (IRASL), 2018.

Joint Modeling of Content and Discourse Relations in Dialogues. Kechen Qin, Lu Wang, and Joseph Kim. Proceedings of the 55th Annual Meeting of the Association for Computational Linguistics (ACL), 2017.

A Pilot Study of Domain Adaptation Effect for Neural Abstractive Summarization. Xinyu Hua and Lu Wang. Proceedings of the EMNLP Workshop on New Frontiers in Summarization, 2017.