  • Text Summarization


    Since my PhD dissertation at Cornell, I have been working on automatic text summarization, with the goal of improving information accessibility through content compression and distillation (Wang et al., ACL 2013; Wang and Cardie, ACL 2013; Wang et al., NAACL 2015). Creative solutions are needed to tackle three key problems in existing models: (1) lack of effective knowledge grounding, (2) inability to handle long documents, and (3) “hallucination” in the generated summaries.

      • To solve the first problem, our work enhances the document encoder with a knowledge graph encoder that connects relevant entities and events and maintains global context, such as topic flows (Huang et al., ACL 2020). We further design a question-answering-based reward, optimized with reinforcement learning, to drive the model to better capture entity-related knowledge. This work is the first to employ graph neural networks to explicitly summarize and encode entity-centered knowledge for abstractive summarization (a sketch of such a graph-augmented encoder follows this list).

      • To address the second challenge of handling long documents, we propose an efficient encoder-decoder attention mechanism and conduct the first systematic study of efficient Transformers for long document summarization (Huang et al., NAACL 2021). In our model, the encoder-decoder attention heads follow a strided pattern with varying starting positions, preserving the ability to emphasize important tokens while reducing computational and memory costs (a sketch of such a strided mask follows this list). Our model can process documents ten times longer than previous models can handle and produces more informative summaries. Our recent study further formulates long document summarization as a hierarchical question-summary generation process to support varying information needs (Cao and Wang, ACL 2022).

      • Finally, to resolve the model “hallucination” problem, we design a contrastive learning formulation that teaches a summarizer to expand the margin between factual summaries (i.e., positive samples) and their incorrect peers (i.e., negative samples), improving summary faithfulness and factuality (Cao and Wang, EMNLP 2021). Prior methods for reducing errors in summaries largely rely on three types of remedies: running a separate error correction model, removing noisy training samples, and building new architectures on top of the Transformer. Our system is trained end to end without modifying the model architecture. We rely on four newly designed strategies to construct negative samples, editing reference summaries by rewriting entity- and relation-anchored text and using system-generated summaries that may contain unfaithful errors. Our model improves summary quality, especially on more abstractive outputs (a sketch of such a contrastive loss follows this list).

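    To illustrate the idea of pairing a document encoder with a knowledge graph encoder, the following is a minimal, hypothetical PyTorch sketch: entity nodes exchange information through one round of graph attention, and tokens then attend to the resulting node states. The module names, dimensions, and fusion scheme are assumptions for illustration only, not the architecture of Huang et al. (ACL 2020).

      import torch
      import torch.nn as nn

      class GraphAugmentedEncoder(nn.Module):
          """Toy sketch: fuse token states with entity-node states from a knowledge graph."""
          def __init__(self, d_model=768, n_heads=8):
              super().__init__()
              self.node_proj = nn.Linear(d_model, d_model)  # project entity-node features
              self.graph_attn = nn.MultiheadAttention(d_model, n_heads, batch_first=True)
              self.fuse_attn = nn.MultiheadAttention(d_model, n_heads, batch_first=True)
              self.norm = nn.LayerNorm(d_model)

          def forward(self, token_states, node_feats, adj):
              # Graph attention: each entity node attends only to its neighbors (True entries in adj).
              nodes = self.node_proj(node_feats)
              nodes, _ = self.graph_attn(nodes, nodes, nodes, attn_mask=~adj)
              # Cross-attention: every token attends to the entity nodes, injecting graph context.
              fused, _ = self.fuse_attn(token_states, nodes, nodes)
              return self.norm(token_states + fused)

      # Usage with random tensors: one document of 512 tokens and 20 entity nodes.
      enc = GraphAugmentedEncoder()
      out = enc(torch.randn(1, 512, 768), torch.randn(1, 20, 768),
                torch.ones(20, 20, dtype=torch.bool))  # fully connected toy graph -> (1, 512, 768)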
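
    The strided head pattern can be pictured with a small mask-construction sketch: each encoder-decoder head attends to every k-th source token, with a different starting offset per head, so that the heads jointly cover the whole document while each one touches only a fraction of it. The helper below is an illustrative assumption about how such a mask could be built, not the released implementation of Huang et al. (NAACL 2021).

      import torch

      def strided_head_masks(src_len: int, n_heads: int, stride: int) -> torch.Tensor:
          """Return a (n_heads, src_len) boolean mask; True = source position visible to that head."""
          masks = torch.zeros(n_heads, src_len, dtype=torch.bool)
          for h in range(n_heads):
              start = h % stride              # varying starting position per head
              masks[h, start::stride] = True  # strided pattern of visible source tokens
          return masks

      m = strided_head_masks(src_len=12, n_heads=4, stride=4)
      print(m.int())
      # head 0 sees positions 0, 4, 8; head 1 sees 1, 5, 9; head 2 sees 2, 6, 10; head 3 sees 3, 7, 11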
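
    The contrastive objective itself can be written compactly. Below is a hedged sketch of a margin-style ranking loss that pushes a (document, summary) scoring model to score the faithful reference above its edited, unfaithful negatives; the scoring function, margin, and exact loss form are placeholders rather than the precise formulation of Cao and Wang (EMNLP 2021).

      import torch
      import torch.nn.functional as F

      def faithfulness_contrastive_loss(pos_score: torch.Tensor,
                                        neg_scores: torch.Tensor,
                                        margin: float = 1.0) -> torch.Tensor:
          """Hinge loss on the gap between the faithful summary and every negative.

          pos_score:  (batch,)        score of the reference (faithful) summary
          neg_scores: (batch, n_neg)  scores of perturbed / unfaithful summaries
          """
          gaps = margin - (pos_score.unsqueeze(1) - neg_scores)
          return F.relu(gaps).mean()

      # Toy check: positives clearly above negatives -> near-zero loss.
      pos = torch.tensor([2.0, 1.5])
      neg = torch.tensor([[0.1, -0.3], [0.0, 0.2]])
      print(faithfulness_contrastive_loss(pos, neg))
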
    Our methods have been applied to a variety of domains, including news articles (Sharma et al., EMNLP 2019), patents (Sharma et al., ACL 2019), scientific publications (Ye and Wang, EMNLP 2018), meeting transcripts (Qin et al., ACL 2017), and social media content (Wang and Ling, NAACL 2016). The summarization tools we developed have been included in the IARPA MATERIAL program evaluation. In addition, we collected and annotated two large-scale long document summarization datasets (Sharma et al., EMNLP 2019; Huang et al., NAACL 2021), which serve as benchmarks supporting long-form text summarization research. Our work (Wang and Cardie, SIGDIAL 2012) was nominated for the best paper award at SIGDIAL, and this line of research has been recognized with an NSF CAREER award.
  • Natural Language Generation


    I view natural language generation (NLG) as a knowledge transformation process, where structured or unstructured information is transformed into human-readable language to meet communicative goals. I aim to build practical NLG systems by addressing two challenges: (1) the incoherence and low relevance that generated text often suffers from, and (2) the difficulty of guiding large pretrained models to produce text of varying styles.

      • We propose neural NLG frameworks that use traditional generation components, such as content planning and style selection, to promote control over the content (Hua and Wang, ACL 2018; Hua and Wang, EMNLP 2020) and linguistic style (Hua and Wang, EMNLP 2019) of the produced text. In Hua et al. (ACL 2019), we study the task of counterargument generation, where the goal is to generate an argument that refutes a given statement on a controversial issue. Our model performs sentence-level content planning via talking point selection and ordering, followed by style-controlled surface realization based on the model-predicted language style to produce the final output. We also augment our generation model with passages retrieved from a large-scale search engine that indexes 12 million articles from Wikipedia and four popular English news media of varying ideological leanings. This ensures our system has access to reliable evidence, high-quality reasoning, and diverse opinions from different sources.

      • We further address the challenge of producing coherent long-form text (Hua et al., ACL 2021), where even large pretrained language models still fall short due to the lack of effective content planning and control. One potential issue with employing an explicit content planning component is the need for separate training signals, which are often unavailable. We therefore propose an end-to-end generation framework based on mixed language models that conducts content selection and ordering as text is produced, without requiring ground-truth content planning labels. Concretely, at each decoding step, our system selects which content to reflect and predicts a word from probabilities marginalized over all language models, as illustrated in the sketch after this list. The system can be built upon large pretrained models and offers an interface for interpreting system decisions via the predicted content selection scores.

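    To make the marginalization step concrete, the following is a simplified sketch of one decoding step: a selector produces a distribution over K content components, each component contributes its own next-word distribution, and the final prediction marginalizes over the components. Module names, shapes, and the number of components are assumptions for illustration, not the exact system of Hua et al. (ACL 2021).

      import torch
      import torch.nn as nn

      class MixedLMDecodeStep(nn.Module):
          """One decoding step that marginalizes next-word probabilities over K content components."""
          def __init__(self, d_model=512, vocab=32000, n_components=3):
              super().__init__()
              self.selector = nn.Linear(d_model, n_components)              # which content to reflect
              self.lm_heads = nn.ModuleList(
                  [nn.Linear(d_model, vocab) for _ in range(n_components)]  # one LM head per component
              )

          def forward(self, hidden):  # hidden: (batch, d_model) decoder state
              sel = torch.softmax(self.selector(hidden), dim=-1)            # p(component | state)
              word_probs = torch.stack(
                  [torch.softmax(head(hidden), dim=-1) for head in self.lm_heads], dim=1
              )                                                             # p(word | component, state)
              # Marginalize: p(word | state) = sum_k p(k | state) * p(word | k, state)
              return (sel.unsqueeze(-1) * word_probs).sum(dim=1), sel

      step = MixedLMDecodeStep()
      probs, sel = step(torch.randn(2, 512))
      print(probs.shape, probs.sum(dim=-1))  # (2, 32000); each row sums to 1

    The returned selection scores (sel) are what would expose the interpretable content-selection decisions mentioned above.
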
    Our NLG systems have been applied in a variety of domains using newly collected datasets, generating persuasive arguments, news stories, Wikipedia articles (Hua and Wang, EMNLP 2019), and open-ended questions (Cao and Wang, ACL 2021). Our question generation models have motivated our design of practical and versatile systems that create active learning opportunities in education across disciplines (Wang et al., NAACL 2022).
  • Debate Prediction and Argument Mining


    Making sense of human reasoning can be a daunting task. I study debates and arguments, designing algorithms to understand the composition of and relations among human arguments, as well as to gather popular arguments reflecting different considerations on controversial issues.

      • I aim to answer the question “what determines the outcome of a debate?” (Wang et al., TACL 2017). Most efforts to predict the persuasiveness of debates have focused on linguistic features of the debate speech or on simple measures of topic control. In an ideal setting, however, we would expect the winning side to win based on the strength and merits of their arguments, not on their skillful deployment of linguistic style. I hypothesize that, within a debate, some topics are inherently more persuasive when deployed by one side than the other, such as the execution of innocents for those opposed to the death penalty, or the gory details of a murder for those in favor of it. I thus develop a latent variable model that simultaneously infers the latent persuasive strength inherent in debate topics and how it differs between opposing sides, and captures the interactive dynamics between topics of different strengths and the linguistic structures with which these topics are presented (a much-simplified illustration of this intuition follows this list).

      • Moreover, I design data-efficient methods, such as transfer learning and active learning (Hua and Wang, ACL findings 2022), to parse peer reviews, a cornerstone of scientific discovery.

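    As a much-simplified, hypothetical illustration of that intuition (and not the actual latent variable model of Wang et al., TACL 2017): give every topic a side-specific persuasive weight and predict the winner from how heavily each side draws on each topic.

      import torch
      import torch.nn as nn

      class SideSpecificTopicModel(nn.Module):
          """Toy logistic model: each topic has a different persuasive weight for PRO vs. CON."""
          def __init__(self, n_topics: int):
              super().__init__()
              self.strength = nn.Parameter(torch.zeros(2, n_topics))  # row 0: PRO weights, row 1: CON weights

          def forward(self, topic_usage):  # topic_usage: (batch, 2, n_topics), per-side topic proportions
              pro = (topic_usage[:, 0] * self.strength[0]).sum(-1)
              con = (topic_usage[:, 1] * self.strength[1]).sum(-1)
              return torch.sigmoid(pro - con)  # probability that the PRO side wins

      model = SideSpecificTopicModel(n_topics=50)
      print(model(torch.rand(4, 2, 50)).shape)  # (4,) win probabilities
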
    Our work on supporting argument retrieval (Hua and Wang, ACL 2017) was recognized with an outstanding paper award at ACL. Our research on debate analysis (Wang et al., TACL 2017) has been covered by several media outlets (e.g., Digital Trends and Science Blog). We have released one of the largest datasets with high-quality argument type and structure annotations to support research in argument mining (Hua et al., NAACL 2019; Hua and Wang, ACL findings 2022).
  • Interdisciplinary Collaborations: Media Ideology and Bias


    News media play a vital role in public discourse, not only by supplying information but also by selecting, packaging, and shaping that information to inform and persuade the public. Previous computational research on article-level media bias has mostly considered linguistic strategies of content modification, such as word choice (e.g., "illegal immigrants" vs. "undocumented immigrants"). I aim to study how content selection and omission can introduce bias and sway readers' opinions.

      • Our study (Fan et al., EMNLP 2019) finds that the most important and subtle way by which media shape the views of their readers is through content selection or omission. We therefore examine and detect media bias that may arise from systematic manipulation of news via the selection and organization of content in each article. We have annotated news stories with bias spans and released the first dataset of its kind, containing 300 articles from media of different ideological leanings.

      • I also aim to create general-purpose tools for analyzing ideological content that can be used by researchers and practitioners in the broader community. We thus study pretraining techniques to create representations that better discern embedded ideological content across different genres of text (Liu et al., NAACL findings 2022). Concretely, for pretraining, we design an ideology objective that operates over clusters of same-story articles, pulling together articles with similar ideology and contrasting them with articles of differing ideology (a simplified sketch of this objective follows). Our model outperforms strong comparisons on 8 out of 11 ideology prediction and stance detection tasks, using datasets covering congressional speech, news articles, social media comments, and tweets.
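
    To give the flavor of such an objective, below is a hedged, triplet-style sketch over story clusters: an article embedding is pulled toward a same-story article with the same ideology and pushed away from a same-story article with a different ideology. The encoder, distance function, and margin are illustrative assumptions, not the exact objective of Liu et al. (NAACL findings 2022).

      import torch
      import torch.nn.functional as F

      def ideology_triplet_loss(anchor, same_ideology, diff_ideology, margin: float = 0.5):
          """Triplet-style sketch; all three (batch, dim) embeddings come from articles on the same story.

          Articles sharing the anchor's ideology should end up closer to it than
          articles written from a different ideological leaning.
          """
          pos_dist = 1.0 - F.cosine_similarity(anchor, same_ideology, dim=-1)
          neg_dist = 1.0 - F.cosine_similarity(anchor, diff_ideology, dim=-1)
          return F.relu(pos_dist - neg_dist + margin).mean()

      # Toy check with random embeddings for a batch of 8 same-story article triplets.
      a, p, n = (torch.randn(8, 256) for _ in range(3))
      print(ideology_triplet_loss(a, p, n))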