My research activities focus on training and using language models (LMs) to answer the following language-related questions:
(1) Trustworthy LMs: How to build models that generate factual and attributable content (Cao and Wang, EMNLP 2021; Liu et al., EMNLP 2024)? And how to calibrate their confidence based on what they know and what they don't know (Liu et al., ICLR 2024)?
(2) Reasoning: How to train models with improved reasoning skills using self-verification and step-wise rewards (Zhang et al., Findings of ACL 2024; Khalifa et al., Findings of EMNLP 2023)?
(3) Evaluating LMs: How to evaluate models' performance on in-the-wild tasks that go beyond traditional benchmarks with short references (Bayat et al., arXiv 2024)?
(4) Narrative understanding: How are human values reflected in the storytelling process, and how does that influence the target audience (Zhang et al., NAACL 2024; Wu et al., EMNLP 2023)?
For core natural language processing (NLP) problems, I have been building summarization systems for long-document inputs (Huang et al., NAACL 2021) and multi-source inputs (Peper et al., NAACL 2024), and developing controllable generation models (Liu et al., ACL 2023).
I am also interested in building AI applications with domain impact, including argument mining models (Hua and Wang, Findings of ACL 2022) that support the development of writing assistants (Nair et al., EMNLP 2024), and information extraction models that reveal how media informs and persuades the public through the selection and packaging of information (Fan et al., EMNLP 2019).