GENSORT

Improving Temporal Reasoning of Language Models via Recounted Narratives

  • Advisor: Dr. Lu Wang
  • Duration: Dec. 2023 - June 2024
  • Affiliation: Computer Science and Engineering Division, University of Michigan
  • Summary: Reasoning about time and temporal relations is an integral aspect of human cognition, essential for perceiving the world and navigating our experiences. Although language models (LMs) have demonstrated impressive performance on many reasoning tasks, temporal reasoning remains challenging due to its intrinsic complexity. In this work, we first study an essential temporal reasoning task, temporal graph generation, to unveil LMs’ inherent, global reasoning capabilities. We show that this task presents great challenges even for the most powerful large language models (LLMs), such as GPT-3.5/4. We also observe a significant performance gap: small LMs (< 10B parameters) lag behind LLMs by 50%. Next, we study how to close this gap under a budget constraint, e.g., without model finetuning. We propose GENSORT, a new prompting technique tailored for temporal reasoning, which first converts the event set into a Python class, then prompts an LM to generate a temporally grounded narrative, guiding the final generation of a temporal graph. Extensive experiments showcase the efficacy of GENSORT across various metrics. Notably, GENSORT attains the highest F1 on the Schema-11 evaluation set while securing an overall F1 on par with GPT-3.5. GENSORT also achieves the best structural similarity across the board, even compared with GPT-3.5/4.
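
The two-stage prompting pipeline described in the summary (event set → Python class → temporally grounded narrative → temporal graph) might be sketched as below. This is an illustrative mock-up only: the class name `Events`, the function names, and the prompt wording are assumptions, since the actual GENSORT prompt templates are not reproduced here.

```python
# Hedged sketch of a GENSORT-style prompt construction.
# All names and prompt wording are illustrative assumptions.

def events_to_python_class(events):
    """Render an unordered event set as Python class source,
    one attribute per event, as input for the first prompt."""
    lines = ["class Events:"]
    for i, desc in enumerate(events):
        lines.append(f"    e{i} = {desc!r}")
    return "\n".join(lines)

def build_narrative_prompt(events):
    """Stage 1: ask the LM to weave the events into a narrative
    that makes their temporal order explicit."""
    return (
        events_to_python_class(events)
        + "\n\nWrite a short story that mentions every event above "
        + "in the order it happens in time."
    )

def build_graph_prompt(narrative):
    """Stage 2: ask the LM to extract a temporal graph
    (before/after edges) from the generated narrative."""
    return (
        narrative
        + "\n\nList every pair of events as an edge 'A -> B' "
        + "meaning A happens before B."
    )
```

For example, `build_narrative_prompt(["board the plane", "buy a ticket"])` yields a prompt whose first lines are the `Events` class listing both events, followed by the narrative instruction; the narrative returned by the LM would then be fed to `build_graph_prompt`.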