Long-Horizon Dialogue Understanding for Role Identification in the Game of Avalon with Large Language Models

Simon Stepputtis1, Joseph Campbell1, Yaqi Xie1, Zhengyang Qi1, Wenxin Sharon Zhang1, Ruiyi Wang1, Sanketh Rangreji1, Charles Michael Lewis2, Katia P. Sycara1
1Carnegie Mellon University, 2University of Pittsburgh

Presented at Findings of the Association for Computational Linguistics: EMNLP 2023

Abstract

Deception and persuasion play a critical role in long-horizon dialogues between multiple parties, especially when the interests, goals, and motivations of the participants are not aligned. Such complex tasks pose challenges for current Large Language Models (LLMs), as deception and persuasion can easily mislead them, especially in long-horizon multi-party dialogues. To this end, we explore the game of Avalon: The Resistance, a social deduction game in which players must determine each other's hidden identities to complete their team's objective. We introduce an online testbed and a dataset containing 20 carefully collected and labeled games among human players that exhibit long-horizon deception in a cooperative-competitive setting. We discuss the capabilities of LLMs to utilize deceptive long-horizon conversations between six human players to determine each player's goal and motivation. In particular, we discuss the multimodal integration of the chat between the players and the game state that grounds the conversation, providing further insight into the true player identities. We find that even current state-of-the-art LLMs do not reach human performance, making our dataset a compelling benchmark for investigating the decision-making and language-processing capabilities of LLMs.

Avalon Dataset

20 games played by 30 users across 19 unique teams. Over 24 hours of playtime with full game chat, game state, player beliefs, as well as persuasion and deception strategies.

Game Representation

State Representation

The game's context (i.e., chat and game state) is represented either from the beginning of the game (green) or in a round-based manner (yellow), utilizing a carried-over belief of player roles.
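To make this distinction concrete, the following is a minimal Python sketch of the two strategies, assuming a hypothetical game-record format (the Round class and the prompt wording are illustrative, not our actual code):

from dataclasses import dataclass

@dataclass
class Round:
    chat: list[str]   # utterances, e.g. "Player 3: I trust Player 1"
    game_state: str   # proposed party, vote outcome, quest result

def full_history_prompt(rounds: list[Round]) -> str:
    """Full-game representation (green): all chat and state since round one."""
    parts = []
    for i, r in enumerate(rounds, start=1):
        parts.append(f"Round {i} state: {r.game_state}")
        parts.extend(r.chat)
    return "\n".join(parts) + "\nWhich role does each player hold?"

def round_based_prompt(current: Round, prior_belief: str) -> str:
    """Round-based representation (yellow): only the latest round, plus a
    carried-over belief of player roles that summarizes earlier rounds."""
    return (
        f"Current belief about player roles: {prior_belief}\n"
        f"Round state: {current.game_state}\n"
        + "\n".join(current.chat)
        + "\nUpdate the belief about each player's role."
    )

The round-based variant keeps the prompt short regardless of game length, at the cost of compressing earlier rounds into the carried-over belief.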

Role Prediction

We evaluate GPT-4, GPT-3.5, and Llama-2 (including fine-tuned versions of GPT-3.5 and Llama-2) on predicting each player's role and find that these models struggle with this complex NLU task.
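As a rough illustration of how such a role-prediction evaluation can be scored, here is a hedged Python sketch; query_llm is a placeholder rather than our evaluation code, and the role list assumes the standard six-player Avalon setup:

ROLES = ["Merlin", "Percival", "Servant", "Servant", "Morgana", "Assassin"]

def query_llm(prompt: str) -> dict[str, str]:
    """Placeholder: call a chat model and parse its answer into a mapping
    from player name to predicted role. Replace with a real model call."""
    return {}

def role_accuracy(games: list[dict]) -> float:
    """Fraction of players whose role the model identifies correctly."""
    correct = total = 0
    for game in games:
        predicted = query_llm(game["context"])           # e.g., full-history prompt
        for player, true_role in game["roles"].items():  # ground-truth labels
            correct += predicted.get(player) == true_role
            total += 1
    return correct / total if total else 0.0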

Poster

A high-resolution PDF of our EMNLP poster is available.

Video

A 6-minute video presentation of our work.

Dataset Overview

We present a novel benchmark, associated dataset, and testbed (see our GitHub) for long-horizon dialogue understanding in scenarios of conflicting interests among multiple participants. This task combines utterances from six human players at a time, hand-labeled persuasion and deception strategies, player beliefs, and comprehensive game states recorded over more than 24 hours of gameplay. We demonstrate that current state-of-the-art LLMs do not reach human-level performance in environments that require understanding and tracking long-horizon dialogue between multiple participants in challenging social cooperative-competitive settings. Compared to similar datasets, our benchmark contains longer context horizons, stricter game rules, and high-quality dialogue, making it well-suited for NLU research, as all game-relevant communication has been captured and is thus available to learning algorithms. In the interactive demo below, you can explore the data collected during one of our games.

Our main contributions are as follows:
  • A testbed and dataset containing 2384 utterances from 20 games among human players, hand-annotated with strategies (persuasion and deception), player beliefs, and game state.
  • A comprehensive analysis of LLM performance in our proposed multimodal long-horizon dialogue understanding benchmark, including persuasive and deceptive behavior.
  • An exploration of the limitations of current models and the introduction of state representations that can improve long-horizon dialogue modeling.

Interactive Example

The following is an example game from our dataset, which currently contains 20 games. Please use the round and turn selectors to "scroll" through the game. In the dataset available on our GitHub, each game is represented as a JSON file containing all the information presented in the demonstration below.
Each game in our dataset contains the following information: the chat among the six players, hand-labeled persuasion strategies, hand-labeled deception strategies for all evil players, beliefs about what players thought of other players at different stages of the game, as well as the full game state containing proposed parties and vote outcomes.
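As a minimal sketch, the snippet below loads one such game file in Python; the file name and key names are illustrative assumptions, so please refer to the GitHub repository for the authoritative schema:

import json
from pathlib import Path

def load_game(path: Path) -> dict:
    """Load a single annotated Avalon game from its JSON file."""
    with path.open(encoding="utf-8") as f:
        return json.load(f)

game = load_game(Path("avalon_game_01.json"))  # hypothetical file name
for utterance in game.get("chat", []):         # illustrative key: per-game chat log
    print(utterance)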
Note: On non-mobile devices, this website provides an interactive demo of our dataset.
The following is an excerpt from our dataset, showing the conversation among participants in round one.

BibTeX

If you find this work useful, please cite it as follows.
@inproceedings{stepputtis-etal-2023-long,
    title = "Long-Horizon Dialogue Understanding for Role Identification in the Game of Avalon with Large Language Models",
    author = "Stepputtis, Simon and
      Campbell, Joseph and
      Xie, Yaqi and
      Qi, Zhengyang and
      Zhang, Wenxin and
      Wang, Ruiyi and
      Rangreji, Sanketh and
      Lewis, Charles and
      Sycara, Katia",
    editor = "Bouamor, Houda and
      Pino, Juan and
      Bali, Kalika",
    booktitle = "Findings of the Association for Computational Linguistics: EMNLP 2023",
    month = dec,
    year = "2023",
    address = "Singapore",
    publisher = "Association for Computational Linguistics",
    url = "https://aclanthology.org/2023.findings-emnlp.748",
    doi = "10.18653/v1/2023.findings-emnlp.748",
    pages = "11193--11208",
}