Enhancing Hallucination Detection in Large Language Models through a Dual-Position Debate Multi-Agent Framework

Authors: Qile He and Siting Le
Conference: ICIC 2025 Posters, Ningbo, China, July 26-29, 2025
Pages: 292-305
Keywords: Large Language Models, Hallucination Detection, Multi-Agent Debate

Abstract

Large language models (LLMs) have demonstrated remarkable capabilities but are prone to generating factual inconsistencies, or hallucinations. Addressing this challenge is crucial for the reliable deployment of LLMs. This paper introduces a novel Dual-Position Debate (DPD) framework designed to enhance the veracity of LLM-generated content and mitigate hallucinations. DPD simulates a human debate by organizing agents into affirmative and negative teams, each comprising information gatherers, rebutters, analysts, and a summarizer. These agents collaboratively construct arguments, critique opposing viewpoints, and synthesize their findings. Furthermore, multiple independent LLMs act as referees, evaluating the debate and rendering judgments to ensure fairness and encourage rigorous information scrutiny. Extensive experiments across question answering, summarization, and dialogue tasks demonstrate the efficacy of the DPD framework, outperforming existing baseline methods in reducing hallucinations.
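To make the described architecture concrete, the sketch below shows one plausible way the debate-and-referee protocol could be wired together in Python. It is a minimal illustration based only on the abstract's description; the names (DebateTeam, debate_verdict, the LLM callable type) and the SUPPORTED/HALLUCINATED voting convention are hypothetical assumptions, not the authors' implementation.

```python
from dataclasses import dataclass
from typing import Callable, List

# Hypothetical interface: an "LLM" is any callable mapping a prompt to text.
LLM = Callable[[str], str]

@dataclass
class DebateTeam:
    """One side of the debate (affirmative or negative), with the four
    roles named in the abstract: gatherer, rebutter, analyst, summarizer."""
    stance: str
    gatherer: LLM    # collects evidence supporting the team's stance
    rebutter: LLM    # critiques the opposing team's most recent case
    analyst: LLM     # weighs the gathered evidence against the rebuttal
    summarizer: LLM  # condenses the analysis into a closing statement

    def build_case(self, claim: str, opposing_case: str = "") -> str:
        evidence = self.gatherer(f"[{self.stance}] Gather evidence on: {claim}")
        rebuttal = (
            self.rebutter(f"[{self.stance}] Rebut this case: {opposing_case}")
            if opposing_case else ""
        )
        analysis = self.analyst(
            f"[{self.stance}] Analyze:\nEvidence: {evidence}\nRebuttal: {rebuttal}"
        )
        return self.summarizer(f"[{self.stance}] Summarize the case:\n{analysis}")


def debate_verdict(claim: str, affirmative: DebateTeam, negative: DebateTeam,
                   referees: List[LLM], rounds: int = 2) -> bool:
    """Run alternating debate rounds, then poll independent referees.
    Returns True if a majority judges the claim supported (not hallucinated)."""
    aff_case, neg_case = "", ""
    for _ in range(rounds):
        aff_case = affirmative.build_case(claim, opposing_case=neg_case)
        neg_case = negative.build_case(claim, opposing_case=aff_case)

    transcript = (f"Claim: {claim}\nAffirmative: {aff_case}\n"
                  f"Negative: {neg_case}\nAnswer SUPPORTED or HALLUCINATED.")
    votes = [referee(transcript) for referee in referees]
    supported = sum("SUPPORTED" in v.upper() for v in votes)
    return supported > len(votes) / 2
```

One design point this sketch reflects from the abstract: the referees are separate, independent LLMs that see the full debate transcript rather than participating in it, which is intended to keep the final judgment impartial with respect to either team's stance.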