Multi-Stage Hallucination Mitigation and Structured Output Generation via Guided Inference and Model Synergy

Authors: Wang Ruan, Tengda Qi, Jun He, Bo Sun, and Guomin Zheng
Conference: ICIC 2025 Posters, Ningbo, China, July 26-29, 2025
Pages: 1097-1110
Keywords: Large Language Models (LLMs), Multi-step Logical Reasoning, Hallucinations, Small Language Models, Model Synergy

Abstract

In the pursuit of advanced artificial intelligence capabilities, the challenges posed by Large Language Models (LLMs) cannot be overlooked. Despite their capacity for multi-step logical reasoning through Chain-of-Thought (CoT) prompting, LLMs often suffer from hallucinations that undermine the accuracy of their results, as well as from inefficient processing. Recognizing the complementary strengths of small language models, we introduce the MS-HM Synergy (Multi-Stage Hallucination Mitigation Synergy) framework. This novel framework, centered on guided inference and model synergy, comprises three essential stages. First, Guided Inference uses LLMs for initial reasoning, drawing on their language understanding. Second, Hallucination Detection acts as a safeguard, identifying and eliminating unreliable outputs. Last, Result Standardization ensures the generation of coherent, structured outputs. Methodologically, LLMs are tasked with complex reasoning, while small language models play a crucial verification role. Empirical results on benchmarks such as MMLU, MATH, and LogiQA show substantial performance improvements. MS-HM Synergy not only mitigates hallucinations for enhanced reliability but also improves efficiency and flexibility, demonstrating how combined model strengths can be leveraged to overcome LLM limitations.
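
To make the three-stage flow concrete, below is a minimal sketch of such a pipeline in Python. It is an illustration only, not the authors' implementation: the function names (`guided_inference`, `hallucination_detection`, `result_standardization`), the placeholder model calls, the scoring threshold, and the majority-vote aggregation are all assumptions introduced here for readability.

```python
from dataclasses import dataclass

@dataclass
class Candidate:
    reasoning: str   # chain-of-thought trace produced by the large model
    answer: str      # final answer extracted from that trace

def guided_inference(question: str, n_samples: int = 3) -> list[Candidate]:
    """Stage 1: the LLM produces several candidate reasoning traces."""
    # Placeholder for a real LLM call (API or local model); each sample
    # embeds the question so the toy verifier below has something to check.
    return [Candidate(reasoning=f"trace {i} reasoning about: {question}",
                      answer=f"answer-{i % 2}") for i in range(n_samples)]

def hallucination_detection(question: str, cands: list[Candidate],
                            threshold: float = 0.5) -> list[Candidate]:
    """Stage 2: a small verifier model scores each trace; low scores drop out."""
    def slm_score(c: Candidate) -> float:
        # Placeholder: a real small-model verifier would score the
        # consistency of the trace and answer against the question.
        return 1.0 if question in c.reasoning else 0.0
    return [c for c in cands if slm_score(c) >= threshold]

def result_standardization(cands: list[Candidate]) -> dict:
    """Stage 3: reduce surviving candidates to one structured output."""
    if not cands:
        return {"answer": None, "status": "all candidates rejected"}
    # Assumed aggregation: majority vote over surviving answers.
    counts: dict[str, int] = {}
    for c in cands:
        counts[c.answer] = counts.get(c.answer, 0) + 1
    best = max(counts, key=counts.get)
    return {"answer": best, "status": "ok", "support": counts[best]}

def ms_hm_synergy(question: str) -> dict:
    cands = guided_inference(question)
    verified = hallucination_detection(question, cands)
    return result_standardization(verified)

print(ms_hm_synergy("If all A are B and all B are C, are all A also C?"))
```

In this sketch the expensive model is called once per sample, the cheap verifier filters its outputs, and the final stage always returns a fixed-schema dictionary, mirroring the division of labor the abstract describes.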