Although Large Language Models (LLMs) are becoming increasingly powerful, they still exhibit significant but subtle weaknesses, such as mistakes in instruction-following or coding tasks. As these unexpected errors could lead to severe consequences in practical deployments, it is crucial to investigate the limitations within LLMs systematically. Traditional benchmarking approaches cannot thoroughly pinpoint specific model deficiencies, while manual inspections are costly and not scalable. In this paper, we introduce a unified framework, AutoDetect, to automatically expose weaknesses in LLMs across various tasks. Inspired by the educational assessment process that measures students' learning outcomes, AutoDetect consists of three LLM-powered agents: Examiner, Questioner, and Assessor. The collaboration among these three agents is designed to realize comprehensive and in-depth weakness identification. Our framework demonstrates significant success in uncovering flaws, with an identification success rate exceeding 30% in prominent models such as ChatGPT and Claude. More importantly, these identified weaknesses can guide specific model improvements, proving more effective than untargeted data augmentation methods like Self-Instruct. Our approach has led to substantial enhancements in popular LLMs, including the Llama series and Mistral-7b, boosting their performance by over 10% across several benchmarks. Code and data are publicly available at https://github.com/thu-coai/AutoDetect.
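The three-agent collaboration described above can be sketched as a simple detection loop. This is a minimal, hypothetical illustration only: the function names, prompts, taxonomy, and scoring threshold below are assumptions for exposition, not the authors' actual implementation, and the stub functions stand in for real LLM calls.

```python
# Hypothetical sketch of AutoDetect's Examiner -> Questioner -> Assessor loop.
# All agent behaviors are stubbed; in the real framework each would be an
# LLM call guided by a role-specific prompt.

def examiner(task):
    # Examiner: propose a taxonomy of test points for the task (stubbed).
    return ["multi-constraint instructions", "edge-case arithmetic"]

def questioner(test_point):
    # Questioner: craft a challenging query for one test point (stubbed).
    return f"Hard question probing: {test_point}"

def assessor(question, answer):
    # Assessor: score the target model's answer on a 1-10 scale.
    # Stubbed heuristic: pretend arithmetic questions expose a weakness.
    return 3 if "arithmetic" in question else 8

def autodetect_round(task, target_model, threshold=6):
    """One detection round: answers scoring below threshold count as weaknesses."""
    weaknesses = []
    for point in examiner(task):
        q = questioner(point)
        a = target_model(q)
        if assessor(q, a) < threshold:
            weaknesses.append((point, q, a))
    return weaknesses

# Toy target model that simply echoes the question.
found = autodetect_round("instruction following", lambda q: f"echo: {q}")
```

Here `found` holds the one stubbed test point whose answer scored below the threshold; in practice such records would feed the targeted improvement step the abstract mentions.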