A systematic, reliable, and low-cost evaluation of Conversational Information Access (CIA) systems remains an open challenge. Existing reference-based evaluation methods have proven insufficient for the dynamic nature of information access conversations, while existing LLM-based reference-free methods suffer from evaluation bias and limited generalizability. This work proposes FACE: a Fine-grained, Aspect-based Conversation Evaluation method that provides evaluation scores for diverse turn- and dialogue-level aspects of conversations. FACE leverages beam search and bandit optimization to select an optimized LLM instruction per evaluation aspect. Using the selected instructions, it assigns scores to atomic information units (particles) and then aggregates them into a single score. We show that FACE correlates strongly with human judgments, achieving a system-level correlation of 0.9 and outperforming state-of-the-art conversation evaluation methods by a large margin. We further demonstrate that its optimized instructions transfer across LLMs and datasets. Finally, unlike existing LLM-based methods that output a single, uninterpretable score, FACE offers insight into system performance and enables identifying and localizing problems within conversations.
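The particle-based scoring described above can be illustrated with a minimal sketch. Everything here is an assumption for illustration: the sentence-level particle splitter, the stub `score_particle` judge (the paper uses an LLM guided by instructions selected via beam search and bandit optimization), and mean aggregation. It only shows the overall shape of scoring atomic units and aggregating them into one aspect score.

```python
def split_into_particles(response: str) -> list[str]:
    """Naive particle extraction: one particle per sentence (an assumption;
    the paper's actual decomposition of atomic information units may differ)."""
    return [s.strip() for s in response.split(".") if s.strip()]


def score_particle(particle: str, instruction: str) -> float:
    """Stand-in for an LLM judge scoring one particle against an
    aspect-specific instruction; returns a score in [0, 1].
    This length heuristic is purely a placeholder."""
    return min(len(particle) / 50.0, 1.0)


def aspect_score(response: str, instruction: str) -> float:
    """Aggregate particle scores into a single aspect score (mean here;
    the aggregation function used by FACE is not specified in the abstract)."""
    particles = split_into_particles(response)
    if not particles:
        return 0.0
    scores = [score_particle(p, instruction) for p in particles]
    return sum(scores) / len(scores)
```

In a real pipeline, `score_particle` would call the evaluator LLM with the instruction chosen for the aspect in question, and per-particle scores would remain available for interpretability before aggregation.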