Instruction tuning large language models (LLMs) remains a challenging task, owing to the complexity of hyperparameter selection and the difficulty of evaluating the tuned models. To determine the optimal hyperparameters, an automatic, robust, and reliable evaluation benchmark is essential. However, establishing such a benchmark is not a trivial task, due to the challenges associated with evaluation accuracy and privacy protection. In response to these challenges, we introduce a judge large language model, named PandaLM, which is trained to identify the superior model among several candidate LLMs. PandaLM's focus extends beyond the objective correctness of responses, which is the main focus of traditional evaluation datasets: it also addresses vital subjective factors such as relative conciseness, clarity, adherence to instructions, comprehensiveness, and formality. To ensure the reliability of PandaLM, we collect a diverse human-annotated test dataset, where all contexts are generated by humans and labels are aligned with human preferences. Our results indicate that PandaLM-7B achieves 93.75% of GPT-3.5's evaluation ability and 88.28% of GPT-4's in terms of F1-score on our test dataset. PandaLM enables LLM evaluation to be fairer and less costly, evidenced by significant improvements achieved by models tuned through PandaLM compared to their counterparts trained with Alpaca's default hyperparameters. In addition, PandaLM does not depend on API-based evaluations, thus avoiding potential data leakage. All resources of PandaLM are released at https://github.com/WeOpenML/PandaLM.
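The pairwise-judging idea above can be sketched as follows. This is a minimal illustration, not PandaLM's actual implementation: the prompt template and the verdict-parsing rule are assumptions made for the example, and the model call itself is left abstract.

```python
# Minimal sketch of pairwise evaluation with a judge LLM.
# Assumptions: the prompt template and label-parsing heuristic below are
# illustrative only, not PandaLM's real prompt format or decoding logic.

def build_judge_prompt(instruction: str, response_a: str, response_b: str) -> str:
    """Assemble a pairwise-comparison prompt for a judge model (hypothetical template)."""
    return (
        "Below are two responses to the same instruction. "
        "Decide which response is better, or declare a tie.\n\n"
        f"### Instruction:\n{instruction}\n\n"
        f"### Response 1:\n{response_a}\n\n"
        f"### Response 2:\n{response_b}\n\n"
        "### Evaluation:\n"
    )

def parse_judgement(judge_output: str) -> int:
    """Map the judge's free-text verdict to a label: 1, 2, or 0 for a tie."""
    first_line = judge_output.strip().splitlines()[0]
    mentions_1 = "1" in first_line
    mentions_2 = "2" in first_line
    if mentions_1 and not mentions_2:
        return 1
    if mentions_2 and not mentions_1:
        return 2
    return 0  # ambiguous or explicit tie

# Usage: the judge model's generation step is stubbed out here.
prompt = build_judge_prompt(
    "Explain recursion briefly.",
    "Recursion is when a function calls itself on a smaller input.",
    "It repeats.",
)
label = parse_judgement("Response 1 is better: it is clearer and more complete.")
```

In practice the prompt would be fed to the judge model and the generated verdict passed to the parser; to mitigate position bias, both orderings of the two responses are typically evaluated.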