Recently, numerous new benchmarks have been established to evaluate the performance of large language models (LLMs), either by computing a holistic score or by employing another LLM as a judge. However, these approaches suffer from data leakage due to the open access of the benchmark data and from an inflexible evaluation process. To address these issues, we introduce $\textbf{TreeEval}$, a benchmark-free evaluation method for LLMs that lets a high-performance LLM host an irreproducible evaluation session, thereby essentially avoiding data leakage. Moreover, this LLM acts as an examiner that raises a series of questions under a topic using a tree-planning strategy, which considers the current evaluation status to decide which question to generate next and thus ensures the completeness and efficiency of the evaluation process. We evaluate $6$ models of different parameter sizes, including $7$B, $13$B, and $33$B, and achieve the highest correlation coefficient with AlpacaEval2.0 using only around $45$ questions. We also conduct further analyses to demonstrate the robustness and reliability of TreeEval. Our code is available at https://github.com/Ashura5/TreeEval.
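Below is a minimal sketch of how such a tree-planning examiner loop could be organized. The `examiner`, `judge`, and candidate-model interfaces, as well as the expansion criterion (expand only while the comparison is inconclusive), are illustrative assumptions for exposition, not the authors' implementation.

```python
# Illustrative sketch (not the authors' implementation) of a tree-planning
# examiner loop: an examiner LLM raises questions under a topic, and the
# current evaluation status at each node decides whether to go deeper.
from dataclasses import dataclass, field
from typing import List, Optional


@dataclass
class Node:
    question: str
    verdict: Optional[str] = None      # "A", "B", or "tie" at this node
    children: List["Node"] = field(default_factory=list)


def evaluate_topic(topic, examiner, judge, model_a, model_b,
                   max_depth=3, branch=2) -> Node:
    """Build a question tree for one topic, expanding only where the two
    candidate models are not yet clearly separated (hypothetical criterion)."""
    root = Node(question=examiner.ask(topic))          # seed question for the topic
    frontier = [(root, 0)]
    while frontier:
        node, depth = frontier.pop()
        ans_a = model_a.answer(node.question)
        ans_b = model_b.answer(node.question)
        node.verdict = judge.compare(node.question, ans_a, ans_b)
        # Generate follow-up questions only if the comparison is still
        # inconclusive and the depth budget allows further probing.
        if node.verdict == "tie" and depth < max_depth:
            for _ in range(branch):
                child = Node(question=examiner.follow_up(topic, node))
                node.children.append(child)
                frontier.append((child, depth + 1))
    return root
```

Under this reading, the number of questions asked adapts to how hard the two models are to distinguish, which is consistent with the reported budget of only around $45$ questions per comparison.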