We introduce LLM SELECTOR, the first framework for active model selection of Large Language Models (LLMs). Unlike prior evaluation and benchmarking approaches that rely on fully annotated datasets, LLM SELECTOR efficiently identifies the best LLM with a limited annotation budget. In particular, for any given task, LLM SELECTOR adaptively selects a small set of queries to annotate that are most informative about which model is best for the task. To further reduce annotation cost, we leverage a judge-based oracle annotation model. Through extensive experiments on 6 benchmarks with 151 LLMs, we show that LLM SELECTOR reduces annotation costs by up to 59.62% when selecting the best or a near-best LLM for the task.