As comprehensive large model evaluation becomes prohibitively expensive, predicting model performance from limited observations has become essential. However, existing statistical methods struggle with pattern shifts, data sparsity, and a lack of explainability, while purely LLM-based methods remain unreliable. We propose STAR, a framework that bridges data-driven STatistical expectations with knowledge-driven Agentic Reasoning. STAR leverages specialized retrievers to gather external knowledge and embeds semantic features into Constrained Probabilistic Matrix Factorization (CPMF) to generate statistical expectations with uncertainty estimates. A reasoning module guided by Expectation Violation Theory (EVT) then refines these predictions through intra-family analysis, cross-model comparison, and credibility-aware aggregation, producing adjustments with traceable explanations. Extensive experiments show that STAR consistently outperforms all baselines on both score-based and rank-based metrics, delivering a 14.46% gain in total score over the strongest statistical method under extreme sparsity, with only 1--2 observed scores per test model.
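As a rough illustration of the statistical component only, the sketch below fits a toy constrained PMF on a small model-by-benchmark score matrix: each model's latent vector is the sum of a free factor and a projection of its semantic feature vector, and an ensemble over random restarts gives a crude uncertainty estimate. All names, the synthetic data, and the training loop are assumptions for illustration; the paper's actual CPMF formulation may differ.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy data: 6 models x 8 benchmarks; most scores unobserved (sparsity).
n_models, n_benchmarks, k = 6, 8, 3
true_scores = rng.uniform(0.2, 0.9, (n_models, n_benchmarks))
mask = rng.random((n_models, n_benchmarks)) < 0.3      # observed cells
features = rng.normal(size=(n_models, 5))              # hypothetical semantic features

def fit_cpmf(seed, epochs=1000, lr=0.02, reg=0.1):
    """One constrained-PMF fit: model factor = free part + feature projection."""
    r = np.random.default_rng(seed)
    U = r.normal(scale=0.1, size=(n_models, k))             # free latent factors
    W = r.normal(scale=0.1, size=(features.shape[1], k))    # feature -> latent map
    V = r.normal(scale=0.1, size=(n_benchmarks, k))         # benchmark factors
    for _ in range(epochs):
        M = U + features @ W                      # constrained model factors
        err = mask * (M @ V.T - true_scores)      # error on observed cells only
        U -= lr * (err @ V + reg * U)
        W -= lr * (features.T @ (err @ V) + reg * W)
        V -= lr * (err.T @ M + reg * V)
    return (U + features @ W) @ V.T

# Crude uncertainty: mean and spread over an ensemble of random restarts.
preds = np.stack([fit_cpmf(s) for s in range(8)])
expectation, uncertainty = preds.mean(axis=0), preds.std(axis=0)
```

In this toy setup the ensemble spread stands in for posterior uncertainty; a full Bayesian treatment of CPMF would derive it from the posterior instead.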