Human forecasting accuracy in practice relies on the 'wisdom of the crowd' effect, in which predictions about future events are significantly improved by aggregating across a crowd of individual forecasters. Past work on the forecasting ability of large language models (LLMs) suggests that frontier LLMs, as individual forecasters, underperform compared to the gold standard of a human crowd forecasting tournament aggregate. In Study 1, we expand this research by using an LLM ensemble approach consisting of a crowd of twelve LLMs. We compare the aggregated LLM predictions on 31 binary questions to those of a crowd of 925 human forecasters from a three-month forecasting tournament. Our preregistered main analysis shows that the LLM crowd outperforms a simple no-information benchmark and is not statistically different from the human crowd. In exploratory analyses, we find that these two approaches are equivalent with respect to medium-effect-size equivalence bounds. We also observe an acquiescence effect, with mean model predictions being significantly above 50%, despite an almost even split of positive and negative resolutions. Moreover, in Study 2, we test whether LLM predictions (of GPT-4 and Claude 2) can be improved by drawing on human cognitive output. We find that both models' forecasting accuracy benefits from exposure to the median human prediction as information, improving accuracy by between 17% and 28%, though this still leads to less accurate predictions than simply averaging human and machine forecasts. Our results suggest that LLMs can achieve forecasting accuracy rivaling that of human crowd forecasting tournaments via the simple, practically applicable method of forecast aggregation. This replicates the 'wisdom of the crowd' effect for LLMs, and opens up their use for a variety of applications throughout society.
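The aggregation-and-benchmark comparison described above can be sketched in a few lines. This is a minimal illustration, not the paper's actual pipeline: the forecast numbers are invented, and the median is assumed as the aggregation rule, with forecasts scored by the Brier score (mean squared error against the 0/1 resolution, lower is better).

```python
# Sketch of comparing an aggregated LLM "crowd" forecast against a
# no-information benchmark. All numbers here are hypothetical.
import statistics


def brier_score(probs, outcomes):
    """Mean squared error between probability forecasts and 0/1 resolutions."""
    return sum((p - o) ** 2 for p, o in zip(probs, outcomes)) / len(probs)


# Illustrative data: 12 individual LLM forecasts for each of 3 binary questions.
llm_forecasts = [
    [0.70, 0.65, 0.80, 0.55, 0.60, 0.75, 0.72, 0.68, 0.58, 0.66, 0.71, 0.62],
    [0.40, 0.35, 0.30, 0.45, 0.50, 0.38, 0.42, 0.33, 0.47, 0.36, 0.41, 0.39],
    [0.55, 0.60, 0.52, 0.58, 0.64, 0.57, 0.61, 0.53, 0.59, 0.56, 0.62, 0.54],
]
resolutions = [1, 0, 1]  # how each question actually resolved

# Aggregate the crowd of models per question (median is one common choice).
crowd = [statistics.median(q) for q in llm_forecasts]

# Compare against the no-information benchmark of always predicting 50%.
print("LLM crowd Brier:", brier_score(crowd, resolutions))
print("Benchmark Brier:", brier_score([0.5] * len(resolutions), resolutions))
```

With real tournament data, the same scoring would also be applied to the human-crowd aggregate, and a hybrid forecast could be formed by averaging the human and machine probabilities per question before scoring.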