Human forecasting accuracy in practice relies on the 'wisdom of the crowd' effect, in which predictions about future events are significantly improved by aggregating across a crowd of individual forecasters. Past work on the forecasting ability of large language models (LLMs) suggests that frontier LLMs, as individual forecasters, underperform the gold standard of a human crowd forecasting tournament aggregate. In Study 1, we expand on this research by using an LLM ensemble approach consisting of a crowd of twelve LLMs. We compare the aggregated LLM predictions on 31 binary questions to those of a crowd of 925 human forecasters from a three-month forecasting tournament. Our preregistered main analysis shows that the LLM crowd outperforms a simple no-information benchmark and is not statistically different from the human crowd. In exploratory analyses, we find that the two approaches are equivalent with respect to medium-effect-size equivalence bounds. We also observe an acquiescence effect: mean model predictions are significantly above 50%, despite an almost even split of positive and negative resolutions. Moreover, in Study 2, we test whether LLM predictions (from GPT-4 and Claude 2) can be improved by drawing on human cognitive output. We find that both models' forecasting accuracy benefits from exposure to the median human prediction as information, improving accuracy by 17% to 28%, though the resulting predictions remain less accurate than a simple average of the human and machine forecasts. Our results suggest that LLMs can achieve forecasting accuracy rivaling that of human crowd forecasting tournaments via the simple, practically applicable method of forecast aggregation. This replicates the 'wisdom of the crowd' effect for LLMs and opens up their use for a variety of applications throughout society.
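The crowd-aggregation and human–machine blending ideas described above can be sketched in a few lines. This is a minimal illustration, not the paper's exact pipeline: the specific aggregation statistic, the forecast values, and the scoring rule shown here (median aggregation and the Brier score, both standard choices in forecasting research) are assumptions for the example.

```python
from statistics import median

def aggregate_forecasts(probabilities):
    """Aggregate a crowd's probability forecasts for one binary question
    by taking the median (a common robust choice; the paper's exact
    aggregation scheme is not specified in this abstract)."""
    return median(probabilities)

def brier_score(forecast, outcome):
    """Brier score for one binary question: squared error between the
    probability forecast and the 0/1 resolution (lower is better)."""
    return (forecast - outcome) ** 2

# Hypothetical forecasts from a crowd of twelve LLMs on one question.
llm_forecasts = [0.62, 0.55, 0.70, 0.58, 0.65, 0.60,
                 0.72, 0.50, 0.68, 0.57, 0.63, 0.66]
llm_crowd = aggregate_forecasts(llm_forecasts)  # median of the twelve

# Hedged illustration of the Study 2 comparison: blending a machine
# forecast with the (hypothetical) median human forecast by averaging.
human_median = 0.40
blended = (llm_crowd + human_median) / 2
```

Scoring both `llm_crowd` and `blended` against the question's eventual 0/1 resolution with `brier_score` is how such forecasts would then be compared for accuracy.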