We systematically evaluate the reproducibility of data analysis conducted by Large Language Models (LLMs). We compare two prompting strategies, six models, and four temperature settings, with ten independent executions per configuration, yielding 480 total attempts. We assess the completion, concordance, validity, and consistency of each attempt and find considerable variation in the analytical results even for identical configurations. This suggests that, as with human data analysis, data analysis conducted by LLMs can vary even given the same task, data, and settings. Our results imply that if an LLM is used to conduct data analysis, it should be run multiple times independently and the distribution of results considered.
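To make the recommendation concrete, the following minimal Python sketch repeats the same analysis configuration several times and summarizes the spread of results. The function `run_llm_analysis`, the strategy and model names, and the temperature values are all hypothetical placeholders for illustration; the abstract does not specify them. In practice the placeholder would prompt a model, execute the generated analysis, and return its numeric result.

```python
import random
import statistics
from itertools import product

# Hypothetical stand-in for one independent LLM analysis run. In practice this
# would prompt the model with the given strategy and temperature, execute the
# generated analysis, and return its numeric result. Simulated here with noise.
def run_llm_analysis(model: str, strategy: str, temperature: float) -> float:
    return random.gauss(mu=1.0, sigma=0.1 + 0.05 * temperature)

def summarize_runs(model: str, strategy: str, temperature: float,
                   n_runs: int = 10) -> dict:
    """Repeat one configuration n_runs times and report the result spread."""
    results = [run_llm_analysis(model, strategy, temperature)
               for _ in range(n_runs)]
    return {
        "mean": statistics.fmean(results),
        "stdev": statistics.stdev(results),
        "range": (min(results), max(results)),
    }

# 2 strategies x 6 models x 4 temperatures x 10 runs = 480 attempts.
strategies = ["zero_shot", "chain_of_thought"]    # hypothetical names
models = [f"model_{i}" for i in range(1, 7)]      # hypothetical names
temperatures = [0.0, 0.3, 0.7, 1.0]               # hypothetical values

for model, strategy, temp in product(models, strategies, temperatures):
    print(model, strategy, temp, summarize_runs(model, strategy, temp))
```

Reporting the mean alongside the standard deviation and range, rather than a single run's answer, is one simple way to surface the run-to-run variation the abstract describes.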