Large language models (LLMs), trained on diverse data, effectively acquire a breadth of information across various domains. However, their computational complexity, cost, and lack of transparency hinder their direct application to specialised tasks. In fields such as clinical research, acquiring expert annotations or prior knowledge about predictive models is often costly and time-consuming. This study proposes using LLMs to elicit expert prior distributions for predictive models. This approach also provides an alternative to in-context learning, in which language models are tasked with making predictions directly. We compare LLM-elicited and uninformative priors, evaluate whether LLMs truthfully generate parameter distributions, and propose a model selection strategy for in-context learning and prior elicitation. Our findings show that LLM-elicited prior parameter distributions significantly reduce predictive error compared to uninformative priors in low-data settings. Applied to clinical problems, this translates to fewer required biological samples, lowering cost and resource demands. Prior elicitation also consistently outperforms in-context learning and proves more reliable, at a lower cost, making it a preferred alternative in our setting. We demonstrate the utility of this method across a range of use cases, including clinical applications. For infection prediction, LLM-elicited priors reduced the number of labels required to match the accuracy of an uninformative prior by 55%, 200 days earlier in the study.