Large language models (LLMs), trained on diverse data, effectively acquire a breadth of information across many domains. However, their computational complexity, cost, and lack of transparency hinder their direct application to specialised tasks. In fields such as clinical research, acquiring expert annotations or prior knowledge about predictive models is often costly and time-consuming. This study proposes using LLMs to elicit expert prior distributions for predictive models. This approach also provides an alternative to in-context learning, in which language models are tasked with making predictions directly. In this work, we compare LLM-elicited and uninformative priors, evaluate whether LLMs truthfully generate parameter distributions, and propose a model selection strategy for in-context learning and prior elicitation. Our findings show that LLM-elicited prior parameter distributions significantly reduce predictive error compared with uninformative priors in low-data settings. Applied to clinical problems, this translates to fewer required biological samples, lowering cost and resource use. Prior elicitation also consistently outperforms, and proves more reliable than, in-context learning at a lower cost, making it a preferred alternative in our setting. We demonstrate the utility of this method across a range of use cases, including clinical applications. For infection prediction, LLM-elicited priors reduced the number of labels required to match the accuracy of an uninformative prior by 55%, reaching that accuracy 200 days earlier in the study.
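To make the idea concrete, the following is a minimal sketch of how an elicited prior can enter a predictive model. It is not the paper's implementation: the prior means and standard deviations are hypothetical stand-ins for values one would obtain by querying an LLM, and the model is a simple MAP-fitted Bayesian logistic regression on synthetic "low-data" inputs.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical LLM-elicited prior: independent Gaussians over the weights
# of a logistic regression. In practice these numbers would come from
# prompting an LLM about each feature's likely effect on the outcome.
elicited_mean = np.array([1.5, -1.0])
elicited_std = np.array([0.5, 0.5])

# Uninformative prior: zero mean, very wide.
uninf_mean = np.zeros(2)
uninf_std = np.full(2, 10.0)

# Tiny synthetic low-data problem whose true weights happen to be close
# to the (hypothetical) expert knowledge encoded in the elicited prior.
true_w = np.array([1.4, -0.9])
X = rng.normal(size=(8, 2))
y = (rng.random(8) < 1.0 / (1.0 + np.exp(-X @ true_w))).astype(float)

def map_fit(X, y, prior_mean, prior_std, lr=0.1, steps=2000):
    """MAP estimate of logistic-regression weights under a Gaussian prior:
    gradient descent on negative log-likelihood plus the Gaussian penalty."""
    w = prior_mean.copy()
    for _ in range(steps):
        p = 1.0 / (1.0 + np.exp(-X @ w))
        grad = X.T @ (p - y) + (w - prior_mean) / prior_std**2
        w = w - lr * grad
    return w

w_elicited = map_fit(X, y, elicited_mean, elicited_std)
w_uninf = map_fit(X, y, uninf_mean, uninf_std)

print("elicited-prior MAP weights:", w_elicited)
print("uninformative-prior MAP weights:", w_uninf)
```

With only eight observations, the informative prior dominates the fit, which is exactly the regime the abstract targets: when the elicited prior reflects genuine expert knowledge, far fewer labels are needed to reach a given accuracy than under an uninformative prior.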