Large language models (LLMs), trained on diverse data, effectively acquire a breadth of information across various domains. However, their computational complexity, cost, and lack of transparency hinder their direct application to specialised tasks. In fields such as clinical research, acquiring expert annotations or prior knowledge about predictive models is often costly and time-consuming. This study proposes using LLMs to elicit expert prior distributions for predictive models. This approach also provides an alternative to in-context learning, in which language models are tasked with making predictions directly. We compare LLM-elicited and uninformative priors, evaluate whether LLMs truthfully generate parameter distributions, and propose a model selection strategy for in-context learning and prior elicitation. Our findings show that LLM-elicited prior parameter distributions significantly reduce predictive error compared to uninformative priors in low-data settings. Applied to clinical problems, this translates to fewer required biological samples, lowering cost and resource use. Prior elicitation also consistently outperforms in-context learning and proves more reliable, at a lower cost, making it a preferred alternative in our setting. We demonstrate the utility of this method across various use cases, including clinical applications. For infection prediction, using LLM-elicited priors reduced the number of labels required to reach the same accuracy as an uninformative prior by 55%, achieving that accuracy 200 days earlier in the study.
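To make the core idea concrete, below is a minimal sketch of how an LLM-elicited prior could be used in a low-data setting: a Gaussian prior over logistic-regression coefficients, compared against a broad uninformative prior, fitted by MAP estimation. The feature names, toy data, and elicited means and standard deviations are hypothetical placeholders (standing in for values parsed from an LLM's response), not the paper's actual setup.

```python
# Sketch: MAP logistic regression under an LLM-elicited Gaussian prior
# versus a broad uninformative prior. All numbers here are illustrative.
import numpy as np
from scipy.optimize import minimize
from scipy.special import expit  # logistic sigmoid

rng = np.random.default_rng(0)

# Toy low-data clinical setting: 3 standardized features, 20 labelled samples.
feature_names = ["crp", "temperature", "wbc_count"]  # hypothetical
true_w = np.array([1.5, 0.8, 1.2])
X = rng.normal(size=(20, 3))
y = (rng.random(20) < expit(X @ true_w)).astype(float)

# Hypothetical LLM-elicited prior: the LLM is asked, per feature, for a
# mean and standard deviation of the standardized coefficient.
elicited_mean = np.array([1.0, 0.5, 1.0])
elicited_std = np.array([0.5, 0.5, 0.5])

# Uninformative baseline: zero-mean, very wide Gaussian.
flat_mean = np.zeros(3)
flat_std = np.full(3, 10.0)

def neg_log_posterior(w, mu, sigma):
    """Negative log-posterior: Bernoulli likelihood plus Gaussian prior."""
    z = X @ w
    nll = -np.sum(y * z - np.logaddexp(0.0, z))     # stable -log-likelihood
    nlp = np.sum((w - mu) ** 2 / (2 * sigma ** 2))  # Gaussian prior penalty
    return nll + nlp

for label, (mu, sigma) in {"elicited": (elicited_mean, elicited_std),
                           "uninformative": (flat_mean, flat_std)}.items():
    w_map = minimize(neg_log_posterior, np.zeros(3), args=(mu, sigma)).x
    print(label, np.round(w_map, 2))
```

With few samples, the informative prior pulls the coefficients toward the elicited values, which is exactly where the reduction in predictive error reported above would come from; as the number of labelled samples grows, the likelihood dominates and the two priors converge.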