The advent of Large Language Models (LLMs) has ushered in a new era for design science in Information Systems, demanding a paradigm shift in tailoring LLM design to business contexts. We propose and test a novel framework for customizing LLMs for general business contexts that aims to achieve three fundamental objectives simultaneously: (1) aligning conversational patterns, (2) integrating in-depth domain knowledge, and (3) embodying theory-driven soft skills and core principles. We design methodologies that combine domain-specific theory with Supervised Fine-Tuning (SFT) to achieve these objectives simultaneously. We instantiate the proposed framework in the context of medical consultation. Specifically, we carefully curate a large volume of real doctors' consultation records together with medical knowledge drawn from multiple professional databases. Additionally, drawing on medical theory, we identify three soft skills and core principles of human doctors: professionalism, explainability, and emotional support, and we design approaches to integrate these traits into LLMs. We demonstrate the feasibility of our framework through online experiments with thousands of real patients as well as evaluations by domain experts and consumers. Experimental results show that the customized LLM substantially outperforms the untuned base model in medical expertise as well as in consumer satisfaction and trustworthiness, and that it substantially narrows the gap between untuned LLMs and human doctors, elevating LLMs to the level of human experts. Additionally, we delve into the characteristics of textual consultation records and adopt interpretable machine learning techniques to identify what drives the performance gain. Finally, we showcase the practical value of our model through a decision support system designed to assist human doctors in a lab experiment.