The ability of large language models (LLMs) to transform, interpret, and comprehend vast quantities of heterogeneous data presents a significant opportunity to enhance data-driven care delivery. However, the sensitive nature of protected health information (PHI) raises valid concerns about data privacy and trust in remote LLM platforms. In addition, the cost associated with cloud-based artificial intelligence (AI) services continues to impede widespread adoption. To address these challenges, we propose a shift in the LLM execution environment from opaque, centralized cloud providers to a decentralized and dynamic fog computing architecture. By executing open-weight LLMs in more trusted environments, such as the user's edge device or a fog layer within a local network, we aim to mitigate the privacy, trust, and financial challenges associated with cloud-based LLMs. We further present SpeziLLM, an open-source framework designed to facilitate rapid, seamless use of different LLM execution layers and to lower barriers to LLM integration in digital health applications. We demonstrate SpeziLLM's broad applicability across six digital health applications, showcasing its versatility in various healthcare settings.