Access to large-scale, high-quality healthcare databases is key to accelerating medical research and making insightful discoveries about diseases. However, access to such data is often limited by patient privacy concerns, data-sharing restrictions, and high costs. To overcome these limitations, synthetic patient data has emerged as an alternative. However, synthetic data generation (SDG) methods typically rely on machine learning (ML) models trained on the original data, leading back to the data scarcity problem. We propose an approach to generate synthetic tabular patient data that does not require access to the original data, but only a description of the desired database. We leverage the prior medical knowledge and in-context learning capabilities of large language models (LLMs) to generate realistic patient data, even in a low-resource setting. We quantitatively evaluate our approach against state-of-the-art SDG models using fidelity, privacy, and utility metrics. Our results show that while LLMs may not match the performance of state-of-the-art models trained on the original data, they effectively generate realistic patient data with well-preserved clinical correlations. An ablation study highlights the key elements of our prompt that contribute to high-quality synthetic patient data generation. This approach, which is easy to use and requires neither original data nor advanced ML skills, is particularly valuable for quickly generating custom-designed patient data, supporting project implementation, and providing educational resources.