Despite significant advances in foundation models such as DeepSeek-R1 and ChatGPT, their deployment in medical settings faces critical challenges, including high computational requirements and domain-specific knowledge barriers. This paper presents an efficient, lightweight medical large language model architecture that systematically addresses these challenges through optimization along three dimensions: knowledge acquisition, model compression, and computational efficiency. We design a knowledge transfer pipeline from DeepSeek-R1-Distill-70B to DeepSeek-R1-Distill-7B that uses Low-Rank Adaptation (LoRA) for precise retention of medical knowledge. Through 4-bit quantization and mixed-precision strategies, we achieve substantial model compression while preserving medical reasoning capabilities. The inference framework incorporates Flash Attention acceleration and continuous batching, complemented by specialized prompt templates for diverse medical queries. Experimental evaluation on medical benchmarks demonstrates that our approach maintains 92.1% accuracy on USMLE examinations while reducing memory consumption by 64.7% and inference latency by 12.4% relative to baseline models. This work provides a practical solution for deploying advanced language models in resource-constrained medical environments, enabling broader access to AI-assisted healthcare.
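To make the compression recipe concrete, the sketch below shows one plausible way to combine the ingredients the abstract names: loading the 7B student with 4-bit quantization and a bf16 mixed-precision compute path, enabling Flash Attention, and attaching LoRA adapters for medical fine-tuning. This is a minimal illustration using the Hugging Face `transformers`, `bitsandbytes`, and `peft` stack, not the authors' released code; the checkpoint name, LoRA rank, and target modules are assumptions.

```python
# Minimal sketch (not the paper's implementation) of a 4-bit quantized 7B student
# with LoRA adapters and Flash Attention, per the abstract's description.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer, BitsAndBytesConfig
from peft import LoraConfig, get_peft_model

base = "deepseek-ai/DeepSeek-R1-Distill-Qwen-7B"  # assumed checkpoint name

# 4-bit NF4 weights with bf16 compute, matching the mixed-precision strategy.
bnb_config = BitsAndBytesConfig(
    load_in_4bit=True,
    bnb_4bit_quant_type="nf4",
    bnb_4bit_compute_dtype=torch.bfloat16,
)

model = AutoModelForCausalLM.from_pretrained(
    base,
    quantization_config=bnb_config,
    attn_implementation="flash_attention_2",  # Flash Attention acceleration
    device_map="auto",
)
tokenizer = AutoTokenizer.from_pretrained(base)

# Low-rank adapters on the attention projections; rank and alpha are assumptions.
lora_config = LoraConfig(
    r=16,
    lora_alpha=32,
    target_modules=["q_proj", "k_proj", "v_proj", "o_proj"],
    lora_dropout=0.05,
    task_type="CAUSAL_LM",
)
model = get_peft_model(model, lora_config)
model.print_trainable_parameters()  # only the adapter weights remain trainable
```

Under this setup, knowledge transfer from the 70B teacher would amount to fine-tuning only the small adapter matrices on teacher-generated medical data, which is what keeps the approach tractable on resource-constrained hardware.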