With the increasing availability of multimodal data, many fields urgently need architectures capable of effectively integrating diverse data sources to solve domain-specific problems. This study proposes a hybrid recommendation model that combines the Mixture-of-Experts (MoE) framework with large language models to improve recommendation performance in the healthcare domain. We built a small dataset for recommending healthy foods based on patient descriptions and evaluated the model on several key metrics, including Precision, Recall, NDCG, and MAP@5. The experimental results show that the hybrid model outperforms baseline models that use MoE or a large language model alone, in both accuracy and personalized recommendation effectiveness. We also find that image data provided relatively limited improvement to the personalized recommendation system, particularly for the cold-start problem. In addition, the re-classification of images affected the recommendation results, especially for low-quality images or items whose appearance had changed, leading to suboptimal performance. These findings offer valuable insights for building robust, scalable, and high-performance recommendation systems, advancing the application of personalized recommendation technologies in real-world domains such as healthcare.
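The evaluation metrics named above (Precision@k, Recall@k, NDCG@k, and MAP@5) can be sketched as follows. This is a minimal illustration with hypothetical item IDs, not the paper's actual dataset or evaluation code; it assumes binary relevance and the common normalization of AP@k by min(|relevant|, k).

```python
import math

def precision_at_k(recommended, relevant, k):
    """Fraction of the top-k recommendations that are relevant."""
    return sum(1 for item in recommended[:k] if item in relevant) / k

def recall_at_k(recommended, relevant, k):
    """Fraction of all relevant items captured in the top-k."""
    return sum(1 for item in recommended[:k] if item in relevant) / len(relevant)

def ndcg_at_k(recommended, relevant, k):
    """Normalized discounted cumulative gain with binary relevance."""
    dcg = sum(1 / math.log2(i + 2)
              for i, item in enumerate(recommended[:k]) if item in relevant)
    ideal_hits = min(len(relevant), k)
    idcg = sum(1 / math.log2(i + 2) for i in range(ideal_hits))
    return dcg / idcg if idcg > 0 else 0.0

def average_precision_at_k(recommended, relevant, k):
    """AP@k for one user; MAP@k is the mean of AP@k over all users."""
    hits, score = 0, 0.0
    for i, item in enumerate(recommended[:k]):
        if item in relevant:
            hits += 1
            score += hits / (i + 1)
    return score / min(len(relevant), k) if relevant else 0.0

# Hypothetical example: ranked food recommendations vs. a patient's relevant set
recommended = ["oatmeal", "salmon", "soda", "spinach", "candy"]
relevant = {"oatmeal", "spinach", "lentils"}
print(precision_at_k(recommended, relevant, 5))          # 0.4
print(average_precision_at_k(recommended, relevant, 5))  # 0.5
print(round(ndcg_at_k(recommended, relevant, 5), 3))
```

Reporting both NDCG@k and MAP@5 is complementary: NDCG rewards placing relevant items early via a logarithmic discount, while MAP averages precision at each hit position, so the two can diverge when relevant items cluster late in the ranking.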