Personalized nutrition management aims to tailor dietary guidance to an individual's intake and phenotype, but most existing systems handle food logging, nutrient analysis, and recommendation separately. We present a next-generation mobile nutrition assistant that combines image-based meal logging with an LLM-driven multi-agent controller to provide meal-level, closed-loop support. The system coordinates vision, dialogue, and state-management agents to estimate nutrients from photos and update a daily intake budget; it then adapts the next meal plan to user preferences and dietary constraints. Experiments with SNAPMe meal images and simulated users show competitive nutrient estimation, personalized menus, and efficient task plans. These findings demonstrate the feasibility of multi-agent LLM control for personalized nutrition and reveal open challenges in micronutrient estimation from images and in large-scale real-world studies.