Many aging individuals encounter challenges in effectively tracking their dietary intake, exacerbating their susceptibility to nutrition-related health complications. Self-reporting methods are often inaccurate and suffer from substantial bias; leveraging intelligent prediction methods can automate this process and improve its precision. Recent work has explored computer vision systems that predict nutritional information from food images. Still, these methods are often tailored to specific situations, require other inputs in addition to a food image, or do not provide comprehensive nutritional information. This paper aims to enhance the efficacy of dietary intake estimation by leveraging various neural network architectures to predict a meal's nutritional content directly from its image. Through comprehensive experimentation and evaluation, we present NutritionVerse-Direct, a model built on a vision transformer backbone with three fully connected layers feeding five regression heads that predict the calories (kcal), mass (g), protein (g), fat (g), and carbohydrates (g) present in a meal. NutritionVerse-Direct achieves a combined mean absolute error of 412.6 on the NutritionVerse-Real dataset, an improvement of 25.5% over the Inception-ResNet model, demonstrating its potential for improving dietary intake estimation accuracy.
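The described architecture (a vision transformer backbone, three fully connected layers, and five scalar regression heads) can be sketched structurally as below. This is a minimal illustration, not the paper's implementation: the feature dimension, hidden widths, activation choice, and weight initialization are all assumptions, and a tiny pure-Python stand-in replaces the actual ViT backbone.

```python
import random

random.seed(0)

def dense(x, weights, biases):
    """Fully connected layer: one row of weights per output unit."""
    return [sum(w * xi for w, xi in zip(row, x)) + b
            for row, b in zip(weights, biases)]

def relu(x):
    return [max(0.0, v) for v in x]

def init(n_out, n_in):
    """Random weight matrix (n_out x n_in) and zero biases."""
    return ([[random.gauss(0, 0.1) for _ in range(n_in)] for _ in range(n_out)],
            [0.0] * n_out)

# Assumed sizes: a ViT backbone would emit an image feature vector
# (e.g. 768-d for ViT-Base); an 8-d stand-in is used here for brevity.
FEAT, H = 8, 16

# Three shared fully connected layers, as described in the abstract.
fc_layers = [init(H, FEAT), init(H, H), init(H, H)]

# Five regression heads, one scalar output each.
heads = {name: init(1, H) for name in
         ["calories_kcal", "mass_g", "protein_g", "fat_g", "carbs_g"]}

def predict(features):
    """Map backbone features through the FC stack to five nutrient values."""
    h = features
    for w, b in fc_layers:
        h = relu(dense(h, w, b))
    return {name: dense(h, w, b)[0] for name, (w, b) in heads.items()}

# Stand-in for features extracted from a meal image by the backbone.
image_features = [random.gauss(0, 1) for _ in range(FEAT)]
preds = predict(image_features)
print(sorted(preds))  # the five nutrient keys
```

Separate heads over a shared trunk let the model learn one joint food representation while each nutrient keeps its own output scale; in practice such heads would be trained jointly with a summed per-head regression loss.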