Personalization is a critical yet often overlooked factor in boosting productivity and wellbeing in knowledge-intensive workplaces, where support should reflect individual preferences. Existing tools typically offer uniform guidance, whether auto-generating email responses or prompting break reminders, without accounting for individual behavioral patterns or stress triggers. We introduce AdaptAI, a multimodal AI solution that combines egocentric vision and audio, heart and motion activity signals, and an agentic Large Language Model (LLM) workflow to deliver highly personalized productivity support and context-aware wellbeing interventions. AdaptAI not only automates peripheral tasks (e.g., drafting succinct document summaries, replying to emails) but also continuously monitors the user's unique physiological and situational indicators to dynamically tailor interventions, such as micro-break suggestions or exercise prompts, at the exact point of need. In a preliminary study with 15 participants, AdaptAI demonstrated significant improvements in task throughput and user satisfaction by anticipating user stressors and streamlining daily workflows.