Personal devices are omnipresent in our lives, seamlessly monitoring our activities, from smart rings tracking sleep patterns to smartwatches watching for missed heartbeats. The rich data streams from these devices fuel advanced Artificial Intelligence (AI) applications. Rather than relying solely on direct sensor measurements, these applications increasingly derive insights from Machine Learning (ML) model estimates. But are these estimates biased? This literature review presents compelling evidence of the hidden biases that creep into ML models deployed on personal devices. We discuss critical bias issues drawn from prior work, such as racial bias in pulse oximeters, weight bias in optical heart rate sensors, and sex bias in audio-based diagnostics. In response to these challenges, we advocate shifting from performance-oriented evaluations of personal devices to assessments grounded in a human-centered approach. To facilitate this transition, we provide guidelines for the design, development, evaluation, and use of unbiased AI in personal devices, recognizing their potential, more than any other technology's, to improve our health, lifestyle, and productivity.