Children are increasingly using technologies powered by Artificial Intelligence (AI). However, there are growing concerns about privacy risks, particularly for children. Although existing privacy regulations require companies and organizations to implement protections, doing so can be challenging in practice. To address this challenge, this article proposes a framework based on Privacy-by-Design (PbD), which guides designers and developers to take a proactive, risk-averse approach to technology design. Our framework incorporates principles from several privacy regulations, including the General Data Protection Regulation (GDPR) from the European Union, the Personal Information Protection and Electronic Documents Act (PIPEDA) from Canada, and the Children's Online Privacy Protection Act (COPPA) from the United States. We map these principles to the stages of the life cycle of applications that use Large Language Models (LLMs), including data collection, model training, operational monitoring, and ongoing validation. For each stage, we discuss operational controls from recent academic literature that help AI service providers and developers reduce privacy risks while meeting legal standards. In addition, the framework includes child-centered design guidelines drawn from the United Nations Convention on the Rights of the Child (UNCRC), the UK's Age-Appropriate Design Code (AADC), and recent academic research. To demonstrate how the framework can be applied in practice, we present a case study of an LLM-based educational tutor for children under 13. Through our analysis and the case study, we show that applying data protection strategies, such as technical and organizational controls, and making age-appropriate design decisions throughout the LLM life cycle can support the development of AI applications for children that protect privacy and comply with legal requirements.