Generative Artificial Intelligence (AI) is enabling unprecedented automation in content creation and decision support, but it also raises novel risks. This paper presents a first-principles risk assessment framework underlying the IEEE P3396 Recommended Practice for AI Risk, Safety, Trustworthiness, and Responsibility. We distinguish between process risks (risks arising from how AI systems are built or operated) and outcome risks (risks manifested in an AI system's outputs and their real-world effects), arguing that generative AI governance should prioritize outcome risks. Central to our approach is an information-centric ontology that classifies AI-generated outputs into four fundamental categories: (1) perception-level information, (2) knowledge-level information, (3) decision/action plan information, and (4) control tokens (access or resource directives). This classification enables systematic identification of harms and more precise attribution of responsibility to stakeholders (developers, deployers, users, regulators) based on the nature of the information produced. We illustrate how each information type entails distinct outcome risks (e.g., deception, misinformation, unsafe recommendations, security breaches) and requires tailored risk metrics and mitigations. By grounding the framework in the essence of information, human agency, and cognition, we align risk evaluation with how AI outputs influence human understanding and action. The result is a principled approach to AI risk that supports clear accountability and targeted safeguards, in contrast to broad application-based risk categorizations. We include example tables mapping information types to risks and responsibilities. This work aims to inform the IEEE P3396 Recommended Practice and broader AI governance with a rigorous, first-principles foundation for assessing generative AI risks while enabling responsible innovation.