Harm is invoked everywhere from cybersecurity, ethics, and risk analysis to adversarial AI, yet no systematic, agreed-upon list of harms exists, and the concept itself is rarely defined with the precision required for serious analysis. Current discourse relies on vague, underspecified notions of harm, rendering nuanced, structured, and qualitative assessment effectively impossible. This paper addresses that gap directly. We introduce a structured, expandable taxonomy of harms, grounded in an ensemble of contemporary ethical theories, that makes harm explicit, enumerable, and analytically tractable. The proposed framework identifies 66+ distinct harm types, systematically organized into two overarching domains (human and nonhuman) and eleven major categories, each explicitly aligned with one of eleven dominant ethical theories. While extensible by design, the upper levels are intentionally stable. Beyond classification, we introduce a theory-aware taxonomy of victim entities and formalize normative harm attributes, including reversibility and duration, that materially alter ethical severity. Together, these contributions transform harm from a rhetorical placeholder into an operational object of analysis, enabling rigorous ethical reasoning and long-term safety evaluation of AI systems and other sociotechnical domains where harm is a first-order concern.
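The taxonomy's two-level structure (domain → category → harm type) together with the normative attributes that modulate severity can be sketched as a simple data model. This is a minimal illustrative sketch only; all names, categories, and the example entry are hypothetical and do not reproduce the paper's actual harm list.

```python
from dataclasses import dataclass
from enum import Enum

class Domain(Enum):
    """The two overarching domains of the taxonomy."""
    HUMAN = "human"
    NONHUMAN = "nonhuman"

class Reversibility(Enum):
    """Illustrative values for the reversibility attribute."""
    REVERSIBLE = "reversible"
    PARTIALLY_REVERSIBLE = "partially_reversible"
    IRREVERSIBLE = "irreversible"

@dataclass(frozen=True)
class HarmType:
    """One entry in the taxonomy: a harm type placed under a domain
    and one of the eleven major categories, aligned with an ethical
    theory, and carrying the normative attributes (reversibility,
    duration) that materially alter ethical severity."""
    name: str
    domain: Domain
    category: str        # one of the eleven major categories (placeholder)
    ethical_theory: str  # the aligned ethical theory (placeholder)
    reversibility: Reversibility
    duration: str        # e.g. "transient" or "persistent" (placeholder)

# Hypothetical example entry, not taken from the paper's 66+ harm types:
example = HarmType(
    name="loss_of_privacy",
    domain=Domain.HUMAN,
    category="informational_harm",
    ethical_theory="deontology",
    reversibility=Reversibility.IRREVERSIBLE,
    duration="persistent",
)
```

Because entries are plain records, the taxonomy stays extensible at the leaf level while the `Domain` enum and the fixed category set keep the upper levels stable, mirroring the design intent described above.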