As artificial intelligence (AI) systems become increasingly embedded in ethically sensitive domains such as education, healthcare, and transportation, the need to balance accuracy and interpretability in decision-making has become a central concern. Coarse Ethics (CE) is a theoretical framework that justifies coarse-grained evaluations, such as letter grades or warning labels, as ethically appropriate under cognitive and contextual constraints. However, CE has lacked mathematical formalization. This paper introduces Coarse Set Theory (CST), a novel mathematical framework that models coarse-grained decision-making using totally ordered structures and coarse partitions. CST defines hierarchical relations among sets and uses information-theoretic tools, such as the Kullback-Leibler divergence, to quantify the trade-off between simplification and information loss. We demonstrate CST through applications in educational grading and explainable AI (XAI), showing how it enables more transparent and context-sensitive evaluations. By grounding coarse evaluations in set theory and probabilistic reasoning, CST contributes to the ethical design of interpretable AI systems. This work bridges formal methods and human-centered ethics, offering a principled approach to balancing comprehensibility, fairness, and informational integrity in AI-driven decisions.
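The abstract's core idea, measuring the information lost when a fine-grained evaluation is coarsened into grade blocks via the Kullback-Leibler divergence, can be illustrated with a minimal sketch. The partition thresholds, the sample scores, and the choice of a within-block uniform approximation are all illustrative assumptions, not the paper's actual construction:

```python
import math
from collections import Counter

# Illustrative data: exam scores (duplicates allowed) and an assumed
# coarse partition into letter grades (A: 90+, B: 80-89, C: below 80).
scores = [95, 95, 91, 88, 70]

def grade(s):
    if s >= 90:
        return "A"
    if s >= 80:
        return "B"
    return "C"

n = len(scores)
# Fine-grained distribution P over distinct scores.
p = {s: c / n for s, c in Counter(scores).items()}

# Group distinct scores into grade blocks.
blocks = {}
for s in p:
    blocks.setdefault(grade(s), []).append(s)

# Coarse approximation Q: each block's total probability mass is
# spread uniformly over the distinct scores inside that block,
# modeling what a reader of the letter grade alone can infer.
q = {}
for members in blocks.values():
    block_mass = sum(p[s] for s in members)
    for s in members:
        q[s] = block_mass / len(members)

# D(P || Q) in bits: the information lost by reporting only grades.
kl = sum(p[s] * math.log2(p[s] / q[s]) for s in p)
print(f"information loss: {kl:.4f} bits")
```

A finer partition drives the divergence toward zero (no simplification), while a single all-encompassing grade maximizes it, which is the accuracy-interpretability trade-off the abstract refers to.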