As artificial intelligence (AI) systems are increasingly used in ethically sensitive domains such as education, healthcare, and transportation, balancing accuracy and interpretability has become a central concern. Coarse ethics (CE) motivates coarse-grained evaluations under cognitive, institutional, and contextual constraints, but it still lacks a simple mathematical formalization of admissible coarse-graining and its informational consequences. This paper introduces coarse-grained partitions (CGPs) as a discrete framework for modeling coarse evaluation on a finite totally ordered score scale. A CGP represents coarse evaluation as a partition into grains with an index assignment, and induces a coarse-grained distribution by pushforward. To compare admissible coarse-grainings, we introduce categorical unification (CU), which constructs a canonical fine-scale reconstruction from the coarse representation under minimal assumptions. On this basis, we define a KL-based measure of information loss, $D_{\mathrm{KL\text{-}CU}}$, as the divergence between the original fine-grained distribution and its CU-based reconstruction. We prove that $D_{\mathrm{KL\text{-}CU}}=0$ if and only if the original distribution is already uniform within each grain. This shows that zero loss, in the sense of the proposed measure, is a highly exceptional limiting case rather than a realistic benchmark for ordinary evaluative practice. We also show that the framework leads naturally to an optimization problem for comparing alternative admissible CGPs. Applications to educational grading and explainable AI (XAI) illustrate how the framework clarifies trade-offs among informational fidelity, interpretability, and coarsening cost.
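The core construction described above can be sketched numerically. The snippet below is a minimal illustration, not the paper's implementation: it assumes the CU-based reconstruction redistributes each grain's total probability mass uniformly over the fine scores inside that grain, then computes the KL divergence between the original fine-grained distribution and that reconstruction. Function names (`cu_reconstruction`, `kl_divergence`) and the toy four-score example are hypothetical choices for this sketch.

```python
import math

def cu_reconstruction(p, grains):
    """Assumed CU reconstruction: spread each grain's total mass
    uniformly over the fine-scale scores in that grain."""
    q = [0.0] * len(p)
    for grain in grains:
        mass = sum(p[i] for i in grain)
        for i in grain:
            q[i] = mass / len(grain)
    return q

def kl_divergence(p, q):
    """D_KL(p || q); terms with p[i] == 0 contribute nothing."""
    return sum(pi * math.log(pi / qi) for pi, qi in zip(p, q) if pi > 0)

# Toy fine-grained distribution over 4 scores, coarsened into two grains.
p = [0.1, 0.3, 0.4, 0.2]
grains = [[0, 1], [2, 3]]
q = cu_reconstruction(p, grains)   # [0.2, 0.2, 0.3, 0.3]
loss = kl_divergence(p, q)         # positive: p is not uniform within grains

# A distribution already uniform within each grain incurs zero loss,
# matching the paper's characterization of D_KL-CU = 0.
p_flat = [0.25, 0.25, 0.25, 0.25]
zero_loss = kl_divergence(p_flat, cu_reconstruction(p_flat, grains))
```

Here `loss > 0` because the fine distribution varies inside each grain, while `zero_loss` is exactly `0.0`, illustrating why zero information loss is a degenerate limiting case rather than a typical outcome.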