Safety and assurance cases risk becoming detached from the understanding needed for responsible engineering and governance decisions. More broadly, the production and evaluation of critical socio-technical systems increasingly face an understanding challenge: pressures for increased tempo, reduced scrutiny, software complexity, and the growing use of AI-generated artefacts may produce outputs that appear coherent without supporting genuine human comprehension. We argue that understanding should become an explicit, assessable, and defensible component of decision making: what developers, assessors, and decision makers grasp about system behavior, evidence, assumptions, risks, and residual uncertainty. Drawing on Catherine Elgin's epistemology of understanding, we outline a conceptual foundation and then use Assurance 2.0 as an engineering route to operationalize it, through structured argumentation, evidence, confidence, defeaters, and theory-based automation. This leads to two linked artefacts: an Understanding Basis, which justifies why the available understanding is sufficient for a decision, and a Personal Understanding Statement, through which participants make their grasp explicit and open to challenge. We also identify the risk that automation may improve artefact production while weakening understanding, and we propose initial directions for evaluating both efficacy and epistemic impact.