The Cybersecurity Maturity Model Certification (CMMC) framework provides a common standard for protecting sensitive unclassified information in defense contracting. While CMMC defines assessment objectives and control requirements, limited formal guidance exists regarding evidence sampling, the process by which assessors select, review, and validate artifacts to substantiate compliance. Analyzing data collected through an anonymous survey of CMMC-certified assessors and lead assessors, this exploratory study investigates whether inconsistencies in evidence sampling practices exist within the CMMC assessment ecosystem and evaluates the need for a risk-informed standardized sampling methodology. Across 17 usable survey responses, results indicate that evidence sampling practices are predominantly driven by assessor judgment, perceived risk, and environmental complexity rather than formalized standards, with formal statistical sampling models rarely referenced. Participants frequently reported inconsistencies across assessments and expressed broad support for the development of standardized guidance, while generally opposing rigid percentage-based requirements. The findings support the conclusion that the absence of a uniform evidence sampling framework introduces variability that may affect assessment reliability and confidence in certification outcomes. Recommendations are provided to inform future CMMC assessment methodology development and further empirical research.