Most frameworks for assessing the openness of AI systems rely on narrow criteria, such as the availability of data, models, code, documentation, and licensing terms. However, to evaluate whether the intended effects of openness, such as democratization and autonomy, are actually realized, we need a more holistic approach that considers the context of release: who will reuse the system, for what purposes, and under what conditions. To this end, we adapt five lessons from system safety that offer guidance on how openness can be evaluated at the system level.