Generative artificial intelligence (Gen AI) systems represent a critical technology with far-reaching implications across multiple domains of society. However, their deployment entails a range of risks and challenges that require careful evaluation. To date, comprehensive, interdisciplinary studies offering a systematic comparison of open-source and proprietary (closed) generative AI systems, particularly regarding their respective advantages and drawbacks, have been lacking. This study aims to: i) critically evaluate and compare the characteristics, opportunities, and challenges of open and closed generative AI models; and ii) propose foundational elements for the development of an Open, Public, and Safe Gen AI framework. Methodologically, we adopt a combined approach that integrates literature review, critical analysis, and comparative analysis. The proposed framework identifies openness, public governance, and security as the essential pillars for shaping the future of trustworthy and inclusive Gen AI. Our findings reveal that open models offer greater transparency, auditability, and flexibility, enabling independent scrutiny and bias mitigation. In contrast, closed systems often provide better technical support and ease of implementation, but at the cost of unequal access and weaker accountability and ethical oversight. The research also highlights the importance of multi-stakeholder governance, environmental sustainability, and regulatory frameworks in ensuring responsible development.