This study provides an in-depth analysis of the ethical and trustworthiness challenges emerging alongside the rapid advancement of generative artificial intelligence (AI) technologies and proposes a comprehensive framework for their systematic evaluation. While generative AI, such as ChatGPT, demonstrates remarkable innovative potential, it simultaneously raises ethical and social concerns, including bias, harmfulness, copyright infringement, privacy violations, and hallucination. Current AI evaluation methodologies, which focus mainly on performance and accuracy, are insufficient to address these multifaceted issues. This study therefore emphasizes the need for new human-centered criteria that also reflect social impact. To this end, it identifies key dimensions for evaluating the ethics and trustworthiness of generative AI (fairness, transparency, accountability, safety, privacy, accuracy, consistency, robustness, explainability, copyright and intellectual property protection, and source traceability) and develops detailed indicators and assessment methodologies for each. It also provides a comparative analysis of AI ethics policies and guidelines in South Korea, the United States, the European Union, and China, deriving key approaches and implications from each. The proposed framework applies across the AI lifecycle and integrates technical assessments with multidisciplinary perspectives, thereby offering practical means of identifying and managing ethical risks in real-world contexts. Ultimately, the study establishes an academic foundation for the responsible advancement of generative AI and delivers actionable insights for policymakers, developers, users, and other stakeholders, supporting the positive societal contributions of AI technologies.