Generative AI (GenAI) now produces text, images, audio, and video that are perceptually convincing at scale and at negligible marginal cost. Public debate often frames the associated harms as "deepfakes" or as incremental extensions of misinformation and fraud, but this framing misses a broader socio-technical shift. GenAI enables synthetic realities: coherent, interactive, and potentially personalized information environments in which content, identity, and social interaction are jointly manufactured and mutually reinforcing. We argue that the most consequential risk is not merely the production of isolated synthetic artifacts, but the progressive erosion of shared epistemic ground and institutional verification practices as synthetic content, synthetic identity, and synthetic interaction become easy to generate and hard to audit. This paper (i) formalizes synthetic reality as a layered stack (content, identity, interaction, institutions); (ii) expands a taxonomy of GenAI harms spanning personal, economic, informational, and socio-technical risks; (iii) articulates the qualitative shifts introduced by GenAI (cost collapse, throughput, customization, micro-segmentation, provenance gaps, and trust erosion); and (iv) synthesizes recent risk realizations (2023–2025) into a compact case bank illustrating how these mechanisms manifest in fraud, elections, harassment, documentation, and supply-chain compromise. We then propose a mitigation stack that treats provenance infrastructure, platform governance, institutional workflow redesign, and public resilience as complementary rather than substitutable, and we outline a research agenda focused on measuring epistemic security. We conclude with the Generative AI Paradox: as synthetic media becomes ubiquitous, societies may rationally discount digital evidence altogether.