Generative AI systems are increasingly embedded in everyday life, yet empirical understanding of how psychological risk associated with AI use emerges, is experienced, and is regulated by users remains limited. We present a large-scale computational thematic analysis of posts collected between 2023 and 2025 from two Reddit communities explicitly focused on AI-related harm and distress, r/AIDangers and r/ChatbotAddiction. Using a multi-agent, LLM-assisted thematic analysis grounded in Braun and Clarke's reflexive framework, we identify 14 recurring thematic categories and synthesize them into five higher-order experiential dimensions of AI-related psychological risk. To further characterize affective patterns, we apply emotion labeling with a BERT-based classifier and visualize emotional profiles across dimensions. Self-regulation difficulties emerge as the most prevalent dimension, and fear concentrates in concerns about autonomy, control, and technical risk. These results provide early empirical evidence, drawn from lived user experience, of how AI safety is perceived and emotionally experienced outside laboratory or speculative contexts, offering a foundation for future AI safety research, evaluation, and responsible governance.