Large Language Models (LLMs) such as ChatGPT have rendered visible the fragility of contemporary knowledge infrastructures by simulating coherence while bypassing traditional modes of citation, authority, and validation. This paper introduces the Situated Epistemic Infrastructures (SEI) framework as a diagnostic tool for analyzing how knowledge becomes authoritative across hybrid human-machine systems under post-coherence conditions. Rather than relying on stable scholarly domains or bounded communities of practice, SEI traces how credibility is mediated across institutional, computational, and temporal arrangements. Integrating insights from infrastructure studies, platform theory, and epistemology, the framework foregrounds coordination over classification, emphasizing the need for anticipatory and adaptive models of epistemic stewardship. The paper contributes to debates on AI governance, knowledge production, and the ethical design of information systems by offering a robust alternative to representationalist models of scholarly communication.