Large Language Models (LLMs) are reshaping organizational knowing by unsettling the epistemological foundations of representational and practice-based perspectives. We conceptualize LLMs as Haraway-ian monsters, that is, hybrid, boundary-crossing entities that destabilize established categories while opening new possibilities for inquiry. Focusing on analogizing as a fundamental driver of knowledge, we examine how LLMs generate connections through large-scale statistical inference. Analyzing their operation across the dimensions of surface/deep analogies and near/far domains, we highlight both their capacity to expand organizational knowing and the epistemic risks they introduce. Building on this, we identify three challenges of living with such epistemic monsters: the transformation of inquiry, the growing need for dialogical vetting, and the redistribution of agency. By foregrounding the entangled dynamics of knowing-with-LLMs, the paper extends organizational theory beyond human-centered epistemologies and invites renewed attention to how knowledge is created, validated, and acted upon in the age of intelligent technologies.