This paper proposes an epistemological shift in the analysis of large generative models, replacing the category of "Large Language Models" (LLMs) with that of "Large Discourse Models" (LDMs), and subsequently with that of "Artificial Discursive Agents" (ADAs). The theoretical framework rests on an ontological triad distinguishing three regulatory instances: the apprehension of the phenomenal regularities of the referential world, the structuring of embodied cognition, and the structural-linguistic sedimentation of the utterance within a socio-historical context. LDMs, which operate on the product of these three instances (the document), model the discursive projection of a portion of human experience reified in the training corpus. The proposed program aims to replace the "fascination/fear" dichotomy with public trials and procedures that make the place, uses, and limits of artificial discursive agents in contemporary social space decipherable, situating this approach within a perspective of governance and co-regulation involving the State, industry, civil society, and academia.