This paper proposes an epistemological shift in the analysis of large generative models, replacing the category of "Large Language Model" (LLM) first with that of "Large Discourse Model" (LDM), and then with that of "Artificial Discursive Agent" (ADA). The theoretical framework rests on an ontological triad distinguishing three regulatory instances: the apprehension of the phenomenal regularities of the referential world, the structuring of embodied cognition, and the structural-linguistic sedimentation of the utterance within a socio-historical context. Operating on the product of these three instances (the document), LDMs model the discursive projection of a portion of human experience reified by the training corpus. The proposed program aims to replace the "fascination/fear" dichotomy with public trials and procedures that make the place, uses, and limits of artificial discursive agents decipherable within contemporary social space, situating this approach within a perspective of governance and co-regulation involving the state, industry, civil society, and academia.