Current approaches to AI training treat reasoning as an emergent property of scale. We argue instead that robust reasoning emerges from linguistic self-reflection, itself internalized from high-quality social interaction. Drawing on Vygotskian developmental psychology, we advance three core positions centered on Introspection. First, we argue for the Social Genesis of the Private Mind: learning from conversational environments is emerging as a powerful way to make sense of the world, because the friction of aligning with another agent, whether internal or external, refines and crystallizes the reasoning process. Second, we argue that dialogically scaffolded introspective experiences allow agents to engage in sense-making that decouples learning from immediate data streams, transforming raw environmental data into rich, learnable narratives. Finally, we contend that Dialogue Quality is the New Data Quality: the depth of an agent's private reasoning, and its test-time compute efficiency, are determined by the diversity and rigor of the dialogues it has mastered. We conclude that optimizing these conversational scaffolds is the primary lever for the next generation of general intelligence.