As large language models (LLMs) become integrated into everyday and high-stakes decision-making, they inherit the ambiguity and biases of human language. While they produce fluent and coherent outputs, they rely on statistical pattern prediction rather than grounded reasoning, creating a risk of outputs that are plausible but incorrect. This paper argues that these failures are not only technical but also cognitive. LLMs reproduce associative patterns that resemble intuitive human reasoning, and these patterns can amplify systematic misinterpretations when models interact with human users. To analyse this, we introduce the Rose-Frame, a cognitive-epistemological framework for diagnosing breakdowns in human-AI interaction. The framework identifies three recurrent traps: (i) map vs territory, distinguishing representations from reality; (ii) intuition vs reason, separating fast associative judgments from reflective reasoning; and (iii) conflict vs confirmation, examining whether ideas are critically tested or merely mutually reinforced. These mechanisms can compound into epistemic drift when human and model reasoning interact. We show how these failures emerge in practice and propose human-side interventions, including interpretive cues, reflective prompts, and structured disagreement, to stabilise reasoning. Rather than modifying models, the framework focuses on governing the interaction. The central claim is that fluency can create an illusion of understanding. Aligning AI therefore requires not only technical improvements but also structures that enable reflective and falsifiable human oversight.