The ubiquity of Large Language Models (LLMs) is driving a paradigm shift in which user convenience supersedes computational efficiency. This article defines the "Plausibility Trap": a phenomenon where individuals with access to Artificial Intelligence (AI) models deploy expensive probabilistic engines for simple deterministic tasks, such as Optical Character Recognition (OCR) or basic verification, resulting in significant resource waste. Through micro-benchmarks and case studies on OCR and fact-checking, we quantify the "efficiency tax" (demonstrating a roughly 6.5x latency penalty) and the risks of algorithmic sycophancy. To counter this, we introduce Tool Selection Engineering and the Deterministic-Probabilistic Decision Matrix, a framework to help developers determine when to use Generative AI and, crucially, when to avoid it. We argue for a curriculum shift, emphasizing that true digital literacy lies not only in knowing how to use Generative AI, but also in knowing when not to use it.