Generative AI systems often display highly uneven performance across tasks that appear ``nearby'': they can be excellent on one prompt and confidently wrong on another after only small changes in wording or context. We call this phenomenon Artificial Jagged Intelligence (AJI). This paper develops a tractable economic model of AJI that treats adoption as an information problem: users care about \emph{local} reliability, but typically observe only coarse, global quality signals. In a baseline one-dimensional task landscape, the truth is a rough Brownian process and the AI model ``knows'' it at scattered points drawn from a Poisson process. The model interpolates optimally between these points, and local error is measured by the posterior variance. We derive an adoption threshold for a blind user, show that experienced errors are amplified by the inspection paradox, and interpret scaling laws as denser coverage that improves average quality without eliminating jaggedness. We then study mastery and calibration: a calibrated user who can condition on local uncertainty enjoys positive expected value even in domains that fail the blind adoption test. Modelling mastery as learning a reliability map via Gaussian process regression yields a learning-rate bound driven by information gain, clarifying when discovering ``where the model works'' is slow. Finally, we study how scaling interacts with discoverability: when calibrated signals and user mastery accelerate the harvesting of scale improvements, and when opacity can make gains from scaling effectively invisible.
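As a minimal illustration of the baseline (under a normalization of our own, not necessarily the paper's): take the truth to be standard Brownian motion $W$ and suppose the model has observed it at the points $\{x_i\}$ of a rate-$\lambda$ Poisson process. Conditional on these observations, the optimal interpolant between consecutive known points is a Brownian bridge, so for $x_i < x < x_{i+1}$ the posterior variance is
\[
  \operatorname{Var}\!\bigl[W(x) \mid W(x_i), W(x_{i+1})\bigr]
  \;=\; \frac{(x - x_i)\,(x_{i+1} - x)}{x_{i+1} - x_i},
\]
which vanishes at known points and peaks midway between them, so local error stays jagged however dense the coverage. Moreover, the gap containing a uniformly chosen task is length-biased: under rate-$\lambda$ Poisson coverage its expected length is $2/\lambda$ rather than $1/\lambda$, the inspection-paradox amplification of experienced errors referred to above.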