Current artificial intelligence systems exhibit strong performance on narrow tasks, while existing evaluation frameworks provide limited insight into generality across domains. We introduce the Artificial General Intelligence Testbed (AGITB), a complementary benchmarking framework grounded in twelve explicitly stated axioms and implemented as a suite of twelve automated, simple, and reusable tests. AGITB evaluates models on their ability to learn and to predict the next input in a temporal sequence whose semantic content is initially unknown to the model. The framework targets core computational properties, such as determinism, adaptability, and generalisation, that parallel principles observed in biological information processing. Designed to resist brute-force or memorisation-based strategies, AGITB requires autonomous learning across previously unseen environments, in a manner broadly inspired by cortical computation. Preliminary application of AGITB suggests that no contemporary system evaluated to date satisfies all test criteria, indicating that the benchmark provides a structured and interpretable means of assessing progress toward more general learning capabilities. A reference implementation of AGITB is freely available on GitHub.