Large language models (LLMs) have demonstrated remarkable capability in function-level code generation tasks. Unlike isolated functions, real-world applications demand reasoning over an entire software system: developers must orchestrate how different components interact, maintain state consistency over time, and ensure the application behaves correctly within lifecycle and framework constraints. Yet no existing benchmark adequately evaluates whether LLMs can bridge this gap and construct entire software systems from scratch. To address this, we propose APPFORGE, a benchmark consisting of 101 software development problems drawn from real-world Android apps. Given a natural language specification detailing the app's functionality, a language model is tasked with implementing that functionality as an Android app from scratch. Developing an Android app from scratch requires understanding and coordinating app states, lifecycle management, and asynchronous operations, calling for LLMs to generate context-aware, robust, and maintainable code. To construct APPFORGE, we design a multi-agent system that automatically summarizes the main functionalities from app documentation and navigates the app to synthesize test cases validating the functional correctness of an implementation. Following rigorous manual verification by Android development experts, APPFORGE incorporates the test cases into an automated evaluation framework that enables reproducible assessment without human intervention, making it easily adoptable for future research. Our evaluation of 12 flagship LLMs shows that all evaluated models achieve low effectiveness, with the best-performing model (GPT-5) producing functionally correct applications for only 18.8% of the problems, highlighting fundamental limitations in current models' ability to handle complex, multi-component software engineering challenges.
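To make the evaluation setup concrete, the sketch below shows the kind of UI-driven test case such an evaluation framework could run against a generated app. It is an illustrative assumption, not the paper's actual harness: the activity name (MainActivity), view IDs (add_note_button, note_title_input, save_button), and the note-taking scenario are hypothetical, and only standard AndroidX Espresso APIs are used.

```kotlin
// Hypothetical Espresso test illustrating automated functional validation
// of a generated Android app. All identifiers below are assumptions for
// illustration; the real benchmark's tests are synthesized per app.
import androidx.test.espresso.Espresso.onView
import androidx.test.espresso.action.ViewActions.click
import androidx.test.espresso.action.ViewActions.typeText
import androidx.test.espresso.assertion.ViewAssertions.matches
import androidx.test.espresso.matcher.ViewMatchers.isDisplayed
import androidx.test.espresso.matcher.ViewMatchers.withId
import androidx.test.espresso.matcher.ViewMatchers.withText
import androidx.test.ext.junit.rules.ActivityScenarioRule
import org.junit.Rule
import org.junit.Test

class AddNoteTest {
    // Launch the app's main screen before each test (MainActivity is assumed).
    @get:Rule
    val activityRule = ActivityScenarioRule(MainActivity::class.java)

    @Test
    fun addingANote_showsItInTheList() {
        // Drive the app through a user-visible workflow...
        onView(withId(R.id.add_note_button)).perform(click())
        onView(withId(R.id.note_title_input)).perform(typeText("Buy milk"))
        onView(withId(R.id.save_button)).perform(click())
        // ...and assert on the resulting UI state, independent of how the
        // model structured the app's internals.
        onView(withText("Buy milk")).check(matches(isDisplayed()))
    }
}
```

Because such tests observe only externally visible behavior, they can score any implementation of the specification without human intervention, which is what makes the benchmark's assessment reproducible.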