We introduce MARKET-BENCH, a benchmark that evaluates large language models (LLMs) on introductory quantitative trading tasks by asking them to construct executable backtesters from natural-language strategy descriptions and market assumptions. Each instance specifies one of three canonical strategies -- scheduled trading on Microsoft (NASDAQ: MSFT), pairs trading on Coca-Cola (NYSE: KO) and Pepsi (NASDAQ: PEP), or delta hedging on MSFT -- and models must produce code whose P\&L, drawdown, and position paths match a verifiable reference implementation. We assess twelve state-of-the-art models using a multi-round pass@k metric that separates structural reliability (whether the backtest runs) from numerical accuracy (mean absolute error of the backtest metrics). While most models reliably execute the simplest strategy (average pass@3 of 0.80), errors vary by orders of magnitude across models and tasks: Gemini 3 Pro and Claude 4.5 Sonnet combine strong reliability with low error on simpler strategies, GPT-5.1 Codex-Max achieves perfect pass@1 on the first two strategies and the lowest best-run error on the easiest task, and Qwen3 Max attains perfect pass@3 yet sometimes produces inaccurate P\&L paths. These results show that current LLMs can scaffold basic trading infrastructure but still struggle to reason robustly about prices, inventory, and risk; we release MARKET-BENCH and a public leaderboard at https://marketbench.ai.
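To make the two-part score concrete, the following is a minimal sketch of how structural reliability and numerical accuracy could be computed separately, assuming the standard unbiased pass@k estimator (Chen et al., 2021) and NumPy arrays for the backtest paths; the names `pass_at_k` and `metric_mae` are illustrative, not the benchmark's released scoring code.

```python
import math
import numpy as np

def pass_at_k(n: int, c: int, k: int) -> float:
    """Unbiased pass@k estimator: the probability that at least one of k
    samples drawn from n independent runs succeeds, given c successful runs.
    Here "success" means the generated backtest executes end to end."""
    if n - c < k:
        return 1.0
    return 1.0 - math.comb(n - c, k) / math.comb(n, k)

def metric_mae(candidate: np.ndarray, reference: np.ndarray) -> float:
    """Mean absolute error between a candidate backtest path (e.g. the P&L,
    drawdown, or position series) and the reference implementation's path."""
    return float(np.mean(np.abs(candidate - reference)))

# Example: 2 of 3 rounds execute, so pass@1 = 2/3 while pass@3 = 1.0.
print(pass_at_k(n=3, c=2, k=1))  # 0.666...
print(pass_at_k(n=3, c=2, k=3))  # 1.0
```

Separating the two quantities matters because a model can score perfectly on executability while still producing P\&L paths far from the reference, as the abstract notes for Qwen3 Max.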