This paper introduces a reinforcement learning framework that uses Proximal Policy Optimization (PPO) to dynamically optimize the weights of multiple large language model (LLM)-generated formulaic alphas for stock trading. Formulaic alphas are mathematically defined trading signals derived from price, volume, sentiment, and other data. Although recent studies have shown that LLMs can generate diverse and effective alphas, a critical challenge remains: how to adaptively combine them under varying market conditions. To address this gap, we use a DeepSeek model to generate fifty alphas for ten stocks and then apply PPO to adjust their weights in real time. Experimental results indicate that while the PPO-optimized strategy does not deliver the highest cumulative return on every stock, it achieves comparatively higher Sharpe ratios and smaller maximum drawdowns in most cases. Compared with baseline strategies, including equal-weighted, buy-and-hold, random entry/exit, and momentum approaches, PPO demonstrates more stable risk-adjusted performance. These findings highlight the value of reinforcement learning for alpha-weight allocation and the potential of combining LLM-generated signals with adaptive optimization for robust financial forecasting and trading.
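To make the core idea concrete, the sketch below illustrates one way PPO can learn to re-weight a set of precomputed alpha signals. It is a minimal, hypothetical implementation under stated assumptions, not the paper's code: the environment `AlphaWeightingEnv`, the softmax weighting, the tanh position mapping, and the use of the gymnasium and stable-baselines3 libraries are all illustrative choices, and the random arrays stand in for the fifty LLM-generated alphas.

```python
# Minimal sketch (not the paper's implementation): PPO re-weights a set of
# precomputed alpha signals each step. Requires gymnasium and stable-baselines3.
import numpy as np
import gymnasium as gym
from gymnasium import spaces
from stable_baselines3 import PPO


class AlphaWeightingEnv(gym.Env):
    """At each step the agent emits raw scores over n_alphas signals; their
    softmax combines the alphas into one trading signal, and the reward is
    the resulting one-step portfolio return."""

    def __init__(self, alphas: np.ndarray, returns: np.ndarray):
        super().__init__()
        self.alphas = alphas    # shape (T, n_alphas): per-step alpha values
        self.returns = returns  # shape (T,): next-step asset returns
        n_alphas = alphas.shape[1]
        self.action_space = spaces.Box(-1.0, 1.0, shape=(n_alphas,), dtype=np.float32)
        self.observation_space = spaces.Box(-np.inf, np.inf, shape=(n_alphas,), dtype=np.float32)
        self.t = 0

    def reset(self, *, seed=None, options=None):
        super().reset(seed=seed)
        self.t = 0
        return self.alphas[self.t].astype(np.float32), {}

    def step(self, action):
        weights = np.exp(action) / np.exp(action).sum()  # softmax -> alpha weights
        signal = float(weights @ self.alphas[self.t])    # combined alpha signal
        position = np.tanh(signal)                       # bounded position in [-1, 1]
        reward = position * float(self.returns[self.t])  # one-step P&L as reward
        self.t += 1
        done = self.t >= len(self.returns)
        obs = self.alphas[min(self.t, len(self.alphas) - 1)].astype(np.float32)
        return obs, reward, done, False, {}


# Toy data: 500 steps, 50 alphas (mirroring the fifty alphas in the paper).
rng = np.random.default_rng(0)
alphas = rng.normal(size=(500, 50))
rets = rng.normal(scale=0.01, size=500)

model = PPO("MlpPolicy", AlphaWeightingEnv(alphas, rets), verbose=0)
model.learn(total_timesteps=10_000)
```

In this framing, the weight vector is the PPO action and the per-step return is the reward, so the policy learns which alphas to emphasize as market conditions change; a risk-adjusted reward (e.g., penalizing drawdown) would be a natural variant given the paper's Sharpe-ratio focus.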