In this paper, we investigate the problem of sequential decision-making for Sharpe ratio (SR) maximization in a stochastic bandit setting. We focus on the Thompson Sampling (TS) algorithm, a Bayesian approach celebrated for its empirical performance and exploration efficiency, under the assumption of Gaussian rewards with unknown parameters. Unlike conventional bandit objectives that focus on maximizing cumulative reward, Sharpe ratio optimization introduces an inherent tradeoff between achieving high returns and controlling risk, demanding careful exploration of both the mean and the variance. Our theoretical contributions include a novel regret decomposition tailored to the Sharpe ratio, which highlights the role of information acquisition about the reward distribution in driving learning efficiency. We then establish an upper bound on the regret of the proposed algorithm \texttt{SRTS}, derive a matching lower bound, and thereby show order-optimality. Our results show that Thompson Sampling achieves regret that grows logarithmically in the time horizon, with distribution-dependent factors capturing the difficulty of distinguishing arms based on risk-adjusted performance. Empirical simulations show that \texttt{SRTS} significantly outperforms existing algorithms.
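For concreteness, the following is a minimal sketch of the objective under the Gaussian assumption; the notation ($\mu_i$, $\sigma_i$, $N_i(T)$, a zero risk-free rate) is illustrative and not fixed by this abstract. Each arm $i \in [K]$ has Sharpe ratio
\[
S_i = \frac{\mu_i}{\sigma_i}, \qquad i^\star = \operatorname*{arg\,max}_{i \in [K]} S_i,
\]
and the Sharpe-ratio regret over horizon $T$ may be written as
\[
\mathcal{R}(T) = \sum_{i \neq i^\star} \bigl(S_{i^\star} - S_i\bigr)\, \mathbb{E}\bigl[N_i(T)\bigr],
\]
where $N_i(T)$ counts the pulls of arm $i$ up to time $T$. Logarithmic, distribution-dependent regret then means $\mathcal{R}(T) = O(\log T)$, with constants governed by the gaps $\Delta_i = S_{i^\star} - S_i$.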