We study reinforcement learning for revenue management with delayed feedback, where a substantial fraction of value is determined by customer cancellations and modifications observed days after booking. We propose \emph{choice-model-assisted RL}: a calibrated discrete choice model serves as a fixed partial world model that imputes the delayed component of the learning target at decision time. In the fixed-model deployment regime, we prove that tabular Q-learning with model-imputed targets converges to an $O(\varepsilon/(1-\gamma))$ neighborhood of the optimal Q-function, where $\varepsilon$ summarizes partial-model error, plus an additional $O(t^{-1/2})$ sampling term. Experiments in a simulator calibrated from 61{,}619 hotel bookings (1{,}088 independent runs) show: (i) no statistically detectable difference from a maturity-buffer DQN baseline in stationary settings; (ii) revenue gains under in-family parameter shifts, significant in 5 of 10 shift scenarios after Holm--Bonferroni correction (up to 12.4\%); and (iii) consistent degradation under structural misspecification, where the choice model's assumptions are violated (1.4--2.6\% lower revenue). These results characterize when partial behavioral models improve robustness under distribution shift and when they introduce harmful bias.
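As a concrete illustration (a minimal sketch; the symbols $r_t$, $\hat{r}$, and $\alpha_t$ are our notation, not fixed by the abstract), the model-imputed target modifies the tabular Q-learning update by substituting the choice model's estimate $\hat{r}(s_t, a_t)$ for the not-yet-observed delayed reward:
\[
Q_{t+1}(s_t, a_t) \leftarrow (1 - \alpha_t)\, Q_t(s_t, a_t) + \alpha_t \Big[ r_t + \hat{r}(s_t, a_t) + \gamma \max_{a'} Q_t(s_{t+1}, a') \Big],
\]
where $r_t$ is the immediately observed revenue, $\alpha_t$ is the step size, and $\hat{r}$ is held fixed during deployment. If $\hat{r}$ is uniformly within $\varepsilon$ of the true expected delayed reward, standard error propagation through the $\gamma$-contractive Bellman operator yields an $O(\varepsilon/(1-\gamma))$ bias, consistent with the convergence guarantee stated above.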