We present BanditLP, a scalable multi-stakeholder contextual bandit framework that unifies neural Thompson Sampling for learning objective-specific outcomes with a large-scale linear program for constrained action selection at serving time. The methodology is application-agnostic, compatible with arbitrary neural architectures, and deployable at web scale, with an LP solver capable of handling billions of variables. Experiments on public benchmarks and synthetic data show consistent gains over strong baselines. We apply this approach in LinkedIn's email marketing system and demonstrate measurable business wins, illustrating the value of integrating exploration with constrained optimization in production.
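The two-stage pattern the abstract describes, sampling plausible rewards from a posterior (Thompson Sampling), then choosing actions via a linear program under global constraints, can be sketched at toy scale as follows. This is an illustrative sketch only: the user/action counts, the Gaussian posterior standing in for a neural posterior, and the single `send_budget` constraint are all assumptions, not details from the paper.

```python
import numpy as np
from scipy.optimize import linprog

rng = np.random.default_rng(0)

# Toy problem (illustrative assumptions, not from the paper): n_users members,
# each assigned exactly one of n_actions email treatments, with a global cap
# on how many members may receive the "send" action.
n_users, n_actions = 5, 3
send_action, send_budget = 0, 2

# Thompson Sampling step: in place of a neural posterior, draw one sampled
# reward per (user, action) pair from a simple Gaussian posterior.
post_mean = rng.normal(size=(n_users, n_actions))
post_std = 0.1 * np.ones((n_users, n_actions))
sampled_reward = rng.normal(post_mean, post_std)

# LP step: maximize total sampled reward subject to
#   (1) each user receives exactly one action,
#   (2) at most `send_budget` users receive the send action.
c = -sampled_reward.ravel()  # linprog minimizes, so negate

# Constraint (1): one action per user.
A_eq = np.zeros((n_users, n_users * n_actions))
for u in range(n_users):
    A_eq[u, u * n_actions:(u + 1) * n_actions] = 1.0
b_eq = np.ones(n_users)

# Constraint (2): global budget on the send action.
A_ub = np.zeros((1, n_users * n_actions))
A_ub[0, send_action::n_actions] = 1.0
b_ub = np.array([send_budget])

res = linprog(c, A_ub=A_ub, b_ub=b_ub, A_eq=A_eq, b_eq=b_eq, bounds=(0, 1))
assignment = res.x.reshape(n_users, n_actions)
chosen = assignment.argmax(axis=1)
```

At production scale the paper's framework replaces the Gaussian stand-in with a neural posterior and the dense LP above with a solver handling billions of variables; the decomposition into a sampling step and a constrained-selection step is the same.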