Bandits with noncompliance separate the learner's recommendation from the treatment actually delivered, so the learning target itself must be chosen. A platform may care about recommendation welfare in the current mediated workflow, about treatment learning for a future direct-control regime, or about anytime-valid uncertainty for either target. These objectives need not agree. We formalize this objective-choice problem, identify the direct-control regime in which recommendation and treatment objectives coincide, and show by example that recommendation welfare can strictly exceed the value of every learner-measurable treatment policy when downstream actors use private information. For finite-context square-IV problems we propose BRACE, a parameter-free phase-doubling algorithm that performs IV inversion only after matrix certification and otherwise returns full-range but honest structural intervals. BRACE delivers simultaneous policy-value validity, fixed-gap identification of the operationally optimal recommendation policy, and, under contextual homogeneity and invertibility, fixed-gap identification of the structurally optimal treatment policy. We complement the theory with a finite-context empirical benchmark spanning direct control, mediated present-versus-future tradeoffs, weak identification, homogeneity failure, and rectangular overidentification. The experiments show that safety appears as regret on easy problems, as abstention and wide valid intervals under weak identification, as a reason to prefer recommendation welfare under homogeneity failure, and as tighter structural uncertainty when extra instruments are available. For rich contexts, we also derive an orthogonal score whose conditional bias factorizes into compliance-model and outcome-model errors, clarifying what must be stabilized for anytime-valid semiparametric IV inference.
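The certify-then-invert logic described above can be sketched as follows. This is a minimal illustration, not the paper's exact construction: the function name, the certification rule (comparing the smallest singular value of the estimated compliance matrix to a cutoff), and the placeholder interval half-width are all assumptions introduced for exposition.

```python
import numpy as np

def brace_step(P_hat, r_hat, sigma_cut, value_range):
    """One hypothetical certification-then-inversion phase.

    P_hat       : estimated instrument-to-treatment compliance matrix (square)
    r_hat       : estimated mean outcome at each instrument level
    sigma_cut   : certification threshold on the smallest singular value
    value_range : (lo, hi) a-priori bounds on structural values
    Returns a list of per-arm structural intervals.
    """
    sigma_min = np.linalg.svd(P_hat, compute_uv=False).min()
    if sigma_min >= sigma_cut:
        # Matrix certified: IV inversion recovers the structural values,
        # reported with a stand-in half-width shrinking in sigma_min.
        theta = np.linalg.solve(P_hat, r_hat)
        half = 0.1 / sigma_min  # placeholder for the phase's confidence width
        return [(t - half, t + half) for t in theta]
    # Not certified: abstain from inversion and return full-range,
    # trivially valid ("honest") intervals instead.
    lo, hi = value_range
    return [(lo, hi)] * len(r_hat)
```

When the compliance matrix is well-conditioned the step inverts and reports finite intervals; when it is near-singular (weak identification), the step abstains and the intervals cover the whole value range, mirroring the abstention behavior the experiments report.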