We consider the problem of placing reward generators to be collected by randomly moving agents in a network. In many settings, the precise mobility pattern may be one of several possibilities, determined by parameters outside our control, such as weather conditions. The placement should be robust to this uncertainty, so as to obtain a satisfactory total reward across all possible networks. To study such scenarios, we introduce the Robust Reward Placement problem (RRP). Agents move randomly according to a Markovian Mobility Model with a predetermined set of locations whose connectivity is chosen adversarially from a known set $\Pi$ of candidates. We aim to select, within a budget, a set of reward states that maximizes the minimum, over all candidates in $\Pi$, of the ratio of the collected total reward to the optimal collectable reward under the same candidate. We prove that RRP is NP-hard and inapproximable, and develop $\Psi$-Saturate, a pseudo-polynomial time algorithm that achieves an $\epsilon$-additive approximation by exceeding the budget constraint by a factor that scales as $O(\ln |\Pi|/\epsilon)$. In addition, we present several heuristics, most prominently one inspired by a dynamic programming algorithm for the max-min 0-1 KNAPSACK problem. We corroborate our theoretical analysis with an experimental evaluation on synthetic and real data.
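To make the max-min ratio objective concrete, the following formulation is a sketch under notation we introduce here (not necessarily the paper's): $f_\pi(S)$ denotes the total reward collected when rewards are placed on the state set $S$ under candidate $\pi \in \Pi$, $c(S)$ is the cost of $S$, and $B$ is the budget:
$$
\max_{S \,:\, c(S) \le B} \; \min_{\pi \in \Pi} \; \frac{f_\pi(S)}{\displaystyle\max_{S' \,:\, c(S') \le B} f_\pi(S')}.
$$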
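As a rough illustration of the kind of bicriteria scheme behind guarantees of this form, the sketch below follows the classic Saturate idea (Krause et al.): binary-search a target value and greedily cover a truncated objective while allowing the budget to be inflated. It is only a sketch under our own assumptions (unit costs, normalized rewards, the function name and signature are illustrative); the paper's $\Psi$-Saturate may differ in its details.

```python
import math

def saturate_sketch(f, candidates, ground_set, budget, eps=0.1):
    """Saturate-style bicriteria greedy (a sketch, not the paper's algorithm).

    f(pi, S)   -- normalized reward of placement S under candidate pi, in [0, 1]
    candidates -- the scenario set Pi
    ground_set -- set of states eligible to host a reward
    budget     -- cardinality budget (unit costs assumed for simplicity)
    """
    alpha = 1 + math.log(max(len(candidates), 1)) / eps  # budget inflation factor
    lo, hi, best = 0.0, 1.0, set()
    while hi - lo > eps:                       # binary search on the target value c
        c = (lo + hi) / 2
        covered = lambda S: sum(min(f(pi, S), c) for pi in candidates)
        S = set()
        # Greedily add the element with the largest marginal gain of the
        # truncated objective, up to the inflated budget alpha * budget.
        while covered(S) < c * len(candidates) and len(S) < alpha * budget:
            gains = {v: covered(S | {v}) - covered(S) for v in ground_set - S}
            v = max(gains, key=gains.get)
            if gains[v] <= 0:
                break
            S.add(v)
        if covered(S) >= c * len(candidates) - 1e-12:
            lo, best = c, S                    # target c is achievable; raise it
        else:
            hi = c                             # not achievable; lower it
    return best
```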