Motivated by information sharing on online platforms, we study repeated persuasion between a sender and a stream of receivers: at each time, the sender observes a payoff-relevant state drawn i.i.d. from an unknown distribution and shares state information with the receivers, each of whom chooses an action. The sender seeks to persuade the receivers into taking actions aligned with the sender's preference by selectively sharing state information. However, in contrast to standard models, neither the sender nor the receivers know the distribution, and the sender must persuade while learning the distribution on the fly. We study the sender's learning problem of making persuasive action recommendations so as to achieve low regret against the optimal persuasion mechanism that knows the distribution. To this end, we first propose and motivate a persuasiveness criterion for the unknown-distribution setting that makes robustness a central requirement in the face of uncertainty. Our main result is an algorithm that, with high probability, is robustly persuasive and achieves $O(\sqrt{T\log T})$ regret, where $T$ is the horizon length. Intuitively, at each time our algorithm maintains a set of candidate distributions and chooses a signaling mechanism that is simultaneously persuasive for all of them. Core to our proof is a tight analysis of the cost of robust persuasion, which may be of independent interest. We further show that this regret order is optimal up to logarithmic terms: no algorithm can achieve regret better than $\Omega(\sqrt{T})$.
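The candidate-set idea can be illustrated with a minimal binary-state sketch. This is not the paper's model or algorithm: the specific utilities, the Hoeffding confidence interval, and all function names below are illustrative assumptions. The state is good or bad, the receiver gets +1 for buying in the good state, -1 in the bad state, and 0 otherwise, and the sender always prefers buying. A signaling mechanism is persuasive for every prior in the candidate set as long as its obedience constraint holds at the worst-case (lowest) prior:

```python
import math

def hoeffding_interval(num_good, n, delta=0.05):
    """Confidence interval for mu = P(state = good) from n i.i.d. samples,
    num_good of which were 'good' (Hoeffding's inequality)."""
    mu_hat = num_good / n
    rad = math.sqrt(math.log(2 / delta) / (2 * n))
    return max(0.0, mu_hat - rad), min(1.0, mu_hat + rad)

def robust_buy_prob_when_bad(mu_lo):
    """Probability q of recommending 'buy' in the bad state (buy is always
    recommended in the good state) that keeps the recommendation obedient
    for EVERY prior mu >= mu_lo.

    Obedience requires mu * 1 - (1 - mu) * q >= 0, i.e. q <= mu / (1 - mu).
    Since mu / (1 - mu) is increasing in mu, the binding case is mu = mu_lo,
    so tuning q to mu_lo is simultaneously persuasive for the whole set."""
    if mu_lo >= 0.5:
        return 1.0  # always recommending 'buy' is already obedient
    return mu_lo / (1.0 - mu_lo)

# After, say, 100 samples with 40 'good' observations (illustrative numbers):
lo, hi = hoeffding_interval(40, 100)
q = robust_buy_prob_when_bad(lo)
```

As more samples arrive the interval `[lo, hi]` shrinks, `q` rises toward the full-knowledge optimum `mu / (1 - mu)`, and the per-round cost of robustness vanishes; the abstract's regret analysis concerns how fast this happens over the horizon.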