Graph path search is a classic computer science problem that has recently been approached with Reinforcement Learning (RL) due to its potential to outperform prior methods. Existing RL techniques typically assume a global view of the network, which is unsuitable for large-scale, dynamic, and privacy-sensitive settings. Search in social networks is of particular interest due to its numerous applications. Inspired by seminal work in experimental sociology, which showed that decentralized yet efficient search is possible in social networks, we frame the problem as a collaborative task between multiple agents equipped with a limited local view of the network. We propose a multi-agent approach for graph path search that successfully leverages both homophily and structural heterogeneity. Our experiments, carried out over synthetic and real-world social networks, demonstrate that our model significantly outperforms learned and heuristic baselines. Furthermore, our results show that meaningful embeddings for graph navigation can be constructed using reward-driven learning.