In human social systems, debates are often seen as a means of resolving differences of opinion. In reality, however, debates frequently incur significant communication costs, especially when the opponents are stubborn. Inspired by this phenomenon, this paper examines the impact of malicious agents on the evolution of normal agents' opinions from the perspective of opinion evolution cost, and proposes corresponding solutions for the scenario in which malicious agents hold differing opinions in multi-agent systems (MASs). First, the paper analyzes the negative impact of malicious agents on the opinion evolution process and quantifies the evolutionary cost they impose, which provides the theoretical foundation for the proposed solution. Next, building on the opinion evolution process, a strategy is introduced in which agents dynamically adjust trust values as opinions evolve, gradually isolating malicious agents; this isolation is achieved even when malicious agents constitute a majority. Additionally, an evolution rate adjustment mechanism is introduced, allowing the system to flexibly regulate the evolution process in complex situations and effectively trade off opinion evolution rate against cost. Extensive numerical simulations demonstrate that the algorithm effectively isolates the negative influence of malicious agents and achieves a balance between opinion evolution cost and convergence speed.
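The trust-adjustment idea described above can be illustrated with a minimal sketch. Everything here is an illustrative assumption rather than the paper's actual algorithm: the function name `evolve`, the specific update rule, and the parameters `eta` (step size), `tau` (deviation threshold), and `decay` (trust decay factor) are all hypothetical. The sketch assumes normal agents move toward a trust-weighted average of neighbors' opinions, while trust in any neighbor whose opinion deviates beyond the threshold decays geometrically, so stubborn malicious agents are gradually cut off.

```python
import numpy as np

def evolve(opinions, malicious, steps=200, eta=0.5, tau=0.2, decay=0.5):
    """Illustrative sketch of trust-based opinion evolution (hypothetical
    update rule, not the paper's algorithm). Normal agents average trusted
    neighbors' opinions; trust in a neighbor decays whenever that neighbor's
    opinion deviates from one's own by more than the threshold tau."""
    n = len(opinions)
    x = np.array(opinions, dtype=float)
    trust = np.ones((n, n))          # trust[i, j]: agent i's trust in agent j
    np.fill_diagonal(trust, 0.0)     # no self-trust term in the average
    for _ in range(steps):
        new_x = x.copy()
        for i in range(n):
            if i in malicious:
                continue             # malicious agents keep their opinions fixed
            # decay trust toward neighbors whose opinions deviate too much
            far = np.abs(x - x[i]) > tau
            trust[i, far] *= decay
            w = trust[i]
            if w.sum() > 0:
                # move toward the trust-weighted average of neighbor opinions
                new_x[i] = x[i] + eta * (w @ (x - x[i])) / w.sum()
        x = new_x
    return x, trust

# Three normal agents near 0.1-0.2; two fixed malicious agents near 0.9.
x, trust = evolve([0.1, 0.2, 0.15, 0.9, 0.95], malicious={3, 4})
```

In this toy run the normal agents reach consensus among themselves while their trust in the malicious agents decays toward zero, so the malicious pull on the final opinion is bounded even though the malicious agents never yield.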