This work examines the behavior of reinforcement learning agents in personalization environments and characterizes how policy entropy depends on the class of learning algorithm used. We show that Policy Optimization agents tend to learn low-entropy policies during training, which in practice causes them to prioritize a small set of actions while avoiding others. Conversely, we show that Q-Learning agents are far less susceptible to this behavior and generally maintain high-entropy policies throughout training, which is often preferable in real-world applications. We support these findings with a wide range of numerical experiments as well as theoretical justification, showing that the observed differences in entropy stem from the type of learning employed.
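For concreteness, the policy entropy compared here is understood as the Shannon entropy of the action distribution at a state; this is the standard definition rather than notation fixed by the abstract itself:

$$H\big(\pi(\cdot \mid s)\big) = -\sum_{a \in \mathcal{A}} \pi(a \mid s)\,\log \pi(a \mid s).$$

A low-entropy policy concentrates probability mass on a few actions, while a high-entropy policy spreads it more evenly across the action set $\mathcal{A}$.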