Large discrete action spaces (LDAS) remain a central challenge in reinforcement learning. Existing approaches can handle unstructured LDAS with up to a few million actions. However, many real-world applications in logistics, production, and transportation systems have combinatorial action spaces whose size grows well beyond millions of actions, even on small instances. Fortunately, such action spaces exhibit structure, e.g., equally spaced discrete resource units. With this work, we focus on handling structured LDAS (SLDAS) with sizes beyond the reach of existing methods: we propose Dynamic Neighborhood Construction (DNC), a novel exploitation paradigm for SLDAS. We present a scalable neighborhood exploration heuristic that utilizes this paradigm and efficiently explores the discrete neighborhood around the continuous proxy action in structured action spaces with up to $10^{73}$ actions. We demonstrate the performance of our method by benchmarking it against three state-of-the-art approaches designed for large discrete action spaces across two distinct environments. Our results show that DNC matches or outperforms these approaches while being computationally more efficient. Furthermore, our method scales to action spaces that have so far remained computationally intractable for existing methodologies.
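To make the core idea concrete, the following is a minimal, hypothetical sketch of the neighborhood-search step described above: a continuous proxy action is rounded to the nearest point of an equally spaced discrete grid, axis-wise perturbations enumerate its discrete neighbors, and a scoring function (e.g., a critic's Q-value) selects the best neighbor. All function names, the bounds, and the scoring function are illustrative assumptions, not the paper's actual implementation.

```python
def discrete_neighbors(proxy, low, high, step=1):
    """Round the continuous proxy action onto the discrete grid, then
    enumerate axis-wise +/- step perturbations within the bounds.
    (Illustrative sketch; not the paper's implementation.)"""
    base = [min(max(round(x), low), high) for x in proxy]
    neighbors = [tuple(base)]
    for d in range(len(base)):
        for delta in (-step, step):
            cand = list(base)
            cand[d] += delta
            if low <= cand[d] <= high:
                neighbors.append(tuple(cand))
    return neighbors

def best_neighbor(proxy, score, low=0, high=10):
    """Evaluate each discrete neighbor with a scoring function
    (standing in for a learned critic) and return the best one."""
    return max(discrete_neighbors(proxy, low, high), key=score)

# Toy usage: the stand-in score prefers actions close to (3, 7).
target = (3, 7)
score = lambda a: -sum((ai - ti) ** 2 for ai, ti in zip(a, target))
chosen = best_neighbor([2.6, 7.4], score)
```

Note that the number of candidates grows linearly in the action dimension (2D + 1 per round of perturbation) rather than with the size of the full combinatorial action space, which is what makes this kind of neighborhood search viable at scales like $10^{73}$.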