Modern scientific data acquisition generates petabytes of data that must be transferred to geographically distant computing clusters. Conventional tools either rely on preconfigured sessions, which are difficult to tune for users without domain expertise, or adaptively optimize only concurrency while ignoring other important parameters. We present \name, an adaptive data transfer method that jointly optimizes multiple parameters. Our solution combines heuristic-based parallelism, infinite pipelining, and a deep reinforcement learning-based concurrency optimizer. To make agent training practical, we introduce a lightweight network simulator that reduces training time to under four minutes, a $2750\times$ speedup over online training. Experimental evaluation shows that \name consistently outperforms existing methods across diverse datasets, achieving up to $9.5\times$ higher throughput than state-of-the-art solutions.