This paper presents a detailed case study of the T2_BR_SPRACE storage frontend architecture and its observed performance under high-intensity data transfers. The architecture comprises a heterogeneous cluster of XRootD [1] Virtual Machines (VMs) with 10 Gb/s and 40 Gb/s links, which aggregate data over pNFS from a 77 Gb/s dCache [2] backend and serve it through an external 100 Gb/s WAN link. We describe the system configuration, including the use of the BBR [3] congestion control algorithm and TCP extensions [4]. Under peak production conditions, the system sustained an aggregate throughput of 51.3 Gb/s. Analysis of a specific data flow to Fermilab (FNAL) showed peaks of 41.5 Gb/s, validated by external monitoring tools at CERN. This study documents the performance of a complex virtualized architecture under real production load.
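As a hedged illustration of the tuning the abstract refers to, the sketch below shows typical Linux sysctl settings for enabling BBR with large TCP windows on a high-bandwidth transfer node. The specific parameter values, and the reading of "TCP extensions [4]" as RFC 7323 window scaling and timestamps, are assumptions for illustration only, not the site's published configuration.

```python
# Illustrative sketch (assumed values, not T2_BR_SPRACE's actual settings):
# enabling BBR [3] and the TCP extensions [4] needed for large windows.

SYSCTLS = {
    "net.ipv4.tcp_congestion_control": "bbr",      # BBR congestion control
    "net.core.default_qdisc": "fq",                # pacing qdisc commonly paired with BBR
    "net.ipv4.tcp_window_scaling": "1",            # RFC 7323 window scaling
    "net.ipv4.tcp_timestamps": "1",                # RFC 7323 timestamps
    "net.ipv4.tcp_rmem": "4096 131072 268435456",  # min/default/max receive buffer (bytes)
    "net.ipv4.tcp_wmem": "4096 131072 268435456",  # min/default/max send buffer (bytes)
}

def apply_sysctls(settings):
    """Write each key to /proc/sys (requires root on a Linux host)."""
    for key, value in settings.items():
        path = "/proc/sys/" + key.replace(".", "/")
        with open(path, "w") as fh:
            fh.write(value + "\n")

if __name__ == "__main__":
    apply_sysctls(SYSCTLS)
```

In production such settings would normally be persisted under /etc/sysctl.d/ rather than applied ad hoc as above.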