Fifth-generation (5G) systems are increasingly studied as shared communication and computing infrastructure for connected vehicles, roadside edge platforms, and future unmanned-system applications. Yet results from simulators, host-OS emulators, digital twins, and hardware-in-the-loop testbeds are often compared as if timing, input/output (I/O), and control-loop behavior were equivalent across them. They are not. Consequently, apparent limits in throughput, latency, scalability, or real-time behavior may reflect the execution harness rather than the wireless design itself. This paper presents \textit{AtlasRAN}, a capability-oriented framework for modeling and performance evaluation of 5G Open Radio Access Network (O-RAN) platforms. It introduces two reference architectures, terminology that separates functional compatibility from timing fidelity, and a capability matrix that maps research questions to the evaluation environments that can support them credibly. O-RAN serves here as an experimental coordinate system spanning Centralized Unit (CU)/Distributed Unit (DU) partitioning, fronthaul transport, control exposure, and core-network anchoring. We validate \textit{AtlasRAN} through a CU-DU uplink load study on a coherent CPU-GPU edge platform. For both a CPU-only baseline and a GPU-accelerated low-density parity-check (LDPC) decoding variant, aggregate goodput drops sharply as the user count rises from 1 to 12, while fairness remains near ideal and compute utilization decreases rather than increases. This pattern identifies time-scale dilation and online I/O starvation in the emulation harness, rather than decoder saturation, as the dominant scaling limit. The key lesson is that timing, memory, and transport semantics must be reported as first-class experimental variables when evaluating ubiquitous 5G infrastructure.
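As a point of reference for the fairness claim above, a common choice in multi-user throughput studies is Jain's index over per-user goodputs $x_1,\dots,x_N$; the abstract does not name the metric actually used, so the following is an illustrative sketch rather than the paper's definition:
% Illustrative only: Jain's fairness index; not necessarily the metric used in the study.
\begin{equation*}
  J(x_1,\dots,x_N) \;=\; \frac{\left(\sum_{i=1}^{N} x_i\right)^{2}}{N \sum_{i=1}^{N} x_i^{2}},
  \qquad \frac{1}{N} \;\le\; J \;\le\; 1 .
\end{equation*}
% J = 1 when all users obtain equal goodput ("near ideal" fairness); J = 1/N when one user receives everything.
Under this reading, near-ideal fairness combined with falling aggregate goodput and falling compute utilization is consistent with a shared harness bottleneck rather than per-user decoder contention.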