Exponential increases in scientific experimental data are outstripping the rate of progress in silicon technology. As a result, heterogeneous combinations of architectures and process or device technologies are increasingly important to meet the computing demands of future scientific experiments. However, the complexity of heterogeneous computing systems requires systematic modeling to understand performance. We present a model that addresses this need by framing key aspects of data collection pipelines and their constraints, and combining them with the important technology vectors that shape design alternatives and the computing metrics that allow complex alternatives to be compared. For instance, a data collection pipeline may be characterized by parameters such as sensor sampling rates, the amount of data collected, and the overall relevancy of retrieved samples. Alternatives to this pipeline are enabled by hardware development vectors including advancing CMOS, GPUs, neuromorphic computing, and edge computing. By calculating metrics for each alternative, such as overall F1 score, power, hardware cost, and energy expended per relevant sample, this model allows alternate data collection systems to be rigorously compared. To demonstrate this model's capability, we apply it to the CMS experiment (and the planned HL-LHC upgrade) to evaluate and compare the application of novel technologies in the data acquisition system (DAQ). We demonstrate that improvements to early stages in the DAQ are highly beneficial, greatly reducing the resources required at later stages of processing (such as a 60% power reduction) and increasing the amount of relevant data retrieved from the experiment per unit power (improving from 0.065 to 0.31 samples/kJ). However, we predict that further advances will be required in order to meet overall power and cost constraints for the DAQ.
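One of the comparison metrics described above, relevant samples retrieved per unit energy, can be sketched as a simple calculation. The function and the input rates and powers below are illustrative assumptions (chosen so the outputs match the samples/kJ figures quoted in the abstract); they are not values or code from the underlying model.

```python
# Hypothetical sketch: comparing DAQ pipeline alternatives by
# "relevant samples per unit energy". All names and input numbers
# are illustrative assumptions, not the paper's actual model.

def relevant_samples_per_kj(relevant_rate_hz: float, power_kw: float) -> float:
    """Relevant samples retrieved per kilojoule of DAQ energy.

    relevant_rate_hz: rate of relevant samples retained (samples/s)
    power_kw: total DAQ power draw (kW); 1 kW sustained = 1 kJ/s
    """
    return relevant_rate_hz / power_kw  # (samples/s) / (kJ/s) = samples/kJ

# Two hypothetical alternatives; inputs chosen to reproduce the
# abstract's quoted 0.065 and 0.31 samples/kJ figures.
baseline = relevant_samples_per_kj(relevant_rate_hz=13.0, power_kw=200.0)
upgraded = relevant_samples_per_kj(relevant_rate_hz=24.8, power_kw=80.0)
print(f"baseline: {baseline:.3f} samples/kJ")  # 0.065
print(f"upgraded: {upgraded:.3f} samples/kJ")  # 0.310
```

Normalizing by energy rather than raw throughput lets alternatives with very different power envelopes (e.g. FPGA front-ends versus GPU farms) be compared on a single axis.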