Large models have achieved remarkable performance across various tasks, yet they incur significant computational costs and raise privacy concerns during both training and inference. Distributed deployment has emerged as a potential solution, but it necessitates the exchange of intermediate information between model segments, with feature representations serving as crucial information carriers. To optimize this information exchange, feature coding methods are applied to reduce transmission and storage overhead. Despite its importance, feature coding for large models remains an under-explored area. In this paper, we draw attention to large model feature coding and make three contributions to this field. First, we introduce a comprehensive dataset encompassing diverse features generated by three representative types of large models. Second, we establish unified test conditions, enabling standardized evaluation pipelines and fair comparisons across future feature coding studies. Third, we introduce two baseline methods derived from widely used image coding techniques and benchmark their performance on the proposed dataset. These contributions aim to advance the field of feature coding, facilitating more efficient large model deployment. All source code and the dataset will be made available on GitHub.