The integration of artificial intelligence (AI) and mobile networks is regarded as one of the most important scenarios for 6G, where a major objective is the efficient transmission of task-relevant data. A key problem then arises: how to design collaborative AI models for the device side and the network side so that the data transmitted between them incurs low overhead while preserving the accuracy of the AI task. In this paper, we propose the multi-link information bottleneck (ML-IB) scheme for designing such collaborative models. We formulate the problem using a novel performance metric that jointly evaluates task accuracy and transmission overhead, and we introduce a quantizer whose quantization bit depth, amplitudes, and breakpoints are all adjustable. Since the proposed metric cannot be computed directly on high-dimensional data, we establish a variational upper bound on it. However, the incorporation of quantization leaves even this variational upper bound without a computable closed form, so we employ the Log-Sum Inequality to derive an approximation with a theoretical guarantee. Based on this, we devise the quantized multi-link information bottleneck (QML-IB) algorithm for generating collaborative AI models. Finally, numerical experiments demonstrate the superior performance of QML-IB compared to the state-of-the-art algorithm.
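To make the role of the adjustable quantizer concrete, the following is a minimal sketch of a quantizer parameterized by bit depth, amplitudes, and breakpoints, as the abstract describes. The function and variable names here are illustrative, not from the paper; in the actual ML-IB/QML-IB setting these parameters would be learned jointly with the encoder rather than fixed by hand, and the quantizer would need a differentiable surrogate for training.

```python
import numpy as np

def quantize(x, amplitudes, breakpoints):
    """Map each entry of x to one of K = 2**bit_depth output levels.

    `amplitudes` are the K representative output values and
    `breakpoints` are the K-1 sorted thresholds separating them.
    (Illustrative sketch only; the paper treats both as adjustable
    parameters of the collaborative model.)
    """
    idx = np.digitize(x, breakpoints)      # bucket index in 0..K-1
    return np.asarray(amplitudes)[idx]     # look up the level per entry

# 2-bit example: 4 amplitudes, 3 breakpoints
amps = [-0.75, -0.25, 0.25, 0.75]
brks = [-0.5, 0.0, 0.5]
z = quantize(np.array([-0.9, -0.1, 0.3, 2.0]), amps, brks)
# z is array([-0.75, -0.25, 0.25, 0.75])
```

Because the device-side features pass through this quantizer before transmission, the bit depth directly controls the transmission overhead, while the learned amplitudes and breakpoints determine how much task-relevant information survives quantization.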