We consider lossy compression of an information source when side information may be absent at the decoder. This setup, also referred to as the Heegard-Berger or Kaspi problem, is a special case of robust distributed source coding. Building on prior work on neural network-based distributed compressors for the decoder-only side information (Wyner-Ziv) case, we propose learning-based schemes that adapt to the availability of side information. We find that our learned compressors mimic the achievability part of the Heegard-Berger theorem and yield interpretable results that operate close to information-theoretic bounds. Depending on the availability of side information, our neural compressors recover characteristics of both the point-to-point (i.e., no side information) and Wyner-Ziv coding strategies, including binning in the source space, even though no structure exploiting knowledge of the source and side information distributions was imposed on the design.