Training and evaluating vision encoders on Multi-Channel Imaging (MCI) data remains challenging because channel configurations vary across datasets, preventing fixed-channel training and limiting the reuse of pre-trained encoders on new channel settings. Prior work trains MCI encoders but typically evaluates them via full fine-tuning, leaving probing with frozen pre-trained encoders comparatively underexplored. Existing studies that do perform probing largely focus on improving the representations themselves, rather than on how to best leverage fixed representations for downstream tasks. Although the latter problem has been studied in other domains, directly transferring those strategies to MCI yields weak results, sometimes worse than training from scratch. We therefore propose Channel-Aware Probing (CAP), which exploits the intrinsic inter-channel diversity of MCI datasets by controlling feature flow at both the encoder and probe levels. CAP uses Independent Feature Encoding (IFE) to encode each channel separately, and Decoupled Pooling (DCP) to pool within channels before aggregating across channels. Across three MCI benchmarks, CAP consistently improves probing performance over the default probing protocol, matches fine-tuning from scratch, and substantially narrows the gap to full fine-tuning from the same MCI pre-trained checkpoints. Code can be found at https://github.com/umarikkar/CAP.
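The two CAP components can be made concrete with a short sketch. The module below is a hypothetical illustration, not the authors' implementation: the class names, shapes, and mean-pooling choices are assumptions. It shows IFE (the frozen encoder is applied to each channel independently) and DCP (tokens are pooled within each channel before the per-channel features are aggregated), with only the linear probe head trained.

```python
import torch
import torch.nn as nn

class CAPProbe(nn.Module):
    """Hypothetical sketch of Channel-Aware Probing (CAP).

    Names and pooling choices are illustrative assumptions, not the
    paper's exact implementation.
    """

    def __init__(self, encoder: nn.Module, feat_dim: int, num_classes: int):
        super().__init__()
        self.encoder = encoder.eval()          # frozen pre-trained encoder
        for p in self.encoder.parameters():
            p.requires_grad = False
        self.head = nn.Linear(feat_dim, num_classes)  # only trainable part

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # x: (B, C, H, W); C may differ across MCI datasets.
        B, C, H, W = x.shape
        # Independent Feature Encoding (IFE): encode each channel separately.
        with torch.no_grad():
            per_channel = [self.encoder(x[:, c:c + 1]) for c in range(C)]
        feats = torch.stack(per_channel, dim=1)   # (B, C, T, D) token features
        # Decoupled Pooling (DCP): first pool tokens within each channel...
        within = feats.mean(dim=2)                # (B, C, D)
        # ...then aggregate across channels.
        pooled = within.mean(dim=1)               # (B, D)
        return self.head(pooled)


class ToyEncoder(nn.Module):
    """Stand-in single-channel encoder producing (B, T, D) token features."""

    def __init__(self, dim: int = 8):
        super().__init__()
        self.proj = nn.Conv2d(1, dim, kernel_size=4, stride=4)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        t = self.proj(x)                          # (B, D, H/4, W/4)
        return t.flatten(2).transpose(1, 2)       # (B, T, D)
```

Because each channel passes through the encoder on its own, the same probe works regardless of how many channels a dataset provides, which is the property the varying MCI channel configurations require.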