Vision Transformers rely on positional embeddings and class tokens that encode fixed spatial priors. While effective for natural images, these priors may hinder generalization when spatial layout is weakly informative or inconsistent, a frequent condition in medical imaging and edge-deployed clinical systems. We introduce ZACH-ViT (Zero-token Adaptive Compact Hierarchical Vision Transformer), a compact Vision Transformer that removes both positional embeddings and the [CLS] token, achieving permutation invariance through global average pooling over patch representations. The term "Zero-token" specifically refers to removing the dedicated [CLS] aggregation token and positional embeddings; patch tokens remain unchanged and are processed normally. Adaptive residual projections preserve training stability in compact configurations while maintaining a strict parameter budget. Evaluation is performed across seven MedMNIST datasets spanning binary and multi-class tasks under a strict few-shot protocol (50 samples per class, fixed hyperparameters, five random seeds). The empirical analysis demonstrates regime-dependent behavior: ZACH-ViT (0.25M parameters, trained from scratch) achieves its strongest advantage on BloodMNIST and remains competitive with TransMIL on PathMNIST, while its relative advantage decreases on datasets with strong anatomical priors (OCTMNIST, OrganAMNIST), consistent with the architectural hypothesis. These findings support the view that aligning architectural inductive bias with data structure can be more important than pursuing universal benchmark dominance. Despite its minimal size and lack of pretraining, ZACH-ViT achieves competitive performance while maintaining sub-second inference times, supporting deployment in resource-constrained clinical environments. Code and models are available at https://github.com/Bluesman79/ZACH-ViT.
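The permutation-invariance claim follows from a simple property: self-attention without positional embeddings is permutation-equivariant, so averaging the patch tokens (instead of reading a [CLS] token) makes the final prediction independent of patch order. The following minimal sketch illustrates the zero-token aggregation step in isolation; the shapes, variable names, and linear head are illustrative assumptions, not taken from the paper's released code.

```python
import numpy as np

rng = np.random.default_rng(0)

# Illustrative shapes: 49 patch tokens (e.g. a 7x7 grid from a 28x28 MedMNIST
# image with 4x4 patches), embedding dim 64, 8 classes. All hypothetical.
num_patches, dim, num_classes = 49, 64, 8

W_head = rng.normal(size=(dim, num_classes)) * 0.02  # linear classifier head

def zero_token_head(patch_tokens: np.ndarray) -> np.ndarray:
    """Aggregate patch tokens by global average pooling (no [CLS] token),
    then apply a linear head. Because pooling commutes with any reordering
    of the rows, the logits do not depend on patch order."""
    pooled = patch_tokens.mean(axis=0)   # (dim,)
    return pooled @ W_head               # (num_classes,) logits

# Stand-in for the encoder's output; in ZACH-ViT the encoder itself adds no
# positional information, so shuffling patches leaves the logits unchanged.
tokens = rng.normal(size=(num_patches, dim))
perm = rng.permutation(num_patches)

logits = zero_token_head(tokens)
logits_shuffled = zero_token_head(tokens[perm])
assert np.allclose(logits, logits_shuffled)  # permutation invariance holds
```

Note that a standard ViT fails this check: adding positional embeddings before the encoder, or pooling via a [CLS] token whose attention pattern depends on position, breaks the invariance the check asserts.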