Modern microscopy routinely produces gigapixel images that contain structures across multiple spatial scales, from fine cellular morphology to broader tissue organization. Many analysis tasks require combining these scales, yet most vision models operate at a single resolution or derive multi-scale features from one view, limiting their ability to exploit the inherently multi-resolution nature of microscopy data. We introduce MuViT, a transformer architecture built to fuse true multi-resolution observations from the same underlying image. MuViT embeds all patches into a shared world-coordinate system and extends rotary positional embeddings to these coordinates, enabling attention to integrate wide-field context with high-resolution detail within a single encoder. Across synthetic benchmarks, kidney histopathology, and high-resolution mouse-brain microscopy, MuViT delivers consistent improvements over strong ViT and CNN baselines. Multi-resolution MAE pretraining further produces scale-consistent representations that enhance downstream tasks. These results demonstrate that explicit world-coordinate modelling provides a simple yet powerful mechanism for leveraging multi-resolution information in large-scale microscopy analysis.
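The core mechanism described above — extending rotary positional embeddings (RoPE) to continuous world coordinates shared across resolutions — can be sketched as follows. This is a minimal illustration, not the paper's implementation: the function name `rope_2d`, the even/odd channel pairing, and the split of channels between the x- and y-axes are assumptions; only the idea of rotating embedding channels by angles proportional to each patch's world-coordinate position comes from the abstract.

```python
import numpy as np

def rope_2d(x, coords, base=10000.0):
    """Hypothetical 2D rotary embedding over continuous world coordinates.

    x:      (n_tokens, d) patch embeddings, d divisible by 4
    coords: (n_tokens, 2) world-coordinate (x, y) of each patch centre,
            in a frame shared by all resolutions of the same image
    Half the channels are rotated by the x-coordinate, half by the
    y-coordinate, so attention scores q·k depend only on the relative
    world-space offset between two patches.
    """
    n, d = x.shape
    assert d % 4 == 0
    d_half = d // 2
    # one geometric frequency ladder per spatial axis, as in standard RoPE
    freqs = base ** (-np.arange(0, d_half, 2) / d_half)  # (d_half/2,)
    out = np.empty_like(x)
    for axis in range(2):
        # rotation angle = world coordinate on this axis times each frequency
        ang = coords[:, axis:axis + 1] * freqs[None, :]   # (n, d_half/2)
        cos, sin = np.cos(ang), np.sin(ang)
        seg = x[:, axis * d_half:(axis + 1) * d_half]
        even, odd = seg[:, 0::2], seg[:, 1::2]
        rot = np.empty_like(seg)
        rot[:, 0::2] = even * cos - odd * sin
        rot[:, 1::2] = even * sin + odd * cos
        out[:, axis * d_half:(axis + 1) * d_half] = rot
    return out
```

Because patches from different magnifications are placed in the same coordinate frame before rotation, a wide-field token and a high-resolution token interact through a geometrically meaningful relative offset; shifting all world coordinates by a constant leaves every attention score unchanged.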