Insect vision supports complex behaviors including associative learning, navigation, and object detection, and has long motivated computational models for understanding biological visual processing. However, many contemporary models prioritize task performance while neglecting biologically grounded processing pathways. Here, we introduce a bio-inspired vision model that captures principles of the insect visual system to transform dense visual input into sparse, discriminative codes. The model is trained with a fully self-supervised contrastive objective, enabling representation learning without labeled data and supporting reuse across tasks without reliance on domain-specific classifiers. We evaluated the resulting representations on flower recognition tasks and natural image benchmarks, where the model consistently produced reliable sparse codes that distinguish visually similar inputs. To support different modelling and deployment needs, we implemented the model as both an artificial neural network and a spiking neural network. In a simulated localization setting, our approach outperformed a simple image-downsampling baseline, highlighting the functional benefit of incorporating neuromorphic visual processing pathways. Collectively, these results advance insect computational modelling by providing a generalizable bio-inspired vision model capable of sparse computation across diverse tasks.