Retinal prostheses restore limited visual perception, but low spatial resolution and temporal persistence make reading difficult. In sequential letter presentation, the afterimage of one symbol can interfere with perception of the next, leading to systematic recognition errors. Rather than relying on future hardware improvements, we investigate whether optimizing the visual symbols themselves can mitigate this temporal interference. We present SymbolSight, a computational framework that selects symbol-to-letter mappings to minimize confusion among frequently adjacent letters. Using simulated prosthetic vision (SPV) and a neural proxy observer, we estimate pairwise symbol confusability and optimize assignments using language-specific bigram statistics. Across simulations in Arabic, Bulgarian, and English, the resulting heterogeneous symbol sets reduced predicted confusion by a median factor of 22 relative to native alphabets. These results suggest that standard typography is poorly matched to serial, low-bandwidth prosthetic vision and demonstrate how computational modeling can efficiently narrow the design space of visual encodings to generate high-potential candidates for future psychophysical and clinical evaluation.
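The optimization the abstract describes — choosing a letter-to-symbol assignment that minimizes bigram-frequency-weighted confusability — can be sketched as follows. This is a minimal toy illustration, not the paper's implementation: the bigram frequencies and the pairwise confusability values are made-up placeholders (in SymbolSight the latter would come from the SPV neural proxy observer), and exhaustive search stands in for whatever assignment solver the full alphabet-sized problem requires.

```python
import itertools

# Toy instance: 4 letters, 4 candidate symbols (all values illustrative).
letters = ["t", "h", "e", "a"]
symbols = ["S0", "S1", "S2", "S3"]

# bigram[(a, b)]: assumed frequency of letter a immediately followed by b.
bigram = {
    ("t", "h"): 0.30, ("h", "e"): 0.25,
    ("e", "t"): 0.10, ("t", "e"): 0.05,
}

# confuse[(s, u)]: assumed probability that symbol u, shown right after s,
# is misread due to s's afterimage. In the framework this matrix would be
# estimated from simulated prosthetic vision via the proxy observer.
vals = [0.02, 0.40, 0.05, 0.15,
        0.40, 0.02, 0.30, 0.10,
        0.05, 0.30, 0.02, 0.20,
        0.15, 0.10, 0.20, 0.02]
confuse = {}
for i, s in enumerate(symbols):
    for j, u in enumerate(symbols):
        confuse[(s, u)] = vals[i * 4 + j]

def expected_confusion(assign):
    """Bigram-weighted confusion for a letter -> symbol mapping."""
    return sum(f * confuse[(assign[a], assign[b])]
               for (a, b), f in bigram.items())

# Exhaustive search over all assignments is feasible only for tiny
# alphabets; a real alphabet would need local search or an assignment solver.
best = min(
    (dict(zip(letters, perm)) for perm in itertools.permutations(symbols)),
    key=expected_confusion,
)
print(best, round(expected_confusion(best), 4))
```

The key design point carried over from the abstract is that the objective couples the confusability matrix with language-specific bigram statistics, so a symbol pair may be visually confusable as long as the letters it encodes rarely appear adjacent in the target language.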