Transformer-based OCR models have shown strong performance on Latin and CJK scripts, but their application to African syllabic writing systems remains limited. We present the first adaptation of TrOCR for printed Tigrinya using the Ge'ez script. Starting from a pre-trained model, we extend the byte-level BPE tokenizer to cover 230 Ge'ez characters and introduce Word-Aware Loss Weighting to resolve systematic word-boundary failures that arise when applying Latin-centric BPE conventions to a new script. The unmodified model produces no usable output on Ge'ez text. After adaptation, the TrOCR-Printed variant achieves 0.22% Character Error Rate and 97.20% exact match accuracy on a held-out test set of 5,000 synthetic images from the GLOCR dataset. An ablation study confirms that Word-Aware Loss Weighting is the critical component, reducing CER by two orders of magnitude compared to vocabulary extension alone. The full pipeline trains in under three hours on a single 8 GB consumer GPU. All code, model weights, and evaluation scripts are publicly released.
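To make the vocabulary-extension step concrete, the sketch below adds Ge'ez code points to a TrOCR checkpoint's byte-level BPE tokenizer and resizes the decoder embeddings to match. It assumes the HuggingFace transformers API and the public microsoft/trocr-base-printed checkpoint; the GEEZ_CHARS list simply enumerates the Ethiopic Unicode block and is an illustrative stand-in for the paper's curated set of 230 characters.

```python
# Minimal sketch of extending a TrOCR tokenizer with Ge'ez characters,
# assuming the HuggingFace transformers API. GEEZ_CHARS is illustrative:
# the Ethiopic block spans U+1200..U+137F, while the paper uses a curated
# subset of 230 characters.
from transformers import TrOCRProcessor, VisionEncoderDecoderModel

processor = TrOCRProcessor.from_pretrained("microsoft/trocr-base-printed")
model = VisionEncoderDecoderModel.from_pretrained("microsoft/trocr-base-printed")

GEEZ_CHARS = [chr(cp) for cp in range(0x1200, 0x1380)]

# Register the new characters as tokens, then grow the decoder's embedding
# matrix so the newly assigned ids have trainable rows.
num_added = processor.tokenizer.add_tokens(GEEZ_CHARS)
model.decoder.resize_token_embeddings(len(processor.tokenizer))
print(f"added {num_added} Ge'ez tokens")
```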
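The abstract does not spell out the form of Word-Aware Loss Weighting, so the sketch below shows one plausible reading under stated assumptions: upweighting the cross-entropy of label tokens that mark a word boundary (such as the Ge'ez word separator ፡, U+1361, or a space). The function name, the boundary_ids argument, and the BOUNDARY_WEIGHT factor are illustrative assumptions, not the paper's definitions.

```python
# A minimal sketch of word-aware loss weighting, assuming it upweights the
# per-token cross-entropy at word-boundary tokens. BOUNDARY_WEIGHT is an
# assumed hyperparameter, not a value from the paper.
import torch
import torch.nn.functional as F

BOUNDARY_WEIGHT = 5.0  # assumed upweighting factor for boundary tokens

def word_aware_loss(logits, labels, boundary_ids, pad_id):
    """logits: (B, T, V); labels: (B, T); boundary_ids: ids of tokens whose
    decoded form contains a word boundary (e.g. U+1361 or a space)."""
    # Per-token cross-entropy; padding positions contribute zero loss.
    per_token = F.cross_entropy(
        logits.transpose(1, 2), labels, ignore_index=pad_id, reduction="none"
    )  # (B, T)
    weights = torch.ones_like(per_token)
    for tid in boundary_ids:
        weights[labels == tid] = BOUNDARY_WEIGHT
    mask = (labels != pad_id).float()
    return (per_token * weights * mask).sum() / mask.sum()
```

Upweighting boundary tokens directly targets the failure mode named above: a Latin-centric BPE merges whitespace into word-initial tokens, so boundary errors are rare per token but catastrophic per word, and a plain token-averaged loss barely penalizes them.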