Recognition of text on word or line images, without the need for sub-word segmentation, has become the mainstream of research and development in text recognition for Indian languages. Modelling unsegmented sequences using Connectionist Temporal Classification (CTC) is the most commonly used approach for segmentation-free OCR. In this work we present a comprehensive empirical study of various neural network models that use CTC for transcribing the step-wise predictions at the neural network output to a Unicode sequence. The study is conducted on 13 Indian languages, using an internal dataset that has around 1000 pages per language. We study the choice of line vs. word as the recognition unit, and the use of synthetic data to train the models. We compare our models with popular publicly available OCR tools for end-to-end document image recognition. Our end-to-end pipeline, which employs our recognition models together with existing text segmentation tools, outperforms these public OCR tools for 8 out of the 13 languages. We also introduce a new public dataset called Mozhi for word and line recognition in Indian languages. The dataset contains more than 1.2 million annotated word images (120 thousand text lines) across 13 Indian languages. Our code, trained models, and the Mozhi dataset will be made available at http://cvit.iiit.ac.in/research/projects/cvit-projects/
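The abstract's central mechanism, mapping step-wise network predictions to a Unicode sequence via CTC, can be illustrated with the standard greedy CTC decoding rule: take the argmax label at each time step, collapse consecutive repeats, and drop the blank symbol. The sketch below is a minimal illustration of that rule; the label indices, the blank id, and the toy vocabulary are assumptions for the example, not details from the paper.

```python
BLANK = 0  # conventional CTC blank index (an assumption for this sketch)

def ctc_greedy_decode(step_labels, id_to_char):
    """Collapse repeated labels, drop blanks, and map ids to characters.

    step_labels: per-timestep argmax label ids from the network output.
    id_to_char:  mapping from non-blank label id to Unicode character.
    """
    decoded = []
    prev = None
    for label in step_labels:
        # Emit a character only when the label changes and is not blank.
        if label != prev and label != BLANK:
            decoded.append(id_to_char[label])
        prev = label
    return "".join(decoded)

# Toy example: repeats collapse, but a blank between two identical
# labels separates them into two distinct output characters.
vocab = {1: "a", 2: "b"}
print(ctc_greedy_decode([1, 1, 0, 1, 2, 2], vocab))  # → "aab"
```

Greedy decoding is the simplest CTC transcription strategy; beam-search variants trade speed for accuracy by keeping multiple label hypotheses per step.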