While coreference resolution is a well-established research area in Natural Language Processing (NLP), research on the Thai language remains limited due to the lack of large annotated corpora. In this work, we introduce ThaiCoref, a dataset for Thai coreference resolution. Our dataset comprises 777,271 tokens, 44,082 mentions, and 10,429 entities across four text genres: university essays, newspapers, speeches, and Wikipedia. Our annotation scheme builds upon the OntoNotes benchmark, with adjustments to address Thai-specific phenomena. Using ThaiCoref, we train models employing a multilingual encoder and cross-lingual transfer techniques, achieving a best F1 score of 67.88\% on the test set. Error analysis reveals challenges posed by Thai's unique linguistic features. To benefit the NLP community, we make the dataset and the model publicly available at http://www.github.com/nlp-chula/thai-coref.