Multimodal Named Entity Recognition (MNER) is a pivotal task designed to extract named entities from text with the support of pertinent images. Nonetheless, a notable paucity of data for Chinese MNER has considerably impeded the progress of this natural language processing task within the Chinese domain. Consequently, in this study, we compile a Chinese Multimodal NER dataset (CMNER) utilizing data sourced from Weibo, China's largest social media platform. Our dataset encompasses 5,000 Weibo posts paired with 18,326 corresponding images. The entities are classified into four distinct categories: person, location, organization, and miscellaneous. We perform baseline experiments on CMNER, and the outcomes underscore the effectiveness of incorporating images for NER. Furthermore, we conduct cross-lingual experiments on the publicly available English MNER dataset (Twitter2015), and the results substantiate our hypothesis that Chinese and English multimodal NER data can mutually enhance the performance of the NER model.