Generic pre-trained neural networks may struggle to produce good results in specialized domains like finance and insurance. This stems from a domain mismatch between the pre-training data and downstream tasks, since in-domain data are often scarce because of privacy constraints. In this work, we compare different pre-training strategies for LayoutLM. We show that using domain-relevant documents improves results on a named-entity recognition (NER) problem using Payslips, a novel dataset of anonymized insurance-related financial documents. Moreover, we show that we can achieve competitive results using a smaller and faster model.