Manual digitization of bibliographic metadata is time-consuming and labor-intensive, especially for historical and real-world archives with highly variable formatting across documents. Despite advances in machine learning, the absence of dedicated datasets for metadata extraction hinders automation. To address this gap, we introduce BiblioPage, a dataset of scanned title pages annotated with structured bibliographic metadata. The dataset consists of approximately 2,000 monograph title pages collected from 14 Czech libraries, spanning a wide range of publication periods, typographic styles, and layout structures. Each title page is annotated with 16 bibliographic attributes, including title, contributors, and publication metadata, along with precise positional information in the form of bounding boxes. To extract structured information from this dataset, we evaluated object detection models such as YOLO and DETR combined with transformer-based OCR, achieving a maximum mAP of 52 and an F1 score of 59. Additionally, we assessed the performance of various visual large language models, including Llama 3.2-Vision and GPT-4o, with the best model reaching an F1 score of 67. BiblioPage serves as a real-world benchmark for bibliographic metadata extraction, contributing to document understanding, document question answering, and document information extraction. The dataset and evaluation scripts are available at: https://github.com/DCGM/biblio-dataset