Since the inception of BERT, encoder-only Transformers have evolved significantly in computational efficiency, training stability, and long-context modeling. ModernBERT consolidates these advances by integrating Rotary Positional Embeddings (RoPE), FlashAttention, and refined normalization. Despite these developments, Turkish NLP lacks a monolingual encoder trained from scratch that incorporates these modern architectural paradigms. This work introduces TabiBERT, a monolingual Turkish encoder based on the ModernBERT architecture and trained from scratch on a large, curated corpus. TabiBERT is pre-trained on one trillion tokens sampled from an 84.88B-token multi-domain corpus: web text (73%), scientific publications (20%), source code (6%), and mathematical content (0.3%). It supports an 8,192-token context length (16x that of the original BERT), achieves up to a 2.65x inference speedup, and reduces GPU memory consumption, enabling larger batch sizes. We also introduce TabiBench, a benchmark of 28 datasets across eight task categories with standardized splits and protocols, evaluated with GLUE-style macro-averaging. TabiBERT attains 77.58 on TabiBench, outperforming BERTurk by 1.62 points and establishing state-of-the-art results on five of the eight categories, with particularly strong gains in question answering (+9.55 points), code retrieval (+2.41 points), and academic understanding (+0.66 points). Compared with task-specific prior best results, including specialized models such as TurkishBERTweet, TabiBERT achieves a +1.47-point average improvement, indicating robust cross-domain generalization. We release model weights, training configurations, and evaluation code for transparent, reproducible Turkish encoder research.
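As a minimal illustrative sketch of the GLUE-style macro-averaging mentioned above (not the authors' evaluation code), the benchmark score can be read as an unweighted mean over per-category scores, where each category score is itself averaged over that category's datasets. All names and values below are hypothetical placeholders.

```python
# Hypothetical per-category scores (illustration only; not actual TabiBench results).
category_scores = {
    "question_answering": 80.0,
    "code_retrieval": 75.0,
    "academic_understanding": 78.0,
    # ... the remaining TabiBench categories would be listed here
}

# GLUE-style macro-average: each category contributes equally,
# regardless of how many datasets it contains.
benchmark_score = sum(category_scores.values()) / len(category_scores)
print(f"Macro-averaged benchmark score: {benchmark_score:.2f}")
```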