Since the inception of BERT, encoder-only Transformers have evolved significantly in computational efficiency, training stability, and long-context modeling. ModernBERT consolidates these advances by integrating Rotary Positional Embeddings (RoPE), FlashAttention, and refined normalization. Despite these developments, Turkish NLP still lacks a monolingual encoder trained from scratch that incorporates these modern architectural paradigms. This work introduces TabiBERT, a monolingual Turkish encoder based on the ModernBERT architecture and trained from scratch on a large, curated corpus. TabiBERT is pre-trained on one trillion tokens sampled from an 84.88B-token multi-domain corpus: web text (73%), scientific publications (20%), source code (6%), and mathematical content (0.3%). It supports an 8,192-token context length (16x that of the original BERT), achieves up to a 2.65x inference speedup, and reduces GPU memory consumption, enabling larger batch sizes. We also introduce TabiBench, a benchmark of 28 datasets spanning eight task categories with standardized splits and protocols, evaluated using GLUE-style macro-averaging. TabiBERT attains 77.58 on TabiBench, outperforming BERTurk by 1.62 points and establishing state-of-the-art results in five of the eight categories, with particularly strong gains in question answering (+9.55 points), code retrieval (+2.41 points), and academic understanding (+0.66 points). Compared with the best prior task-specific results, including specialized models such as TurkishBERTweet, TabiBERT achieves a +1.47-point average improvement, indicating robust cross-domain generalization. We release model weights, training configurations, and evaluation code to support transparent, reproducible Turkish encoder research.