We present Racka, a lightweight, continually pretrained large language model designed to bridge the resource gap between Hungarian and high-resource languages such as English and German. Racka employs parameter-efficient continual pretraining via Low-Rank Adaptation (LoRA) on a Qwen-3 4B backbone, making the recipe practical on A100 (40GB)-based HPC clusters with low inter-node bandwidth. To better match the training distribution, we replace and adapt the tokenizer, substantially reducing tokenization fertility for Hungarian while maintaining competitive tokenization efficiency in English and German. The model is trained on 160B subword tokens drawn from a mixture of internet and high-quality curated sources, with a composition of 44% Hungarian, 24% English, 21% German, and 11% code. This data mix is chosen to mitigate catastrophic forgetting and preserve high-resource language capabilities during continual pretraining. Our preliminary results indicate modest but stable gains in language adaptation.
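To make the adapter-based continual-pretraining setup concrete, the sketch below shows a minimal LoRA configuration on a causal language model using the Hugging Face transformers and peft libraries. It is illustrative only: the rank, scaling factor, dropout, and target modules are assumptions for the sketch, not the hyperparameters used for Racka.

```python
# Minimal sketch of parameter-efficient continual pretraining with LoRA,
# assuming the Hugging Face transformers + peft stack. All hyperparameters
# below are illustrative placeholders, not the values used to train Racka.
from transformers import AutoModelForCausalLM
from peft import LoraConfig, get_peft_model

# Load the frozen backbone (Qwen-3 4B in our setting).
base = AutoModelForCausalLM.from_pretrained("Qwen/Qwen3-4B")

lora_cfg = LoraConfig(
    r=64,                   # illustrative LoRA rank (assumption)
    lora_alpha=128,         # illustrative scaling factor (assumption)
    lora_dropout=0.05,
    target_modules=["q_proj", "k_proj", "v_proj", "o_proj"],  # assumed attention projections
    task_type="CAUSAL_LM",
)

# Only the low-rank adapter matrices are trainable; the 4B backbone stays frozen,
# which is what keeps memory within reach of A100 40GB nodes.
model = get_peft_model(base, lora_cfg)
model.print_trainable_parameters()
```

In this setup the trainable parameter count is a small fraction of the backbone, so per-device memory and inter-node gradient traffic stay low, which is the property the recipe relies on for clusters with limited interconnect bandwidth.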