In settings with limited computational and data resources, language models built for high-resource languages often prove inadequate, particularly for the specific needs of Malay languages. This paper introduces a Personal Intelligence System designed to efficiently integrate on-device and server-based models. The system incorporates SLiM-34M for on-device processing, optimized for low memory and power usage, and MANYAK-1.3B for server-based tasks, enabling scalable, high-performance language processing. The models achieve strong results across a range of tasks, including machine translation, question answering, and a translated IndoMMLU benchmark. Particularly noteworthy is SLiM-34M's substantial improvement in accuracy over other LLMs while using half as many pre-training tokens. This work challenges the prevailing assumption that large-scale computational resources are necessary to build effective language models, contributing to the development of resource-efficient models for the Malay language through the unique orchestration of SLiM-34M and MANYAK-1.3B.