Multimodal AI research has overwhelmingly focused on high-resource languages, hindering the democratization of advancements in the field. To address this, we present AfriCaption, a comprehensive framework for multilingual image captioning in 20 African languages. Our contributions are threefold: (i) a curated dataset built on Flickr8k, featuring semantically aligned captions generated via a context-aware selection and translation process; (ii) a dynamic, context-preserving pipeline that maintains caption quality through model ensembling and adaptive substitution; and (iii) the AfriCaption model, a 0.5B-parameter vision-to-text architecture that integrates SigLIP and NLLB-200 to generate captions in under-represented languages. Together, these components establish the first scalable image-captioning resource for under-represented African languages, laying the groundwork for truly inclusive multimodal AI.
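To make the vision-to-text pairing concrete, the sketch below shows one plausible way to couple a SigLIP vision encoder with an NLLB-200 sequence-to-sequence model: patch-level image features are linearly projected into the NLLB encoder's embedding space and passed in via `inputs_embeds`. This is a minimal illustration, not the authors' released implementation; the checkpoint names, the single linear projection, and the fusion point are all assumptions.

```python
# A minimal sketch (not the authors' released code) of a SigLIP -> NLLB-200
# vision-to-text captioner. Assumptions: image features are linearly projected
# into the NLLB encoder embedding space and consumed via inputs_embeds;
# checkpoint names below are illustrative, not the paper's exact choices.
import torch.nn as nn
from transformers import SiglipVisionModel, AutoModelForSeq2SeqLM

class VisionToTextCaptioner(nn.Module):
    def __init__(self,
                 vision_name="google/siglip-base-patch16-224",   # assumed checkpoint
                 text_name="facebook/nllb-200-distilled-600M"):  # assumed checkpoint
        super().__init__()
        self.vision = SiglipVisionModel.from_pretrained(vision_name)
        self.text = AutoModelForSeq2SeqLM.from_pretrained(text_name)
        # Project SigLIP patch embeddings into the NLLB hidden size (d_model).
        self.proj = nn.Linear(self.vision.config.hidden_size,
                              self.text.config.d_model)

    def forward(self, pixel_values, labels):
        # (batch, num_patches, vision_hidden): patch-level image features.
        feats = self.vision(pixel_values=pixel_values).last_hidden_state
        # Replace NLLB's token embeddings with projected image features;
        # labels are target-language caption token ids for the seq2seq loss.
        return self.text(inputs_embeds=self.proj(feats), labels=labels)
```

One design note: because NLLB-200 conditions generation on a target-language token, a single such model can decode captions in any of the 20 covered African languages, which is presumably what makes the unified multilingual architecture tractable at the 0.5B scale.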