The globalization of education and the rapid growth of online learning have made localizing educational content a critical challenge. Lecture materials are inherently multimodal, combining spoken audio with visual slides, and thus demand systems that can process both input modalities jointly. To provide an accessible and complete learning experience, translations must preserve all modalities: text for reading, slides for visual understanding, and speech for auditory learning. We present \textbf{BOOM}, a multimodal multilingual lecture companion that jointly translates lecture audio and slides to produce synchronized outputs across three modalities: translated text, localized slides with preserved visual elements, and synthesized speech. This end-to-end approach enables students to access lectures in their native language while aiming to preserve the original content in its entirety. Our experiments demonstrate that slide-aware transcripts also yield cascading benefits for downstream tasks such as summarization and question answering. The demo video and code can be found at https://ai4lt.github.io/boom/ \footnote{All released code and models are licensed under the MIT License.}