With the remarkable advancements in image generation and open-form text generation, the creation of interleaved image-text content has become an increasingly intriguing field. Multimodal story generation, characterized by producing narrative texts and vivid images in an interleaved manner, has emerged as a valuable and practical task with broad applications. However, this task poses significant challenges, as it necessitates comprehending the complex interplay between texts and images, and the ability to generate long sequences of coherent, contextually relevant texts and visuals. In this work, we propose SEED-Story, a novel method that leverages a Multimodal Large Language Model (MLLM) to generate extended multimodal stories. Our model, built upon the powerful comprehension capability of the MLLM, predicts text tokens as well as visual tokens, which are subsequently processed by an adapted visual de-tokenizer to produce images with consistent characters and styles. We further propose a multimodal attention sink mechanism to enable the generation of stories with up to 25 sequences (versus only 10 during training) in a highly efficient autoregressive manner. Additionally, we present a large-scale, high-resolution dataset named StoryStream for training our model and quantitatively evaluating multimodal story generation across multiple aspects.
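To make the length-extrapolation claim concrete: attention-sink style caching keeps the earliest key/value entries (the "sinks") plus a sliding window of recent entries, so the KV cache stays bounded while generation runs past the training length. The abstract does not give the authors' implementation, so the sketch below is only a minimal, generic single-stream version of that eviction policy; the function name `trim_kv_cache`, the sink/window sizes, and the tensor shapes are illustrative assumptions, and the paper's multimodal variant would additionally account for interleaved text and image tokens.

```python
import torch

def trim_kv_cache(keys: torch.Tensor, values: torch.Tensor,
                  num_sink: int, window: int):
    """Attention-sink cache eviction (illustrative sketch, not the paper's code).

    Keeps the first `num_sink` positions (the attention sinks) plus the most
    recent `window` positions, and evicts everything in between.

    keys, values: (batch, heads, seq_len, head_dim)
    """
    seq_len = keys.size(2)
    if seq_len <= num_sink + window:
        # Cache is still within budget; nothing to evict yet.
        return keys, values
    sink_k, sink_v = keys[:, :, :num_sink], values[:, :, :num_sink]
    recent_k, recent_v = keys[:, :, -window:], values[:, :, -window:]
    return (torch.cat([sink_k, recent_k], dim=2),
            torch.cat([sink_v, recent_v], dim=2))

if __name__ == "__main__":
    # Hypothetical cache: batch 1, 8 heads, 1000 cached positions, dim 64.
    k = torch.randn(1, 8, 1000, 64)
    v = torch.randn(1, 8, 1000, 64)
    k2, v2 = trim_kv_cache(k, v, num_sink=4, window=512)
    print(k2.shape)  # torch.Size([1, 8, 516, 64]) — bounded regardless of length
```

Because the cache size is capped at `num_sink + window`, per-step attention cost stays constant during autoregressive decoding, which is what lets a model trained on 10-sequence stories keep generating coherently out to 25 sequences.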