Accurate prediction with multimodal data, encompassing tabular, textual, and visual inputs or outputs, is fundamental to advancing analytics in diverse application domains. Traditional approaches often struggle to integrate heterogeneous data types while maintaining high predictive accuracy. We introduce Generative Distribution Prediction (GDP), a novel framework that leverages multimodal synthetic data generation (such as conditional diffusion models) to enhance predictive performance across structured and unstructured modalities. GDP is model-agnostic, compatible with any high-fidelity generative model, and supports transfer learning for domain adaptation. We establish a rigorous theoretical foundation for GDP, providing statistical guarantees on its predictive accuracy when using diffusion models as the generative backbone. By estimating the data-generating distribution and adapting to various loss functions for risk minimization, GDP enables accurate point predictions across multimodal settings. We empirically validate GDP on four supervised learning tasks (tabular data prediction, question answering, image captioning, and adaptive quantile regression), demonstrating its versatility and effectiveness across diverse domains.
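The core idea of reading a point prediction off an estimated data-generating distribution can be sketched as follows. This is a minimal illustrative sketch, not the paper's implementation: the toy Gaussian sampler stands in for a conditional generative model (e.g. a diffusion model), and the function names are assumptions. The point prediction is the empirical risk minimizer of the chosen loss over synthetic samples of y given x: the mean for squared loss, the median for absolute loss, and a quantile for pinball loss.

```python
import numpy as np

rng = np.random.default_rng(0)

def sample_conditional(x, n_samples=2000):
    """Stand-in for a conditional generative model: here y | x is
    simply Gaussian noise around a known linear mean, so the sketch
    is self-contained and checkable."""
    return 2.0 * x + 1.0 + rng.normal(scale=0.5, size=n_samples)

def gdp_predict(x, loss="squared", tau=0.9):
    """Point prediction as the risk minimizer of `loss` under the
    empirical distribution of synthetic samples of y | x."""
    ys = sample_conditional(x)
    if loss == "squared":    # L2 risk is minimized by the mean
        return ys.mean()
    if loss == "absolute":   # L1 risk is minimized by the median
        return float(np.median(ys))
    if loss == "pinball":    # pinball risk -> tau-quantile
        return float(np.quantile(ys, tau))
    raise ValueError(f"unknown loss: {loss}")

# With x = 1.0 the true conditional mean is 3.0, so the squared-loss
# prediction lands near 3.0, while the 0.9-pinball prediction sits above it.
print(gdp_predict(1.0, loss="squared"))
print(gdp_predict(1.0, loss="pinball", tau=0.9))
```

Swapping the loss changes only the final reduction over the synthetic samples, which is what makes the approach adapt to different risk-minimization targets without retraining the generative model.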