Businesses can benefit from customer feedback provided in multiple modalities, such as text and images, to improve their products and services. However, extracting actionable and relevant pairs of text segments and images from customer feedback in a single pass is difficult. In this paper, we propose a novel multi-modal method that fuses image and text information in a latent space and decodes the fused representation with an image-text grounded text decoder to extract the relevant feedback segments. We also introduce a weakly-supervised data-generation technique that produces training data for this task. We evaluate our model on unseen data and show that it effectively mines actionable insights from multi-modal customer feedback, outperforming existing baselines by $14$ points in F1 score.
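To make the weakly-supervised idea concrete, the following is a minimal, hypothetical sketch of how (text segment, image) training pairs could be generated without manual labels. It is not the paper's actual pipeline: learned cross-modal embeddings are stood in for by simple token overlap (Jaccard similarity) between a feedback segment and an image's tag set, and the function `weak_pairs`, its threshold, and the toy data are all illustrative assumptions.

```python
# Hypothetical sketch of weakly-supervised pair generation; NOT the paper's
# actual method. A real system would score segments and images with learned
# cross-modal embeddings; here token overlap serves as a cheap stand-in.

def jaccard(a: set, b: set) -> float:
    """Jaccard similarity between two token sets."""
    union = a | b
    return len(a & b) / len(union) if union else 0.0

def weak_pairs(segments, image_tags, threshold=0.2):
    """Return (segment, image_id, score) triples as weak training labels.

    A (segment, image) pair is kept whenever its similarity score
    clears the threshold, yielding noisy but label-free supervision.
    """
    pairs = []
    for seg in segments:
        seg_tokens = set(seg.lower().split())
        for img_id, tags in image_tags.items():
            score = jaccard(seg_tokens, set(tags))
            if score >= threshold:
                pairs.append((seg, img_id, round(score, 2)))
    return pairs

segments = ["the zipper broke after one week", "great color and fit"]
image_tags = {"img1": ["zipper", "broke", "jacket"], "img2": ["color", "fit"]}
print(weak_pairs(segments, image_tags))
# → [('the zipper broke after one week', 'img1', 0.29),
#    ('great color and fit', 'img2', 0.5)]
```

The threshold trades precision for recall in the generated labels: a higher value keeps only strongly aligned pairs, while a lower one admits more (noisier) supervision for the downstream decoder.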