In the past few years, the transformer model has been applied to a variety of tasks such as image captioning, image classification, natural language generation, and natural language understanding. As a key component of the transformer model, self-attention calculates the attention values by mapping the relationships among the head elements of the source and target sequences, yet there is no explicit mechanism to refine and intensify these attention values with respect to the context of the input and target sequences. Based on this intuition, we introduce a novel refine-and-intensify attention mechanism called Zoneup Dropout Injection Attention Calculation (ZoDIAC), in which the intensities of the attention values among the elements of the source and target sequences are first refined using GELU and dropout and then intensified by a proposed zoneup process that injects a learned scalar factor. Our extensive experiments show that, on the MS-COCO dataset, ZoDIAC achieves statistically significantly higher scores than the conventional self-attention module in the transformer model under all image captioning metrics and with various feature extractors. The proposed ZoDIAC attention module can be used as a drop-in replacement for the attention components of any transformer model. The code for our experiments is publicly available at: https://github.com/zanyarz/zodiac
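The abstract describes the mechanism only at a high level (refine the attention logits with GELU and dropout, then intensify them by injecting a learned scalar before normalization). The following is a minimal sketch of that idea under those assumptions; the class name, the placement of the zoneup step, and the exact form of the learned factor are illustrative guesses, not the authors' implementation.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F


class ZoDIACAttentionSketch(nn.Module):
    """Hypothetical refine-and-intensify attention step (sketch only).

    Assumes, based on the abstract, that scaled dot-product logits are
    refined with GELU and dropout and then intensified ("zoneup") by a
    learned scalar factor before the softmax.
    """

    def __init__(self, d_model: int, dropout: float = 0.1):
        super().__init__()
        self.scale = d_model ** -0.5
        self.dropout = nn.Dropout(dropout)
        # Learned scalar factor injected during the zoneup step (assumed form).
        self.zoneup_factor = nn.Parameter(torch.ones(1))

    def forward(self, q: torch.Tensor, k: torch.Tensor, v: torch.Tensor) -> torch.Tensor:
        # Standard scaled dot-product logits between source/target elements.
        logits = torch.matmul(q, k.transpose(-2, -1)) * self.scale
        # Refine: non-linear gating with GELU followed by dropout.
        refined = self.dropout(F.gelu(logits))
        # Intensify ("zoneup"): inject the learned scalar factor.
        intensified = refined * self.zoneup_factor
        attn = torch.softmax(intensified, dim=-1)
        return torch.matmul(attn, v)
```

Because the module keeps the standard (query, key, value) interface, a block like this could in principle replace the attention computation inside an existing transformer layer, which is what the abstract means by a drop-in replacement.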