In this paper, a deep learning approach for Mpox diagnosis named Customized Residual Swin Transformer V2 (RSwinV2) is proposed to enhance lesion classification as a computer-assisted vision tool. In RSwinV2, the hierarchical transformer structure is customized with respect to the input dimensionality, the embedding structure, and the targeted output. The input image is split into non-overlapping patches, which are processed with shifted-window self-attention; the shifted windows link neighbouring windows efficiently, avoiding the locality limitation of attention restricted to non-overlapping regions while remaining computationally efficient. Building on Swin Transformer V2, RSwinV2 includes patch and position embeddings and applies multi-head attention to them to exploit the transformer's global-linking capability. Furthermore, RSwinV2 incorporates an Inverse Residual Block (IRB), which uses convolutional skip connections to mitigate vanishing gradients during training. The inclusion of the IRB enables the method to capture local as well as global patterns; this integration improves lesion classification by reducing the intra-class variability of Mpox and increasing the inter-class differences among Mpox, chickenpox, measles, and cowpox. In testing, RSwinV2 achieved an accuracy of 96.51% and an F1-score of 96.13% on a public Kaggle dataset, outperforming standard CNN models and Swin Transformers; RSwinV2 has thus demonstrated its validity as a computer-assisted tool for interpreting Mpox lesion images.
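To make the residual refinement concrete, the sketch below shows a generic inverted-residual block with a convolutional skip path of the kind the abstract describes, attached to a small classification head for the four classes (Mpox, chickenpox, measles, cowpox). This is a minimal sketch under stated assumptions, not the authors' implementation: the exact IRB configuration, channel sizes, and the way it is wired into the SwinV2 backbone are assumed here, PyTorch is used purely for illustration, and names such as `InvertedResidualBlock` are hypothetical.

```python
# Minimal sketch (assumptions: PyTorch; MobileNetV2-style inverted residual
# structure, since the paper's exact IRB design is not specified here).
import torch
import torch.nn as nn


class InvertedResidualBlock(nn.Module):
    """Expand -> depthwise conv -> project, with an identity skip connection."""

    def __init__(self, channels: int, expansion: int = 4):
        super().__init__()
        hidden = channels * expansion
        self.block = nn.Sequential(
            nn.Conv2d(channels, hidden, kernel_size=1, bias=False),    # expand
            nn.BatchNorm2d(hidden),
            nn.GELU(),
            nn.Conv2d(hidden, hidden, kernel_size=3, padding=1,
                      groups=hidden, bias=False),                      # depthwise: local patterns
            nn.BatchNorm2d(hidden),
            nn.GELU(),
            nn.Conv2d(hidden, channels, kernel_size=1, bias=False),    # project back
            nn.BatchNorm2d(channels),
        )

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # The skip connection lets gradients bypass the block, which is the
        # mechanism the abstract credits for easing vanishing gradients.
        return x + self.block(x)


# Usage: refine SwinV2-style feature maps (random NCHW features as a stand-in
# for the transformer backbone output) before pooling and a 4-way classifier.
features = torch.randn(2, 768, 7, 7)            # e.g. final-stage features, NCHW
irb = InvertedResidualBlock(channels=768)
head = nn.Sequential(nn.AdaptiveAvgPool2d(1), nn.Flatten(), nn.Linear(768, 4))
logits = head(irb(features))                     # shape: (2, 4)
```

In this reading, the transformer windows supply the global links while the depthwise convolution inside the IRB re-emphasizes local lesion texture, which is consistent with the abstract's claim that the combination captures both global and local patterns.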