Progress in generative models, particularly Generative Adversarial Networks (GANs), has opened new possibilities for image generation but has also raised concerns about potential malicious uses, especially in sensitive areas such as medical imaging. This study introduces MITS-GAN, a novel approach to preventing tampering in medical images, with a specific focus on CT scans. The approach disrupts the output of the attacker's CT-GAN architecture by introducing finely tuned perturbations that are imperceptible to the human eye. Specifically, the proposed approach introduces appropriate Gaussian noise into the input as a protective measure against a range of attacks. Our method aims to enhance tamper resistance and compares favorably to existing techniques. Experimental results on CT scans demonstrate MITS-GAN's superior performance, emphasizing its ability to generate tamper-resistant images with negligible artifacts. As image tampering in medical domains poses life-threatening risks, our proactive approach contributes to the responsible and ethical use of generative models. This work provides a foundation for future research in countering cyber threats in medical imaging. Models and code are publicly available at https://iplab.dmi.unict.it/MITS-GAN-2024/.
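The core protective idea described above, adding low-amplitude noise to the input scan so that a downstream tampering model produces corrupted output, can be sketched as follows. This is a minimal illustrative stand-in, not the MITS-GAN method itself: the paper learns finely tuned perturbations adversarially, whereas this sketch uses plain i.i.d. Gaussian noise, and the function name `protect_scan` and the `sigma` parameter are assumptions for illustration.

```python
import numpy as np

def protect_scan(ct_slice: np.ndarray, sigma: float = 0.01, seed: int = 0) -> np.ndarray:
    """Add low-amplitude Gaussian noise to a CT slice normalized to [0, 1].

    Illustrative stand-in for a learned protective perturbation:
    `sigma` controls the noise amplitude and hence its perceptibility.
    """
    rng = np.random.default_rng(seed)
    noise = rng.normal(loc=0.0, scale=sigma, size=ct_slice.shape)
    # Clip back to the valid normalized intensity range.
    return np.clip(ct_slice + noise, 0.0, 1.0)

# Example: a synthetic 64x64 slice with intensities in [0, 1].
slice_ = np.random.rand(64, 64)
protected = protect_scan(slice_, sigma=0.01)
```

For a perturbation to remain imperceptible, its per-pixel magnitude must stay small relative to the image's dynamic range; the learned perturbations in the paper are optimized to satisfy this while maximally disrupting the attacker's generator.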