This study demonstrates a novel approach to probing the security boundaries of vision-capable large language models (VLMs/LLMs) using the EICAR antivirus test file embedded within JPEG images. We successfully executed four distinct protocols across multiple LLM platforms, including OpenAI GPT-4o, Microsoft Copilot, Google Gemini 1.5 Pro, and Anthropic Claude 3.5 Sonnet. The experiments confirmed that a modified JPEG containing the EICAR signature could be uploaded, manipulated, and potentially executed within LLM virtual workspaces. Key findings include: 1) the EICAR string could consistently be masked in image metadata without detection, 2) the test file could be extracted using Python-based manipulation inside LLM environments, and 3) multiple obfuscation techniques, including base64 encoding and string reversal, were demonstrated. This research extends Microsoft Research's "Penetration Testing Rules of Engagement" framework to evaluate the security boundaries of cloud-based generative AI and LLM services, focusing in particular on file handling and execution capabilities within containerized environments.
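The techniques named in the abstract (hiding the EICAR string in JPEG metadata, base64 encoding, string reversal, and Python-based extraction) can be sketched as follows. This is a minimal illustration, not the study's actual protocol: the helper names are invented for this sketch, the payload is hidden in a JPEG COM (comment) segment as a stand-in for metadata embedding generally, and the marker walker is simplified (it assumes the COM segment appears before any marker that lacks a length field). The EICAR string itself is the harmless, publicly documented anti-malware test string.

```python
import base64

# The standard EICAR test string, published by EICAR for antivirus testing;
# it is inert text, not actual malware.
EICAR = r"X5O!P%@AP[4\PZX54(P^)7CC)7}$EICAR-STANDARD-ANTIVIRUS-TEST-FILE!$H+H*"

def obfuscate(payload: bytes) -> bytes:
    """Apply two of the obfuscations mentioned in the study:
    base64 encoding followed by byte reversal."""
    return base64.b64encode(payload)[::-1]

def deobfuscate(blob: bytes) -> bytes:
    """Undo the reversal, then base64-decode."""
    return base64.b64decode(blob[::-1])

def embed_in_jpeg(jpeg: bytes, payload: bytes) -> bytes:
    """Insert payload as a JPEG COM (0xFFFE) segment right after the
    SOI marker. The 2-byte length field counts itself but not the marker."""
    assert jpeg[:2] == b"\xff\xd8", "not a JPEG (missing SOI marker)"
    seg_len = len(payload) + 2
    com = b"\xff\xfe" + seg_len.to_bytes(2, "big") + payload
    return jpeg[:2] + com + jpeg[2:]

def extract_from_jpeg(jpeg: bytes) -> bytes:
    """Walk marker segments from SOI and return the first COM body.
    Simplified: assumes every marker encountered carries a length field."""
    i = 2  # skip SOI
    while i + 4 <= len(jpeg):
        marker = jpeg[i:i + 2]
        length = int.from_bytes(jpeg[i + 2:i + 4], "big")
        if marker == b"\xff\xfe":
            return jpeg[i + 4:i + 2 + length]
        i += 2 + length
    raise ValueError("no COM segment found")

# Round trip against a minimal SOI+EOI stub standing in for a real JPEG.
stub = b"\xff\xd8\xff\xd9"
tagged = embed_in_jpeg(stub, obfuscate(EICAR.encode()))
recovered = deobfuscate(extract_from_jpeg(tagged)).decode()
```

The point of the round trip is that nothing in the tagged file contains the literal EICAR signature: a scanner matching the plain string sees only reversed base64 bytes inside an ordinary comment segment, while a few lines of Python inside the model's sandbox suffice to reconstruct the test file.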