This study assesses the ability of Large Vision-Language Models (LVLMs) to differentiate between AI-generated and human-generated images, and introduces a new automated benchmark construction method for this evaluation. The experiment compared widely used LVLMs with human participants on a mixed dataset of AI-generated and human-created images. The results showed that LVLMs could distinguish between the two image types to some extent, but exhibited a rightward bias and performed significantly worse than humans. Building on these findings, we developed an automated, AI-driven benchmark construction process comprising topic retrieval, narrative script generation, error embedding, and image generation, which produces a diverse set of text-image pairs containing intentional errors. We validated the method by constructing two comparable benchmarks. This study highlights the strengths and weaknesses of LVLMs in real-world understanding and advances benchmark construction techniques, providing a scalable, automated approach to AI model evaluation.
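To make the four pipeline stages concrete, the following is a minimal sketch of how such a benchmark construction process could be organized. All function names, stage implementations, and the example error type are hypothetical placeholders and do not come from the paper; in practice each stage would call an LLM or text-to-image model.

```python
from dataclasses import dataclass
from typing import List, Tuple

@dataclass
class BenchmarkItem:
    topic: str           # retrieved real-world topic
    script: str          # narrative script, with the intentional error embedded
    embedded_error: str   # description of the error inserted into the script
    image_prompt: str     # prompt that would be sent to the image generator

def retrieve_topics(n: int) -> List[str]:
    """Stage 1: sample diverse real-world topics (placeholder list)."""
    return [f"topic_{i}" for i in range(n)]

def generate_script(topic: str) -> str:
    """Stage 2: draft a short narrative script for the topic (LLM call in practice)."""
    return f"A plausible everyday scene about {topic}."

def embed_error(script: str) -> Tuple[str, str]:
    """Stage 3: insert a deliberate error and record what was changed."""
    error = "object count inconsistent with the text"  # hypothetical error type
    return f"{script} [error: {error}]", error

def make_image_prompt(script: str) -> str:
    """Stage 4: turn the erroneous script into a text-to-image prompt."""
    return f"Photorealistic illustration: {script}"

def build_benchmark(n: int) -> List[BenchmarkItem]:
    items = []
    for topic in retrieve_topics(n):
        script = generate_script(topic)
        script_with_error, error = embed_error(script)
        items.append(BenchmarkItem(topic, script_with_error, error,
                                   make_image_prompt(script_with_error)))
    return items

if __name__ == "__main__":
    for item in build_benchmark(3):
        print(item)
```

Because every stage is a pure function of the previous stage's output, the pipeline can be scaled or swapped stage-by-stage (e.g., replacing the error-embedding step) without changing the rest of the construction process.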