The proliferation of generative AI poses challenges for information integrity, requiring systems that connect model governance with end-user verification. We present Origin Lens, a privacy-first mobile framework that targets visual disinformation through a layered verification architecture. Unlike server-side detection systems, Origin Lens performs cryptographic image-provenance verification and AI detection locally on the device via a Rust/Flutter hybrid architecture. Our system integrates multiple signals, including cryptographic provenance, generative-model fingerprints, and optional retrieval-augmented verification, to provide users with graded confidence indicators at the point of consumption. We discuss the framework's alignment with regulatory requirements (EU AI Act, DSA) and its role in verification infrastructure that complements platform-level mechanisms.
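The layered signal combination described above can be sketched as follows. This is a hypothetical illustration only, not the paper's implementation: all type and function names are invented, and the ordering (cryptographic provenance outranking a fingerprint-based detector, with an honest "unverified" fallback) is an assumption about how graded confidence indicators might be derived.

```rust
// Hypothetical sketch of layered verification: each signal may or may not be
// available, and the strongest available evidence determines the graded
// confidence indicator shown to the user. All names here are invented.

#[derive(Debug, PartialEq)]
enum Confidence {
    VerifiedProvenance, // cryptographic provenance chain validated
    LikelyGenerated,    // generative-model fingerprint matched
    Unverified,         // no usable signal available
}

struct Signals {
    provenance_valid: Option<bool>,  // e.g. a C2PA-style manifest check
    fingerprint_match: Option<bool>, // on-device AI-detection verdict
}

fn grade(signals: &Signals) -> Confidence {
    match signals {
        // Strongest signal first: a valid cryptographic provenance chain.
        Signals { provenance_valid: Some(true), .. } => Confidence::VerifiedProvenance,
        // Otherwise fall back to the generative-model fingerprint detector.
        Signals { fingerprint_match: Some(true), .. } => Confidence::LikelyGenerated,
        // No usable evidence: surface that honestly rather than guess.
        _ => Confidence::Unverified,
    }
}

fn main() {
    let s = Signals { provenance_valid: None, fingerprint_match: Some(true) };
    println!("{:?}", grade(&s)); // prints "LikelyGenerated"
}
```

The key design point this sketch illustrates is that signals are ranked, not averaged: cryptographic provenance, when present, overrides heuristic detection, and absence of evidence is reported as "unverified" rather than as a verdict.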