This paper introduces a conversational interface that enables participatory design of differentially private AI systems in public-sector applications. To address the challenge of balancing mathematical privacy guarantees with democratic accountability, we make three key contributions: (1) an adaptive $\epsilon$-selection protocol that uses TOPSIS multi-criteria decision analysis to align citizen preferences with differential privacy (DP) parameters, (2) an explainable noise-injection framework featuring real-time Mean Absolute Error (MAE) visualizations and GPT-4-powered impact analysis, and (3) an integrated legal-compliance mechanism that dynamically modulates privacy budgets in response to evolving regulatory constraints. Our results advance participatory AI practice by demonstrating how conversational interfaces can broaden public engagement with algorithmic privacy mechanisms, ensuring that privacy-preserving AI in public-sector governance remains both mathematically robust and democratically accountable.
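As a rough illustration of contribution (1), the sketch below shows one way TOPSIS could rank candidate $\epsilon$ values against citizen-weighted criteria; the candidate budgets, criteria scores, and weights are illustrative assumptions rather than the paper's actual protocol, and the expected MAE of the Laplace mechanism ($\Delta/\epsilon$) is used as a standard accuracy proxy.

```python
# Minimal sketch (not the paper's implementation): mapping aggregated citizen
# preferences to a DP epsilon via TOPSIS. Candidate epsilons, criteria scores,
# and weights are illustrative assumptions.
import numpy as np

def topsis_select_epsilon(candidates, decision_matrix, weights, benefit_mask):
    """Rank candidate epsilon values with TOPSIS and return the best one.

    decision_matrix: rows = candidate epsilons, columns = criteria scores.
    benefit_mask: True where a larger criterion value is better.
    """
    # Vector-normalize each criterion column, then apply the preference weights.
    norm = decision_matrix / np.linalg.norm(decision_matrix, axis=0)
    weighted = norm * weights

    # Ideal / anti-ideal points depend on whether a criterion is a benefit or a cost.
    ideal = np.where(benefit_mask, weighted.max(axis=0), weighted.min(axis=0))
    anti_ideal = np.where(benefit_mask, weighted.min(axis=0), weighted.max(axis=0))

    # Euclidean distances to both reference points, then the closeness coefficient.
    d_pos = np.linalg.norm(weighted - ideal, axis=1)
    d_neg = np.linalg.norm(weighted - anti_ideal, axis=1)
    closeness = d_neg / (d_pos + d_neg)

    return candidates[int(np.argmax(closeness))], closeness

# Hypothetical example: three candidate privacy budgets scored on two criteria,
# an elicited privacy-comfort score (benefit) and the expected Laplace-mechanism
# MAE, sensitivity / epsilon (cost).
candidates = np.array([0.1, 0.5, 1.0])
sensitivity = 1.0
decision_matrix = np.column_stack([
    np.array([0.9, 0.6, 0.3]),    # privacy-comfort score per candidate (illustrative)
    sensitivity / candidates,     # expected MAE of the Laplace mechanism
])
weights = np.array([0.6, 0.4])    # aggregated citizen preference weights (illustrative)
benefit_mask = np.array([True, False])

best_eps, scores = topsis_select_epsilon(candidates, decision_matrix, weights, benefit_mask)
print(best_eps, scores)
```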