As pedestrian navigation increasingly experiments with Generative AI, and in particular Large Language Models, routing risks transforming from a verifiable geometric task into an opaque, persuasive dialogue. While conversational interfaces promise personalisation, they introduce risks of manipulation and misplaced trust. We categorise these risks using a 2×2 framework based on intent and origin, distinguishing between intentional manipulations (dark patterns) and unintended harms (explainability pitfalls). We propose seamful design strategies to mitigate these harms. We suggest that one robust way to operationalise trustworthy conversational navigation is through a neuro-symbolic architecture, in which verifiable pathfinding algorithms ground GenAI's persuasive capabilities, ensuring that systems explain their limitations and incentives as clearly as they explain the route.