The AI we use is powerful, and its power is increasing rapidly. If this powerful AI is to serve the needs of consumers, voters, and decision makers, then it is imperative that the AI be accountable. In general, an agent is accountable to a forum if the forum can request information from the agent about its actions, if the forum and the agent can discuss this information, and if the forum can sanction the agent. Unfortunately, in too many cases today's AI is not accountable: we cannot question it or enter into a discussion with it, let alone sanction it. In this chapter we relate this general definition of accountability to AI, we illustrate what it means for AI to be accountable and unaccountable, and we explore approaches that can improve our chances of living in a world where all AI is accountable to those who are affected by it.