Large language model (LLM) platforms, such as ChatGPT, have recently begun offering an app ecosystem to interface with third-party services on the internet. While these apps extend the capabilities of LLM platforms, they are developed by arbitrary third parties and thus cannot be implicitly trusted. Apps also interface with LLM platforms and users through natural language, which can have imprecise interpretations. In this paper, we propose a framework that lays a foundation for LLM platform designers to analyze and improve the security, privacy, and safety of current and future third-party integrated LLM platforms. Our framework is a formulation of an attack taxonomy, developed by iteratively exploring how LLM platform stakeholders could leverage their capabilities and responsibilities to mount attacks against each other. As part of our iterative process, we apply our framework in the context of OpenAI's plugin (apps) ecosystem. We uncover plugins that concretely demonstrate the potential for the types of issues we outline in our attack taxonomy. We conclude by discussing novel challenges and by providing recommendations to improve the security, privacy, and safety of present and future LLM-based computing platforms.