LLM app stores have grown rapidly, leading to the proliferation of custom LLM apps. However, this expansion raises security concerns. In this study, we propose a three-layer concern framework to identify the potential security risks of LLM apps: LLM apps with abusive potential, LLM apps with malicious intent, and LLM apps with exploitable vulnerabilities. Over five months, we collected 786,036 LLM apps from six major app stores: GPT Store, FlowGPT, Poe, Coze, Cici, and Character.AI. Our research integrates static and dynamic analysis, the development of a large-scale toxic word dictionary (i.e., ToxicDict) comprising 31,783 entries, and automated monitoring tools to identify and mitigate threats. We uncovered 15,146 apps with misleading descriptions, 1,366 that collected sensitive personal information in violation of their privacy policies, and 15,996 that generated harmful content such as hate speech, self-harm encouragement, and extremist material. Additionally, we evaluated the potential for LLM apps to facilitate malicious activities, finding that 616 apps could be used for malware generation, phishing, and similar attacks. Our findings highlight the urgent need for robust regulatory frameworks and enhanced enforcement mechanisms.