The widespread use of mobile applications has driven the growth of the industry, with companies relying heavily on user data for services such as targeted advertising and personalized offerings. In this context, privacy regulations such as the General Data Protection Regulation (GDPR) play a crucial role. One GDPR requirement is that companies maintain a Record of Processing Activities (RoPA). A RoPA encompasses various details, including descriptions of data processing activities, their purposes, the types of data involved, and other relevant external entities. Small app-developing companies struggle to meet such compliance requirements due to resource limitations and tight timelines. To aid these developers and help them avoid fines, we propose a method to generate segments of a RoPA from user-authored usage scenarios using large language models (LLMs). Our method employs few-shot learning with GPT-3.5 Turbo to summarize usage scenarios and generate RoPA segments. We evaluate factors that can affect the consistency of few-shot learning performance on our summarization task, including the number of examples in few-shot prompts, prompt repetition, and the order permutation of examples within prompts. Our findings highlight the significant influence of the number of examples on summarization F1 scores, while showing negligible variability in F1 scores across repeated prompt runs. Our prompts successfully summarize processing activities with an average ROUGE-L F1 score of 70%. Finally, we discuss avenues for improving results through manual evaluation of the generated summaries.
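The ROUGE-L F1 score used in the evaluation measures the longest common subsequence (LCS) of tokens shared between a generated summary and a reference summary. A minimal sketch of the metric, assuming simple whitespace tokenization (the function names and tokenization choice are illustrative, not from the paper):

```python
def lcs_length(a, b):
    # Standard dynamic-programming table for the longest common subsequence.
    dp = [[0] * (len(b) + 1) for _ in range(len(a) + 1)]
    for i, x in enumerate(a, 1):
        for j, y in enumerate(b, 1):
            dp[i][j] = dp[i - 1][j - 1] + 1 if x == y else max(dp[i - 1][j], dp[i][j - 1])
    return dp[len(a)][len(b)]

def rouge_l_f1(reference, candidate):
    # Precision = LCS / candidate length; recall = LCS / reference length;
    # F1 is their harmonic mean.
    ref, cand = reference.split(), candidate.split()
    lcs = lcs_length(ref, cand)
    if lcs == 0:
        return 0.0
    precision = lcs / len(cand)
    recall = lcs / len(ref)
    return 2 * precision * recall / (precision + recall)
```

In practice, published ROUGE scores are typically computed with a standard package (e.g. `rouge-score`), which adds stemming and more careful tokenization; this sketch only conveys the core LCS-based computation behind the 70% figure.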