Can foundation models (such as ChatGPT) clean your data? In this proposal, we demonstrate that ChatGPT can indeed assist in data cleaning by suggesting corrections for specific cells in a data table (scenario 1). However, ChatGPT may struggle with datasets it has never encountered before (e.g., local enterprise data) or when the user requires an explanation of the source of the suggested clean values. To address these issues, we developed a retrieval-based method that complements ChatGPT's power with a user-provided data lake. The data lake is first indexed; we then retrieve the top-k tuples most relevant to the user's query tuple and finally leverage ChatGPT to infer the correct value (scenario 2). Nevertheless, sharing enterprise data with ChatGPT, an externally hosted model, might not be feasible for privacy reasons. For this scenario, we developed a custom RoBERTa-based foundation model that can be deployed locally. Fine-tuned on a small number of examples, it can effectively infer values from the retrieved tuples (scenario 3). Our proposed system, RetClean, seamlessly supports all three scenarios and provides a user-friendly GUI that enables the VLDB audience to explore and experiment with the system.
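The scenario-2 pipeline (index the data lake, retrieve the top-k tuples relevant to the query tuple, then ask a language model to infer the correct value) can be sketched as follows. This is a minimal illustration, not RetClean's actual implementation: the function names, the Jaccard token-overlap similarity used for retrieval, and the prompt format are all our assumptions for exposition.

```python
# Hedged sketch of a retrieval-based cleaning step (scenario 2).
# Similarity measure, function names, and prompt wording are illustrative
# assumptions; a real system would use a proper index and an LLM call.

def tokenize(tup):
    """Lowercased token set over all attribute values of a tuple."""
    tokens = set()
    for v in tup:
        tokens.update(str(v).lower().split())
    return tokens

def top_k_tuples(data_lake, query_tuple, k=3):
    """Rank data-lake tuples by Jaccard similarity to the query tuple."""
    q = tokenize(query_tuple)
    scored = []
    for t in data_lake:
        s = tokenize(t)
        sim = len(q & s) / len(q | s) if (q | s) else 0.0
        scored.append((sim, t))
    scored.sort(key=lambda x: x[0], reverse=True)
    return [t for _, t in scored[:k]]

def build_prompt(query_tuple, missing_attr, evidence):
    """Assemble an LLM prompt that grounds the inference in retrieved tuples,
    so the suggested value can be traced back to its source."""
    lines = [f"Evidence tuple: {t}" for t in evidence]
    lines.append(f"Query tuple with missing '{missing_attr}': {query_tuple}")
    lines.append(f"Based only on the evidence, what is '{missing_attr}'?")
    return "\n".join(lines)

if __name__ == "__main__":
    lake = [
        ("Berlin", "Germany", "3.6M"),
        ("Paris", "France", "2.1M"),
        ("Munich", "Germany", "1.5M"),
    ]
    query = ("Berlin", "?", "3.6M")   # country value is dirty/missing
    evidence = top_k_tuples(lake, query, k=2)
    print(evidence[0])                # most similar tuple from the lake
    print(build_prompt(query, "country", evidence))
```

The prompt string would then be sent either to ChatGPT (scenario 2) or to the locally deployed, fine-tuned RoBERTa-based model (scenario 3); keeping the retrieved evidence in the prompt is what lets the system explain where a suggested clean value came from.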