Relational databases are often fragmented across organizations, creating data silos that hinder distributed data management and mining. Collaborative learning (CL) -- techniques that enable multiple parties to train models jointly without sharing raw data -- offers a principled approach to this challenge. However, existing CL frameworks (e.g., federated and split learning) remain limited in real-world deployments. Current CL benchmarks and algorithms primarily target the learning step under the assumption of isolated, aligned, and joinable databases, and they typically neglect the end-to-end data management pipeline, especially preprocessing steps such as table joins and data alignment. In contrast, our analysis of the real-world corpus WikiDBs shows that databases are interconnected, unaligned, and sometimes unjoinable, exposing a significant gap between CL algorithm design and practical deployment. To close this evaluation gap, we construct WikiDBGraph, a large-scale graph dataset of 100{,}000 real-world relational databases linked by 17 million weighted edges. Each node (database) and edge (relationship) is annotated with 13 and 12 properties, respectively, capturing a hybrid of instance- and feature-level overlap across databases. Experiments on WikiDBGraph demonstrate both the effectiveness and limitations of existing CL methods under realistic conditions, highlighting previously overlooked gaps in managing real-world data silos and pointing to concrete directions for the practical deployment of collaborative learning systems.
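To make the graph structure concrete, the following is a minimal sketch of the kind of database-relationship graph the abstract describes: databases as nodes carrying property annotations, and weighted edges capturing cross-database overlap. All names, property keys, and weights here are illustrative assumptions, not the actual WikiDBGraph schema or data.

```python
from dataclasses import dataclass, field
from typing import Dict, List, Tuple


@dataclass
class SiloGraph:
    """Toy model of a database-relationship graph: nodes are databases,
    edges are weighted relationships annotated with overlap properties.
    (WikiDBGraph itself annotates 13 node and 12 edge properties; the
    property names used below are made up for illustration.)"""
    nodes: Dict[str, dict] = field(default_factory=dict)              # db_id -> node properties
    edges: Dict[Tuple[str, str], dict] = field(default_factory=dict)  # sorted pair -> edge properties

    def add_database(self, db_id: str, **props) -> None:
        self.nodes[db_id] = props

    def add_relation(self, a: str, b: str, weight: float, **props) -> None:
        # Store each undirected edge once under a canonical (sorted) key.
        key: Tuple[str, str] = tuple(sorted((a, b)))  # type: ignore[assignment]
        self.edges[key] = {"weight": weight, **props}

    def partners(self, db_id: str, min_weight: float = 0.0) -> List[str]:
        """Databases linked to db_id by an edge of at least min_weight --
        e.g. candidate collaborators for joint training."""
        out = []
        for (a, b), props in self.edges.items():
            if db_id in (a, b) and props["weight"] >= min_weight:
                out.append(b if a == db_id else a)
        return sorted(out)


# Hypothetical silos: two related databases and one unrelated one.
g = SiloGraph()
g.add_database("films_db", n_tables=12)
g.add_database("actors_db", n_tables=8)
g.add_database("stocks_db", n_tables=5)
g.add_relation("films_db", "actors_db", weight=0.82, shared_columns=4)
g.add_relation("films_db", "stocks_db", weight=0.05, shared_columns=0)

# Thresholding on edge weight keeps only strongly related silos.
print(g.partners("films_db", min_weight=0.5))  # → ['actors_db']
```

Edge weights here stand in for the instance- and feature-level overlap scores the abstract mentions; in practice such a threshold query is one way a CL system could select which silos are worth aligning and joining before training.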