Federated Domain-specific Instruction Tuning (FedDIT) leverages a small amount of cross-client private data together with server-side public data for instruction augmentation, enhancing model performance in specific domains. However, the factors that affect FedDIT remain unclear, and existing instruction augmentation methods mainly target the centralized setting without considering the distributed environment. We first show experimentally that cross-client domain coverage, rather than data heterogeneity, drives model performance in FedDIT. We therefore propose FedDCA, which maximizes domain coverage through greedy client center selection and retrieval-based augmentation. To reduce client-side computation, FedDCA$^*$ uses heterogeneous encoders with server-side feature alignment. Extensive experiments across four domains (code, medical, financial, and mathematical) validate the effectiveness of both methods. Additionally, we examine privacy protection against memory extraction attacks under varying amounts of public data, and the results show no significant correlation between the amount of public data and privacy-preserving capability. However, as the number of fine-tuning rounds increases, the risk of privacy leakage decreases or converges.
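To make the two components of FedDCA concrete, the following is a minimal sketch (not the paper's implementation) of greedy, coverage-oriented center selection followed by retrieval-based augmentation from a public pool. It assumes client centers and public instructions are already embedded as vectors; the farthest-point gain heuristic, function names, and parameters are illustrative assumptions.

```python
import numpy as np

def greedy_center_selection(candidate_centers, k):
    """Greedily pick k centers, each maximizing its distance to the
    nearest already-selected center (a proxy for domain coverage).
    candidate_centers: array of shape (n, d); returns selected indices."""
    selected = [0]  # illustrative assumption: seed with the first candidate
    while len(selected) < k:
        best, best_gain = None, -1.0
        for i in range(len(candidate_centers)):
            if i in selected:
                continue
            # coverage gain = distance to the closest selected center
            gain = min(np.linalg.norm(candidate_centers[i] - candidate_centers[j])
                       for j in selected)
            if gain > best_gain:
                best, best_gain = i, gain
        selected.append(best)
    return selected

def retrieve_augmentations(center, public_embeddings, top_k=2):
    """Retrieve indices of the top_k public instructions nearest to a center."""
    dists = np.linalg.norm(public_embeddings - center, axis=1)
    return np.argsort(dists)[:top_k].tolist()
```

In FedDCA$^*$, the client-side encoder would be smaller than the server's, with a server-side alignment step mapping client embeddings into the retrieval space; that step is omitted here for brevity.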