We study personalized federated learning for multivariate responses where client models are heterogeneous yet share variable-level structure. Existing entry-wise penalties ignore cross-response dependence, while matrix-wise fusion over-couples clients. We propose a Sparse Row-wise Fusion (SROF) regularizer that clusters row vectors across clients and induces within-row sparsity, and we develop RowFed, a communication-efficient federated algorithm that embeds SROF into a linearized ADMM framework with privacy-preserving partial participation. Theoretically, we establish an oracle property for SROF (correct variable-level group recovery with asymptotic normality) and prove convergence of RowFed to a stationary solution. Under random client participation, the iterate gap contracts at a rate that improves with the participation probability. Empirically, simulations in heterogeneous regimes show that RowFed consistently lowers estimation and prediction error and strengthens variable-level cluster recovery relative to NonFed, FedAvg, and a personalized matrix-fusion baseline. A real-data study further corroborates these gains while preserving interpretability. Together, our results position row-wise fusion as an effective and transparent paradigm for large-scale personalized federated multivariate learning, bridging the gap between entry-wise and matrix-wise formulations.
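To fix ideas, a row-wise fused penalty of the kind described above can be sketched as follows. This is an illustrative form only, not the paper's exact definition: here $B^{(k)} \in \mathbb{R}^{p \times q}$ denotes the coefficient matrix of client $k$, $\beta_j^{(k)}$ its $j$-th row, and $\lambda_1, \lambda_2$ are assumed tuning parameters.

```latex
% Illustrative sketch of a sparse row-wise fusion penalty (assumed form):
% a pairwise fusion term that clusters the j-th row across clients,
% plus an l1 term that induces within-row sparsity.
\mathcal{P}_{\lambda_1,\lambda_2}\bigl(B^{(1)},\dots,B^{(K)}\bigr)
  = \lambda_1 \sum_{j=1}^{p} \sum_{k < k'}
      \bigl\| \beta_j^{(k)} - \beta_j^{(k')} \bigr\|_2
  + \lambda_2 \sum_{k=1}^{K} \sum_{j=1}^{p}
      \bigl\| \beta_j^{(k)} \bigr\|_1 .
```

The fusion term operates at the level of whole rows (one predictor's effects on all $q$ responses), which is how such a penalty can share variable-level structure across clients without forcing entire coefficient matrices to coincide, in contrast to entry-wise or matrix-wise alternatives.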