Covariance regression offers an effective way to model a large covariance matrix using auxiliary similarity matrices. In this work, we propose a sparse covariance regression (SCR) approach to handle potentially high-dimensional predictors (i.e., similarity matrices). Specifically, we use a penalization method to simultaneously identify the informative predictors and estimate their associated coefficients. We first investigate the Lasso estimator and subsequently consider folded-concave penalized estimation methods (e.g., SCAD and MCP). However, the theoretical analysis of existing penalization methods is primarily based on i.i.d. data and is therefore not directly applicable to our setting. To address this difficulty, we establish non-asymptotic error bounds by exploiting the spectral properties of the covariance matrix and the similarity matrices. We then derive the estimation error bound for the Lasso estimator and establish the desirable oracle property of the folded-concave penalized estimator. Extensive simulation studies corroborate our theoretical results. We also illustrate the usefulness of the proposed method by applying it to a Chinese stock market dataset.