This article introduces differentially private log-location-scale (DP-LLS) regression models, which incorporate differential privacy into LLS regression through the functional mechanism. The proposed models are established by injecting noise into the log-likelihood function of LLS regression to obtain perturbed parameter estimates. We derive the sensitivities used to determine the magnitude of the injected noise and prove that the proposed DP-LLS models satisfy $\epsilon$-differential privacy. In addition, we conduct simulations and case studies to evaluate the performance of the proposed models. The findings suggest that predictor dimension, training sample size, and privacy budget are three key factors affecting the performance of the proposed DP-LLS regression models. Moreover, the results indicate that a sufficiently large training dataset is needed to simultaneously ensure decent model performance and achieve a satisfactory level of privacy protection.
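To illustrate the general idea behind the functional mechanism described above, the sketch below perturbs the coefficients of a (polynomial-approximated) objective function with Laplace noise whose scale is the sensitivity divided by the privacy budget $\epsilon$. This is a minimal generic sketch, not the paper's actual derivation: the function name `perturb_coefficients`, the dictionary layout of the coefficients, and the concrete sensitivity value are all illustrative assumptions.

```python
import numpy as np

def perturb_coefficients(coeffs, sensitivity, epsilon, rng=None):
    """Functional-mechanism sketch: add Laplace noise with scale
    sensitivity / epsilon to each coefficient array of a polynomial
    approximation of the log-likelihood. Parameter estimation would
    then be carried out on the perturbed objective.

    NOTE: illustrative only -- the actual DP-LLS sensitivities are
    derived in the paper, not assumed here.
    """
    if rng is None:
        rng = np.random.default_rng()
    scale = sensitivity / epsilon
    return {
        name: values + rng.laplace(0.0, scale, size=np.shape(values))
        for name, values in coeffs.items()
    }

# Example: coefficients of a quadratic approximation of the objective
# (hypothetical values, for illustration only).
coeffs = {"linear": np.zeros(3), "quadratic": np.eye(3)}
noisy = perturb_coefficients(
    coeffs, sensitivity=2.0, epsilon=1.0, rng=np.random.default_rng(0)
)
```

A smaller $\epsilon$ (stronger privacy) increases the noise scale, which is consistent with the abstract's observation that larger training datasets are needed to retain performance under tight privacy budgets.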