Membership inference attacks (MIAs) aim to determine whether specific data were used to train a model. While extensively studied on classification models, their impact on time series forecasting remains largely unexplored. We address this gap by introducing two new attacks: (i) an adaptation of multivariate LiRA, a state-of-the-art MIA originally developed for classification models, to the time-series forecasting setting, and (ii) a novel end-to-end learning approach called Deep Time Series (DTS) attack. We benchmark these methods against adapted versions of other leading attacks from the classification setting. We evaluate all attacks in realistic settings on the TUH-EEG and ELD datasets, targeting two strong forecasting architectures, LSTM and the state-of-the-art N-HiTS, under both record- and user-level threat models. Our results show that forecasting models are vulnerable, with user-level attacks often achieving perfect detection. The proposed methods achieve the strongest performance in several settings, establishing new baselines for privacy risk assessment in time series forecasting. Furthermore, vulnerability increases with longer prediction horizons and smaller training populations, echoing trends observed in large language models.
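To make the LiRA-style test concrete: at its core, a LiRA attack compares a target record's loss under the attacked model against two Gaussians fit to losses from shadow models trained with and without that record, and scores membership by the log-likelihood ratio. The sketch below illustrates that scoring step only, using a scalar forecasting loss; the function names and the univariate Gaussian simplification are illustrative assumptions, not the paper's multivariate adaptation.

```python
import numpy as np

def gaussian_logpdf(x, mu, sd):
    # Log-density of a univariate Gaussian; sd is floored upstream.
    return -0.5 * np.log(2 * np.pi * sd**2) - (x - mu) ** 2 / (2 * sd**2)

def lira_score(target_loss, in_losses, out_losses):
    """LiRA-style membership score (illustrative sketch).

    target_loss: the record's forecasting loss (e.g. MSE over the horizon)
                 under the target model.
    in_losses / out_losses: the same record's losses under shadow models
                 trained WITH / WITHOUT the record.
    Returns log p(loss | member) - log p(loss | non-member); higher
    values indicate the record is more likely a training member.
    """
    mu_in, sd_in = np.mean(in_losses), np.std(in_losses) + 1e-8
    mu_out, sd_out = np.mean(out_losses), np.std(out_losses) + 1e-8
    return (gaussian_logpdf(target_loss, mu_in, sd_in)
            - gaussian_logpdf(target_loss, mu_out, sd_out))
```

A record whose target-model loss sits near the "in" shadow distribution gets a positive score (likely member), while a loss typical of the "out" distribution scores negative; thresholding this score yields the membership decision.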