Accurate epileptic seizure prediction from electroencephalography (EEG) remains challenging because pre-ictal dynamics may span long time horizons while clinically relevant signatures can be subtle and transient. Many deep learning models face a persistent trade-off between capturing local spatiotemporal patterns and maintaining informative long-range context when operating on ultra-long sequences. We propose EEG-Titans, a dual-branch architecture that incorporates a modern neural memory mechanism for long-context modeling. The model combines sliding-window attention, which captures short-term anomalies, with a recurrent memory pathway that summarizes slower, progressive trends over time. On the CHB-MIT scalp EEG dataset, evaluated under a chronological holdout protocol, EEG-Titans achieves 99.46% average segment-level sensitivity across 18 subjects. We further analyze safety-first operating points on artifact-prone recordings and show that a hierarchical context strategy, which extends the receptive field for high-noise subjects, can markedly reduce false alarms (down to 0.00 FPR/h in an extreme outlier) without sacrificing sensitivity. These results indicate that memory-augmented long-context modeling can provide robust seizure forecasting under clinically constrained evaluation.
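The dual-branch idea described above can be sketched in miniature: one branch applies causal sliding-window attention over a sequence of segment embeddings, the other maintains a recurrent running summary, and the two are concatenated per time step. This is an illustrative toy, not the EEG-Titans implementation; the exponential-moving-average memory stands in for the paper's neural memory module, and all function names, window sizes, and decay values are assumptions.

```python
import numpy as np

def sliding_window_attention(x, window=4):
    """Causal local attention: position t attends only to the last `window` steps.

    x: (T, d) array of EEG segment embeddings (toy stand-in for the short-term branch).
    """
    T, d = x.shape
    out = np.zeros_like(x)
    for t in range(T):
        keys = x[max(0, t - window + 1):t + 1]       # (w, d) local context
        scores = keys @ x[t] / np.sqrt(d)            # scaled dot-product scores
        weights = np.exp(scores - scores.max())
        weights /= weights.sum()                     # softmax over the window
        out[t] = weights @ keys                      # attention-weighted summary
    return out

def recurrent_memory(x, decay=0.9):
    """Exponential moving average as a stand-in for the neural memory pathway,
    accumulating slow, progressive trends across the whole sequence."""
    mem = np.zeros(x.shape[1])
    states = np.zeros_like(x)
    for t in range(x.shape[0]):
        mem = decay * mem + (1.0 - decay) * x[t]
        states[t] = mem
    return states

def dual_branch(x, window=4, decay=0.9):
    """Concatenate the local-attention branch (short-term anomalies) with the
    memory branch (long-range context) at each time step."""
    return np.concatenate(
        [sliding_window_attention(x, window), recurrent_memory(x, decay)], axis=1
    )

rng = np.random.default_rng(0)
x = rng.standard_normal((16, 8))   # 16 time steps, 8-dim embeddings
feats = dual_branch(x)
print(feats.shape)                 # (16, 16): local + memory features per step
```

In a real model, both branches would be learned (attention with trainable projections, a parameterized memory update) and the concatenated features would feed a classification head; the sketch only shows how local and long-range summaries coexist per time step.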