Recently introduced by some of the authors, the in-context identification paradigm aims to estimate, offline and from synthetic data, a meta-model that describes the behavior of a whole class of systems. Once trained, this meta-model is fed with an observed input/output sequence (context) generated by a real system to predict its behavior in a zero-shot learning fashion. In this paper, we enhance the original meta-modeling framework through three key innovations: formulating the learning task within a probabilistic framework; handling non-contiguous context and query windows; and adopting recurrent patching to process long context sequences effectively. The efficacy of these modifications is demonstrated through a numerical example focusing on the Wiener-Hammerstein system class, highlighting the model's enhanced performance and scalability.
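As a rough illustration of the recurrent patching idea mentioned above, the sketch below splits a long context sequence into fixed-length patches, embeds each patch linearly, and aggregates the patch embeddings with a simple recurrent cell into one fixed-size context state. All names, dimensions, and the use of fixed random weights are illustrative assumptions, not the authors' implementation (a trained meta-model would learn these weights jointly with the rest of the architecture).

```python
import numpy as np

rng = np.random.default_rng(0)

def recurrent_patching(context, patch_len=32, d_model=16):
    """Compress a (T, n_ch) context sequence into a (d_model,) state.

    Hypothetical sketch: the recurrence runs over patches rather than
    individual samples, so the number of recurrent steps scales with
    T / patch_len instead of T, which is what makes long contexts tractable.
    """
    T, n_ch = context.shape
    n_patches = T // patch_len  # drop any incomplete trailing patch
    patches = context[: n_patches * patch_len].reshape(
        n_patches, patch_len * n_ch
    )

    # Illustrative fixed random weights (assumed, not the paper's model).
    W_embed = rng.standard_normal((patch_len * n_ch, d_model)) / np.sqrt(
        patch_len * n_ch
    )
    W_h = rng.standard_normal((d_model, d_model)) / np.sqrt(d_model)

    h = np.zeros(d_model)
    for p in patches:
        # One recurrent update per patch embedding.
        h = np.tanh(p @ W_embed + h @ W_h)
    return h

# A 10,000-step, 2-channel (input/output) context compressed to 16 numbers.
ctx = rng.standard_normal((10_000, 2))
state = recurrent_patching(ctx)
print(state.shape)
```

With `patch_len=32`, the 10,000-step context above requires only 312 recurrent steps, which conveys the scalability benefit the abstract attributes to recurrent patching.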