We propose Generative Adversarial Regression (GAR), a framework for generating conditional risk scenarios with generators trained to align with downstream risk objectives. GAR builds on a regression characterization of conditional risk for elicitable functionals, including quantiles, expectiles, and jointly elicitable pairs. We extend this principle from point prediction to generative modeling by training generators whose policy-induced risk matches that of real data under the same context. To ensure robustness across all policies, GAR adopts a minimax formulation in which an adversarial policy identifies worst-case discrepancies in risk evaluation while the generator adapts to eliminate them. This structure preserves alignment with the risk functional across a broad class of policies rather than a fixed, pre-specified set. We illustrate GAR through a tail-risk instantiation based on jointly elicitable $(\mathrm{VaR}, \mathrm{ES})$ objectives. Experiments on S\&P 500 data show that GAR produces scenarios that better preserve downstream risk measures than unconditional, econometric, and direct predictive baselines, while remaining stable under adversarially selected policies.
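The tail-risk instantiation rests on the joint elicitability of $(\mathrm{VaR}, \mathrm{ES})$: although ES is not elicitable on its own, the pair admits strictly consistent scoring functions (Fissler and Ziegel). As a minimal sketch of the idea, not the paper's implementation, the snippet below evaluates the so-called FZ0 member of this family (valid for negative ES, i.e. left-tail risk of returns) and checks the defining property that the true $(\mathrm{VaR}_\alpha, \mathrm{ES}_\alpha)$ pair attains a lower average score than a mis-specified pair; the level $\alpha = 0.05$ and the normal-returns setup are illustrative assumptions.

```python
import numpy as np

def fz0_score(y, v, e, alpha=0.05):
    """FZ0 joint scoring function for the pair (VaR_alpha, ES_alpha).

    Strictly consistent when e < 0 (left-tail VaR/ES of returns):
    the expected score is minimized at the true (VaR, ES) pair, so
    a lower average score indicates a better joint forecast.
    """
    y, v, e = np.broadcast_arrays(y, v, e)
    hit = (y <= v).astype(float)          # VaR exceedance indicator
    return -hit * (v - y) / (alpha * e) + v / e + np.log(-e) - 1.0

# Illustrative check on standard normal "returns": the true pair
# (VaR_0.05, ES_0.05) = (-1.6449, -2.0627) should score lower on
# average than a clearly mis-specified pair such as (-1.0, -1.2).
rng = np.random.default_rng(0)
y = rng.standard_normal(200_000)
s_true = fz0_score(y, -1.6449, -2.0627).mean()
s_bad = fz0_score(y, -1.0, -1.2).mean()
```

A score of this form is what lets a regression-style (and, in GAR, generator-style) objective target $(\mathrm{VaR}, \mathrm{ES})$ directly: minimizing the empirical average of `fz0_score` over model outputs recovers the risk pair without ever estimating the full conditional distribution.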