The default Gaussian latent distribution in flow-based generative models poses challenges when learning certain target distributions, such as heavy-tailed ones. We introduce a general framework for learning data-adaptive latent distributions using one-dimensional quantile functions, optimized via the Wasserstein distance between noise and data. The quantile-based parameterization adapts naturally to both heavy-tailed and compactly supported distributions and shortens transport paths. Numerical results confirm the method's flexibility and effectiveness, achieved with negligible computational overhead.
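To make the mechanism concrete, below is a minimal, hypothetical PyTorch sketch of the ingredients the abstract names: a learnable one-dimensional quantile function per data dimension (here parameterized as a monotone piecewise-linear function), latent samples drawn by inverse-transform sampling, and training via the one-dimensional Wasserstein-2 distance, which in 1-D reduces to matching sorted samples. The names (`QuantileLatent`, `per_dim_w2`), the piecewise-linear parameterization, the knot count, and the Student-t toy target are all illustrative assumptions, not the paper's actual implementation.

```python
import torch

class QuantileLatent(torch.nn.Module):
    # Learnable latent sampler: one monotone piecewise-linear quantile
    # function per data dimension, defined on a uniform grid of K levels.
    # (Illustrative parameterization; the paper's exact form may differ.)
    def __init__(self, dim, n_knots=64):
        super().__init__()
        self.dim, self.n_knots = dim, n_knots
        # Softplus of raw increments keeps every step positive, so each
        # quantile function is non-decreasing by construction.
        self.start = torch.nn.Parameter(torch.full((dim,), -3.0))
        self.raw_inc = torch.nn.Parameter(torch.zeros(dim, n_knots - 1))

    def knots(self):
        inc = torch.nn.functional.softplus(self.raw_inc)        # (dim, K-1)
        vals = self.start[:, None] + torch.cumsum(inc, dim=1)   # (dim, K-1)
        return torch.cat([self.start[:, None], vals], dim=1)    # (dim, K)

    def sample(self, n):
        # Inverse-transform sampling: z = Q_theta(u), u ~ Uniform(0, 1),
        # linearly interpolating between the K learned knot values.
        u = torch.rand(n, self.dim)
        pos = u * (self.n_knots - 1)
        lo = pos.floor().long().clamp(0, self.n_knots - 2)      # (n, dim)
        frac = pos - lo.float()
        k = self.knots().t().unsqueeze(0).expand(n, -1, -1)     # (n, K, dim)
        left = k.gather(1, lo.unsqueeze(1)).squeeze(1)
        right = k.gather(1, (lo + 1).unsqueeze(1)).squeeze(1)
        return left + frac * (right - left)

def per_dim_w2(z, x):
    # In one dimension the W2-optimal coupling matches sorted samples,
    # so the squared distance reduces to an order-statistic MSE.
    zs = torch.sort(z, dim=0).values
    xs = torch.sort(x, dim=0).values
    return ((zs - xs) ** 2).mean()

if __name__ == "__main__":
    torch.manual_seed(0)
    # Toy heavy-tailed target: 2-D Student-t samples, a case where a
    # fixed Gaussian latent is a poor match.
    data = torch.distributions.StudentT(df=3.0).sample((4096, 2))
    q = QuantileLatent(dim=2, n_knots=64)
    opt = torch.optim.Adam(q.parameters(), lr=1e-2)
    for step in range(500):
        loss = per_dim_w2(q.sample(4096), data)
        opt.zero_grad()
        loss.backward()
        opt.step()
    print(f"final per-dimension W2 loss: {loss.item():.4f}")
```

The overhead matches the abstract's claim of being negligible: the quantile parameters are one small vector per dimension, and the 1-D Wasserstein loss costs only a sort per batch.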