Procedural noise is a fundamental component of computer graphics pipelines, offering a flexible way to generate textures that exhibit "natural" random variation. Many different types of noise exist, each produced by a separate algorithm. In this paper, we present a single generative model that can learn to generate multiple types of noise as well as blend between them. In addition, it is capable of producing spatially-varying noise blends despite never having seen such data during training. These features are enabled by training a denoising diffusion model with a novel combination of data augmentation and network conditioning techniques. Like procedural noise generators, the model's behavior is controllable via interpretable parameters and a source of randomness. We use our model to produce a variety of visually compelling noise textures. We also present an application of our model to improving inverse procedural material design: using our model in place of fixed-type noise nodes in a procedural material graph yields higher-fidelity material reconstructions without requiring the noise type to be known in advance.