Interactive acoustic auralization allows users to explore virtual acoustic environments in real time, enabling the acoustic recreation of concert halls or Historical Worship Spaces (HWS) that are no longer accessible, have been acoustically altered, or are impractical to visit. Interactive acoustic synthesis requires real-time convolution of input signals with a set of synthesis filters that model the space-time acoustic response of the space. The acoustics of both concert halls and HWS are characterized by long reverberation times, so the synthesis filters contain many filter taps. As a result, the convolution process can be computationally demanding, introducing significant latency that limits the real-time interactivity of the auralization system. In this paper, the implementation of a real-time multichannel loudspeaker-based auralization system is presented. The system is capable of synthesizing the acoustics of highly reverberant spaces in real time using GPU acceleration. A comparison between traditional CPU-based convolution and GPU-accelerated convolution is presented, showing that the latter can achieve real-time performance with significantly lower latency. Additionally, the system integrates acoustic synthesis with acoustic feedback cancellation on the GPU, creating a unified loudspeaker-based auralization framework that minimizes processing latency.
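The real-time convolution described above is typically realized with uniformly partitioned frequency-domain block convolution, so that a long reverberation filter is applied with only one audio block of latency. As a hedged illustration (a minimal CPU reference in NumPy, not the paper's GPU implementation; the function name, block size `B`, and overlap-save structure are this sketch's assumptions), the idea can be shown as:

```python
import numpy as np

def partitioned_convolve(x, h, B=256):
    """Uniformly partitioned overlap-save convolution (CPU sketch).

    A minimal reference for the frequency-domain block convolution that
    GPU auralization engines parallelize: `h` is a long reverb filter,
    `B` the audio block size, and the algorithmic latency is one block.
    """
    P = -(-len(h) // B)                      # number of filter partitions
    h_pad = np.zeros(P * B)
    h_pad[:len(h)] = h
    # spectrum of each length-B partition, zero-padded to a 2B-point FFT
    H = np.fft.rfft(h_pad.reshape(P, B), n=2 * B, axis=1)

    n_out = len(x) + len(h) - 1              # linear-convolution length
    n_blocks = -(-n_out // B)
    x_pad = np.zeros(n_blocks * B)
    x_pad[:len(x)] = x

    fdl = np.zeros((P, B + 1), dtype=complex)  # frequency-domain delay line
    prev = np.zeros(B)                          # overlap-save history block
    y = np.zeros(n_blocks * B)
    for b in range(n_blocks):
        cur = x_pad[b * B:(b + 1) * B]
        fdl = np.roll(fdl, 1, axis=0)           # age the delay line
        fdl[0] = np.fft.rfft(np.concatenate([prev, cur]))
        Y = (fdl * H).sum(axis=0)               # multiply-accumulate partitions
        y[b * B:(b + 1) * B] = np.fft.irfft(Y)[B:]  # keep valid (linear) part
        prev = cur
    return y[:n_out]
```

On a GPU, the per-block FFT, the element-wise multiply-accumulate over all partitions, and the inverse FFT are the operations that parallelize well, which is what makes low-latency convolution with very long filters feasible.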