The marmoset, a highly vocal primate, has become a popular animal model for studying social-communicative behavior and its underlying mechanisms, often in comparison with human infant language development. In the study of vocal communication, it is vital to know the caller identities, call contents, and vocal exchanges. Previous work using a CNN achieved a joint model for call segmentation, classification, and caller identification of marmoset vocalizations. However, the CNN has limitations in modeling long-range acoustic patterns. The Transformer architecture, which has been shown to outperform CNNs, uses a self-attention mechanism that efficiently aggregates information in parallel over long distances and captures the global structure of marmoset vocalizations. We propose using the Transformer to jointly segment and classify marmoset calls and to identify the caller of each vocalization.
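To make the joint-modeling idea concrete, the following is a minimal sketch of how a Transformer encoder could produce frame-level predictions for all three tasks at once. All names, dimensions, and head sizes here are illustrative assumptions (the number of call types and callers, the mel-spectrogram input, the model width), not the actual configuration of the proposed system; positional encoding is omitted for brevity.

```python
import torch
import torch.nn as nn

class JointMarmosetTransformer(nn.Module):
    """Hypothetical sketch: a Transformer encoder over spectrogram frames
    with three per-frame heads (segmentation, call type, caller identity).
    Hyperparameters are illustrative assumptions, not the paper's setup."""

    def __init__(self, n_mels=80, d_model=256, n_heads=4, n_layers=6,
                 n_call_types=10, n_callers=8):
        super().__init__()
        # Project each spectrogram frame into the model dimension.
        self.proj = nn.Linear(n_mels, d_model)
        layer = nn.TransformerEncoderLayer(
            d_model, n_heads, dim_feedforward=1024, batch_first=True)
        # Self-attention lets every frame attend to every other frame,
        # capturing long-range structure a CNN's local kernels miss.
        self.encoder = nn.TransformerEncoder(layer, n_layers)
        # One linear head per task, all sharing the same encoder.
        self.seg_head = nn.Linear(d_model, 2)             # call vs. silence
        self.type_head = nn.Linear(d_model, n_call_types)  # call class
        self.caller_head = nn.Linear(d_model, n_callers)   # caller identity

    def forward(self, mel):
        # mel: (batch, frames, n_mels) log-mel spectrogram
        h = self.encoder(self.proj(mel))
        return self.seg_head(h), self.type_head(h), self.caller_head(h)
```

Training such a model would simply sum per-frame cross-entropy losses from the three heads, so segmentation, classification, and caller identification share one set of self-attention representations.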