Modern deep learning approaches usually rely on modality-specific processing. For example, the most common deep learning approach to image classification decodes image file bytes into an RGB tensor that is then passed to a neural network. Instead, we investigate modality-independent representation learning by performing classification directly on file bytes, with no need to decode files at inference time. This enables models to operate on various modalities without any hand-designed, modality-specific processing. Our model, ByteFormer, improves ImageNet Top-1 classification accuracy by $5\%$ (from $72.2\%$ to $77.33\%$) relative to DeiT models of comparable size. Compared to Perceiver IO, our model requires no modality-specific processing at inference time and uses an order of magnitude fewer parameters at equivalent accuracy on ImageNet. We demonstrate that the same ByteFormer architecture can perform audio classification without modification or modality-specific preprocessing, achieving $95.42\%$ classification accuracy on the Speech Commands V2 dataset (comparable to the state-of-the-art accuracy of $98.7\%$). Additionally, we demonstrate that ByteFormer can operate jointly on images and audio, handling joint classification without explicit knowledge of the input modality. We release our code at https://github.com/apple/corenet/tree/main/projects/byteformer.
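To make the core idea concrete, the sketch below illustrates classification directly on raw file bytes: each byte value (0–255) is treated as a token, embedded via a lookup table, and pooled into logits. This is a minimal, hypothetical toy, not the actual ByteFormer architecture; all names, dimensions, and the mean-pooling stand-in for the Transformer are illustrative assumptions.

```python
import numpy as np

# Hypothetical minimal sketch of byte-level classification: the model
# operates on raw file bytes, never decoding them into an RGB tensor or
# a waveform. All sizes and names here are toy choices, not ByteFormer's.

rng = np.random.default_rng(0)

VOCAB = 256        # one token per possible byte value
EMBED_DIM = 32     # toy embedding width
NUM_CLASSES = 10   # toy label space

# "Learned" lookup table: each byte value 0..255 maps to an embedding vector.
byte_embedding = rng.standard_normal((VOCAB, EMBED_DIM)) * 0.02
classifier_w = rng.standard_normal((EMBED_DIM, NUM_CLASSES)) * 0.02

def classify_bytes(raw: bytes) -> int:
    """Raw file bytes -> token ids -> embeddings -> pooled logits -> class."""
    token_ids = np.frombuffer(raw, dtype=np.uint8)   # modality-agnostic tokenization
    embeddings = byte_embedding[token_ids]           # shape: (n_bytes, EMBED_DIM)
    pooled = embeddings.mean(axis=0)                 # stand-in for the Transformer encoder
    logits = pooled @ classifier_w
    return int(np.argmax(logits))

# The same function handles the bytes of a JPEG or a WAV file without
# any explicit knowledge of the input modality:
pred = classify_bytes(b"\xff\xd8\xff\xe0" + bytes(range(64)))
print(pred)
```

In the real model, the mean-pool would be replaced by a Transformer over the byte-token embeddings, but the interface is the same: one function of raw bytes, regardless of modality.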