Human infants learn language while interacting with their environment, in which their caregivers may describe the objects they manipulate and the actions they perform. Like human infants, artificial agents can learn language while interacting with their environment. In this work, we first present a neural model that bidirectionally binds robot actions and their language descriptions in a simple object-manipulation scenario. Building on our previous Paired Variational Autoencoders (PVAE) model, we demonstrate the superiority of the variational autoencoder over the standard autoencoder by experimenting with cubes of different colours and by enabling the production of alternative vocabularies. Additional experiments show that the model's channel-separated visual feature extraction module can cope with objects of different shapes. Next, we introduce PVAE-BERT, which equips the model with a pretrained large-scale language model, namely Bidirectional Encoder Representations from Transformers (BERT). This enables the model to go beyond comprehending only the predefined descriptions it was trained on: the recognition of action descriptions generalises to unconstrained natural language, as the model becomes capable of understanding virtually unlimited variations of the same descriptions. Our experiments suggest that using a pretrained language model as the language encoder allows our approach to scale up to real-world scenarios with instructions from human users.
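The bidirectional binding idea can be illustrated with a minimal sketch: two variational encoders (one per modality) map paired action and language inputs to latent codes, and a binding loss pulls the paired codes together so that either modality could be decoded from the other. This is not the paper's implementation; the linear encoders, dimensions, and loss weighting below are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

def encode(x, w_mu, w_logvar):
    # Linear encoders standing in for the model's recurrent encoders (assumption)
    return x @ w_mu, x @ w_logvar

def reparameterize(mu, logvar, rng):
    # VAE reparameterization trick: z = mu + sigma * eps, eps ~ N(0, I)
    eps = rng.standard_normal(mu.shape)
    return mu + np.exp(0.5 * logvar) * eps

# Hypothetical feature sizes: action features (joints + vision) and
# embedded descriptions, with a shared latent dimension of 8
d_act, d_lang, d_z = 12, 16, 8
W = {name: rng.standard_normal(shape) * 0.1 for name, shape in {
    "act_mu": (d_act, d_z), "act_lv": (d_act, d_z),
    "lang_mu": (d_lang, d_z), "lang_lv": (d_lang, d_z)}.items()}

x_act = rng.standard_normal((4, d_act))    # a batch of 4 action sequences (flattened)
x_lang = rng.standard_normal((4, d_lang))  # the 4 paired descriptions (embedded)

mu_a, lv_a = encode(x_act, W["act_mu"], W["act_lv"])
mu_l, lv_l = encode(x_lang, W["lang_mu"], W["lang_lv"])
z_act = reparameterize(mu_a, lv_a, rng)
z_lang = reparameterize(mu_l, lv_l, rng)

# Binding loss: mean squared distance between the latent codes of each
# paired (action, description) sample; minimising it aligns the two spaces
binding_loss = float(np.mean((z_act - z_lang) ** 2))
print(f"binding loss: {binding_loss:.4f}")
```

In training, this term would be added to the two VAEs' reconstruction and KL losses; at test time, an action latent can be fed to the language decoder (or vice versa) to translate between modalities.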