Large Language Models (LLMs) are transforming the robotics domain by enabling robots to comprehend and execute natural language instructions. A cornerstone benefit of LLMs is their ability to process textual data from technical manuals, instructions, academic papers, and user queries, grounded in the knowledge they are provided. However, deploying LLM-generated code in robotic systems without safety verification poses significant risks. This paper presents a safety layer that verifies code generated by ChatGPT before it is executed to control a drone in a simulated environment. The safety layer consists of a GPT-4o model fine-tuned with few-shot learning and supported by knowledge graph prompting (KGP). Our approach improves the safety and compliance of robotic actions, ensuring that they adhere to the regulations governing drone operations.
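To make the safety-layer idea concrete, the following is a minimal, hypothetical sketch of pre-execution verification of LLM-generated drone-control code. The rule set, limits, and function names (`is_code_safe`, `climb_to`, `disable_geofence`, the 120 m ceiling) are illustrative assumptions for this sketch, not the paper's actual fine-tuned-model or knowledge-graph implementation.

```python
import ast

# Illustrative safety rules (assumptions, not from the paper):
FORBIDDEN_CALLS = {"disable_geofence", "override_failsafe"}
MAX_ALTITUDE_M = 120  # example regulatory altitude ceiling for small drones


def is_code_safe(source: str) -> tuple[bool, str]:
    """Statically screen a snippet of generated code before execution.

    Returns (safe, reason). Real systems would combine such static checks
    with model-based verification, as the paper's safety layer does.
    """
    try:
        tree = ast.parse(source)
    except SyntaxError as exc:
        return False, f"syntax error: {exc}"

    for node in ast.walk(tree):
        if isinstance(node, ast.Call):
            # Resolve the called name for plain calls and attribute calls.
            name = getattr(node.func, "id", getattr(node.func, "attr", ""))
            if name in FORBIDDEN_CALLS:
                return False, f"forbidden call: {name}"
            # Reject climb commands whose literal altitude exceeds the limit.
            if name == "climb_to" and node.args:
                arg = node.args[0]
                if (isinstance(arg, ast.Constant)
                        and isinstance(arg.value, (int, float))
                        and arg.value > MAX_ALTITUDE_M):
                    return False, f"altitude {arg.value} m exceeds {MAX_ALTITUDE_M} m"
    return True, "ok"
```

In such a pipeline, only snippets for which the check returns safe would be forwarded to the drone simulator; everything else is rejected with a reason that can be fed back to the LLM for regeneration.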