This study proposes an intelligent multi-agent framework built on LLMs and VLMs and tailored specifically to robotics. The goal is to combine the strengths of LLMs and VLMs with computational tools to automatically analyze and solve problems involving robotic manipulators. The framework accepts both textual and visual inputs and, in response to a user's query, can automatically perform forward and inverse kinematics, compute velocities and accelerations of key points, generate 3D simulations of the robot, and execute motion control within the simulated environment. To evaluate the framework, three benchmark tests were designed, each consisting of ten questions. In the first benchmark, the framework was evaluated with GPT-4o, DeepSeek-V3.2, and Claude-Sonnet-4.5 as backends, and compared against the corresponding raw models; the objective was to derive the forward kinematics of robots directly from textual descriptions. The framework integrated with GPT-4o achieved the highest accuracy, reaching 0.97 on the final solution, whereas the raw model alone attained an accuracy of only 0.30 on the same task. The framework likewise consistently outperformed the raw versions of the other two models. The second benchmark was identical to the first, except that the input was provided in visual form; here, the GPT-4o LLM was used alongside the Gemini 2.5 Pro VLM. The framework achieved an accuracy of 0.93 in obtaining the final answer, approximately 20% higher than the corresponding raw model. The third benchmark covered a range of robotic tasks, including simulation, control, velocity and acceleration computation, inverse kinematics, and Jacobian calculation, for which the framework achieved an accuracy of 0.97.
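The forward-kinematics computation that the framework automates can be sketched with standard Denavit-Hartenberg homogeneous transforms. The example below is a minimal illustrative sketch, not the paper's implementation: the 2R planar arm, its DH parameters, and the function names are assumptions chosen for clarity.

```python
import math

def dh_transform(theta, d, a, alpha):
    """Standard Denavit-Hartenberg homogeneous transform (4x4 nested lists)."""
    ct, st = math.cos(theta), math.sin(theta)
    ca, sa = math.cos(alpha), math.sin(alpha)
    return [
        [ct, -st * ca,  st * sa, a * ct],
        [st,  ct * ca, -ct * sa, a * st],
        [0.0,      sa,       ca,      d],
        [0.0,     0.0,      0.0,    1.0],
    ]

def mat_mul(A, B):
    """4x4 matrix product."""
    return [[sum(A[i][k] * B[k][j] for k in range(4)) for j in range(4)]
            for i in range(4)]

def forward_kinematics(dh_rows):
    """Chain per-joint DH transforms into the base-to-end-effector transform."""
    T = [[float(i == j) for j in range(4)] for i in range(4)]  # identity
    for row in dh_rows:
        T = mat_mul(T, dh_transform(*row))
    return T

# Hypothetical planar 2R arm, unit link lengths, both joints at 90 degrees:
T = forward_kinematics([
    (math.pi / 2, 0.0, 1.0, 0.0),  # joint 1: theta, d, a, alpha
    (math.pi / 2, 0.0, 1.0, 0.0),  # joint 2
])
x, y = T[0][3], T[1][3]  # end-effector position, here (-1, 1)
```

In the framework described above, an agent would generate and execute code of this kind from the robot's textual or visual description, rather than the user supplying the DH table by hand.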