Large Language Models (LLMs) have garnered significant attention for their ability to understand text and images, generate human-like text, and perform complex reasoning tasks. However, whether they can generalize this advanced reasoning, in combination with natural language text, to decision-making in dynamic situations requires further exploration. In this study, we investigate how well LLMs can adapt and apply a combination of arithmetic and common-sense reasoning, particularly in autonomous driving scenarios. We hypothesize that LLMs' hybrid reasoning abilities can improve autonomous driving by enabling them to analyze detected objects and sensor data, understand driving regulations and physical laws, and provide additional context. This addresses complex scenarios, such as decisions under low visibility (due to weather conditions), where traditional methods might fall short. We evaluated LLMs on accuracy by comparing their answers with human-generated ground truth in the CARLA simulator. The results show that when a combination of images (detected objects) and sensor data is fed into an LLM, it can provide precise information for brake and throttle control in autonomous vehicles across various weather conditions. This formulation and the resulting answers can assist decision-making in autopilot systems.
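To make the described pipeline concrete, the sketch below shows one plausible way to assemble detected objects and sensor readings into a natural-language prompt and to parse brake/throttle values from an LLM's reply. All function names, field names, and the reply format are illustrative assumptions, not the study's actual implementation, and no model is called here (the reply is mocked):

```python
import json

def build_prompt(detected_objects, sensors):
    # Combine detected objects and raw sensor readings into a single
    # natural-language prompt asking the model for control values.
    lines = ["You are controlling an autonomous vehicle. Current scene:"]
    for obj in detected_objects:
        lines.append(f"- {obj['label']} at {obj['distance_m']:.1f} m ahead")
    lines.append(f"Speed: {sensors['speed_kmh']} km/h, weather: {sensors['weather']}")
    lines.append('Reply as JSON: {"brake": 0..1, "throttle": 0..1}')
    return "\n".join(lines)

def parse_controls(llm_reply):
    # Extract brake/throttle from the model's JSON reply and clamp to [0, 1],
    # since downstream vehicle-control APIs typically expect that range.
    data = json.loads(llm_reply)
    clamp = lambda v: max(0.0, min(1.0, float(v)))
    return clamp(data["brake"]), clamp(data["throttle"])

# Example with a mocked LLM reply (a pedestrian ahead in heavy fog):
prompt = build_prompt(
    [{"label": "pedestrian", "distance_m": 8.0}],
    {"speed_kmh": 40, "weather": "heavy fog"},
)
brake, throttle = parse_controls('{"brake": 0.9, "throttle": 0.0}')
print(brake, throttle)  # → 0.9 0.0
```

In a CARLA-style setup, values parsed this way could be passed to the simulator's vehicle-control interface; comparing them against human-generated ground truth per scenario yields the accuracy evaluation described above.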