Autonomous robots in orchards require real-time 3D scene understanding despite repetitive row geometry, seasonal appearance changes, and wind-driven foliage motion. We present AgriGS-SLAM, a Visual--LiDAR SLAM framework that couples direct LiDAR odometry and loop closures with multi-camera 3D Gaussian Splatting (3DGS) rendering. Batch rasterization across complementary viewpoints recovers orchard structure under occlusions, while a unified gradient-driven map lifecycle executed between keyframes preserves fine details and bounds memory. Pose refinement is guided by a probabilistic LiDAR-based depth consistency term, back-propagated through the camera projection to tighten geometry-appearance coupling. We deploy the system on a field platform in apple and pear orchards across dormancy, flowering, and harvesting, using a standardized trajectory protocol that evaluates both training-view and novel-view synthesis to reduce 3DGS overfitting in evaluation. Across seasons and sites, AgriGS-SLAM delivers sharper, more stable reconstructions and steadier trajectories than recent state-of-the-art 3DGS-SLAM baselines while maintaining real-time performance on-tractor. While demonstrated in orchard monitoring, the approach can be applied to other outdoor domains requiring robust multimodal perception.
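To illustrate the pose-refinement idea described above, the following is a minimal, hypothetical sketch (not the paper's implementation): it assumes a PyTorch-style differentiable 3DGS rasterizer `render_fn`, a 6-DoF pose correction `pose_delta`, and per-pixel LiDAR depth with an uncertainty map; all names, shapes, and the loss weighting are assumptions introduced for illustration only.

```python
# Hypothetical sketch: combining a probabilistic LiDAR depth-consistency term
# with a photometric loss for pose refinement in a 3DGS-style pipeline.
# All function names, tensor shapes, and hyperparameters are assumptions.
import torch

def depth_consistency_loss(rendered_depth, lidar_depth, lidar_sigma, valid_mask):
    """Gaussian negative log-likelihood between rendered depth and LiDAR depth
    projected into the camera, weighted by per-pixel LiDAR uncertainty (sigma).
    All tensors are (H, W); valid_mask selects pixels with LiDAR returns."""
    residual = (rendered_depth - lidar_depth)[valid_mask]
    sigma = lidar_sigma[valid_mask].clamp(min=1e-3)
    # NLL of N(lidar_depth, sigma^2): 0.5 * (r / sigma)^2 + log(sigma)
    return (0.5 * (residual / sigma) ** 2 + torch.log(sigma)).mean()

def pose_refinement_step(render_fn, pose_delta, image_gt, lidar_depth,
                         lidar_sigma, valid_mask, lambda_depth=0.1, lr=1e-3):
    """One gradient step on a small pose correction (e.g. a 6-DoF tangent
    vector) by back-propagating photometric + depth-consistency losses
    through the differentiable rasterizer `render_fn`."""
    rendered_rgb, rendered_depth = render_fn(pose_delta)   # differentiable render
    photometric = (rendered_rgb - image_gt).abs().mean()   # L1 appearance term
    depth_term = depth_consistency_loss(rendered_depth, lidar_depth,
                                        lidar_sigma, valid_mask)
    loss = photometric + lambda_depth * depth_term
    loss.backward()  # gradients flow through the camera projection to the pose
    with torch.no_grad():
        pose_delta -= lr * pose_delta.grad  # simple gradient-descent update
        pose_delta.grad = None
    return loss.item()
```

In this sketch, the depth term down-weights pixels with large LiDAR uncertainty, so the appearance (photometric) and geometry (depth) signals jointly constrain the pose; the actual system's parameterization, weighting, and optimizer may differ.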