Vision-Based Navigation uses cameras as precision sensors for Guidance, Navigation and Control (GNC) by extracting information from images. One of the obstacles to adopting machine learning for space applications is demonstrating that the available training datasets are adequate to validate the algorithms. The objective of this study is to generate datasets of images and metadata suitable for training machine learning algorithms. Two use cases were selected, and a robust methodology was developed to validate the datasets, including their ground truth. The first use case is in-orbit rendezvous with a man-made object: a mockup of the ENVISAT satellite. The second is a Lunar landing scenario. Datasets were produced from archival data (Chang'e 3), from laboratory acquisitions at the DLR TRON facility and the Airbus Robotic Laboratory, from the SurRender high-fidelity image simulator using Model Capture, and from Generative Adversarial Networks. The use-case definition included the selection of benchmark algorithms: an AI-based pose estimation algorithm and a dense optical flow algorithm were selected. It is ultimately demonstrated that datasets produced with SurRender and the selected laboratory facilities are adequate for training machine learning algorithms.
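The benchmark algorithms themselves are not detailed in this abstract. As a minimal illustration of the dense optical flow principle mentioned above (and not the algorithm evaluated in the study), the sketch below estimates a coarse per-block flow field by exhaustive block matching between two frames; the function name, block size, and search radius are illustrative choices.

```python
import numpy as np

def block_matching_flow(prev, curr, block=8, search=4):
    """Coarse dense flow by block matching: for each block of `prev`,
    find the (dy, dx) shift within `search` pixels that minimizes the
    sum of squared differences against `curr`."""
    h, w = prev.shape
    fh, fw = h // block, w // block
    flow = np.zeros((fh, fw, 2))
    for by in range(fh):
        for bx in range(fw):
            y0, x0 = by * block, bx * block
            ref = prev[y0:y0 + block, x0:x0 + block]
            best_err, best_d = np.inf, (0, 0)
            for dy in range(-search, search + 1):
                for dx in range(-search, search + 1):
                    y1, x1 = y0 + dy, x0 + dx
                    # Skip candidate windows that fall outside the frame.
                    if y1 < 0 or x1 < 0 or y1 + block > h or x1 + block > w:
                        continue
                    cand = curr[y1:y1 + block, x1:x1 + block]
                    err = np.sum((ref - cand) ** 2)
                    if err < best_err:
                        best_err, best_d = err, (dy, dx)
            flow[by, bx] = best_d
    return flow

# Synthetic check: a bright square on faint noise, translated by (2, 3) pixels.
rng = np.random.default_rng(0)
prev = rng.random((32, 32)) * 0.1
prev[8:16, 8:16] = 1.0
curr = np.roll(np.roll(prev, 2, axis=0), 3, axis=1)
flow = block_matching_flow(prev, curr)
print(flow[1, 1])  # block covering the square -> [2. 3.]
```

Production pipelines would use a subpixel, regularized method (e.g. a variational or learning-based dense flow), but the matching cost above conveys the core idea of recovering per-pixel motion between consecutive images.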