We present a new approach to vision-based force estimation in Minimally Invasive Robotic Surgery, based on a frequency-domain basis of organ motion derived directly from video. Using the internal movements generated by natural processes such as breathing or the cardiac cycle, we infer an image-space basis of the motion in the frequency domain. Working in this representation, we discretize the problem to a limited number of low frequencies to build an image-space mechanical model of the environment. We then use this pre-built model to formulate force estimation as a dynamic constraint problem. We demonstrate that this method reliably estimates point contact forces in silicone-phantom and ex vivo experiments, matching ground-truth readings from a force sensor. In addition, we perform qualitative experiments in which we synthesize coherent force textures from surgical videos over a user-selected region of interest. Our method yields strong results in both quantitative and qualitative analyses, providing a solid starting point for a purely vision-based approach to surgical force estimation.
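The abstract does not include code; as a minimal illustrative sketch of the idea of extracting a low-frequency image-space motion basis from video, the following NumPy snippet takes per-frame displacements of tracked image points, transforms them to the frequency domain, and keeps only the lowest nonzero frequency bins. The function name, the `k` truncation parameter, and the tracked-displacement input format are our assumptions, not the paper's actual pipeline:

```python
import numpy as np

def low_freq_motion_basis(disp, k=8):
    """Sketch: extract a low-frequency image-space motion basis.

    disp: (T, M) array of image-space displacements of M tracked
    coordinates over T frames (assumed input format, not the paper's).
    Returns the k lowest nonzero temporal frequencies (cycles/frame)
    and the corresponding complex basis vectors, shape (k, M).
    """
    # remove the static (DC) component before the temporal FFT
    spec = np.fft.rfft(disp - disp.mean(axis=0), axis=0)   # (T//2 + 1, M)
    freqs = np.fft.rfftfreq(disp.shape[0])                 # bin frequencies
    # skip the DC bin; keep only the k lowest-frequency components
    return freqs[1:1 + k], spec[1:1 + k]

# Synthetic example: a breathing-like oscillation at 4 cycles over T frames
T, M = 128, 6
t = np.arange(T)
disp = np.sin(2 * np.pi * 4 * t / T)[:, None] * np.linspace(1.0, 2.0, M)[None, :]

freqs, basis = low_freq_motion_basis(disp, k=8)
dominant = int(np.argmax(np.abs(basis).sum(axis=1)))
# the dominant retained bin recovers the injected 4/T frequency
```

Truncating to a few low-frequency bins is what makes a compact image-space mechanical model tractable: physiological motion such as respiration and the cardiac cycle concentrates its energy at a handful of low temporal frequencies.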