One of the main concerns in design and process planning for multi-axis additive and subtractive manufacturing is collision avoidance between moving objects (e.g., tool assemblies) and stationary objects (e.g., a part unified with fixtures). The collision measure for various pairs of relative rigid translations and rotations between the two pointsets can be conceptualized as a compactly supported scalar field over the 6D non-Euclidean configuration space. Explicit representation and computation of this field is costly in both time and space. If we fix $O(m)$ sparsely sampled rotations (e.g., tool orientations), computing the collision measure field as a convolution of indicator functions of the 3D pointsets over a uniform grid (i.e., voxelized geometry) of resolution $O(n^3)$ via fast Fourier transforms (FFTs) scales as $O(mn^3 \log n)$ in time and $O(mn^3)$ in space. In this paper, we develop an implicit representation of the collision measure field via deep neural networks (DNNs). We show that our approach accurately interpolates the collision measure from a sparse sampling of rotations, and represents the collision measure field with a small memory footprint. Moreover, we show that this representation can be efficiently updated through fine-tuning, both to train the network more efficiently on multi-resolution data and to accommodate incremental changes to the geometry (as occur, for instance, in iterative processes such as topology optimization of the part subject to CNC tool accessibility constraints).
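To illustrate the FFT-based baseline the abstract refers to, the sketch below computes the collision measure over all lattice translations at one fixed rotation as a cross-correlation of the two voxel indicator grids. This is a minimal NumPy illustration, not the paper's implementation; the helper name `collision_measure_field` and the assumption of pre-padded grids (to avoid circular wrap-around) are ours.

```python
import numpy as np

def collision_measure_field(stationary, moving):
    """Collision measure at a fixed rotation, for all lattice translations.

    stationary, moving: 0/1 indicator arrays of shape (n, n, n)
    (assumed zero-padded so circular wrap-around does not matter).
    Returns C with C[t] = sum_x stationary[x] * moving[x - t], i.e. the
    overlap volume when the moving object is translated by t.
    """
    F_s = np.fft.rfftn(stationary.astype(np.float64))
    F_m = np.fft.rfftn(moving.astype(np.float64))
    # Correlation theorem: cross-correlation is IFFT(FFT(S) * conj(FFT(M))).
    return np.fft.irfftn(F_s * np.conj(F_m), s=stationary.shape)
```

Repeating this for each of the $O(m)$ sampled rotations yields the $O(mn^3 \log n)$ time and $O(mn^3)$ space costs stated above, which is exactly the explicit-field expense the DNN representation is meant to avoid.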