We present a deep learning method that propagates point-wise feature representations across shapes within a collection for the purpose of 3D shape segmentation. We propose a cross-shape attention mechanism to enable interactions between a shape’s point-wise features and those of other shapes. The mechanism assesses the degree of interaction between points and mediates feature propagation across shapes, improving the accuracy and consistency of the resulting point-wise feature representations for shape segmentation. We also propose a shape retrieval measure to select shapes suitable for cross-shape attention operations for each test shape. Our experiments demonstrate that our approach yields state-of-the-art results on the popular PartNet dataset.
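To illustrate the idea, the following is a minimal sketch of cross-shape attention between a query shape and one retrieved key shape, assuming a standard single-head scaled dot-product formulation in PyTorch; the class and parameter names (CrossShapeAttention, d_model) are illustrative and do not reproduce the exact architecture of the paper.

import torch
import torch.nn as nn

class CrossShapeAttention(nn.Module):
    def __init__(self, d_model: int):
        super().__init__()
        self.q = nn.Linear(d_model, d_model)  # queries from the query shape's points
        self.k = nn.Linear(d_model, d_model)  # keys from the key shape's points
        self.v = nn.Linear(d_model, d_model)  # values from the key shape's points
        self.scale = d_model ** -0.5

    def forward(self, query_feats: torch.Tensor, key_feats: torch.Tensor) -> torch.Tensor:
        # query_feats: (N, d_model) point-wise features of the query (test) shape
        # key_feats:   (M, d_model) point-wise features of a retrieved key shape
        q = self.q(query_feats)                                # (N, d)
        k = self.k(key_feats)                                  # (M, d)
        v = self.v(key_feats)                                  # (M, d)
        attn = torch.softmax(q @ k.t() * self.scale, dim=-1)   # (N, M) pairwise point interactions
        return attn @ v                                        # (N, d) features propagated from key to query shape

# Usage: refine the query shape's point features with features from one key shape.
csa = CrossShapeAttention(d_model=64)
query = torch.randn(2048, 64)  # point features of the query shape
key = torch.randn(2048, 64)    # point features of a retrieved key shape
out = csa(query, key)          # propagated point features for the query shape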
Qualitative comparisons on a few characteristic test shapes of PartNet between the original MinkowskiNet for 3D shape segmentation (“MinkResUNet”), our backbone (“MinkHRNet”), and CrossShapeNet (CSN) when using self-shape attention alone (“MinkHRNetCSN-SSA”) and when using cross-shape attention with K = 1 key shape per query shape (“MinkHRNetCSN-K1”). The inset images (red dotted box) show the key shape retrieved for each test shape.
Qualitative comparisons on a few characteristic test shapes of PartNet between the original MID-FC network for 3D shape segmentation (“MID-FC”) and CrossShapeNet (CSN) when using self-shape attention alone (“MID-FC-CSN-SSA”) and when using cross-shape attention with K = 4 key shapes per query shape (“MID-FC-CSN-K4”). The last column shows the key shapes retrieved for each test shape, in retrieval order.
Comparisons with other methods reporting performance on PartNet. The column “avg.” reports the mean Part IoU, averaged over all 17 categories. The last column “#cat” counts the number of categories in which a method outperforms the others.
Cross-Shape Attention for Part Segmentation of 3D Point Clouds
Marios Loizou, Siddhant Garg, Dmitry Petrov, Melinos Averkiou, Evangelos Kalogerakis