Towards Deeper and Better Multi-View Feature Fusion for 3D Semantic Segmentation

EasyChair Preprint no. 11722

12 pages
Date: January 9, 2024


3D point clouds are rich in geometric structure, while 2D images carry important and continuous texture information. Combining 2D information to achieve better 3D semantic segmentation has become mainstream in 3D scene understanding. Despite this success, it remains elusive how to fuse and process cross-dimensional features from these two distinct spaces. Existing state-of-the-art methods usually exploit bidirectional projection to align the cross-dimensional features and tackle both the 2D and 3D semantic segmentation tasks. However, to enable bidirectional mapping, such frameworks often require a symmetric 2D-3D network structure, which limits the network's flexibility. Meanwhile, the dual-task setting may easily distract the network and lead to over-fitting on the 3D segmentation task. Because the network is inflexible, fused features can only pass through a decoder network, which hurts model performance due to insufficient depth. To alleviate these drawbacks, we argue in this paper that, despite its simplicity, unidirectionally projecting multi-view 2D deep semantic features into the 3D space and aligning them with 3D deep semantic features leads to better feature fusion. On the one hand, the unidirectional projection makes our model focus more on the core task, i.e., 3D segmentation; on the other hand, relaxing the bidirectional projection to a unidirectional one enables deeper cross-domain semantic alignment and offers the flexibility to fuse better, more complex features from very different spaces. Among joint 2D-3D approaches, our method achieves superior performance on the ScanNetv2 benchmark for 3D semantic segmentation.
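The unidirectional fusion described above can be illustrated with a minimal sketch: project each 3D point into a camera view, sample that view's 2D feature map, and concatenate the sampled 2D feature with the point's 3D feature. This is only an illustrative NumPy implementation; the function name, the nearest-neighbour sampling, and the interface (intrinsics `K`, a 4x4 `world_to_cam` extrinsic matrix) are assumptions for exposition, not the paper's actual pipeline.

```python
import numpy as np

def fuse_2d_into_3d(points, feats_3d, feat_map_2d, K, world_to_cam):
    """Sketch of unidirectional 2D-to-3D feature fusion (illustrative only).

    points:      (N, 3) point coordinates in world space
    feats_3d:    (N, C3) per-point 3D deep features
    feat_map_2d: (H, W, C2) deep feature map from one 2D view
    K:           (3, 3) camera intrinsics
    world_to_cam:(4, 4) extrinsic transform from world to camera frame
    """
    N = points.shape[0]
    # Transform points into the camera frame (homogeneous coordinates).
    pts_h = np.hstack([points, np.ones((N, 1))])          # (N, 4)
    cam = (world_to_cam @ pts_h.T).T[:, :3]               # (N, 3)

    # Perspective projection onto the image plane.
    uvw = (K @ cam.T).T                                   # (N, 3)
    u = uvw[:, 0] / uvw[:, 2]
    v = uvw[:, 1] / uvw[:, 2]

    H, W, C2 = feat_map_2d.shape
    # Keep points in front of the camera that land inside the image.
    valid = (cam[:, 2] > 0) & (u >= 0) & (u < W) & (v >= 0) & (v < H)

    # Nearest-neighbour sampling of 2D features; invisible points get zeros.
    sampled = np.zeros((N, C2))
    ui = np.clip(u[valid].astype(int), 0, W - 1)
    vi = np.clip(v[valid].astype(int), 0, H - 1)
    sampled[valid] = feat_map_2d[vi, ui]

    # Fuse by concatenation: each point carries its 3D feature
    # plus the 2D feature lifted from the view.
    return np.concatenate([feats_3d, sampled], axis=1)
```

In a multi-view setting this sampling would be repeated per view and the lifted features aggregated (e.g. averaged) before concatenation; the fused per-point features can then be fed through an arbitrarily deep 3D decoder, which is the flexibility the unidirectional design buys.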

Keyphrases: Multi-view fusion, point cloud, semantic segmentation

BibTeX entry
BibTeX does not have the right entry for preprints. This is a hack for producing the correct reference:
@booklet{EasyChair:11722,
  author = {Chaolong Yang and Yuyao Yan and Weiguang Zhao and Jianan Ye and Xi Yang and Amir Hussain and Bin Dong and Kaizhu Huang},
  title = {Towards Deeper and Better Multi-View Feature Fusion for 3D Semantic Segmentation},
  howpublished = {EasyChair Preprint no. 11722},
  year = {EasyChair, 2024}}