3D-Former: Monocular Scene Reconstruction with 3D SDF Transformers

ICLR 2023


Weihao Yuan, Xiaodong Gu, Heng Li, Zilong Dong, Siyu Zhu

Alibaba Group   

Abstract



Monocular scene reconstruction from posed images is challenging due to the complexity of a large environment. Recent volumetric methods learn to directly predict the TSDF volume and have demonstrated promising results in this task. However, most of these methods focus on how to extract and fuse 2D features into a 3D feature volume, while none of them improves how the 3D volume itself is aggregated. In this work, we propose an SDF transformer network, which replaces the 3D CNN for better 3D feature aggregation. To reduce the explosive computational complexity of 3D multi-head attention, we propose a sparse window attention module, where attention is computed only among the non-empty voxels within a local window. A top-down-bottom-up 3D attention network is then built for 3D feature aggregation, in which a dilate-attention structure is proposed to prevent geometry degeneration, and two global modules are employed to provide global receptive fields. Experiments on multiple datasets show that this 3D transformer network generates a more accurate and complete reconstruction, outperforming previous methods by a large margin. Remarkably, the mesh accuracy is improved by 41.8%, and the mesh completeness is improved by 25.3%, on the ScanNet dataset.


Overview Pipeline


Framework

Overview of the 3D reconstruction framework. A 2D backbone network extracts features from the input images; these 2D features are then back-projected and fused into 3D feature volumes, which are aggregated by our 3D transformer to generate the reconstruction in a coarse-to-fine manner. A minimal sketch of the back-projection and fusion step is given below.
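
The following is a minimal PyTorch sketch of how such a back-projection and average-fusion step could look. It is an illustration under simple assumptions (pinhole intrinsics, world-to-camera extrinsics, an axis-aligned volume with a given origin and voxel size), not the released implementation; the function name and all parameters are hypothetical.

# Minimal sketch (not the authors' code): back-project 2D features into a
# 3D feature volume and fuse across views by averaging.
import torch

def backproject_features(feats, intrinsics, extrinsics, volume_dim,
                         origin, voxel_size):
    """feats:      (V, C, H, W) float features from V posed views
    intrinsics:    (V, 3, 3) camera intrinsics
    extrinsics:    (V, 4, 4) world-to-camera transforms
    volume_dim:    (X, Y, Z) number of voxels per axis
    origin:        (3,) tensor, world coordinate of the volume corner
    voxel_size:    edge length of one voxel in meters
    returns:       (C, X, Y, Z) fused feature volume and (X, Y, Z) view counts
    """
    V, C, H, W = feats.shape
    X, Y, Z = volume_dim

    # World coordinates of all voxel centers, shape (N, 3) with N = X*Y*Z.
    grid = torch.stack(torch.meshgrid(
        torch.arange(X), torch.arange(Y), torch.arange(Z),
        indexing="ij"), dim=-1).reshape(-1, 3).float()
    world = origin + (grid + 0.5) * voxel_size

    volume = torch.zeros(C, X * Y * Z)
    count = torch.zeros(X * Y * Z)

    for v in range(V):
        # Transform voxel centers into the camera frame and project to pixels.
        homo = torch.cat([world, torch.ones(len(world), 1)], dim=1)   # (N, 4)
        cam = (extrinsics[v] @ homo.T)[:3]                            # (3, N)
        pix = intrinsics[v] @ cam                                     # (3, N)
        z = pix[2].clamp(min=1e-6)
        u, w_ = pix[0] / z, pix[1] / z

        # Keep voxels that land inside the image and in front of the camera.
        valid = (cam[2] > 0) & (u >= 0) & (u < W) & (w_ >= 0) & (w_ < H)
        ui, wi = u[valid].long(), w_[valid].long()

        volume[:, valid] += feats[v][:, wi, ui]
        count[valid] += 1

    volume = volume / count.clamp(min=1)   # average fusion across views
    return volume.reshape(C, X, Y, Z), count.reshape(X, Y, Z)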


Network Structure


Network

The structure of the 3D transformer. “S-W-Attn” denotes sparse window attention.
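
The sketch below illustrates one way such a top-down-bottom-up aggregation could be arranged: an encoder path that reduces the volume resolution, a global attention module at the coarsest level, and a decoder path that restores the resolution with skip connections. This is a hedged structural sketch only: the Block3D module is a plain 3D convolution standing in for the sparse window attention and dilate-attention blocks of the figure, and all module names and channel counts are assumptions.

# Structural sketch (assumptions, not the released code) of a
# top-down-bottom-up 3D aggregation network with a global bottleneck.
import torch
import torch.nn as nn

class Block3D(nn.Module):
    """Stand-in for an S-W-Attn block: conv + norm + activation."""
    def __init__(self, cin, cout, stride=1):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv3d(cin, cout, 3, stride=stride, padding=1),
            nn.GroupNorm(8, cout), nn.GELU())

    def forward(self, x):
        return self.net(x)

class TopDownBottomUp3D(nn.Module):
    def __init__(self, c=32):
        super().__init__()
        # Top-down (encoder) path: aggregate while reducing resolution.
        self.enc1 = Block3D(c, c)
        self.enc2 = Block3D(c, 2 * c, stride=2)
        self.enc3 = Block3D(2 * c, 4 * c, stride=2)
        # Global module at the coarsest level (global receptive field).
        self.global_attn = nn.MultiheadAttention(4 * c, num_heads=4,
                                                 batch_first=True)
        # Bottom-up (decoder) path: upsample and fuse skip connections.
        self.up2 = nn.ConvTranspose3d(4 * c, 2 * c, 2, stride=2)
        self.dec2 = Block3D(4 * c, 2 * c)
        self.up1 = nn.ConvTranspose3d(2 * c, c, 2, stride=2)
        self.dec1 = Block3D(2 * c, c)

    def forward(self, vol):              # vol: (B, C, X, Y, Z), dims divisible by 4
        e1 = self.enc1(vol)
        e2 = self.enc2(e1)
        e3 = self.enc3(e2)
        # Flatten the coarsest volume into tokens for global attention.
        B, C, X, Y, Z = e3.shape
        tok = e3.flatten(2).transpose(1, 2)          # (B, X*Y*Z, C)
        tok, _ = self.global_attn(tok, tok, tok)
        e3 = tok.transpose(1, 2).reshape(B, C, X, Y, Z)
        # Decode back to full resolution with skip connections.
        d2 = self.dec2(torch.cat([self.up2(e3), e2], dim=1))
        d1 = self.dec1(torch.cat([self.up1(d2), e1], dim=1))
        return d1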


Sparse Window Attention


Attention

(a) Illustration of the sparse window attention. To compute the attention for the current voxel (in orange), we first sparsify the volume using the occupancy prediction from the coarser level, and then search for the occupied voxels (in dark blue) within a small window. The attention is thus computed over only these occupied neighboring voxels. (b) Illustration of the dilate-attention in a 2D slice. We dilate the occupied voxels and also compute attention for these dilated voxels (in yellow) to maintain the geometry structure.
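
A minimal single-head sketch of sparse window attention, operating on a sparse list of occupied voxels, could look as follows. The function name, the projection matrices wq/wk/wv, and the dictionary-based neighbor lookup are illustrative assumptions rather than the paper's implementation; the dilate-attention step, which would extend the query set to the dilated voxels, is omitted for brevity.

# Minimal sketch: each occupied voxel attends only to occupied voxels
# inside a (2r+1)^3 window around it.
import itertools
import torch

def sparse_window_attention(coords, feats, wq, wk, wv, radius=1):
    """coords: (N, 3) integer voxel coordinates of occupied voxels
    feats:     (N, C) float features of those voxels
    wq/wk/wv:  (C, C) projection matrices for query/key/value
    radius:    window half-size, i.e. a (2r+1)^3 neighborhood
    returns:   (N, C) aggregated features
    """
    q, k, v = feats @ wq, feats @ wk, feats @ wv
    scale = q.shape[1] ** -0.5

    # Hash occupied coordinates so the neighbor lookup per voxel costs
    # O(window size) rather than O(N).
    index = {tuple(c.tolist()): i for i, c in enumerate(coords)}
    offsets = list(itertools.product(range(-radius, radius + 1), repeat=3))

    out = torch.zeros_like(feats)
    for i, c in enumerate(coords):
        # Collect the occupied voxels inside the local window.
        nbrs = [index[n] for n in
                (tuple((c + torch.tensor(o)).tolist()) for o in offsets)
                if n in index]
        attn = torch.softmax(q[i] @ k[nbrs].T * scale, dim=-1)   # (M,)
        out[i] = attn @ v[nbrs]
    return out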


Results


Results

Discussions


Due to the volume representation, our framework is limited by the trade-off between the resolution of the volume and the memory consumption. The voxel size is set to 4 cm, so geometry details smaller than 4 cm are hard to recover. A smaller voxel size would yield a better reconstruction with more details, but at the cost of more memory, as the rough estimate below illustrates.
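
As a rough, illustrative estimate (the room extent, channel count, and data type below are assumptions, not numbers from the paper), halving the voxel size multiplies the number of voxels, and hence the feature-volume memory, by eight:

# Back-of-the-envelope estimate of feature-volume memory vs. voxel size.
def feature_volume_mb(extent_m=(6.0, 6.0, 2.4), voxel_size_m=0.04,
                      channels=32, bytes_per_value=4):
    voxels = 1
    for e in extent_m:
        voxels *= int(round(e / voxel_size_m))
    return voxels * channels * bytes_per_value / 2**20

print(feature_volume_mb(voxel_size_m=0.04))  # ~165 MB for a 6 x 6 x 2.4 m room
print(feature_volume_mb(voxel_size_m=0.02))  # ~1318 MB, 8x larger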

In addition, this 3D transformer structure could be used to aggregate any 3D feature volume, and thus could be applied to more 3D tasks in the future, such as 3D segmentation.


Citation


@inproceedings{yuan2022former3d,
  title={3D-Former: Monocular Scene Reconstruction with 3D SDF Transformers},
  author={Yuan, Weihao and Gu, Xiaodong and Li, Heng and Dong, Zilong and Zhu, Siyu},
  booktitle={Proceedings of the International Conference on Learning Representations},
  year={2023}
}