DreaMo: Articulated 3D Reconstruction From A Single Casual Video

Tao Tu, Ming-Feng Li, Chieh Hubert Lin, Yen-Chi Cheng, Min Sun, Ming-Hsuan Yang

¹National Tsing Hua University, ²CMU, ³UC Merced, ⁴UIUC, ⁵Amazon

DreaMo reconstructs an articulated 3D model from just one Internet video, even when the target subject lacks sufficient view coverage.

Abstract

Articulated 3D reconstruction has valuable applications in various domains, yet it remains costly and demands intensive work from domain experts. Recent template-free learning methods show promising results with monocular videos. Nevertheless, these approaches require the input video to comprehensively cover the subject from all viewpoints, which limits their applicability to casually captured videos from online sources.

In this work, we study articulated 3D shape reconstruction from a single, casually captured Internet video in which the subject's view coverage is incomplete. We propose DreaMo, which jointly performs shape reconstruction and completes the challenging low-coverage regions using a view-conditioned diffusion prior together with several tailored regularizations. In addition, we introduce a skeleton generation strategy that creates human-interpretable skeletons from the learned neural bones and skinning weights.
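To make the role of the diffusion prior concrete, below is a minimal PyTorch sketch of how a view-conditioned diffusion model can supervise renders from low-coverage viewpoints through a score-distillation-style loss (in the spirit of DreamFusion's SDS). Every name here (`ViewConditionedPrior`, `predict_noise`, the toy noise schedule) is a hypothetical placeholder for illustration, not DreaMo's actual implementation.

```python
import torch

class ViewConditionedPrior(torch.nn.Module):
    """Stand-in for a view-conditioned diffusion model (e.g. Zero-1-to-3 style).

    A real prior would be a pretrained diffusion UNet; this stub only fixes
    the interface assumed by `sds_loss` below.
    """
    def __init__(self):
        super().__init__()
        self.net = torch.nn.Conv2d(3, 3, 3, padding=1)

    def predict_noise(self, noisy, t, ref_image, cam_pose):
        # A real model would condition on the reference view, the relative
        # camera pose, and the timestep t; the stub ignores them.
        return self.net(noisy)

def sds_loss(rendered, ref_image, cam_pose, prior):
    """Score-distillation-style loss on a render from a low-coverage view."""
    b = rendered.shape[0]
    t = torch.rand(b, device=rendered.device)           # diffusion timestep
    alpha = (1.0 - t).view(b, 1, 1, 1)                  # toy linear schedule
    noise = torch.randn_like(rendered)
    noisy = alpha.sqrt() * rendered + (1.0 - alpha).sqrt() * noise
    with torch.no_grad():                               # the prior is frozen
        pred = prior.predict_noise(noisy, t, ref_image, cam_pose)
    # SDS trick: d(loss)/d(rendered) equals (pred - noise), with no gradient
    # flowing through the diffusion model itself.
    return ((pred - noise).detach() * rendered).mean()

# Usage: nudge an under-observed render toward plausible appearance.
prior = ViewConditionedPrior()
render = torch.rand(1, 3, 64, 64, requires_grad=True)   # differentiable render
ref = torch.rand(1, 3, 64, 64)                          # an observed frame
pose = torch.eye(4).unsqueeze(0)                        # relative camera pose
loss = sds_loss(render, ref, pose, prior)
loss.backward()
```

In practice such a term would be added, with a small weight, to the usual reconstruction losses on observed frames, and applied only to viewpoints the input video does not cover.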

We conduct our study on a self-collected set of Internet videos characterized by incomplete view coverage. DreaMo shows promising quality in novel-view rendering, detailed articulated shape reconstruction, and skeleton generation. Extensive qualitative and quantitative studies validate the efficacy of each proposed component and show that existing methods fail to recover correct geometry under incomplete view coverage.
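The skeleton generation strategy is described above only at a high level. One plausible reading is that bones whose skinning weights co-activate on the same surface points are adjacent in the articulation structure, so a spanning tree over a co-activation affinity yields a human-interpretable skeleton. The NumPy/SciPy sketch below illustrates that idea; `build_skeleton` and the affinity heuristic are assumptions for illustration, not DreaMo's exact algorithm.

```python
import numpy as np
from scipy.sparse.csgraph import minimum_spanning_tree

def build_skeleton(skin_weights):
    """Connect neural bones into a tree via skinning-weight co-activation.

    skin_weights: (N, B) skinning weights of N surface points over B bones.
    Returns (parent, child) index pairs forming a spanning tree; placing a
    joint at each bone center then yields an explicit skeleton.
    """
    # Bones that jointly influence the same surface points are likely
    # adjacent in the articulation structure.
    affinity = skin_weights.T @ skin_weights                  # (B, B)
    # Convert to strictly positive distances so that a *minimum* spanning
    # tree maximizes total affinity (zero entries would be read as "no edge").
    dist = affinity.max() + 1.0 - affinity
    np.fill_diagonal(dist, 0.0)                               # no self-loops
    mst = minimum_spanning_tree(dist).toarray()
    return [(int(i), int(j)) for i, j in zip(*np.nonzero(mst))]

# Usage with random stand-in data: 8 bones, 1000 surface samples.
weights = np.random.dirichlet(np.ones(8), size=1000)          # rows sum to 1
print(build_skeleton(weights))
```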

Video

Novel View Rendering

Side-by-side comparisons of novel-view renderings between DreaMo (Ours) and BANMo [BANMo].

3D Shape Reconstruction

Side-by-side comparisons of reconstructed 3D shapes between DreaMo (Ours) and BANMo [BANMo].

BibTeX

@article{tu2023dreamo,
  title={DreaMo: Articulated 3D Reconstruction From A Single Casual Video},
  author={Tu, Tao and Li, Ming-Feng and Lin, Chieh Hubert and Cheng, Yen-Chi and Sun, Min and Yang, Ming-Hsuan},
  journal={arXiv preprint arXiv:2312.02617},
  year={2023}
}

References

[BANMo] Gengshan Yang, et al. “BANMo: Building Animatable 3D Neural Models from Many Casual Videos.” CVPR, 2022.