Learning Naturally Aggregated Appearance for Efficient 3D Editing

¹HKUST  ²Ant Group  ³CAD&CG, ZJU  ⁴Stanford

Efficient 3D editing with the learned canonical image and projection field; novel views without editing are shown on the left for comparison.

Abstract

Neural radiance fields, which represent a 3D scene as a color field and a density field, have demonstrated great progress in novel view synthesis yet are unfavorable for editing due to their implicit nature. To address this deficiency, we propose to replace the color field with an explicit 2D appearance aggregation, also called a canonical image, with which users can easily customize their 3D editing via 2D image processing. To avoid distortion and facilitate convenient editing, we complement the canonical image with a projection field that maps 3D points onto 2D pixels for texture lookup. This field is carefully initialized with a pseudo canonical camera model and optimized with an offset regularity term to ensure the naturalness of the aggregated appearance. Extensive experimental results on three datasets suggest that our representation, dubbed AGAP, well supports various ways of 3D editing (e.g., stylization, interactive drawing, and content extraction) with no need for re-optimization in each case, demonstrating its generalizability and efficiency.
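As a rough illustration of the projection-field idea, the sketch below maps a 3D point to a 2D pixel in the canonical image by first projecting it with a fixed pseudo canonical camera and then adding a small learned offset, whose magnitude is penalized as one simple form of offset regularity. The names (ProjectionField, offset_mlp, offset_regularity_loss), the MLP design, and the exact loss form are illustrative assumptions rather than the paper's implementation; view dependence and other details are omitted.

```python
import torch
import torch.nn as nn

class ProjectionField(nn.Module):
    """Hypothetical projection field: pseudo-camera projection plus a learned offset."""

    def __init__(self, K, w2c, hidden=64):
        super().__init__()
        # Fixed pseudo canonical camera used to initialize the mapping:
        # intrinsics K (3, 3) and world-to-camera extrinsics w2c (3, 4).
        self.register_buffer("K", K)
        self.register_buffer("w2c", w2c)
        # Small MLP predicting a residual UV offset for each 3D point.
        self.offset_mlp = nn.Sequential(
            nn.Linear(3, hidden), nn.ReLU(), nn.Linear(hidden, 2)
        )

    def forward(self, xyz):
        # Project 3D points (N, 3) with the pseudo canonical camera.
        xyz_h = torch.cat([xyz, torch.ones_like(xyz[..., :1])], dim=-1)  # (N, 4)
        cam = xyz_h @ self.w2c.t()                                       # (N, 3)
        pix = cam @ self.K.t()
        uv = pix[..., :2] / pix[..., 2:].clamp(min=1e-6)                 # perspective divide
        # Refine the camera projection with a learned offset.
        offset = self.offset_mlp(xyz)
        return uv + offset, offset

def offset_regularity_loss(offset):
    # Keep the learned offsets small so the texture lookup stays close to the
    # pseudo-camera projection and the aggregated canonical image stays natural.
    return offset.pow(2).mean()
```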

Method

AGAP Pipeline

Overview of our method. AGAP consists of two components: (1) an explicit 3D density grid ΦG that models scene geometry as density σ; (2) an explicit canonical image ΦI with an associated view-dependent projection field P that aggregates appearance for color c. By performing 2D image processing on the canonical image, our method enables various editing tasks (e.g., content extraction, interactive drawing, and scene stylization) through volume rendering without the need for re-optimization.
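To make the pipeline concrete, below is a minimal, hedged sketch of one possible render step under this kind of representation: density is interpolated from the explicit grid, color is fetched from the canonical image at the UVs produced by a projection field, and the samples are composited with the standard volume rendering equation. The tensor shapes, the softplus activation, and the assumption that coordinates and UVs are normalized to [-1, 1] are illustrative choices, not the paper's exact implementation.

```python
import torch
import torch.nn.functional as F

def render_ray(xyz, deltas, density_grid, canonical_image, projection_field):
    """Composite one ray from N samples (all shapes are illustrative assumptions).

    xyz:              (N, 3) sample points, normalized to [-1, 1].
    deltas:           (N,)   distances between consecutive samples.
    density_grid:     (1, 1, D, H, W) explicit density grid (ΦG).
    canonical_image:  (1, 3, H, W)    explicit canonical image (ΦI).
    projection_field: callable mapping xyz -> (N, 2) UVs in [-1, 1] (P).
    """
    # Density from the explicit 3D grid via trilinear interpolation.
    grid3d = xyz.view(1, 1, 1, -1, 3)
    sigma = F.grid_sample(density_grid, grid3d, mode="bilinear",
                          align_corners=True).view(-1)
    sigma = F.softplus(sigma)                                 # keep density non-negative
    # Color from a 2D texture lookup in the canonical image at the projected UVs.
    uv = projection_field(xyz)                                # (N, 2)
    color = F.grid_sample(canonical_image, uv.view(1, 1, -1, 2),
                          mode="bilinear", align_corners=True).view(3, -1).t()
    # Standard volume rendering: alpha compositing along the ray.
    alpha = 1.0 - torch.exp(-sigma * deltas)
    trans = torch.cumprod(torch.cat([torch.ones_like(alpha[:1]),
                                     1.0 - alpha + 1e-10])[:-1], dim=0)
    weights = alpha * trans
    return (weights.unsqueeze(-1) * color).sum(dim=0)         # (3,) RGB for this ray
```

Because the appearance lives in an ordinary 2D image under this scheme, editing the scene amounts to editing that image: any 2D processing applied to the canonical image propagates to all rendered views through the same lookup, which is why no per-edit re-optimization is needed.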

3D Scene Editing

3D Scene Editing Teaser

Scene Stylization

Content Extraction

Texture Editing

BibTeX

@article{cheng2023learning,
    title   = {Learning Naturally Aggregated Appearance for Efficient 3D Editing},
    author  = {Ka Leong Cheng and Qiuyu Wang and Zifan Shi and Kecheng Zheng and Yinghao Xu and Hao Ouyang and Qifeng Chen and Yujun Shen},
    journal = {arXiv preprint arXiv:2312.06657},
    website = {https://felixcheng97.github.io/AGAP/},
    year    = {2023},
}