Neural Adaptive Scene Tracing (NAScenT)

Rui Li, Darius Rückert, Yuanhao Wang, Ramzi Idoughi, Wolfgang Heidrich,
VMV 2022



NAScenT jointly optimizes a hybrid explicit-implicit representation consisting of an octree for 3D space partitioning and a structured network in each active leaf node. Each network maps a spatial coordinate and a viewing direction to a view-independent density and a view-dependent color. NAScenT adaptively allocates more tree nodes to parts of the 3D space with higher scene complexity. Shown here are novel-view renderings of the Fruit and Fern scenes.
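To make the per-leaf mapping concrete, below is a minimal sketch (in PyTorch, not the authors' code) of one such leaf-node network: density depends only on the spatial coordinate, while color additionally conditions on the viewing direction. Layer widths and depths are illustrative assumptions, not the paper's configuration.

import torch
import torch.nn as nn

class LeafNetwork(nn.Module):
    """One network per active octree leaf: (position, direction) ->
    (view-independent density, view-dependent color).
    Layer sizes are illustrative, not taken from the paper."""
    def __init__(self, hidden=64):
        super().__init__()
        # Spatial branch: position -> features.
        self.spatial = nn.Sequential(
            nn.Linear(3, hidden), nn.ReLU(),
            nn.Linear(hidden, hidden), nn.ReLU(),
        )
        # Density head: direction-independent by construction.
        self.density_head = nn.Linear(hidden, 1)
        # Color head: spatial features + view direction -> RGB.
        self.color_head = nn.Sequential(
            nn.Linear(hidden + 3, hidden), nn.ReLU(),
            nn.Linear(hidden, 3), nn.Sigmoid(),
        )

    def forward(self, x, d):
        h = self.spatial(x)
        sigma = torch.relu(self.density_head(h))          # density >= 0
        rgb = self.color_head(torch.cat([h, d], dim=-1))  # view-dependent color
        return sigma, rgb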

Abstract

Neural rendering with implicit neural networks has recently emerged as an attractive proposition for scene reconstruction, achieving excellent quality albeit at high computational cost. While the most recent generation of such methods has made progress on the rendering (inference) times, very little progress has been made on improving the reconstruction (training) times. In this work we present Neural Adaptive Scene Tracing (NAScenT), the first neural rendering method based on directly training a hybrid explicit-implicit neural representation. NAScenT uses a hierarchical octree representation with one neural network per leaf node and combines this representation with a two-stage sampling process that concentrates ray samples where they matter most – near object surfaces. As a result, NAScenT is capable of reconstructing challenging scenes including both large, sparsely populated volumes like UAV-captured outdoor environments, as well as small scenes with high geometric complexity. NAScenT outperforms existing neural rendering approaches in terms of both quality and training time.
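The two-stage sampling mentioned above can be pictured as a coarse-to-fine importance scheme: uniform coarse samples along each ray yield a density-derived weight profile, and fine samples are then drawn from that profile so they cluster near surfaces. The sketch below (PyTorch) shows one common way to realize such a scheme; the paper's exact procedure, the sample counts, and the density_fn interface are assumptions for illustration, not taken from the source.

import torch

def two_stage_samples(density_fn, origins, dirs, near, far,
                      n_coarse=64, n_fine=64):
    """Sketch of coarse-to-fine ray sampling. density_fn is assumed to map
    points of shape (rays, n_coarse, 3) to densities (rays, n_coarse)."""
    # Stage 1: uniform coarse samples in [near, far].
    t = torch.linspace(near, far, n_coarse)                       # (n_coarse,)
    pts = origins[:, None, :] + t[None, :, None] * dirs[:, None, :]
    sigma = density_fn(pts)                                       # (rays, n_coarse)
    # Turn densities into per-interval weights (alpha compositing).
    delta = (far - near) / (n_coarse - 1)
    alpha = 1.0 - torch.exp(-sigma * delta)
    trans = torch.cumprod(
        torch.cat([torch.ones_like(alpha[:, :1]), 1.0 - alpha + 1e-10], dim=-1),
        dim=-1)[:, :-1]
    w = alpha * trans                                             # (rays, n_coarse)
    # Stage 2: inverse-CDF sampling of fine depths from the weight profile,
    # which concentrates samples near high-density regions (surfaces).
    pdf = w / (w.sum(-1, keepdim=True) + 1e-10)
    cdf = torch.cumsum(pdf, dim=-1)
    u = torch.rand(w.shape[0], n_fine)
    idx = torch.searchsorted(cdf, u).clamp(max=n_coarse - 1)
    return t[idx]                                                 # (rays, n_fine)

Both sample sets would then be evaluated by the per-leaf networks and alpha-composited into pixel colors, as in standard volume rendering.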

Visual Comparison

Figure 1

Novel View Comparison on Synthetic Dataset [MST∗20]. We render viewpoints from near to far to visualize viewpoint changes and the influence of geometry on rendering.

Figure 2

Novel View Comparison on Real Scene Dataset [MST∗20]. We render extrapolated viewpoints far from the views sampled in the training dataset to show rendering performance under challenging, large viewpoint changes.

Figure 3

UAV scene reconstruction. We compare our method against NeRF [MST∗20] and Mip-NeRF [BMT∗21].

Paper and Supplement

Paper [main.pdf] 
Supplement [supp.pdf] 


Code and dataset

Source code [GitHub (coming soon)]

Dataset [Dataset (coming soon)]

Citation

@InProceedings{Rui2022NAScenT,
      title     = {Neural Adaptive Scene Tracing ({NAScenT})},
      author    = {Li, Rui and Rückert, Darius and Wang, Yuanhao and Idoughi, Ramzi and Heidrich, Wolfgang},
      booktitle = {The Symposium on Vision, Modeling, and Visualization (VMV)},
      year      = {2022}
}