Lighting up NeRF via Unsupervised Decomposition and Enhancement (ICCV2023)

Haoyuan Wang1, Xiaogang Xu2, Ke Xu1, and Rynson W.H. Lau1
1City University of Hong Kong  
2Zhejiang University  

We propose an unsupervised method that decomposes and enhances the radiance field, addressing the problem of reconstructing a high-quality NeRF from low-quality, low-light images with heavy noise.

Abstract

Neural Radiance Field (NeRF) is a promising approach for synthesizing novel views, given a set of images and the corresponding camera poses of a scene. However, images photographed from a low-light scene can hardly be used to train a NeRF model to produce high-quality results, due to their low pixel intensities, heavy noise, and color distortion. Combining existing low-light image enhancement methods with NeRF methods also does not work well due to the view inconsistency caused by the individual 2D enhancement process. In this paper, we propose a novel approach, called Low-Light NeRF (or LLNeRF), to enhance the scene representation and synthesize normal-light novel views directly from sRGB low-light images in an unsupervised manner. The core of our approach is a decomposition of radiance field learning, which allows us to enhance the illumination, reduce noise and correct the distorted colors jointly with the NeRF optimization process. Our method is able to produce novel view images with proper lighting and vivid colors and details, given a collection of camera-finished low dynamic range (8-bits/channel) images from a low-light scene. Experiments demonstrate that our method outperforms existing low-light enhancement methods and NeRF methods.

Our results
Reconstruct high-quality NeRF given low-light noisy images

Although RawNeRF demonstrates that training NeRF on RAW images is feasible under low-light conditions, our inputs, as illustrated in the left figure below, are significantly dark and noisy sRGB images, on which training NeRF directly is difficult. The issues predominantly stem from extremely low illumination levels and intense noise. As highlighted in the right figure, even after brightening, the image remains noticeably noisy and exhibits severe color distortion.

Left: A sample of training image; Right: Brightened result, where noise and color distortion degrade the image.
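The noise amplification shown in the right figure can be reproduced with a toy numpy experiment (a hypothetical illustration, not code from our method): scaling a dark, noisy signal by a constant gain brightens it, but scales the noise standard deviation by exactly the same factor.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical dark "scene": a constant low intensity plus additive sensor noise.
clean = np.full(10000, 0.05)
noisy = clean + rng.normal(0.0, 0.02, clean.shape)

# Naive brightening: multiply intensities by a constant gain.
gain = 10.0
brightened = noisy * gain

# The signal is brighter, but the noise std grows by the same factor,
# since std(a * X) = a * std(X).
print(noisy.std())       # noise level of the dark input (about 0.02)
print(brightened.std())  # gain times larger
```

This is why a per-image 2D enhancement pass cannot simply be bolted onto NeRF: the amplified, view-dependent noise breaks the multi-view consistency that NeRF optimization relies on.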

With our proposed method, we reconstruct a high-quality 3D representation from these degraded, low-quality images. The resulting renderings, shown in the following figure, demonstrate the efficacy of our method against the original NeRF. Please read our paper for more results.

A comparison of the baseline methods (the combination of 2D enhancement methods and NeRF) and our method.

Our Network
Decompose, and enhance.
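The "decompose, and enhance" idea can be sketched roughly as follows. This is a minimal, hypothetical illustration (the function name, the sum-based decomposition, and the gamma curve are our simplifying assumptions, not the paper's actual formulation): each radiance sample is split into a lighting-related magnitude and a roughly lighting-invariant color direction, only the magnitude is enhanced, and the two are recombined, so colors are corrected without amplifying the lighting component's distortions.

```python
import numpy as np

def decompose_and_enhance(radiance, gamma=0.45):
    """Toy sketch of decomposing radiance into a lighting-related
    component v and a normalized color component r, then enhancing
    only v before recomposing. Illustrative only."""
    eps = 1e-6
    v = radiance.sum(axis=-1, keepdims=True)  # lighting-related magnitude
    r = radiance / (v + eps)                  # color direction (sums to ~1)
    v_enh = np.power(v, gamma)                # brighten dark values more
    return r * v_enh

dark = np.array([[0.01, 0.02, 0.015]])        # a dim radiance sample
bright = decompose_and_enhance(dark)          # brighter, same color ratios
```

In our actual method, this decomposition is learned jointly with the NeRF optimization, which is what allows illumination enhancement, denoising, and color correction to remain consistent across views.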
Cite Our Work
@inproceedings{wang2023lighting,
  title={Lighting up NeRF via Unsupervised Decomposition and Enhancement},
  author={Wang, Haoyuan and Xu, Xiaogang and Xu, Ke and Lau, Rynson W.H.},
  booktitle={ICCV},
  year={2023}
}