Accelerating Neural Field Training via Langevin Monte-Carlo Sampling

Abstract

We show how Neural Field training can be accelerated by efficiently choosing where to sample. While Neural Fields have recently become popular, they are typically trained by sampling the training domain uniformly at random or through handcrafted heuristics. In this work, we show that smarter sampling yields faster convergence and higher final training quality. Specifically, we propose a sampling scheme based on Langevin Monte-Carlo sampling that concentrates training samples in the regions of the domain where they matter most.
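As a rough illustration only, not the paper's exact procedure, a Langevin Monte-Carlo update of this flavor drifts sample coordinates toward high-error regions of the domain while injecting Gaussian noise to keep the samples diverse. The sketch below assumes a pointwise `loss_fn`, a `step_size` hyperparameter, and a unit-cube domain; all of these names and choices are illustrative assumptions.

```python
import torch

def langevin_step(x, loss_fn, step_size=1e-2):
    """One illustrative Langevin Monte-Carlo update (assumption, not the
    authors' exact scheme): treat the pointwise training loss as an
    unnormalized density and drift samples toward high-loss regions."""
    x = x.detach().requires_grad_(True)
    # Log of the pointwise loss acts as an unnormalized log-density.
    log_density = torch.log(loss_fn(x) + 1e-8).sum()
    (grad,) = torch.autograd.grad(log_density, x)
    noise = torch.randn_like(x)
    # Standard Langevin dynamics: gradient drift plus scaled Gaussian noise.
    x_new = x + step_size * grad + (2.0 * step_size) ** 0.5 * noise
    # Keep samples inside a hypothetical unit-cube training domain.
    return x_new.detach().clamp(0.0, 1.0)
```

In such a scheme, the updated coordinates would replace uniform random samples in each training step, so gradient updates are spent where the field's reconstruction error is largest.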

Authors

Shakiba Kheradmand, Daniel Rebain, Gopal Sharma, Hossam Isack, Abhishek Kar, Andrea Tagliasacchi, Kwang Moo Yi

Venue

CVPR 2024