Abstract
Neural fields have emerged as a powerful and broadly applicable method for representing signals. While there has been much work on applying these representations to various types of signals, the portfolio of signal processing tools that has been built up around discrete signal representations has seen only limited application to the world of neural fields. In this paper, we address this problem by showing how a probabilistic re-interpretation of neural fields can enable their training and inference processes to become filter aware. The formulation we propose not only merges training and filtering in an efficient way, but also generalizes beyond the familiar Euclidean coordinate spaces to the more general set of smooth manifolds and convolutions induced by the actions of Lie groups. We demonstrate how this framework can enable novel filtering applications for neural fields on both Euclidean domains, such as images and audio, and non-Euclidean domains, such as rotations and rays. This is achieved with minimal modification to network architecture and training pipelines, and without an increase in computational complexity.
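To illustrate the core idea of a probabilistic view of filtering, the sketch below shows how a convolution of a continuous field with a filter can be estimated by Monte Carlo sampling: the filter is treated as a probability distribution over coordinate offsets, and the field is queried at randomly perturbed coordinates. This is a minimal conceptual sketch, not the paper's implementation; the names `field` and `filtered_query`, the analytic stand-in signal, and the Gaussian kernel choice are all assumptions for illustration.

```python
import numpy as np

def field(x):
    # Stand-in for a trained neural field; here a simple analytic signal
    # so the filtered result can be checked against a closed form.
    return x ** 2

def filtered_query(x, sigma, n_samples=100_000, rng=None):
    """Monte Carlo estimate of the field convolved with a Gaussian kernel.

    Rather than discretizing the signal and filtering on a grid, we average
    field evaluations at coordinates offset by samples drawn from the filter
    interpreted as a probability distribution. The same estimator applies on
    any domain where the filter's distribution can be sampled.
    """
    rng = np.random.default_rng(0) if rng is None else rng
    eps = rng.normal(0.0, sigma, size=n_samples)
    return field(x - eps).mean()

# For f(x) = x^2 and a Gaussian kernel of width sigma, the exact blurred
# value is x^2 + sigma^2, so the estimate can be verified directly.
approx = filtered_query(2.0, sigma=0.5)
exact = 2.0 ** 2 + 0.5 ** 2
```

Because the estimator only requires pointwise field evaluations at sampled coordinates, the same sampling can be folded into the training loss, which is what makes training and filtering mergeable without changing the network architecture.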
Authors
Daniel Rebain, Soroosh Yazdani, Kwang Moo Yi, Andrea Tagliasacchi
Venue
CVPR 2024