Fused-Planes:
Improving Planar Representations
for Learning Large Sets of 3D Scenes

* equal contribution
1 Criteo AI Lab, Paris, France
2 LASTIG, Université Gustave Eiffel, IGN-ENSG, F-94160 Saint-Mandé
3 Université Côte d’Azur, CNRS, I3S, France

Abstract

To learn large sets of scenes, Tri-Planes are commonly employed for their planar structure, which enables interoperability with image models and thus diverse 3D applications. However, this advantage comes at the cost of resource efficiency, as Tri-Planes are not the most computationally efficient option. In this paper, we introduce Fused-Planes, a new planar architecture that improves the resource efficiency of Tri-Planes in the framework of learning large sets of scenes, which we call "multi-scene inverse graphics" (MSIG). To learn a large set of scenes, our method divides it into two subsets and operates as follows: (i) we train the first subset of scenes jointly with a compression model, and (ii) we use that compression model to learn the remaining scenes. This compression model consists of a 3D-aware latent space, in which Fused-Planes are learned, enabling a reduced rendering resolution, and of structures shared across scenes that reduce scene representation complexity. Fused-Planes present competitive resource costs in multi-scene inverse graphics, while preserving the rendering quality of Tri-Planes and maintaining their widely favored planar structure. Our codebase is publicly available as open source.
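The two-subset procedure in steps (i) and (ii) above can be sketched as a training schedule. This is a schematic outline with hypothetical names (not the released codebase), only showing which components are trained on each scene subset:

```python
# Schematic two-stage schedule for learning a large scene set.
# Names ("micro_planes", "base_planes", etc.) are illustrative placeholders.
def two_stage_schedule(scenes, split_index):
    subset_1, subset_2 = scenes[:split_index], scenes[split_index:]
    return [
        # Stage (i): the first subset is trained jointly with the
        # compression model (encoder E_phi and decoder D_psi).
        {"scenes": subset_1,
         "trained": ["micro_planes", "weights", "base_planes",
                     "encoder", "decoder"]},
        # Stage (ii): the remaining scenes train only their own micro
        # planes and weights, while the shared base planes and the
        # decoder are fine-tuned.
        {"scenes": subset_2,
         "trained": ["micro_planes", "weights"],
         "fine_tuned": ["base_planes", "decoder"]},
    ]

stages = two_stage_schedule(scenes=list(range(100)), split_index=20)
print(len(stages[0]["scenes"]), len(stages[1]["scenes"]))  # 20 80
```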

Method



Method Scheme

Fused-Planes architecture and training framework. We learn a set of Fused-Planes \(\mathcal{T} = \{T_i\}\) in the latent space of an autoencoder, comprising an encoder \(E_\phi\) and a decoder \(D_\psi\). Hence, Fused-Planes render latent images \(\tilde{z}_{i,j}\) at a reduced resolution, enabling faster rendering and training. Each Fused-Plane \(T_i\) is split into a micro plane \(T_i^\mathrm{mic}\), which captures scene-specific information, and a macro plane \(T_i^\mathrm{mac}\), computed via a weighted summation over \(M\) shared base planes \(\mathcal{B}\) with weights \(W_i\). The shared planes \(\mathcal{B}\) capture structure common across scenes. To learn our set of Fused-Planes, we start by training a first subset of micro planes \(\mathcal{T}_1^\mathrm{mic}\), their corresponding weights \(W_i\), and the base planes \(\mathcal{B}\), jointly with the encoder \(E_\phi\) and decoder \(D_\psi\). Subsequently, we learn the remaining scenes by training the micro planes \(\mathcal{T}_2^\mathrm{mic}\) and their corresponding weights \(W_i\), while fine-tuning \(\mathcal{B}\) and \(D_\psi\).
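The macro/micro decomposition above can be illustrated with a minimal NumPy sketch. The shapes (\(M\) bases, \(C\) channels, resolution \(R\)) are assumptions for illustration, and the micro and macro planes are combined here by simple addition; the exact fusion and the tri-planar layout in the paper may differ:

```python
import numpy as np

# Hypothetical sizes: M = 4 shared base planes, C = 8 channels, 16x16 resolution.
M, C, R = 4, 8, 16
rng = np.random.default_rng(0)

bases = rng.standard_normal((M, C, R, R))   # shared base planes B
weights = rng.standard_normal(M)            # per-scene weights W_i
micro = rng.standard_normal((C, R, R))      # scene-specific micro plane T_i^mic

# Macro plane T_i^mac: weighted summation over the M shared base planes.
macro = np.einsum("m,mchw->chw", weights, bases)

# Fused plane for scene i (addition used here purely for illustration).
fused = micro + macro
print(fused.shape)  # (8, 16, 16)
```

Because the macro plane is a linear combination of a small shared basis, each additional scene only adds its micro plane and \(M\) weights, which is what reduces per-scene representation complexity.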

Results

Resource Costs




Overview: NeRF methods for MSIG. Comparison of resource costs and rendering quality across recent works when training a scene. Circle sizes represent NVS quality. Our method presents the lowest training time and memory footprint among all planar representations, while maintaining similar rendering quality. Fused-Planes-ULW presents the lowest memory requirement overall.



Comparison with classical Tri-Planes

ShapeNet Cars Scenes



Fused-Planes


Tri-Planes

Basel Faces Scenes

Fused-Planes
Tri-Planes

BibTeX


      @article{fused-planes,
        title={{Fused-Planes: Improving Planar Representations for Learning Large Sets of 3D Scenes}}, 
        author={Karim Kassab and Antoine Schnepf and Jean-Yves Franceschi and Laurent Caraffa and Flavian Vasile and Jeremie Mary and Andrew Comport and Valérie Gouet-Brunet},
        journal={arXiv preprint arXiv:2410.23742},
        year={2025}
      }