Research Paper

FSGS: Real-Time Few-shot View Synthesis using Gaussian Splatting

A few-shot 3DGS method that aims to produce real-time novel views from very sparse input views.

December 2023 · Few-Shot Reconstruction · arXiv:2312.00451

Detailed Reading

FSGS focuses on what happens when the capture set is too small. With only a few views, vanilla 3DGS can place Gaussians that explain the training cameras but fail badly elsewhere. The problem is not rendering speed; it is underconstrained geometry.

The paper adds few-shot regularization so Gaussians are less free to overfit. It borrows ideas from sparse-view neural rendering and adapts them to explicit primitives, trying to preserve plausible structure while keeping the final scene real-time renderable.

The algorithmic lesson is that sparse-view 3DGS needs priors. Densification alone cannot invent reliable geometry from three views. If a product claims few-photo splat creation, papers like FSGS explain the extra constraints needed behind the scenes.

FSGS studies the sparse-input setting, where the original 3DGS pipeline is underconstrained. With only a few views, photometric optimization can create floaters, overfit visible pixels, or hallucinate geometry that fails from novel viewpoints. The paper therefore adds priors and regularization aimed at making few-shot reconstruction usable.

The method relies on depth guidance, such as monocular depth estimates, to compensate for missing views. Instead of letting Gaussians grow purely from a sparse image loss, it constrains their geometry and appearance so they agree across viewpoints. The goal is not only high training-view quality, but stable extrapolation to unseen cameras.
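One common way to apply monocular depth guidance is a scale-invariant agreement loss, since a pretrained depth estimator is only accurate up to an unknown scale and shift. A minimal sketch (the function name and epsilon are illustrative, not the paper's exact formulation): penalize the negative Pearson correlation between the rendered depth map and the depth prior.

```python
import torch

def depth_correlation_loss(rendered_depth, prior_depth, eps=1e-8):
    """Scale-invariant depth agreement via negative Pearson correlation.

    Monocular depth priors are only correct up to scale and shift, so we
    compare normalized depth maps rather than raw values.
    """
    r = rendered_depth.flatten()
    p = prior_depth.flatten()
    # Normalize with population statistics so perfectly correlated
    # inputs give a correlation of exactly 1.
    r = (r - r.mean()) / (r.std(unbiased=False) + eps)
    p = (p - p.mean()) / (p.std(unbiased=False) + eps)
    corr = (r * p).mean()       # Pearson correlation in [-1, 1]
    return 1.0 - corr           # 0 when depths agree up to scale/shift
```

Because the loss is invariant to affine rescaling of either map, it can supervise geometry without forcing the splat's metric scale to match the estimator's.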

The algorithmic issue is that 3DGS is powerful enough to memorize. In dense capture, memorization is checked by many overlapping views; in few-shot capture, the optimizer has too much freedom. FSGS is important because it makes this failure explicit and proposes a way to regularize Gaussian placement and densification under sparse supervision.
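Regularizing densification under sparse supervision can be pictured as growing new Gaussians where coverage is thin rather than wherever the image loss pushes. A rough sketch, assuming each Gaussian is reduced to its 3D center and using illustrative parameter values (this simplifies away opacity, scale, and gradient-based criteria): insert midpoints between a point and its distant nearest neighbors.

```python
import torch

def proximity_densify(positions, k=3, dist_threshold=0.1):
    """Grow new points midway between each center and its k nearest
    neighbors when the gap exceeds a threshold, so new primitives fill
    under-covered regions instead of free-floating. Rough sketch only."""
    d = torch.cdist(positions, positions)           # (N, N) pairwise distances
    d.fill_diagonal_(float("inf"))                  # ignore self-distance
    dist, idx = d.topk(k, largest=False)            # k nearest neighbors each
    src = positions.unsqueeze(1).expand(-1, k, -1)  # (N, k, 3) source centers
    nbr = positions[idx]                            # (N, k, 3) neighbor centers
    mask = dist > dist_threshold                    # grow only across large gaps
    new_pts = 0.5 * (src[mask] + nbr[mask])         # midpoints bridge the gaps
    return torch.cat([positions, new_pts], dim=0)
```

Tying growth to neighbor geometry, instead of image loss alone, is one way to keep the optimizer from spending its freedom on view-specific floaters.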

For users, the paper is most relevant when capture is expensive or impossible to repeat. It will not magically reconstruct unseen backsides or hidden interiors, but it can reduce the collapse that happens when vanilla 3DGS is trained on too little evidence. Read it alongside generalizable and single-image 3DGS papers to see the spectrum between optimization and learned priors.

What The Paper Does

FSGS tackles the sparse-input problem. Vanilla 3DGS works best with many posed images, but real capture often gives only a few useful views.

The method adds constraints and priors so a Gaussian scene can be trained from limited observations while retaining interactive rendering.
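In code, "constraints and priors" usually means extra terms added to the standard 3DGS photometric objective (an L1 and D-SSIM blend). A minimal sketch of such a combined objective, with illustrative weights that are assumptions rather than the paper's values:

```python
def total_loss(l1_loss, dssim_loss, geo_reg, lam_ssim=0.2, lam_geo=0.05):
    """Photometric loss in the standard 3DGS blend, plus a weighted
    geometric regularizer (e.g., a depth-agreement term) that supplies
    the missing constraints in the few-shot setting."""
    photometric = (1 - lam_ssim) * l1_loss + lam_ssim * dssim_loss
    return photometric + lam_geo * geo_reg
```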

Core Ideas

  • Targets as few as three training views in challenging settings.
  • Regularizes Gaussian optimization to reduce sparse-view overfitting.
  • Keeps the real-time rendering benefit of Gaussian Splatting.

Why It Matters

  • Sparse input is one of the main practical constraints for casual capture and robotics.
  • It helped start an important branch of research around few-view Gaussian reconstruction.
  • It clarifies why explicit splats overfit when observations are too limited.

Read This If

  • You cannot assume dense photo coverage for every scene.
  • You are comparing 3DGS with few-shot NeRF methods.
  • You need to understand sparse-view regularization for Gaussian primitives.

Limitations And Caveats

  • Few-shot reconstruction can hallucinate or smear unseen regions.
  • Performance depends heavily on view placement and priors.
  • It does not eliminate the need for good camera poses or a useful initial point cloud.