A New Way to Capture the Real World
In July 2023, a team at Inria and the Max Planck Institute published a paper called "3D Gaussian Splatting for Real-Time Radiance Field Rendering." Within months, it changed how people think about capturing and rendering real-world environments. The paper introduced a method to represent 3D scenes as millions of tiny, semi-transparent ellipsoids called Gaussians, trained from regular photographs, and rendered in real time at over 100 frames per second.
By early 2026, Gaussian splatting has landed in production tools: Nuke 17 ships with native support, Houdini 21 includes a technical preview, OpenUSD 26.03 added a first-class schema, and V-Ray 7 can ray-trace splats. Framestore used 4D Gaussian splatting to deliver ~40 final-pixel shots in Superman.
This article breaks down what Gaussian splatting actually is, what it can and can't do, how it integrates into VFX tools today, and where it fits in a production pipeline from capture through compositing.
How Gaussian Splatting Works
Traditional 3D pipelines work with polygons. Photogrammetry works with meshes and textures. NeRFs work with neural networks. Gaussian splatting works with something different: a cloud of millions of small 3D ellipsoids, each one a Gaussian distribution in space.
Every Gaussian stores five properties:
| Property | What It Controls |
|---|---|
| Position (x, y, z) | Where it sits in 3D space |
| Scale (3 values) + Rotation (quaternion) | The size and orientation of the ellipsoid |
| Color (spherical harmonics) | View-dependent color: the appearance changes based on viewing angle, capturing specularity and sheen |
| Opacity | How transparent or solid the splat is |
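Concretely, these properties map to the attribute layout of a standard 3DGS .ply file. A minimal Python sketch of one record (field groupings follow the reference implementation's PLY convention; the 45 extra SH values assume the default degree-3 harmonics):

```python
from dataclasses import dataclass
import numpy as np

@dataclass
class GaussianSplat:
    """One Gaussian as stored in a standard 3DGS .ply file."""
    position: np.ndarray   # (3,)  x, y, z
    scale: np.ndarray      # (3,)  log-space size per ellipsoid axis
    rotation: np.ndarray   # (4,)  unit quaternion
    opacity: float         # stored as a logit; sigmoid() gives [0, 1]
    sh_dc: np.ndarray      # (3,)  f_dc_0..2 — base (view-independent) color
    sh_rest: np.ndarray    # (45,) f_rest_0..44 — degree-3 SH: 15 coeffs x 3 channels

g = GaussianSplat(
    position=np.zeros(3),
    scale=np.full(3, -4.0),
    rotation=np.array([1.0, 0.0, 0.0, 0.0]),
    opacity=2.0,
    sh_dc=np.array([0.5, 0.3, 0.1]),
    sh_rest=np.zeros(45),
)
# 3 + 3 + 4 + 1 + 3 + 45 = 59 floats per Gaussian in this layout —
# multiply by a few million Gaussians and you get typical .ply sizes
```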
The view-dependent color is key. Spherical harmonics (SH) encode how the color of each Gaussian shifts as you look at it from different angles. This is what makes splats look photorealistic rather than flat. Reflections on a car hood, the sheen on a wooden table, the way a window catches light differently as you walk past: SH captures all of that.
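To make the idea concrete, here is a sketch of a degree-1 SH color evaluation (production splats use degree 3; the constants are the standard real spherical-harmonic normalization factors, and the 0.5 offset follows the reference rasterizer):

```python
import numpy as np

SH_C0 = 0.28209479177387814   # Y_0^0 normalization
SH_C1 = 0.4886025119029199    # Y_1 normalization

def splat_color(sh_dc, sh_deg1, view_dir):
    """Degree-1 view-dependent color for one Gaussian.

    sh_dc:    (3,)   base color coefficients (f_dc)
    sh_deg1:  (3, 3) three degree-1 coefficients, each an RGB triple
    view_dir: vector from the camera toward the Gaussian
    """
    x, y, z = view_dir / np.linalg.norm(view_dir)
    c = SH_C0 * sh_dc
    c = c - SH_C1 * y * sh_deg1[0] + SH_C1 * z * sh_deg1[1] - SH_C1 * x * sh_deg1[2]
    return np.clip(c + 0.5, 0.0, 1.0)

# The same Gaussian returns different colors from different directions:
sh_dc = np.array([0.8, 0.2, 0.1])
sh1 = np.array([[0.4, 0.0, 0.0], [0.0, 0.0, 0.0], [0.0, 0.0, 0.0]])
front = splat_color(sh_dc, sh1, np.array([0.0, 0.0, 1.0]))
above = splat_color(sh_dc, sh1, np.array([0.0, 1.0, 0.0]))
# front is noticeably redder than above — that shift is the "sheen"
```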
Training. You start with photographs of a scene taken from multiple angles. COLMAP (a Structure from Motion tool) processes those photos to estimate camera positions and generates a sparse point cloud. Each point seeds an initial Gaussian. Then a differentiable rasterizer renders the current Gaussians to images, compares them against the original photos, and adjusts the Gaussian properties through gradient descent. Over thousands of iterations, Gaussians are cloned in under-reconstructed areas, split where a single Gaussian covers too much, and pruned where opacity drops near zero.
The output is a .ply file containing millions of Gaussians. Training takes roughly 20-30 minutes on an RTX 3060.
Rendering. This is the breakthrough. Instead of ray tracing (casting rays through the scene), the renderer projects each 3D Gaussian onto the 2D screen as a 2D ellipse, sorts them by depth, and composites them front-to-back. This maps directly to GPU rasterization hardware, which is why it hits 100+ FPS at 1080p on consumer GPUs. No neural network inference at render time. No ray marching. Just fast alpha compositing.
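The per-pixel compositing loop is simple enough to sketch in a few lines — a toy version that assumes the 2D projection, Gaussian falloff, and depth sort have already happened:

```python
import numpy as np

def composite_pixel(colors, alphas):
    """Front-to-back alpha compositing for one pixel.

    colors: (N, 3) colors of the splats covering this pixel, sorted near-to-far
    alphas: (N,)   per-splat opacity after the 2D Gaussian falloff is applied
    """
    out = np.zeros(3)
    transmittance = 1.0                   # light still passing through
    for c, a in zip(colors, alphas):
        out += transmittance * a * c      # this splat's contribution
        transmittance *= (1.0 - a)
        if transmittance < 1e-4:          # early termination, as in the paper
            break
    return out, 1.0 - transmittance       # color, accumulated alpha

# Two 50%-opaque splats: the front one dominates, total coverage is 75%
color, alpha = composite_pixel(np.array([[1.0, 0.0, 0.0], [0.0, 0.0, 1.0]]),
                               np.array([0.5, 0.5]))
# color == [0.5, 0, 0.25], alpha == 0.75
```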
Source: Kerbl et al., SIGGRAPH 2023
What Gaussian Splatting Can Do
Real-time novel view synthesis. Given photos from positions A, B, and C, the trained model renders a photorealistic view from a never-photographed position D. The original paper demonstrated 30+ FPS at 1080p; current implementations hit 100+ FPS on RTX 4060-class hardware.
Phone-based capture. 20 to 200 photographs from a modern smartphone (iPhone 13+, Pixel 7+, Galaxy S22+) with 70-80% overlap between frames is enough. Lock the exposure, use the main lens (not ultra-wide), and cover the subject from multiple heights and angles.
Reflective and translucent surfaces. Photogrammetry struggles with shiny or semi-transparent objects because it expects diffuse surfaces. Gaussian splatting handles these better because the spherical harmonics encode view-dependent appearance rather than assuming a single flat color per point.
Large environments. Outdoor scenes, architectural interiors, film sets. Drone or DSLR captures of thousands of images produce navigable, photorealistic environments. Rotor Studios uses 4,000-5,000 input images for automotive background environments in commercial production.
What Gaussian Splatting Cannot Do
This section matters more than the previous one.
Relighting. The lighting from the original photographs is baked into the spherical harmonic coefficients. You cannot add a key light, remove a shadow, change time of day, or match to a plate with different lighting. This is architectural, not a missing feature. The Gaussians store appearance, not material properties. There is no BRDF, no diffuse/specular separation, no PBR shading model.
Research on relightable Gaussian splatting exists (Relightable 3D Gaussian, NeurIPS 2023; LumiGauss, WACV 2025) but these require special training pipelines and are not available in any commercial DCC tool as of April 2026. Volinga Plugin Pro for Unreal Engine offers shadow casting from and onto splats via proxy geometry, but this is shadow interaction, not full material-based relighting.
Clean mesh extraction. Gaussians are unstructured ellipsoids in space, not a surface. You cannot hand a splat file to a modeler and say "retopo this." SuGaR (CVPR 2024) adds a regularization term to align Gaussians to surfaces, then uses Poisson reconstruction, but the resulting meshes have documented quality issues: bumpy surfaces, degraded detail at high-curvature areas, sensitivity to scene scale.
Traditional AOVs. No diffuse, specular, shadow, ambient occlusion, or material ID passes. There is no PBR shading model to decompose. Nuke 17's SplatRender outputs depth, deep data, and motion blur, but not the full AOV suite you get from an Arnold or Karma render.
Shadow casting and receiving. V-Ray's own documentation confirms: Gaussian splats are "treated as self-illuminated objects and can't receive shadows from other objects directly." The renderer bypasses classical ray tracing for splats.
Animation. Standard 3DGS captures a static scene. 4D Gaussian Splatting (CVPR 2024) handles dynamic scenes but requires synchronized multi-camera capture rigs (not a single handheld camera) and has documented temporal popping artifacts with large motions.
Transparent and glass objects. This is a fundamental limitation. The alpha blending model cannot separate contributions from transparent geometry and what lies behind it. Mirrors, glass panels, and water surfaces produce floating artifacts or holes.
Edge quality. Gaussians are soft ellipsoids. Object boundaries against a background produce semi-transparent halos rather than hard edges. For clean compositing requiring tight mattes, this is a real limitation.
3DGS vs Photogrammetry vs NeRF
| | Photogrammetry | NeRF | 3DGS |
|---|---|---|---|
| Output | Mesh + UV texture | Implicit neural network | Explicit Gaussians |
| Geometry | Clean, measurable, retopologizable | Not extractable | Poor extraction (research-grade) |
| Render speed | Standard rasterization/ray tracing | Slow (seconds per frame) | Real-time (100+ FPS) |
| Training time | N/A (algorithmic, not trained) | Hours to days | 20-30 minutes |
| Reflective surfaces | Fails (bakes highlights as texture) | Reasonable | Good (spherical harmonics) |
| Transparency | Fails | Poor | Poor (fundamental limit) |
| Relighting | Yes (mesh + materials = full PBR) | Research only | Research only |
| Pipeline integration | Any DCC, any renderer | Specialized viewers | Emerging (Houdini, Nuke, UE5, Blender) |
| File handoff | .obj, .fbx, .abc, .usd | Proprietary | .ply, .splat, .spz, .usd (26.03) |
Bottom line: Photogrammetry gives you geometry you can relight, retexture, and animate. 3DGS gives you appearance you can navigate in real time. They solve different problems. On many shows, the answer will be both: photogrammetry for hero assets that enter the lighting pipeline, Gaussian splatting for environment capture, reference, and background plates.
Sources: Teleport/Varjo comparison, Splatware, PyImageSearch NeRF vs 3DGS
Where It Fits in VFX Production
Superman (2025): The First Final-Pixel Use
Framestore delivered approximately 40 final-pixel shots in Superman using 4D Gaussian splatting for Kryptonian holographic recordings (Bradley Cooper and Angela Sarafyan as Superman's biological parents).
The capture pipeline used Infinite Realities' Deus Capture Stage with roughly 200 synchronized cameras in a spherical array. A single two-minute take in controlled lighting. GPU training ran for several days per capture, outputting PLY file sequences at 24fps.
In post, Framestore used Houdini + GSOPs for creative manipulation: applying noise fields, transforming, and slicing the splat volume into "corrupted data packet" segments. Nuke handled compositing with the Irrealix splat plugin for depth extraction directly from PLY sequences. The SIBR Viewer (custom-modified from the original 3DGS codebase) rendered the final frames with command-line camera control.
A production detail worth noting: the ragged ellipsoidal edges from slicing the Gaussian cloud accidentally gave an organic "transmission error" aesthetic that Framestore incorporated into the holographic look. The limitation became the feature.
Sources: Art of VFX, fxguide podcast, befores & afters
Beyond Film: Music Videos and Commercials
Gaussian splatting has landed in music videos and commercial work, where aesthetic flexibility matters more than the strict compositing requirements of photoreal film work.
A$AP Rocky's "Helicopter" used dynamic Gaussian splatting, described as one of the most ambitious real-world deployments of dynamic GS in a major music release. Perfume's music video (February 2026, Crescent Inc. Tokyo) was the first to use 4DViews' HOLOSYS+ splatting pipeline for volumetric capture. Tinie Tempah's "Closer" used the Argus Rig for 3DGS capture.
On the commercial side, The Electric Lens Co. produced the first documented full HDR Gaussian splatting commercial ("We Know Good Food") at 4K ACES HDR 60fps, compositing 10 splat scenes at 30 million points each through Nuke with the Irrealix plugin. That's significant because it proves an ACES/linear color pipeline is achievable with splats in a controlled production environment.
Practical VFX Use Cases Today
| Use Case | Ready? | Notes |
|---|---|---|
| Set/environment capture for reference | Yes | Phone + Scaniverse. Free. Lighting data embedded in the capture |
| Background plates for compositing | Yes | Nuke 17 SplatRender with depth for Z-compositing over plates |
| Previs / techvis | Yes | Real-time navigation of captured environments |
| Controlled stage capture (Superman-style) | Yes | Requires specialized multi-camera rig |
| LED volume backgrounds (virtual production) | Emerging | Volinga in UE5, ACES/OCIO supported. No confirmed tier-1 credits yet |
| Replacing CG environment builds | No | Can't relight, can't art-direct |
| Hero character work | No | Static, no animation, no AOVs |
Placing CG Objects Inside a Gaussian Splat
This is the question every VFX artist asks: can I capture a location as a Gaussian splat, drop a CG character or object into it, and have the splat environment light and reflect on the CG surface?
The short answer: not with any production tool today.
The problem is architectural. Standard Gaussian splatting bakes lighting into the spherical harmonic color of each Gaussian. The splat is an appearance model, not a scene with extractable light sources. From the renderer's perspective, there are no lights to bounce, no surfaces to reflect, no environment to sample.
Reflections. V-Ray's documentation confirms that splats are "self-illuminated objects" in the renderer. A CG chrome sphere placed inside a V-Ray scene with splats will technically see the splat geometry in reflections, but what it reflects is the baked appearance, not a physically-lit environment. GSOPs' Karma path bakes spherical harmonics to flat v@Cd color for ray tracing compatibility, explicitly losing view-dependent effects. As you orbit the camera, the reflections on your CG object won't update correctly.
Environment lighting. No production DCC allows a Gaussian splat to function as an environment light or HDRI dome for CG objects. Research systems exist: GaSLight (ICCV 2025) demonstrated using HDR Gaussians as emitters in Blender's Cycles for spatially-varying illumination of inserted CG objects. GBake (SIGGRAPH 2025) bakes cubemap reflection probes from inside a splat scene for Unity. Neither is a shipping production tool.
Shadows. CG objects casting shadows onto splats works through shadow catcher proxy geometry (supported in V-Ray, standard Nuke compositing workflow). Splats casting shadows onto CG objects is limited to directional lights in Unreal Engine via GSX Shadows. No other tool supports this.
Z-depth compositing works. Nuke 17's SplatRender outputs depth and deep data. You can correctly composite CG elements in front of or behind splat geometry using standard Z-compositing. Occlusion is solved. Lighting interaction is not.
The Practical Workaround
The production workflow today for placing CG inside a captured environment:
- Capture the location as a Gaussian splat (for the background plate)
- On the same set, photograph a chrome ball and grey ball for a traditional HDRI at the insertion point
- Light your CG object using the captured HDRI in your renderer (Karma, Arnold, V-Ray)
- Render the splat background in Nuke 17 with depth output
- Composite the CG render over the splat background using Z-depth for correct occlusion
- Use shadow catcher proxy geometry for CG shadows onto the splat ground plane
The splat replaces the background plate. The HDRI handles CG lighting and reflections. Chrome balls on film sets remain essential even in the Gaussian splatting era.

Cinematic Island Masterclass
15 hours building a cinematic environment in Solaris from scratch: terrain, scattering, ocean FX, Karma XPU lighting, and Nuke compositing. Gaussian splatting captures real-world environments fast, but for full creative control over lighting and materials, a CG environment pipeline like this is still the production standard.
View Course →

Houdini Integration
Houdini 21 Native (August 2025)
SideFX added initial Gaussian splatting support in Houdini 21:
- Bake GSplat SOP: Converts standard .ply attributes (f_dc, rot, opacity, scale, f_rest) to Houdini's internal format (Cd, scale, orient, GS_Alpha, GS_SPH coefficients)
- Solaris import: Bring baked splats into a Solaris stage via SOP Import LOP
- Karma XPU rendering: The only supported render delegate for reconstructed splats. Karma CPU shows a point cloud, not the reconstructed image
- Mixed scenes: Splats and traditional geometry can coexist in the same Solaris stage
SideFX tags this as "technical preview, not production-ready." It's functional for experimentation and R&D, not for shipping final frames through the native path alone.
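The conversion the Bake GSplat SOP performs amounts to a rename plus a decode of the stored encodings. A hypothetical Python sketch of that mapping (the attribute names come from the list above; the log-scale and logit-opacity decodes follow the reference 3DGS format, not any published SideFX internals):

```python
import numpy as np

# Standard 3DGS .ply field -> Houdini 21 attribute, per the Bake GSplat SOP
PLY_TO_HOUDINI = {
    "f_dc":    "Cd",        # base SH color -> point color
    "f_rest":  "GS_SPH",    # higher-order SH coefficients
    "opacity": "GS_Alpha",
    "scale":   "scale",
    "rot":     "orient",
}

def decode_ply_attrs(ply_attrs):
    """Rename raw PLY arrays to H21 conventions, decoding stored encodings.

    Assumes reference 3DGS storage: log-space scales, logit opacity.
    """
    out = {}
    for src, dst in PLY_TO_HOUDINI.items():
        v = np.asarray(ply_attrs[src], dtype=float)
        if src == "scale":
            v = np.exp(v)                    # log-scale -> world-space size
        elif src == "opacity":
            v = 1.0 / (1.0 + np.exp(-v))     # logit -> [0, 1]
        out[dst] = v
    return out
```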
Source: SideFX Karma H21 What's New, Entagma tutorial
GSOPs (v2.9, January 2026)
GSOPs by CG Nomads (David Rhodes and Ruben Diaz) is the most complete Gaussian splatting toolset for Houdini. Licensed under AGPL-3.0 (free tier) with a paid Early Access tier for Karma/Solaris support.
Free tier (SOPs):
- Import .ply, .splat, .spz files (auto-detects 3DGS vs 2DGS)
- Crop, transform, align, sharpen individual Gaussians
- DBSCAN clustering for noise/floater removal
- Approximate IBL relighting using HDRIs (pre-baked, not ray-traced)
- Real-time viewport rendering with depth compositing against traditional geometry
- Coarse mesh extraction
- v2.9 added an attribute bridge to convert between GSOPs and H21 native conventions
Early Access tier (LOPs):
- gaussian_splats_import LOP injects splats into the Solaris stage via proxy spheres with a MaterialX shader
- Karma rendering with proper shadows and reflections against CG geometry
- This is the production-grade path for Houdini users who need splats integrated with a full lighting setup
Important distinction: The free-tier IBL relighting is an approximation that re-bakes appearance. The Early Access Karma/Solaris path uses proxy geometry for actual ray-traced shadow and reflection interaction. These are different levels of integration.
Sources: GSOPs GitHub, Digital Production, Radiance Fields
OpenUSD 26.03 (March 2026)
The Alliance for OpenUSD added a native Gaussian splatting schema: UsdVolParticleField3DGaussianSplat. It inherits from Gprim, which means splats are now composable USD primitives: layerable, referenceable, and variant-capable, just like mesh or volume prims.
The release includes a reference Hydra delegate renderer (hdParticleField) for conformance testing and a Python script for PLY-to-USD conversion. Houdini's timeline for integrating OpenUSD 26.03 into Solaris has not been publicly confirmed yet.
Sources: AOUSD blog, CG Channel, OpenUSD schema docs
Nuke 17: Compositing with Splats
Nuke 17.0 (released February 2026) is the first major compositing tool with native Gaussian splatting support. Available in all editions (standard Nuke, NukeX, Nuke Studio).
What you can do:
| Node | Function |
|---|---|
| GeoImport / GeoReference | Import .ply and .splat files into Nuke's 3D system |
| SplatRender | Render splats to 2D with depth output, deep output, and motion blur |
| GeoGrade | Color correct splats non-destructively, with field-based masking |
| GeoDeletePoints | Remove unwanted splats (floaters, noise) |
| Field nodes | FieldCrop, FieldRamp, FieldImage, SDF shapes for masking regions |
Compositing over plates: The SplatRender output drops directly into Nuke's 2D stack. Depth output enables Z-compositing against CG elements or plate geometry. Framestore's Superman pipeline used this approach (with a third-party plugin pre-Nuke 17) to extract depth from PLY sequences and composite holographic elements over live-action.
What you cannot do in Nuke 17: Relight splats. Generate normal passes. Split into diffuse/specular. Get Cryptomatte. The splats are a rendered appearance, not a shading model.
Sources: Foundry release notes, Digital Production, Radiance Fields

Deep Compositing in Nuke: Full CG Forest
Shahid Gire's deep compositing course covers the Nuke pipeline that handles exactly this kind of integration: CG elements composited over environment plates using deep data, Z-depth, and holdout mattes. The same techniques apply whether your background is a traditional plate or a Gaussian splat render.
View Course →

Blender
The Blender community has been one of the most active adopters of Gaussian splatting.
3DGS Render by KIRI Engine (v4.1, free, Apache 2.0) is the primary addon. It imports .ply files and converts them to mesh geometry with custom OpenGL/GLSL shaders executed through EEVEE. This is not EEVEE's standard material pipeline; the addon runs its own shaders for the Gaussian appearance.
What you can do:
- Import, view, and render Gaussian splats in real time
- Edit splats using Blender's standard tools (select, delete, transform)
- Paint directly on splats with image textures or solid colors (v3.0+, March 2025)
- Convert mesh objects (.OBJ) to Gaussian splat .ply format
- Combine splats with regular Blender mesh objects in the same scene via "Combine with Native Render" mode
- Apply geometry node modifiers, crop boxes, camera culling
What you cannot do:
- Render through Cycles (path-traced splat rendering is research-grade only)
- Get CG objects to receive lighting or reflections from the splat environment (the combine mode is a 2D compositor merge, not true 3D integration)
- Use orthographic viewport mode (splats require perspective projection)
The "Combine with Native Render" feature deserves a note: it renders splats and mesh objects separately, then composites them together. Your CG character and the splat background will have correct depth ordering (occlusion works), but no light transport between them. No GI bounce, no reflections, no shadow interaction. It's compositing, not rendering.
SkySplat (v0.4, presented at BlenderCon 2025) targets a different workflow: drone footage to splat, entirely inside Blender. Video processing, COLMAP reconstruction, and splat training from a single interface.
Render Engine Support (April 2026)
| Renderer | Gaussian Splat Support | Key Details |
|---|---|---|
| Karma XPU | Technical preview | H21 native. Karma CPU shows point cloud only. "Not production-ready" per SideFX |
| Karma (via GSOPs) | Yes (paid Early Access) | Proxy-sphere + MaterialX. Shadows and reflections against CG geometry |
| V-Ray 7 | Yes (native, Jan 2025) | First commercial renderer with support. Ray-traced with shadows + reflections. Splat clipping added in v7.2 (Dec 2025) |
| Arnold | Not supported | No announcement or documentation found as of April 2026 |
| RenderMan 27 | Not supported | Potential future path via OpenUSD 26.03 schema, but not currently implemented |
| Redshift | Third-party only | TACTYC shader plugin available on Gumroad. Not native |
| EEVEE (Blender) | Yes | Via KIRI Engine 3DGS Render addon (v4.1, free). Cycles not supported |
| Nuke SplatRender | Yes | Depth, deep, motion blur. No AOV suite |
The gap to watch: Arnold has no support. For studios running Arnold-heavy pipelines, Gaussian splatting currently requires V-Ray or the GSOPs Karma path as an alternative.
Getting Started: Capture and Tools
Capture Devices
A phone works. Seriously. The input is photographs, not specialized scan data. 20 to 200 overlapping images of a subject from multiple angles. Lock exposure (auto-exposure confuses the SfM step). Use the main lens. Cover the subject from low, mid, and high angles.
For larger environments (rooms, exteriors, film sets), 300-1000+ images from a DSLR or drone produce better results. Rotor Studios captures 4,000-5,000 images for automotive environments.
Software Pipeline
| Step | Tool | Cost |
|---|---|---|
| Capture (mobile) | Scaniverse (Niantic) | Free, on-device processing, exports .ply and .spz |
| Capture (mobile) | Polycam | Freemium, cloud-processed |
| Training (desktop) | Postshot (Jawset) | Free tier / €17-39/mo. GUI app, strong on architectural scenes |
| Training (open source) | nerfstudio + splatfacto | Free. Needs NVIDIA GPU (RTX 3060 8GB minimum). ~25 min training |
| SfM (prerequisite) | COLMAP | Free. Runs automatically inside nerfstudio |
| Cleanup (browser) | SuperSplat (PlayCanvas) | Free, MIT. Select, delete, mask, compress. No install needed |
| Houdini | GSOPs + H21 Bake GSplat SOP | GSOPs free tier (SOPs) / paid (Solaris). H21 native is free |
| Nuke | Nuke 17.0 native | Included in all editions |
| Blender | KIRI Engine 3DGS Render (v4.1) | Free. Import, edit, paint, render via EEVEE |
A Practical Workflow
- Capture a scene with Scaniverse on your phone (free, on-device)
- Clean the splat in SuperSplat: remove floaters, crop boundaries, reorient
- Import the .ply into Houdini via GSOPs or Bake GSplat SOP
- In Solaris, combine with CG geometry (via GSOPs Early Access LOP for shadow/reflection interaction)
- Render through Karma XPU (native) or V-Ray 7 (production-grade shadows)
- Or: import directly into Nuke 17 via GeoImport, render with SplatRender, composite over plates
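Step 2, the floater cleanup, can also be scripted rather than done interactively. A minimal sketch that drops near-invisible and out-of-bounds Gaussians (assumes opacity is stored as a logit, per the reference 3DGS format; SuperSplat's selection tools do the same job visually):

```python
import numpy as np

def prune_splats(positions, opacities_logit, bbox_min, bbox_max, min_alpha=0.05):
    """Return a boolean mask keeping plausible Gaussians.

    positions:       (N, 3) splat centers
    opacities_logit: (N,)   raw stored opacity (sigmoid gives real alpha)
    bbox_min/max:    (3,)   crop box around the subject
    """
    alpha = 1.0 / (1.0 + np.exp(-opacities_logit))
    visible = alpha >= min_alpha                        # drop near-invisible splats
    inside = np.all((positions >= bbox_min) & (positions <= bbox_max), axis=1)
    return visible & inside

pos = np.array([[0.0, 0.0, 0.0], [5.0, 5.0, 5.0], [0.1, 0.1, 0.1]])
keep = prune_splats(pos, np.array([2.0, 2.0, -8.0]),
                    bbox_min=np.array([-1.0, -1.0, -1.0]),
                    bbox_max=np.array([1.0, 1.0, 1.0]))
# keep == [True, False, False]: one floater outside the crop box,
# one splat so transparent it only adds noise
```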

Foggy Barn: Cinematic UE5 Environment Breakdown
Julien Rollin's Unreal Engine 5 environment breakdown. The article discusses NanoGS and Volinga for loading splats into UE5. This course shows the traditional UE5 environment pipeline that splats are beginning to augment: building, lighting, and rendering cinematic environments in the engine.
View Course →

What This Means for VFX
Gaussian splatting is not a replacement for CG pipelines. You can't art-direct baked lighting. You can't retopologize a splat cloud. You can't generate render passes for a compositor to grade per-light.
What it is: a fast, photorealistic capture format that fills gaps traditional pipelines handle slowly. Set reference. Background environments. On-set data that preserves real-world lighting. Pre-production visualization where speed matters more than control.
The tools are moving fast. Eighteen months ago, splats lived in research code and web viewers. Now they're in Nuke, Houdini, V-Ray, USD, and Unreal. Arnold and RenderMan don't support them yet, but USD 26.03 gives both a schema to build on. The next year will likely close that gap.
For artists working in lighting and compositing: this is a new data type entering your pipeline. Not replacing anything you do today, but showing up alongside it. On-set teams will capture splats of locations. Environment departments will use them as reference or background plates. Compositors will receive .ply files and need to know what SplatRender does and doesn't give them.
The practical move is to grab Scaniverse, capture something around you, clean it in SuperSplat, and import it into your DCC. Ten minutes of hands-on time teaches more than any article.
All Featured Products
| Asset | Creator | Price |
|---|---|---|
| Atmospheric Showdown - Full shot in Houdini 21 Solaris, Karma XPU and Copernicus. | lagfx | Free |
| Foggy Barn - Cinematic UE5 Environment Breakdown | Julien Rollin | $19 |
| Desolate Tropical Island: A Cinematic Environments Masterclass | Arvid | $399 |
| Deep Compositing in Nuke: Full CG Forest | Shahid Gire | $160 |
About the author
Arvid Schneider
Lighting Supervisor, Founder of CG Lounge
Emmy-winning VFX artist with 17 years in the industry. Currently a Lighting Supervisor in Vancouver. Built CG Lounge because marketplace fees shouldn't eat into what creators earn.
Go further
Courses & assets by Arvid Schneider