Let's Get This Out of the Way
I've watched the "this will kill VFX" cycle play out a few times now. Real-time engines were going to kill it. Virtual production was going to kill it. Now AI is going to kill it.
But this time, something is actually different. The technology is moving faster than previous shifts, and the impact is already visible. Entry-level positions are shrinking. Entire departments are getting leaner. Studios are doing more with fewer people and calling it "efficiency."
I'm not going to pretend nothing is changing. That would be dishonest. What I will do is break down exactly where AI is in production VFX right now, what it's actually replacing, what it can't touch, and what you should do about it.
AI Is Already in Your Pipeline
Here's what most people miss: AI has been in production VFX for years. You're already using it. You just don't think of it as "AI" because it's packaged inside the tools you already know.
GPU denoising. Every time you hit render in Arnold, RenderMan, or Redshift and use OptiX or Intel OIDN to denoise, that's a neural network. It's been trained on millions of noisy/clean image pairs. It's why you can get a usable lookdev render in 30 seconds instead of 30 minutes. Nobody panicked about this one. Nobody said "the denoiser is replacing lighters." It just made lighting artists faster.
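If you want to poke at this outside a renderer, OIDN ships a small command-line denoiser you can drive from a few lines of Python. The sketch below is a rough illustration, not a pipeline tool: the paths are made up, and the exact flag names vary between OIDN builds, so check `oidnDenoise --help` on your install first.

```python
# A minimal sketch of batch-denoising renders with Intel OIDN's command-line
# example app (oidnDenoise). Flag names and paths are assumptions -- confirm
# them against `oidnDenoise --help` for your build.
import subprocess
from pathlib import Path

RENDER_DIR = Path("/shots/sq010/lighting/renders")   # hypothetical path
OIDN_BIN = "oidnDenoise"                              # assumes it's on PATH

def denoise_frame(beauty: Path, albedo: Path, normal: Path, out: Path) -> None:
    """Feed the noisy beauty plus auxiliary AOVs to the denoiser."""
    subprocess.run(
        [OIDN_BIN,
         "--hdr", str(beauty),     # noisy beauty pass
         "--alb", str(albedo),     # albedo AOV helps preserve texture detail
         "--nrm", str(normal),     # normal AOV helps preserve edges
         "-o", str(out)],
        check=True,
    )

if __name__ == "__main__":
    for beauty in sorted(RENDER_DIR.glob("beauty.*.pfm")):
        frame = beauty.suffixes[0].lstrip(".")
        denoise_frame(
            beauty,
            RENDER_DIR / f"albedo.{frame}.pfm",
            RENDER_DIR / f"normal.{frame}.pfm",
            RENDER_DIR / f"denoised.{frame}.pfm",
        )
```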
Smart rotoscoping. Nuke's CopyCat, Runway, and even DaVinci Resolve's Magic Mask all use machine learning to separate foreground from background. A roto that took a junior artist 4 hours now takes 45 minutes of cleanup. The roto artist didn't disappear overnight, but roto and paint departments are measurably smaller than they were five years ago. That's a fact worth sitting with.
Texture synthesis. Adobe's Substance 3D Sampler uses ML to extract PBR map sets from a single photo: base color, roughness, normal, displacement, metallic. Their Firefly integration adds text-to-texture generation in Substance 3D Sampler and Stager. Texture artists still art-direct everything. The AI handles the grunt work of initial map extraction and iteration.
Motion capture cleanup. ML-based solvers clean up mocap data faster than manual curve editing. The animators still direct the performance, hit the beats, add the character. The AI handles the noise.
Simulation prediction. Researchers at studios and universities have demonstrated ML models that predict fluid, cloth, and particle behavior at interactive speeds. These aren't shipping in production tools yet, but they point toward a future where FX artists get instant feedback instead of waiting hours for a sim to cook. The direction is clear: more iteration, faster art direction.
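To make the idea concrete: a learned simulator is just a model trained on (state, next state) pairs pulled from a real solver, which you then roll out instead of re-simulating. Here's a deliberately tiny illustration on a 1D damped spring using plain NumPy and least squares; the research systems use deep graph networks on full fluid and cloth states, but the principle is the same.

```python
# Toy illustration (not a production tool) of the "learned simulator" idea:
# fit a model that maps state(t) -> state(t+dt) from simulated examples,
# then roll the model out instead of re-running the expensive solver.
import numpy as np

dt, k, c = 0.05, 4.0, 0.3   # timestep, spring stiffness, damping

def ground_truth_step(state):
    """One step of the 'expensive' reference sim (semi-implicit Euler)."""
    x, v = state
    v = v + dt * (-k * x - c * v)
    x = x + dt * v
    return np.array([x, v])

# Generate training pairs (state, next_state) from random starting conditions.
rng = np.random.default_rng(0)
states = rng.uniform(-1.0, 1.0, size=(2000, 2))
targets = np.array([ground_truth_step(s) for s in states])

# Fit a linear surrogate A such that next_state ~= state @ A.
A, *_ = np.linalg.lstsq(states, targets, rcond=None)

# Roll out the learned model and compare against the reference sim.
s_true = s_pred = np.array([1.0, 0.0])
for step in range(100):
    s_true = ground_truth_step(s_true)
    s_pred = s_pred @ A
print("reference:", s_true, " learned:", s_pred)
```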
Set reconstruction. Gaussian splatting and NeRF capture real-world locations as 3D scenes from camera footage. Nuke 17 added native Gaussian Splat support, making these usable in comp pipelines. This gives environment artists a photogrammetric base to work from instead of starting from reference photos and guesswork.
Face and body ML. De-aging and face replacement started as pure CG: Rogue One's Grand Moff Tarkin in 2016 was a full digital head replacement, no machine learning involved. But ML changed that fast. After The Mandalorian Season 2 used a traditional CG head replacement for Luke Skywalker and drew criticism, a YouTuber named Shamook posted a deepfake version that looked noticeably better. ILM hired him as a Senior Facial Capture Artist, and by The Book of Boba Fett they were blending CG with ML-based face swaps for the final result. Now ML-assisted facial work is standard practice at the major houses. Artists still refine, direct, and approve every frame.
Note
Every AI tool in production VFX has the same pattern: the machine handles the repetitive, time-intensive computation. The artist handles taste, direction, and quality control. This split isn't new. It's what computers have always done for us. But this time, the "repetitive" bucket is growing faster than before, and that means fewer people needed for the same output.
What AI Is NOT Doing in Production
Let's be equally clear about what's not happening on real film and TV productions:
Nobody is typing prompts to generate final shots. No VFX supervisor at ILM, Weta, or Framestore is using Midjourney to create hero shots. The output doesn't match plates. It doesn't follow art direction. It doesn't composite into live action. It doesn't hold up at 4K on a 60-foot screen. Some studios use generative AI for early concept exploration and mood boards, but that's it. The final pixels are still crafted by artists.
There's also a legal wall. The U.S. Copyright Office confirmed in January 2025 that pure AI-generated outputs from prompts alone are not copyrightable. The Supreme Court declined to revisit this in March 2026. Studios need clear copyright ownership over their VFX output, both for their own IP and for client deliverables. Using AI-generated content in final shots creates legal uncertainty that no studio wants to deal with on a $200M production. Until the legal landscape settles, generative AI stays in the concept phase at best.
Nobody is generating hero CG assets from text prompts. The topology, UV layout, and material setup required for production characters and vehicles are still well beyond what generative 3D tools produce. A hero character in a Marvel film has thousands of hours of human craft in the sculpt, retopology, rigging, grooming, texturing, and lookdev. AI-generated 3D meshes are improving, and some studios experiment with them for early concepting and distant background elements, but hero work is still handcrafted.
Union protections exist, but they're limited. IATSE negotiated AI protections into their 2024 AMPTP contract: consent requirements for scanning, severance for AI displacement, and restrictions on forced prompt entry. But here's the uncomfortable truth: most VFX workers aren't covered by these protections. VFX remains one of the largest non-union sectors in motion picture production. If you're a VFX artist at a non-union facility, those IATSE protections don't apply to you. VES has been actively discussing AI's impact, but as an honorary society, not a union, it can't negotiate on your behalf.
Pros
- AI denoising cuts render times by 80-90%
- Smart roto saves days of manual work per shot
- Instant sim previews enable better art direction
- Texture extraction from photos accelerates lookdev
- Motion capture cleanup is faster and cleaner
- Set reconstruction gives real-world data to work from
- More iteration cycles = better final shots
Cons
- Entry-level roto, paint, and matchmove roles are shrinking
- Concept art is shifting toward AI-assisted workflows with fewer billed hours
- Studios expect faster turnarounds without adjusting budgets
- An estimated 118,000+ entertainment positions could be displaced by AI by 2026
- Most VFX workers have no union protections against AI displacement
- The pipeline that trains junior artists into senior ones is getting thinner
- Artists need to learn new tools on top of existing pipelines
Let's Talk About the Jobs
I'm not going to insult your intelligence by pretending AI hasn't cost anyone their job. It has. And the scale is bigger than a lot of people in leadership positions want to admit.
The numbers, as best we can measure them:
- LA County lost an estimated 41,000 film and TV jobs over three years, a quarter of its entertainment workforce.
- MPC/Technicolor laid off thousands of employees in major restructuring waves. Their UK workforce dropped from over 1,100 to under 450.
- Unity/Weta Digital cut 1,800 jobs (25% of their workforce), including over 250 from Weta Digital.
- An Animation Guild-commissioned study estimated ~118,500 entertainment positions could be consolidated, replaced, or eliminated by AI by 2026.
- 75% of entertainment industry leaders surveyed said AI supported the elimination, reduction, or consolidation of jobs at their companies.
Not all of this is AI. The 2023 WGA/SAG strikes created a production drought. Streaming platforms recalibrated their spending. The industry had over-expanded, and it corrected. But AI is accelerating the contraction in specific areas: roto, paint, concept art, matchmove, and junior generalist roles.
The honest assessment: AI is not replacing senior artists who make creative decisions. It is replacing tasks, and when enough tasks get replaced, positions get consolidated. A team of 12 becomes a team of 8. The 8 are busier and more productive. The 4 are job hunting.
If you're a senior artist with strong creative judgment and cross-department skills, you're in a strong position. If you're early in your career and your entire skill set is a single repetitive task, the ground is shifting under you. That's not fearmongering. That's what the data says.
The Real Threat Isn't Just AI
I'm going to say something that might be unpopular: AI alone isn't the existential threat. The bigger danger is the combination of AI with management that was already looking for reasons to cut costs.
The artists who are struggling right now aren't all struggling because of AI. Many are struggling because:
- They stopped learning new tools 5 years ago
- They only know one DCC and refuse to touch another
- They don't understand the pipeline beyond their department
- They never learned to code or script, even at a basic level
- They treat their skills as static instead of evolving
This has always been true. When I started, artists who only knew Shake and refused to learn Nuke got left behind. Artists who only knew mental ray and refused to learn Arnold got left behind. The tool changed. The pattern didn't.
But I won't pretend that adaptability alone is enough. Sometimes the job just isn't there anymore, no matter how skilled you are. That's the part this industry needs to reckon with honestly.
What You Should Actually Learn
Forget the hype. Here's what's practically useful right now if you want to stay ahead:
1. Understand Your Renderer's ML Features
OptiX denoising, adaptive sampling, AI-driven light sampling. These aren't optional features anymore. They're how you get work done at modern production quality without burning server farms. Know what they do. Know when to use them. Know when they fail (and they do fail on certain materials and edge cases).
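One practical habit: every so often, diff a denoised frame against a high-sample reference render so you learn where your denoiser cheats. Here's a minimal sketch using OpenImageIO's Python bindings; the filenames are placeholders and API details may shift between OIIO versions.

```python
# Rough denoiser sanity check: compare the denoised frame against a
# high-sample "ground truth" render and report where they diverge most,
# which is usually fine detail the denoiser smoothed away.
import OpenImageIO as oiio

reference = oiio.ImageBuf("beauty_4096spp.exr")    # hypothetical filenames
denoised = oiio.ImageBuf("beauty_64spp_denoised.exr")

diff = oiio.ImageBufAlgo.absdiff(reference, denoised)
stats = oiio.ImageBufAlgo.computePixelStats(diff)

for channel, (avg, peak) in enumerate(zip(stats.avg, stats.max)):
    print(f"channel {channel}: mean error {avg:.5f}, worst pixel {peak:.5f}")

# Write the error image so you can see *where* the denoiser struggles
# (hair, fine bump detail, and low-light grain are the usual suspects).
diff.write("denoise_error.exr")
```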
2. Learn Nuke's ML Tools
Nuke has been integrating machine learning for a while now. CopyCat lets you train custom inference models on your own data. For compositors, that's a seriously powerful addition to the toolkit. You're not replacing yourself. You're building tools that automate the parts of the job you'd rather skip.
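As a sketch of what that looks like in practice, here's roughly how you might reuse a CopyCat-trained model across a batch of plates with the Inference node, run from Nuke's Script Editor. The paths are invented and the knob names (especially "modelFile") are assumptions; confirm them with node.knobs() on your Nuke version before relying on this.

```python
# Minimal sketch: apply a CopyCat-trained .cat model to a list of plates
# using the Inference node, then render the results out through Write nodes.
import nuke

PLATES = [
    "/shots/sq010/plates/sh0010_plate.%04d.exr",   # hypothetical paths
    "/shots/sq010/plates/sh0020_plate.%04d.exr",
]
MODEL = "/jobs/show/ml/despill_v003.cat"           # .cat file exported by CopyCat

for plate in PLATES:
    read = nuke.nodes.Read(file=plate, first=1001, last=1100)
    infer = nuke.nodes.Inference()
    infer["modelFile"].setValue(MODEL)             # assumed knob name
    infer.setInput(0, read)

    write = nuke.nodes.Write(file=plate.replace("plates", "ml_despill"))
    write.setInput(0, infer)
    nuke.execute(write, 1001, 1100)
```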

Deep Compositing in Nuke
You need to understand deep compositing before you start layering ML tools on top. This course covers the full CG forest pipeline in Nuke, which is exactly the kind of complex comp work where AI assists but doesn't replace the artist.
3. Get Comfortable with Python
Every major DCC supports Python. Every pipeline tool is written in Python. Every ML tool has a Python API. You don't need to become a machine learning engineer. You need to be able to read a script, modify it, and plug tools together. That's the baseline now.
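This is the level I mean. A script like the one below, which checks a render sequence for missing frames before you publish, is pure standard-library Python, and being able to write or modify something like it is the baseline.

```python
# Small pipeline glue script: scan a render folder for a frame sequence and
# report any missing frames before you publish. No DCC required.
import re
import sys
from pathlib import Path

def missing_frames(folder: str, pattern: str = r"\.(\d{4})\.exr$") -> list[int]:
    """Return frame numbers missing from an otherwise contiguous sequence."""
    frames = sorted(
        int(m.group(1))
        for f in Path(folder).iterdir()
        if (m := re.search(pattern, f.name))
    )
    if not frames:
        return []
    have = set(frames)
    return [f for f in range(frames[0], frames[-1] + 1) if f not in have]

if __name__ == "__main__":
    gaps = missing_frames(sys.argv[1])
    print("missing frames:", gaps if gaps else "none")
```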
4. Learn Procedural Thinking
Houdini artists have been thinking procedurally for decades. That mental model (building systems instead of pushing vertices) is exactly what AI integration requires. Understanding how to build a procedural setup that an ML model can enhance is more valuable than memorizing hotkeys in any single tool.
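Here's a small sketch of that mindset in Houdini's Python API: a parameterized scatter setup you rebuild by re-running a function, instead of hand-placing anything. Node type and parameter names can differ between Houdini versions (some carry suffixes like scatter::2.0), so treat the names below as assumptions.

```python
# "Build the system, not the scene": a parameterized grid + noise + scatter
# rig created entirely through Houdini's Python API (hou).
import hou

def build_scatter_rig(name="debris_scatter", size=20.0, points=500, seed=1):
    """Create a grid, displace it with noise, and scatter points across it."""
    geo = hou.node("/obj").createNode("geo", name)

    grid = geo.createNode("grid")
    grid.parmTuple("size").set((size, size))

    # Displace the grid in VEX so the terrain is driven by parameters,
    # not by hand-sculpting.
    wrangle = geo.createNode("attribwrangle")
    wrangle.parm("snippet").set("@P.y += noise(@P * 0.3 + %d) * 2.0;" % seed)
    wrangle.setInput(0, grid)

    scatter = geo.createNode("scatter")
    scatter.parm("npts").set(points)   # parm name may differ by scatter version
    scatter.setInput(0, wrangle)

    scatter.setDisplayFlag(True)
    geo.layoutChildren()
    return geo

# Re-running with new parameters rebuilds the whole setup -- that's the point.
build_scatter_rig(points=2000, seed=7)
```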

Procedural Environments in Houdini
Procedural workflows are where AI-assisted VFX is heading. This course teaches the mindset: build systems, not static scenes. That thinking transfers directly to integrating ML tools into your pipeline.
5. Study the Full Pipeline
Artists who understand multiple departments are way more valuable when AI tools start connecting those departments. If you're a lighter who understands compositing, you can art-direct your renders for the comp. If you're a texture artist who understands shading, you build assets that work in any lighting condition. Cross-department knowledge only gets more valuable as tools evolve.

Introduction to LookDev for Production
LookDev bridges modeling, texturing, lighting, and rendering. Understanding it makes you better in any department, and that cross-department knowledge only gets more valuable as tools change.
The Studios' Perspective
I've talked to supervisors and leads at multiple studios over the past year. Here's the consistent message:
They want artists who can use AI tools. The budget doesn't always change when you add ML to the pipeline. Sometimes the quality ceiling rises. Sometimes the team shrinks. Usually both. Studios want better work faster, and if that means a team of 8 doing what 12 did before, that's what happens. I'm not going to pretend studios are hiring the same number of people and just "raising expectations." Some are. Some aren't.
The studios experimenting with AI the most aggressively are doing it in R&D departments staffed by senior artists and TDs who understand both the creative and technical sides. They're building proprietary tools that fit their existing pipeline. The strategic play is clear: augment the best artists, automate the rest.
Pro Tip
If you're in a job interview and they ask about AI, the right answer isn't fear or dismissal. It's: "I've been experimenting with ML-assisted denoising and roto tools. Here's how I integrated them into my last project." That's the answer that gets you hired.
The Art Can't Be Automated (Yet)
Here's what I keep coming back to: the hardest part of VFX was never the computation. It was the taste.
Knowing that a light needs to be 2% warmer. Knowing that the creature's skin needs one more pass of subsurface breakup. Knowing that the comp needs to breathe more in the shadows. Knowing when a shot is done.
None of that is automatable today. It requires a human who has spent years developing their eye, who understands story, who can read a director's face in dailies and adjust before they even give a note.
AI makes the technical side faster. The creative side still needs you. But the window between "AI can't do this" and "AI can do a rough version of this" is closing faster than anyone predicted five years ago. Stay honest about where that line is moving.
Common Fears, Honest Answers
Is AI already replacing entry-level VFX jobs?
It already is, in specific areas. Paint and roto departments are smaller than they were five years ago. Concept art workflows are shifting toward AI-assisted iteration with fewer billed hours. New roles are emerging (ML pipeline TD, AI tool developer, data wrangler), but they require different skills than the entry-level positions they're replacing. If you're entering the industry, focus on understanding the full pipeline, building creative judgment, and learning to work with AI tools, not just the tasks AI is automating. Versatility and creative problem-solving are your best insurance.
Should I be using generative AI tools like Midjourney in my own work?
For personal exploration, mood boards, and concept development, sure. For production work, understand the legal and ethical implications first. Most studios have policies about this, and copyright law currently doesn't protect AI-generated imagery the same way it protects human-authored work. The skill of prompting and iterating with generative AI is useful context to have, but it's a complement to traditional skills, not a replacement. A concept artist who can paint AND knows how to use AI tools for exploration is more useful than one who can only do either.
Is VFX still worth getting into as a career?
Yes, with real caveats. The industry went through a severe contraction in 2023-2024: mass layoffs, studio closures, and budget cuts as streaming platforms recalibrated. LA County alone lost an estimated 41,000 entertainment jobs. But pipelines are filling back up heading into 2026, with Netflix and Disney each spending tens of billions on content. Virtual production, real-time rendering, and immersive media are creating new markets on top of traditional film and TV. The industry has serious structural problems (crunch, rate compression, project instability, lack of union protections for VFX workers), and AI is adding pressure on entry-level roles. If you're passionate about the craft, the opportunities are real, but go in with eyes open about the challenges.
Will studios use AI as an excuse to cut headcount?
It's already happening at some places, usually framed as "restructuring" or "efficiency improvements." If a studio cuts headcount and expects the remaining team to absorb the work with AI tools, that's a decision about labor costs, not about AI capability. Pay attention to whether your studio is investing in artists who use AI tools or just using AI as an excuse to cut costs. If it's the latter, start looking. And consider supporting VFX unionization efforts, because individual artists have very little leverage against these decisions without collective bargaining.
Will CG Lounge sell AI-generated content?
No. Every product, course, and breakdown on CG Lounge is created by real artists. The marketplace exists specifically to support human creators selling their craft. We're not interested in AI-generated content flooding the catalog. Quality over quantity, always.
The Bottom Line
Every hour you spend reading panic threads about AI replacing artists is an hour you could spend learning a new tool, improving your reel, or building something. But equally, every hour spent pretending nothing is changing is an hour wasted on denial.
The artists who will thrive in the next decade are the ones who:
- Use AI tools as part of their workflow, not a threat to it
- Keep learning across departments, not just their specialization
- Build transferable skills like procedural thinking, scripting, and pipeline understanding
- Focus on taste and craft because that's the part machines can't replicate today
- Support each other because the VFX community has always been stronger together
- Stay honest about what's changing instead of pretending it isn't
The tools got better. Some jobs got smaller. New ones are emerging. The one constant across every technical shift in this industry: the people who stayed curious, kept learning, and supported each other always came out ahead. AI doesn't change that. But pretending it's consequence-free doesn't help anyone.
Go make something. And help the person next to you do the same.
About the author
Arvid Schneider
Lighting Supervisor at Image Engine, Founder of CG Lounge
VFX artist who's worked at ILM, MPC, and Image Engine on projects like The Mandalorian, Ready Player One, and Dune: Prophecy. Built CG Lounge because the industry needs a marketplace that respects artists.