Rendering the movie Toy Story in 1995 required top-of-the-line computers, and every frame still took a long time (800,000 machine-hours in total, according to Wikipedia).
Would it be possible to render it in real time on a single home computer with modern (2025) GPUs?
Fun fact: in the first Toy Story, all the kids at Andy’s birthday have the same face as him.
Basically, to sum it up:
- Rendering the actual movie from the original files: hard, because of the inherent technical challenges.
- Recreating the movie: easy from a technical perspective for your machine to render, but hard (potentially very hard) from an artistic and design perspective.
- Others have covered the topic of modern renders and their shortcuts, but if you wanted an exact replica, I think films like this are rendered on large HPC clusters.
Looking at the Top500 stats for HPCs, the average top-500 cluster in 1995 was about 1.1 TFLOPS, and today it seems to be around 23.4 PFLOPS. That's an increase of approximately 21,000 times.
So 800,000 hours from 1995 is about 37 hours on today’s average top500 cluster.
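The scaling argument above can be checked with a quick back-of-envelope calculation. The 1995 and 2025 figures are the rough Top500 averages quoted in the comment, not exact numbers:

```python
# Back-of-envelope check of the Top500 scaling argument.
flops_1995 = 1.1e12    # ~1.1 TFLOPS, average Top500 cluster in 1995
flops_2025 = 23.4e15   # ~23.4 PFLOPS, rough average today
render_hours_1995 = 800_000  # machine-hours quoted for Toy Story

speedup = flops_2025 / flops_1995
hours_today = render_hours_1995 / speedup

print(f"speedup: ~{speedup:,.0f}x")                    # ~21,273x
print(f"render time today: ~{hours_today:.0f} hours")  # ~38 hours
```

Raw FLOPS is a crude proxy (memory bandwidth and algorithms matter too), but it puts the number in the right ballpark.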
There have been massive improvements in algorithmic efficiency too.
Kingdom Hearts 3's Toy Story world looked damn close to the original, so I'd assume it's doable if the work was put into it?
Every time. And I don’t even drink. 🤡
I believe he’s making a joke about using PBR shaders to do so, but my knowledge of this is zero.
It’s a joke not everyone appreciates, but that’s ok.
It’s just too “inside golf” for us outside the niche. ‘Scool.
Digital Foundry compared the first movie with Kingdom Hearts 3 back in 2017. Worth a watch.
Wow, it feels like the only thing missing in KH3 is ray tracing to get a closer result!
Super interesting watch, thanks for the link! Now I'm off to figure out where I was in my Kingdom Hearts playthrough.
I also work in 3D and I wanna say yes. If we’re talking solely about the technical aspect, real-time render today can definitely hold up to, or even surpass, the quality of renders from 30 years ago.
Whether it would look exactly the same, and how much work it would take to recreate the entire movie in a real-time engine, is another question.
But 1995 isn’t 30 years…ago…
Yeah! it was only… uh…
no no no no no no
no…
With a modern game engine and PBR shaders you can definitely get the same look as the movie. If you try to render it exactly the way they did, with a software renderer on the CPU, then maybe. Their renderer used the Reyes architecture, which didn't use ray tracing or path tracing at all. You can read about it here:
https://graphics.pixar.com/library/Reyes/paper.pdf
I only skimmed it, but it seems what they call micropolygons is just subdivision, which can also be done in real time with tessellation.
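The micropolygon idea can be sketched as adaptive subdivision: keep splitting a parametric patch until each piece is below a size threshold, then treat each piece as a micropolygon. This is a toy illustration of the subdivision step only, not Reyes itself (real Reyes dices in screen space and shades every micropolygon):

```python
# Toy sketch of Reyes-style "dicing": recursively split a parametric
# patch in (u, v) until each piece is under a size threshold, then
# emit it as a "micropolygon". The surface function and threshold
# here are made up for illustration.

def dice(u0, u1, v0, v1, surface, max_size=0.01, out=None):
    if out is None:
        out = []
    # Measure the patch by the spread between two opposite corners.
    p00, p11 = surface(u0, v0), surface(u1, v1)
    size = max(abs(a - b) for a, b in zip(p00, p11))
    if size <= max_size:
        out.append((u0, u1, v0, v1))  # small enough: keep as a quad
    else:
        um, vm = (u0 + u1) / 2, (v0 + v1) / 2
        for a, b in [(u0, um), (um, u1)]:
            for c, d in [(v0, vm), (vm, v1)]:
                dice(a, b, c, d, surface, max_size, out)
    return out

# Simplest possible "surface": a flat unit square.
quads = dice(0.0, 1.0, 0.0, 1.0, surface=lambda u, v: (u, v, 0.0))
print(len(quads))  # 16384 quads, each 1/128 on a side
```

Hardware tessellation shaders do essentially this split-until-small-enough loop on the GPU, which is why the comment's point about real-time tessellation holds up.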
Things that could affect it, with some wild estimates of how much each reduces the 800k hours:
- Processors are 10-100 times faster. Divide by 100ish.
- A common laptop CPU has 16 cores. Divide by 16.
- GPUs and CPUs have more and faster floating-point math operations. Divide by 10.
- RAM speeds and processor cache lines are larger and faster. Divide by 10.
- Modern processors have more and stronger SIMD instructions. Divide by 10.
- Ray tracing algorithms may be replaced with more efficient ones. Divide by 2.
Multiplying all of those out gives a factor of about 3.2 million, which would take 800,000 hours down to about 15 minutes. The factors overlap though (SIMD and faster math are part of why processors are faster), so a few hours is probably more realistic, and that can be brought to real time by tweaking the resolution.
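Multiplying the wild guesses above straight through looks like this (keeping in mind the factors overlap, so this is an optimistic bound):

```python
# Combine the wild per-factor estimates from the list above.
# These are the comment's guesses, not measured numbers.
factors = {
    "faster processors": 100,
    "16 cores": 16,
    "faster math ops": 10,
    "RAM / cache": 10,
    "SIMD": 10,
    "better algorithms": 2,
}

divisor = 1
for f in factors.values():
    divisor *= f

hours = 800_000 / divisor
print(f"combined divisor: {divisor:,}")            # 3,200,000
print(f"estimated time: {hours * 60:.0f} minutes") # 15 minutes
```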
So it looks plausible!
They used top-of-the-line hardware specialized for 3D rendering. It seems they used Silicon Graphics workstations, which cost more than $10k back in the day. Not something the typical consumer would buy. The calculations above are probably a bit off once this is taken into account.
Then they likely relied on rendering techniques optimized for the hardware they had. I suspect modern GPUs aren’t exactly compatible with these old rendering pipelines.
So multiply by 10 or so and I think we have a more accurate number.
There is no comparison between a top of the line SGI workstation from 1993-1995 and a gaming rig built in 2025. The 2025 Gaming Rig is literal orders of magnitude more powerful.
In 1993 the very best that SGI could sell you was an Onyx RealityEngine2 that cost an eye-watering $250,000 in 1993 money ($553,000 today).
A full spec breakdown would be boring and difficult, but the best you could do in a "deskside" configuration was 4 single-core MIPS processors, either R4400 at 250MHz or R10000 at 195MHz, with something like 2GB of memory. The RE2 system could maybe pull 500 megaflops.
A 2025 gaming rig can have a 12-core (or more) processor clocked at 5GHz and 64GB of RAM. An Nvidia RTX 4060 is rated for roughly 15 teraflops of FP32 compute.
A modern Gaming Rig absolutely, completely, and totally curb stomps anything SGI could build in the early-mid 90s. The performance delta is so wide it’s difficult to adequately express it. The way that Pixar got it done was by having a whole bunch of SGI systems working together but 30 years of advancements in hardware, software, and math have nearly, if not completely, erased even that advantage.
If a single modern gaming rig can’t replace all of the Pixar SGI stations combined it’s got to be very close.
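The gap between the numbers quoted above is easy to make concrete. Assuming the ~500 MFLOPS RealityEngine2 figure and an approximate 15 TFLOPS FP32 rating for an RTX 4060-class GPU:

```python
# Rough FLOPS ratio: 1993 Onyx RealityEngine2 vs. a 2025 mid-range GPU.
# Both figures are the approximate ones from the thread, not benchmarks.
sgi_flops = 500e6   # ~500 megaflops, RealityEngine2 (quoted above)
gpu_flops = 15e12   # ~15 TFLOPS FP32, RTX 4060-class card (approx.)

print(f"~{gpu_flops / sgi_flops:,.0f}x")  # ~30,000x
```

So a single mid-range 2025 GPU is on the order of tens of thousands of times the raw compute of one top-end 1993 workstation, before counting 30 years of software and algorithm improvements.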
Remember how extreme hardware progress was back then: the devkit for the N64 was $250k in 1993, but the console itself was $250 in 1996.
Most of that cost was likely not for the hardware itself, but rather Nintendo greed: paying for early access to Nintendo's next console, and possibly for direct support from Nintendo.
The devkit was an SGI supercomputer, since SGI designed the CPU; no Nintendo hardware in it.
Feels like we’re closing in on a time when remaking something like Toy Story, or any animation, would be as simple as writing a prompt. “Remake Toy Story but it turns out Slinky Dog is an evil mastermind who engineered Woody’s fall from grace”.
Yes and no.
You could get away with it with lots of tricks to downsample and compress, at times when even an RTX 5090 with 32GB of VRAM is like 1/64th of what you'd need for high fidelity.
So you could “do it” but it wouldn’t be “it”.
Hello, I’ve worked in 3D animation productions. Short answer: You can get close.
Unreal Engine has the capacity to deliver something of similar quality in real time, provided you have a powerful enough rig. You would need not only the hardware but also the technical know-how to optimize several aspects of your scenes: basically the knowledge of a mid-to-senior Unreal TD and a mid-to-senior generalist combined, to say the least.
Well, considering the existence of The Matrix Awakens on PS5, that sounds right.
Modern GPUs are easily 1000x faster than the ones back then, so 800k hours would be reduced to 800 hours, about a month of compute time. That's just raw compute though; a lot of optimization work has happened in the last 30 years, so it's probably way less than that. I would expect it to be possible in a few days on a single high-end GPU, depending on how closely you want to replicate the graphics. Some rendering details might be impossible to reproduce identically due to the loss of the exact systems and software used back then.
Doubtful
I'm not talking out my ass: a good buddy of mine worked for Frantic Films for decades, and I learned 3D alongside him… We would squabble over the render farm too…
Anyway, most of the renderers made for those early movies were custom-built, and any time you custom-build, you can't generalize the output to a different system. So it's a long way of saying no, but maybe, if you wrote a custom renderer specifically designed to handle the architecture of the scenes and the art and the lighting and blah blah blah.
Edit: oh, and you would probably need the lord of all graphics cards, possibly multiple in a small array with custom threading software.
I'd say you could render something close in real time. I'm not entirely aware of all the techniques used in this film, but seeing what we can render at 60fps in games, I think you could find a way of achieving the Toy Story look at 24fps. You may need a lot of tweaking though, depending on what you use. (I was thinking of EEVEE, Blender's "real-time" engine: I know there are a bunch of settings and workarounds needed to get good results, and they may push the render out of real time, to something like 0.5, 1, or 2 seconds per frame, so quite fast but not truly real time.)