A scene from a Star Wars real-time ray tracing demo built in the Unreal Engine.
ILMxLAB

Real-time ray tracing, and what it means for video games, explained

Welcome to the next generation of gaming graphics

Samit Sarkar (he/him) is Polygon’s deputy managing editor. He has more than 16 years of experience covering video games, movies, television, and technology.

With the arrival in April of the first official details about Sony’s next-generation PlayStation, and Microsoft’s unveiling of Project Scarlett at its E3 2019 press briefing on Sunday, the future of console gaming is finally becoming a bit less hazy.

Details remain scarce for now. Sony confirmed that it will not release the PlayStation 5 (or whatever it ends up being called) before 2020, and Microsoft said that Project Scarlett is scheduled for holiday 2020 (with Halo Infinite as a launch title). But there’s a key next-generation graphics feature that seems to be gaining traction across console and PC gaming — and no, we’re not referring to 8K resolution, which the PS5 and Project Scarlett will apparently support even though it’s barely a twinkle in the eye of most consumers.

Ray tracing — specifically, real-time ray tracing — is the latest buzz phrase in gaming graphics, and with good reason: It’s the future. The technique wasn’t feasible in video games until very recently; it had been limited primarily to the world of Hollywood filmmaking, where it has long been used to produce computer-generated imagery in animation and visual effects (and to blend that CGI seamlessly with live-action footage). Now, with confirmation that the PS5 and Project Scarlett will support real-time ray tracing, it’s poised to become one of the defining elements of next-generation graphics.

But why is ray tracing such a big deal for video games? How is it different from the way game makers have done things for decades? Let’s dive in — and don’t worry, we’ll try not to get too nerdy.

A wet office floor with ray-traced reflections in Remedy’s upcoming game Control.
Remedy Entertainment/505 Games

What is ray tracing, and how does it work?

In the real world, everything that we see is essentially the result of light hitting the objects in our view. The varying degrees to which those objects absorb, reflect, and/or refract the light determine how everything looks to us.

Ray tracing is essentially the reverse process, and the name is very literal: It refers to a method of generating an image with a computer by “tracing” the path of light from an imaginary eye or camera to the objects in that image. (This is way more efficient than tracing all rays emitted by light sources; processing any rays that don’t make it to the viewer would be a waste of computing power.)

A ray tracing algorithm accounts for elements such as materials and light sources. For instance, two basketballs of an identical shade of orange won’t look the same if one is made of leather and the other is made of rubber, because light will interact with them very differently. Anything shinier, like metal or hard plastic, will produce reflections and illuminate nearby items with indirect light. Objects sitting in the path of any light rays will cast shadows. And a transparent or translucent substance such as glass or water will refract (bend) light — think of the way a straw appears to break if it’s sitting in a glass of water.
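To make that a little more concrete, here is a rough sketch in Python of how those ingredients might be mixed together when a ray hits a surface. The Material fields, the numbers, and the shade() function are simplified stand-ins for illustration, not any engine’s actual code.

from dataclasses import dataclass

@dataclass
class Material:
    base_color: tuple          # the surface's own color, as an (r, g, b) tuple in 0..1
    reflectivity: float = 0.0  # 0 = matte, 1 = perfect mirror
    transparency: float = 0.0  # 0 = opaque, 1 = clear glass

def shade(material, lit, reflected_color, refracted_color):
    # Direct light: dim the surface's own color if the point sits in shadow.
    color = [c * (1.0 if lit else 0.2) for c in material.base_color]
    # Add what a bounced ray saw (reflection) ...
    color = [c + material.reflectivity * r for c, r in zip(color, reflected_color)]
    # ... and what a bent ray saw (refraction).
    color = [c + material.transparency * t for c, t in zip(color, refracted_color)]
    return tuple(min(c, 1.0) for c in color)

# Two basketballs in the same shade of orange: matte leather vs. shinier rubber.
leather = Material(base_color=(1.0, 0.5, 0.1), reflectivity=0.05)
rubber = Material(base_color=(1.0, 0.5, 0.1), reflectivity=0.4)
sky = (0.4, 0.6, 1.0)  # pretend the bounced ray saw blue sky
print(shade(leather, lit=True, reflected_color=sky, refracted_color=(0, 0, 0)))
print(shade(rubber, lit=True, reflected_color=sky, refracted_color=(0, 0, 0)))

Even though the two balls start with an identical base color, they end up looking different, because the shinier rubber folds more of the reflected surroundings into its final color.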

A diagram of ray tracing: a camera projects rays through a grid-like image plane toward an orange ball, which casts a shadow because of a light bulb above it.
Henrik/Wikimedia Commons

Because ray tracing is based on simulating the way that light moves in real life and how it behaves when it interacts with physical substances and materials — i.e., it’s governed by the laws of physics — CGI produced via ray tracing can truly be photorealistic. That’s why the technique has become the norm in filmmaking. But ray tracing has a downside: It is so computationally intensive that, until recently, it was impractical for real-time video game graphics.

To explain why, let’s dive further into the details of how ray tracing actually works. In the diagram above, think of the grid as a computer monitor. To render a scene from a modern video game, the computer maps the 3D virtual world of the game to the 2D viewing plane that is the monitor. In doing so, the computer must determine the color for every pixel on the screen — and a 1080p display has north of 2 million pixels.
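To put a number on “north of 2 million”: a 1080p display is 1,920 pixels wide and 1,080 pixels tall, which works out to 1,920 × 1,080 = 2,073,600 pixels, and every one of them needs a fresh color for every frame.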

The process begins with projecting one or more rays from the camera through each pixel and seeing if the rays intersect with any triangles. (Virtual objects in computer graphics are built out of polygons, usually triangles; a single model can contain thousands or millions of them.) If a ray does hit a triangle, the algorithm uses data such as the color of the triangle and its distance from the camera to help determine the final color of the pixel. In addition, rays may bounce off a triangle or travel through it, spawning more and more rays to trace. And tracing a single ray through a pixel isn’t sufficient to produce a lifelike image. The more rays, the higher the image quality ... and the higher the processing cost.
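As a rough illustration of that loop, here is a toy “ray tracer” in plain Python (a sketch for this article, not production rendering code). It fires one ray per pixel from the camera, tests it against a single triangle, and prints a hash mark wherever a ray hits; a real renderer would do this against millions of triangles, with many rays per pixel, and follow the bounces described above.

WIDTH, HEIGHT = 16, 8  # a tiny "screen" so the result fits in a terminal

def sub(a, b):
    return (a[0] - b[0], a[1] - b[1], a[2] - b[2])

def cross(a, b):
    return (a[1]*b[2] - a[2]*b[1], a[2]*b[0] - a[0]*b[2], a[0]*b[1] - a[1]*b[0])

def dot(a, b):
    return a[0]*b[0] + a[1]*b[1] + a[2]*b[2]

def hit_triangle(origin, direction, v0, v1, v2):
    # Standard Moller-Trumbore ray/triangle intersection test.
    # Returns the distance along the ray to the hit, or None for a miss.
    edge1, edge2 = sub(v1, v0), sub(v2, v0)
    h = cross(direction, edge2)
    a = dot(edge1, h)
    if abs(a) < 1e-8:
        return None  # ray is parallel to the triangle
    f = 1.0 / a
    s = sub(origin, v0)
    u = f * dot(s, h)
    if u < 0.0 or u > 1.0:
        return None
    q = cross(s, edge1)
    v = f * dot(direction, q)
    if v < 0.0 or u + v > 1.0:
        return None
    t = f * dot(edge2, q)
    return t if t > 1e-8 else None

camera = (0.0, 0.0, 0.0)
triangle = ((-1.0, -1.0, 3.0), (1.0, -1.0, 3.0), (0.0, 1.0, 3.0))  # floats in front of the camera

for y in range(HEIGHT):
    row = ""
    for x in range(WIDTH):
        # Map this pixel to a point on an imaginary image plane at z = 1,
        # then fire a ray from the camera through that point.
        px = (x + 0.5) / WIDTH * 2 - 1
        py = 1 - (y + 0.5) / HEIGHT * 2
        t = hit_triangle(camera, (px, py, 1.0), *triangle)
        row += "#" if t is not None else "."  # color the pixel by what the ray hit
    print(row)

Turning that hit into a convincing final color, and spawning new rays for reflections, refractions, and shadows, is where the real work (and the real cost) comes in.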

Companies like Weta Digital and Pixar have “render farms” — supercomputers that may use tens of thousands of processor cores working together — that can spend hours and hours generating a single frame of a particular visual effect, or everything on the screen in a CG-animated film. But all the CPU and GPU calculations that go into producing one frame of a video game have to be conducted within a fraction of a second. That’s because a gaming machine must be able to render a new frame at least 30 times every second, on the fly, to deliver a smooth play experience. (This is why pre-rendered cutscenes often look significantly better than live gameplay: When developers can take their time to generate the footage for a pre-recorded video, they can put a lot more into it.)
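Some quick arithmetic shows how tight that budget is: at 30 frames per second, each frame has to be finished in 1/30 of a second, or roughly 33 milliseconds (about 16.7 ms at 60 fps). Even at a single ray per pixel, a 1080p image at 30 fps works out to 2,073,600 × 30, or more than 62 million primary rays per second, before counting any of the extra rays spawned for reflections, refractions, and shadows.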

In contrast, real-time computer graphics have been rendered for decades using a process known as rasterization. While rasterization also requires a lot of computing power, modern GPUs are very adept at using the technique to quickly draw polygons and convert them into pixels that can then be shaded and lit. However, there are major limitations to rasterization when it comes to producing photorealistic graphics, especially with respect to lighting. Over the years, game developers have devised clever techniques to generate lighting elements such as reflections, shadows, and indirect illumination, but many of these implementations amount to hacks and workarounds that are required because ray tracing hasn’t been a feasible alternative.
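For contrast, here is the rasterization version of the same toy scene from the sketch above (again, illustrative Python, not how a GPU actually implements it): instead of firing a ray out through each pixel, it projects the triangle’s corners onto the screen and fills in whichever pixels land inside the projected shape.

WIDTH, HEIGHT = 16, 8

def project(v):
    # Perspective projection onto the same z = 1 image plane the ray tracer used.
    x, y, z = v
    return (x / z, y / z)

def edge(a, b, p):
    # Signed-area test: which side of the edge from a to b the point p falls on.
    return (b[0] - a[0]) * (p[1] - a[1]) - (b[1] - a[1]) * (p[0] - a[0])

triangle = ((-1.0, -1.0, 3.0), (1.0, -1.0, 3.0), (0.0, 1.0, 3.0))
a, b, c = (project(v) for v in triangle)

for y in range(HEIGHT):
    row = ""
    for x in range(WIDTH):
        # Center of this pixel on the image plane, using the same mapping as before.
        p = ((x + 0.5) / WIDTH * 2 - 1, 1 - (y + 0.5) / HEIGHT * 2)
        w0, w1, w2 = edge(b, c, p), edge(c, a, p), edge(a, b, p)
        inside = (w0 >= 0 and w1 >= 0 and w2 >= 0) or (w0 <= 0 and w1 <= 0 and w2 <= 0)
        row += "#" if inside else "."
    print(row)

It draws the same little picture, but notice what’s missing: nothing in this loop knows where the lights are, what should be reflected in the surface, or what sits behind a pane of glass. All of that has to be layered on afterward, which is exactly where those hacks and workarounds come in.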

The latest graphics technology uses both rasterization and ray tracing, splitting the rendering duties between the two methods according to the tasks each one is best suited for. Nvidia first brought real-time ray tracing to consumers last fall with its new RTX line of graphics cards, which include hardware built specifically to handle ray tracing calculations. In April, the company released a new driver that enables ray tracing on some of its GTX cards as well, albeit with lower performance.

EA DICE used the technique for reflections in Battlefield 5, Crystal Dynamics focused on shadows in Shadow of the Tomb Raider, and 4A Games went with global illumination and ambient occlusion in Metro Exodus. Ray tracing support remains limited for now, but that’s likely about to change.

Why is ray tracing the future of video game graphics?

It’s early days for real-time ray tracing in video games. Nvidia’s RTX graphics cards are currently the only consumer-level GPUs that offer hardware-based support for the technique, which is why the number of games that take advantage of the feature is low. But as more and more stakeholders jump on board, the future looks brighter and brighter.

Microsoft integrated ray tracing support into DirectX 12, key software for Windows and Xbox One games, last fall. At the 2019 Game Developers Conference in March, Epic Games and Unity Technologies announced that their respective game engines — the Unreal Engine and Unity, which are two of the most popular engines in modern game development — now offer native support for ray tracing. Crytek will soon integrate software-based ray tracing into its CryEngine, meaning that the feature can work in CryEngine games without dedicated hardware such as Nvidia’s RTX cards.

Nvidia’s chief rival, AMD, isn’t sitting on the sidelines when it comes to ray tracing. The company has yet to announce its own GPUs with hardware-based support for the feature; when it unveiled two cards based on its next-generation Navi architecture at E3 2019 on Monday, the Radeon RX 5700 XT and the cheaper RX 5700, AMD made no mention of ray tracing. However, both Project Scarlett and the next PlayStation will run on AMD silicon, including custom GPUs based on Navi. If the PS5 and Scarlett support ray tracing, then it seems a safe bet that some of AMD’s Navi graphics cards will, too.

It’s likely that a year from now, AMD will have joined Nvidia in selling ray tracing-capable GPUs. And at that point, we might be just a few months away from the debuts of the PS5 and Scarlett. That would create a massive, rock-solid foundation for game developers, a compelling reason to make games that support ray tracing features. And it would push video game graphics forward, bringing games to an unprecedented level of photorealism.