Why is eye-based ray tracing preferred over light-based ray tracing?

Computer Graphics Asked by jheindel on August 12, 2020

It seems that virtually all path tracers use eye-based or view-based path tracing. That is, the light rays originate from the camera rather than the light source. The reason given for this everywhere I have seen online is that if one begins from the light source, it is quite unlikely that the ray will ever hit the camera. For instance, the documentation for Blender's Cycles says "we do not waste light rays that will not end up in the camera".

This seems intuitive at first, but it only reverses the problem: now one traces a ray until it hits a light source, and that ray may never reach a light within the maximum number of bounces, so it is wasted just the same. I can see how this is still an improvement, though. If we originate at the camera, the first bounce tends to carry a large weight, whereas if we flipped the path to start from the light, that same bounce would be the last one and would tend to carry a small weight, because many bounces have preceded it. That seems like the answer, except for one thing.

Most path tracers use next event estimation, which means that at each bounce the light path is connected directly to a light source, as long as nothing occludes that connection. This always yields a valid path and greatly speeds up convergence. However, if one is going to use next event estimation anyway, I really can't understand how tracing rays from the camera is advantageous over tracing them from the light source: one should almost always get a complete light path either way. Is it related to the ambiguity about which pixel on the camera to connect to? There is a similar ambiguity about which point on an area light to connect to.
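
For concreteness, here is a minimal sketch of the connection step that next event estimation performs at each bounce. This is illustrative Python, not any particular renderer's code: light.sample_point, scene.occluded, and bsdf.eval are assumed placeholder helpers for area-light sampling, the shadow-ray visibility test, and the surface response.

    import math

    # Tiny tuple-based vector helpers, reused by the later sketches.
    def sub(a, b):   return (a[0] - b[0], a[1] - b[1], a[2] - b[2])
    def add(a, b):   return (a[0] + b[0], a[1] + b[1], a[2] + b[2])
    def mul(a, b):   return (a[0] * b[0], a[1] * b[1], a[2] * b[2])
    def scale(a, s): return (a[0] * s, a[1] * s, a[2] * s)
    def dot(a, b):   return a[0] * b[0] + a[1] * b[1] + a[2] * b[2]

    def nee_contribution(hit_p, hit_n, bsdf, light, scene):
        # Sample a point on the area light (pdf measured over its area);
        # this is where the "which point on the light" ambiguity lives.
        light_p, light_n, pdf_area = light.sample_point()
        d = sub(light_p, hit_p)
        dist2 = dot(d, d)
        wi = scale(d, 1.0 / math.sqrt(dist2))
        # The connection only counts if nothing opaque blocks it.
        if scene.occluded(hit_p, light_p):
            return (0.0, 0.0, 0.0)
        # Geometry term: converts the area pdf to a solid-angle measure.
        cos_surf = max(0.0, dot(hit_n, wi))
        cos_light = max(0.0, -dot(light_n, wi))
        g = cos_surf * cos_light / dist2
        return scale(mul(bsdf.eval(wi), light.emitted_radiance()), g / pdf_area)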

I am sure there must be a reason for tracing from the camera, because everybody seems to be doing it, so if someone could explain this to me, and/or point to recent papers which compare the two approaches, that would be greatly appreciated.

2 Answers

It is a matter of performance.

From a particular point on any object, you cannot know precisely where any illumination is coming from. There could be a near-perfect mirrored surface nearby. There could be water which reflects some portion of light to that point on the surface. Answering this question would require a full solution to the rendering equation.

However, from a particular point on any object in the scene, you can at least know whether or not any lights are directly casting light on that point. How? By firing a ray directly from the surface to that light. You know exactly where the lights are, so you can fire rays in those directions. If nothing opaque is in the way, then some portion of that light directly illuminates that point on the surface.
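
In code, that check is a single shadow ray. A sketch, reusing the small vector helpers from the question above; scene.intersect is an assumed ray cast that returns the distance to the nearest opaque hit, or None for a miss:

    def light_visible(surface_p, light_p, scene, eps=1e-4):
        d = sub(light_p, surface_p)
        dist = math.sqrt(dot(d, d))
        direction = scale(d, 1.0 / dist)
        # Nudge the origin off the surface to avoid re-hitting it.
        origin = add(surface_p, scale(direction, eps))
        t = scene.intersect(origin, direction)
        # Visible if nothing opaque lies between the point and the light.
        return t is None or t >= dist - eps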

If you start from the perspective of a light, you can fire a whole bunch of rays out from the light in every direction and hit a whole bunch of objects. You can determine the direct illumination at all of those positions on all of those objects.

But there is no guarantee that any of those objects are visible to the viewer. In this scenario, many rays and light computations get wasted.
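
A sketch of that light-first loop makes the waste explicit. Here camera.project is an assumed helper that maps a world-space point to a pixel (or None when the point falls outside the view), and cosine_sample_hemisphere is an assumed direction sampler:

    def trace_light_rays(scene, light, camera, n_rays):
        useful = 0
        for _ in range(n_rays):
            origin, n, _ = light.sample_point()
            direction, _pdf = cosine_sample_hemisphere(n)
            hit = scene.intersect_full(origin, direction)  # assumed: hit record or None
            if hit is None:
                continue  # ray left the scene: wasted
            pixel = camera.project(hit.p)
            if pixel is None or scene.occluded(hit.p, camera.position):
                continue  # shaded a point the viewer never sees: also wasted
            useful += 1   # only these rays contribute anything to the image
        return useful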

By contrast, if you start from the perspective of the viewer, you first find all surfaces that are directly visible to the viewer. From there, you can compute the direct illumination for all of those points within the view.

Exactly zero rays have been wasted in this scenario. Every ray you fire is contributing in some way to the image (even if a light is blocked, it's still contributing by not providing illumination from that surface).

So in the case of purely direct illumination, there is no question that starting from the viewport uses fewer rays and fewer lighting computations. It is therefore faster.

If you then start firing reflection and refraction rays, these too are faster when started from the viewer. Each reflection/refraction ray is fired from a position that we know contributes to the final image. Whereas if you're firing a bunch of rays spawned by rays that were fired from lights, you have no idea whether they're going to meaningfully contribute to the image.

In view-based ray tracing, every ray you fire will contribute meaningfully to the scene being displayed. Well, until you start having to take into account more complex lighting, such as diffuse inter-reflection. That's the point where you can no longer know for certain where the source of lighting is, so you have to start firing rays speculatively.

But even then, if you're going to fire rays speculatively, it's best to start from places that you know are going to be illuminated.
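
A sketch of what that looks like in an eye-first tracer: every vertex does the cheap, guaranteed-useful direct-light connection, then the path continues in a random direction to pick up indirect light. This is again illustrative, building on the assumed helpers from the sketches above:

    def trace_eye_path(ray_o, ray_d, scene, max_bounces=4):
        color = (0.0, 0.0, 0.0)
        throughput = (1.0, 1.0, 1.0)
        for _ in range(max_bounces):
            hit = scene.intersect_full(ray_o, ray_d)
            if hit is None:
                break
            # Direct light is always worth computing here, because this
            # vertex is known to lie on a path that reaches the camera.
            direct = nee_contribution(hit.p, hit.n, hit.bsdf, scene.light, scene)
            color = add(color, mul(throughput, direct))
            # Speculative continuation for indirect (diffuse inter-reflection).
            ray_d, pdf = cosine_sample_hemisphere(hit.n)
            weight = dot(hit.n, ray_d) / pdf
            throughput = mul(throughput, scale(hit.bsdf.eval(ray_d), weight))
            ray_o = hit.p
        return color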

Answered by Nicol Bolas on August 12, 2020

As you point out, there is a problem about which pixel on the camera to connect to. You probably wouldn't notice where you took the sample on an area light, because the light is generally fairly uniform across its surface; even if it is textured, you can take a few samples and they will average out to look roughly correct. However, if you connect to a random pixel, you would very easily notice that some pixels are black or have aliasing problems. By starting at the camera, it's easier to ensure that all pixels are being sampled enough.
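
Concretely, an eye-first render loop guarantees coverage by construction, because it hands out the same number of samples to every pixel. A sketch, reusing the assumed trace_eye_path from the answer above, with camera.generate_ray as an assumed helper:

    def render(scene, camera, width, height, spp=16):
        image = [[(0.0, 0.0, 0.0)] * width for _ in range(height)]
        for y in range(height):
            for x in range(width):
                # Every pixel gets exactly spp samples, so none is left
                # black simply because no light path happened to find it.
                for _ in range(spp):
                    o, d = camera.generate_ray(x, y)
                    sample = trace_eye_path(o, d, scene)
                    image[y][x] = add(image[y][x], scale(sample, 1.0 / spp))
        return image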

You are also only considering the light from one bounce. If you also want to get indirect lighting, then your statement about weights comes into play again, and you are more likely to get a good path by starting at the camera.

Finally, there are many techniques which do start from the light, and techniques which start at both ends and try to connect the two sub-paths. Try searching for photon mapping and bidirectional path tracing. It just turns out that forward (eye-based) path tracers generally work best in most situations. RenderMan has an integrator which combines all of these techniques, which you can read about here: https://rmanwiki.pixar.com/display/REN/PxrVCM

Answered by Peter on August 12, 2020
