Probability density obtained in a path tracer using Next Event Estimation



I am trying to implement my own Gradient Domain Path Tracer by following the code of someone who has already implemented one:

https://gist.github.com/BachiLi/4f5c6e5a4fef5773dab1

I have managed to complete several steps, but I want to go further. I extended the code from the reference by implementing next event estimation; here are some results.

Image from the plain path tracer:

Basic Path Tracer

Image produced in the gradient domain:

Gradient Domain image without Next Event Estimation

The results are already decent, but as mentioned, I want more. So I implemented Next Event Estimation; here is the result for the basic path tracer:

Path Tracer with NEE

Here is my code:

private Vector3 SampleWithNEE( Ray ray )
{
  // prepare
  Vector3 T = new Vector3( 1, 1, 1 ), E = new Vector3( 0, 0, 0 ), NL = new Vector3( 0, -1, 0 );
  int depth = 0;
  // random walk
  while (depth++ < MAXDEPTH)
  {
    // find nearest ray/scene intersection
    Scene.Intersect( ray );
    if (ray.objIdx == -1) break; //if there is no intersection
    Vector3 I = ray.O + ray.t * ray.D; //go to the Hit Point on the scene
    Material material = scene.GetMaterial( ray.objIdx, I );
    if (material.emissive) //case of a light
    {
        E += material.diffuse;
        break;
    }
    // next event estimation
    Vector3 BRDF = material.diffuse * 1 / PI;
    float f = RTTools.RandomFloat();
    Vector3 L = Scene.RandomPointOnLight() - I;
    float dist = L.Length();
    L = Vector3.Normalize( L );
    float NLdotL = Math.Abs( Vector3.Dot( NL, -L ) );
    float NdotL = Vector3.Dot( ray.N, L );
    if (NdotL > 0)
    {
        Ray r = new Ray( I + L * EPSILON, L, dist - 2 * EPSILON ); //make it a tiny bit shorter otherwise I risk to hit my starting and destination point
        Scene.Intersect( r );
        if (r.objIdx == -1) //no occlusion towards the light
        {
            float solidAngle= (nldotl * light.getArea()) / (dist * dist);
            E += T * (NdotL) * solidAngle * BRDF * light.emission;
        }
    }
    // sample random direction on hemisphere
    Vector3 R = DiffuseReflectionCosWeighted( ray.N );
    float hemi_PDF = Vector3.Dot( R, ray.N ) / PI;
    T *= (Vector3.Dot( R, ray.N ) / hemi_PDF) * BRDF;
    ray = new Ray( I + R * EPSILON, R, 1e34f );
  }
  return E;
}

Everything works, and the results are shown above. One more thing: my scene contains only diffuse surfaces.

Now, the problem is that in this approach I use two PDFs:

  • One comes from randomly sampling the light in the "direct lighting" part of next event estimation; in practice SolidAngle is our PDF, or rather 1 / PDF.
  • The second PDF comes from using DiffuseReflectionCosWeighted, which yields a PDF equal to CosTheta / PI.
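For concreteness, the second of these can be sketched; this is an illustrative Python version of a cosine-weighted hemisphere sampler (the function and helper names are mine, not the questioner's actual DiffuseReflectionCosWeighted):

```python
import math
import random

def cross(u, v):
    return (u[1]*v[2] - u[2]*v[1], u[2]*v[0] - u[0]*v[2], u[0]*v[1] - u[1]*v[0])

def dot(u, v):
    return sum(a * b for a, b in zip(u, v))

def normalize(v):
    length = math.sqrt(dot(v, v))
    return tuple(c / length for c in v)

def sample_cos_weighted(n):
    """Sample a direction on the hemisphere around unit normal n with
    pdf cos(theta) / pi (Malley's method: sample a disk, project up)."""
    r1, r2 = random.random(), random.random()
    r = math.sqrt(r1)
    phi = 2.0 * math.pi * r2
    x, y = r * math.cos(phi), r * math.sin(phi)
    z = math.sqrt(max(0.0, 1.0 - r1))  # cos(theta) of the sampled direction
    # Build an orthonormal basis (t, b, n) around the normal.
    a = (1.0, 0.0, 0.0) if abs(n[0]) < 0.9 else (0.0, 1.0, 0.0)
    t = normalize(cross(a, n))
    b = cross(n, t)
    return tuple(x * t[i] + y * b[i] + z * n[i] for i in range(3))
```

The pdf of a direction returned by this sampler is `dot(n, d) / pi`, which is exactly the CosTheta / PI term mentioned above.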

So far so good; for implementation details you can look at my code above. But my Gradient Domain Path Tracer has a problem. As in the reference linked above, implemented by Tzu-Mao Li, I need the final probability density of the whole path to compute the final gradient image. Without next event estimation (NEE), how is it computed? In that case (since I only have diffuse surfaces), the probability is the product of CosTheta / PI over every bounce in the scene. That works, and the resulting gradient image is shown above.
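That product can be written down directly. A minimal sketch (Python; `path_pdf_diffuse` is a hypothetical name) of the path density without NEE:

```python
import math

def path_pdf_diffuse(cosines):
    """Probability density of a purely diffuse path sampled with
    cosine-weighted directions: the product of cos(theta_i) / pi
    over every bounce of the path."""
    pdf = 1.0
    for cos_theta in cosines:
        pdf *= cos_theta / math.pi
    return pdf
```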

If I use NEE instead, nothing works any more, because the probability density of my whole path changes and I cannot see how to follow it. The final gradient-domain image with next event estimation is:

Gradient Domain image with Next Event Estimation

I need to understand how to compute the final probability density of the path. Can you help me with that? Thanks in advance!


I don't have any experience with Gradient Domain Path Tracing, but here are my thoughts:

There seems to be a different problem

If you look carefully at the little spikes of distortion in the final image, you will see that they are all lit from the same direction - on their top left side at a consistent 45 degrees. The sphere also appears to be lit from this angle, rather than from above by the light source.

This is unlikely to be explained by an incorrect probability estimation for a path. I would expect there to be a different problem with the code, that these distortions are hinting at.

I will therefore address these two separate points:

  1. You want to know how to calculate the Probability Density of a path when using Next Event Estimation.
  2. There is evidence of some problem unrelated to this.

I'll also review the code for non-essential points - but I'll leave this until after the essentials.

Probability Density of a path when using Next Event Estimation

Looking at the paper on which the code you are following was based, it seems that the novel shift mapping described in section 5.2 is defined in terms of the reflective properties of the surfaces found at the vertices of the path. I must emphasise that I don't have a full understanding of this, but it suggests that Next Event Estimation may not require a change in this approach, as the surfaces encountered will be the same. Hopefully, once the other problems are cleared up, it will be easier to judge whether the image looks correct.

Note that section 5.2 of the paper already mentions (just below Figure 10) that they take into account sampling the emitter "either using BSDF or area sampling".

The difference with Next Event Estimation is that the area sampling happens at every vertex of the path, but it isn't obvious to me that this should cause a problem.

The fact that your scene uses only diffuse surfaces means that the offset path should in most cases rejoin the base path at the second vertex, so you would only need to recalculate the area sampling for the first vertex of the offset path.
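One way to make the densities comparable is to express the NEE (area) sample in the same solid-angle measure as the BSDF samples: a uniform area pdf of 1/A on the light converts to dist² / (A · |cos θ_light|) per solid angle, as seen from the shaded point. A hedged sketch of combining the two (Python; hypothetical names, assuming a single uniformly sampled area light):

```python
import math

def nee_pdf_solid_angle(dist, cos_light, light_area):
    """Convert the uniform area pdf 1/A on the light into a pdf per
    solid angle, as seen from the shaded point."""
    return (dist * dist) / (light_area * max(cos_light, 1e-8))

def path_pdf_with_nee(cosines, dist, cos_light, light_area):
    """Density of a path whose inner vertices were sampled with
    cosine-weighted directions and whose final vertex (on the light)
    was chosen by next event estimation."""
    pdf = 1.0
    for cos_theta in cosines:          # BSDF-sampled bounces
        pdf *= cos_theta / math.pi
    return pdf * nee_pdf_solid_angle(dist, cos_light, light_area)
```

This is only a sketch of the measure conversion, not a statement of what the gradient-domain shift mapping requires.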

The cause of the incorrect lighting direction

In reading through the code to familiarise myself with how it works, I noticed that NLdotL is calculated, but then not used. A text search revealed that the only other occurrence has a different case: nldotl. Here are the two variables in context (first and ninth lines of this excerpt):

float NLdotL = Math.Abs( Vector3.Dot( NL, -L ) );
float NdotL = Vector3.Dot( ray.N, L );
if (NdotL > 0)
{
    Ray r = new Ray( I + L * EPSILON, L, dist - 2 * EPSILON ); //make it a tiny bit shorter otherwise I risk to hit my starting and destination point
    Scene.Intersect( r );
    if (r.objIdx == -1) //no occlusion towards the light
    {
        float solidAngle= (nldotl * light.getArea()) / (dist * dist);
        E += T * (NdotL) * solidAngle * BRDF * light.emission;
    }
}

Since nldotl is not declared anywhere in the excerpt, the code as shown should not even compile in C#, so nldotl is presumably declared somewhere else in the program and holds a stale or constant value. A constant standing in for the light's cosine term would light everything from one fixed direction, and I strongly suspect this is the cause of the distinctive alignment of lighting angle on all the speckles and the sphere. If there is also another contributing problem, it will be easier to analyse it in a separate question once this initial problem has been fixed.

It may be worth considering using a compiler and/or flag setting that gives an error or at the very least a warning for undefined variables, as this type of mistake is both very easy to make, and very easy to overlook later.
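For clarity, the solid-angle factor the code appears to intend (using the properly-cased NLdotL) can be checked in isolation; this sketch is Python, with the same quantities as the C# line:

```python
def light_solid_angle(nl_dot_l, dist, light_area):
    """Approximate solid angle subtended by an area light: the
    projected light area divided by the squared distance."""
    return (abs(nl_dot_l) * light_area) / (dist * dist)
```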

Additional contribution of light source

There appears to be another problem that will cause results to be incorrect in a more subtle way, with no obvious distortion. Due to Next Event Estimation, the light source is contributing to each step along the path. This means it should not make a contribution if the path itself hits the light source directly. Otherwise, for that path the light source will be contributing twice. You can correct this by changing:

if (material.emissive) //case of a light
{
    E += material.diffuse;
    break;
}

to:

if (material.emissive) //case of a light
{
    break;
}

This will give a zero contribution from the intersection with the light.

Note that since I can only see this one function, I can't guess whether this will also make the light appear black in the image. You may or may not need to adjust for rays that start at the camera and hit the light directly.
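A common pattern, if you do want the light to remain visible to camera rays, is to count emission only on a direct hit from the camera (depth 0) and suppress it on deeper hits, which NEE already accounts for. A hedged sketch (hypothetical name, scalar emission for simplicity):

```python
def emissive_contribution(depth, emission):
    """Count a direct hit on the light only for primary (camera) rays;
    hits at depth > 0 are already handled by next event estimation."""
    return emission if depth == 0 else 0.0
```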


Code review

Doubly finite rays

I'm used to defining a ray as a half-infinite line segment - having a starting point but no ending point. I notice that this code gives a ray both a starting point and a length. The only place I can see a reason for this is when testing a shadow ray against the light source: the code checks that there are no intersections on the way to the light, so intersections behind the light (or on the light itself) must be excluded. In all other places, the ray is defined with a pseudo-infinite length (1e34f).

The following suggestion won't affect the correctness of your code, but it may be more readable, and it avoids both the pseudo-infinite length and the second epsilon adjustment.

If the ray is simply a starting point and a direction, then shadow rays can simply check that the first intersection is the light, rather than checking that there is no intersection. For example, by replacing:

if (r.objIdx == -1) //no occlusion towards the light

with:

if (r.objIdx == LIGHT) // There is no object intersected before the light

Here I use LIGHT as a placeholder for the id of the light source, since that part of the code is not included in the question.

  • This will always give false if the light is occluded by a nearer object.
  • This will always give true if the ray hits the light before any more distant objects.

This is therefore equivalent to the current code, but does not require the ray to store a length.
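The equivalence can be illustrated with a toy intersection routine over a list of (object_id, distance) hit records along a half-infinite ray (the data and names here are hypothetical, just to show the comparison):

```python
LIGHT = 0  # hypothetical id of the light source

def nearest_hit(hits):
    """Return the id of the closest intersection along the ray,
    or -1 if the ray hits nothing."""
    return min(hits, key=lambda h: h[1])[0] if hits else -1

def light_visible(hits):
    """The light is unoccluded iff the first thing the shadow ray
    hits is the light itself; no ray length or second epsilon needed."""
    return nearest_hit(hits) == LIGHT
```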

Lights that don't reflect

The code currently models lights and surfaces separately. This means that if an object is a light then it is only emissive, and reflects no light from other objects.

This causes a negligible difference in the example scene in the question, which has a single bright light. However, if used with a number of dimmer lights, it would be more noticeable that they do not light each other. In many cases the difference will not be noticeable, so this is not a problem if it works for the scenes you wish to render.