Importance Sampling of Environment Maps



In MIS-based unidirectional path tracers and similar kinds of renderers, what is the currently best-known, ideally production-proven, approach to sampling environment maps (EMs)? I would rather take a moderately complex solution that performs reasonably than one that offers perfect sampling at the price of an extremely complicated, hard-to-understand implementation.

What I know so far

There are a few simple ways to sample an EM. One can sample the hemisphere above the surface in a cosine-weighted fashion, which ignores the shapes of both the BSDF and the EM function. As a result, it is unusable for EMs with high dynamic range.

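For reference, a minimal sketch of the cosine-weighted case via the standard "sample a disk, project it up" construction; `frame` is an assumed orthonormal shading basis, not code from any particular renderer:

    import math
    import random

    def sample_cosine_hemisphere(frame):
        """Draw a direction with pdf p(w) = cos(theta) / pi around the
        surface normal.  `frame` is an assumed 3x3 orthonormal basis
        (tangent, bitangent, normal), each row a world-space vector."""
        u1, u2 = random.random(), random.random()
        r = math.sqrt(u1)                   # point on the unit disk...
        phi = 2.0 * math.pi * u2
        x, y = r * math.cos(phi), r * math.sin(phi)
        z = math.sqrt(max(0.0, 1.0 - u1))   # ...projected up to the hemisphere
        pdf = z / math.pi                   # cos(theta) / pi
        local = (x, y, z)
        world = tuple(sum(local[k] * frame[k][i] for k in range(3))
                      for i in range(3))
        return world, pdf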

To improve the sampling to a usable level, one can sample proportionally to the EM's luminance over the whole sphere. This is relatively easy to implement and works reasonably well. However, this sampling strategy still ignores the hemisphere's visibility information and the cosine factor (as well as the BSDF), which leads to high noise on surfaces that are not directly lit by the EM's high-intensity regions.

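A common way to implement this luminance-proportional sampling is a pair of piecewise-constant CDFs over a lat-long EM. A NumPy sketch under my own naming, assuming a strictly positive luminance map (the sin(theta) weighting compensates for the lat-long mapping's area distortion near the poles):

    import numpy as np

    def build_em_distribution(luminance):
        """luminance: HxW array of EM luminance in lat-long parametrization.
        Returns (row_cdf, cond_cdfs, pdf_map) for luminance-proportional
        sampling; assumes no row of the map is identically zero."""
        h, w = luminance.shape
        theta = (np.arange(h) + 0.5) / h * np.pi
        weighted = luminance * np.sin(theta)[:, None]
        row_sums = weighted.sum(axis=1)
        row_cdf = np.cumsum(row_sums) / row_sums.sum()
        cond_cdfs = np.cumsum(weighted, axis=1) / row_sums[:, None]
        # pdf in solid-angle measure: texel probability / texel solid angle
        pixel_sa = (2 * np.pi / w) * (np.pi / h) * np.sin(theta)[:, None]
        pdf_map = weighted / row_sums.sum() / pixel_sa
        return row_cdf, cond_cdfs, pdf_map

    def sample_em(row_cdf, cond_cdfs, pdf_map, u1, u2):
        """Pick a texel proportional to sin-weighted luminance; returns
        (row, col, pdf) with the pdf w.r.t. solid angle."""
        row = int(np.searchsorted(row_cdf, u1))
        col = int(np.searchsorted(cond_cdfs[row], u2))
        return row, col, pdf_map[row, col]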

Papers

I have found a few papers on the topic, but I haven't read them yet. Are any of these worth reading and implementing in a forward unidirectional path tracer, or is there something better?


While product sampling methods provide a better (in the limit, perfect) distribution for rays, I would say that MIS (multiple importance sampling) is the method verified in production. Since shadowing information is unknown, product sampling doesn't become perfect anyway, and it is quite hard to implement. Shooting more rays might be worth more! It depends on your situation and ray budget, of course!

Short description of MIS: in essence, you trace both a BSDF ray (as you would anyway for indirect lighting) and an explicit ray towards the EM. MIS gives you weights so that you can combine the two in a way that removes a lot of the noise. MIS is especially good at choosing the "technique" (implicit or explicit sampling) based on the situation at hand. This happens naturally, without the user having to make hard choices based on roughness etc.

Chapter 9 of http://graphics.stanford.edu/papers/veach_thesis/ covers this in detail. Also see https://www.shadertoy.com/view/4sSXWt for a demo of MIS in action with area lights.
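To make the weighting concrete, here is a minimal sketch of one MIS direct-lighting estimate using Veach's power heuristic. All six callables are hypothetical hooks into your renderer, and visibility/shadow-ray testing is omitted for brevity:

    def power_heuristic(pdf_a, pdf_b):
        """Veach's power heuristic with beta = 2: the weight for a sample
        drawn from technique a when technique b competes."""
        a2, b2 = pdf_a * pdf_a, pdf_b * pdf_b
        return a2 / (a2 + b2) if a2 + b2 > 0.0 else 0.0

    def direct_em_mis(sample_em, pdf_em, sample_bsdf, pdf_bsdf,
                      eval_bsdf, eval_em):
        """One MIS estimate of EM lighting at a shading point.  sample_*
        return (direction, pdf), pdf_* give the pdf of an arbitrary
        direction under the respective technique, and eval_bsdf is
        assumed to include the clamped cosine term."""
        estimate = 0.0

        # Explicit (light) sample, weighted against the BSDF pdf.
        wi, p = sample_em()
        if p > 0.0:
            estimate += (power_heuristic(p, pdf_bsdf(wi))
                         * eval_bsdf(wi) * eval_em(wi) / p)

        # Implicit (BSDF) sample, weighted against the EM pdf.
        wi, p = sample_bsdf()
        if p > 0.0:
            estimate += (power_heuristic(p, pdf_em(wi))
                         * eval_bsdf(wi) * eval_em(wi) / p)

        return estimate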



Disclaimer: I have no idea what the state of the art in environment map sampling is. In fact, I have very little knowledge of this topic, so this will not be a complete answer, but I will formulate the problem mathematically and analyze it. I do this mainly for myself, to make it clear in my own head, but I hope that the OP and others will find it useful.



We want to calculate the direct illumination at a point, i.e. we want to know the value of the integral $$ I =\int_{S^2} f(\omega_i,\omega_o,n) \, L(\omega_i) \,(\omega_i\cdot n)^+ \, d\omega_i $$ where $f(\omega_i,\omega_o,n)$ is the BSDF (I explicitly state the dependence on the normal, which will be useful later), $L(\omega_i)$ is the radiance of the environment map, and $(\omega_i \cdot n)^+$ is the cosine term together with the hemisphere visibility (that is what the $+$ is for), i.e. $(\omega_i \cdot n)^+=0$ if $(\omega_i \cdot n)<0$.

We estimate this integral by generating $N$ samples $\omega_i^1,\dots,\omega_i^N$ with respect to a probability density function $p(\omega_i)$; the estimator is $$ I \approx \frac1N \sum_{k=1}^N \frac{f(\omega_i^k,\omega_o,n) \, L(\omega_i^k) \,(\omega_i^k\cdot n)^+ }{p(\omega_i^k)} $$
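In code, this estimator is little more than a loop. A sketch with hypothetical callables standing in for the renderer's actual machinery:

    def estimate_direct(n, sample_dir, f, L, cos_plus, pdf):
        """Plain Monte Carlo version of the estimator above.  sample_dir
        draws a direction from p; f, L, cos_plus and pdf evaluate the
        corresponding factors of the integrand for that direction."""
        total = 0.0
        for _ in range(n):
            wi = sample_dir()
            p = pdf(wi)
            if p > 0.0:
                total += f(wi) * L(wi) * cos_plus(wi) / p
        return total / n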

The question is: how do we choose the pdf $p$ such that we can generate samples in acceptable time while keeping the variance of the above estimator reasonably small?


Best method: pick $p$ proportional to the integrand $$ p(\omega_i) \sim f(\omega_i,\omega_o,n) \, L(\omega_i) \,(\omega_i\cdot n)^+ $$ But most of the time it is very expensive to generate a sample according to this pdf, so it is not useful in practice.

Methods suggested by OP:

Method one: Choose $p$ proportional to the cosine term $$ p(\omega_i) \sim (\omega_i\cdot n)^+ $$ Method two: Choose $p$ proportional to the EM $$ p(\omega_i) \sim L(\omega_i) $$


Based on the names of the mentioned papers I can partially guess what they do (unfortunately I do not have the time and energy to read them right now). But before discussing what they most probably do, let's talk about power series a little bit :D


Take a function of one real variable, e.g. $f(x)$. If it is well behaved, it can be expanded into a power series $$ f(x) = \sum_{k=0}^\infty a_k x^k $$ where the $a_k$ are constants. This can be used to approximate $f$ by truncating the sum at some step $n$: $$ f(x) \approx \sum_{k=0}^n a_k x^k $$ If $n$ is sufficiently high, the error is really small.
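For a concrete, standard example of such a truncation (not from the papers, just textbook calculus), take the exponential: $$ e^x = \sum_{k=0}^\infty \frac{x^k}{k!} \approx 1 + x + \frac{x^2}{2} \quad \text{for small } |x| $$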

Now, if we have a function of two variables, e.g. $f(x,y)$, we can expand it in the first argument only $$ f(x,y) = \sum_{k=0}^\infty b_k(y) \, x^k $$ where the $b_k(y)$ are functions of $y$ alone. It can also be expanded in both arguments $$ f(x,y) = \sum_{k,l=0}^\infty c_{kl} x^k y^l $$ where the $c_{kl}$ are constants. So a function of real arguments can be expanded as a sum of powers of those arguments. Something similar can be done for functions defined on the sphere.

Now, let's take a function defined on the sphere, e.g. $f(\omega)$. Such a function can be expanded in a similar fashion to a function of one real parameter $$ f(\omega) =\sum_{k=0}^\infty \alpha_k S_k(\omega) $$ where the $\alpha_k$ are constants and the $S_k(\omega)$ are spherical harmonics. Spherical harmonics are normally indexed by two indices and written as functions in spherical coordinates, but that is not important here. The important thing is that $f$ can be written as a sum of known functions.
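As an aside, the coefficients $\alpha_k$ can be estimated numerically. A minimal Python sketch using only the first two real SH bands in Cartesian form and plain Monte Carlo integration over the sphere (the constants are the standard ones; all names here are my own, not from any paper):

    import math
    import random

    # The first four real spherical harmonics (bands 0 and 1), written
    # as functions of a Cartesian direction d = (x, y, z).
    SH_BASES = [
        lambda d: 0.2820948,          # Y_0^0  = (1/2) sqrt(1/pi)
        lambda d: 0.4886025 * d[1],   # Y_1^-1 ~ y
        lambda d: 0.4886025 * d[2],   # Y_1^0  ~ z
        lambda d: 0.4886025 * d[0],   # Y_1^1  ~ x
    ]

    def uniform_sphere():
        """Uniform direction on the sphere (pdf = 1 / 4pi)."""
        z = 2.0 * random.random() - 1.0
        phi = 2.0 * math.pi * random.random()
        r = math.sqrt(max(0.0, 1.0 - z * z))
        return (r * math.cos(phi), r * math.sin(phi), z)

    def project_sh(f, n_samples=100000):
        """Monte Carlo estimate of alpha_k = integral of f(w) S_k(w) dw."""
        coeffs = [0.0] * len(SH_BASES)
        for _ in range(n_samples):
            d = uniform_sphere()
            fd = f(d)
            for k, basis in enumerate(SH_BASES):
                coeffs[k] += fd * basis(d) * 4.0 * math.pi  # divide by pdf
        return [c / n_samples for c in coeffs]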

Now, a function that takes two points on the sphere, e.g. $f(\omega,\omega')$, can be expanded in its first argument only $$ f(\omega,\omega') = \sum_{k=0}^\infty \beta_k(\omega') \, S_k(\omega) $$ or in both of its arguments $$ f(\omega,\omega') = \sum_{k,l=0}^\infty \gamma_{kl} \, S_k(\omega)S_l(\omega') $$


So how is this all useful?

I propose the CMUNSM (crazy mental useless no-sampling method). Let's assume that we have expansions for all the functions, i.e. \begin{align} f(\omega_i,\omega_o,n) &= \sum_{k,l,m=0}^\infty \alpha_{klm} S_k(\omega_i)S_l(\omega_o) S_m(n) \\ L(\omega_i ) &= \sum_{j=0}^\infty \beta_j S_j(\omega_i) \\ (\omega_i\cdot n)^+ &= \sum_{p,q=0}^\infty \gamma_{pq} S_p(\omega_i)S_q(n) \end{align} If we plug this into the integral, we get $$ I = \sum_{k,l,m,j,p,q=0}^\infty \alpha_{klm} \beta_j \gamma_{pq} S_l(\omega_o) S_m(n) S_q(n) \int_{S^2} S_k(\omega_i) S_j(\omega_i) S_p(\omega_i) \, d\omega_i $$

Actually, we no longer need Monte Carlo at all, because we can calculate the values of the integrals $\int_{S^2} S_k(\omega_i) S_j(\omega_i) S_p(\omega_i) \, d\omega_i$ beforehand and then evaluate the sum (or rather approximate it by summing only the first few terms) to get the desired result.
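If one wanted to try this, those triple-product integrals could be tabulated once with the same Monte Carlo machinery; this sketch reuses the hypothetical SH_BASES and uniform_sphere helpers from the earlier sketch:

    def triple_products(n_samples=100000):
        """Tabulate T[k][j][p] = integral of S_k S_j S_p over the sphere
        by uniform Monte Carlo (pdf = 1 / 4pi, hence the 4*pi factor)."""
        n = len(SH_BASES)
        T = [[[0.0] * n for _ in range(n)] for _ in range(n)]
        for _ in range(n_samples):
            d = uniform_sphere()
            v = [b(d) for b in SH_BASES]
            for k in range(n):
                for j in range(n):
                    for p in range(n):
                        T[k][j][p] += v[k] * v[j] * v[p] * 4.0 * math.pi
        return [[[t / n_samples for t in row] for row in plane] for plane in T]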

This is all nice, but we might not know the expansions of the BSDF or the environment map, or the expansions might converge very slowly, in which case we would have to take a lot of terms in the sum to get a reasonably accurate answer.


So the idea is not to expand in all arguments. One method which might be worth investigating would be to ignore the BSDF and expand only the environment map, i.e. $$ L(\omega_i) \approx \sum_{j=0}^K \beta_j S_j(\omega_i) $$ This would lead to the pdf $$ p(\omega_i) \sim \sum_{j=0}^K \beta_j S_j(\omega_i) \,(\omega_i \cdot n)^+ $$

We already know how to do this for $K=0$: it is nothing but method one. My guess is that one of the papers does this for higher $K$.


Further extensions: you can expand different functions in different arguments and do similar manipulations as above. Another option is to expand in a different basis, i.e. use functions other than spherical harmonics.

So this is my take on the topic. I hope you have found it at least a little bit useful, and now I'm off to GoT and bed.



This is not a full answer; I would just like to share the knowledge I obtained by studying two of the papers mentioned in the question: Steerable Importance Sampling and Practical Product Importance Sampling for Direct Illumination.

Steerable Importance Sampling

In this paper they propose a method for sampling the product of the clamped cosine component and environment map lighting:

$$ L_{EM}\left(\omega_{i}\right)\left(\omega_{i}\cdot n\right)^+ $$

They make use of the fact that a piece-wise linear approximation of the product function can be relatively well expressed and partially pre-computed using the first nine spherical harmonic bases. They build this approximation on top of an adaptively triangulated EM and use it as an importance function for sampling.

They pre-compute and store approximation coefficients for each triangle vertex, and also, for each triangle, coefficients for computing the integral of the approximation over that triangle. These coefficients are called vertex and triangle weights. They then make use of the fact that it is possible to easily compute coefficients for an integral over a set of triangles just by summing the individual triangle weights, without incorporating additional spherical harmonic bases. This allows them to build a balanced binary tree over the triangles where each node contains coefficients for computing the approximation integral over the node's sub-tree triangles.

The sampling procedure consists of selecting a triangle and sampling its area:

  • A triangle is chosen by descending down the pre-built binary tree with probability proportional to the sub-integral approximations (a sketch of this descent follows the list). This costs $O\left(\log N_{\triangle}\right)$ on-the-fly computations of sub-integrals, each consisting of one inner product of clamped-cosine spherical harmonic coordinates with the pre-computed coefficients.
  • The chosen triangle surface is then sampled in $O\left(1\right)$ time in a bi-linear fashion by a novel stratified sampling strategy proposed in the paper.
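A sketch of the tree-descent step in isolation. The node fields and the `subintegral` callable are hypothetical stand-ins for the paper's actual data structures; `subintegral` plays the role of the on-the-fly inner product of clamped-cosine SH coordinates with a node's pre-computed weights:

    def descend(node, u, subintegral):
        """Walk down the binary tree, choosing at each level a child with
        probability proportional to its sub-integral approximation;
        returns (leaf_triangle, discrete_pdf).  `node.left`, `node.right`
        and `node.triangle` (None for internal nodes) are assumed fields."""
        pdf = 1.0
        while node.triangle is None:        # still an internal node
            wl = subintegral(node.left)
            wr = subintegral(node.right)
            p_left = wl / (wl + wr)
            if u < p_left:                  # rescale u to keep stratification
                u, node, pdf = u / p_left, node.left, pdf * p_left
            else:
                u = (u - p_left) / (1.0 - p_left)
                node, pdf = node.right, pdf * (1.0 - p_left)
        return node.triangle, pdf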

To me, this looks like a promising technique, but the classical question with papers is how it will behave in real life. On the one hand, there may be pathological cases where the EM is hard to approximate with a triangulated piece-wise linear function, which can lead to an enormous number of triangles and/or to poor sample quality. On the other hand, it can instantly provide a relatively good approximation of the whole EM contribution, which can be useful when sampling multiple light sources.

Practical Product Importance Sampling for Direct Illumination

In this paper they propose a method for sampling the product of environment map lighting and cosine-weighted surface reflectance:

$$ L_{EM}\left(\omega_{i}\right)f_{r}\left(\omega_{i},\omega_{o},n\right)\left(\omega_{i}\cdot n\right)^+ $$

The only pre-processing in this method is the computation of a hierarchical representation of the EM (either mipmap- or wavelet-based). The rest is done on the fly during sampling.

The sampling procedure:

  • Building an on-the-fly BRDF approximation: They first draw several BRDF importance samples and evaluate $f_{r}\left(\omega_{i},\omega_{o},n\right)\left(\omega_{i}\cdot n\right)^+$. From these values they build a quadtree-based piece-wise constant approximation of the BRDF, where each leaf of the tree contains exactly one sample.
  • Computing a product of the BRDF approximation and the EM: Multiplication is done at the BRDF quadtree leaves and averaged values are propagated to parents.
  • Product sampling: uniform samples are fed through the product tree using simple sample warping (see the sketch after this list).
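A sketch of such hierarchical sample warping for a quadtree of non-negative weights; the node layout (`children` as a 2x2 grid of rows by columns, `weight` as the cell's averaged product value) is my own invention for illustration, not the paper's data structure:

    def warp_sample(node, u, v):
        """Warp a uniform (u, v) in [0,1)^2 through the quadtree;
        returns (u', v', pdf) with the pdf w.r.t. the unit square."""
        pdf, lo_u, lo_v, size = 1.0, 0.0, 0.0, 1.0
        while node.children is not None:
            w = [[c.weight for c in row] for row in node.children]
            # First pick a column proportionally to its total weight...
            p_col0 = (w[0][0] + w[1][0]) / (w[0][0] + w[1][0] + w[0][1] + w[1][1])
            if u < p_col0:
                u, col, p_col = u / p_col0, 0, p_col0
            else:
                u, col, p_col = (u - p_col0) / (1.0 - p_col0), 1, 1.0 - p_col0
            # ...then a row within that column.
            p_row0 = w[0][col] / (w[0][col] + w[1][col])
            if v < p_row0:
                v, row, p_row = v / p_row0, 0, p_row0
            else:
                v, row, p_row = (v - p_row0) / (1.0 - p_row0), 1, 1.0 - p_row0
            pdf *= 4.0 * p_col * p_row      # cell area shrinks by 4 per level
            lo_u += col * size * 0.5
            lo_v += row * size * 0.5
            size *= 0.5
            node = node.children[row][col]
        return lo_u + u * size, lo_v + v * size, pdf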

The procedure should generate relatively good samples at the cost of heavy pre-computation – they show that roughly 100–200 BRDF samples are needed for BRDF approximation to achieve the best sampling performance. This may make it suitable for purely direct illumination computations, where one generates many samples per shading point, but is most probably too expensive for global illumination algorithms (e.g. uni- or bi-directional path tracers), where you usually generate only a few samples per shading point.