$$\Phi = \int_{A_{lens}} \int_{\Omega} L(x, \omega_{i}) \cos\theta \, d\omega \, dx$$
We got the power, but what we actually want is the average radiance value in a specific pixel (so that we can form the image, which is what ray tracing is usually after). Is there any way we can convert this power to radiance? Yes :) In Veach97 the equation actually has one extra term, and the result it integrates to is no longer explicitly power. It looks like this:
$$I = \int_{A_{lens}} \int_{\Omega} W_{e}(x, \omega)L(x, \omega_{i}) \cos\theta \, d\omega \, dx$$
The original paper names this the "measurement equation", and that extra term \(W_{e}(x, \omega)\) is named importance. The measurement equation measures some value related to power, and importance converts the power to that value. The following description is copied directly from Veach Chapter 4.5 (p115):
"For real sensors, \(W_{e}\) is called the flux responsivity of the sensor. The responding units are \(S/Watt\) (Watt is the unit for power), where \(S\) is the unit of sensor response. Depending on the sensor, \(S\) could represent a voltage, current, change in photographic film density, deflection of a meter needle, etc."
Ya, that's how scientists write papers...they view the problem in broader terms, trying to solve for voltage, current, change in photographic film density.....all in one equation....but wait! I just want the average radiance in a particular pixel! Can you please just give me the \(W_{e}(x, \omega)\) for that?
It sounds silly now, but this question is probably the one that troubled me longest while reading the paper :| , and I think it's worth writing down as a memo.
Start with the measurement equation above:
\(I = \int_{A_{lens}} \int_{\Omega} W_{e}(x, \omega)L(x, \omega_{i}) \cos\theta \, d\omega \, dx\)
We can transform the solid angle integration over \(\Omega\) into an area integration over the film area with an additional geometry term \(G\) (like what we did in the previous post):
\(I = \int_{A_{lens}} \int_{A_{film}} W_{e}(x_{film}\rightarrow x_{lens})L(x_{film}\rightarrow x_{lens}) G(x_{film}\leftrightarrow x_{lens}) dx_{film} dx_{lens}\)
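For reference, \(G\) here is the usual geometry term from the previous post, where \(\theta_{film}\) and \(\theta_{lens}\) are the angles between the connecting segment and the surface normals at its two endpoints:

$$G(x_{film}\leftrightarrow x_{lens}) = \frac{\cos\theta_{film}\cos\theta_{lens}}{d(x_{film}\leftrightarrow x_{lens})^{2}}$$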
The \(A_{film}\) term is the world-space area of the film. Again, we use the non-real-world CG camera model where the film stands in front of the lens (instead of behind the lens like a real world camera), at the focus distance. That is, we treat the focus plane as the film, like the following image illustrates:
The Monte Carlo estimator for the area space equation looks like:
\(E(I)=\frac{1}{N}\sum_{i= 1}^{N}\frac{W_{e}(x_{film_{i}}\rightarrow x_{lens_{i}})L(x_{film_{i}}\rightarrow x_{lens_{i}}) G(x_{film_{i}}\leftrightarrow x_{lens_{i}})}{pdf_{A}(x_{film_{i}})pdf_{A}(x_{lens_{i}})}\)
We want the \(W_{e}(x_{film_{i}}\rightarrow x_{lens_{i}})\) term to turn this measurement result into the radiance \(L(x_{film_{i}}\rightarrow x_{lens_{i}})\), which gives us the result:
\(W_{e}(x_{film}\rightarrow x_{lens})=\frac{pdf_{A}(x_{film})pdf_{A}(x_{lens})}{G(x_{film}\leftrightarrow x_{lens})} = \frac{d(x_{film}\leftrightarrow x_{lens})^{2}}{A_{film}A_{lens}\cos^{2}\theta}\)
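Spelling out that second equality: we sample both areas uniformly, so \(pdf_{A}(x_{film}) = 1/A_{film}\) and \(pdf_{A}(x_{lens}) = 1/A_{lens}\), and since the virtual film plane is parallel to the lens plane, both cosines in \(G\) are the same \(\cos\theta\):

$$W_{e}(x_{film}\rightarrow x_{lens}) = \frac{\frac{1}{A_{film}}\cdot\frac{1}{A_{lens}}}{\frac{\cos^{2}\theta}{d(x_{film}\leftrightarrow x_{lens})^{2}}} = \frac{d(x_{film}\leftrightarrow x_{lens})^{2}}{A_{film}A_{lens}\cos^{2}\theta}$$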
If we look further into the above equation, \(A_{film}\) is proportional to \(d(x_{film}\leftrightarrow x_{lens})^{2}\) (the film sits at the focus distance, so for a fixed field of view its world-space area scales with the square of that distance), which means distance is not a factor that affects the result, and we can use \(W_{e}(x_{film}\rightarrow x_{lens})\) directly for \(W_{e}(x, \omega)\), where \(\omega\) is the shooting ray direction from \(x_{lens}\) to \(x_{film}\).
and now we can write some code :)
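Here is a minimal sketch of the \(W_{e}\) evaluation; the Vec3 helpers and the evalWe name are just illustrative, not from any particular renderer:

```cpp
#include <cmath>

// Minimal math helpers; names here are illustrative only.
struct Vec3 { float x, y, z; };

static Vec3 sub(const Vec3& a, const Vec3& b) { return {a.x - b.x, a.y - b.y, a.z - b.z}; }
static float dot(const Vec3& a, const Vec3& b) { return a.x * b.x + a.y * b.y + a.z * b.z; }
static float length(const Vec3& v) { return std::sqrt(dot(v, v)); }

// We(xFilm -> xLens) = d^2 / (Afilm * Alens * cos^2(theta)).
// lensNormal is the camera's facing direction; since the virtual film
// plane is parallel to the lens, the same cosTheta shows up at both ends.
float evalWe(const Vec3& xFilm, const Vec3& xLens, const Vec3& lensNormal,
             float filmArea, float lensArea) {
    const Vec3 d = sub(xFilm, xLens);
    const float dist = length(d);
    const float cosTheta = dot(d, lensNormal) / dist;
    return (dist * dist) / (filmArea * lensArea * cosTheta * cosTheta);
}
```

As a sanity check: multiplying this value by \(G/(pdf_{A}(x_{film})\,pdf_{A}(x_{lens}))\) gives exactly 1, which is what lets the estimator above collapse to plain radiance.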
A beautiful symmetry between radiance and importance is shaping up once we have importance in hand:
the image can be rendered by the camera shooting rays generated by uniformly sampling the camera lens and camera film, capturing the incoming radiance through each ray and accumulating it into the corresponding pixel. At the end, divide the accumulated radiance by the samples per pixel. This is the path tracing style (in its simplest form).
the image can also be rendered by the lights shooting rays generated by uniformly sampling the light surface and the hemisphere of outgoing directions, capturing the incoming importance through each ray, and adding the ray's carried weight (this weight value \(\alpha\) will be discussed in a later post) multiplied by the importance to the pixel corresponding to the incoming importance. At the end, divide the accumulated radiance by the total number of samples shot out from the light (see the sketch after this list). This method is usually named light tracing or particle tracing (I use the former naming since particle tracing shortened as pt is a bit confusing when mentioned together with path tracing).
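To make the bookkeeping difference between the two strategies concrete, here is a hedged sketch; the Film struct and method names are made up for illustration, and filtering is ignored (each splat lands in exactly one pixel, matching the assumption discussed right below):

```cpp
#include <vector>

// Toy film buffer contrasting the two normalization conventions above.
struct Film {
    int width, height;
    std::vector<float> pixels;  // accumulated radiance per pixel

    Film(int w, int h) : width(w), height(h), pixels(w * h, 0.0f) {}

    void add(int px, int py, float value) { pixels[py * width + px] += value; }

    // Path tracing: every pixel receives exactly spp samples,
    // so each pixel is divided by samples-per-pixel.
    void normalizePathTracing(int spp) {
        for (float& p : pixels) p /= static_cast<float>(spp);
    }

    // Light tracing: a pixel receives however many light paths happen to
    // connect to it, so every pixel is divided by the TOTAL number of
    // samples shot from the light instead.
    void normalizeLightTracing(int totalLightSamples) {
        for (float& p : pixels) p /= static_cast<float>(totalLightSamples);
    }
};
```

In the light tracing pass, each successful connection to the lens would call something like film.add(px, py, alpha * We), with alpha the path weight mentioned above.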
These two methods should converge to the same expected value as the sample count cranks up (mathematically and theoretically :) we're gonna write the light tracer to prove this with our own eyes!). I've got to say I was mind blown when I learned this symmetry; it's so Zen that it's kind of like that ancient Chinese philosopher's talk ("Now I do not know whether I was then a man dreaming I was a butterfly, or whether I am now a butterfly, dreaming I am a man." - Zhuangzi)
The above method works as is when each sample only contributes to one pixel (for example, sample (320.4, 510.6) only contributes radiance to pixel (320, 511), and sample (128.7, 426.3) only contributes radiance to pixel (129, 426)), but that's not the case when we have a filter kernel that can contribute to multiple pixels per sample...(the simplest example: a 2x2 box filter can contribute radiance to 4 pixels per sample). Does the "divide by \(N\) (total number of samples shot out from the light)" still work for light tracing when the filter jumps in? Nope! We need to do some surgery work on the filter if we want an unbiased result, and we will talk about it in the next post :)
Hi Wei,
I am following your posts on implementing bidirectional ray tracing. It has been helpful for me so far. Thanks.
You have mentioned above that "the equation that integrates the power flowing through the focal lens and landing on the camera lens looks like this:
Φ = ∫Alens ∫Ω L(x, ωi) cosθ dω dx
We got the power, but what we actually want is the average radiance value in a specific pixel. Is there any way we can convert this power to radiance?"
Is the L(x,ωi) not the radiance value for a specific pixel? And if so, why bother with Φ, and converting that into radiance? I am probably missing something.
Hey~ L(x,ωi) "is" the radiance value for a specific pixel; the problem is that we don't necessarily know that value when we are using the light tracing strategy, since we don't know which pixel a photon sample will land on.
Think about a simple setup like this: a camera looking at a sphere light, with no other geometry in the scene. The sphere light has radius 1 and emits diffuse radiance 1. When we do things the ray tracing way we'll probably end up with an image whose center area has pixel value 1 and the rest 0. However, in light tracing we only know that each photon sample carries energy 1/(pdfA * pdfW). During the tracing stage some of the photons land on the lens and some go nowhere; the total energy of the photons landing on the lens divided by the emitted photon number is the estimated power flowing through the focal lens. However, we know we should get an image with the center area 1 and 0 for the rest, so how do we convert the photon energy splatted on the lens into this result? That's the reason we derive that "importance" value through the measurement equation.
Hope this helps to illustrate the problem :)
Thanks Wei. Still the whole measurement equation is not clear to me.
In the Monte Carlo estimator for the measurement equation, We is chosen in a way so that it cancels out the G/(pdfLens * pdfFilmArea) term, and so L remains. If this is the case, do we then need to evaluate We, G, and the two pdfs?
Yes, we do evaluate We and G and the two pdfs. You can look at the equation breakdown in the later post "More than one way to form a path"; these values do get evaluated in the calculation.
Importance (We) is a concept that builds up the symmetry with light emitted radiance (Le) so that we can calculate the "energy carried by a path" from both the light direction and the eye direction (or connect them).
Importance is usually not used in unidirectional path tracing since the effect of We gets canceled by the pdf of choosing a ray shooting out from the camera (pdfShootRay = pdfLens * pdfFilm / G). Unidirectional path tracing doesn't divide by pdfShootRay while casting the ray, and thus doesn't need to apply We. In the bdpt world, though, we need these values so that every path lives in the same path space; otherwise we can't do the multiple importance sampling combination of all the different strategies.
I felt there is a mindset shift working on bdpt compared to traditional pt: in path tracing your goal is to come up with an image composed of pixels, and we want to compute the L value in each pixel. In bdpt your main goal is to "estimate the integral result of the power flowing through the camera lens in the scene", and a bonus reward you get from this estimation is transforming the result into an image.
Hope this helps :)
Oh WoW!!! My mind is also blown by learning this! Still need some time to digest such a concept...
Your explanations really helped me a lot.
The We is pdfLens * pdfA / G. Here G = cosTheta^2 / dist^2, where dist is the length from CameraPoint to FilmPoint. But when we connect the HitPoint to the camera, the distance in the G between the We and the throughput is the length from CameraPoint to HitPoint. If the distance is very large, the final value is much smaller. It is so strange, right?
We is symmetric to radiance Le; distance is not a factor that affects its value (its unit is per projected solid angle per unit area). On the other hand, when you try to integrate the measurement equation, imagine you put a triangle (dA) closer to the lens: it subtends a larger solid angle; farther away, a smaller solid angle. That's the reason for that d^2 in the G term, and it makes sense to me that as a result a farther sample point contributes smaller throughput across the solid angle. Hope this helps
Hi, if I estimate the integral result of the power flowing through the camera lens in the scene, how can I transform the result into an image, and what is the role of We?