The implementation is really straightforward once we have the \(C_{s,t}^{*}\) equation from the previous post. We set up a max path length (the level of bounces: 2 for only direct lighting, 3 for one bounce, 4 for two bounces...etc), then evaluate \(C_{1,1}^{*}\), \(C_{2,1}^{*}\), \(C_{3,1}^{*}\), \(C_{4,1}^{*}\)...... I tried to match the code variables as closely as possible to the original equation, but I did add an extra branch statement to deal with the \(L_{e}^{(0)}\), \(L_{e}^{(1)}\), \(f_{s}(y_{-1} \rightarrow y_{0}\rightarrow y_{1})\) case mentioned in the previous post. The guts and bones of light tracing are wrapped in LightTracer::splatFilmT1.
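Roughly, the t = 1 splat loop has the following shape (a sketch only; LightVertex, Camera::evalWe, Film::addSplat, geometryTerm and friends are placeholder names I'm assuming, not the real code):

```cpp
// Sketch: connect every stored light path vertex y_{s-1} (carrying its
// accumulated throughput alpha_s) to the camera and splat C*_{s,1} to the film.
void splatLightPathToFilm(const Scene& scene, const Camera& camera, Film& film,
                          const std::vector<LightVertex>& lightPath) {
    for (size_t s = 1; s <= lightPath.size(); ++s) {
        const LightVertex& y = lightPath[s - 1];
        Vector3 dirToCam = normalize(camera.position() - y.position);
        Point2 rasterPos;
        float pdfCamera;
        // importance We for this vertex and the raster position it projects to
        Color We = camera.evalWe(y.position, dirToCam, &rasterPos, &pdfCamera);
        if (We.isBlack() || scene.occluded(y.position, camera.position())) {
            continue;  // behind the camera, outside the film, or shadowed
        }
        // the extra branch: s == 1 is the L_e^(0)/L_e^(1) case, where the
        // "bsdf" at the light vertex is the directional emission itself
        Color fs = (s == 1)
            ? y.light->evalL1(y.position, y.normal, dirToCam)
            : y.bsdf->eval(y.dirToPrev, dirToCam);
        float G = geometryTerm(y.position, y.normal,
                               camera.position(), camera.normal());
        film.addSplat(rasterPos, y.throughput * fs * G * We / pdfCamera);
    }
}
```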
At the end of the render, we output the image with the unbiased filter estimator described in the previous post.
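Hedged sketch of that output step under my reading of the estimator (the Film/Pixel members are assumptions): the splat buffer is averaged over the total number of light paths traced, rather than normalized per pixel by the filter weight sum the way regular eye samples are.

```cpp
// Sketch: divide splatted radiance by the total light path count so the
// reconstruction filter stays unbiased for splatted contributions.
void outputLightTracingImage(Film& film, uint64_t totalLightPathCount) {
    for (Pixel& p : film.pixels()) {
        p.color = p.splat / static_cast<float>(totalLightPathCount);
    }
    film.writeImage();
}
```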
We got light tracing implemented, and here comes the scary question: is the result correct? This small toy renderer previously implemented Whitted-style ray tracing, unidirectional path tracing, and ambient occlusion; those all look different for sure, but theoretically, light tracing should converge to the same result as path tracing :| How do I know whether my light tracing is correct? Or, how do I even know whether my path tracing is correct?
This was actually a question a colleague asked me before I started working on bdpt, and the suggestion he made was: use the mighty Mitsuba for comparison. Face to face, fist to fist. As he suggested, I spent some time building a Mitsuba scene that matches 100% a simple scene in my personal scene description format (manual conditioning... which kinda reminds me of some not-that-pleasant tasks I've worked on before as a daily job...), closed my eyes... fingers crossed... hit enter!
the Mitsuba reference
the bloody face to face battle royale
The light tracing... as my memory serves, didn't get that lucky. The debugging process was not much fun, but at least I could isolate the problem down to light tracing instead of the whole renderer, thanks to the existing reference. The bugs I remember include: the wrong filter normalization, the wrong order of \(\omega_{o}\) and \(\omega_{i}\), and the purest evil: I didn't reset tfar after the ray intersected something. That one made the direct lighting look identical while the problem only showed up after the first bounce, and it took me probably 3 days to find it.... :| Enough complaining; after fixing this bug and that bug, light tracing also converged to the same result in the end. Hooray~~
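For the record, the tfar bug boils down to one missing line when the ray struct is reused to extend the path; a sketch (Ray/Scene/Intersection are assumed names, not the actual code):

```cpp
// Sketch of the random walk step. scene.intersect() shrinks ray.tfar to the
// hit distance, so reusing the ray without resetting tfar silently clips every
// segment after the first bounce -- direct lighting still looks correct.
ray.origin    = isect.position + nextDir * kRayEpsilon;
ray.direction = nextDir;
ray.tfar      = std::numeric_limits<float>::infinity();  // the line I forgot
bool hitSomething = scene.intersect(ray, &isect);
```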
light trace vs path trace, battle royale round 2
Personally I feel implementing light tracing marks the halfway point of bidirectional path tracing. We proved the symmetry indeed exists (with human eyes....). The next thing we are going to do is implement all the s, t combination strategies and make sure they all converge; after that... it will be the final boss (which is the most powerful feature of bidirectional path tracing in my opinion): combining all the strategies through multiple importance sampling!
Hi,
If a path vertex's BSDF contains a delta function, how do you connect it to the camera, as the value that the BSDF returns is zero?
For example, connecting a specular reflection surface to the camera would always result in a black surface. Is that correct?
Yes, the value in this case is always 0. This is similar to how specular reflection can't capture the direct lighting contribution from light sampling, since every sample returns 0 on specular BSDF evaluation. See the later post "Combine multiple s,t strategies optimally", where we discuss a bit how this black result leads to a 0 MIS weight in bdpt :)
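In code that usually shows up as an explicit guard; a tiny sketch (isDelta() is an assumed flag name):

```cpp
// Sketch: a delta BSDF evaluates to 0 for any connection direction, so the
// t = 1 camera connection is skipped; other s,t strategies cover that
// transport instead.
if (y.bsdf->isDelta()) {
    continue;  // connecting a specular vertex to the camera always splats black
}
```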
Hi, Wei-Feng, I think the code has a very small problem. The problem occurs in fsE. In the code, fsE = We, and We = 1/(pi * r^2) / (A * G) = camToimgDis^2/(pi * r^2 * A * cosTheta^4). In Veach's thesis, We is split into We(x0) = 1/(pi * r^2) and We(x0->x1) = 1 / (A * G) = camToimgDis^2/(pi * r^2 * A * cosTheta^4), with We(x0->x1) regarded as a virtual BSDF fs (fsE). So the final contribution I think should be (We(x0) / p(x0)) * fsE * G * fs * Throughput = (We(x0) / p(x0)) * We(x0->x1) * G * fs * Throughput. I think this is the special case in BDPT. And we can get the same result by sampling the camera directly: We * cosTheta_x1 * fs * Throughput / PdfW(x1->x0), where PdfW(x1->x0) = 1/(pi * r^2) * ||x0-x1||^2 / cosTheta. Moreover, I think the code can get the correct result if the camera is a pinhole, because (We(x0) / p(x0)) = We(x0) = 1.0. But for the lens camera, I think it may lose the divisor p(x0) = 1/(pi * r^2). Is my understanding correct, or am I missing some implementation details?
PerspectiveCamera::samplePosition will generate a pdfCamera for sampling the lens position, and it should include the 1/(pi * r^2) term in that pdf when the lens radius is not 0 (0 being the pinhole case). As for the separation into spatial/directional components in the original Veach thesis, I intentionally combine them into one term, since I feel implementing a BSDF eval interface for the camera and light source just for BDPT is a bit overkill :) Hope this helps.
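A hedged sketch of what this reply means (assumed signature and member names, not the actual PerspectiveCamera::samplePosition):

```cpp
// Sketch: for a thin lens camera the position pdf is the uniform-area pdf over
// the lens, 1 / (pi * r^2); a pinhole (radius 0) degenerates to a delta
// position with pdf 1.
Point3 PerspectiveCamera::samplePosition(const Point2& lensSample,
                                         float* pdfCamera) const {
    if (mLensRadius == 0.0f) {
        *pdfCamera = 1.0f;  // delta distribution for a pinhole camera
        return mOrigin;
    }
    Point2 p = concentricSampleDisk(lensSample) * mLensRadius;
    *pdfCamera = 1.0f / (M_PI * mLensRadius * mLensRadius);
    return mOrigin + mRight * p.x + mUp * p.y;
}
```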
Sorry, I missed some details in your code. Thanks :)