Sunday, March 27, 2016

Bidirectional Path Tracing 7 - Implement every s,t combination strategy

After implementing the t=1 light tracing strategy and visually verifying that its result converges to the s=1 path tracing result, I moved on to implementing the t=0 strategy: shooting rays from the light and bouncing them around the scene until one slams into the camera lens. This is generally the worst strategy when the lens is small compared to the scene, since a sample has very little chance of hitting the lens. It can't render anything at all if the camera is a pinhole model, since a ray will never intersect an infinitely small camera lens (just like the s=0 strategy never works when the bouncing ray tries to intersect a point light). But we are going to implement every strategy anyway, so I would like to prove that this strategy converges (given a large enough number of samples) to the correct result.

One small new thing to add: I need to implement a disk geometry to simulate the camera lens (you didn't implement disk geometry!? @@ yup... I've lived in the sphere and polygon mesh world for quite a while) so that rays can hit it. It was a small quadric exercise and there it is:
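The test boils down to intersecting the plane the disk lies in and rejecting hits outside the radius. A minimal sketch in the disk's local space (illustrative names, not Goblin's actual Disk class):

```cpp
#include <cmath>

// Illustrative ray type; the real renderer's classes differ.
struct Ray {
    float ox, oy, oz;   // origin
    float dx, dy, dz;   // direction
    float tMin, tMax;   // valid parametric range
};

// Intersect a disk of the given radius lying in the local xy plane
// (centered at the origin, normal +z). The caller is expected to have
// transformed the ray into the disk's local space already.
bool intersectDisk(const Ray& ray, float radius, float* tHit) {
    // A ray (nearly) parallel to the disk plane never hits it.
    if (std::fabs(ray.dz) < 1e-7f) return false;
    // Solve oz + t * dz = 0 for the plane z = 0.
    float t = -ray.oz / ray.dz;
    if (t < ray.tMin || t > ray.tMax) return false;
    // Reject hit points outside the disk radius.
    float px = ray.ox + t * ray.dx;
    float py = ray.oy + t * ray.dy;
    if (px * px + py * py > radius * radius) return false;
    *tHit = t;
    return true;
}
```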

One good thing about the t=0 strategy is that you don't need to implement shadow ray testing, since you never connect the light path with the eye path. The code logic looks simpler and each sample is cheaper to evaluate as a result. Still... it is really, really slow to converge compared to t=1 light tracing and s=1 path tracing... reallllllly sloooooooow!


This is how it looks in a simple DOF scene with 100 samples

Crank it up to 10000 samples per pixel, still noisy....

The Mitsuba path tracing reference on the right side uses around 200 spp

Two strategies work; now I move on to implementing all the different strategies, which is the first 50% of bidirectional path tracing. The rough idea is: we create a path from the light, we create a path from the camera, we connect each prefix end point of the light path with each prefix end point of the camera path to form paths with different strategies, and we evaluate the unweighted contribution \(C_{s,t}^{*}\). For example, to render 2 bounces, we need to integrate paths with maximum length 4. We generate a length 4 path from the light, we generate a length 4 path from the camera, and we can connect the end points of the two paths to form the following strategies:

length 1 : (s=2, t=0), (s=1, t=1), (s=0, t=2) direct visible light
length 2 : (s=3, t=0), (s=2, t=1), (s=1, t=2), (s=0, t=3) direct lighting
length 3 : (s=4, t=0), (s=3, t=1), (s=2, t=2), (s=1, t=3), (s=0, t=4) 1st bounce lighting
length 4 : (s=5, t=0), (s=4, t=1), (s=3, t=2), (s=2, t=3), (s=1, t=4), (s=0, t=5) 2nd bounce lighting


It's a bit of a waste, but I do throw away the combinations that are longer than the specified max path length. I feel this makes the code logic look cleaner and the rendering behavior more consistent (there won't be path length 5 results added into the render when max length 4 is specified in this implementation). Since there can be multiple strategies that generate the same path, simply adding all the strategies to the final render image will result in an overblown image. The first draft of the implementation adds mDebugS, mDebugT flags to specify which s, t combination gets added into the final render. This is actually a good debugging resource, since I can specify a particular light path strategy to render and isolate whether a bug only happens in a specific path. The contribution evaluation for every s, t combination is wrapped in BDPT::evalContribution.
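Roughly, the flow per sample looks like the following minimal sketch (hypothetical types and helper names, not the exact Goblin code; also note the t=0 and t=1 strategies really splat onto other film pixels, which the sketch glosses over):

```cpp
#include <cstddef>
#include <vector>

// Hypothetical minimal types for illustration.
struct Spectrum {
    float r = 0, g = 0, b = 0;
    Spectrum& operator+=(const Spectrum& o) { r += o.r; g += o.g; b += o.b; return *this; }
};
struct PathVertex { /* position, bsdf, cached throughput, pdfs... */ };

// Stub for the per-strategy unweighted contribution C*_{s,t}: connect the
// first s light vertices with the first t eye vertices, test visibility,
// and evaluate the throughput of the joined path.
Spectrum evalUnweightedContribution(const std::vector<PathVertex>& lightPath, std::size_t s,
                                    const std::vector<PathVertex>& eyePath, std::size_t t) {
    return Spectrum(); // placeholder for the sketch
}

// Loop over every (s, t) combination for one sample. mDebugS/mDebugT < 0
// means "no filter"; otherwise only the requested strategy is accumulated.
Spectrum evalContribution(const std::vector<PathVertex>& lightPath,
                          const std::vector<PathVertex>& eyePath,
                          int maxPathLength, int mDebugS = -1, int mDebugT = -1) {
    Spectrum result;
    for (std::size_t s = 0; s <= lightPath.size(); ++s) {
        for (std::size_t t = 0; t <= eyePath.size(); ++t) {
            // s light vertices + t eye vertices form a path with s + t - 1 segments.
            int pathLength = static_cast<int>(s + t) - 1;
            if (pathLength < 1 || pathLength > maxPathLength)
                continue; // throw away combinations beyond the specified max length
            if (mDebugS >= 0 && static_cast<int>(s) != mDebugS) continue;
            if (mDebugT >= 0 && static_cast<int>(t) != mDebugT) continue;
            result += evalUnweightedContribution(lightPath, s, eyePath, t);
        }
    }
    return result;
}
```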


The constructEyePath and constructLightPath functions are really similar, so I just list one of them in the following code segment. In the beginning I was planning to wrap these two into one utility function, but the slight differences stopped me from doing that on the first try; maybe it should be refactored into one in the long run:
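A minimal sketch of the eye-side random walk, assuming hypothetical scene services (traceRay, sampleBSDF) rather than Goblin's real API; constructLightPath would be the same walk started from a light sample instead of a lens sample:

```cpp
#include <vector>

// Hypothetical minimal types for illustration.
struct Vector3 { float x = 0, y = 0, z = 0; };
struct SurfaceHit { Vector3 position, normal; };
struct PathVertex {
    Vector3 position;
    Vector3 normal;
    float throughput = 1.0f; // accumulated f * cos / pdf up to this vertex
};

// Stand-ins for the renderer services the walk needs (stubbed for the sketch).
bool traceRay(const Vector3& o, const Vector3& d, SurfaceHit* hit) { return false; }
bool sampleBSDF(const SurfaceHit& hit, const Vector3& wo,
                Vector3* wi, float* f, float* pdfW) { return false; }
float absDot(const Vector3& a, const Vector3& b) {
    float d = a.x * b.x + a.y * b.y + a.z * b.z;
    return d < 0.0f ? -d : d;
}

// Random walk from the camera: record a vertex at each bounce until the
// maximum vertex count is reached or the walk escapes the scene.
void constructEyePath(const Vector3& lensPoint, const Vector3& lensNormal,
                      const Vector3& primaryDir, int maxVertices,
                      std::vector<PathVertex>& eyePath) {
    eyePath.clear();
    eyePath.push_back({lensPoint, lensNormal, 1.0f}); // vertex 1: point on the lens
    Vector3 origin = lensPoint, dir = primaryDir;
    float throughput = 1.0f; // the primary ray pdf bookkeeping is omitted for brevity
    while (static_cast<int>(eyePath.size()) < maxVertices) {
        SurfaceHit hit;
        if (!traceRay(origin, dir, &hit)) break; // the walk left the scene
        eyePath.push_back({hit.position, hit.normal, throughput});
        Vector3 wi; float f, pdfW;
        if (!sampleBSDF(hit, dir, &wi, &f, &pdfW) || pdfW == 0.0f) break;
        throughput *= f * absDot(hit.normal, wi) / pdfW; // extend the walk
        origin = hit.position;
        dir = wi;
    }
}
```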




And here are some images that display only the 1st bounce lighting, for debugging purposes during development:

s=1, t=3 first bounce for glossy material

s=2, t=2 first bounce for glossy material

s=3, t=1 first bounce for glossy material

The above debug images will converge when we crank up the sample count, but even in this simple setup we can tell that different strategies have different convergence speeds. In the above images, the s=1, t=3 path tracing strategy seems to be better than the other two. How do we determine which strategy is good for which scenario, and which strategy is better for another? This is where multiple importance sampling comes to the rescue :) and I'll try to share my interpretation in the next post.

10 comments:

  1. Hi Wei,

    Thanks for helping with the previous explanations on importance and the measurement equation. I have now implemented light tracing with the same code you provided, and the image looks similar to my path tracing one.

    I am now implementing this post, connecting every s, t; but the image for the s=1, t>1 standard path tracing strategy comes out much brighter compared to the reference path tracing. I am not sure about the way I constructed the camera path and camera pdf. Is there any chance you could share the project?

    Replies
    1. the project lives on GitHub at this moment, but it doesn't have much (or any...) documentation except comments
      https://github.com/bachi95/Goblin
      https://github.com/bachi95/Goblin

  2. If the camera path's second vertex is determined by the given pixel coordinates on the image plane, wouldn't its pdf (pdfForward) equal 1, since it is already predetermined?

    Replies
    1. like I mentioned earlier, BDPT does the measurement equation integration across the whole lens/film, so you can think of a camera ray sample as a uniform sample across the film (unless you are doing adaptive sampling when shooting eye rays, which makes things more complicated...), and the pdf should be something like 1/filmArea if sampled in area space, or (1/filmArea) * (d^2/cos) if you sample in solid angle space
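
      (For the measure conversion being used above: with \(d\) the distance from the lens point to the sampled film point and \(\theta\) the angle against the film normal, \( p_{\omega} = p_{A} \cdot \frac{d^{2}}{\cos\theta} = \frac{1}{A_{film}} \cdot \frac{d^{2}}{\cos\theta} \).)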

  3. One more thing. I realised that the very bright spots on the three images above are generated by the s=0, t>0 strategy, where an eye path is a complete path on its own. But my question is: why is the result (pixel colour) so different compared to other strategies (s=1, t>0)? Is it not true that each estimator (strategy) should eventually converge to the same result?

    Replies
    1. the above 3 images contain no s=0 strategy, since they represent s1t3, s2t2, s3t1. The s=0 strategy does tend to generate fireflies, since a path with low pdf may happen to hit the light and generate a spike value. It should still converge to the same result as the s=1 strategy once you crank up the sample count (potentially a really high number; just like the t=0 strategy above needs an extremely high number of photons to converge)

  4. How would you add a distant directional light source (like the sun) in BDPT? If this is an indoor scene that gets sunlight through windows, how is this handled in BDPT (other than considering the sun as a big disc far away and sampling a position on its area, which probably wouldn't be efficient for points within the room)?

    Would it be valid instead to consider the window polygons as area sources that emit light in one direction (directional sources)? The light samplePosition would be as it is (with pdf 1/A?) and the sampleDirection pdf = 1? How about connecting s, t and the MIS weights? Do they need to be modified, or can the same code be used?

    Could I have your thoughts please?

    Thanks,

    Replies
    1. theoretically you don't need to modify the MIS code for this type of "area directional light". And yes, I think the pdf for the area is 1/A, and the directional pdf is a delta distribution that will be 1 during sampleDirection and 0 for any other evaluation.

      The idea somewhat reminds me of portal lights
      https://support.solidangle.com/display/AFMUG/Light+Portal
      Though in your case you're probably gonna see a black window, since the direction is a delta distribution :)

  5. Thanks for the reply.

    For a directional light like the sun, the source area is usually considered to be oriented towards the (sun) direction, so the cosine is 1. But the portal opening's normal vector can make an angle with the sunlight direction. Is it correct then to say that the cosine in pdfForward = pdfLightDirection / absdot(nLight, dir) accounts for that?

    Sorry for bothering you so much, but I'd appreciate it if you could take time on this question too. In path tracing, for direct lighting when using a portal to sample an environment map outside, at the moment I only take the visible sky through the portal as seen from the shade point; but many samples are occluded by the ground or obstructions outside, so to account for them later I rely on brdf sampling to luckily exit the window and sample the outside environment. To me it seems a waste of samples to sample a portal and only account for light from the sky.

    But then, accounting for those occluded rays means that if the sample ray hits an outside obstruction like the ground, I assume one has to recursively call the method to get its radiance (whereas recursive calls to gather indirect radiance are usually avoided in the direct lighting calculation, is that correct?), and if by any chance the next bounce (from the outside hit point) comes back inside the room, it becomes too complex. I am not sure if this is a correct way of thinking about it.

  6. For that portal directional light, I don't think you should have any cosine term in pdfForward. You can think of the photons as coming out of the void instead of a hard surface; it's just that the entry area is limited to that portal.

    If you sample direct lighting based on the portal and still get blocked by obstacles outside the window, I can only say that it's a really tricky lighting setup that artists will usually try to avoid, and yes, you probably can only rely on bsdf sample bounces to gather the radiance info. Veach97 did mention some potentially interesting ideas like growing the path from the middle (in this case, spawning paths from the window area), but I haven't seen much further research in this area (most light transport algorithms nowadays still generate paths from either the light or the camera).
