Saturday, January 31, 2015

BSSRDF Importance Sampling 6 - params tweaking, test render and future wishlist

This post actually has nothing to do with importance sampling anymore :P To get some more interesting render images, I tried tweaking the reflection albedo to different colors, but how does that get mapped to the corresponding \(\sigma_{a}\) and \({\sigma_{s}}'\) (what the BSSRDF takes as constructor input)?

In Jensen 01 there is a BRDF approximation of the BSSRDF based on the assumption that the incident illumination is uniform; under that assumption the dipole diffusion approximation can be integrated to get a total diffuse reflectance \(R_{d}\) at a surface intersection point:
$$R_{d} = 2\pi\int_{0}^{\infty}rR_{d}(r)dr = \frac{{\alpha}'}{2}(1 + e^{-\frac{4}{3}A\sqrt{3(1-{\alpha}')}}) e^{-\sqrt{3(1-{\alpha}')}}$$
Jensen 02 further proposed using this approximated \(R_{d}\) (the diffuse reflection color) and the diffuse mean free path \(l_{d}\) (the average distance light travels in the medium before scattering; my understanding is that it controls how translucent the medium looks) as input parameters to derive \(\sigma_{a}\) and \({\sigma_{s}}'\): 
1. get the reduced albedo \({\alpha}'\) from the diffuse reflectance equation above (the function is hard to invert analytically, but it's monotonic, so Jensen suggests just inverting it numerically)
2. get the effective transport coefficient: \(\sigma_{tr} \approx 1/l_{d}\)
3. get the reduced extinction coefficient: \({\sigma_t}'=\frac{\sigma_{tr}}{\sqrt{3(1-{\alpha}')}}\)
4. get the reduced scattering coefficient: \({\sigma_{s}}'={\alpha}'{\sigma_{t}}'\)
5. get the absorption coefficient: \(\sigma_{a} = {\sigma_{t}}' - {\sigma_{s}}'\)
and we've got the ingredients to feed into the BSSRDF. The code looks roughly like this (the numerical inversion part I just adapted from pbrt; it actually looks kinda like some Google-style interview question :P):
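A minimal sketch of that conversion (function and variable names are illustrative, not the exact code in my renderer): a bisection search inverts the \(R_{d}\) equation above, and steps 2-5 follow directly. The internal reflection term \(A\) uses the diffuse Fresnel reflectance approximation from Jensen 01.

```cpp
#include <cmath>

// Diffuse Fresnel reflectance approximation from Jensen 01 (same one pbrt uses).
static float Fdr(float eta) {
    return -1.440f / (eta * eta) + 0.710f / eta + 0.668f + 0.0636f * eta;
}

// Rd as a function of the reduced albedo alpha' (the equation above).
static float diffuseReflectance(float alphaPrime, float A) {
    float s = std::sqrt(3.0f * (1.0f - alphaPrime));
    return 0.5f * alphaPrime * (1.0f + std::exp(-4.0f / 3.0f * A * s)) * std::exp(-s);
}

// Step 1: invert Rd(alpha') by bisection. Rd is monotonically increasing in
// alpha', so a fixed number of iterations converges fine.
static float reducedAlbedoFromRd(float Rd, float A) {
    float lo = 0.0f, hi = 1.0f;
    for (int i = 0; i < 32; ++i) {
        float mid = 0.5f * (lo + hi);
        if (diffuseReflectance(mid, A) < Rd) lo = mid; else hi = mid;
    }
    return 0.5f * (lo + hi);
}

// Convert one color channel of (Rd, diffuse mean free path ld) into
// (sigma_a, sigma_s') following steps 1-5 above.
static void coefficientsFromDiffuse(float Rd, float ld, float eta,
                                    float* sigmaA, float* sigmaSPrime) {
    float A = (1.0f + Fdr(eta)) / (1.0f - Fdr(eta));  // internal reflection parameter
    float alphaPrime = reducedAlbedoFromRd(Rd, A);    // step 1
    float sigmaTr = 1.0f / ld;                        // step 2
    float sigmaTPrime = sigmaTr / std::sqrt(3.0f * (1.0f - alphaPrime)); // step 3
    *sigmaSPrime = alphaPrime * sigmaTPrime;          // step 4
    *sigmaA = sigmaTPrime - *sigmaSPrime;             // step 5
}
```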
With the above tweaking method in hand, we get some test render images. They all use the same random \(R_{d}\) color (0.478431, 0.513725, 0.521569) picked from a color picker, with different mean free path parameters (1-8 from top to bottom); you can see it gets more and more translucent, though I feel the last few are kinda way over the top already :P

And a diffuse Lambert render for reference:

And it's postmortem time: what's not there yet and should get addressed later:
1. more dynamic splitting: at the moment each BSSRDF probe ray sample only pairs with one light sample. For a simple lighting setup that may work fine, but it's definitely going to be a pain when the lighting gets more complex. We should make it possible to specify how many light samples get paired with each probe ray sample. Also, the single scattering evaluation shares the same light sample right now; it could probably be split up too, or the user could get the option to skip single scattering (which is also quite expensive to evaluate but a relatively subtle effect for BSSRDF materials like skin). A rough sketch of the splitting loop is below.
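Something like this, with made-up names for illustration (not the current code):

```cpp
// Pair each probe ray sample with several light samples instead of exactly one.
// ProbeSample, sampleDirectAtProbePoint, sampler, etc. are placeholders.
Spectrum Lo(0.0f);
for (const ProbeSample& probe : probeSamples) {
    Spectrum Ld(0.0f);
    for (int i = 0; i < lightSamplesPerProbe; ++i)
        Ld += sampleDirectAtProbePoint(probe, sampler.next2D());
    // probe.throughput already carries Rd(r), Fresnel terms and the probe pdf
    Lo += probe.throughput * (Ld / float(lightSamplesPerProbe));
}
```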

2. look up the surface mesh in the scene hierarchy by material id: we are still shooting the probe ray against the whole scene graph for the intersection test, which is not necessary; we should only test against the surface meshes that share the same material. If the probe ray only intersects a limited portion of the surface, it would definitely speed up the integration process. A rough sketch is below.
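Roughly what I have in mind (the lookup API here is hypothetical):

```cpp
// Intersect the probe ray only against the geometry that owns this BSSRDF
// material instead of traversing the whole scene BVH. lookupByMaterial is a
// hypothetical accessor, not something that exists yet.
const Primitive* surface = scene.lookupByMaterial(bssrdfMaterialId);
Intersection probeIsect;
if (surface != nullptr && surface->intersect(probeRay, &probeIsect)) {
    // evaluate Rd(|xo - xi|) at probeIsect as before
}
```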

3. indirect lighting. Yes, I confess... we only calculate direct lighting for each light sample we cast out for the BSSRDF, so all our GI information only shows up in the specular reflection areas. The naive implementation shouldn't be hard (just like regular path tracing: shoot out a bounce path from each probe intersection to gather indirect lighting), but I'm kinda afraid it's going to be really expensive. Still thinking about a solution at the moment... a rough sketch of the naive version is below.
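The naive version would look something like this (placeholder names, with BRDF/pdf bookkeeping glossed over):

```cpp
// At each probe intersection xi, gather indirect light with a bounce path,
// just like regular path tracing. traceIndirect / cosineSampleHemisphere are
// placeholders; this recursion is exactly the part that gets expensive.
Spectrum Li = sampleDirectAtProbePoint(probe, sampler.next2D()); // what we do now
Vector wi = cosineSampleHemisphere(probe.normal, sampler.next2D());
Ray bounceRay(probe.point, wi);
Li += traceIndirect(bounceRay, depth + 1);   // BRDF/pdf factors omitted in this sketch
Lo += probe.throughput * Li;
```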

4. use a different max probe radius \(R_{max}\) for each spectrum channel. There is only one probe radius for all spectrum channels now, but the diffusion profile clearly shows that the red channel extends much further than the other two, so it doesn't make sense to use the same probe radius for all of them. Kulla 13 mentioned that they also randomly pick the probe radius from a per-channel diffusion profile and combine the results with MIS for their skin rendering; I haven't moved forward on this part yet. A rough sketch is below.
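Roughly what Kulla 13 describes, sketched with made-up helper names:

```cpp
// Pick one spectral channel, sample the probe radius from that channel's
// diffusion profile, then weight with the balance heuristic over all three
// channel pdfs (one-sample MIS). sampleProfileRadius / profilePdf / evalRd are
// placeholders for whatever profile sampling routine ends up being used.
int c = std::min(int(u0 * 3.0f), 2);            // choose R, G or B uniformly
float r = sampleProfileRadius(sigmaTr[c], u1);  // radius from channel c's profile
float pdfSum = 0.0f;
for (int i = 0; i < 3; ++i)
    pdfSum += profilePdf(sigmaTr[i], r) / 3.0f; // 1/3 = channel selection probability
Spectrum contribution = evalRd(r) / pdfSum;     // balance-heuristic weighted estimate
```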

5. the \({\sigma_{s}}'\) and \(\sigma_{a}\) coefficients are measured per millimeter, so I believe the scale of the geometry currently changes the look. It would be good to come up with some normalization mechanism for these coefficients so that the artistic tweaking can be more intuitive. A rough sketch of one possible normalization is below.
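One simple option, assuming the user tells us how many millimeters one scene unit represents (the parameter name is made up):

```cpp
// Measured coefficients are per millimeter; rescale them into per-scene-unit
// before constructing the BSSRDF. mmPerSceneUnit is a hypothetical user parameter.
float mmPerSceneUnit = 10.0f;                          // e.g. the scene is modeled in cm
Spectrum sigmaAScene      = sigmaA_mm * mmPerSceneUnit;
Spectrum sigmaSPrimeScene = sigmaSPrime_mm * mmPerSceneUnit;
```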

There should be more, and I'll append them when they pop up in my mind :)

