The implementation is pretty straightforward once we have the $C^{*}_{s,t}$ equation from the previous post. We set up a max path length (in terms of bounces: 2 for direct lighting only, 3 for one bounce, 4 for two bounces, etc.), then evaluate $C^{*}_{1,1}, C^{*}_{2,1}, C^{*}_{3,1}, C^{*}_{4,1}, \ldots$ I tried to match the code variables as closely as possible to the original equation, but I did add an extra branch statement to deal with the $L^{(0)}_{e}$, $L^{(1)}_{e}$, $f_s(y_{-1} \to y_0 \to y_1)$ case mentioned in the previous post. The guts of light tracing are wrapped in LightTracer::splatFilmT1, and it looks like this:
void LightTracer::splatFilmT1(const ScenePtr& scene, const Sample& sample,
        const RNG& rng, std::vector<PathVertex>& pathVertices,
        ImageTile* tile) const {
    const vector<Light*>& lights = scene->getLights();
    if (lights.size() == 0) {
        return;
    }
    // get the camera point
    const CameraPtr camera = scene->getCamera();
    Vector3 nCamera;
    float pdfCamera;
    Vector3 pCamera = camera->samplePosition(sample, &nCamera, &pdfCamera);
    PathVertex cVertex(Color(1.0f / pdfCamera), pCamera,
        nCamera, camera.get());
    // get the light point
    float pickLightPdf;
    float pickSample = sample.u1D[mPickLightSampleIndexes[0].offset][0];
    int lightIndex = mPowerDistribution->sampleDiscrete(
        pickSample, &pickLightPdf);
    const Light* light = lights[lightIndex];
    LightSample ls(sample, mLightSampleIndexes[0], 0);
    Vector3 nLight;
    float pdfLightArea;
    Vector3 pLight = light->samplePosition(scene, ls, &nLight,
        &pdfLightArea);
    pathVertices[0] = PathVertex(
        Color(1.0f / (pdfLightArea * pickLightPdf)),
        pLight, nLight, light);
    float pdfLightDirection;
    BSDFSample bs(sample, mBSDFSampleIndexes[0], 0);
    Vector3 dir = light->sampleDirection(
        nLight, bs.uDirection[0], bs.uDirection[1], &pdfLightDirection);
    Color throughput = pathVertices[0].throughput *
        absdot(nLight, dir) / pdfLightDirection;
    Ray ray(pLight, dir, 1e-3f);
    // start building a path from light by bouncing around the scene
    int lightVertex = 1;
    while (lightVertex < mMaxPathLength) {
        float epsilon;
        Intersection isect;
        if (!scene->intersect(ray, &epsilon, &isect)) {
            break;
        }
        const Fragment& frag = isect.fragment;
        pathVertices[lightVertex] = PathVertex(throughput, isect);
        BSDFSample bs(sample, mBSDFSampleIndexes[lightVertex], 0);
        lightVertex += 1;
        Vector3 wo = -normalize(ray.d);
        Vector3 wi;
        float pdfW;
        Color f = isect.getMaterial()->sampleBSDF(frag, wo, bs,
            &wi, &pdfW, BSDFAll, NULL, BSDFImportance);
        if (f == Color::Black || pdfW == 0.0f) {
            break;
        }
        throughput *= f * absdot(wi, frag.getNormal()) / pdfW;
        ray = Ray(frag.getPosition(), wi, epsilon);
    }
    // evaluate path contribution Cs,1
    for (int s = 1; s <= lightVertex; ++s) {
        const PathVertex& pv = pathVertices[s - 1];
        const Vector3& pvPos = pv.fragment.getPosition();
        Vector3 filmPixel = camera->worldToScreen(
            pv.fragment.getPosition(), pCamera);
        if (filmPixel == Camera::sInvalidPixel) {
            continue;
        }
        // occlusion test
        Vector3 pv2Cam(pCamera - pvPos);
        float occludeDistance = length(pv2Cam);
        float epsilon = 1e-3f * occludeDistance;
        Ray occludeRay(pvPos, normalize(pv2Cam),
            epsilon, occludeDistance - epsilon);
        if (scene->intersect(occludeRay)) {
            continue;
        }
        Color fsL;
        Vector3 wo = normalize(pCamera - pvPos);
        if (s > 1) {
            Vector3 wi =
                normalize(pathVertices[s - 2].fragment.getPosition() - pvPos);
            Color f = pv.material->bsdf(pv.fragment, wo, wi,
                BSDFAll, BSDFImportance);
            fsL = f * light->evalL(pLight, nLight,
                pathVertices[1].fragment.getPosition());
        } else {
            fsL = pv.light->evalL(pLight, nLight, pCamera);
        }
        float fsE = camera->evalWe(pCamera, pvPos);
        float G = absdot(pv.fragment.getNormal(), wo) * absdot(nCamera, wo) /
            squaredLength(pvPos - pCamera);
        Color pathContribution = fsL * fsE * G *
            pv.throughput * cVertex.throughput;
        tile->addSample(filmPixel.x, filmPixel.y, pathContribution);
    }
}
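To map the code back to the math: pv.throughput and cVertex.throughput carry the subpath prefix weights $\alpha^{L}_{s}$ and $\alpha^{E}_{1}$, while the fsL * fsE * G product forms the connection term, with my combined $W_e$ standing in for the camera-side "bsdf" (this is my reading of the unweighted contribution in Veach's thesis, chapter 10, specialized to $t = 1$; the else branch covers $s = 1$, where the light-side bsdf degenerates to $L_e$):

$$C^{*}_{s,1} = \alpha^{L}_{s}\,c_{s,1}\,\alpha^{E}_{1}, \qquad c_{s,1} = f_s(y_{s-2}\to y_{s-1}\to z_0)\,G(y_{s-1}\leftrightarrow z_0)\,W_e(z_0\to y_{s-1})$$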
void LightTracer::render(const ScenePtr& scene) {
    // ...the actual light tracing work (sampling and splatting per tile)
    // happens above; the meaty code is skipped here for brevity...
    uint64_t totalSampleCount = 0;
    for (size_t i = 0; i < tiles.size(); ++i) {
        totalSampleCount += tiles[i]->getTotalSampleCount();
    }
    film->mergeTiles();
    // normalize the film the unbiased-filter way
    film->scaleImage(film->getFilmArea() / totalSampleCount);
    // since we already did the normalization above,
    // set the normalization flag to false
    film->writeImage(false);
}
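One note on that scale factor, since the wrong filter normalization bit me (more on that below): a light-traced path can splat onto any pixel, so we average over the total number of light paths traced rather than a per-pixel sample count, and the film-area factor accounts for $W_e$ being normalized with respect to the film's area measure. In my shorthand (a reading of what the code above does, not a formal derivation), with $h_j$ the reconstruction filter of pixel $j$ and $u_i$ the film position that path $i$ splats to:

$$I_j \approx \frac{A_{\text{film}}}{N}\sum_{i=1}^{N} h_j(u_i)\,C^{*}_{s,1}(\bar{y}_i)$$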
This was actually a question a colleague asked me before I started working on bdpt, and the suggestion he made was: use the mighty Mitsuba for comparison. Face to face, fist to fist. As he suggested, I spent some time building a Mitsuba scene that matches 100% a simple personal scene written in my own scene description format (manual data conditioning... which reminds me of some not-so-pleasant tasks I've done before as a day job...), then closed my eyes... fingers crossed... and hit enter!
the Mitsuba reference
the bloody face to face battle royale
Light tracing... as far as my memory serves, didn't get that lucky. The debugging process was not much fun, but at least I could isolate the problem down to light tracing rather than the whole renderer, thanks to the existing reference. The bugs I remember include: wrong filter normalization, swapping the order of $\omega_o$ and $\omega_i$, and the purest evil: not resetting tfar after the ray intersects something. That bug made the direct lighting look identical while problems only appeared after the first bounce, and it took me probably three days to find... :| Enough complaining; after fixing this bug and that bug, light tracing converged to the same result in the end. Hooray~~
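For the curious, the tfar bug pattern looks roughly like this (a stripped-down sketch with hypothetical Vector3/Ray stand-ins; the real renderer's interfaces differ):

#include <cmath>

// Hypothetical minimal types for illustration only.
struct Vector3 { float x, y, z; };

struct Ray {
    Vector3 o, d;      // origin, direction
    float mint, maxt;  // valid parametric range of the ray
    Ray(const Vector3& o, const Vector3& d,
        float mint, float maxt = INFINITY)
        : o(o), d(d), mint(mint), maxt(maxt) {}
};

// intersect() clamps ray.maxt to the nearest hit so later primitive tests
// can early-out. That clamp is exactly what makes in-place reuse dangerous:
//
//   Ray ray(pLight, dir, 1e-3f);          // maxt starts at infinity
//   scene->intersect(ray, &eps, &isect);  // now ray.maxt == first hit distance
//   ray.o = hitPos; ray.d = wi;           // BUG: stale maxt silently culls
//                                         // everything beyond the previous hit
//   ray = Ray(hitPos, wi, eps);           // FIX: fresh ray, maxt resets
//
// The very first segment always has a fresh ray, which is why direct lighting
// looked correct and the bug only showed up after the first bounce.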
light trace vs path trace, battle royale round 2
Personally, I feel implementing light tracing marks the halfway point of bidirectional path tracing. We've proven the symmetry does exist (by human eyeball...). The next thing to do is implement all the s,t combination strategies and make sure they each converge; after that... comes the final boss (in my opinion the most powerful feature of bidirectional path tracing): combining all the strategies through multiple importance sampling!
Hi,
If a path vertex's bsdf contains a delta function, how do you connect it to the camera, given that the value the bsdf returns is zero?
For example, connecting a specular reflection surface to the camera would always result in a black surface. Is that correct?
Yes, the value in this case is always 0. This is similar to how specular reflection can't capture a direct lighting contribution from light sampling, since every sample returns 0 on the specular bsdf evaluation. See the later post "Combine multiple s,t strategies optimally", where we discuss a bit how this black result leads to a 0 MIS weight in bdpt :)
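In code this usually shows up as an explicit guard before attempting the connection (sketch only; isDelta is a hypothetical flag, not the exact interface in my renderer):

// Inside the connection loop of splatFilmT1: a delta bsdf evaluates to 0
// for any fixed (wo, wi) pair, so connecting such a vertex to the camera
// always contributes black; skip it and let the other s,t strategies
// (later combined via MIS) pick up those paths.
if (pv.isDelta) {
    continue;
}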
Hi Wei-Feng, I think the code has a very small problem, and it occurs in fsE. In the code, fsE = We, and We = 1/(pi * r^2) / (A * G) = camToImgDist^2 / (pi * r^2 * A * cosTheta^4). In Veach's thesis, We is split into We(x0) = 1/(pi * r^2) and We(x0->x1) = 1/(A * G) = camToImgDist^2 / (pi * r^2 * A * cosTheta^4), with We(x0->x1) regarded as a virtual BSDF fs (fsE). So I think the final contribution should be (We(x0) / p(x0)) * fsE * G * fs * Throughput = (We(x0) / p(x0)) * We(x0->x1) * G * fs * Throughput. I think this is the special case in BPT. And we can get the same result by sampling the camera directly: We * cosTheta_x1 * fs * Throughput / PdfW(x1->x0), where PdfW(x1->x0) = 1/(pi * r^2) * ||x0-x1||^2 / cosTheta. Moreover, I think the code can get the correct result if the camera is a pinhole, because then (We(x0) / p(x0)) = We(x0) = 1.0. But for a lens camera, I think it may lose the divisor p(x0) = 1/(pi * r^2). Is my understanding correct, or am I missing some implementation details?
PerspectiveCamera::samplePosition will generate a pdfCamera for sampling the lens position, and it includes the 1/(pi * r^2) term in that pdf when the lens radius is not 0 (radius 0 being the pinhole case). As for the separation into spatial/directional components in the original Veach thesis, I intentionally combined them into one term, since implementing a bsdf-eval interface for the camera and light source just for BDPT felt like a bit of overkill to me :) Hope this helps.
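To spell out why the divisor isn't lost (my shorthand, reusing the notation from the comment above): the combined fsE and the split formulation agree once pdfCamera is folded in, because

$$\frac{f_{sE}}{p(x_0)} = \frac{W_e^{(0)}(x_0)\,W_e^{(1)}(x_0\to x_1)}{p(x_0)} = \frac{W_e^{(0)}(x_0)}{p(x_0)}\,W_e^{(1)}(x_0\to x_1),$$

and in the code that 1/p(x0) factor is exactly what cVertex.throughput carries.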
Sorry, I missed some details in your code. Thanks :)