One small new thing to add: I need to implement disk geometry to simulate the camera lens (you didn't implement disk geometry!? @@ yup... I've lived in the sphere and polygon mesh world for quite a while) so that rays can hit it. It was a small quadric exercise, and here it is:
bool Disk::intersect(const Ray& ray, float* epsilon,
    Fragment* fragment) const {
    if (fabs(ray.d.z) < 1e-7f) {
        return false;
    }
    // disk lies on plane z = 0 in local space
    // Oz + Dz * t = 0 -> t = -Oz / Dz
    float t = -ray.o.z / ray.d.z;
    Vector3 p = ray(t);
    if (t < ray.mint || t > ray.maxt) {
        return false;
    }
    float squareR = p.x * p.x + p.y * p.y;
    if (squareR > mRadius * mRadius) {
        return false;
    }
    ray.maxt = t;
    *epsilon = 1e-3f * t;
    /*
     * polar coordinate parameterization
     *
     * u = Phi / 2PI
     * v = r / mRadius
     * x = r * cosPhi = mRadius * v * cos(2PI * u)
     * y = r * sinPhi = mRadius * v * sin(2PI * u)
     * z = 0
     *
     * dPx/du = r * -sinPhi * 2PI = -2PI * y
     * dPy/du = r * cosPhi * 2PI = 2PI * x
     * dpdu = [-2PI * y, 2PI * x, 0]
     *
     * dPx/dv = mRadius * cosPhi = mRadius * x / r
     * dPy/dv = mRadius * sinPhi = mRadius * y / r
     * dpdv = [mRadius * x / r, mRadius * y / r, 0]
     */
    float r = sqrt(squareR);
    float phi = atan2(p.y, p.x);
    if (phi < 0.0f) {
        phi += TWO_PI;
    }
    float u = phi * INV_TWOPI;
    float v = r / mRadius;
    Vector3 position(p);
    Vector3 normal(Vector3::UnitZ);
    Vector2 uv(u, v);
    Vector3 dpdu(-TWO_PI * p.y, TWO_PI * p.x, 0.0f);
    Vector3 dpdv(mRadius * p.x / r, mRadius * p.y / r, 0.0f);
    *fragment = Fragment(position, normal, uv, dpdu, dpdv);
    return true;
}
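As a quick sanity check of the dpdu/dpdv derivation in the comment block above, here is a small standalone snippet (plain C++, not part of Goblin) that compares the analytic partials of the disk parameterization against central differences:

// Standalone check of the disk parameterization partial derivatives:
// P(u, v) = (R * v * cos(2PI * u), R * v * sin(2PI * u), 0)
#include <cmath>
#include <cstdio>

int main() {
    const float R = 2.0f, u = 0.3f, v = 0.7f;
    const float TWO_PI = 6.28318530718f, h = 1e-3f;
    float phi = TWO_PI * u, r = R * v;
    float x = r * std::cos(phi), y = r * std::sin(phi);
    // analytic partials from the derivation above
    float dpduX = -TWO_PI * y, dpduY = TWO_PI * x;
    float dpdvX = R * x / r,   dpdvY = R * y / r;
    // central differences in u and v
    float duX = (R * v * std::cos(TWO_PI * (u + h)) -
                 R * v * std::cos(TWO_PI * (u - h))) / (2.0f * h);
    float duY = (R * v * std::sin(TWO_PI * (u + h)) -
                 R * v * std::sin(TWO_PI * (u - h))) / (2.0f * h);
    float dvX = (R * (v + h) - R * (v - h)) * std::cos(phi) / (2.0f * h);
    float dvY = (R * (v + h) - R * (v - h)) * std::sin(phi) / (2.0f * h);
    std::printf("dpdu analytic (%f, %f) numeric (%f, %f)\n",
        dpduX, dpduY, duX, duY);
    std::printf("dpdv analytic (%f, %f) numeric (%f, %f)\n",
        dpdvX, dpdvY, dvX, dvY);
    return 0;
}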
With the disk lens in place, the light tracer below shoots particles from a light and splats their contribution onto the film whenever a particle happens to hit the camera lens (the t = 0 strategy):
void LightTracer::splatFilmT0(const ScenePtr& scene, const Sample& sample,
    const RNG& rng, std::vector<PathVertex>& pathVertices,
    ImageTile* tile) const {
    const vector<Light*>& lights = scene->getLights();
    if (lights.size() == 0) {
        return;
    }
    const CameraPtr camera = scene->getCamera();
    // get the light point
    float pickLightPdf;
    float pickSample = sample.u1D[mPickLightSampleIndexes[0].offset][0];
    int lightIndex = mPowerDistribution->sampleDiscrete(
        pickSample, &pickLightPdf);
    const Light* light = lights[lightIndex];
    LightSample ls(sample, mLightSampleIndexes[0], 0);
    Vector3 nLight;
    float pdfLightArea;
    Vector3 pLight = light->samplePosition(scene, ls, &nLight,
        &pdfLightArea);
    pathVertices[0] = PathVertex(
        Color(1.0f / (pdfLightArea * pickLightPdf)),
        pLight, nLight, light);
    float pdfLightDirection;
    BSDFSample bs(sample, mBSDFSampleIndexes[0], 0);
    Vector3 dir = light->sampleDirection(
        nLight, bs.uDirection[0], bs.uDirection[1], &pdfLightDirection);
    Color throughput = pathVertices[0].throughput *
        absdot(nLight, dir) / pdfLightDirection;
    Ray ray(pLight, dir, 1e-3f);
    int lightVertex = 1;
    while (lightVertex <= mMaxPathLength) {
        float epsilon;
        Intersection isect;
        if (!scene->intersect(ray, &epsilon, &isect)) {
            break;
        }
        const Fragment& frag = isect.fragment;
        pathVertices[lightVertex] = PathVertex(throughput, isect);
        // last light vertex hit the camera lens, evaluate result and done
        if (isect.isCameraLens()) {
            const Vector3& pCamera = frag.getPosition();
            const Vector3& pS_1 =
                pathVertices[lightVertex - 1].fragment.getPosition();
            Vector3 filmPixel = camera->worldToScreen(pS_1, pCamera);
            if (filmPixel != Camera::sInvalidPixel) {
                Color L = light->evalL(pLight, nLight,
                    pathVertices[1].fragment.getPosition());
                float We = camera->evalWe(pCamera, pS_1);
                tile->addSample(filmPixel.x, filmPixel.y,
                    L * We * pathVertices[lightVertex].throughput);
            }
            break;
        }
        // continue random walk the light vertex path until it hits lens
        BSDFSample bs(sample, mBSDFSampleIndexes[lightVertex], 0);
        lightVertex += 1;
        Vector3 wo = -normalize(ray.d);
        Vector3 wi;
        float pdfW;
        Color f = isect.getMaterial()->sampleBSDF(frag, wo, bs,
            &wi, &pdfW, BSDFAll, NULL, BSDFImportance);
        if (f == Color::Black || pdfW == 0.0f) {
            break;
        }
        throughput *= f * absdot(wi, frag.getNormal()) / pdfW;
        ray = Ray(frag.getPosition(), wi, epsilon);
    }
}
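In equation form, the value splatted for each light path above is the following (this just restates the code in Veach's particle-tracing notation; x_0 is the sampled light point and x_k is the lens hit):

\[
\text{splat} = \frac{L_e(x_0 \to x_1)}{p_{pick}\, p_A(x_0)} \cdot
\frac{|\cos\theta_0|}{p_\omega(x_0 \to x_1)} \cdot
\prod_{i=1}^{k-1} \frac{f(x_{i-1} \to x_i \to x_{i+1})\, |\cos\theta_i|}{p_\omega(x_i \to x_{i+1})}
\cdot W_e(x_k \to x_{k-1})
\]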
This is how it looks in a simple DOF scene with 100 samples per pixel.
Cranked up to 10000 samples per pixel, it is still noisy...
The Mitsuba path tracing reference on the right side uses around 200 spp.
Two strategies work; now on to implementing all the different strategies, which is the first 50% of bidirectional path tracing. The rough idea is: we create a path from the light, we create a path from the camera, we connect the end points of the light path and camera path to form paths with different strategies, and we evaluate the unweighted contribution C*_{s,t} (see the formula after the list below). For example, to render 2 bounces we need to integrate paths with maximum length 4. We generate a length-4 path from the light, we generate a length-4 path from the camera, and we can connect the end points of the two paths to form the following strategies:
length 1 : (s=2, t=0), (s=1, t=1), (s=0, t=2) direct visible light
length 2 : (s=3, t=0), (s=2, t=1), (s=1, t=2), (s=0, t=3) direct lighting
length 3 : (s=4, t=0), (s=3, t=1), (s=2, t=2), (s=1, t=3), (s=0, t=4) 1st bounce lighting
length 4 : (s=5, t=0), (s=4, t=1), (s=3, t=2), (s=2, t=3), (s=1, t=4), (s=0, t=5) 2nd bounce lighting
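For reference, the unweighted contribution mentioned above is the usual Veach-style product of light subpath throughput, connection term, and eye subpath throughput (y_i are light subpath vertices, z_i are eye subpath vertices); this is exactly what BDPT::evalUnweightedContribution below computes:

\[
C^{*}_{s,t} = \alpha^{L}_{s}\, c_{s,t}\, \alpha^{E}_{t},\qquad
c_{s,t} =
\begin{cases}
L_e(z_{t-1} \to z_{t-2}) & s = 0\\
W_e(y_{s-1} \to y_{s-2}) & t = 0\\
f(y_{s-2} \to y_{s-1} \to z_{t-1})\, G(y_{s-1} \leftrightarrow z_{t-1})\, f(y_{s-1} \to z_{t-1} \to z_{t-2}) & \text{otherwise}
\end{cases}
\]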
It's a bit of a waste, but I do throw away the combinations that are longer than the specified max path length. I feel this makes the code logic look cleaner and the rendering behavior more consistent (there won't be path-length-5 results added into the render when max length 4 is specified in this implementation). Since there can be multiple strategies to generate a path, simply adding all the strategies to the final render image would result in an overly bright image. The first draft of the implementation adds mDebugS, mDebugT flags to specify which s, t get added into the final rendering. This is actually a good debug resource, since I can specify a particular strategy to render and isolate whether a bug only happens in a specific path configuration. The contribution evaluation for every s, t combination is wrapped in BDPT::evalContribution:
void BDPT::evalContribution(const ScenePtr& scene,
    const Sample& sample, const RNG& rng,
    std::vector<PathVertex>& lightPath,
    std::vector<PathVertex>& eyePath,
    std::vector<BDPTMISNode>& misNodes,
    ImageTile* tile) const {
    // construct path random walk from light
    const vector<Light*>& lights = scene->getLights();
    if (lights.size() == 0) {
        return;
    }
    int lightVertexCount = constructLightPath(scene, sample, rng,
        lights, lightPath);
    // construct path random walk from camera
    const CameraPtr camera = scene->getCamera();
    int eyeVertexCount = constructEyePath(scene, sample, rng,
        camera, eyePath);
    for (int pathLength = 1; pathLength <= mMaxPathLength; ++pathLength) {
        int pathVertexCount = pathLength + 1;
        for (int s = 0; s <= pathVertexCount; ++s) {
            int t = pathVertexCount - s;
            // for debug purpose, only take certain strategy into account
            if (((mDebugS != -1) && (s != mDebugS)) ||
                ((mDebugT != -1) && (t != mDebugT))) {
                continue;
            }
            // we don't have long enough light or eye path for
            // this s/t combination case
            if (s > lightVertexCount || t > eyeVertexCount) {
                continue;
            }
            // can not form a path with 0 or 1 vertex
            if ((s == 0 && t < 2) || (t == 0 && s < 2) || (s + t) < 2) {
                continue;
            }
            if ((t == 0 && !lightPath[s - 1].isCameraLens) ||
                (s == 0 && !eyePath[t - 1].isLight())) {
                continue;
            }
            // need to re-evaluate the contributed pixel coordinate for
            // t = 0 (light particle hit the camera lens) and
            // t = 1 (light path end vertex connect with camera lens) case
            Vector3 filmPixel(sample.imageX, sample.imageY, 0.0f);
            if (t == 0) {
                filmPixel = camera->worldToScreen(
                    lightPath[s - 2].getPosition(),
                    lightPath[s - 1].getPosition());
            } else if (t == 1) {
                filmPixel = camera->worldToScreen(
                    lightPath[s - 1].getPosition(),
                    eyePath[0].getPosition());
            }
            if (filmPixel == Camera::sInvalidPixel) {
                continue;
            }
            // geometry factor between the light path end vertex and
            // eye path end vertex
            float Gconnect = 1.0f;
            Color unweightedContribution = evalUnweightedContribution(
                scene, camera, lightPath, s, eyePath, t, Gconnect);
            // no need to do the MIS evaluation if the path combination
            // has no contribution
            if (unweightedContribution == Color::Black) {
                continue;
            }
            float weight = mDebugNoMIS ?
                1.0f :
                evalMIS(scene, camera, lightPath, s, eyePath, t,
                    Gconnect, misNodes);
            tile->addSample(filmPixel.x, filmPixel.y,
                weight * unweightedContribution);
        }
    }
}

Color BDPT::evalUnweightedContribution(
    const ScenePtr& scene, const CameraPtr& camera,
    const std::vector<PathVertex>& lightPath, int s,
    const std::vector<PathVertex>& eyePath, int t,
    float& G) const {
    // eval unweighted contribution
    Color aL = s == 0 ?
        Color::White : lightPath[s - 1].throughput;
    Color aE = t == 0 ?
        Color::White : eyePath[t - 1].throughput;
    Color cst;
    // eval connection factor (light path end point to eye path end point)
    if (s == 0) {
        cst = eyePath[t - 1].getLight()->evalL(
            eyePath[t - 1].getPosition(),
            eyePath[t - 1].getNormal(),
            eyePath[t - 2].getPosition());
    } else if (t == 0) {
        cst = Color(camera->evalWe(lightPath[s - 1].getPosition(),
            lightPath[s - 2].getPosition()));
    } else {
        const PathVertex& sEndV = lightPath[s - 1];
        const PathVertex& tEndV = eyePath[t - 1];
        Vector3 connectVector = tEndV.getPosition() - sEndV.getPosition();
        Vector3 connectDir = normalize(connectVector);
        Color fsL;
        if (s == 1) {
            fsL = sEndV.light->evalL(sEndV.getPosition(),
                sEndV.getNormal(), tEndV.getPosition());
        } else {
            const Vector3 wi = connectDir;
            const Vector3 wo = normalize(lightPath[s - 2].getPosition() -
                sEndV.getPosition());
            fsL = lightPath[s - 1].material->bsdf(sEndV.fragment, wo, wi,
                BSDFAll, BSDFImportance);
        }
        if (fsL == Color::Black) {
            return Color::Black;
        }
        Color fsE;
        if (t == 1) {
            fsE = Color(camera->evalWe(tEndV.getPosition(),
                sEndV.getPosition()));
        } else {
            const Vector3 wi = -connectDir;
            const Vector3 wo = normalize(
                eyePath[t - 2].getPosition() -
                tEndV.getPosition());
            fsE = tEndV.material->bsdf(tEndV.fragment, wo, wi,
                BSDFAll, BSDFRadiance);
        }
        if (fsE == Color::Black) {
            return Color::Black;
        }
        G = evalG(sEndV, tEndV);
        if (G == 0.0f) {
            return Color::Black;
        }
        // occlude test
        float occludeDistance = length(connectVector);
        float epsilon = 1e-3f * occludeDistance;
        Ray occludeRay(sEndV.getPosition(), connectDir,
            epsilon, occludeDistance - epsilon);
        if (scene->intersect(occludeRay)) {
            return Color::Black;
        }
        cst = fsL * G * fsE;
    }
    return aL * cst * aE;
}
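evalG isn't listed in this post; it is just the standard geometry factor between two path vertices. A minimal sketch of what such a helper computes (accessor names follow the snippets above, so treat it as an illustration rather than the exact Goblin implementation):

// geometry factor G(a <-> b) = |cos(theta_a)| * |cos(theta_b)| / distance^2
// illustration only; the actual Goblin evalG may differ in details
float evalG(const PathVertex& a, const PathVertex& b) {
    Vector3 d = b.getPosition() - a.getPosition();
    float distance = length(d);
    if (distance == 0.0f) {
        return 0.0f;
    }
    Vector3 w = d / distance;
    return absdot(w, a.getNormal()) * absdot(w, b.getNormal()) /
        (distance * distance);
}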
constructEyePath and constructLightPath are so similar that I only list one of them in the following code segment. In the beginning I was planning to wrap the two into one utility function, but the slight differences stopped me from doing that on the first try; maybe it should be refactored into one in the long run:
int BDPT::constructLightPath(const ScenePtr& scene, const Sample& sample,
    const RNG& rng, const std::vector<Light*>& lights,
    std::vector<PathVertex>& lightPath) const {
    // get the light point
    float pickLightPdf;
    float pickSample = sample.u1D[mPickLightSampleIndexes[0].offset][0];
    int lightIndex = mPowerDistribution->sampleDiscrete(
        pickSample, &pickLightPdf);
    const Light* light = lights[lightIndex];
    LightSample ls(sample, mLightSampleIndexes[0], 0);
    Vector3 nLight;
    float pdfLightArea;
    Vector3 pLight = light->samplePosition(scene, ls, &nLight,
        &pdfLightArea);
    float pdfBackward = pdfLightArea * pickLightPdf;
    float pdfLightDirection;
    BSDFSample bs(sample, mLightPathSampleIndexes[0], 0);
    Vector3 dir = light->sampleDirection(
        nLight, bs.uDirection[0], bs.uDirection[1], &pdfLightDirection);
    float pdfForward = pdfLightDirection / absdot(nLight, dir);
    lightPath[0] = PathVertex(Color(1.0f / pdfBackward),
        pLight, nLight, light, pdfForward, pdfBackward);
    Color throughput = lightPath[0].throughput / pdfForward;
    Ray ray(pLight, dir, 1e-3f);
    int lightVertexCount = 1;
    while (lightVertexCount <= mMaxPathLength) {
        float epsilon;
        Intersection isect;
        if (!scene->intersect(ray, &epsilon, &isect)) {
            break;
        }
        const Fragment& frag = isect.fragment;
        // for some light types (delta light like point/spot light),
        // we can't evaluate radiance before we know the intersection of
        // light particle shoot out from light
        if (lightVertexCount == 1) {
            throughput *= light->evalL(pLight, nLight, frag.getPosition());
        }
        BSDFSample bs(sample, mLightPathSampleIndexes[lightVertexCount], 0);
        Vector3 wo = -normalize(ray.d);
        Vector3 wi;
        float pdfW;
        BSDFType sampledType;
        Color f = isect.getMaterial()->sampleBSDF(frag, wo, bs,
            &wi, &pdfW, BSDFAll, &sampledType, BSDFImportance);
        bool isSpecular = (sampledType & BSDFSpecular) == BSDFSpecular;
        pdfForward = pdfW / absdot(wi, frag.getNormal());
        if (isSpecular) {
            pdfBackward = pdfForward;
        } else {
            pdfBackward = isect.getMaterial()->pdf(frag, wi, wo) /
                absdot(wo, frag.getNormal());
        }
        lightPath[lightVertexCount] = PathVertex(throughput, isect,
            pdfForward, pdfBackward, isSpecular);
        lightPath[lightVertexCount].G = evalG(lightPath[lightVertexCount],
            lightPath[lightVertexCount - 1]);
        lightVertexCount += 1;
        if (f == Color::Black || pdfW == 0.0f) {
            break;
        }
        throughput *= f / pdfForward;
        ray = Ray(frag.getPosition(), wi, epsilon);
    }
    return lightVertexCount;
}
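A note on the pdfForward/pdfBackward bookkeeping above (my reading of the code, which matters later for MIS): dividing the solid-angle pdf by the cosine at the current vertex yields a projected-solid-angle pdf, and multiplying by the stored per-vertex G turns it into an area-measure pdf at the next vertex:

\[
p_A(x_{i+1}) = p_\omega(x_i \to x_{i+1})\, \frac{|\cos\theta_{i+1}|}{d^2}
             = \frac{p_\omega(x_i \to x_{i+1})}{|\cos\theta_i|} \cdot \frac{|\cos\theta_i|\,|\cos\theta_{i+1}|}{d^2}
             = \text{pdfForward} \cdot G
\]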
and here are some images that only display the 1st-bounce lighting, for debugging purposes during development:
s=1, t=3 first bounce for glossy material
s=2, t=2 first bounce for glossy material
s=3, t=1 first bounce for glossy material
The above debug images will converge when we crank up the sample count, but even in this simple setup we can tell that different strategies have different convergence speeds. In the above images, the s=1, t=3 path tracing strategy seems to be the better strategy compared to the other two. How do we determine which strategy is good for which scenario, and which strategy is better for another? This is where multiple importance sampling comes to the rescue :) and I'll try to talk about my interpretation of it in the next post.
Hi Wei,
Thanks for helping with the previous explanations on importance and the measurement equation. I have now implemented the light tracing with the same code as you provided. The image looks similar to the path tracing one I had.
I am now implementing this post, connecting every s, t; but the image for the s=1, t>1 standard path tracing strategy comes out much brighter compared to the reference path tracing. I am not sure about the way I constructed the camera path and the camera pdf.
Is there any chance you could share the project?
The project lives on GitHub at this moment, but it doesn't have much (or any...) documentation except comments:
https://github.com/bachi95/Goblin
If the camera path's second vertex is determined by the given pixel coordinates on the image plane, wouldn't its pdf (pdfForward) be equal to 1, as it is already predetermined?
Like I mentioned earlier, BDPT does the measurement equation integration across the whole lens/film, so you can think of a camera ray sample as a uniform sample across the film (unless you are doing adaptive sampling when shooting eye rays, which makes things more complicated...), and the pdf should be something like 1/filmArea if sampled in area space, or (1/filmArea) * (d^2/cos) if you sample in solid angle space.
One more thing. I realised that the very bright spots on the three images above are generated by the s=0, t>0 strategy, where an eye path is a complete path on its own. But my question is why the result (pixel colour) is so different compared to the other strategies (s=1, t>0). Is it not true that each estimator (strategy) should eventually converge to the same result?
The above 3 images contain no s=0 strategy since they represent s=1,t=3, s=2,t=2, s=3,t=1. The s=0 strategy does tend to generate fireflies, since a path with a low pdf may happen to hit the light and generate a spike value. It should still converge to the same result as the s=1 strategy once you crank up the sample count (potentially a really high number; similarly, the t=0 strategy above needs an extremely high number of photons to converge).
How would you add a distant directional light source (like the sun) in BDPT?
If this is an indoor scene that gets sunlight through windows, how is this handled in BDPT (other than considering the sun as a big disc far away and sampling a position on its area, which probably wouldn't be efficient for points within the room)?
Would it be valid instead to consider the window polylines as area sources that emit light in one direction (a directional source)? The light samplePosition would be as it is (with pdf 1/A?) and the sampleDirection pdf = 1? How about connecting s, t and the MIS weights? Do they need to be modified or can the same code be used?
Could I have your thoughts please?
Thanks,
Theoretically you don't need to modify the MIS code for this type of "area directional light". And yes, I think the pdf for the area is 1/A and the directional pdf is a delta distribution that will be 1 during sampleDirection and 0 for any other evaluation.
The idea somewhat reminds me of portal lights:
https://support.solidangle.com/display/AFMUG/Light+Portal
Though in your case you will probably see a black window since the direction is a delta distribution :)
Thanks for the reply.
For a directional light like the sun, the source area is usually considered to be oriented towards the (sun) direction, so the cosine is 1. But the portal opening's normal vector can make an angle with the sunlight direction. Is it correct then to say that the cosine in pdfForward = pdfLightDirection / absdot(nLight, dir) accounts for that?
Sorry for bothering you so much, but I'd appreciate it if you could take time on this question too. In path tracing, for direct lighting, when using a portal to sample the environment map outside, at the moment I only take the visible sky seen through the portal from the shading point into the direct lighting calculation, but many samples are occluded by the ground or obstructions outside; therefore, to account for them later, I rely on BRDF sampling to luckily exit the window and sample the outside environment. But to me it seems like a waste of samples to sample a portal and only account for light from the sky.
But then, accounting for those occluded rays means that if the sample ray hits an outside obstruction like the ground, I assume one has to recursively call the method to get its radiance (whereas usually recursive calls, i.e. accounting for indirect radiance, are avoided in the direct lighting calculation, is that correct?), and if by any chance the next bounce (from the outside hit point) comes back inside the room it becomes too complex. I am not sure if this is a correct way of thinking about it.
For that portal directional light, I don't think you should have any cosine term for pdfForward. You can think of the photons as coming out of a void instead of a hard surface; it's just that the entry area is limited to that portal.
If you sample direct lighting based on the portal and still get blocked by an obstacle outside the window, I can only say that it's a really tricky lighting setup that artists usually try to avoid, and yes, you probably can only rely on BSDF-sample bounces to gather the radiance info. Veach97 did mention some potentially interesting ideas like growing the path from the middle (in this case, spawning paths from the window area), but I haven't seen much further research in this area (most light transport algorithms nowadays still generate paths from either the light or the camera).