Archive for the 'General' Category
Progressive Photon Mapping
In 2011 I added the progressive version of photon mapping from “Progressive Photon Mapping: A Probabilistic Approach” by Knaus and Zwicker.
The technique is simple and easy to implement but also amazing. Great bang-for-the-buck ratio! For anyone starting off with photon mapping: once you have direct visualization of the photon map, I would skip irradiance caching and final gathering and just do this. It's a lot easier, looks better, and is still pretty fast.
Here are some test renders. The Cornell Box took about 9 hours for 50,000 passes with 100,000 photons per pass. The Cubo prism took about 18 hours for 50,000 passes using 200,000 photons per pass. Both were done on an Intel i7 920 CPU at 2.67GHz (4 physical cores, 8 logical).
The Cubo prism image looks a lot better than my earlier rendering of it. The caustic is cleaner and the bright reflections of the light are visible.
Dispersion, Spectral Filtering, Subpixel Sampling
Bunny model available here. Ajax bust available here (thanks jotero!). Bunny scene was under 3 hours, Ajax scene was 8 hours.
The Cubo prism model is available here. For comparison, here is a proper rendering by luxrender. The prism shows off a new feature, spectral rendering. I used 20,000,000 photons which took about 1h20m to emit, and another 10 hours for rendering. (Without diffuse final gather it renders in about 2.5 hours.)
Dispersion in the glass is computed using Cauchy’s equation with dense flint glass (SF10) coefficients. Spectral computations are done in a way similar to this project, but with support for arbitrary wavelengths. I sample a wavelength from the visible spectrum and compute a “wavelength filter”, which is just an RGB color. I convert the CIE XYZ response for that wavelength to RGB and normalize such that a uniform sampling of the entire spectrum produces white (1, 1, 1) instead of (0.387463, 0.258293, 0.240652). Then I scale the emitted photon color and primary rays’ radiance by the normalized color. I sample the wavelengths with importance sampling according to the average CIE XYZ response.
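Roughly, the filter computation looks like the sketch below. This is illustrative, not the actual code from my renderer; cieXYZ() stands in for a lookup into the CIE 1931 color matching tables, and the conversion uses the standard linear sRGB matrix.

```cpp
// Minimal sketch of the "wavelength filter" idea (not the renderer's code).
struct Vec { double x, y, z; Vec(double a=0,double b=0,double c=0):x(a),y(b),z(c){} };

Vec cieXYZ(double nm);  // assumed: CIE 1931 color matching functions (tabulated)

// Standard linear sRGB matrix applied to an XYZ triple.
Vec xyzToRgb(const Vec &c) {
    return Vec( 3.2406*c.x - 1.5372*c.y - 0.4986*c.z,
               -0.9689*c.x + 1.8758*c.y + 0.0415*c.z,
                0.0557*c.x - 0.2040*c.y + 1.0570*c.z);
}

// RGB of a uniformly sampled spectrum before normalization (value quoted above).
const Vec kUniformAvg(0.387463, 0.258293, 0.240652);

// Uniformly pick a visible wavelength and return its normalized RGB filter, so
// that the average filter over many samples converges to white (1, 1, 1).
Vec sampleWavelengthFilter(double u) {          // u: uniform random in [0,1)
    double nm = 380.0 + u*(720.0 - 380.0);
    Vec rgb = xyzToRgb(cieXYZ(nm));
    return Vec(rgb.x/kUniformAvg.x, rgb.y/kUniformAvg.y, rgb.z/kUniformAvg.z);
    // Importance sampling by the average XYZ response would divide by the
    // chosen pdf here instead of assuming a uniform one.
}
```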
With this spectral filtering I have to take more samples to eliminate the chromatic noise, but the result is consistent with the non-spectral result: without any wavelength-dependent reflection such as dispersion, the spectral and non-spectral renders match. If there is wavelength-dependent reflection, then you get results like the prism image.
Finally, the cubo prism image and the last batch of sphere BRDF tests use subpixel sampling (similar to what’s done in smallpt). I divide each pixel up into 4×4 subpixels. Basically I scale the image resolution by 4, render, do all the tone mapping, and then shrink the image back down to the desired resolution using averaging. This produces much sharper results at the cost of increased memory usage. This is partially based on what Sunflow does, whose source I used as a guide, but without the adaptive anti-aliasing.
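The shrinking step is just a 4×4 box filter over the tone-mapped high-resolution image; something like this sketch (illustrative, not the actual code):

```cpp
// Box-filter a (4*w) x (4*h) image 'hi' down to the target resolution w x h.
#include <vector>

struct Vec { double x, y, z; };

std::vector<Vec> downsample4x4(const std::vector<Vec> &hi, int w, int h) {
    std::vector<Vec> lo(w * h);
    for (int y = 0; y < h; ++y)
      for (int x = 0; x < w; ++x) {
        Vec sum{0, 0, 0};
        for (int sy = 0; sy < 4; ++sy)
          for (int sx = 0; sx < 4; ++sx) {
            const Vec &p = hi[(4*y + sy) * (4*w) + (4*x + sx)];
            sum.x += p.x; sum.y += p.y; sum.z += p.z;
          }
        lo[y*w + x] = Vec{ sum.x/16, sum.y/16, sum.z/16 };   // average of 16 subpixels
      }
    return lo;
}
```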
Multiple Importance Sampling
Fixed lots of bugs in my Ashikhmin & Shirley and Schlick BRDFs. Schlick anisotropic is still not 100% correct. Added explicit BRDF evaluation for both BRDFs, allowing for light sampling in addition to BRDF importance sampling. This helped a great deal with small light sources like the sun. To incorporate the benefits of both sampling strategies (light and BRDF), I also added Multiple Importance Sampling (MIS) using the one-sample model and balance heuristic as described in section 9.2.4 of Eric Veach's dissertation. This solves the glossy highlights problem (section 9.3.1).
The MIS itself is actually incredibly simple to do once you can compute the BRDF for an arbitrary direction. Choose weights for the two sampling strategies (light sampling and BRDF sampling); I use 0.5 and 0.5. Then select one of the sampling techniques according to its weight, trace the ray, and compute the BRDF. Then divide by the PDF as done for regular importance sampling, but now the PDF is weight1*pdf1 + weight2*pdf2 (regardless of which technique was chosen). Done. Well… actually, it took some refactoring and debugging in lots of places to add MIS everywhere.
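In (hypothetical) code the estimator looks something like the sketch below; the sample/pdf/eval functions are stand-ins for whatever your BRDF and light classes provide, not my renderer's actual interfaces.

```cpp
// One-sample MIS with the balance heuristic and equal technique weights.
struct Vec { double x, y, z; };

Vec    sampleBrdfDir();              // importance sample a direction from the BRDF
Vec    sampleLightDir();             // importance sample a direction toward the light
double brdfPdf(const Vec &d);        // pdf the BRDF strategy assigns to d
double lightPdf(const Vec &d);       // pdf the light strategy assigns to d
Vec    evalBrdf(const Vec &d);       // BRDF value for direction d
Vec    traceRadiance(const Vec &d);  // incoming radiance along d
double rand01();                     // uniform random number in [0,1)

Vec misOneSample() {
    const double wBrdf = 0.5, wLight = 0.5;   // technique selection weights
    Vec d = (rand01() < wBrdf) ? sampleBrdfDir() : sampleLightDir();
    // The combined pdf is the same no matter which technique produced d.
    double pdf = wBrdf * brdfPdf(d) + wLight * lightPdf(d);
    Vec f = evalBrdf(d), Li = traceRadiance(d);
    // Cosine term omitted for brevity; it multiplies f as usual.
    return Vec{ f.x*Li.x/pdf, f.y*Li.y/pdf, f.z*Li.z/pdf };
}
```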
I was hung up for a long time because I tried to be cheap and only compute the simplified BRDF/PDF ratio when importance sampling the BRDF. Pointless. Just compute both the BRDF and PDF, since you’ll need both for MIS.
Also added direct lighting computation for emissive spheres.
Times are about 42 minutes with 1024 samples per pixel (spp), on a Q6600 quad core cpu.
Distance Field Experiments
Here are the results from some distance field intersection experiments.
Inspired by slisesix by Inigo Quilez. A better explanation is on my reading list.
I also posted these on ompf. A single sample at 1024×768 takes about 6 seconds on a 2.4GHz Q6600 quad core CPU. Intersection works by stepping along the ray by the field distance (with a minimum step) until the isosurface is crossed, and then the intersection is refined with the bisection method.
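A sketch of the intersection loop (not the original code; distanceField() is a stand-in for the actual field function):

```cpp
#include <cmath>

struct Vec { double x, y, z; };
Vec add(const Vec &a, const Vec &b){ return Vec{a.x+b.x, a.y+b.y, a.z+b.z}; }
Vec mul(const Vec &a, double s)    { return Vec{a.x*s, a.y*s, a.z*s}; }

double distanceField(const Vec &p);   // assumed: signed distance to the isosurface

// Returns the parametric hit distance along o + t*d, or -1 on a miss.
double intersect(const Vec &o, const Vec &d, double tMax) {
    const double minStep = 1e-3, eps = 1e-5;
    double tPrev = 0, t = 0;
    while (t < tMax) {
        double f = distanceField(add(o, mul(d, t)));
        if (f < 0) {                                  // crossed the isosurface
            double lo = tPrev, hi = t;                // bracket the crossing
            while (hi - lo > eps) {                   // bisection refinement
                double mid = 0.5*(lo + hi);
                if (distanceField(add(o, mul(d, mid))) < 0) hi = mid; else lo = mid;
            }
            return 0.5*(lo + hi);
        }
        tPrev = t;
        t += std::fmax(f, minStep);                   // never step below minStep
    }
    return -1;                                        // miss
}
```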
Glossy reflections
Added support for Ashikhmin & Shirley's anisotropic Phong BRDF and Schlick's BRDF (the '94 paper) in pure path tracing (implicit light paths only). My Schlick BRDF is probably broken. Adding these entailed abstracting a BRDF interface class, implementing importance sampling, and computing the BRDF/PDF ratio. The daylight image is after sunset for a good reason: the sun would only contribute if a ray found it by chance… talk about noise.
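The interface ended up being small; roughly this shape (an illustrative sketch, not the real class):

```cpp
struct Vec { double x, y, z; };

class BRDF {
public:
    virtual ~BRDF() {}
    // Given the incoming direction wi, surface normal n and two uniform random
    // numbers, importance-sample an outgoing direction wo and return the
    // BRDF/pdf ratio used to weight the path throughput.
    virtual Vec sample(const Vec &wi, const Vec &n,
                       double u1, double u2, Vec &wo) const = 0;
};
```

The MIS post above is where this later grows separate eval() and pdf() methods for arbitrary directions.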
Front row is the Ashikhmin & Shirley anisotropic Phong BRDF. Second row is Schlick's anisotropic BRDF. Third row is my preexisting set of materials. Back row is the A&S BRDF with measured reflectance values (in RGB; see the previous post). The floor is the A&S BRDF with my facsimile of their texture.
Left image is Preetham daylight model. Right is Debevec’s Uffizi HDR environment probe.
2048 spp, 24-30 minutes each on a Q6600 2.4GHz Core 2 Quad.
This work is from May 2008.
Measured reflectance
Inspired by this beauty, I set out to render with “measured reflectance” values.
FreeSnell comes with spectral data for a variety of materials in a series of .nk files, including the index of refraction (n), the extinction coefficient (k, aka absorption), and most importantly in this case, the reflectance. I converted the spectral reflectance into RGB by multiplying by a 6500 K blackbody emission spectrum, converting to XYZ, then to RGB, and normalizing by dividing by the red channel of a 6500 K spectrum converted to RGB. RGB values and code are below. The Spectrum class is in the pane source code. This is work from May 2008.
| Metal    | R        | G        | B        |
|----------|----------|----------|----------|
| Chromium | 0.637800 | 0.628488 | 0.676412 |
| Copper   | 0.934587 | 0.606404 | 0.509491 |
| Iron     | 0.561852 | 0.533074 | 0.557609 |
| Lithium  | 0.911137 | 0.839446 | 0.767515 |
| Nickel   | 0.654712 | 0.575060 | 0.499619 |
| Silicon  | 0.348758 | 0.347516 | 0.415774 |
| Tungsten | 0.513084 | 0.468396 | 0.448025 |
```cpp
void nkToRefl(const char *file){
  ifstream inFile;
  inFile.open(file);
  if (!inFile){ cerr << "Error opening " << file << "\n"; return; }
  string str;
  getline(inFile, str);
  getline(inFile, str);
  fprintf(stderr, "%8s %30s reflectance = ", file, str.substr(18).c_str());
  double eV, n, k, r;
  double h  = 6.626068e-34;     // Planck constant, m^2 kg/s
  double c  = 299792458;        // speed of light, m/s
  double ec = 1.60217646e-19;   // elementary charge, C
  map<double, double> reflsMap;           // wavelength (nm) -> reflectance
  while (inFile >> eV >> n >> k >> r){
    double nrg = eV * ec;       // V = J/C => J = VC
    double wavelen = h*c/nrg;   // nrg = h*c/l => l = h*c/nrg
    double nm = wavelen*1e9;
    reflsMap[nm] = r;
  }
  inFile.close();
  vector<double> wavelens, refls;
  for (map<double, double>::iterator i=reflsMap.begin(); i!=reflsMap.end(); i++){
    wavelens.push_back(i->first);
    refls.push_back(i->second);
  }
  Spectrum lightSpec(6500);                                // start out with light
  Spectrum reflSpec(&wavelens[0], &refls[0], wavelens.size());
  lightSpec.scaleSpec(1/4.73223e+06);                      // normalize by red channel
  reflSpec.scaleSpec(lightSpec);                           // mult light by refl
  SbColor rgb = reflSpec.getRGB();
  fprintf(stderr, "(%5f, %5f, %5f)\n", rgb[0], rgb[1], rgb[2]);
}
```
Heightfield and media rendering
Another lazy update. This work is from the middle of 2008.
First image shows a failed attempt at rendering a displacement mapped triangle mesh. I tried to implement a variation on Interactive Smooth and Curved Shell Mapping (Jeschke et al.) (shown) but my variation was a failure. I also looked at Direct Ray Tracing of Displacement Mapped Triangles (Smits et al.) but became impatient.
Eventually I gave up and just went with brute-force ray marching on a single plane (second image). Marching proceeds using a hard-coded Lipschitz constant which adjusts the step size based on a multiple of the current vertical distance. Once a crossing is found, the intersection is refined using a bisection search to the desired accuracy. This is very slow as it evaluates the displacement shader (the height function) like crazy. The height field in the second image is a ridged multifractal procedural function from Texturing & Modeling: A Procedural Approach, pg. 504. I lifted both the illumination and terrain type from the beautiful renderings by Picogen.
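A rough sketch of the march (illustrative, not the original code; height() and LIPSCHITZ are stand-ins, and strictly a safe step factor also depends on the ray's slope):

```cpp
#include <cmath>

double height(double x, double z);        // assumed: the procedural displacement shader
const double LIPSCHITZ = 2.0;             // assumed hard-coded bound on the terrain slope

struct Vec { double x, y, z; };

// March o + t*d until the ray drops below the surface, then bisect, as in the
// distance field sketch above. Returns the hit distance, or -1 on a miss.
double intersectHeightfield(const Vec &o, const Vec &d, double tMax) {
    const double eps = 1e-4, minStep = 1e-4;
    double tPrev = 0, t = 0;
    while (t < tMax) {
        double px = o.x + t*d.x, py = o.y + t*d.y, pz = o.z + t*d.z;
        double gap = py - height(px, pz);             // vertical distance to the terrain
        if (gap < 0) {                                // crossed below the surface
            double lo = tPrev, hi = t;
            while (hi - lo > eps) {                   // bisection refinement
                double m = 0.5*(lo + hi);
                double mx = o.x + m*d.x, my = o.y + m*d.y, mz = o.z + m*d.z;
                if (my - height(mx, mz) < 0) hi = m; else lo = m;
            }
            return 0.5*(lo + hi);
        }
        tPrev = t;
        t += std::fmax(gap / LIPSCHITZ, minStep);     // step ∝ current vertical distance
    }
    return -1;
}
```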
I added path tracing for participating media (third image). This helped me find a bug in the photon mapping code and now they all match up. The full size version of the third image shows a comparison of path tracing (4 min) vs. path tracing with explicit light sampling (20 min) vs. photon mapping (43 min). Update: The variation in timings is from me working around noisy bugs in my integration.
smallpt: Global Illumination in 99 lines of C++
Finally releasing a little project I’ve been working on. A small project, although I spent an insane amount of time tweaking it. My hope is that it can serve as a concise reference to anyone wondering what’s involved in a basic global illumination renderer. Honestly, since I don’t explain the concepts or the shortcuts used, and the code is ugly in some places, I don’t know how valuable it will be to someone just entering the field. A basic renderer can be kicked out in less than a day; perhaps a concrete working example like this can fill in any missing details for someone just starting off.
smallpt: Global Illumination in 99 lines of C++
Indexed triangle sets, plane primitive
Another stab at the quaternion Julia set, from March. In April I added support for indexed face sets, resulting in fantastic memory savings. Here are test renders of a 7 million and a 28 million polygon model, respectively. Lucy, the 28M triangle model, required a fair amount of swap on my 2GB machine. There’s still some memory savings to be had I think, especially if I added BIH and used that instead of Octree-R. Models courtesy of the Stanford 3D Scanning Repository.
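The memory win comes from sharing vertices; roughly (illustrative structures, not the renderer's actual ones):

```cpp
#include <vector>
#include <cstdint>

struct Vertex { float x, y, z; };

struct IndexedMesh {
    std::vector<Vertex>   vertices;   // each vertex position stored once
    std::vector<uint32_t> indices;    // 3 indices per triangle into 'vertices'
};
// For a typical closed mesh each vertex is shared by about six triangles, so
// the per-triangle cost drops from three full vertices (36 bytes) to three
// indices (12 bytes) plus roughly 6 bytes of amortized vertex storage.
```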
Preetham et al. sun and sky light model
I was really inspired by the cool images lycium and greenhybrid were making using the sky model by Preetham et al., so in March I read the paper again, overcame my laziness and gag reflex at the spaghetti mess of equations, and hacked together the sky and sun light part of the paper (first image). In hindsight I guess it’s not that complicated really. The sun is straightforward yet also tricky. The sky is basically parameterized by the altitude of the sun and how near the view vector is to it and to the horizon. They precomputed a bunch of coefficients that are essentially interpolated. The major hangup I had is that a division by zero happens for turbidities less than 1.7 (or so) since one of the interpolated values goes negative! Watch out for that. Also I seem to recall something numerically terrible happens near the horizon which necessitates some epsilon clamping.
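The heart of the model is the Perez formula; roughly this (a sketch only, with the turbidity-dependent coefficient tables from the paper omitted):

```cpp
#include <cmath>

// theta: view angle from the zenith; gamma: angle between view direction and sun.
// A..E are the turbidity-dependent coefficients from the paper (one set per
// channel of the Yxy color).
double perez(double theta, double gamma, double A, double B,
             double C, double D, double E) {
    double cosTheta = std::fmax(std::cos(theta), 1e-4);   // epsilon clamp near the horizon
    return (1.0 + A*std::exp(B / cosTheta)) *
           (1.0 + C*std::exp(D*gamma) + E*std::cos(gamma)*std::cos(gamma));
}

// Sky luminance toward a view direction, normalized so that looking at the
// zenith (theta = 0, gamma = thetaSun) gives the zenith luminance Yz.
double skyLuminance(double theta, double gamma, double thetaSun, double Yz,
                    double A, double B, double C, double D, double E) {
    return Yz * perez(theta, gamma, A, B, C, D, E)
              / perez(0.0, thetaSun, A, B, C, D, E);
}
```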
The second image shows the sun’s glare (using another of Shirley’s papers, implemented a few years ago). I modified the third function (f_3) to not be scaled by the glare lines but be angularly symmetric, which produces less pronounced lines.
I added support for baking the sky light to an environment map which I can sample efficiently using the large area light importance sampling from PBRT. One to four shadow rays are always sent to the sun, and a number of samples are taken from the environment. Photons emit from the sun or the sky light with 50% probability each. An example environment map is shown in the third image.
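The emission split is just a discrete choice weighted back by its probability; something like this sketch (the emitter function names are stand-ins, not the renderer's):

```cpp
struct Vec { double x, y, z; };
struct Photon { Vec pos, dir, power; };

Photon emitSunPhoton();   // assumed: sample a photon leaving the sun
Photon emitSkyPhoton();   // assumed: sample a photon from the baked sky map
double rand01();          // uniform random number in [0,1)

// Dividing the photon power by the selection probability keeps the combined
// estimate correct whichever source is chosen.
Photon emitPhoton() {
    const double pSun = 0.5;
    bool fromSun = rand01() < pSun;
    Photon p = fromSun ? emitSunPhoton() : emitSkyPhoton();
    double scale = 1.0 / (fromSun ? pSun : 1.0 - pSun);
    p.power.x *= scale; p.power.y *= scale; p.power.z *= scale;
    return p;
}
```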