Master's Thesis
Kevin Beason
Precomputed Global Illumination of Isosurfaces
Level set of a nucleons dataset rendered with local illumination (left)
and global illumination (right).
Adding global illumination, using a precomputed 3D texture map, required only an
additional 50 milliseconds of display time after isosurface extraction.
The 3D texture map was precomputed
in 2.7 hours using a novel technique that renders the
graph of the dataset in four dimensions using photon mapping.
Dataset courtesy of Dr. Jorge Piekarewicz and Brad Futch at
Florida State University.
Abstract
Three-dimensional scalar heightfields, also known as volumetric datasets,
abound in science and medicine. Viewing the isosurfaces, or level sets,
is one of the two main ways to display these datasets,
the other being volume visualization.
Typically the isosurfaces are rendered on a personal computer (PC),
allowing the scientist or doctor analyzing the dataset
to interactively change the isovalue and rotate or zoom the isosurface.
Unfortunately, constrained by the capabilities of the
PC's video card,
current techniques
render the isosurfaces with only
a basic hardware-accelerated lighting model.
This lighting model lacks important features such as shadows,
and as a result the isosurfaces are
more difficult to interpret than if they
had been rendered with a physically based lighting model.
My thesis is that isosurfaces
can be displayed
with realistic illumination
at interactive rates
on a typical PC.
I present a method for applying global illumination to
interactively created isosurfaces,
using a physically based lighting model,
with a negligible increase in the time required to render the isosurfaces.
The result is convincing shading that is easy for the
human visual system to interpret, including features such as
soft shadows, inter-reflection, caustics, and color bleeding.
This is achieved by solving the rendering equation
for all isosurfaces within the volume,
storing the solutions in a 3D texture, and then
texture mapping the result onto a polygonal
approximation of the isosurface. This process is
called "heightfield rendering".
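
For a concrete picture of the display step, the sketch below shows one way the
texture lookup can work, assuming the illumination texture is indexed by
normalized position within the volume; the names and structure here are
illustrative, not the thesis code.

    // Illustrative sketch (not the thesis code): the extracted isosurface
    // is texture mapped by converting each vertex position to [0,1]^3
    // coordinates into the precomputed 3D illumination texture.
    struct Vec3 { float x, y, z; };

    // Normalize a vertex position against the volume's bounding box so it
    // can index the 3D texture holding the precomputed illumination.
    Vec3 illuminationTexCoord(const Vec3& v, const Vec3& volMin, const Vec3& volMax)
    {
        return { (v.x - volMin.x) / (volMax.x - volMin.x),
                 (v.y - volMin.y) / (volMax.y - volMin.y),
                 (v.z - volMin.z) / (volMax.z - volMin.z) };
    }

The graphics hardware then interpolates these coordinates across each triangle
during rasterization, which is why the lookup adds so little to the per-frame cost.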
Summary
Title | Precomputed Global Illumination of Isosurfaces
Status | Successfully defended on 7/26/05.
Log | here
Downloads | Thesis (PDF, 112 pages, 5.6 MB); Presentation (PowerPoint, 50 slides, 13 MB)
Code | pane4D-1.19.tar.gz
Images
Here are two before-and-after comparisons of visualizations of 3D scalar
data. The dataset on the left is an MRI of a brain
from McGill University, with a resolution of 217×217×217; its texture
precomputation time was 90 minutes. The dataset on the right is a scan
of a living mouse neuron from Debra Fadool and Wilfredo Blanco at Florida State
University, with a resolution of 150×150×150; here the illumination texture
required 77 minutes to precompute.
In both cases the added time to apply the illumination after arbitrary isosurface
extraction was about 10 milliseconds.
Before | After | Before | After
The images below are from a ceramic dataset with a resolution of
100×100×100. Since both sides of the isosurface are often visible,
two textures were created, one for each side of the isosurface.
Texture precomputation required about 2 hours on an 8-processor SGI Prism.
Before | After | Before | After
Here are two more datasets. On the left is an MRI scan of a human head from the
Stanford volume data
archive, with a resolution of 256×256×109. The
time to compute the 3D texture containing the illumination was
4.3 hours on a dual 3.0 GHz Xeon.
Before | After | Before | After
Movies
Each of these movies shows a typical user interaction with a dataset.
The isosurfaces are first examined using ordinary OpenGL diffuse shading.
The dataset is then examined using a 3D texture computed with my
new technique.
Nucleons, 35 MB | Laser Assisted Particle Removal, 14 MB | Neuron, 25 MB | Brain, 22 MB
Additional work
As part of my thesis, I implemented a volume ray tracer to create the 3D
textures. The ray tracer uses hierarchical spatial
subdivision of the volume dataset and scene geometry to perform both
ray-geometry and ray-isosurface intersection.
In addition to creating 3D textures, this software
can render images, such as the following, which uses the
Uffizi light probe for
image-based lighting.
Direct isosurface rendering. Dataset from the Department of Radiology, University of Iowa.
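
As a rough illustration of the second task (and emphatically not the thesis
code, which traverses a hierarchical spatial subdivision rather than stepping
naively), one common way to intersect a ray with an isosurface is to march
along the ray until the interpolated field value crosses the isovalue, then
refine the crossing by bisection:

    // field() stands in for trilinear interpolation of the volume data;
    // its implementation is assumed.
    float field(const float p[3]);

    // March along the ray in fixed steps, watching for a sign change of
    // field(p) - isovalue, then refine the crossing by bisection.
    bool intersectIsosurface(const float orig[3], const float dir[3],
                             float isovalue, float tMax, float step,
                             float& tHit)
    {
        auto value = [&](float t) {
            float p[3] = { orig[0] + t * dir[0],
                           orig[1] + t * dir[1],
                           orig[2] + t * dir[2] };
            return field(p) - isovalue;
        };
        float t0 = 0.0f, f0 = value(t0);
        for (float t1 = step; t1 <= tMax; t1 += step) {
            float f1 = value(t1);
            if (f0 * f1 < 0.0f) {              // surface crossed in [t0, t1]
                for (int i = 0; i < 16; ++i) { // bisection refinement
                    float tm = 0.5f * (t0 + t1);
                    if (value(tm) * f0 < 0.0f) t1 = tm; // root in [t0, tm]
                    else t0 = tm;                       // root in [tm, t1]
                }
                tHit = 0.5f * (t0 + t1);
                return true;
            }
            t0 = t1;
            f0 = f1;
        }
        return false;
    }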
I also implemented photon mapping in two dimensions (2D) for illustration purposes
in my thesis. In this "flattened" light transport, light remains within the
same 2D plane in which it is emitted (except for a final bounce to the camera, so
that the scene may be viewed from any angle). This required a
2D emittance distribution for both the direct illumination calculation and
photon emission, 2D reflection capabilities including a 2D BRDF and 2D photon
scattering, a new 2D irradiance estimate for use with the photon map,
and a 2D final gather.
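
To give the flavor of the irradiance estimate, here is a hypothetical sketch
of what a 2D density estimate can look like; the thesis derives its own
estimator, so treat this only as the underlying idea. Surfaces in flattened
2D transport are curves, so the nearest photons occupy a neighborhood of
length roughly 2r rather than the disk area pi*r^2 used by the standard 3D
estimate.

    #include <cmath>
    #include <vector>

    // Hypothetical 2D photon-map irradiance estimate (not the estimator
    // derived in the thesis). Photons land on curves, so the flux of the
    // k nearest photons is divided by the length 2r they span, instead
    // of the disk area pi*r^2 used in the 3D estimate.
    struct Photon2D {
        float pos[2]; // photon location on a scene curve
        float power;  // photon flux (one channel for brevity)
    };

    // `nearest` holds the k photons closest to x, sorted by distance.
    float irradianceEstimate2D(const std::vector<Photon2D>& nearest,
                               const float x[2])
    {
        if (nearest.empty()) return 0.0f;
        float dx = nearest.back().pos[0] - x[0];
        float dy = nearest.back().pos[1] - x[1];
        float r  = std::sqrt(dx * dx + dy * dy); // search radius
        float flux = 0.0f;
        for (const Photon2D& p : nearest) flux += p.power;
        return flux / (2.0f * r); // flux per unit length
    }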
The following images demonstrate the differences between 2D lighting and
ordinary 3D lighting. The most noticeable difference, shown in the bottom-right
image for the 2D case, is the prominent shadows and the colored indirect
illumination within them. These shadows are pronounced because
the light, which extends from the floor to the ceiling,
cannot shine over the bumps into the regions behind them, since
2D light cannot flow up or down. As a result, those regions receive only
indirect light, which has a colored cast, an example of color bleeding.
Ordinary light, emitting and scattering in 3D. | Bump box rendered with ordinary 3D light.
"Flattened" light, emitting and scattering in 2D. | Bump box rendered with "flattened" 2D light.
Top row: 3D light. Bottom row: 2D light.
Committee Members
Support
This research was supported by the SCS Visualization Laboratory and by
NSF Grants #0083898 and #0430954.