Lots of New Features (Motion Blur, Videos, etc)

I’ve made a lot of new updates to the engine since my last post.

These include:

  1. Motion Blur – Both translational and rotational motion blur are supported, for scene objects as well as the camera.
  2. Volume Rendering Updates – DreamWorks’ OpenVDB library is now supported, allowing for a much wider range of volumetric effects.
  3. Embedded Scripting – The engine now supports Python scripting to control scene objects, materials, the camera, etc.
  4. Video Creation – Full MPEG videos can now be generated in addition to static images. To make an animated video, all I have to do is create a scene, apply the appropriate Python scripts for animation, and let the engine run until it reaches a user-set threshold, at which point it automatically stitches the frames together into a video. I’ll be uploading videos I’ve created to the “Videos” tab above.

Check out some new renderings showcasing a few of these features below.

By chris_f

Volumetric Caustics

I’ve reimplemented the way volumetric multi-scattering is handled. In traditional volumetric photon mapping, the viewing ray is sampled at discrete points; at each sample point the N nearest photons are collected and a radiance estimate is performed. The contributions of all the sampled points are then summed into the final ray color.

The problem with this approach is its inherent inaccuracy: many photons inevitably get left out, and others wind up being double-counted if ray sample points are too close together.

What I’ve done here instead is use a beam-gathering estimate. Each photon is treated as the center of a small sphere, and all photons are stored in a hierarchical data structure. The ray then traverses the structure and adds the contribution of any photon whose bounding sphere it intersects. This allows the ray to take into account every photon within reach, without double counting, in a single step (point sampling is no longer necessary).
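
Roughly, the gather step looks like the sketch below. This is a simplified illustration with placeholder types and a flat photon list; the real version walks the hierarchy and also folds in transmittance, the scattering coefficient, and the phase function.

```cpp
#include <vector>
#include <cmath>

// Hypothetical minimal types standing in for the renderer's own vector/ray classes.
struct Vec3 { float x, y, z; };
struct Ray  { Vec3 o, d; };   // d assumed normalized

static float dot(Vec3 a, Vec3 b) { return a.x * b.x + a.y * b.y + a.z * b.z; }
static Vec3  sub(Vec3 a, Vec3 b) { return {a.x - b.x, a.y - b.y, a.z - b.z}; }
static Vec3  at (Ray r, float t) { return {r.o.x + t * r.d.x, r.o.y + t * r.d.y, r.o.z + t * r.d.z}; }

struct Photon { Vec3 pos; float power; };   // photon power reduced to a scalar for brevity

// Beam-gather estimate: every photon whose sphere of radius r the ray passes
// through contributes exactly once, weighted by a 2D disc kernel.
float beamGather(const Ray& ray, float tMax, float r, const std::vector<Photon>& photons)
{
    const float kPi = 3.14159265f;
    const float kernel = 1.0f / (kPi * r * r);
    float L = 0.0f;
    for (const Photon& p : photons) {
        float t = dot(sub(p.pos, ray.o), ray.d);   // closest point on the ray to the photon
        if (t < 0.0f || t > tMax) continue;
        Vec3 off = sub(p.pos, at(ray, t));
        if (dot(off, off) > r * r) continue;       // ray never enters this photon's sphere
        L += kernel * p.power;                     // each photon counted once, no point sampling
    }
    return L;
}
```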

The above approach, coupled with my new progressive photon map (wherein the photon map is recreated after every pass and a smaller search radius is computed from the previous one), allows for multi-scattering and volumetric caustic effects with very fine detail (as seen below).
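
The exact radius schedule isn’t spelled out above; one common per-pass update, in the style of progressive photon mapping (Knaus and Zwicker), is shown below purely as an illustration.

```cpp
#include <cmath>

// One common progressive-radius update, shown only as an illustration.
// alpha in (0, 1) controls how aggressively the search radius shrinks;
// the photon map is rebuilt each pass and gathered with the new, smaller radius.
float nextRadius(float radius, int passIndex /* 1-based */, float alpha = 0.7f)
{
    float ratio = (passIndex + alpha) / (passIndex + 1.0f);
    return radius * std::sqrt(ratio);   // r_{i+1}^2 = r_i^2 * (i + alpha) / (i + 1)
}
```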

Volumetric Caustics

Underwater Light Shafts

By chris_f

Revamp

It’s been a long while since I’ve posted, but that doesn’t mean I haven’t been working on new features. In the past year I’ve started my renderer over completely from scratch. This time around it is GPU accelerated with OpenCL and boasts interactive frame rates for most scenes (even on my below-average laptop). Along with the GPU boost comes a completely revamped progressive-photon-map-based global illumination engine, beam-traced participating media, subsurface scattering, and much more. Stay tuned!

By chris_f

Moon and Sky

I’ve updated my sky rendering algorithm to better integrate it with the rest of my program. Objects placed in the scene can now be properly illuminated by outdoor lighting, with visibility being reduced the further they are placed from the camera. This effect can be exploited to create images of celestial bodies. To create the images of the moon posted below, all I had to do was apply a moon diffuse texture to a sphere and place that sphere far away from the camera, outside of the earth’s atmosphere. Depending on the time of day and the orientation of the sun, the colors of the sky and the way the moon is illuminated change dramatically.

Additionally, the atmosphere can now be rendered from outer space. The third image here is a rendering of the earth with the camera placed above its atmosphere. The earth is simply represented as a sphere with an earth diffuse texture, surrounded by an atmospheric volume. The atmosphere is clearly visible as a blue ring surrounding the earth.

Crescent moon and full moon

Earth from space

Participating Media

I’ve recently added participating media to my renderer. Both homogeneous and heterogeneous media are supported, with the latter requiring a density file that divides the medium’s bounding box into voxels, each holding a value in the range [0, 1] that modulates the material properties, such as the scattering and absorption coefficients.
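
A minimal sketch of how such a density grid could modulate the coefficients is shown below. Nearest-voxel lookup is used for brevity, and the names are illustrative only; an actual implementation would likely interpolate between voxels.

```cpp
#include <vector>
#include <algorithm>

// Sketch of a heterogeneous medium: a voxel grid of densities in [0, 1]
// modulates the base scattering/absorption coefficients at each point.
struct DensityGrid {
    int nx, ny, nz;
    std::vector<float> values;           // nx * ny * nz densities in [0, 1]

    // Nearest-voxel lookup from normalized coordinates in [0, 1]^3.
    float density(float u, float v, float w) const {
        int i = std::min(int(u * nx), nx - 1);
        int j = std::min(int(v * ny), ny - 1);
        int k = std::min(int(w * nz), nz - 1);
        return values[(k * ny + j) * nx + i];
    }
};

struct Medium {
    float sigmaS;   // base scattering coefficient
    float sigmaA;   // base absorption coefficient
};

// Effective coefficients at a point inside the medium's bounding box.
inline float scattering(const Medium& m, const DensityGrid& g, float u, float v, float w) {
    return m.sigmaS * g.density(u, v, w);
}
inline float absorption(const Medium& m, const DensityGrid& g, float u, float v, float w) {
    return m.sigmaA * g.density(u, v, w);
}
```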

Volumes are rendered by approximating the volume rendering equation via ray marching, with the radiance arriving from light sources and reflected off solid objects likewise attenuated by ray marching.
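
In rough form, a minimal single-scattering march might look like the sketch below. The coefficient and lighting callbacks are placeholders for the renderer’s own routines; the in-scattered light itself is attenuated by a second march toward the light, which is what produces the volumetric shadows.

```cpp
#include <cmath>

// Minimal single-scattering ray march (an illustrative sketch, not the engine's code).
//   sigmaT(t), sigmaS(t) : extinction / scattering coefficients at distance t along the ray
//   inscattered(t)       : light arriving at the sample point, already attenuated toward the light
float marchVolume(float tMax, float dt,
                  float (*sigmaT)(float), float (*sigmaS)(float),
                  float (*inscattered)(float))
{
    float transmittance = 1.0f;   // fraction of light surviving so far
    float radiance = 0.0f;
    for (float t = 0.5f * dt; t < tMax; t += dt) {
        transmittance *= std::exp(-sigmaT(t) * dt);        // Beer-Lambert attenuation over this step
        radiance      += transmittance * sigmaS(t) * inscattered(t) * dt;
        if (transmittance < 1e-4f) break;                  // early out once the medium is nearly opaque
    }
    return radiance;
}
```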

 

Participating media showing volumetric shadows. A “CompositeVolume” class is used to simultaneously render two separate volumes as one.

In this image one can see volumetric shadows cast by the red balls as well as decreased visibility in the spheres that are further away.

By placing two dense clouds within a larger, less dense fog, god rays appear as beams of light shining through the clouds.

By chris_f

Atmospheric Scattering

I’ve recently added a relatively simple atmospheric scattering simulation to my renderer. The color of the sky is due to the scattering of light by air molecules. Rayleigh scattering, which is caused by particles much smaller than the wavelengths of visible light, is responsible for the blue-ish color of the sky during the day and the red/orange color of the sky during the twilight hours. Mie scattering, on the other hand, is caused by aerosols and is responsible for the “haze” of light that surrounds the sun. I use both Rayleigh and Mie scattering together in order to achieve the correct effect.
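
For reference, the standard Rayleigh phase function and one commonly used Mie approximation (the Cornette-Shanks form) look like the sketch below. I haven’t spelled out above which Mie approximation is used, so treat this as one common choice rather than a definitive recipe.

```cpp
#include <cmath>

// Standard Rayleigh phase function (normalized over the sphere):
//   p(theta) = 3 / (16 * pi) * (1 + cos^2 theta)
float rayleighPhase(float cosTheta)
{
    const float kPi = 3.14159265f;
    return 3.0f / (16.0f * kPi) * (1.0f + cosTheta * cosTheta);
}

// Cornette-Shanks-style Mie phase function, a common choice for aerosols in sky
// models; g around 0.76 gives the strongly forward-scattering haze around the sun.
float miePhase(float cosTheta, float g = 0.76f)
{
    const float kPi = 3.14159265f;
    float g2 = g * g;
    float num = 3.0f * (1.0f - g2) * (1.0f + cosTheta * cosTheta);
    float den = 8.0f * kPi * (2.0f + g2) * std::pow(1.0f + g2 - 2.0f * g * cosTheta, 1.5f);
    return num / den;
}
```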

The sky itself is modeled using a sphere of radius 6,420 km encapsulating a 6,360 km sphere representing the earth. The camera is translated so that it sits on top of the earth: if the user places the camera one meter above the ground at Point(0,1,0), my renderer will actually position it at Point(0, 6360001, 0).

Since the density of the atmosphere changes with altitude, I use ray marching along the viewing ray to take this into account. Additionally, at each sample point along the viewing ray, I use ray marching along the line of sight from that point to the sun in order to calculate the amount of light that is attenuated due to scattering before reaching that point.
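
A rough sketch of that nested march is below. The exponential density falloff, the scale height, and the tiny vector type are assumptions for illustration, not the renderer’s actual code.

```cpp
#include <cmath>

struct V3 { double x, y, z; };
static V3     along(V3 o, V3 d, double t) { return {o.x + t * d.x, o.y + t * d.y, o.z + t * d.z}; }
static double len(V3 a) { return std::sqrt(a.x * a.x + a.y * a.y + a.z * a.z); }

const double kEarthRadius = 6360e3;   // meters, matching the 6,360 km earth sphere above
const double kScaleHeight = 8000.0;   // assumed exponential falloff of air density with altitude

// Relative air density at a world-space position (1.0 at sea level).
double airDensity(V3 p)
{
    double altitude = len(p) - kEarthRadius;
    return std::exp(-altitude / kScaleHeight);
}

// Optical depth of a segment, computed by marching nSamples steps along it.
// This is evaluated twice per view-ray sample: once along the view ray itself,
// and once from the sample point toward the sun, so that the sunlight arriving
// at each sample is attenuated before it scatters toward the camera.
double opticalDepth(V3 origin, V3 dir /* normalized */, double length, int nSamples)
{
    double step = length / nSamples;
    double depth = 0.0;
    for (int i = 0; i < nSamples; ++i) {
        V3 p = along(origin, dir, (i + 0.5) * step);
        depth += airDensity(p) * step;
    }
    return depth;   // transmittance = exp(-sigma * depth) for each scattering type
}
```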

Here are some of my test results thus far:

 

Sky

Sunset

Rendering of the sky using atmospheric scattering with both Rayleigh and Mie scattering

Fisheye view of the sky looking directly at the zenith

Environment Map Image Based Lighting

I’ve recently been playing around with image-based lighting, in which an omnidirectional panoramic image is used as a spherical map at an “infinitely far distance” to light the synthetic objects in the scene.
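
For context, a typical latitude-longitude lookup from a ray direction into the panoramic image looks something like the sketch below; this is the standard mapping, not necessarily the exact convention my renderer uses. A ray that escapes the scene samples the image at (u, v) and treats the result as radiance arriving from infinitely far away.

```cpp
#include <cmath>

// Map a normalized direction to latitude-longitude (equirectangular) texture
// coordinates in [0, 1]^2, assuming +y is "up".
void directionToLatLongUV(float dx, float dy, float dz, float& u, float& v)
{
    const float kPi = 3.14159265f;
    u = 0.5f + std::atan2(dz, dx) / (2.0f * kPi);   // longitude
    v = std::acos(dy) / kPi;                        // latitude, 0 = straight up
}
```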

Below you can see a few example renders I have done so far. Both scenes make use of a cathedral environment map – the first showcases a mirror ball and a glass ball, and the second a dispersive diamond.

There are still a few details that need to be worked out, however. First, there are some bugs in my importance sampling algorithm that cause speckling on diffuse objects. Second, I am not currently taking advantage of the full high dynamic range of the environment map, which is why my renderings may appear a bit washed out. Hopefully soon I will be able to save my renderings as HDR .exr files.

Cathedral environment map with mirror and glass balls

Cathedral environment map with dispersive diamond

Dispersion with Spectral Rendering

I recently decided that it would be really cool to include the ability to handle light dispersion in dielectric materials like diamonds. Most renderers out there do not account for dispersion at all, which is a shame because it produces really amazing visual effects.

Dispersion is caused by the fact that a material’s index of refraction depends on the wavelength of the light passing through it. The index of refraction for a given wavelength can be determined using the Sellmeier equation with material-specific coefficients.
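
The Sellmeier equation itself is straightforward to evaluate. The sketch below uses the commonly quoted coefficients for BK7 glass purely as an example; diamond has its own published set.

```cpp
#include <cmath>

// Sellmeier equation: n^2(lambda) = 1 + sum_i B_i * lambda^2 / (lambda^2 - C_i),
// with lambda in micrometers and B_i, C_i material-specific coefficients.
// Coefficients below are the commonly quoted values for BK7 glass (example only).
double sellmeierIOR(double lambdaNanometers)
{
    const double B[3] = { 1.03961212, 0.231792344, 1.01046945 };
    const double C[3] = { 0.00600069867, 0.0200179144, 103.560653 }; // in um^2
    double l = lambdaNanometers * 1e-3;   // convert nm to um
    double l2 = l * l;
    double n2 = 1.0;
    for (int i = 0; i < 3; ++i)
        n2 += B[i] * l2 / (l2 - C[i]);
    return std::sqrt(n2);
}
```

For BK7 this gives roughly n ≈ 1.517 at 587.6 nm, the usual quoted value for that glass.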

To implement dispersion, every camera ray is randomly assigned a wavelength of light in the range 360-700 nm, with a step of 5 nm between successive wavelengths. When a ray hits a dispersive material, the index of refraction is calculated on the spot and the ray reflects/refracts probabilistically based on the Fresnel equations. The contribution for each wavelength is multiplied by the RGB value for that wavelength. This is accomplished by finding the XYZ response sensitivity for that particular wavelength and then converting the XYZ value to an RGB representation. Every wavelength is assumed to have the same intensity, and the resulting RGB values are normalized such that if one were to add up the RGB values for every wavelength, each color channel would sum to 1. This ensures the conservation of light energy.
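
The XYZ-to-RGB step can be done with the standard sRGB/D65 conversion matrix. The constants below are the commonly quoted values, not necessarily the exact ones used here, and the XYZ response itself would come from a CIE color-matching table (not shown).

```cpp
// Convert a CIE XYZ response to linear RGB using the standard sRGB/D65 matrix.
struct RGB { float r, g, b; };

RGB xyzToLinearRGB(float X, float Y, float Z)
{
    RGB c;
    c.r =  3.2406f * X - 1.5372f * Y - 0.4986f * Z;
    c.g = -0.9689f * X + 1.8758f * Y + 0.0415f * Z;
    c.b =  0.0557f * X - 0.2040f * Y + 1.0570f * Z;
    return c;   // the per-wavelength RGB weights are then normalized so each channel sums to 1
}
```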

I have included three images below. In the first, the wavelength for each camera ray is randomly assigned. The end result is kind of dark and it doesn’t look as good as it could.

My second image was rendered using importance sampling. Initial wavelengths are chosen probabilistically based on their XYZ curves, and are then modulated by the appropriate weights, as per Monte Carlo integration. As you can see, this approach gives the diamonds a much more vivid appearance.

The third image was simply an attempt to go all out and see if I could create a really cool-looking scene. This scene was rendered using my double Gaussian lens system with moderate depth-of-field effects. Each diamond is a transformed instance of the same mesh, with every instance sharing the same set of vertices in memory. This image was rendered quite large at 720p, so I would recommend opening it in a new page to get the full effect.


Subsurface Scattering with Hybrid Monte Carlo BSSRDF Extended Dipole Approximation

I’ve been spending time updating my subsurface scattering algorithm to incorporate more recent state-of-the-art research. The original papers I was using were written over 10 years ago, and since then many improvements have been made to classical diffusion theory. Incorporating these updates has definitely been worthwhile. My current algorithm combines techniques from four different papers:
 
1) I use the single-scattering term as described in the original SSS paper by Jensen:
http://www.graphics.stanford.edu/papers/bssrdf/bssrdf.pdf
2) For multi-scattering, I make use of a hierarchical irradiance-caching point cloud as described here:
http://graphics.ucsd.edu/~henrik/papers/fast_bssrdf/fast_bssrdf.pdf
3) I use improved definitions of several terms as described in the following paper, including improved boundary-condition and diffusion-coefficient terms, among others:
http://naml.us/~irving/papers/deon2011_subsurface.pdf
4) Finally, I replace the standard dipole-diffusion algorithm with a hybrid extended-dipole source / Monte Carlo simulation as described here:
http://graphics.pixar.com/library/PhotonBeamDiffusion/paper.pdf
 
My subsurface scattering implementation is a two-step process. In the first step, before rendering begins, the mesh is uniformly sampled and an irradiance calculation is performed at each sample point. These samples are then stored in a hierarchical point cloud, represented by an octree for fast lookup.
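
Conceptually, each sample and each octree node might store something like the sketch below, in the spirit of the hierarchical evaluation from the second paper above. The names and layout are illustrative only: each node keeps aggregate irradiance, area, and an average position so that a distant cluster can be evaluated as a single large sample instead of visiting every leaf.

```cpp
#include <vector>
#include <memory>

// Illustrative layout of a hierarchical irradiance point cloud.
struct IrradianceSample {
    float pos[3];        // sample location on the mesh surface
    float irradiance;    // irradiance computed in the preprocessing pass (scalar for brevity)
    float area;          // surface area represented by this sample
};

struct OctreeNode {
    float avgPos[3];       // irradiance-weighted average position of the samples below
    float totalIrradiance; // aggregate irradiance of the subtree
    float totalArea;       // aggregate area of the subtree
    std::unique_ptr<OctreeNode> children[8];   // null for leaves
    std::vector<IrradianceSample> samples;     // only filled at leaves
};
```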
 
The second step is the rendering pass. This step implements a BSSRDF (Bidirectional Scattering Surface Reflectance Distribution Function), which is the sum of two terms: a single-scattering term and a multi-scattering term. The single-scattering term is used for light that enters the material and then exits again after a single bounce. It is calculated by integrating the illumination over the length of the outgoing light ray and makes use of a phase function (in my case the Henyey-Greenstein function) to determine the degree to which the material is anisotropic (whether the light scatters mostly forward, mostly backward, or uniformly/isotropically).
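
The Henyey-Greenstein phase function itself is simple; g > 0 scatters mostly forward, g < 0 mostly backward, and g = 0 is isotropic.

```cpp
#include <cmath>

// Henyey-Greenstein phase function:
//   p(cos theta) = (1 - g^2) / (4 * pi * (1 + g^2 - 2 * g * cos theta)^(3/2))
float henyeyGreenstein(float cosTheta, float g)
{
    const float kPi = 3.14159265f;
    float g2 = g * g;
    float denom = 1.0f + g2 - 2.0f * g * cosTheta;
    return (1.0f - g2) / (4.0f * kPi * std::pow(denom, 1.5f));
}
```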
 
The multiple-scattering term is used for light that bounces around inside the material many times before exiting. I use a Monte Carlo extended diffuse-dipole light source approximation, combined with the irradiance samples computed in the first step, to simulate multiple scattering. One pole of the source is placed above the material and the other inside it, with the distances determined by the material’s properties. This differs from the classical dipole approximation in that several dipoles are calculated at different distances (which are importance sampled) and averaged based on their respective weights.
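
For reference, the classical dipole reflectance term from Jensen’s original paper (the first link above) looks like the sketch below; the hybrid approach described here replaces the single source depth with several importance-sampled depths and averages the results by their weights.

```cpp
#include <cmath>

// Classical dipole diffuse reflectance R_d(r), shown for reference only.
double dipoleRd(double r,        // distance between entry and exit points on the surface
                double sigmaA,   // absorption coefficient
                double sigmaSp,  // reduced scattering coefficient sigma_s' = sigma_s * (1 - g)
                double eta)      // relative index of refraction
{
    const double kPi = 3.14159265358979;
    double sigmaTp = sigmaA + sigmaSp;                    // reduced extinction
    double alphaP  = sigmaSp / sigmaTp;                   // reduced albedo
    double sigmaTr = std::sqrt(3.0 * sigmaA * sigmaTp);   // effective transport coefficient

    // Diffuse Fresnel reflectance approximation and the boundary term A.
    double Fdr = -1.440 / (eta * eta) + 0.710 / eta + 0.668 + 0.0636 * eta;
    double A   = (1.0 + Fdr) / (1.0 - Fdr);

    double zr = 1.0 / sigmaTp;                  // real source depth below the surface
    double zv = zr * (1.0 + 4.0 * A / 3.0);     // virtual (mirrored) source height
    double dr = std::sqrt(r * r + zr * zr);
    double dv = std::sqrt(r * r + zv * zv);

    return (alphaP / (4.0 * kPi)) *
           ( zr * (sigmaTr * dr + 1.0) * std::exp(-sigmaTr * dr) / (dr * dr * dr)
           + zv * (sigmaTr * dv + 1.0) * std::exp(-sigmaTr * dv) / (dv * dv * dv) );
}
```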
 
You can see several of my newest images below. The first image is rendered using an opaque BRDF as a reference, and the rest are rendered with my subsurface scattering implementation, with the translucency increasing in each successive image. In addition to the hue of the materials matching reality more closely, increasing the translucency of the material no longer causes the model to brighten up and “glow” like it did before – instead, the illumination simply becomes more blurred and soft.

Opaque marble (reference)
Marble rendered with subsurface scattering
Marble rendered with increased translucency