I’ve made a lot of new updates to the engine since my last post.
- Motion Blur – Both translational and rotational motion blur are supported, for scene objects as well as the camera.
- Volume Rendering Updates – DreamWorks’ OpenVDB library is now supported, allowing for a much wider range of volumetric effects.
- Embedded Scripting – The engine now supports Python scripting to control scene objects, materials, the camera, etc.
- Video Creation – Full MPEG videos can now be generated in addition to static images. To make an animated video, all I have to do now is create a scene, apply the proper Python scripts for animation, and let the engine run until it reaches a user-set threshold, at which point it’ll automatically stitch together a video. I’ll be uploading videos I’ve created to the “Videos” tab above.
Check out some new renderings showcasing a few of these features below:
I’ve reimplemented the way volumetric multi-scattering is handled. In traditional photon mapping, the viewing ray is sampled at discrete points and the N nearest photons are collected, followed by a radiance estimate. The sum of the contributions of each of the sampled points is then added to the final ray color.
The problem with this approach is its inherent inaccuracy: many photons would inevitably be left out, and others would wind up being double-counted if ray sample points were too close together.
What I’ve done here instead is use a beam-gathering estimate. Each photon is treated as the center of a small sphere, and all photons are stored in a hierarchical data structure. The ray then traverses the structure and adds the contribution of every photon whose bounding sphere it intersects. This lets the ray account for every photon within reach, without double counting, in a single traversal (point sampling is no longer necessary).
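The core of the beam-gathering idea can be sketched as below. The names here are illustrative, not the engine’s actual API, and a real implementation would traverse a BVH or kd-tree rather than looping over photons linearly:

```python
# Beam gathering sketch: each photon is the center of a sphere of the
# current search radius; a ray accumulates the contribution of every
# photon whose sphere it intersects -- no point sampling along the ray.

def ray_intersects_sphere(origin, direction, center, radius):
    """True if the (normalized) ray passes within `radius` of `center`."""
    oc = [c - o for o, c in zip(origin, center)]
    t = sum(d * v for d, v in zip(direction, oc))   # closest approach along ray
    if t < 0.0:
        return False                                 # sphere is behind the ray
    closest = [o + t * d for o, d in zip(origin, direction)]
    dist2 = sum((c - p) ** 2 for c, p in zip(center, closest))
    return dist2 <= radius * radius

def gather_beam(origin, direction, photons, radius):
    """Sum the power of all photons whose bounding sphere the ray crosses."""
    total = 0.0
    for position, power in photons:                  # linear scan for clarity
        if ray_intersects_sphere(origin, direction, position, radius):
            total += power
    return total
```

Each photon is visited at most once during the traversal, which is what eliminates the double counting of the point-sampling approach.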
The above approach coupled with my new progressive photon map (wherein the photon map is recreated after every pass and a smaller search radius is computed based on the previous one) allows for multi-scattering and volumetric caustic effects of very precise detail (as seen below).
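The post doesn’t give the exact radius update, so as an assumption here is the standard progressive photon mapping rule (Hachisuka et al.), in which the radius shrinks each pass based on the photons found so far:

```python
import math

# Progressive radius update (assumed, not necessarily the engine's exact
# formula): N is the photon count accumulated over previous passes, M the
# photons found this pass, and alpha in (0, 1) controls shrink rate.

def next_radius(radius, accumulated, found, alpha=0.7):
    if found == 0:
        return radius, accumulated
    new_radius = radius * math.sqrt((accumulated + alpha * found) /
                                    (accumulated + found))
    return new_radius, accumulated + alpha * found

# The radius decreases monotonically, tightening the estimate each pass.
r, n = 1.0, 0.0
for _ in range(100):
    r, n = next_radius(r, n, found=50)
```

Because the ratio inside the square root is always below one, the search radius converges toward zero as passes accumulate, which is what produces the increasingly sharp caustic detail.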
It’s been a long while since I’ve posted, but that doesn’t mean I haven’t still been working on new features. In the past year I’ve rewritten my renderer completely from scratch. This time around it is GPU-accelerated with OpenCL and boasts interactive frame rates for most scenes (even on my below-average laptop). Along with the GPU boost comes a completely revamped progressive-photon-map-based global illumination engine, beam-traced participating media, subsurface scattering, and much more. Stay tuned!
I’ve updated my sky rendering algorithm to better integrate it with the rest of my program. Now objects placed in the scene can be properly illuminated by outdoor lighting with visibility being reduced the further away from the camera they are placed. This effect can be exploited to create images of celestial bodies. To create the images of the moon I’ve posted below all I had to do was apply a moon diffuse texture to a sphere and place the sphere far away from the camera outside of the earth’s atmosphere. Depending on the time of day and the orientation of the sun, the colors of the sky and the way the moon is illuminated change dramatically.
Additionally, the atmosphere can now be rendered from outer space. The third image you see here is a rendering of the earth with the camera placed above its atmosphere. The earth is simply represented as a sphere with an earth diffuse texture, surrounded by an atmospheric volume. The atmosphere is clearly visible as a blue ring surrounding the earth.
I’ve recently added participating media to my renderer. Both homogeneous and heterogeneous media are supported with the latter requiring the use of a density file that divides up the bounding box for the medium into voxels, each with a value in the range [0,1] that is used to modulate its material properties such as the scattering and absorption coefficients.
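A minimal sketch of how such a density file might be queried (the renderer’s actual file format and lookup code aren’t shown in the post, so the names and nearest-voxel filtering here are assumptions):

```python
# The medium's bounding box is divided into nx * ny * nz voxels, each
# holding a density in [0, 1] that modulates the base coefficients.

def voxel_density(grid, nx, ny, nz, bbox_min, bbox_max, p):
    """Nearest-voxel density lookup at world-space point p."""
    idx = []
    for axis in range(3):
        extent = bbox_max[axis] - bbox_min[axis]
        u = (p[axis] - bbox_min[axis]) / extent        # normalize to [0, 1]
        if u < 0.0 or u > 1.0:
            return 0.0                                 # outside the box
        res = (nx, ny, nz)[axis]
        idx.append(min(int(u * res), res - 1))
    return grid[(idx[2] * ny + idx[1]) * nx + idx[0]]

def modulated_sigma_s(base_sigma_s, density):
    """Scattering coefficient scaled by the local voxel density."""
    return base_sigma_s * density
```

A production lookup would typically use trilinear interpolation between the eight neighboring voxels instead of nearest-voxel sampling to avoid blocky artifacts.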
Volumes are rendered by approximating the volume rendering equation via ray marching; the radiance coming from light sources and reflected off solid objects is likewise attenuated by ray marching.
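The attenuation step can be sketched as a midpoint-rule ray march accumulating optical depth, then applying the Beer–Lambert law. `density_at` is any callable returning local density; the names are illustrative:

```python
import math

# Transmittance along a ray through a medium with extinction coefficient
# sigma_t, modulated by a spatially varying density in [0, 1].

def transmittance(density_at, origin, direction, t_max, sigma_t, steps=256):
    dt = t_max / steps
    optical_depth = 0.0
    for i in range(steps):
        t = (i + 0.5) * dt                       # midpoint of each segment
        p = tuple(o + t * d for o, d in zip(origin, direction))
        optical_depth += sigma_t * density_at(p) * dt
    return math.exp(-optical_depth)              # Beer-Lambert attenuation
```

For a homogeneous medium (density 1 everywhere) this reproduces the analytic result exp(-sigma_t * t_max); the ray march only earns its cost for heterogeneous media.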
Participating media showing volumetric shadows. A “CompositeVolume” class is used to simultaneously render two separate volumes as one.
In this image one can see volumetric shadows cast by the red balls as well as decreased visibility in the spheres that are further away.
By placing two dense clouds within a larger, less dense fog, god rays appear as beams of light shining through the clouds.
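The “CompositeVolume” class mentioned above belongs to the engine and its implementation isn’t shown in the post; one plausible sketch is to sum the children’s densities at each point, so overlapping volumes (two clouds inside a fog) attenuate light together:

```python
class ConstantVolume:
    """Toy child volume used only for illustration."""
    def __init__(self, d):
        self.d = d

    def density(self, p):
        return self.d

class CompositeVolume:
    """Renders several volumes as one by combining their densities."""
    def __init__(self, volumes):
        self.volumes = volumes

    def density(self, p):
        # Overlapping regions become denser, as physically expected
        # when two media occupy the same space.
        return sum(v.density(p) for v in self.volumes)
```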
I’ve recently added a relatively simple atmospheric scattering simulation to my renderer. The color of the sky is due to the scattering of light by air molecules. Rayleigh scattering, which is caused by particles much smaller than the wavelengths of visible light, is responsible for the blue-ish color of the sky during the day and the red/orange color of the sky during the twilight hours. Mie scattering, on the other hand, is caused by aerosols and is responsible for the “haze” of light that surrounds the sun. I use both Rayleigh and Mie scattering together in order to achieve the correct effect.
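For reference, here are the two phase functions in code. Rayleigh has a closed form; Mie is commonly approximated by the Henyey–Greenstein phase function with an asymmetry parameter g (the post doesn’t say which Mie approximation the renderer uses, so HG is an assumption):

```python
import math

def rayleigh_phase(cos_theta):
    """Rayleigh phase function; normalized to integrate to 1 over the sphere."""
    return 3.0 / (16.0 * math.pi) * (1.0 + cos_theta * cos_theta)

def henyey_greenstein_phase(cos_theta, g=0.76):
    """Henyey-Greenstein approximation to Mie scattering; g ~ 0.76 is a
    commonly used value for atmospheric aerosols (assumption)."""
    denom = (1.0 + g * g - 2.0 * g * cos_theta) ** 1.5
    return (1.0 - g * g) / (4.0 * math.pi * denom)
```

Both integrate to one over the unit sphere, so they redistribute energy by direction without creating or destroying it.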
The sky itself is modeled as a sphere of radius 6420 kilometers encapsulating a 6360-kilometer sphere used to represent the earth. The camera is translated so as to sit on top of the earth: if the user places the camera one meter above the ground at Point(0,1,0), my renderer will actually position it at Point(0, 6360001, 0).
Since the density of the atmosphere changes with altitude, I use ray marching along the viewing ray to take this into account. Additionally, at each sample point along the viewing ray, I use ray marching along the line of sight from that point to the sun in order to calculate the amount of light that is attenuated due to scattering before reaching that point.
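The inner ray march can be sketched as follows, with an exponential density falloff by altitude. The scale height constant is an assumption (roughly 8 km is the standard value for Rayleigh scattering); the earth radius matches the post:

```python
import math

EARTH_RADIUS = 6360e3    # meters, as in the post
SCALE_HEIGHT = 8000.0    # Rayleigh scale height (assumed value)

def altitude(p):
    """Height above the spherical earth's surface."""
    return math.sqrt(p[0] ** 2 + p[1] ** 2 + p[2] ** 2) - EARTH_RADIUS

def optical_depth(origin, direction, length, sigma0, steps=128):
    """Integrate sigma0 * exp(-h / H) along a ray by midpoint ray marching."""
    dt = length / steps
    depth = 0.0
    for i in range(steps):
        t = (i + 0.5) * dt
        p = tuple(o + t * d for o, d in zip(origin, direction))
        depth += sigma0 * math.exp(-altitude(p) / SCALE_HEIGHT) * dt
    return depth
```

In the full algorithm this function is called once along the viewing ray and again, at every sample point, along the segment toward the sun, which is what makes single-scattering sky models quadratic in the step count.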
Here are some of my test results thus far:
Rendering of the sky using atmospheric scattering, with both Rayleigh and Mie scattering
Fisheye view of sky looking directly at the zenith. Same as above image.
I’ve recently been playing around with image based lighting, which is where an omnidirectional panoramic image is used as a spherical map at an “infinitely far distance” to light the synthetic objects in the scene.
Below you can see a few example renders I have done so far. Both scenes make use of a cathedral environment map – the first showcases a mirror ball and a glass ball, and the second a dispersive diamond.
There are still a few details that need to be worked out, however. First, there are some bugs in my importance sampling algorithm which cause speckling on diffuse objects. Second, I am currently not taking advantage of the full high dynamic range of the environment map, which is why my renderings may appear a bit washed out. Hopefully soon I will be able to save my renderings as HDR .exr files.
I recently decided that it would be really cool to include the ability to handle light dispersion in dielectric materials, like diamonds. Most renderers out there do not account for dispersion at all, which is a shame because it produces really amazing visual effects.
Dispersion is caused by the fact that a material’s index of refraction depends on the wavelength of light passing through it. The index of refraction for a given wavelength can be determined using the Sellmeier equation with material-specific coefficients.
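The Sellmeier equation is n²(λ) = 1 + Σᵢ Bᵢλ² / (λ² − Cᵢ), with λ in micrometers. A small sketch, using the published coefficients for BK7 glass only as an example (the renderer would plug in diamond’s coefficients instead):

```python
import math

# Published Sellmeier coefficients for BK7 glass: (B_i, C_i) pairs,
# with C_i in square micrometers.
BK7 = [(1.03961212, 0.00600069867),
       (0.231792344, 0.0200179144),
       (1.01046945, 103.560653)]

def sellmeier_ior(wavelength_nm, coefficients=BK7):
    """Index of refraction at the given wavelength via the Sellmeier equation."""
    lam2 = (wavelength_nm * 1e-3) ** 2          # nanometers -> micrometers
    n2 = 1.0 + sum(b * lam2 / (lam2 - c) for b, c in coefficients)
    return math.sqrt(n2)
```

Shorter wavelengths see a higher index than longer ones (normal dispersion), which is exactly what spreads white light into a spectrum inside a prism or diamond.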
To implement dispersion, every camera ray is randomly assigned a wavelength in the range 360 to 700 nm, with a step of 5 nm between successive wavelengths. When a ray hits a dispersive material, the index of refraction is calculated on the spot and the ray reflects/refracts probabilistically based on the Fresnel equations. The contribution of each wavelength is multiplied by the RGB value for that wavelength, obtained by finding the XYZ response sensitivity for that particular wavelength and then converting the XYZ value to an RGB representation. Every wavelength is assumed to have the same intensity, and the resulting RGB values are normalized such that, summed over all wavelengths, each color channel adds up to 1. This ensures conservation of light energy.
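The probabilistic reflect/refract decision can be sketched with the exact Fresnel equations for a dielectric (names here are illustrative; the wavelength-dependent index would come from the Sellmeier evaluation):

```python
import math
import random

def fresnel_reflectance(cos_i, n1, n2):
    """Unpolarized Fresnel reflectance; returns 1.0 on total internal reflection."""
    sin_t2 = (n1 / n2) ** 2 * (1.0 - cos_i * cos_i)   # Snell's law, squared
    if sin_t2 >= 1.0:
        return 1.0                                     # total internal reflection
    cos_t = math.sqrt(1.0 - sin_t2)
    r_par = (n2 * cos_i - n1 * cos_t) / (n2 * cos_i + n1 * cos_t)
    r_perp = (n1 * cos_i - n2 * cos_t) / (n1 * cos_i + n2 * cos_t)
    return 0.5 * (r_par * r_par + r_perp * r_perp)     # average of polarizations

def reflect_or_refract(cos_i, n1, n2, rng=random.random):
    """Choose the reflected branch with probability equal to the reflectance."""
    return "reflect" if rng() < fresnel_reflectance(cos_i, n1, n2) else "refract"
```

Sampling the branch with probability equal to its Fresnel weight keeps the estimator unbiased without tracing both branches at every hit.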
I have included three images below. In the first, the wavelength for each camera ray is randomly assigned. The end result is kind of dark and it doesn’t look as good as it could.
My second image was rendered using importance sampling. Initial wavelengths are chosen probabilistically based on their XYZ curves, and are then modulated by the appropriate weights, as per Monte Carlo integration. As you can see, this approach gives the diamonds a much more vivid appearance.
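The wavelength importance sampling above can be sketched as follows. The curve values in the test are made up for illustration; the renderer would use the actual CIE XYZ response curves:

```python
import bisect

# Draw a wavelength index with probability proportional to a response
# curve, then divide its contribution by that probability so the Monte
# Carlo estimate stays unbiased.

def make_pdf(curve):
    total = sum(curve)
    return [c / total for c in curve]

def sample_wavelength(pdf, u):
    """Invert the discrete CDF: map uniform u in [0, 1) to an index."""
    cdf, acc = [], 0.0
    for p in pdf:
        acc += p
        cdf.append(acc)
    return bisect.bisect_left(cdf, u)

def contribution(f, pdf, i):
    """Importance-sampled contribution of wavelength index i."""
    return f[i] / pdf[i]
```

Wavelengths near the peak of the response curve get sampled more often but weighted down proportionally, so bright spectral regions are resolved with less noise without biasing the image.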
The third image was simply an attempt to go all out and see if I could create a really cool-looking scene. It was rendered using my double-Gaussian lens system with moderate depth-of-field effects. Each diamond is a transformed instance of the same mesh, with every instance sharing the same set of vertices in memory. The image was rendered quite large at 720p, so I would recommend opening it in a new page to get the full effect.