A Scene Editor and Raytracer

The original raytracer was created in the fall of 2009. In 2010 the program was completely rewritten to improve efficiency, and the original scene graph editor was replaced by a more robust and user-friendly one. Several new algorithms were implemented for intersection tests and bounding volumes, and rendering techniques such as Fresnel refraction and supersampling were added.

The old (and somewhat scroungy) scene editor interface...
...and the simple image it produces.
The new (and shiny) scene editor interface...
...and the much better image it produces.


The program is composed of two major components: the scene editor, where the user creates and interacts with the scene, and the renderer, where the scene is rendered via raytracing.

Tools and External Libraries

The program was coded in C++ using OpenGL/GLUT.

For mathematical calculations, the program uses a modified version of the C++ vector and matrix algebra routines included in Graphics Gems IV (other recommended libraries include OgreMath and Boost).

SOIL was used to write the rendered image to a BMP file.

Scene Graph

A scene graph is a collection of nodes in a graph or tree structure that represents the logical and spatial organization of a graphical scene. It is used to maintain and draw a hierarchically structured scene with OpenGL. A mouse-based FPS-style camera offers more flexibility when preparing the scene for rendering.


The scene graph supports the following geometric primitives: spheres, cubes, and complex mesh objects. The user can modify the scene by creating, deleting, and copying spheres and cubes, as well as furniture such as tables, chairs, and filing cabinets. Each node has pointers to its parent, its first child, and its left and right neighbors. A node contains many attributes, including translation, rotation, and scaling, as well as its bounding box, material type, and other information. Once an attribute is modified, the change is propagated to all of its children via recursion:

     void SceneGraph::doSomethingToNode(Node *node)
     {
         // Recursively process all children
         if (node->child != NULL)
             doSomethingToNode(node->child);
         // Process siblings, but stop at the selected node so the
         // change does not spread to the selected node's neighbors
         if (node != selected && node->next_sibling != NULL)
             doSomethingToNode(node->next_sibling);
         // Apply the change (e.g. the transformation update) to this node
         applyChangeToNode(node);
     }

Mesh Objects

The scene renderer supports a robust mesh class that is capable of representing complex objects. The mesh class stores an arbitrary set of triangles that represent the geometry of an object.

A mesh object is first created by reading a configuration file that contains a set of points, which are then used to create complex objects through either extrusion or surface revolution. The face list, vertices, and corresponding normals are then calculated and stored in the mesh class for rendering.


An extrusion is essentially a prism. The polygonal base is either retrieved from the configuration file or defined by the user. If the base is a convex polygon, the endcaps are displayed; otherwise only the sides appear on screen. The sides of the extrusion are then generated automatically by building triangles out of the vertices.
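The side-building step can be sketched as follows. This is a minimal illustration, not the project's actual code: `Vec3`, `Triangle`, and `buildSides` are hypothetical names, and the top ring is assumed to be the base polygon already translated along the extrusion axis.

```cpp
#include <cstddef>
#include <vector>

struct Vec3 { float x, y, z; };
struct Triangle { Vec3 a, b, c; };

// Build the side walls of an extrusion: for every edge of the base
// polygon, connect it to the matching edge of the top polygon with
// two triangles (each quad is split along its diagonal).
std::vector<Triangle> buildSides(const std::vector<Vec3>& base,
                                 const std::vector<Vec3>& top)
{
    std::vector<Triangle> tris;
    const std::size_t n = base.size();
    for (std::size_t i = 0; i < n; ++i) {
        std::size_t j = (i + 1) % n;   // wrap around to close the prism
        tris.push_back({ base[i], base[j], top[j] });
        tris.push_back({ base[i], top[j], top[i] });
    }
    return tris;
}
```

A square base therefore yields four quads, i.e. eight side triangles, plus whatever endcaps the convexity test allows.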

To check whether a polygon is convex, we examine each vertex to see whether the turn formed by the previous and next edges has the same sign (using the cross product); if any turn has a different sign, the polygon is concave:

     bool Mesh::isConvex()
     {
         int n = points.size();
         if (n < 3)  // Not a polygon
             return false;

         // The turn at each vertex must have the same sign; the sign is
         // taken from the y-component of the cross product of the edges
         // entering and leaving the vertex
         int turn_sign = 0;
         for (int ii = 0; ii < n; ii++) {
             Vector3 turn = cross(points[(ii + 1) % n] - points[ii],
                                  points[(ii + 2) % n] - points[(ii + 1) % n]);
             if (fabs(turn[1]) > EPSILON) {
                 // If it's negative, the turns at all vertices
                 // should be negative
                 if (turn[1] < -EPSILON) {
                     if (turn_sign == 1)
                         return false;
                     turn_sign = -1;
                 }
                 // If it's positive, the turns at all vertices
                 // should be positive
                 else {
                     if (turn_sign == -1)
                         return false;
                     turn_sign = 1;
                 }
             }
         }
         return true;
     }

Surface Revolutions

A surfrev takes a polyline and rotates it about the Y-axis to create a 3D object. The rotation is made in discrete steps, and a side is created at each step. For example, with a step size of 30 degrees, we end up with a 12-sided shape. If the polyline ends at the Y-axis, the object is closed at the end by default; if it does not, the endcaps are constructed automatically. After the vertices have been determined, triangles are created in the same fashion as for extrusions.
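The vertex-generation step above can be sketched like this. `Vec3` and `revolve` are illustrative names only; the sketch just produces one ring of rotated polyline copies per angular step, which the triangle-building pass then stitches together.

```cpp
#include <cmath>
#include <vector>

struct Vec3 { float x, y, z; };

// Generate the vertex rings of a surface of revolution: rotate each
// polyline point about the Y-axis in discrete angular steps. With a
// 30-degree step this produces 12 rings, one per side of the shape.
std::vector<std::vector<Vec3>> revolve(const std::vector<Vec3>& polyline,
                                       float step_degrees)
{
    const int steps = static_cast<int>(360.0f / step_degrees);
    std::vector<std::vector<Vec3>> rings(steps);
    for (int s = 0; s < steps; ++s) {
        float a = s * step_degrees * 3.14159265f / 180.0f;
        float ca = std::cos(a), sa = std::sin(a);
        for (const Vec3& p : polyline)
            rings[s].push_back({ p.x * ca + p.z * sa,   // rotate about Y
                                 p.y,
                                 -p.x * sa + p.z * ca });
    }
    return rings;
}
```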

Bounding Volumes

For bounding boxes, min/max values for the x, y, and z components of each node are calculated. Once the min/max values define the AABB, the OBB and the bounding sphere follow fairly easily: the OBB is obtained by applying a node's local transformation to the AABB in its original state, and the bounding sphere by finding the maximum radius from the AABB min/max points.
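The bounding-sphere half of that step can be sketched as follows (`Vec3`, `Sphere`, and `sphereFromAABB` are illustrative names, not the project's actual types):

```cpp
#include <cmath>

struct Vec3 { float x, y, z; };
struct Sphere { Vec3 center; float radius; };

// Derive a bounding sphere from AABB extents: the center is the
// midpoint of min/max, and the radius is half the diagonal length,
// i.e. the distance from the center to a corner.
Sphere sphereFromAABB(const Vec3& mn, const Vec3& mx)
{
    Vec3 c = { (mn.x + mx.x) * 0.5f,
               (mn.y + mx.y) * 0.5f,
               (mn.z + mx.z) * 0.5f };
    float dx = mx.x - c.x, dy = mx.y - c.y, dz = mx.z - c.z;
    return { c, std::sqrt(dx * dx + dy * dy + dz * dz) };
}
```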

When changes are made to a node's transformation, the bounding box is updated to reflect the change. The AABB is the simplest to calculate and is computed first. To obtain the new AABB, take the corner points defined by the original min/max values for x, y, and z, apply the transformation to each corner, and compare the new values to the old min/max:

     // Transform every corner of the original AABB and grow the new
     // min/max extents to contain the transformed points
     for (int ii = 0; ii < 8; ii++) {
         Vector3 position = transformation * old_corners[ii];
         if (position[0] < aabb_min_x) aabb_min_x = position[0];
         if (position[0] > aabb_max_x) aabb_max_x = position[0];
         if (position[1] < aabb_min_y) aabb_min_y = position[1];
         if (position[1] > aabb_max_y) aabb_max_y = position[1];
         if (position[2] < aabb_min_z) aabb_min_z = position[2];
         if (position[2] > aabb_max_z) aabb_max_z = position[2];
     }

Intersection Tests

An important part of raytracing involves ray-object intersection tests. So before moving on to the raytracer, we want to make sure we have working (and fast) intersection testing routines. Intersection tests were implemented to support the existing three geometric primitives: sphere, cube, and triangle (mesh).

For each intersection test, we are given the ray origin, the ray direction, and the transformation matrix of the geometric primitive. To simplify the calculations, we use the transformation matrix to transform the ray from world space into the primitive's object space. After we obtain the desired t value, we transform the result back into world space.

To transform the ray, we need the inverse of the transformation matrix. To improve performance, we precompute the required inverse matrices for each node before raytracing begins.

Once we have determined that there is an intersection, we need to calculate the normal (a vector that is perpendicular to the object) to use for lighting later.
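One detail worth noting when bringing the object-space normal back to world space: normals do not transform like points. Under non-uniform scaling they stay perpendicular only if multiplied by the transpose of the inverse matrix. A minimal sketch, with `Mat4` and `transformNormal` as illustrative names:

```cpp
#include <cmath>

struct Vec3 { float x, y, z; };
struct Mat4 { float m[4][4]; };

// Transform an object-space normal to world space using the node's
// precomputed inverse matrix: multiplying by the *transpose* of the
// inverse (here, by its columns) keeps the normal perpendicular to
// the surface even under non-uniform scaling.
Vec3 transformNormal(const Mat4& inverse, const Vec3& n)
{
    Vec3 r = { inverse.m[0][0]*n.x + inverse.m[1][0]*n.y + inverse.m[2][0]*n.z,
               inverse.m[0][1]*n.x + inverse.m[1][1]*n.y + inverse.m[2][1]*n.z,
               inverse.m[0][2]*n.x + inverse.m[1][2]*n.y + inverse.m[2][2]*n.z };
    float len = std::sqrt(r.x*r.x + r.y*r.y + r.z*r.z);
    return { r.x / len, r.y / len, r.z / len };   // renormalize
}
```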

For ray-sphere intersection, both the algebraic and the geometric tests were implemented. For ray-triangle intersection, the program uses the method described by Möller and Trumbore in their paper Fast, Minimum Storage Ray/Triangle Intersection. Finally, one simple ray-cube intersection test is to run twelve ray-triangle tests, but that is very inefficient, so we instead implemented the slabs method for ray-AABB intersection, as well as a test using Plücker coordinates.
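The slabs method can be sketched as follows (a minimal standalone version; the project's actual routine may differ). The ray is clipped against the three pairs of axis-aligned planes ("slabs"), and it hits the box only if the three resulting t-intervals overlap:

```cpp
#include <algorithm>
#include <utility>

struct Vec3 { float x, y, z; };

// Slabs method for ray-AABB intersection: intersect the ray with each
// pair of axis-aligned bounding planes and intersect the resulting
// parameter intervals. An empty intersection means a miss.
bool rayAABB(const Vec3& origin, const Vec3& dir,
             const Vec3& mn, const Vec3& mx)
{
    float tmin = -1e30f, tmax = 1e30f;
    const float o[3]  = { origin.x, origin.y, origin.z };
    const float d[3]  = { dir.x, dir.y, dir.z };
    const float lo[3] = { mn.x, mn.y, mn.z };
    const float hi[3] = { mx.x, mx.y, mx.z };
    for (int i = 0; i < 3; ++i) {
        float inv = 1.0f / d[i];       // IEEE infinity handles d[i] == 0
        float t0 = (lo[i] - o[i]) * inv;
        float t1 = (hi[i] - o[i]) * inv;
        if (inv < 0.0f) std::swap(t0, t1);
        tmin = std::max(tmin, t0);
        tmax = std::min(tmax, t1);
        if (tmin > tmax)
            return false;              // Slab intervals do not overlap
    }
    return tmax >= 0.0f;               // Box is not entirely behind the ray
}
```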


When rendering, the raytracer shoots a ray from the eye (viewport) through each pixel on the screen. As the ray intersects objects in its path, it calculates the diffuse/specular lighting on the object at that point and accumulates color. If the object is reflective, the ray bounces and continues in a new direction, gathering color all the while. To prevent the ray from bouncing infinitely, the default maximum number of bounces is 5. If the ray encounters a transparent object, it passes through the object while obeying Snell's law of refraction and Beer's law.

With spherical bounding volumes, we can use Bounding Volume Hierarchies (BVHs) to speed up the rendering process even more.
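The payoff of the spherical bounds is a cheap rejection test at each BVH node: if the ray misses a node's bounding sphere, its whole subtree can be skipped without any triangle tests. A minimal sketch of the geometric test (`mayHit` is an illustrative name; the direction is assumed normalized):

```cpp
struct Vec3 { float x, y, z; };

// Conservative ray vs. bounding-sphere test: project the center onto
// the ray to find the closest point, then compare its squared distance
// to the squared radius. A miss here means nothing inside the sphere
// (and hence nothing in the BVH subtree) can be hit.
bool mayHit(const Vec3& origin, const Vec3& dir,   // dir normalized
            const Vec3& center, float radius)
{
    Vec3 oc = { center.x - origin.x, center.y - origin.y, center.z - origin.z };
    float t = oc.x * dir.x + oc.y * dir.y + oc.z * dir.z;  // projection
    if (t < 0.0f) t = 0.0f;        // closest point is the origin itself
    Vec3 p = { origin.x + t * dir.x - center.x,
               origin.y + t * dir.y - center.y,
               origin.z + t * dir.z - center.z };
    float d2 = p.x * p.x + p.y * p.y + p.z * p.z;
    return d2 <= radius * radius;
}
```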



There is still a bit of a hiccup in quality when rendering with Fresnel reflection. OBJ file support should be added so the scene renderer can import .OBJ files. Another improvement would be a spline creator, so users can "draw" shapes that are then converted to mesh objects. Photon mapping is another nice-to-have effect.


No anti-aliasing.
With anti-aliasing.
High reflection.
Low specular, regular refraction.
Mesh object.