First Project in OpenGL – Duplication of the Portal Ending
This project involved learning how to write and format text on the screen, as well as how to synchronize audio with the animation. The entire program is event-driven, using an event file (essentially a script) to control the speed of the text playback, the image that appears in the lower right, and the text printing. This was a first investigation into event-driven programming and scripting. See the animation here.
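The original event-file format isn't described, but the idea of a timed script driving playback can be sketched as follows. This is a minimal CPU-side sketch, assuming a hypothetical line format of "time command value"; the command names are illustrative only.

```cpp
#include <cassert>
#include <sstream>
#include <string>
#include <vector>

// One timed event: at `time` seconds, apply `command` with `value`.
struct Event {
    double time;
    std::string command;  // e.g. "speed", "image", "text" (hypothetical names)
    std::string value;
};

// Parse a simple line-oriented event script: "<time> <command> <value...>".
std::vector<Event> parseEvents(std::istream& in) {
    std::vector<Event> events;
    std::string line;
    while (std::getline(in, line)) {
        std::istringstream ls(line);
        Event e;
        if (ls >> e.time >> e.command) {
            std::getline(ls, e.value);
            if (!e.value.empty() && e.value.front() == ' ')
                e.value.erase(0, 1);  // drop the separating space
            events.push_back(e);
        }
    }
    return events;
}

// Return all events whose time falls in (prev, now] — the ones the
// playback loop should fire on this frame.
std::vector<Event> due(const std::vector<Event>& events, double prev, double now) {
    std::vector<Event> fired;
    for (const auto& e : events)
        if (e.time > prev && e.time <= now) fired.push_back(e);
    return fired;
}
```

Each frame, the playback loop would call `due` with the previous and current timestamps and dispatch whatever fires, which keeps the animation logic entirely in the script file.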
Octree Partitioning and Ray Tracing
This project involved de-projecting through the camera's frustum matrix to cast rays from pixels on the screen. Additionally, the octree spatial-partitioning example involved dynamic construction and allocation of an octree structure in C++, making use of cubic bounding boxes and some special properties of 3D geometry. The octree test video can be seen here.
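The dynamically constructed octree described above can be sketched roughly as below. This is a minimal illustration, not the project's actual code: each node owns a cubic bounding box and splits into eight children once it exceeds a small point capacity (the capacity value here is an assumption).

```cpp
#include <array>
#include <cassert>
#include <cmath>
#include <memory>
#include <vector>

struct Vec3 { float x, y, z; };

// Axis-aligned cubic bounding box: center plus half-width.
struct Cube {
    Vec3 center;
    float half;
    bool contains(const Vec3& p) const {
        return p.x >= center.x - half && p.x <= center.x + half &&
               p.y >= center.y - half && p.y <= center.y + half &&
               p.z >= center.z - half && p.z <= center.z + half;
    }
};

// Octree node: subdivides into eight child cubes once it holds
// more than kCapacity points.
struct Octree {
    static constexpr size_t kCapacity = 4;  // illustrative threshold
    Cube bounds;
    std::vector<Vec3> points;
    std::array<std::unique_ptr<Octree>, 8> children;

    explicit Octree(Cube b) : bounds(b) {}

    bool insert(const Vec3& p) {
        if (!bounds.contains(p)) return false;
        if (!children[0] && points.size() < kCapacity) {
            points.push_back(p);
            return true;
        }
        if (!children[0]) subdivide();
        for (auto& c : children)
            if (c->insert(p)) return true;
        return false;  // unreachable when p lies inside bounds
    }

    void subdivide() {
        float h = bounds.half * 0.5f;
        int i = 0;
        for (int dx : {-1, 1})
            for (int dy : {-1, 1})
                for (int dz : {-1, 1})
                    children[i++] = std::make_unique<Octree>(Cube{
                        {bounds.center.x + dx * h,
                         bounds.center.y + dy * h,
                         bounds.center.z + dz * h}, h});
        // Push the points held at this node down into the children.
        for (const auto& p : points)
            for (auto& c : children)
                if (c->insert(p)) break;
        points.clear();
    }
};
```

Because the eight child cubes exactly tile the parent cube, a ray or point query only ever descends into the children whose bounds it actually touches, which is what makes the structure useful for accelerating ray tracing.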
Another project involved a prototype for a space game, which required a mechanism to plot faster-than-light jumps from one star system to another. This project looked at creating a 3D spherical coordinate system that could be easily controlled with the mouse for viewing, along with the ability to drag a vector from one star and snap it to another. The camera-control code was later used by a classmate in one of their computational astrophysics projects, allowing the user to jump from planet to planet in the solar system and move their view around. The video demonstrating this test can be seen here.
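The mouse-controlled spherical viewing system can be sketched as an orbit camera: the eye position is stored as (radius, azimuth, elevation) around a target and converted to Cartesian coordinates each frame. This is a generic sketch of the technique, not the project's code; the sensitivity and clamp values are assumptions.

```cpp
#include <cassert>
#include <cmath>

struct Vec3 { double x, y, z; };

// Orbit camera in spherical coordinates: mouse drag adjusts azimuth
// and elevation; the radius would be driven by the scroll wheel.
struct OrbitCamera {
    Vec3 target{0, 0, 0};   // point being looked at (e.g. a star)
    double radius = 10.0;
    double azimuth = 0.0;   // radians around the vertical axis
    double elevation = 0.0; // radians above the horizontal plane

    // Convert the spherical coordinates to a Cartesian eye position.
    Vec3 position() const {
        return {target.x + radius * std::cos(elevation) * std::sin(azimuth),
                target.y + radius * std::sin(elevation),
                target.z + radius * std::cos(elevation) * std::cos(azimuth)};
    }

    // Mouse-drag handler: dx/dy in pixels, scaled to radians.
    void drag(double dx, double dy, double sensitivity = 0.005) {
        azimuth += dx * sensitivity;
        elevation += dy * sensitivity;
        // Clamp elevation so the camera never flips over the pole.
        const double limit = 1.5;  // just under pi/2
        if (elevation > limit) elevation = limit;
        if (elevation < -limit) elevation = -limit;
    }
};
```

Re-targeting the camera to a different star is then just a matter of swapping `target`, which is what makes this representation convenient for jumping from body to body.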
SimCity-Style City Management Game Prototype
This project involved developing a framework for height-map terrain generation, terrain tessellation, compact binary storage, GPU state management, and real-time variable-surface ray casting. Essentially, a height map representing the terrain is loaded from a TGA image. The program begins by tessellating a plane so that each point on the TGA image corresponds to a single vertex. Simultaneously, the program generates a data-storage map in which each square of terrain is represented by a pixel.

When rendering begins, the flat plane, height map, and data map are passed to the shader, which applies the correct height displacement from the height map, numerically computes the normals by differentiation, and dynamically selects a texture for each of the terrain types (sand, grass, snow, and rock for steep terrain), blending the textures together. Next, the shader checks the data map to see whether anything specific should be drawn on a given square, such as a road or intersection, and which direction it should face.

The user can then activate tools to draw roads, draw zones, and clear tiles. This works by ray tracing from the cursor on the screen to an intersection on the geometry, then projecting that point onto the corresponding pixel of the data map, where the data is written. The video demonstration of this program can be seen here.
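The "normals by differentiation" step above can be illustrated on the CPU with central finite differences over the height samples. This is a sketch of the numerical scheme, assuming unit grid spacing and clamp-to-edge sampling; the shader version would sample the height texture the same way.

```cpp
#include <cassert>
#include <cmath>
#include <vector>

struct Vec3 { float x, y, z; };

// Height field stored row-major, one height sample per texel.
struct HeightMap {
    int width, height;
    std::vector<float> h;  // width * height samples
    float at(int x, int y) const {
        // Clamp to the edge so border texels still have neighbours.
        if (x < 0) x = 0;
        if (x >= width) x = width - 1;
        if (y < 0) y = 0;
        if (y >= height) y = height - 1;
        return h[y * width + x];
    }
};

// Numerically compute the surface normal at (x, y) via central
// differences of the height field.
Vec3 normalAt(const HeightMap& m, int x, int y, float gridSpacing = 1.0f) {
    float dhdx = (m.at(x + 1, y) - m.at(x - 1, y)) / (2.0f * gridSpacing);
    float dhdy = (m.at(x, y + 1) - m.at(x, y - 1)) / (2.0f * gridSpacing);
    // The unnormalised normal of the surface z = h(x, y) is (-dh/dx, -dh/dy, 1).
    Vec3 n{-dhdx, -dhdy, 1.0f};
    float len = std::sqrt(n.x * n.x + n.y * n.y + n.z * n.z);
    return {n.x / len, n.y / len, n.z / len};
}
```

The steepness test for the rock texture falls out of the same computation: the smaller the normal's vertical component, the steeper the terrain at that texel.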