Reflective Presentation – Studio 3

Ray Tracer Optimisation


  • Threading

Learned how to use OpenMP, some of the issues of threading and the need for thread safety, and how altering the order of loops can increase or decrease render times.
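The loop-order point can be shown with a minimal sketch (not the actual ray tracer code): an OpenMP-parallelised loop over a row-major framebuffer, where iterating rows in the outer loop keeps memory access sequential.

```cpp
#include <vector>
#include <cstddef>

// Minimal sketch, not the real renderer: shading a width*height framebuffer
// in parallel with OpenMP. With y outer and x inner, each thread walks its
// rows sequentially through the row-major buffer, which is cache-friendly;
// swapping the loops strides by `width` on every access and can noticeably
// slow the render. Each iteration writes a distinct element, so no locks
// are needed - that's the thread-safety part.
void renderFrame(std::vector<float>& pixels, std::size_t width, std::size_t height)
{
    #pragma omp parallel for
    for (long y = 0; y < static_cast<long>(height); ++y)   // rows split across threads
        for (std::size_t x = 0; x < width; ++x)
            pixels[y * width + x] = 0.5f;                  // stand-in for per-pixel shading
}
```

The same code compiles and runs single-threaded if OpenMP isn't enabled, since the pragma is simply ignored.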



Personal Progress – Studio 3

With the trimester drawing to a close it's time for another post about where I'm at and where I'm going. Since this is the last studio unit I'll be doing, I need to look toward my goals after graduation rather than just into next trimester. I mentioned in my last one of these posts that I wanted to focus on learning other languages. Over the course of this trimester I've worked with C# Windows Forms, as well as a little bit of Lua and Node.js. Between studio and final project I also got to work on a lot more project management documentation; working with Trello and UML were areas of documentation I hadn't taken as much part in before this trimester.

In terms of next trimester I want to get my resume and portfolio updated (which needs to be done for internships anyway) as well as build my own website to host this blog and further expand my skills. Beyond next trimester I'm still looking at indie development as my primary goal. I have a couple of ideas I want to prototype to see if they're worth pursuing. Even if I don't take them further than that, they could be used as additional portfolio pieces. There may also be an opportunity in documenting the development of these prototypes through Twitch/YouTube to build an audience and help get attention for any projects I decide to pursue further.

I've been doing some research to see what existing indie studios do to try and mitigate loss and ensure they can keep pursuing game development as a career. Lost Garden's article talks about the chance of success and the need to produce multiple failures for each successful game. They also mention heavy use of prototypes to help determine a concept's chance of success before investing in it fully. Simon Roth has a video that talks about using marketing and PR research to get a better idea of potential sales and improve the visibility of your product. Both carry a similar message: research and proper planning are paramount to increasing your chance of success. They also talk about the time benefits of using third-party marketing companies to more effectively target your audience, or just using tools to make it more efficient. I've also been reading the Game Career Guide 2015, which has an interesting article about getting funding for your game by targeting new platforms.

To sum up, I want to take some time after graduating to prototype some small ideas and expand my portfolio. I'll continue to apply for any positions in the local games industry, or software development in general, that I may be suitable for, as any experience I can gain early will be valuable. Since I already have some experience working with YouTube/Twitch, I will also look at ways to document my processes and potentially build an audience/supplement my income that way.

Affective Programming Task

As I mentioned in one of my earlier posts this trimester, one of our tasks was to create an external input tool that would enable the design students to use a webcam, eye tracker, or some other form of non-standard input within their Unity projects. Following on from this post in particular, we decided against trying to convert eyeLike into a library, since it was looking more and more difficult to actually create a working crossover in Unity.

We opted for an alternative approach that utilized some of our networking code from the draw client. Instead of running eyeLike within Unity we will have the Unity application launch eyeLike itself and, using some additions to the code base, create a local network connection between them. This means that eyeLike can send a constant stream of update packets to the Unity side of the application, which will give the user a number of different pieces of information to draw from.
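The exact packet layout isn't documented in this post, but the idea can be sketched with a hypothetical format (the field names and layout below are my illustration, not the project's actual protocol): eyeLike serialises one update per frame and streams it over the local connection for the Unity side to parse.

```cpp
#include <string>
#include <sstream>

// Hypothetical packet layout, not the project's actual format: eyeLike
// serialises one update per frame as "x,y,detected" before sending it over
// the local network connection; the Unity side splits on commas and parses
// the fields back out.
std::string makeUpdatePacket(int pupilX, int pupilY, bool detected)
{
    std::ostringstream out;
    out << pupilX << ',' << pupilY << ',' << (detected ? 1 : 0);
    return out.str();
}
```

A plain-text format like this is easy to debug from either end of the connection, at the cost of a few extra bytes per packet.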

With our target being a system that can tell when the user is looking at their keyboard I think we came up with a decent solution. While it still has quite a few false positives and some instances where it won’t properly detect at all, it does provide the required functionality. Considering the use of a standard webcam rather than dedicated hardware I’m impressed with the result.

Repo Link.

Flocking Sim: Data Metrics and Wrapping

For the last part of the Flocking Task we were asked to create some data metrics to enable the user to tweak the simulation and see the results. I decided to go with a heat map, since I'm interested in learning how to visualize positional data in that way. After hours of crashes due to some silly mistakes (I was trying to put -300 to 300 position data into a 0 to 600 array without converting), I managed to get the array to store an incremental value each time an agent was on that position. I then took that data and incremented a low-alpha colour over each pixel.
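The conversion that was missing looks something like this sketch (halfExtent of 300 matches the base simulation; the function and parameter names are my own):

```cpp
#include <vector>

// The crash described above: world positions run -300..300 but the heat
// array is indexed 0..600, so each coordinate needs an offset before use.
void recordVisit(std::vector<int>& heat, int mapWidth, int halfExtent,
                 int posX, int posY)
{
    int ix = posX + halfExtent;              // -300..300 -> 0..600
    int iy = posY + halfExtent;
    if (ix >= 0 && ix < mapWidth && iy >= 0 && iy < mapWidth)
        ++heat[iy * mapWidth + ix];          // one increment per agent visit
}
```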


The result when run with the base simulation configuration:


Next I want to extend this into a proper heat map by having the data on each point radiate out slightly, and then change the colour of the pixel based on the value. To start with I found some example code for extending the colour:
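The example code I found isn't reproduced here, but the general idea of radiating each point can be sketched like this (my own version, with a simple linear falloff over Manhattan distance):

```cpp
#include <vector>
#include <cstdlib>

// Sketch of the radiating step: each non-zero cell bleeds its count into a
// small neighbourhood with a linear falloff, producing the spread-out look
// instead of isolated single pixels.
std::vector<float> radiate(const std::vector<int>& heat, int w, int h, int radius)
{
    std::vector<float> out(heat.size(), 0.0f);
    for (int y = 0; y < h; ++y)
        for (int x = 0; x < w; ++x) {
            if (heat[y * w + x] == 0) continue;
            for (int dy = -radius; dy <= radius; ++dy)
                for (int dx = -radius; dx <= radius; ++dx) {
                    int nx = x + dx, ny = y + dy;
                    if (nx < 0 || nx >= w || ny < 0 || ny >= h) continue;
                    int d = std::abs(dx) + std::abs(dy);   // Manhattan distance
                    if (d > radius) continue;
                    out[ny * w + nx] += heat[y * w + x] * (1.0f - float(d) / (radius + 1));
                }
        }
    return out;
}
```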


This gave an interesting result compared to the original:

[Images: HeatMap - Complex02, HeatMap - Basic02]

To finish the heat map I wanted to try adding different colours based on weighting. I tried the formulas from this site. Unfortunately they don't have the desired effect with my heat maps:


[Image: HeatMap - Colour]
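As a fallback I could use a much simpler mapping than the site's formulas; a minimal sketch (my own stand-in, not those formulas) maps a normalised weight from blue (cold) to red (hot):

```cpp
struct RGB { unsigned char r, g, b; };

// Plain stand-in gradient, not the formulas from the site above: maps a
// normalised weight t in [0,1] from blue (cold) through to red (hot).
RGB heatColour(float t)
{
    if (t < 0.0f) t = 0.0f;          // clamp out-of-range weights
    if (t > 1.0f) t = 1.0f;
    RGB c;
    c.r = static_cast<unsigned char>(255.0f * t);
    c.g = 0;
    c.b = static_cast<unsigned char>(255.0f * (1.0f - t));
    return c;
}
```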

Since I was rushing to get it working after dealing with errors all morning, I decided to hard-code the size of the heat map image. Currently there is no limit on how far an agent can travel, which means they can eventually move beyond the range of the map. As it is currently coded, however, the position data will clamp to the map range.

I still needed to add some kind of feature to the simulation as part of the first task. I decided that a wrapping feature could be handy as a way to contain the agents without having them bump into a surrounding wall.
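Per-axis, the wrapping amounts to folding each coordinate back into range (a sketch of the idea; a halfExtent of 100 gives the 200 by 200 wrap):

```cpp
// Folds a coordinate back into [-halfExtent, halfExtent) so agents leaving
// one edge reappear on the opposite side instead of hitting a wall.
float wrapCoord(float pos, float halfExtent)
{
    float span = 2.0f * halfExtent;
    while (pos >= halfExtent)  pos -= span;   // ran off the positive edge
    while (pos < -halfExtent)  pos += span;   // ran off the negative edge
    return pos;
}
```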


The result with wrapping at 200 by 200:

[Image: HeatMap - Complex200]

Flocking Sim: Progress

Decided to sit down today and try to complete the flocking task (since it's overdue). To start with I added all of the data options to the configuration tool, which brought the total number of data entries being sent to the simulation up to 98. Once the data was in the form I was able to convert it into my flockData[] int array. On the simulation side I changed all the default data settings to read from the input array sent by the form. Because of the method I used it was easy enough to add the data, since each element in the array matched a named enumerator on both sides of the code.
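The shared-enumerator scheme looks roughly like this (the setting names here are illustrative, not the real ones): both the tool and the simulation agree on the same enumerator list, so each named entry indexes the same slot in the array on each side, and a new setting only needs a new enum entry.

```cpp
// Illustrative names only - the real project has 98 entries. Because both
// sides compile the same enumerator list, flockData[AgentCount] refers to
// the same slot whether it's being written by the config tool or read by
// the simulation.
enum FlockSetting { AgentCount, AgentSize, SeparationWeight, SettingCount };

int flockData[SettingCount];   // filled from the form's data on startup

int getSetting(FlockSetting s) { return flockData[s]; }
```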


I didn't have the time to test that each individual setting converts correctly. I did, however, change the size of the objects and alter the number of agents, which had an obvious effect on the simulation, showing that it's working.


First window is the default, second is with the changes listed in the third. You can see the rectangle object is on the other side of the start point, and the dimensions are changed to be less square.

Open Data: .KML conversion

After cleaning up my data set using SQL queries, I used an online CSV to KML converter to convert the data. While this converted fine, it didn't offer much customisation. After seeing the data set in Google Earth it was suggested that I change some settings in the KML for each pin. Since the converter I used was unable to do this, I decided to pick up my earlier attempt at manually converting CSV to KML and go from there.


Originally I had been able to successfully import the .csv and store its data within a data structure; however, I had run into some issues with condensing and sorting the information, which led to using SQL queries as an alternative. Now that I have the data in the condensed format I want, it was relatively easy to write the second half of the program and convert the data structure into KML. In particular, I needed to adjust the extrude and altitudeMode settings in the KML. With these changed, Google Earth will display the pin in the air based on altitude and draw a line down to its ground point.
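The KML-emitting half of the converter can be sketched like this (function and parameter names are my own; the tag names are standard KML):

```cpp
#include <sstream>
#include <string>

// Emits one KML Placemark per crossing. <extrude>1</extrude> makes Google
// Earth draw the line down to the ground, and altitudeMode
// "relativeToGround" lifts the pin by the altitude value. Note that KML
// coordinates are longitude,latitude,altitude - exactly the reversal that
// put my pins in the middle of the ocean.
std::string makePlacemark(const std::string& name, double lat, double lon, double altitude)
{
    std::ostringstream kml;
    kml << "<Placemark><name>" << name << "</name><Point>"
        << "<extrude>1</extrude>"
        << "<altitudeMode>relativeToGround</altitudeMode>"
        << "<coordinates>" << lon << "," << lat << "," << altitude << "</coordinates>"
        << "</Point></Placemark>";
    return kml.str();
}
```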


The result:

[Image: OpenDataResult]

I did run into some minor errors with syntax at first, mostly accidentally adding spaces where there shouldn't be any, and getting the latitude and longitude backwards in the KML, resulting in the pins being in the middle of the ocean.

Open Data: .CSV and SQL Queries

For the open data task we have been asked to find a publicly available data set and convert it into a useful format that can be viewed on Google Earth. I decided on a data set of Rail Crossing Incidents over the last few years (Link). This mostly involved incidents where a vehicle hit the crossing boom.


My first attempt involved simply throwing the data set into a CSV to KML converter. The result created a number of pins, one on each crossing; however, it was unable to process all the data into the description field, which made the result fairly useless. My second attempt was to write a converter myself in C++ and specifically tailor it to the data set. While I was able to pull the data in just fine, I ran into some issues when trying to manipulate it to make it more useful. It was suggested that we use SQL queries to try and clean up the data and make it more useful when converted to KML. After running into some issues trying to understand not only the format of an SQL query, but also the specific formatting required to use it inside a Google Sheets document, Greg introduced me to SQLiteStudio. After importing the data set into a new database I started experimenting with different queries to try and get the data into a better format.

To start with I tried removing some of the less useful information (collision code, type, incident level, etc.) and condensing the data so it only shows one entry for each crossing.

SELECT "Crossing Road Name", "Nearest Station", Latitude, Longitude FROM RailCrossingDataOriginal GROUP BY "Crossing Road Name"

I decided to change this slightly by adding a new column that shows the total number of entries for each crossing.

SELECT "Crossing Road Name", COUNT("Level Crossing ID") AS 'Frequency', Latitude, Longitude FROM RailCrossingDataOriginal GROUP BY "Crossing Road Name"

Since I was able to get the frequency for each crossing, I decided to try making a combined "weight" value for each crossing based on the number of occurrences, incident level, injuries, and fatalities.

SELECT "Crossing Road Name", "Nearest Station", Latitude, Longitude, (4 / "Incident Level") * COUNT("Level Crossing ID") + (1 * COUNT("Minor Injuries")) + (2 * COUNT("Serious Injuries")) + (3 * COUNT(Fatalities)) AS Weighting FROM RailCrossingDataOriginal GROUP BY "Crossing Road Name"


Unfortunately I can't get Google Earth to install on my PC at the moment, so I will need to wait until my class tonight to see if this version is better than the previous one. My current plan, based on the range of the weighting values, is to use them as altitude in the positional data and see how the result looks.

Moving forward from this I want to look into two alternatives. First is to write a custom converter and turn the weighting value into a colour for the pin to better show the “danger” level of that crossing. The other is to condense the data as I have above but include the individual entries for each crossing within a single pin.