Affective Programming Task

As I mentioned in one of my earlier posts this trimester, one of our tasks was to create an external input tool that would enable the design students to use a webcam, eye tracker, or some other form of non-standard input within their Unity projects. Following on from that post in particular, we decided against trying to convert eyeLike into a library, since it was looking increasingly difficult to create a working crossover in Unity.

We opted for an alternative approach that reuses some of our networking code from the draw client. Instead of running eyeLike within Unity, the Unity application launches eyeLike itself and, with some additions to the code base, opens a local network connection between them. eyeLike can then send a constant stream of update packets to the Unity side of the application, giving the user a number of different pieces of information to draw from.
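As a rough illustration of the Unity side, here is a minimal sketch of a listener that drains those update packets each frame. I'm assuming a UDP socket, a plain-text packet format, and port 11000 purely for the example; the actual transport and packet layout are whatever the eyeLike additions end up sending.

using System.Net;
using System.Net.Sockets;
using System.Text;
using UnityEngine;

public class EyeLikeListener : MonoBehaviour
{
    UdpClient client;

    void Start()
    {
        // Listen on a local port; eyeLike is launched separately and sends to this port.
        // The port number here is a placeholder.
        client = new UdpClient(11000);
    }

    void Update()
    {
        // Drain any packets that have arrived since the last frame.
        while (client.Available > 0)
        {
            IPEndPoint sender = new IPEndPoint(IPAddress.Loopback, 0);
            byte[] data = client.Receive(ref sender);
            string packet = Encoding.ASCII.GetString(data);
            // Parse gaze/eye-centre values out of the packet here.
            Debug.Log(packet);
        }
    }

    void OnDestroy()
    {
        client.Close();
    }
}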

With our target being a system that can tell when the user is looking at their keyboard, I think we came up with a decent solution. While it still produces quite a few false positives and occasionally fails to detect at all, it does provide the required functionality. Considering it uses a standard webcam rather than dedicated hardware, I'm impressed with the result.

Repo Link.


Using C++ in Unity

One of the tasks we have been given for Studio 3 this trimester is to help the Game Design students get feedback in Unity from a non-standard external input device (e.g. a webcam, eye tracker, heartbeat monitor, etc.). The task I chose to work on involves using a webcam to determine whether the user is looking at the keyboard/their hands while trying to touch type. We decided on using OpenCV as the base for our tool. There is also a library called OpenCVSharp that exposes OpenCV functionality in C#.

The problem, however, is that most examples of eye or face tracking that we have found do not use OpenCVSharp, and I have run into problems trying to convert those projects across. I hit a snag yesterday while trying to rewrite a small open-source gaze tracking project called eyeLike, which implements an image-gradient-based eye centre algorithm by Fabian Timm.
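For context, as I understand Timm's approach (my paraphrase of the paper, not anything specific to eyeLike's code): every pixel is treated as a candidate centre c, and for each pixel x_i with a strong gradient you take the normalised gradient g_i and the normalised displacement d_i = (x_i - c) / ||x_i - c||; the chosen centre is the candidate that maximises the average of (d_i · g_i)^2, since at the true eye centre the displacement vectors line up with the radial gradients around the iris.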

C++ OpenCV:

[Screenshot: eyeLike C++ source]

My conversion to OpenCVSharp:

[Screenshot: OpenCVSharp conversion]

This was going fine until I came across some function calls that I don't believe I can convert to C#, as they rely on pointer arithmetic:

[Screenshot: eyeLike C++ pointer-based functions]

I’ve spent a few hours trying to think of a way to convert this function to C# and have come up with nothing. After class yesterday it was suggested that we create C# bindings for the program rather than trying to convert it. Since I hadn’t dealt with bindings or cross-language coding in general, I was reluctant to attempt it. After doing some research this morning, however, I think it will be possible (or at least worth trying). To start with I looked at options involving SWIG, COM, and Facade, which all seem to be tools for converting or creating bindings for an existing library. While it might be easier to go with one of these options, I want to try a more manual approach, which will hopefully give me a better personal understanding of the process. I found this example on a blog that goes over creating a simple library in C++ and using it in C#.

Following the first example shown I was able to create a basic math library, build it as a .dll, and access its functionality in Unity by changing the example C# script slightly:

[Screenshot: C# binding script]

and the result:

[Screenshot: binding output in the Unity console]
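For the record, the pattern from that example boils down to something like the sketch below. The library name and Add function are placeholders from the toy maths library, not the eventual eyeLike binding.

using System.Runtime.InteropServices;
using UnityEngine;

public class BindingTest : MonoBehaviour
{
    // Assumes MathLibrary.dll exports:  extern "C" __declspec(dllexport) int Add(int a, int b);
    // and that the .dll sits in the Unity project's Plugins folder.
    [DllImport("MathLibrary")]
    private static extern int Add(int a, int b);

    void Start()
    {
        Debug.Log("Add(2, 3) = " + Add(2, 3));
    }
}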

While it works in this case, the downside, as mentioned in the blog post, is that each function needs to be declared in the C# script, which would create a lot of extra code when dealing with a large library of functions. My next step will be to try to apply this to eyeLike and create a C# binding.

Navigating a Hex Grid – 02 – Library

Progress! I’ve just finished writing up a Hex Library based on the ideas that I discussed in my last post. It ended up being fairly straightforward; the work that we did earlier in the trimester creating maths libraries definitely helped. One thing I was uncertain about was creating functions that could be accessed anywhere within a Unity project. I hadn’t tried creating a library for use in a Unity project before now, but after a little searching I found the answer I needed: putting everything in a static class and marking the members static makes them accessible from any other script, much like working in C++ projects. With that said, I started with a couple of enums to handle the direction and side values in the Edge and Vertex structs:

public enum Direction { N, E, W };
public enum Side { L, R };

I realized after I started writing the library that I would never have a need for the South direction in the current version, so it was removed. Next I created structs to hold the data for each type (Face/Edge/Vertex) along with basic constructors to make initializing them a bit easier.

// A face addressed by axial coordinates (u, v).
public struct HexFace
{
    public int u, v;

    public HexFace(int uu, int vv)
    {
        u = uu;
        v = vv;
    }
}

// An edge addressed by its face's (u, v) plus a direction (N, E or W).
public struct HexEdge
{
    public int u, v;
    public Direction d;

    public HexEdge(int uu, int vv, Direction dd)
    {
        u = uu;
        v = vv;
        d = dd;
    }
}

// A vertex addressed by its face's (u, v) plus a side (L or R).
public struct HexVert
{
    public int u, v;
    public Side s;

    public HexVert(int uu, int vv, Side ss)
    {
        u = uu;
        v = vv;
        s = ss;
    }
}

Finally I started adding the actual functions. Each function takes either a face, edge, or vertex and returns an array of faces, edges, or vertices.

public static class HexLibrary
{
    public static HexFace[] GetNeighbours(HexFace face)
    {
        HexFace[] faces = new HexFace[6];

        faces[0] = new HexFace(face.u, face.v + 1);     // (u, v+1)
        faces[1] = new HexFace(face.u + 1, face.v);     // (u+1, v)
        faces[2] = new HexFace(face.u + 1, face.v - 1); // (u+1, v-1)
        faces[3] = new HexFace(face.u, face.v - 1);     // (u, v-1)
        faces[4] = new HexFace(face.u - 1, face.v);     // (u-1, v)
        faces[5] = new HexFace(face.u - 1, face.v + 1); // (u-1, v+1)

        return faces;
    }
    ...
}

When dealing with edges and vertices, where there can be different directions/sides at the same coordinate, I used a switch statement:

public static HexVert[] GetEndPoints(HexEdge edge)
{
    HexVert[] verts = new HexVert[2];

    switch (edge.d)
    {
        case Direction.N: // (u, v, N)
            verts[0] = new HexVert(edge.u + 1, edge.v, Side.L);     // (u+1, v, L)
            verts[1] = new HexVert(edge.u - 1, edge.v + 1, Side.R); // (u-1, v+1, R)
            break;
        case Direction.E: // (u, v, E)
            verts[0] = new HexVert(edge.u, edge.v, Side.R);     // (u, v, R)
            verts[1] = new HexVert(edge.u + 1, edge.v, Side.L); // (u+1, v, L)
            break;
        case Direction.W: // (u, v, W)
            verts[0] = new HexVert(edge.u - 1, edge.v + 1, Side.R); // (u-1, v+1, R)
            verts[1] = new HexVert(edge.u, edge.v, Side.L);         // (u, v, L)
            break;
        default:
            break;
    }

    return verts;
}

To finish up, I created a small script to try some of the functions and print their results to the console.

[Screenshot: hex library check script and console output]
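The check boiled down to something like this (a simplified version; the actual script and logging in the screenshot differ slightly):

using UnityEngine;

public class HexLibraryCheck : MonoBehaviour
{
    void Start()
    {
        // Query the neighbours of the (1, 1) face.
        HexFace face = new HexFace(1, 1);
        foreach (HexFace f in HexLibrary.GetNeighbours(face))
            Debug.Log("Neighbour: (" + f.u + ", " + f.v + ")");

        // Query the end points of the (1, 1, N) edge.
        HexEdge edge = new HexEdge(1, 1, Direction.N);
        foreach (HexVert v in HexLibrary.GetEndPoints(edge))
            Debug.Log("End point: (" + v.u + ", " + v.v + ", " + v.s + ")");
    }
}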

Testing against the (1, 1) tile gave the correct results, so I'm happy enough with it to move on to integrating it with my existing hex grid. Unfortunately, looking at my current iteration, I will have to make some major changes to get it working with this system.

One last thing: I originally gave the functions the same names used in the example for each relationship (GetNeighbours, GetCorners, GetTouches, etc.). I think I'll change them to a generic name based on the return type, so all functions that return a HexVert[] will be called GetVerts(). Since each function that returns a HexVert[] takes a different parameter type, the calls will simply overload, which should make the library a lot easier to use (no need to remember specific names).
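So, for example, the HexVert[] relationships would collapse into overloads of one name inside HexLibrary, something like this (just the intended shape, delegating to the existing functions for now):

// Inside HexLibrary: one name, resolved by parameter type.
public static HexVert[] GetVerts(HexFace face)
{
    return GetCorners(face);    // e.g. the old face -> corner-verts function
}

public static HexVert[] GetVerts(HexEdge edge)
{
    return GetEndPoints(edge);  // the old edge -> end-point-verts function
}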

Project Methodology for In My Image

With our group projects for Studio 2 finally drawing to a close, I've been turning my attention to the learning outcomes that I still need to get checked off, one of which requires that we choose and implement an appropriate project management method. Unfortunately, our documentation was lacking for most of the project, and we were more concerned with getting the game made in time for the exhibit. This meant that we didn't have a chance to decide on our methodology.

That being the case, I decided to do some research and see what methodologies would best fit the way we actually did the project. In other words, pick a methodology that fits our workflow, rather than the other way around. This led me to the concepts of Adaptive and Predictive development methods. Predictive methods focus on planning the entire project life cycle in detail and having procedures for any expected variations. Adaptive methods, on the other hand, focus on identifying milestones and working towards them one at a time, with little consideration for how those milestones are reached. The adaptive approach definitely fits our workflow better, as we had been working with the playtest sessions and the final exhibit as our major milestones.

One of the more popular methodologies in the Adaptive category is called Agile. To summarize, Agile’s focus tends to be on “working software over comprehensive documentation”, and “responding to change over following a plan”. These two points in particular seem very fitting to our workflow on In My Image. Because of our lack of documentation, the focus was on getting the game working in order to test and iterate/balance based on the feedback after each playtest. This same feedback also led to a lot of changes to the scope and design over the course of the project.

In the end, our choice of workflow worked to a degree. Although I would have preferred a stronger focus on documentation early in the development cycle, the lack of it did mean that we could begin coding very early and iterate on that code base later.

This did cause some issues, however. One such case was needing to rewrite a large amount of the Player Controller code in preparation for one of the playtests, due to an issue that arose from changes elsewhere in the design. Because of the lack of documentation and my unfamiliarity with the code, it was quicker to rewrite the scripts than to figure them out and make edits.

Fractals

I was introduced to catlikecoding.com during this week's studio class while we were learning about splines. I want to extend the Spline tutorial they provide with some added functionality to make a handy Unity tool. But first I thought I would work through the prerequisite tutorials to make sure I'm able to understand everything they cover, starting with Fractals.

[Image: Fractals02 – generated fractal]

Every time the program is run, it randomly generates objects with mesh/colour/rotation variations. Some turn out more interesting than others.

[Image: Fractals01 – generated fractal]
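The gist of it, as a stripped-down sketch rather than the tutorial's actual code (field names and values here are just illustrative): each node picks a random mesh, colour, and rotation, then spawns scaled-down children until it hits a maximum depth.

using UnityEngine;

public class SimpleFractal : MonoBehaviour
{
    public Mesh[] meshes;          // possible meshes to pick from
    public Material material;
    public int maxDepth = 4;
    public float childScale = 0.5f;

    private int depth;

    void Start()
    {
        // Give this node a random mesh, colour, and rotation.
        gameObject.AddComponent<MeshFilter>().mesh = meshes[Random.Range(0, meshes.Length)];
        MeshRenderer meshRenderer = gameObject.AddComponent<MeshRenderer>();
        meshRenderer.material = material;
        meshRenderer.material.color = new Color(Random.value, Random.value, Random.value);
        transform.localRotation = Random.rotation;

        if (depth < maxDepth)
        {
            // Spawn smaller children above and to the sides of this node.
            CreateChild(Vector3.up);
            CreateChild(Vector3.right);
            CreateChild(Vector3.left);
        }
    }

    void CreateChild(Vector3 direction)
    {
        SimpleFractal child = new GameObject("Fractal Child").AddComponent<SimpleFractal>();
        child.meshes = meshes;
        child.material = material;
        child.maxDepth = maxDepth;
        child.childScale = childScale;
        child.depth = depth + 1;

        child.transform.parent = transform;
        child.transform.localScale = Vector3.one * childScale;
        child.transform.localPosition = direction * (0.5f + 0.5f * childScale);
    }
}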


Generating a Hex grid on 3D terrain

I’ve been looking around for some tutorials or hints about making hex grids for turn-based games. I found this example by Chronos-L on answers.unity3d.com and decided to try implementing it in my existing RTS project. I started by just copying the sample code into a new script, attaching it to an empty GameObject, and setting a basic sphere as the spawn target:

[Screenshot: Hex01 – initial hex grid spawn]
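I won't paste Chronos-L's sample here, but the general shape of what a script like this does is roughly the following. This is my own simplified sketch with made-up field names, not the actual answer, and the raycast onto the terrain is just how I picture the height sampling working:

using UnityEngine;

public class HexGridSpawner : MonoBehaviour
{
    public GameObject spawnTarget;   // the object placed at each hex position (the sphere, for now)
    public int width = 10;
    public int height = 10;
    public float hexRadius = 1f;

    void Start()
    {
        // Offsets for a pointy-top hex layout: odd rows are shifted half a hex across.
        float xOffset = hexRadius * Mathf.Sqrt(3f);
        float zOffset = hexRadius * 1.5f;

        for (int row = 0; row < height; row++)
        {
            for (int col = 0; col < width; col++)
            {
                float x = col * xOffset + (row % 2) * xOffset * 0.5f;
                float z = row * zOffset;

                // Drop each tile onto whatever surface is below it.
                float y = 0f;
                RaycastHit hit;
                if (Physics.Raycast(new Vector3(x, 100f, z), Vector3.down, out hit))
                    y = hit.point.y;

                Instantiate(spawnTarget, new Vector3(x, y, z), Quaternion.identity);
            }
        }
    }
}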
