Blog

Technologies in my Procedural Game The Cluster

This post is about The Cluster, my 2.5D platform game under development with a focus on non-linear exploration, set in a big, continuous world. You can read all the posts about The Cluster here.

In this post I'm giving an overview of the different technical aspects of The Cluster and linking to the posts I've written in each area. I'm also describing areas I haven't written about (yet).

If there are any of these areas you'd like to hear more about, let me know in the comments. That will help me decide which areas to prioritize in future posts.

Level Design

The level design in The Cluster is "deliberate", meaning that aspects of a level are created on purpose rather than randomly, as is the case in games like Minecraft. Since the gameplay doesn't allow removing or adding terrain, the levels have to be guaranteed to be traversable from the start.

Terrain Generation

The terrain in The Cluster is based on a 3D grid of blocks (popularly known as "voxels", although they're not really) and mesh geometry is generated from this grid. The geometry has smoothed edges and supports textured transitions at edges and between blocks of different materials.
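
Just to illustrate the core idea, here's a minimal sketch of how geometry can be derived from such a block grid: only the faces bordering empty space need geometry. This is not the actual code from The Cluster - the names are made up, and the edge smoothing and material transitions are left out entirely.

    using System.Collections.Generic;
    using UnityEngine;

    // Minimal sketch: find the exposed faces of a solid/empty block grid.
    // "blocks" is a hypothetical solidity array; turning the faces into quads,
    // smoothing the edges and adding material transitions is not shown.
    public static class BlockSurface {
        static readonly int[,] dirs = {
            { 1, 0, 0 }, { -1, 0, 0 }, { 0, 1, 0 }, { 0, -1, 0 }, { 0, 0, 1 }, { 0, 0, -1 }
        };

        public static List<(Vector3 center, Vector3 normal)> ExposedFaces(bool[,,] blocks) {
            var faces = new List<(Vector3 center, Vector3 normal)>();
            for (int x = 0; x < blocks.GetLength(0); x++)
            for (int y = 0; y < blocks.GetLength(1); y++)
            for (int z = 0; z < blocks.GetLength(2); z++) {
                if (!blocks[x, y, z])
                    continue;
                for (int d = 0; d < 6; d++) {
                    // A face needs geometry only if the neighboring cell is empty (or outside the grid).
                    if (IsSolid(blocks, x + dirs[d, 0], y + dirs[d, 1], z + dirs[d, 2]))
                        continue;
                    var normal = new Vector3(dirs[d, 0], dirs[d, 1], dirs[d, 2]);
                    faces.Add((new Vector3(x, y, z) + normal * 0.5f, normal));
                }
            }
            return faces;
        }

        static bool IsSolid(bool[,,] blocks, int x, int y, int z) {
            return x >= 0 && y >= 0 && z >= 0 &&
                x < blocks.GetLength(0) && y < blocks.GetLength(1) && z < blocks.GetLength(2) &&
                blocks[x, y, z];
        }
    }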

AI / Path-finding

The enemies in The Cluster are not simply patrolling back and forth on a platform - they can follow the player almost anywhere, or flee.

Camera Behaviour

A side-scrolling game is not the hardest to create a basic working camera behavior for, but a lot can be done to elevate the behavior from "good enough" to "great". One thing is having the camera look ahead rather than trail behind, without introducing sudden camera moves when changing direction. (The classic Sonic games did this too.) Another is framing what's important around the player's current position.

  • Tips for a Platformer Camera
  • Level Aware Framing
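
To make the look-ahead idea concrete, here's a rough sketch of a camera script for a side-scroller. It's not the camera code from The Cluster, just an illustration with made-up parameter names: the look-ahead offset eases toward the player's facing direction, so changing direction doesn't cause a sudden jump.

    using UnityEngine;

    // Rough sketch of a look-ahead camera for a side-scroller (made-up names and values).
    public class LookAheadCamera : MonoBehaviour {
        public Transform player;
        public float lookAheadDistance = 3f;    // how far ahead of the player to frame
        public float lookAheadSmoothing = 2f;   // how quickly the offset adapts to a new direction
        public float followSmoothing = 8f;      // how quickly the camera moves toward its target

        float currentLookAhead;

        void LateUpdate() {
            float input = Input.GetAxisRaw("Horizontal");
            if (Mathf.Abs(input) > 0.01f) {
                // Ease the look-ahead toward the new facing direction instead of snapping.
                currentLookAhead = Mathf.Lerp(
                    currentLookAhead, Mathf.Sign(input) * lookAheadDistance,
                    lookAheadSmoothing * Time.deltaTime);
            }

            Vector3 target = player.position + Vector3.right * currentLookAhead;
            target.z = transform.position.z;    // keep the camera's own depth
            transform.position = Vector3.Lerp(transform.position, target, followSmoothing * Time.deltaTime);
        }
    }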

Use of Threading

The Cluster originally used a clumsy mix of Unity co-routines and threading for its procedural generation, but it has since been rewritten to do all generation completely in a background thread, except for the actual instantiations, which happen in the main thread, spread out over multiple frames.
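
The pattern is roughly this (a generic sketch, not The Cluster's actual code): the background thread only produces plain data, and the main thread instantiates a limited number of results per frame, since the Unity API may only be used from the main thread.

    using System.Collections.Concurrent;
    using System.Threading;
    using UnityEngine;

    // Sketch of the producer/consumer split between a generation thread and the main thread.
    public class BackgroundGenerator : MonoBehaviour {
        class ChunkData { public Vector3 position; /* plus whatever the generator produces */ }

        readonly ConcurrentQueue<ChunkData> finished = new ConcurrentQueue<ChunkData>();
        public GameObject chunkPrefab;           // assigned in the inspector
        public int instantiationsPerFrame = 2;   // spread the main-thread work over frames

        void Start() {
            var worker = new Thread(() => {
                while (true)
                    finished.Enqueue(GenerateNextChunk());   // pure data, no Unity objects
            });
            worker.IsBackground = true;   // don't keep the process alive on quit
            worker.Start();
        }

        void Update() {
            for (int i = 0; i < instantiationsPerFrame; i++) {
                if (!finished.TryDequeue(out ChunkData chunk))
                    break;
                Instantiate(chunkPrefab, chunk.position, Quaternion.identity);
            }
        }

        ChunkData GenerateNextChunk() {
            // Placeholder for the actual (potentially slow) procedural generation.
            return new ChunkData { position = Vector3.zero };
        }
    }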

Layers of Infinite Grids

Originally, The Cluster was divided into "levels". The levels lined up seamlessly, so the divisions weren't noticeable to the player, but in the implementation they were completely separate. Tons of code was needed to deal with the boundaries of the levels. Every algorithm that handled some kind of grid, and which looked at neighbor cells in that grid, had to have special boundary case handling for the edges, which was a pain.

In 2012 I completely rewrote this to instead be based on a set of layers of infinite grids. Each layer loads/generates chunks on demand, but exposes an interface where this is abstracted away, making the layer in essence "infinite". Layers can depend on lower layers with a specified amount of "padding". A layer can then freely access a lower layer as long as it doesn't try to access it outside of the contractual bounds. This approach has removed the need for almost all boundary case handling related to edges of grids. At the same time it has enforced a much more well-structured division of information and storage in the generation algorithms.
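
Here's a sketch of the basic idea of such a layer, with made-up names and without the dependency/padding bookkeeping: any cell can be requested, and fixed-size chunks are generated lazily behind the scenes, so the code using the layer never sees a chunk boundary.

    using System.Collections.Generic;

    // Sketch of an "infinite" grid layer: cells are generated on demand, one chunk at a time.
    public class GridLayer<T> {
        const int ChunkSize = 16;
        readonly Dictionary<(int cx, int cy), T[,]> chunks = new Dictionary<(int cx, int cy), T[,]>();
        readonly System.Func<int, int, T> generateCell;

        public GridLayer(System.Func<int, int, T> generateCell) {
            this.generateCell = generateCell;
        }

        public T GetCell(int x, int y) {
            int cx = FloorDiv(x, ChunkSize), cy = FloorDiv(y, ChunkSize);
            if (!chunks.TryGetValue((cx, cy), out T[,] chunk)) {
                chunk = GenerateChunk(cx, cy);             // generate on demand
                chunks[(cx, cy)] = chunk;
            }
            return chunk[x - cx * ChunkSize, y - cy * ChunkSize];
        }

        T[,] GenerateChunk(int cx, int cy) {
            var chunk = new T[ChunkSize, ChunkSize];
            for (int i = 0; i < ChunkSize; i++)
                for (int j = 0; j < ChunkSize; j++)
                    chunk[i, j] = generateCell(cx * ChunkSize + i, cy * ChunkSize + j);
            return chunk;
        }

        // Integer division that rounds toward negative infinity, so negative coordinates work too.
        static int FloorDiv(int a, int b) {
            return a >= 0 ? a / b : -((-a + b - 1) / b);
        }
    }

A layer depending on a lower layer can then call the lower layer's GetCell freely, as long as it stays within the agreed padding.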

Debug Options

With a game as complex as The Cluster there is a need for tons of different kinds of debug information that can easily be switched on and off while running the game. For a long time, this was a bit tedious to maintain, but I recently found a more or less elegant way to add debug options in-place in the code where they're needed and have the GUI that displays the available debug options pick them all up automatically, without any maintenance.

  • Automatic GUI for Debug Options
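
The gist of it, as a sketch (not the implementation from the post above), is that an option is declared right where it's needed, adds itself to a static list in its constructor, and a single generic GUI loop draws toggles for everything that has been registered:

    using System.Collections.Generic;
    using UnityEngine;

    // Sketch of self-registering debug options with an automatic GUI.
    public class DebugOption {
        static readonly List<DebugOption> all = new List<DebugOption>();
        public readonly string name;
        public bool enabled;

        public DebugOption(string name, bool enabled = false) {
            this.name = name;
            this.enabled = enabled;
            all.Add(this);
        }

        // Lets "if (someOption)" work directly.
        public static implicit operator bool(DebugOption option) { return option.enabled; }

        // Call from any OnGUI to draw toggles for all registered options.
        public static void DrawGUI() {
            foreach (var option in all)
                option.enabled = GUILayout.Toggle(option.enabled, option.name);
        }
    }

    // Usage, right where the debug info lives (names are made up):
    //   static DebugOption showPaths = new DebugOption("Show path-finding");
    //   ...
    //   if (showPaths) DrawPathDebug();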

I'll update and maintain this post as I write more posts, and as the game evolves.

Automatic Setup of a Humanoid

Feb 16, 2013

I wrote a blog entry on the Unity site called Automatic Setup of a Humanoid detailing the work I did for the Mecanim animation system in Unity 4 on automatically detecting human bones in arbitrary skeleton rigs.

Names of bones, bone directions, bone length ratios, and bone topology all provide hints to which bones should be mapped to what. While none of those sources are very reliable on their own, they can be combined and used in a search algorithm to get rather reliable results.

Generating a tree structure from a maze

I got a mail from Robert Stehwien asking me how to generate an environment tree (as described on Squidi.net) from a 2D maze - something I previously wrote that I do in my game The Cluster, but didn't go into detail about.

An environment tree is basically a tree structure representing the connectivity of a space, where inner nodes represent junctions (where, for example, a corridor splits into two or more) and leaf nodes represent dead ends. An environment tree of a 2D maze would have inner nodes where the maze corridors split into two or more and leaf nodes at the dead ends.

2D maze with internal and leaf nodes marked.

I wrote the code for doing it more than five years ago. It's rather convoluted but anyway, this is how it works:

I use a "recursive back-tracker" algorithm which is basically the same as a depth-first search. Despite the name of the method, I implemented it as a loop and not as a recursive function. Every time it hits a dead end, it marks the cell as a leaf node and adds it to a list of nodes. While backtracking, every time it hits a junction (3 or 4 passages meeting up) it marks the cell as an internal node and adds it to the list as well if that cell wasn't already added before. While walking the maze it also stores for each cell what the previous cell is. After this process we have a flat list of all nodes in the tree and a way to always go backwards towards the root.

Next, it simply iterates through all the nodes in the list, and for each walks backwards towards the root until it hits a cell that's a node too. It marks that node as the parent of the current one and continues the iteration. At the end, all nodes except the root have a parent, and from this it's also easy to find out what the children of any node are. And thus the tree structure is complete.

If I wrote the code today, I'd do it with an actual recursive function instead. Though I haven't tested it, I think it would take a parent node, the cell of that node, and an initial direction to walk in as parameters. It would then walk along that path until it either reached a junction or a dead end. At a junction it would create an internal node for that junction, mark it as a child of the node passed as parameter, and call the function recursively for each direction in the junction except the direction it came from. At a dead end it would create a leaf node, mark it as a child of the node passed as parameter, and return with no further recursive calls. At the end the tree structure would be complete.
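
For what it's worth, here's an untested sketch of that recursive version. It assumes a perfect maze (no loops) stored as a grid where open[x, y] marks corridor cells; all names are made up:

    using System.Collections.Generic;

    // Untested sketch of the recursive tree-building described above.
    public class MazeTree {
        public class Node {
            public int x, y;
            public Node parent;
            public List<Node> children = new List<Node>();
        }

        static readonly int[] dx = { 1, -1, 0, 0 };
        static readonly int[] dy = { 0, 0, 1, -1 };

        readonly bool[,] open;

        public MazeTree(bool[,] open) { this.open = open; }

        // Build the tree starting from a root cell, e.g. the maze entrance.
        public Node Build(int rootX, int rootY) {
            var root = new Node { x = rootX, y = rootY };
            for (int dir = 0; dir < 4; dir++)
                if (IsOpen(rootX + dx[dir], rootY + dy[dir]))
                    Walk(root, rootX, rootY, dir);
            return root;
        }

        // Walk from (x, y) in direction dir until a junction or dead end is found,
        // create a node for it, attach it to parent, and recurse at junctions.
        void Walk(Node parent, int x, int y, int dir) {
            while (true) {
                x += dx[dir]; y += dy[dir];

                // Open directions out of this cell, excluding the one we came from.
                var exits = new List<int>();
                for (int d = 0; d < 4; d++)
                    if ((dx[d] != -dx[dir] || dy[d] != -dy[dir]) && IsOpen(x + dx[d], y + dy[d]))
                        exits.Add(d);

                if (exits.Count == 1) { dir = exits[0]; continue; }   // plain corridor: keep walking

                var node = new Node { x = x, y = y, parent = parent };
                parent.children.Add(node);
                if (exits.Count == 0)
                    return;                                           // dead end: leaf node
                foreach (int d in exits)                              // junction: recurse down each branch
                    Walk(node, x, y, d);
                return;
            }
        }

        bool IsOpen(int x, int y) {
            return x >= 0 && y >= 0 && x < open.GetLength(0) && y < open.GetLength(1) && open[x, y];
        }
    }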

Still making that Cluster game...

Apr 23, 2012

Though I haven't posted about it for the better part of a year, I haven't dropped my game The Cluster. The Google AI Challenge ended up taking quite some time last fall, and after that came a period where I was preoccupied with other things, but I've recently gotten some new ideas and enthusiasm for continuing this long-running spare-time project.

Stay tuned...

Google really wants my resume

Mar 9, 2012

I just got this mail from one of the Google AI Challenge organizers:

Hello,

Thanks for taking part in the latest AI Challenge. I hope you enjoyed it!

Since you ranked in the top 100, Google really wants to see your resume (curriculum vitae). If you want to be contacted by a Google recruiter, send me a quick reply letting me know.

I will put in a good word for anybody who is interested. With more than 7000 entries in the final tournament, ranking in the top 100 is a serious achievement. I was really impressed by the skill of all the top-ranking bots.

If you are unsure, my advice is to go for it. It's a wonderful opportunity if you love figuring out tough problems and writing awesome code. Your recruiter will be happy to answer any questions you have!

I'm not unsure though. With my 4th place I guess I'd be high on their want list, but I'm happy with my job at Unity. I'll apply my "love for figuring out tough problems and writing awesome code" there instead. :)

By the way: sorry for never getting around to writing parts 2 and 3 of the explanation of my bot. After seeing how the announcement of the winners got practically no attention anywhere, I got a bit demotivated. You can still download the source though and see how it's done, and if you have any questions, just ask.

4th Place

Dec 24, 2011

What do you know, it looks like I got 4th place in the AI Challenge!

Final Rankings

I began writing an explanation of my bot and you can read Part I here, as well as download the source code. Right now it's holiday time with the family; I'll write up the remainder afterwards. :)

AI Challenge - My Bot Explained, Part I

Dec 23, 2011

So my ant colony bot is doing rather well in the final AI Challenge tournament. It's ranked somewhere between 4th and 11th depending on when you look.

Reading about some of the strategies used by other bots, I'm surprised mine has performed so well, since it's not really that sophisticated. Well, I'm releasing the source code here. It's in the same state as when I submitted it, without further cleanup, so it's not the tidiest code, but not a complete mess either.

runevision bot C# source code

Note that the bot behaves completely differently depending on whether it's compiled as a DEBUG or a RELEASE build. As a debug build it looks for a file called parameters.txt describing values to use for various constants in the code; when executed multiple times, it runs with different values for those parameters and keeps track of statistics on which values work best. More about that later. As a release build the constants are hard-coded, and that's what is used in the actual competition.

The bot has these primary functions (a rough sketch of how they fit together follows the list):

  • Combat logic to evaluate each of 5 possible moves (4 directions plus staying) for each ant. Afterwards each ant has a score for each move.
  • Path-finding done using "flood-filling" / breadth-first search from the goal locations. Afterwards each ant has an assigned goal and knows which direction to move in to get there.
  • Goal handling. This basically adds extra points to one or more of the 5 move scores based on the direction obtained from the path-finding.
  • Moves resolution. This resolves which ant has the right to move into which position based roughly on "who needs it the most".
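
As a rough sketch of how those pieces fit together for a single ant (names are made up, and the moves resolution between ants is left out): the combat logic provides a base score per move, goal handling adds a bonus to the move leading toward the assigned goal, and the best-scoring move wins.

    // Sketch only. combatScores has one entry per Move value, filled in by the combat logic.
    enum Move { North, East, South, West, Stay }

    static class MoveScoring {
        public static Move PickMove(float[] combatScores, Move towardGoal, float goalBonus) {
            var scores = (float[])combatScores.Clone();
            scores[(int)towardGoal] += goalBonus;          // goal handling: bonus toward the assigned goal

            Move best = Move.Stay;                         // pick the highest scoring of the 5 moves
            for (int i = 0; i < scores.Length; i++)
                if (scores[i] > scores[(int)best])
                    best = (Move)i;
            return best;
        }
    }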

Path-Finding

The first thing I implemented was path-finding. This was originally done using a flood-fill / breadth-first search from the goal locations. The default path-finding algorithm choice for computer games is A*, but it doesn't lend itself to being reused by many agents, which is why I used the flood-fill instead. It might also be the same as or similar to what is called collaborative diffusion - I'm not entirely sure about that; I only heard about the term after I had implemented my solution.

Initially I simply used a queue for the search. I later switched to a priority queue so I could use various cost functions for the path-finding. For example, by giving positions occupied by my own ants a higher cost, ants behind them will prefer to go around them instead of just being stuck.

I also implemented support for specifying, per goal, the maximum number of ants assigned to it, the maximum distance to search out from the goal, and an initial cost (which gives the goal a lower overall priority).
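
Roughly, the search looks like this. It's a simplified sketch with made-up names, not the bot's actual code: it expands outward from all goals at once through a priority queue and assigns each ant it reaches to the goal whose expansion got there cheapest. Recording the concrete direction each ant should move in is omitted.

    using System.Collections.Generic;

    // Sketch of a multi-source, cost-based flood fill from goal locations.
    static class GoalFloodFill {
        public class Goal {
            public (int x, int y) location;
            public float initialCost;                 // gives the goal a lower overall priority
            public int maxDistance = int.MaxValue;    // how far out from the goal to search
            public int maxAnts = int.MaxValue;        // how many ants this goal will take
            public int assignedAnts;
        }

        public static Dictionary<(int x, int y), Goal> AssignAnts(
            IList<Goal> goals,
            HashSet<(int x, int y)> myAnts,
            System.Func<(int x, int y), float> cellCost,                           // e.g. higher where own ants stand
            System.Func<(int x, int y), IEnumerable<(int x, int y)>> neighbors) {

            var assignments = new Dictionary<(int x, int y), Goal>();
            var reached = new HashSet<(int x, int y)>();
            // (cost, distance, x, y, goal index) - SortedSet pops the cheapest entry first.
            var frontier = new SortedSet<(float cost, int dist, int x, int y, int goalIndex)>();
            for (int i = 0; i < goals.Count; i++)
                frontier.Add((goals[i].initialCost, 0, goals[i].location.x, goals[i].location.y, i));

            while (frontier.Count > 0) {
                var current = frontier.Min;
                frontier.Remove(current);
                var goal = goals[current.goalIndex];
                var cell = (current.x, current.y);

                if (current.dist > goal.maxDistance || goal.assignedAnts >= goal.maxAnts)
                    continue;                         // this goal is out of range or already full
                if (reached.Contains(cell))
                    continue;                         // cell already claimed by a cheaper expansion
                reached.Add(cell);

                if (myAnts.Contains(cell)) {
                    assignments[cell] = goal;         // the ant standing here now works toward this goal
                    goal.assignedAnts++;
                }

                foreach (var next in neighbors(cell))
                    if (!reached.Contains(next))
                        frontier.Add((current.cost + cellCost(next), current.dist + 1, next.x, next.y, current.goalIndex));
            }
            return assignments;
        }
    }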

The goals I ended up using are:

  • Defense positions around my own hills (1 ant each, no cost, max dist 30)
  • Enemy hills (a third of all my ants divided by number of known enemy hills, no cost, no max dist)
  • Food (1 ant each, cost equal to 4 dist, no max dist)
  • Explore (1 ant for each explore position, cost equal to 25 dist, no max dist)
  • Enemy ants (20 ants each, cost equal to 42 dist, max dist 13)

The defense positions are positions immediately around my own hills. 12% of my ants are assigned to defense, divided between my hills if more than one.

Any unassigned ants are assigned to enemy hills as well.

This post is ending up much longer than I thought so I'm splitting it up. Read on in part II about goal handling and moves resolution.

Epic Ant Battle Videos

Dec 20, 2011

The final tournament has begun. Here are some interesting battles my bot has been in that I've captured on video.

My 5th game where my bot annihilates 8 opponents in an epic battle!

My 6th game where my opponent just won't die (or my massive army is not quite aggressive enough). Also: Dancing!

Google AI Challenge Update

Dec 18, 2011

For the past - hmm, almost 2 months now - I've taken a break from my other hobbies and let myself become distracted by Google's AI Challenge - an Artificial Intelligence (AI) programming contest.

Not for much longer though since the final submission deadline is in 10 hours at the time of writing.

When I wrote about it a month ago I'd only been at it for a few weeks, but my bot was already ranked 39th in the world out of 10,000+.

Since then I've made a lot of improvements - but so have the other contestants, so the overall level has continuously risen.

Whenever a new version of a bot is uploaded to the contest servers, its skill and rank are completely reset, so the bot has to fight its way all the way back up from the bottom. That can take from a few days to a week depending on how high a ranking it can get before it stabilizes.

I last uploaded a bot four days ago. It has gotten up to rank 22 in the world; my highest ranking yet. It's also the top ranked bot in Denmark by a large margin and the 3rd ranked bot written in C#.

When working on improving the bot, it's usually hard to know if some new idea will actually improve it or just make it worse without having tested it first. I usually test by running a game with the new version against the old version. However, most changes only make a small difference that will only slightly increase the bot's chances of winning, so it has to play many games before it's clear whether it's better or not. Did it win 4 out of 6 games? That might just be a coincidence due to random luck.

You can see it quickly becomes tiresome to test these things manually. That's why I programmed a simple framework to automatically run many games in a row between the various versions of the bot and gather statistics on which versions win most often. I've often let this run overnight so I have statistics from hundreds of games the next day.
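
Conceptually the framework is just a loop around the game engine. The sketch below only illustrates the idea - the engine name, arguments and output format are invented placeholders, not the actual AI Challenge tools:

    using System.Diagnostics;

    // Idea only: run the same match-up many times and tally wins, so a small edge
    // shows up as a meaningful statistic instead of a lucky 4 out of 6.
    class MatchRunner {
        static void Main() {
            int games = 200, newWins = 0, oldWins = 0;

            for (int i = 0; i < games; i++) {
                var engine = Process.Start(new ProcessStartInfo {
                    FileName = "game_engine",                                      // placeholder
                    Arguments = "--map random --bot NewBot.exe --bot OldBot.exe",  // placeholder
                    RedirectStandardOutput = true,
                    UseShellExecute = false
                });
                string output = engine.StandardOutput.ReadToEnd();
                engine.WaitForExit();

                // Placeholder result parsing: assume the engine reports the winning bot index.
                if (output.Contains("winner: 0")) newWins++; else oldWins++;
            }

            System.Console.WriteLine("New bot won " + newWins + " of " + games + " games (old bot: " + oldWins + ")");
        }
    }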

Sometimes, however - sometimes an idea turns out to be a significant improvement. By significant I mean that the new version consistently beat the previous version - as in 10 out of 10 times. Those times when I've managed to make such an improvement have been the most satisfying moments of this competition.

The most interesting improvements have all been related to how the ants handle combat. The way battle resolution works in the game follows simple rules but has complex consequences and my understanding of it has increased in stages.

The combat is basically about ants in the majority winning over ants in the minority. If one red ant is within combat range of one white ant, they both die. But if one red ant is within combat range of two white ants, only the red ant dies and both of the white ants survive.
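
In code, my reading of that rule (a paraphrase for illustration, not the official contest code) is that an ant dies if any enemy within attack range is itself fighting no more enemies than the ant is:

    using System.Collections.Generic;
    using System.Linq;

    // Sketch of the majority-wins battle rule described above.
    static class Combat {
        public class Ant { public int x, y, owner; }

        // An ant dies if some enemy in range has at most as many opponents as the ant has.
        // One red vs. one white: both die. One red vs. two whites: only the red ant dies.
        public static HashSet<Ant> ResolveDeaths(List<Ant> ants, float attackRange) {
            var dead = new HashSet<Ant>();
            foreach (var ant in ants) {
                var enemies = ants.Where(o => o.owner != ant.owner && InRange(ant, o, attackRange)).ToList();
                foreach (var enemy in enemies) {
                    int enemyOpponents = ants.Count(o => o.owner != enemy.owner && InRange(enemy, o, attackRange));
                    if (enemyOpponents <= enemies.Count) {
                        dead.Add(ant);   // the enemy is at least as "focused" as this ant, so this ant dies
                        break;
                    }
                }
            }
            return dead;
        }

        static bool InRange(Ant a, Ant b, float range) {
            int dx = a.x - b.x, dy = a.y - b.y;
            return dx * dx + dy * dy <= range * range;
        }
    }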

Initially I thought combat was a matter of being cautious. Only move into combat range if you can move more ants within range than the opponent can. But it's not that simple. There are various potential outcomes to consider and also an element of gamble. I might write some more about it after the competition has ended, if somebody higher ranking doesn't do it first. Not that I'm an expert by any count - I feel like I've only scratched the surface.

Anyway, since I uploaded that last bot four days ago I've made several significant improvements in the combat skillz of my bot, meaning that the newest version I have here totally and consistently kicks the ass of that version I uploaded four days ago. Now I've just uploaded this new version. It will probably be the last unless I get some last-minute epiphany which is not likely by now.

We'll see how it performs in the final tournament.

Ant Colony - Google AI Challenge

Nov 23, 2011

For the past few weeks I've taken a break from my other hobbies and let myself become distracted by Google's AI Challenge.

It's an Artificial Intelligence (AI) programming contest about programming the smartest virtual colony of ants, which fights against other colonies for domination.

The matches are played entirely by the submitted programs on online servers, and replays of them can be watched on the website. My matches can be followed here, such as this one.

As mentioned, I've been at it for a few weeks, and my colony is now the 39th highest ranked world-wide out of several thousand, and the highest ranked in Denmark out of 138. :)

It's quite good fun, so I'll try to make a few more improvements, although I'm beginning to run out of ideas. It's tough competition for sure!

If you know a bit of programming, it's easy to get started, and starter packages are provided for all kinds of different programming languages including Java, C#, Python, C++ and more.

Let me know if you're participating as well!