Blog

Procedural creature progress 2021 - 2024

For my game The Big Forest I want to have creatures that are both procedurally generated and animated, which, as expected, is quite a research challenge.

As mentioned in my 2024 retrospective, I spent the last six months of 2024 working on this — three months on procedural model generation and three months on procedural animation. My work on the creatures actually started earlier though. According to my commit history, I started in 2021 after shipping Eye of the Temple for PCVR, though my work on it prior to 2024 was sporadic.

Though the creatures are still very far from where they need to be, I'll write a bit here about my progress so far.

The goal

I need lots of forest creatures for the gameplay of The Big Forest, some of which will revolve around identifying specific creatures to use for various unique purposes. I prototyped the gameplay using simple sprites for creatures, but the final game requires creatures that are fully 3D and fit well within the game's forest terrain.

creatures from prototype → replace with 3D procedural creatures → put into procedural terrain

I've seen a fair number of projects with procedural creatures. It's not too hard to create a simple torso with legs, randomize the configuration, and animate movement by transitioning the feet between footsteps in straight lines or simple arcs.

This works fine for bugs, spiders, lizards and other crawly critters, as well as for aliens and alien-like fantasy creatures. The project Critter Crosser is doing very cool things with this. While it features mammals too, I'm not sure they'd translate well outside the game's intentionally low-res, quirky aesthetic.

If we also consider games where only the animation is procedural, Rain World is another example where this works great for its low-definition but highly dynamic alien-like creatures. Spore is a classic example too, though its creatures often end up looking both alien and goofy.

For The Big Forest though, I want creatures that feel like they truly belong in a forest, with movement and design reminiscent of quadruped mammals like foxes, bears, lynx, squirrels, and deer. The way mammals look and move is far too distinct to simply "wing it" — at least in my game's aesthetic. Achieving realistic mammalian creatures requires thorough study, so although I also plan to include non-mammal creatures, I’m focusing primarily on mammals for now.

Procedural generation of creatures

The basic problem is to generate forest creatures with plausible anatomy from a small set of parameters and ensure that:

  1. The parameters are meaningful so I can use them to get the results I want.
  2. Any random values for the parameters will always create valid creatures.

My main challenge is identifying what constitutes meaningful parameters. This is something I have to discover gradually, starting with a large number of low-level parameters based on minimal assumptions and eventually narrowing down to a smaller set of high-level parameters as I refine the logic.

From the beginning, I decided to focus on basic proportions for the foreseeable future, without worrying about more subtle shape details. For this reason I limited the generated meshes to extruded rectangles. I had to start somewhere, and here's the glorious first mesh:

A very boxy creature.

Later I specified the torso, each leg, each foot, the neck, head, jaw, tail, and each ear as multi-segmented extruded rectangles. I found this approach easily sufficient for capturing the likeness of different animals. By the end of 2023, I had produced these three creatures and the ability to interpolate between them:

Simple 3D models of a bull elk, a coyote and a cat.

This was based on very granular parameters, essentially a list of bones with alignment and thickness data. Each bone's length, rotation values, and skin distances from the bone in various directions were input parameters, totaling 503 parameters.

Creating creatures from scratch using these parameters was impractical, so I developed a tool to extract data from skinned meshes. The tool identified which vertices belonged to which bones and derived the thickness values from that. However, it was error-prone, partly due to inconsistent skinning of the reference models. For example, in the cat model, one of the tail bones wasn’t used in the skinning, which confused my script. This is why part of the cat’s tail is missing in the image above.

I tried to implement workarounds for such edge cases, and various ways to manually guide the script towards better results for each creature, but each new reference 3D model seemed to come with new types of quirks to handle. After resuming work in 2024, I had these seven creatures, with their reference 3D models shown below them:

Seven generated creatures. Five of them have hand-crafted reference 3D models below.

(I lost two of my original reference 3D models — the coyote and bull elk — because they were in Maya format. Since I don’t have Maya installed, when a project reimport was triggered, Unity couldn’t import the models, and they vanished. Since it's not standard practice to commit Unity’s imported data (the Library folder) to source control, I couldn’t recover them.)

Anyway, so far all my efforts had been focused on representing diverse creatures through a uniform structure that can be interpolated, but I hadn't yet made progress on defining higher-level parameters.

Failed attempts at automatic parametrization

Once I shifted my focus to defining higher-level parameters, the first thing I tried out was using Principal Component Analysis (PCA) (Wikipedia) to automatically identify these parameters based on my seven example animals. In technical terms, it worked. The PCA algorithm created a parametrization of seven parameters, and here's a video where I manipulate them one at a time:

As I suspected though, the results weren't useful because each parameter influenced many traits at once, with lots of overlap, making the parameters not meaningful.

Why did it create seven parameters? Well, when you have X examples, it's easy to create a parametrization with X parameters that can represent all of them. You essentially assign parameter 1 to represent 'contribution from example 1', parameter 2 to represent 'contribution from example 2', and so on. This is essentially a weighted average or interpolation. While this isn't exactly what Principal Component Analysis does, for my purposes it was just as unhelpful. Manipulating the parameters still felt more like interpolating between examples than controlling specific traits.
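
Here's roughly what that looks like in code: a minimal Python sketch with illustrative names, not my actual implementation.

```python
import numpy as np

def fit_pca(examples):
    """examples: (n_creatures, n_params) array of low-level parameter values."""
    mean = examples.mean(axis=0)
    # The rows of vt are the principal components, sorted by variance.
    _, _, vt = np.linalg.svd(examples - mean, full_matrices=False)
    return mean, vt

def generate(mean, components, weights):
    """Reconstruct a full low-level parameter vector from high-level weights."""
    return mean + weights @ components

# Stand-in data: 7 example creatures, each a vector of 503 low-level parameters.
examples = np.random.rand(7, 503)
mean, components = fit_pca(examples)
creature = generate(mean, components, np.zeros(7))  # all-zero weights = the mean creature
```

Each weight moves the creature along one principal direction of variation among the examples, which is exactly why every parameter touches many traits at once.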

When I talk about "meaningful parameters", I mean parameters I can understand — something I could use in a "character creator" to easily adjust the traits of a creature to achieve the results I want. Say, parameters such as:

  • Bulkiness
  • Tallness (relative to back spine length)
  • Head length (relative to back spine length)
  • How pointy the ears are
  • Thickness of the tail at the base (relative to torso thickness)

However, PCA doesn’t work that way. Each parameter it produces influences many traits at once, making it impossible to control one trait at a time. I encountered the same issue in the academic research project The SMAL Model. While this project is far more sophisticated than what I could do, and is based on a much larger set of example animals, their PCA-based parametric model (which you can try interactively here) suffers from the same problem. Even though it can represent a wide range of animals, I wouldn't know how to adjust the parameters to create a specific animal without excessive trial and error.

I'm also convinced that throwing modern AI at the problem wouldn't work either. Not only would it require far more example models (which I don't have) and AI training expertise (which I have no interest in acquiring); it still wouldn't address the fundamental issue: An automated process can't understand what correlations and parameters are meaningful to a human.

Another problem with automated parameters is that they don't seem to guarantee valid results. When experimenting with my own PCA setup, or with the SMAL Model I linked to, it's easy to come across parameter combinations that produce deformed glitchy models. Part of finding meaningful parameters is figuring out what constitutes meaningful ranges for them. For example, the parameter "Thickness of the tail at the base" might range from 0 (a minimal tail thickness, like a cow's) to 1 (a tail as thick as the torso, like a crocodile's or sauropod's).

This ensures that the tail thickness can't accidentally exceed the torso’s. It also means that changing the torso thickness may affect the tail thickness too. While this technically involves "affecting multiple things at once", it’s done in a way that’s sensible and meaningful to a human (specifically, me). An automated process can't know which of these "multiple things at once" relationships feel meaningful or arbitrary.
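
In code, such a relative parameter could look something like this (a minimal sketch; the names and the minimum value are illustrative assumptions):

```python
def tail_base_thickness(torso_thickness, tail_param, min_thickness=0.02):
    """tail_param in [0, 1]: 0 = minimal tail thickness (like a cow's),
    1 = as thick as the torso (like a crocodile's or sauropod's).
    Deriving the value from torso_thickness means the tail can never
    accidentally exceed the torso, and follows along when the torso changes."""
    return min_thickness + tail_param * (torso_thickness - min_thickness)
```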

Manual parametrization work

Having concluded there was no way around doing a parametrization manually, I began writing a script with high-level parameters, which would produce the values for the low-level parameters (bone alignments and thicknesses) as output.

I made a copy of all my example creatures, so now I had three rows: The original reference models, the extracted creatures (automatically created based on analyzing the skinned meshes of the references) and the new sculpted creatures that I would try to recreate using high-level parameters.

Three rows of creatures.

Initially, the high-level parameters controlled only the torso thickness (both horizontal and vertical) and tapering, while the other features remained as defined by the extracted creatures. Gradually, I expanded the functionality of the high-level parameters, ensuring that the results didn't deviate too much from the extracted models — unless they were closer matches to the reference models.

Why keep the extracted models around at all instead of directly comparing with the original reference models? Well, I was still working with only extruded rectangles, and there's only so much detail that approach can capture compared to the high-definition reference models. The extracted models provided a realistic target to aim for, at least for the time being.

From there, my methodology for moving towards more high-level parameters was, and still is:

  1. Gradually identify correlations between parameters across the example creatures, and encode those into the generation code. The goal is to reduce the number of parameters by combining multiple lower-level parameters into fewer higher-level ones, all while ensuring that all the example creatures can still be generated using these higher-level parameters.
  2. As the parameters get fewer and more high-level, it becomes easier to add more example creatures, which provides more data for further study of correlations.

I repeat these steps as long as rolling random parameter values still doesn't produce sensible results.

Here's a messy work in progress:

To help me spot correlations between parameters, I wrote a tool that visualizes the correlation coefficients between all pairs of parameters, and shows the raw data in the corner when hovering the cursor over a specific pair. (In this video, the tool did not yet display all parameter types, so a lot of parameters are missing.)
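
The math behind that tool is simple. Here's a sketch of the same correlation analysis with NumPy (the data is a random stand-in, not my actual parameter values):

```python
import numpy as np

# Rows are example creatures, columns are low-level parameters (stand-in data).
values = np.random.rand(7, 503)

# Pearson correlation coefficient between every pair of parameters.
corr = np.corrcoef(values, rowvar=False)

# Pairs with coefficients near +1 or -1 are candidates for being merged into
# a single higher-level parameter; values near 0 vary independently.
candidates = np.argwhere((np.abs(corr) > 0.9) & ~np.eye(corr.shape[0], dtype=bool))
```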

Focus on joint placement within the body

In 2024, most of my focus on parametrization revolved around the sensible placement of joints within creatures. For instance, in all creatures with knees, the knee joint is positioned closer to the front of the leg than the back. While the knee placement was fairly straightforward, determining the placement of hip, shoulder, and neck joints across creatures with vastly different proportions proved significantly more challenging.

Many of my reference 3D models looked sensible externally, but had questionable and inconsistent rigging (placement of joints) from an anatomical perspective.

Comparison of anatomy of cat 3D model with anatomical picture of a cat's skeleton.

Anatomical reference images of bones and joints in various animals were also frequently inconsistent. I could find detailed references from multiple angles for dogs, cats and horses, but not much else. For instance, depictions of crocodile anatomy varied greatly: Some showed the spine in the neck centered, while others placed it near the back of the neck.

Comparison of four different reference images showing the skeleton within a silhouette of a crocodile. The spine in the neck is inconsistent across them.

All of this uncertainty meant I was constantly second-guessing joint placement — both when contemplating how the procedural generation should work and when adding new example creatures. I wanted to solve this once and for all, so I could stop worrying about it.

Solving the joint placement would also simplify the process of adding additional example creatures, since I could just focus on making them look right "from the outside" without worrying about joint placement. Eventually, I did largely solve it. By this point, I had established 106 high-level parameters that controlled all 503 low-level parameters.

Speeding up creation of additional example creatures

Once joint placement was mostly automated, I came up with the idea of accelerating the creation of example creatures using Gradient Descent (Wikipedia). My goal was to implement a Gradient Descent-based tool that could automatically adjust creature parameters to make the creature match the shape of a reference 3D model.

To my surprise, the approach actually worked. In the video below, the tool I created adjusts the 106 parameters of a creature to align its shape with the silhouettes of a giraffe:

The tool works by capturing silhouettes from multiple angles of the reference model (once) and of the procedural model (at each step of the iterative Gradient Descent process). It calculates how different the procedural silhouettes are from the reference silhouettes, using this difference as the penalty function for the Gradient Descent.

To measure the difference between two silhouettes, the method creates a Signed Distance Field (SDF) from each of the silhouettes being compared. To speed up the process, I made my SDF-generator of choice Burst-compatible (available here). For each pixel near the silhouette border (pixels with distance values close to zero), the process retrieves the corresponding pixel's distance value from the other SDF to determine how far away the other silhouette border is at that point. The penalty function sums these distances across all tested pixels in all silhouette pairs, yielding the overall penalty.
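
A rough sketch of that penalty and the descent loop, not the actual implementation: `render_sdfs` stands in for rendering the procedural creature's silhouettes from the same angles and converting them to SDFs, and the gradient is estimated with simple finite differences here (the rendering step isn't differentiable, so something along those lines is needed).

```python
import numpy as np

def silhouette_penalty(sdf_a, sdf_b, border=1.5):
    """For pixels near one silhouette's border (distance values close to
    zero), sum how far away the other silhouette's border is, and vice versa."""
    near_a = np.abs(sdf_a) < border
    near_b = np.abs(sdf_b) < border
    return np.abs(sdf_b[near_a]).sum() + np.abs(sdf_a[near_b]).sum()

def total_penalty(params, ref_sdfs, render_sdfs):
    return sum(silhouette_penalty(ref, proc)
               for ref, proc in zip(ref_sdfs, render_sdfs(params)))

def descend(params, ref_sdfs, render_sdfs, rate=0.01, eps=1e-3, steps=200):
    for _ in range(steps):
        base = total_penalty(params, ref_sdfs, render_sdfs)
        grad = np.zeros_like(params)
        for i in range(len(params)):  # nudge each of the 106 parameters in turn
            nudged = params.copy()
            nudged[i] += eps
            grad[i] = (total_penalty(nudged, ref_sdfs, render_sdfs) - base) / eps
        params = params - rate * grad  # step downhill
    return params
```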

This explains why, in the video, the growing legs are the first feature to change: increasing the leg lengths reduces the largest share of the distance penalty at once. After that, extending the neck has the greatest impact.

Notably, the process doesn't get the ears of the giraffe right. The reference model has both ears and horns, while my generator can't create horns yet. So the Gradient Descent process, which aims to match the silhouettes as closely as possible, made the ears large and round so they effectively serve double duty as both ears and horns. I later worked around this by hiding the horns of the reference model.

I also experimented with a standard optimization method called the "Adam" optimizer, but the results were not good. Ultimately, the automated process wasn't perfect, but it complemented my manual tweaks to speed up the creation of example creatures to some extent.

By this point, I had spent three months in 2024 working on procedural creature generation and had developed eleven example creatures covering a variety of shapes. I eventually scrapped the "extracted creatures" because the combination of high-level parametrization and Gradient Descent tooling made them unnecessary as an intermediate step.

Eleven creatures.

However, I was not even close to a sufficient high-level parametrization yet. Feeling the need for a change of pace, I decided to shift my focus to the procedural animation of the creatures.

Intermission

As I was writing this, I realized that while I’m convinced I haven’t yet achieved the goal of ensuring that "any random values for the parameters will always create valid creatures", I hadn’t actually put it to the test. So, I quickly wrote a script to assign random values to all the high-level parameters. These are the results:

While a few of the results look cool, most are not close to meeting my standards, as I expected. Still, this randomizer could prove useful going forward. It might help me identify which aspects of the generation are still insufficient and why, guiding my further refinement of the procedural generation.

Anyway, back to what happened in 2024.

Procedural animation of creatures

When it comes to procedural animation, I have the advantage that I wrote my Master's Thesis in 2009 about Automated Semi-Procedural Animation for Character Locomotion, accompanied by an implementation called the Locomotion System. That was based on modifying hand-crafted animations to adapt to arbitrary paths and uneven terrain.

Even before that, I implemented fully procedural (pre-rendered) animations as far back as 2002, when I was 18.

In the summer of 2022, I tried to essentially recreate that, but this time in real-time and interactive. As a starting point, I used my 2009 Locomotion System, stripping out the parts related to traditional animation clips. Instead, I programmed the feet to simply move along basic arcs from one footstep position to the next. To test it, I manually created some programmer-art "creatures" with various leg configurations.
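
The arc part is about as simple as it sounds. A minimal sketch, with illustrative names:

```python
import math

def foot_position(step_from, step_to, t, lift_height):
    """Interpolate the foot from one footstep position to the next
    (t goes from 0 to 1), adding a sine-shaped arc for the vertical lift."""
    x, y, z = (a + (b - a) * t for a, b in zip(step_from, step_to))
    return (x, y + math.sin(t * math.pi) * lift_height, z)
```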

In 2024, I resumed this work, now applying it to my procedurally generated creature models.

The results looked somewhat nice but were rather stiff, and the approach only works for taking small steps. As I've touched on before, animating procedural, mammal-like creatures to look natural is a tough challenge:

  • Quadruped mammals like cats, dogs, and horses move their limbs in more complex ways. Or at least, we're so familiar with their movements that any inaccuracy makes the movements look weird to us.
  • Fast gaits, such as galloping, are more complicated than walking.
  • The animation must control not only the legs, but also the spine, tail, neck, etc.
  • Since the creatures are generated at runtime, the animation must work out of the box without any manual tweaks for individual creatures.

That's a tall order even with my experience, but hey, I like a good challenge.

My approach to procedural animation is purely kinematic, relying on forward kinematics (FK) and inverse kinematics (IK). This means I write code to directly control the motion of bodies and limbs without considering the forces that drive their movement. In other words, there’s no physics simulation involved and no training or evolutionary algorithms (like this one).

A training-based approach isn't viable for creatures generated at runtime, and frankly, I have no interest in pursuing it. The only way the animation interacts with the game engine’s physics system is by using ray casts to determine where footsteps should be placed on the ground.
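
For readers unfamiliar with IK: given a desired position for, say, a foot, IK works out the joint angles that place it there. The classic analytic two-joint solve below shows the core idea via the law of cosines; my actual system is iterative and also handles the foot, as described later, so take this only as a primer:

```python
import math

def two_bone_ik(upper_len, lower_len, target_dist):
    """Angles (in radians) for a two-bone limb whose hip sits at the origin
    and whose ankle should end up target_dist away. Returns (hip, knee):
    hip is the upper bone's deviation from the hip-to-target line,
    knee is the interior angle at the knee joint."""
    # Clamp the distance so the target is always reachable.
    d = max(abs(upper_len - lower_len) + 1e-4,
            min(upper_len + lower_len - 1e-4, target_dist))
    knee = math.acos((upper_len**2 + lower_len**2 - d**2) / (2 * upper_len * lower_len))
    hip = math.acos((upper_len**2 + d**2 - lower_len**2) / (2 * upper_len * d))
    return hip, knee
```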

Hilarity ensues

As an aside, one of the fun parts of working on procedural animation is that works in progress often look goofy. When I first applied the procedural animation to my procedurally generated creatures, I was impatient and ran it before implementing an interface to specify the timing of the different legs relative to each other. As I wrote on social media, "It might be hard to believe, but this animation is actually 100% procedural."

The system itself already supported leg timing, as it was inherited from the Locomotion System, where timing was automatically set based on analyzing animation clips. However, in the modified fully procedural version, I hadn't yet implemented a way to manually input the timing data. Once I did, things looked a lot more sensible, as shown in the previous video.

Other people's comments about the absurd animations sometimes inspire me to create more silly animations, just for fun. "Somebody commented that the procedural spider and the procedural dog are destined to fight, but actually they are friends, they are best pals and like to go on adventures together."

Around this time, I wanted to better evaluate my procedural animation by directly comparing it to hand-crafted animation. To do this, I applied the procedural animation to 3D models I had purchased from the Asset Store, comparing it to the included animation clips that came with those models.

Continuing the theme of goofiness, my first attempt at this comparison had a bug so hilariously absurd that it became my most viral post ever on Twitter, Bluesky and Mastodon. (The added music adds a lot to this one, so consider playing with sound on!)

People would inevitably suggest, and sometimes even plead, that I bring this absurd silliness into the game I’m making, for fun and profit. While I understand the sentiment, the reality is that my vision for The Big Forest is not that kind of game. There's definitely room for some light-hearted moments, but that’s not the primary focus. For reference, think of a Ghibli movie — there’s a certain kind of silliness that would fit well there, but it would need to align with the overall tone.

Incremental progress and study

I started studying my procedural animation alongside the handcrafted animations to better understand the subtle but important differences. Here's the comparison tool I made again, this time without the hilarious bug.

Even without the bug, it still looks very rough. At higher speeds, it becomes clear that the approach of simply moving the feet from one footstep position to the next doesn't work.

In real life (and in the reference animations), at high speeds like galloping, the feet don't stay in contact with the ground for most of their backward trajectories. They only touch the ground for short durations. My procedural animation didn't account for this yet. Instead, the hips got pulled down to be within distance of the feet, and that's what caused the torsos of the procedurally animated creatures to drag along the ground.

Once I tried to account for this, things improved slightly — at least in that the torsos no longer dragged along the ground.

To make better-informed decisions about how far up the feet should lift, how much time they should spend touching the ground, and similar animation aspects, I developed a tool to plot various properties of the reference animations. For example, the plot below shows that as the stride distance (relative to the length of the leg) increases, the proportion of time the foot is in the air also increases. The white dotted curve represents my own equation that I attempted to fit to the data.

A scatter plot with colored dots and a white curve approximating them.
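
Fitting such a curve is a standard least-squares problem. Here's a sketch using SciPy, with stand-in data points and a made-up functional form for illustration (not the actual equation from the plot):

```python
import numpy as np
from scipy.optimize import curve_fit

def air_fraction(rel_stride, a, b):
    """Hypothetical model: the fraction of the stride spent in the air grows
    with stride distance relative to leg length, saturating below 1."""
    return 1.0 - 1.0 / (1.0 + a * np.maximum(rel_stride - b, 0.0))

rel_stride = np.array([0.4, 0.8, 1.2, 1.8, 2.5])   # stand-in measurements
air_frac   = np.array([0.10, 0.25, 0.45, 0.60, 0.70])
(a, b), _ = curve_fit(air_fraction, rel_stride, air_frac, p0=(1.0, 0.3))
```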

Below is another plot that shows the signed horizontal distance between the hip and the foot (actually, the footstep position) along the horizontal axis, and the foot's lift on the vertical axis. In the center, where the foot is directly below the hip, the lift is generally zero. Notably, the plot includes data from gaits at different speeds (walking, trotting, galloping), but the distance at which the foot lifts off the ground is fairly consistent across those speeds. So while higher speeds are associated with longer step lengths, the "distance" (relative to the creature) that a foot is in contact with the ground is not proportional to the step length. Instead, it's closer to a constant value, relative to the leg length.

A scatter plot with colored dots and a white curve approximating them.

I made many plots with this tool, visualizing data from the reference animations in all kinds of ways. Some of the plots revealed clear trends, like the two above. Others resulted in a jumble of unrelated curves or dots that I couldn't use for anything. In this way, my process was (and is) very much a classic research approach: Formulating hypotheses, testing them, discarding those that don't pan out, and building upon the ones that do.

Inverse kinematics overhaul

I had noticed that many animals kind of bend or curl their feet — and sometimes the entire lower part of their front legs — when lifting them. However, my procedural animation didn't capture this behavior at all. I could hack around this by dynamically adjusting the foot rotations over the walk cycle, but it often resulted in unnatural poses.

Eventually, I concluded that I needed to revamp the inverse kinematics (IK) algorithm I was using, with focus on better handling foot roll and bending scenarios. My old Locomotion System employed a two-pass IK approach, where the foot rotation was given as input to the first IK pass. Based on the orientation of the lower leg found by the IK, the foot rotation would be adjusted — rotated around either the heel or toe — to ensure a more natural ankle angle, followed by a second IK pass to make the leg match the new ankle position. This two-pass approach worked all right for the Locomotion System, which was applied on top of hand-crafted animation. However, I found it insufficient for the fully procedural animation I was working on now.

In principle, this two-pass approach could be changed to run iteratively, rather than just twice. However, this would be computationally expensive, since the IK algorithm itself is already iterative. Instead, I implemented a new IK algorithm where the foot rotation is not a static input, but is controlled by the IK itself.

Having the IK handle the foot rotation is a bit tricky, as it must behave quite differently depending on how much weight is on the foot, ranging from none to full weight.

I made significant progress with this approach, although there's a tricky issue: There are edge cases where multiple solutions can satisfy the given constraints. Since the leg poses are stateless, this sometimes makes them snap from one configuration to another based on even tiny changes in the input positions. I have some ideas on how to address this, but I haven't tested them yet.

After incorporating the new IK system and adding logic to make the feet bend when lifted, my results looked like this at the end of 2024:

While it's still a bit glitchy and far from perfect, the bending of the feet and legs is at least a step in the right direction.

And that's how far I got so far

Both the procedural model generation and the procedural animation still have a long way to go after spending around three months on each, and that can feel a bit demotivating. On the other hand, I've been making steady progress, even if it's slow. Writing this post has actually helped me realize just how much I've accomplished after all.

That said, I feel it's time for a break from the creatures. When I return to them later, I'll hopefully do so with renewed energy.

I wish I could have wrapped up this post with a satisfying milestone or a neat conclusion, but there was already so much to cover, and I didn't want to delay this write-up any further. I also think there's value in showing the messiness of creative processes and research. Let's see where I'm at when I write about the procedural creatures next time!


2024 retrospective

Jan 5, 2025

Another year went by as an indie game developer and what do I have to show for it?

In last year's retrospective I wrote that apart from working on my game The Big Forest in general, I had four concrete goals for 2024:

  • Present my Fractal Dithering technique
  • Release my Layer-Based ProcGen for Infinite Worlds framework as open source
  • Wrap up and release The Cluster as a free experimental game
  • Make better use of my YouTube channel

I ended up doing only two of those, but it was the two most important ones to me, so I'm feeling all right with that.

Release of LayerProcGen as open source

I released my LayerProcGen framework as open source in May 2024. LayerProcGen is a framework that can be used to implement layer-based procedural generation that's infinite, deterministic and contextual.

I wrote extensive documentation describing not only the specifics of how to use it, but also the overarching ideas and principles it's based on. I also gave a talk about it at the Everything Procedural Conference, which was well received.

I'm unsure how many people are using the framework directly for their games, but I know of several who've been inspired by its underlying ideas and made their own implementations.

Sythelux Rikd ported LayerProcGen to Godot. The core framework worked in Godot out of the box, but Sythelux also ported several of its Unity-dependent optional utilities and made example scenes for Godot. Oli Scherer created a Rust implementation of LayerProcGen with some slightly different architectural choices. For his game "Around The World", Thomas ten Cate aka Frozen Fractal wrote his own implementation that works on a sphere.

Overall I feel like LayerProcGen has made at least a little dent in the procedural generation community, and I'm happy to have been able to make such a contribution.

Release of The Cluster as a free experimental game

I finally wrapped up and released The Cluster, a game I'd been working on on-and-off since 2003! I already wrote about it here, so no need to repeat myself further, but despite not having much of an impact on anything, I'm personally happy to at long last have this "unfinished business" out of the way.

Stuff I didn't get to do

I didn't get around to presenting and releasing my Fractal Dithering technique. Hopefully this year!

I also never made more "proper" YouTube videos after the one in January 2024 about the terrain generation. I considered doing one about LayerProcGen, but there's already the video of my conference talk about it. I could probably explain the framework and ideas slightly better by creating a video with nice animated graphics and diagrams, but it would be a lot of work for marginal added utility. I also considered doing one about all the interesting tech in The Cluster, but I didn't feel like diverting more of my time to that game over working on my current game. Why no new videos about my current game The Big Forest then? Well, read on...

Progress on The Big Forest

The Big Forest is the game I'm currently working on, and it's still in its early stages, just like I said last year. The game will be fully procedurally generated, and so far I've been working on a series of disconnected experiments and proofs of concept that will eventually all be rolled into the game. These include procedural generation of terrain, gameplay progression, creatures and music.

I had worked on the terrain generation in 2023, and I continued this work in the first few months of 2024.

  • Previously the paths in the terrain had simply gone from one chunk corner to the diagonally opposite one. I now generated points of interest (with towers for now) and made the paths connect those, although it's not yet completely robust.
  • I managed to radically speed up the generation by making use of Unity's Burst compiler (without Unity Jobs, which are not a good fit). I'm very happy with that. For fun, I made a video where I had drastically increased the player's speed, like they're some kind of fast hedgehog.
  • I changed the terrain chunk implementation to use chunks of different sizes, with larger chunks of lower resolution further away from the player. This made it possible to expand the draw distance greatly without slowing the generation down (see the sketch after this list).
  • Using chunks of different resolutions introduced cracks in the terrain, so I implemented a skirt solution.
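
To illustrate the variable chunk sizes, here's a sketch where each ring of chunks further from the player doubles in size and halves in resolution, so the draw distance grows exponentially with the number of levels. The doubling scheme and the base size are illustrative assumptions, not necessarily what the game uses:

```python
import math

BASE_CHUNK_SIZE = 64.0  # meters; illustrative

def chunk_level(distance_to_player):
    """0 = nearest, finest chunks; each level doubles the chunk size
    (and halves the terrain mesh resolution)."""
    if distance_to_player <= BASE_CHUNK_SIZE:
        return 0
    return int(math.log2(distance_to_player / BASE_CHUNK_SIZE))

def chunk_size(level):
    return BASE_CHUNK_SIZE * (2 ** level)
```

The skirts then hide the cracks where chunks of different resolutions meet by extending the border of each chunk downwards into a vertical strip of extra geometry.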

On the subject of YouTube videos though, I'm not sure I could make an interesting video of the 2024 terrain progress mentioned above, and there's plenty of other videos out there that cover very similar ground.

After this I kind of needed progress with other aspects of the game before it made sense to focus more on the terrain. After taking a break to wrap up and release LayerProcGen and The Cluster, I returned to The Big Forest to work on its procedural creatures.

I spent the latter half of 2024 working on that - three months on creature model generation and three months on procedural animation. I was writing a whole section about it here, but it got so long that I've turned it into a separate post about the procedural creatures instead. Long story short, I made some progress, but it's still very far from where it needs to be.

The fact that it's still well short of being any good means I don't consider it good material for a YouTube video either, as I think such videos work best when they can conclude in a satisfying wrap-up of something. I did write the blog post though, and now I need a bit of a break before I resume working on the creatures again.

Oh right, a video of a bug in the procedural animation I was working on became my most viral post ever on Twitter, Bluesky and Mastodon. (The creatures in this video are not procedurally generated; I was testing the procedural animation on purchased animal models for research purposes.)

Other tidbits

I can take no credit for it, but my business partner for the Quest port of Eye of the Temple, Salmi Games, released an update for Quest 3 that adds real-time shadows from the torch, higher frame rates (90fps almost everywhere), improved texture resolutions and aniso, and higher clarity due to foveated rendering being mostly disabled. This brings the Quest version (on Quest 3) even closer to the PCVR version. UploadVR covered it here.

I very occasionally make music, and in 2024 I released two very different tracks. And as a first, these ones even have vocals (by me): Just to Say Goodbye, a sad song about an alien with bad timing, and Sorte Gryde, a metal re-imagining of an old Danish singing game, with a dash of Saturday-morning cartoon villainy.

I was briefly mentioned in this video about making games feel mysterious by Mark Brown of Game Maker's Toolkit due to my 2021 article about designing for a sense of mystery and wonder. Neat!

When I was around 11 I would draw "video games" on paper during breaks at school. Not sketches or design docs, but games that were fully playable using your finger to indicate where your avatar is, and following simple rules. Earlier this year I had these games scanned (42 pages!). I made a zoomable annotated display of one of the games. I'd like to do more with the other games too, maybe even make a (digitally) playable game out of the one with 3D perspective dungeons.

A drawing inside a box-like dungeon drawn in perspective where there's grid-based tiles on the floor and ceiling.
Grid paper with a platformer level drawn with pencil. A drawing of an isometric maze with various special elements like one-way passages and secret passages. A drawing of a platformer level with various cave entrances that are connected pair-wise.

Goals for 2025

I'll keep this short.

  • Develop the creature generation and animation sufficiently to put them into the game. If I'm still not there by 2026, I need to seriously reevaluate some things.
  • Make at least one new YouTube video about The Big Forest.
  • Present my Fractal Dithering technique.

Wish me luck!


Procedural game progression dependency graphs

In 2022 I came up with some new ideas for what kind of game The Big Forest (working title) could be. During the year, I developed a way to procedurally create dependency graphs and also procedurally create fully playable game levels based on the graphs.

In the video below you can see the prototype I had near the end.

A dependency graph is a concept in mathematics and computer science which has been independently "discovered" by lots of game developers because it turns out to be pretty central to designing any non-linear game.

At its simplest you can think of locks and keys. A locked door has the corresponding key as a dependency to be able to progress through the door. But dependencies can also be abilities in a Metroidvania game, quest objectives in an RPG, or required inventory items for a puzzle in a point-and-click adventure.
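
In code, the core of the concept is small. The sketch below (with illustrative node names) represents a dependency graph as a mapping from each element to the set of things it requires, and uses Kahn's algorithm to produce one possible playthrough order, or to detect that the graph can't be completed:

```python
from collections import deque

def playthrough_order(deps):
    """deps maps each element to the set of elements it depends on."""
    remaining = {node: set(d) for node, d in deps.items()}
    available = deque(n for n, d in remaining.items() if not d)
    order = []
    while available:
        node = available.popleft()
        order.append(node)
        for other, d in remaining.items():
            if node in d:
                d.remove(node)
                if not d:
                    available.append(other)
    if len(order) < len(deps):
        raise ValueError("Cyclic dependencies: the game can't be completed.")
    return order

deps = {"start": set(), "key": {"start"}, "door": {"key"}, "goal": {"door"}}
print(playthrough_order(deps))  # ['start', 'key', 'door', 'goal']
```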

Here are a few different articles by others that discuss what they are. Note the lack of standardized terminology. I personally use "game progression dependency graph" since the concept is applicable to all non-linear games, not just puzzles or dungeons.

I posted about my game progression dependency graph tech on social media (Mastodon and Twitter) throughout developing it. But when people ask me about it now, it's hard to point to posts scattered across a year of social media posts.

I've copied all those posts (nearly verbatim) into this blog post so it's documented conveniently in one place. For this reason it contains not only the conclusions at a single point in time, but also the questions I asked, my confusion, and my developing understanding of the problem space over time. Each header corresponds to a new thread. Images and videos generally relate to the text above them.

Let's start the journey.

March 30 2022

I'm slowly starting to experiment with novel generated graph structures again, here with an early and rough game progression dependency tree. I'll need to merge certain locations, turning it into a directed acyclic graph, and later generate a spatial graph based on it.

I must say I find it tricky to work out data structures for this. Which concepts should be nodes and which should be edges? And with both different types of nodes and different types of edges, should those types be represented as class types (inheritance) or just with data?

I keep having to revise my thinking about which concepts are represented as nodes in the graph and which as connections...

The graph generation now creates a directed acyclic graph rather than a tree structure. It took a long time to get the layout to be nice, avoiding crossed edges when possible and minimizing wasted space.

In the replies someone mentioned Metazelda. It looked familiar so I might have seen it a long time ago. I also did basic lock+keys for my game Cavex myself back in 2007. (Cavex eventually turned into The Cluster.)

The concept I'm working on now takes place in a bit more of an open world where areas are accessible by default and only certain places locked off. The "tunnel/dungeon carving" mindset feels like it might not be as helpful in this context, but I'm still figuring things out.

April 6 2022

If I have to spend a lot of time looking at these generated game progression dependency graphs, I might as well make them nice to look at.

I revised my thinking on the nodes again and added a "location" requirement to almost all the node types. On the first try, the resulting graph had multiple new neat ways of bending the connections, without me having changed the layout function at all. Robust algorithm I guess. :)

Unfortunately it doesn't seem as robust in avoiding crossed edges anymore.

Hmmmmmmmm

First glimpse of an idea to generate a dependency graph and a spatial graph simultaneously from the same underlying data structure.

Really, no need for distant parts of the spatial graph to repel each other quite so much. This can make the graph more curvy, more interesting looking, and more compact at the same time.

April 10 2022

Left: Game progression dependency graphs.
Right: Spatial graphs that could be the basis of where things are located in the world.
A key for a locked gate is a direct dependency, but can be located far away spatially.

Some people may be reminded of articles on squidi.net about procedural generation, which discuss how you could generate an entire game using these principles. It’s a classic resource, and those articles cover a lot of ground, some of which one will inevitably hit when trying to do procedural progression (as this article on puzzle trees also states). Light on implementation details but great food for thought.

I'm going towards an approach of generating the structures by incrementally injecting new dependencies anywhere rather than a simple recursive top-down approach. And I decided to generate both graphs simultaneously rather than as consecutive passes.

These two things combined should hopefully make it possible to inform new dependency injections both on what would be a good spot dependency-wise and a good spot spatially. That's the next thing I'll focus on.

The game progression dependency graph and spatial graph visualizations now support changing the data structure on the fly. This makes it possible to see the node injections as they happen, and exactly how they modify the graph. Alas the jiggles had to go.

I found out I can create more balanced game progression dependency graphs by switching to an approach I call ... (checks notes)
Exploding The Graph, Then Picking Up The Pieces.

The approach so far is powerful in the theoretical possibility space of graphs it can create, but it tends to only open up one new location at a time, stifling player choice. I'm struggling to find a way for the generation to embody the "just right" shapes described by Gareth Rees.

The problem definitely relates to the branching factor, though whether to focus on in-going or out-going edges or both is unclear. The question is then how to construct a DAG with one start node, one end node and n in-between nodes with a given desired branching factor.

April 18 2022

I guess I'm getting closer to something I might be able to use: Dividing n nodes into m columns and then connecting them. But I'm not entirely happy with how specific/hardcoded/restrictive this approach is. For example, this will never create connections that skip rows.

By the way, the graphs here are at a different abstraction level than the previous graphs I've shown. Here, only location nodes are considered. All the other node types can be injected and connected later in a way that respects the same dependencies.

After sleeping on it I came up with a much better approach that creates more random and organic graphs, and always ensures ingoing and outgoing edges are either 1 or 2. Just looking at these results makes me much happier than the results in the previous video.

The new approach is based on a simple graph rewrite rule I keep applying on random edges until the desired number of nodes is reached.

I may have been inspired by a chapter in Joris Dormans' thesis Engineering Emergence, which I read recently after some people mentioned it.

Graph layout rewrite

At this point I took some time improving the graph layout algorithm for the dependency graph, but didn't post on social media about it. The graph layout takes cues from the algorithm explained on the Wikipedia page about Layered Graph Drawing (Sugiyama-style), and I improved my implementation by taking ideas from a few papers on the technique.

After the layout improvements I took a six month break from the game progression dependency graph research and focused on other things for a while, before taking it up again in November.

November 13 2022

Top: Game progression dependency graphs.
Bottom: Spatial graphs that show where things are in the world.
A key for a locked gate is a direct dependency, but can be located far away spatially. Next step is making gameplay objects from the spatial graph nodes.

The graph rendering used to be based on IMGUI with ugly matrix hacks but I spent today changing it to be based on Shapes by Freya Holmer so it supports perspective and is just much nicer. It's an awesome library and the switch was very easy.

November 19 2022

"I'll just add some icons to my game progression dependency graph," I thought. Haha, except, what do the icons really represent? The obstacle or the reward? Both? One week later, I've ended up with this schematic representation.

Here are two quite similar graphs before and after the iconographic makeover. Icons inside a node represent the obstacle/medium of that node, while icons shown at the top edge of a node represent the reward/outcome.

November 24 2022

With the dual game progression dependency graph and spatial graph, I can now begin to construct a world to explore. Right now it's still looking rather abstract. 😅

November 28 2022

Whee, I now have actual "gameplay" created from my game progression dependency graph and spatial graph (starting from 16s in the video). Only a few node types for now, but it's a cool milestone! 🗝🍎🐱🚩

The game will be focused on exploration, paying attention to the environment and finding clues in it about how to progress.

A note on the spatial graph. The spatial positions of elements are laid out based on a node relaxing / repulsion algorithm, and then Voronoi cells are created at each position and walls created between cells that don’t belong to the same location.
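
A minimal sketch of the relax/repulse step (the constants and fall-off are illustrative):

```python
import numpy as np

def relax(positions, edges, iterations=200, spring=0.05, repulse=0.5):
    """positions: (n, 2) array of node positions; edges: (i, j) index pairs."""
    for _ in range(iterations):
        forces = np.zeros_like(positions)
        for i in range(len(positions)):
            # Repulsion away from all other nodes, falling off with squared distance.
            delta = positions[i] - positions
            dist2 = (delta ** 2).sum(axis=1) + 1e-6
            forces[i] += (repulse * delta / dist2[:, None]).sum(axis=0)
        for i, j in edges:
            # Spring-like attraction along the graph's edges.
            pull = spring * (positions[j] - positions[i])
            forces[i] += pull
            forces[j] -= pull
        positions = positions + forces
    return positions
```

The Voronoi cells can then be built from the final positions (for example with scipy.spatial.Voronoi), with walls generated along cell borders where the neighboring nodes belong to different locations.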

November 30 2022

Every gate, key, creature etc. now has a unique generated pattern to identify it by, so I could get rid of all the letters. One step closer to a world full of visually depicted clues.

Adding some trees. Trees make everything 1000% better.

December 3 2022

Generating a strange walled garden full of secret clues, based on a game progression dependency graph.

Someone noted in the replies that the "kitty keys" are an unusual concept, and that they couldn't think of an example where you need item A to lure creature B to point C to unlock gate D. I don't know of that exact combination elsewhere either. In some anime (and also The Fifth Element) a human is a key. And in the game Rime there are some robots that serve as keys. I thought animals could be a fun variation on this already strange theme.

December 19 2022

Why did I implement this mechanic!? It'll just reveal I'm terrible at remembering things! Anyway, just a few new mechanics implemented for my strange walled garden that's procedurally generated based on a game progression dependency graph.

December 23 2022

Playing an instrument and finding songs with curious powers in this strange garden. (The game progression dependency graph it's procedurally generated from is shown at the end.)

End of prototype

At this point I stopped working on this prototype. As a proof of concept it had succeeded, generating fully playable little levels with a variety of gameplay mechanics and clue types.

The limiting factor was no longer the dependency graphs themselves.

One limiting factor now is the iconographic gameplay. Different creatures are all little cat head icons only differentiated by different colored patterns. For things to be more evocative, immersive, and easy to remember, I need to actually generate 3D animals of a wide variety. So that's the next thing I started working on, though out of scope for this blog post.

Another limiting factor is that I envision exploration playing an important role in making the game satisfying to play, but the aesthetic of the prototype does not make exploration interesting at all. Luckily, another aspect of the game I've been working on is beautiful terrain generation. So eventually I need to integrate the procedural progression gameplay into those generated landscapes. But again, I need to first put work into creating procedural 3D models of the various gameplay elements before it can all be meaningfully integrated.

I hope you enjoyed this chronology! If you have any questions, let me know in the comments. And if you want to follow the ongoing development of The Big Forest, check out the social media links in the menu, or copy-paste my blog URL into your RSS reader.


The Cluster is now released

The Cluster is finally released and available for free on Itch. It's a 2.5D exploration platformer set in an open world that's carefully procedurally planned and generated, and does a few interesting things I haven't yet seen in other games (check out the links for more info).

Here's a trailer:

My last blog post about The Cluster was in 2016 and titled "Development of The Cluster put on hold", and by that I meant put on hold indefinitely.

At that time I had reached the conclusion that I had to give up my ambition of releasing The Cluster in the form I had envisioned. That would have required way more variety and features to be implemented, which was not really viable after all due to a variety of burdens, detailed in the post, stemming from the game's design and implementation.

The context for this decision was that I had envisioned The Cluster to be a commercial title, and it was my primary game development project at the time. (Back then I was a game developer only in my spare time.) I wanted to switch my focus to different projects that were more viable to be able to be released as commercial games, specifically Eye of the Temple, which has since been completed and released to decent success, and The Big Forest (working title), which I'm working on these days.

Still, I had put an incredible amount of work into The Cluster over many, many years (going back to an initial prototype in Game Maker in 2003!), so I never managed to reconcile myself with the idea of it never being available for others to see and play. Especially since, like I mentioned, it does some things I haven't seen elsewhere. So at some point I decided I wanted to release it for free sooner or later, as I also mentioned in this 2022 blog post. Not in the ambitious form originally envisioned, but in the limited but fully playable state it had already reached, plus a bit of polish to smooth out rough edges - for example adding robust gamepad support.

And so, between 2016 and now I returned from time to time to work on The Cluster for a little while, polishing it, fixing bugs, adding gamepad support, adding a settings menu, adding interaction prompts for how the controls work, and lastly, getting a round of playtesting by volunteers and addressing their feedback. At the beginning of 2024 I said I wanted to get it out this year, and well, now it's finally out.

The graphics are nothing special and the gameplay can get a bit repetitive over time, but still, for people who are content exploring a big world in search of artefacts, where there is a clear structure and purpose to the world layout, the game can be fun for a few hours, and provide an experience different from other games out there.

It's also a showcase game for my open-source framework for layer-based infinite procedural generation, LayerProcGen. Oh yeah, I released that a few months ago, which I guess I never blogged about!

Anyway, the game is free, so give it a try. And if you do, let me know what you think! :)

2023 retrospective and goals for the new year

Jan 8, 2024

2023 was a pretty good year for me!

I'll touch here briefly on my personal life, then go on to talk about the Quest 2 release and sales of Eye of the Temple, and finally talk about my new game project and goals for 2024.

Personal life

It's the first year since the pandemic that didn't feel affected by it. I moved from Denmark to Finland in 2020, just as the pandemic began, so on the social side it was some slow years initially.

Things picked up in 2022, but especially in 2023 we had lots of family and friends from Denmark visit us here and have a great time, and we also made more strides on the local social network front.

Particularly memorable was a wonderful weekend celebrating the 40th birthdays of me and a friend, with some of my closest family and friends from Denmark and Finland at a site called Herrankukkaro in the beautiful Finnish archipelago.

Eye of the Temple released on Quest and turned a profit

In April 2023, a year and a half after the original PC release on Steam, my VR game Eye of the Temple was finally released for Quest 2, with the help of Salmi Games. While it was super tough getting there, in the end we managed to ship the game at a level of quality I'm very proud of. Others agreed; it got a great critical reception, as well as a high user rating of 4.7 out of 5 stars.

It's super gratifying regularly seeing new reviews of the game from people who say it's the best VR experience they've had. Oh, and recently, UploadVR ranked it the 5th best game for Quest 3 and Screen Rant ranked it the 6th best game for Quest 2. Wow, what an achievement for my little game! (But remember, critical acclaim does not equal sales…)

I’m no longer working on the game at this point. After being occupied with it over a span of seven years, I really want to move on, and I'm also done with VR in general for now. But the sales of the game are still developing, so let's talk a bit about that.

My thinking about the game’s sales performance has changed a lot over time. I didn't pay myself a regular salary during the game’s three years of full time work. But when evaluating the game financially, I use the old salary from my previous job as reference, and calculate whether my time investment at that salary (I’ll refer to it as just “my investment”) would be covered retroactively by the game’s revenue. Of course, I also keep in mind that the covered percentage would be higher if I based it on a more moderate salary.

I was initially slightly disappointed in the Steam sales. As I wrote about back in November 2021, the projected year one sales would only cover 25% of my investment. Back then I expected the Steam year one revenue to make up the majority of the game's lifetime revenue. One year later, the sales had outperformed that projection, and my investment was actually covered 40%.

A lot has happened since then, in particular due to the Quest launch.

Comments from many VR developers in 2021 and 2022 had indicated that Quest sales could commonly be 5x-10x as large as Steam VR sales. For Eye of the Temple, the Quest week one revenue was merely twice what the Steam week one revenue had been, so it was not quite as high as Salmi Games and I had hoped for. Speaking with other VR developers in 2023, it seems that the time when Eye of the Temple launched on Quest was generally a bad period for Quest game sales.

Still, Quest is easily the most important VR platform, and later the sales picked up significantly, with the recent Black Friday and Xmas sales combined having as big an impact on revenue as the launch sales. Already, 70% of total revenue has come from Quest and 30% from Steam, with the Quest version having been out for a shorter time.

Cumulative revenue from Steam and Quest

My investment is now covered 140%. In other words, even based on a proper salary for myself that's fitting for my experience, Eye of the Temple has recently flipped well into profitability. That still doesn't make it a runaway hit, but it's really nice to know that it's a success not only creatively and as a passion project, but also in terms of financial sustainability. Back in 2020 when I was still developing the game, I had not expected that at all for my first commercial title.

My new project: The Big Forest

So what comes after Eye of the Temple? Like I wrote above, I'm done with VR for now.

The working title of the new game I'm developing is "The Big Forest". It's set in a big mystical forest and has a strong focus on exploration. The gameplay will involve light puzzles based on connecting clues found through exploration and gradually gaining access to new areas.

The project is in its early stages. The game will be fully procedurally generated, and so far I've been working on a series of disconnected experiments and proofs of concept that will eventually all be rolled into the game. These include procedural generation of terrain, gameplay progression, creatures and music.

I started working on the game in 2022, but I put it on hold shortly after, when I started working on the Quest 2 version of Eye of the Temple. In the last third of 2023, I started working on the procedural landscape for the game again. You can see an overview of that progress in the video below.

As for what the game is about in general, I made this page about The Big Forest where you can read more about it.

Goals for 2024

I expect The Big Forest to be my focus for several years (and that might end up being an understatement).

But I also have a list of more concrete things I hope to get done in 2024, not all directly related to my game:

Present my Fractal Dithering technique

I want to release a video, blog post, and source code for a rendering technique I call Fractal Dithering. It's unrelated to my game — just something I developed because I had an idea for it and had to try it out. The code is all done, and I worked on an explainer video (not the one here below) back in June-July 2023. But making the video took so long that I ended up having to take a break from it because I was losing motivation. Let's get that wrapped up.

Release my Layer-Based ProcGen for Infinite Worlds framework as open source

Update May 2024: It's done

I've put more than a decade into developing a framework for contextual procedural generation of infinite worlds. I wrote about it here in 2013. I originally developed it for the game The Cluster, which I was working on at the time. I've since abandoned that game, but I'm now using the framework for The Big Forest. I think it could be useful for others as well, which is why I'd like to open-source it.

The generality of the framework is proven by its use in two entirely different games - The Cluster, a 2.5D platformer where the world is made out of generated meshes, and The Big Forest, a first/third-person game based on generated terrains. In 2023 I put some work into removing cruft from the framework that was only relevant to The Cluster, and into streamlining it in general. There's still some of that work left to do.

Wrap up and release The Cluster as a free experimental game

Update August 2024: It's done

I already wrote about it here in 2022, but I'd like to wrap up my old game The Cluster and release it for free as an unfinished game - partly because the game is fully playable and one can easily have a few hours of fun with it, and partly because it demonstrates a lot of interesting things that can be done with my Layer-Based Procedural Generation for Infinite Worlds framework, in areas including level design and pathfinding. It's the longest-running project of my life, and it'd be nice to get it out there, even in an incomplete state.

Make better use of my YouTube channel

Until recently, my YouTube channel just had various videos showing things off with little or no commentary, and without any channel branding or identity. I'd like to start making videos in a more "classic YouTube format" where I discuss a subject with a proper intro and outro, channel branding, background music, etc.

I've just recently kickstarted that effort with the video about developing a procedural landscape for The Big Forest that I embedded further up this page. I aim to produce a few more such videos in 2024, for example one video for each of the subjects mentioned above.

What would you like to see?

The goals above all relate to things I'd like to share, so I'm very interested in gauging which of them might be of interest to you all. If there's anything in particular you'd be interested in, let me know!


Charts to visualize how much you owe Unity for their per-install Runtime Fee

Sep 15, 2023

Unity Technologies has announced a new Unity Runtime Fee that charges developers a fee of up to $0.20 per installed game above certain thresholds. According to my calculations, it can be a bankruptcy death-trap, at least in certain cases.

Shockingly, the owed percentage is unbounded, to the point that the owed amount can exceed gross revenue, since the fee depends on installs, not sales. For example, above the fee thresholds, a bundle sale earning $0.10 that results in two installs would owe 2 × $0.20 = $0.40 - four times the revenue from that sale.

Update 1: Unity has since backtracked and apologized for the announced changes. With the new updates to the terms, Unity will cap the install fees at a maximum of 2.5% of revenue, and the changes will not be retroactive after all. Furthermore, John Riccitiello is stepping down as CEO. There are more details in the linked blog post.

Update 2: About a year later, Unity canceled the runtime fee altogether. Good.

Nevertheless, Unity has suffered a tremendous decrease in trust and goodwill, which was already not great before. With the cancellation, there is less urgency for developers to switch to a different engine, but the whole situation has highlighted the importance of being prepared for such a scenario and of keeping eyes and ears open towards other engines as well.

The original post continues below.

You can check out the specifications in their blog post. Based on those, I've made two charts where you can look up how big a percentage of your gross revenue you would owe Unity, based on the number of installs and on how much revenue you make for each of those installs. The fee specifications are different for Unity Personal and Unity Pro, so there is a chart for each.

If you want to check the math used for the charts, you can check out the source for the chart for Unity Personal and the chart for Unity Pro.
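To make the underlying arithmetic concrete, here's a minimal sketch in Python of how such a fee calculation works, assuming the Unity Personal terms as announced at the time: $0.20 per install once a game has passed both 200,000 lifetime installs and $200,000 in revenue. The linked chart sources are the authoritative version of the math; treat this as an illustration only.

    # Minimal sketch of the fee math, assuming the announced Unity Personal
    # terms: $0.20 per install once a game has passed both 200,000 lifetime
    # installs and $200,000 in revenue. Illustration only - see the linked
    # chart sources for the math actually used in the charts.
    def owed_fraction_personal(installs, revenue_per_install):
        """Return the owed fee as a fraction of gross revenue."""
        gross_revenue = installs * revenue_per_install
        if installs <= 200_000 or gross_revenue <= 200_000:
            return 0.0  # below the thresholds, no fee is owed
        fee = 0.20 * (installs - 200_000)  # $0.20 per install above threshold
        return fee / gross_revenue

    # Example: 2 million installs at $0.50 of revenue per install
    print(owed_fraction_personal(2_000_000, 0.50))  # -> 0.36, i.e. 36%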

Effect on free-to-play games

The first takeaway is that free-to-play games are kind of screwed, since the average revenue per user is often very low in that monetization model. This is probably deliberate. See, Unity is reducing or waiving the new fees for games that use certain services of theirs, such as Unity Gaming Services or Unity LevelPlay mediation for mobile ad-supported games. And there are reports that getting more games to adopt these services and killing off the competition was the whole point of introducing the new fees. This sounds like a rather shitty anti-competitive practice.

Enough about free-to-play though; I don't personally care that much about it and my knowledge about this area is very limited. What about premium games? Even if those are just collateral damage in Unity's strategy, they are still profoundly impacted.

Effect on premium games

Premium games (games that are bought with an up-front purchase) can typically have price tags such as $10, $20 or $60, and at those prices, Unity's Runtime Fee of $0.20 (or lower) per install is just a tiny fraction, right? No, unfortunately that's not how it works at all.

Sure, if you assume that a premium game has one install per unit sold, and think of the revenue per install as the original price the game is sold at, then the new fees don't look too bad for premium games. But it's a trap to think that way.

The average lifetime price for a premium game is lower than you think

The average price a game sells at over its lifetime can be a fraction of the price it sells for initially. Deep discounts (e.g. 90% off on Steam) and bundles (Humble Bundles and similar) can rapidly drag down the lifetime average price, since far more people often buy at these lower price points than at the original price.

So when using the charts above, don't consider a game's initial price. Instead, think of the lowest price the game could be sold at when it's at the maximum discount it will ever be, or when it's sold in a bundle together with other games, each game earning only very little per sale.
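As a back-of-the-envelope illustration, here's how quickly discounted and bundled copies can drag down the lifetime average. The sales figures are entirely made up:

    # Hypothetical sales breakdown for a $20 game - the figures are made up
    # purely to illustrate how discounts and bundles drag the average down.
    sales = [
        (10_000, 20.00),   # copies sold at full price
        (40_000, 4.00),    # copies sold at 80% off
        (100_000, 1.50),   # copies sold in a bundle
    ]
    copies = sum(count for count, _ in sales)
    revenue = sum(count * price for count, price in sales)
    print(revenue / copies)  # average lifetime price: $3.40, far below $20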

And that low price, that's just the starting point.

The number of installs is not the same as the number of copies sold

Don't think of the X axis in the charts above as the number of copies sold. The number of installs can be much higher, due to a variety of factors:

  • A customer can install their purchased copy on multiple devices. Unity has said this will count as multiple installs.
  • A customer can uninstall the game and later install it again. Unity initially said this would count as separate installs, and later said it won't. They have not disclosed how they would be able to tell.
  • On some devices and stores, updating a game to a newer version might count as a new install. Unity has not disclosed their methodology.
  • Pirated copies of a game are unrelated to copies sold. Unity has said those won't be counted, but in practice they have no way of knowing whether a given install is a pirated copy or not. They have said that developers are welcome to contact them if they suspect they have been victims of piracy. If this does not fill you with confidence, you're not alone.
  • Sometimes, full premium games are made free to play for a weekend or similar. It's a kind of demo, but one using the same build as the full game, just limited in time. Meta also has try-before-you-buy functionality where players can try certain games for e.g. 15 or 30 minutes before deciding whether to buy. These kinds of demos, which are not separate builds, count towards the install counts of the full game, and obviously the number of people who try a game can be much higher than the number who end up buying it. It essentially behaves like the free-to-play model.

How to actually use the charts

The problem is that you probably have little idea what the installs-to-sold-copies ratio will be for your game, given all the unpredictable factors mentioned above (multiple devices, reinstalls, updates, piracy, time-limited full-game demos if applicable). But you can start by pretending one sale equals one install and plotting a point in the chart based on that.

  • Plot an initial starting point in the chart based on sold copies and revenue per sold copy:
    • For the X axis, predict how many copies your game will sell.
    • For the Y axis, predict the average price the game will sell at, keeping in mind that it'll probably be closer to the lowest price the game will ever sell at than to the initial price.

Now spot the diagonal black line that goes through the chart, which represents the 0% threshold.

  • Draw a line from your chosen starting point down and to the right, parallel to that black 0% line.

The higher the installs-to-sold-copies ratio is, the further you will move down and to the right along that line. This is because a higher ratio both increases the number of installs and decreases the revenue per install. And in these particular charts it forms a straight line because both the X axis and the Y axis are logarithmic.
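To see why the line is straight and parallel, note that for a fixed number of sold copies at a fixed average price, the gross revenue doesn't change no matter how many installs each copy generates. A quick sketch, with hypothetical numbers, makes the invariant visible:

    # For a fixed number of sold copies and average price, gross revenue is
    # fixed: scaling installs by k divides revenue per install by k, so the
    # product stays constant. A constant product is a straight line of
    # slope -1 when both axes are logarithmic. Numbers here are hypothetical.
    copies, avg_price = 100_000, 8.00
    for k in (1, 2, 10):  # installs per sold copy
        installs = k * copies
        revenue_per_install = (copies * avg_price) / installs
        print(installs, revenue_per_install, installs * revenue_per_install)
        # the product (gross revenue) is $800,000 in every case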

Example

Let's say you expect your game will sell on average for $10, taking discounts etc. into account, and that you expect to sell 300k copies. You want to know what you might owe Unity if you stay on the Unity Personal license (which is actually allowed regardless of revenue according to their announced changes).

So in the chart for Unity Personal you plot an initial point at 300k on the X axis and $10 on the Y axis. This represents what you would owe if there is exactly one install per sold copy.

But to be aware of the potential effects of a higher installs-to-sold-copies ratio, you draw a diagonal line starting from your initial point, parallel to the bottom black 0% line.

Following this line shows you the effect of higher installs-to-sold-copies ratios, such as 2x and 10x, and you can see how drastically this affects how big a percentage of your gross revenue you owe to Unity.
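Running the example's numbers under the same hypothetical Unity Personal terms as in the earlier sketch ($0.20 per install above the 200k thresholds) shows just how steep that effect is:

    # The example's numbers under the same assumed Unity Personal terms as in
    # the earlier sketch: $0.20 per install above the 200k thresholds.
    copies, avg_price = 300_000, 10.00
    gross_revenue = copies * avg_price  # $3,000,000 regardless of installs
    for k in (1, 2, 10):  # installs per sold copy
        installs = k * copies
        fee = 0.20 * max(0, installs - 200_000)
        print(f"{k}x installs: {fee / gross_revenue:.1%} of revenue owed")
    # 1x installs: 0.7% of revenue owed
    # 2x installs: 2.7% of revenue owed
    # 10x installs: 18.7% of revenue owed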

Consequences

Unfortunately I can't help you figure out what installs-to-sold-copies ratio is realistic to expect, because I have no idea even for my own games, let alone others'. And that's kind of the problem here: charging based on installs is utterly bonkers because of how unpredictable it is.

It's impossible to budget for. And it turns all kinds of things that are a normal part of game development (like discounting your game, accepting that piracy is inevitable, etc.) into suddenly nerve-wracking issues, since they could have drastic effects on what you owe Unity.

On top of the number of installs being unpredictable even if they were counted correctly, you also can't trust Unity to count them correctly, since the way they track installs is proprietary and unaccountable. It relies on Unity saying "trust us, you owe us this much money" without you being able to inspect their methodology, data, or calculations at all. And many of the things they claim they will correctly count, or correctly refrain from counting, are simply not possible in practice.

In fact, an alleged Unity employee (and it looks legit to me) posted anonymously that Unity is aware developers may in some cases lose more per install than they earn, even to the point of bankruptcy, but that they would "fix this with the customer to not bankrupt them".

The issue with that is that no serious business would leave a matter of potentially going bankrupt to be addressed with some future ad-hoc fix at another company's whim. That stuff needs to be up-front and contractual, even with companies you trust, let alone with Unity.

In short, this is an absolute train wreck.

On a related note, I also wrote here about how, contrary to what Unity says, Unity 2022.x or earlier and Unity 2021.x LTS or earlier are not affected by the Unity Runtime Fee as far as I can tell, based on Unity's own Terms of Service. It covers how a previous version of the Terms of Service allowed you to stick with that version of the terms as long as you don't upgrade Unity, and why it doesn't matter that Unity removed that clause in a later version, since the older terms did not give them the right to change the terms like that.


Behind the design of Eye of the Temple, out on Quest 2

My VR adventure Eye of the Temple, which I've been working on since 2016, has landed on the Meta Quest 2! It was released last week, on April 27th.

Get Eye of the Temple for Quest 2 on the Oculus Store

Since its original SteamVR release in October 2021, many people have asked for it to be brought to the Quest 2 as a native app, so I'm happy it's finally a reality. The Quest 2 version was co-developed with Salmi Games, and it took all our combined and complementary skills to bring the game to life at the target framerate on the Quest 2's mobile hardware.

We also made this new trailer:

The game got a fantastic reception! UploadVR called it "A Triumphant Room-Scale Adventure" and labeled it an essential VR experience, and it got great video coverage from Beardo Benjo, BMFVR and many others. It also received great user reviews and a high review score on the Oculus Store.

Behind the design

To mark the Quest 2 launch of Eye of the Temple, I've written no fewer than three articles - published elsewhere - about different aspects of its design.

The Origins and Inspirations of ‘Eye of the Temple’

To celebrate the launch, I spoke with Meta about the origins of Eye of the Temple and the wide variety of inspirations (from classic platformers to Ico and Indiana Jones) behind the game.

Read the article on the Meta Quest blog

Approachable and Immersive Design in ‘Eye of the Temple’

Immersion can mean many things, and VR has lifted the ceiling for immersion in games higher than ever before. In this article, I’ll detail how a design dogma of “embodied immersion” in Eye of the Temple goes hand-in-hand with making the game highly approachable—even for people who don’t normally play video games.

Read the article on the Oculus Developer blog

The Hidden Design Behind the Ingenious Room-Scale Gameplay in ‘Eye of the Temple’

Eye of the Temple is one of the rare VR games that focuses not just on pure room-scale movement, but on dynamic room-scale movement. The result is a uniquely immersive experience that required some clever design behind the scenes to make it all work. This guest article explains the approach.

Read the guest article on Road to VR

Other bits and pieces

  • Here's a reddit post talking about how we had to completely change the water effect implementation on Quest 2 in order to keep the same aesthetic on the mobile hardware.
  • Here's a short YouTube video where I explain how the room-scale platforming gameplay works behind the scenes. It covers some of the same ground as the Road to VR guest article mentioned above, but in less detail.
  • Here's an FAQ with answers to common questions we've received since the Quest 2 release.