
Procedural creature progress 2021–2024

For my game The Big Forest I want to have creatures that are both procedurally generated and animated, which, as expected, is quite a research challenge.

As mentioned in my 2024 retrospective, I spent the last six months of 2024 working on this — three months on procedural model generation and three months on procedural animation. My work on the creatures actually started earlier though. According to my commit history, I started in 2021 after shipping Eye of the Temple for PCVR, though my work on it prior to 2024 was sporadic.

Though the creatures are still very far from where they need to be, I'll write a bit here about my progress so far.

The goal

I need lots of forest creatures for the gameplay of The Big Forest, some of which will revolve around identifying specific creatures to use for various unique purposes. I prototyped the gameplay using simple sprites for creatures, but the final game requires creatures that are fully 3D and fit well within the game's forest terrain.

creatures from prototype → replace with 3D procedural creatures → put into procedural terrain

I've seen a fair number of projects with procedural creatures. It's not too hard to create a simple torso with legs, randomize the configuration, and animate movement by moving the feet between footsteps along straight lines or simple arcs.

This works fine for bugs, spiders, lizards and other crawly critters, as well as for aliens and alien-like fantasy creatures. The project Critter Crosser is doing very cool things with this. While it features mammals too, I'm not sure they'd translate well outside the game's intentionally low-res, quirky aesthetic.

If we also consider games where only the animation is procedural, Rain World is another example where this works great for its low-definition but highly dynamic alien-like creatures. Spore is a classic example too, though its creatures often end up looking both alien and goofy.

For The Big Forest though, I want creatures that feel like they truly belong in a forest, with movement and design reminiscent of quadruped mammals like foxes, bears, lynx, squirrels, and deer. The way mammals look and move is far too distinct to simply "wing it" — at least in my game's aesthetic. Achieving realistic mammalian creatures requires thorough study, so although I also plan to include non-mammal creatures, I’m focusing primarily on mammals for now.

Procedural generation of creatures

The basic problem is to generate forest creatures with plausible anatomy from a small set of parameters, and to ensure that:

  1. The parameters are meaningful so I can use them to get the results I want.
  2. Any random values for the parameters will always create valid creatures.

My main challenge is identifying what constitutes meaningful parameters. This is something I have to discover gradually, starting with a large number of low-level parameters based on minimal assumptions and eventually narrowing down to a smaller set of high-level parameters as I refine the logic.

From the beginning, I decided to focus on basic proportions for the foreseeable future, without worrying about more subtle shape details. For this reason I limited the generated meshes to extruded rectangles. I had to start somewhere, and here's the glorious first mesh:

A very boxy creature.
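
The idea of an extruded rectangle is simple enough to sketch in a few lines. Here's a toy version (ignoring per-segment rotations, which the real generator needs, and producing only vertex rings, not faces):

```python
import numpy as np

def extrude_rectangle(path: np.ndarray, width: float, height: float) -> np.ndarray:
    # One ring of four corner vertices per path point; a real version
    # would also orient each ring along the bone and build quad faces.
    hw, hh = width / 2, height / 2
    corners = np.array([[-hw, -hh, 0.0], [hw, -hh, 0.0],
                        [hw, hh, 0.0], [-hw, hh, 0.0]])
    return np.array([p + corners for p in path]).reshape(-1, 3)

# Three segments along the z-axis, e.g. a crude "torso"
rings = extrude_rectangle(np.array([[0.0, 0, 0], [0, 0, 1.0], [0, 0, 2.2]]),
                          width=0.5, height=0.7)
```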

Later I specified the torso, each leg, each foot, the neck, head, jaw, tail, and each ear as multi-segmented extruded rectangles. I found this approach easily sufficient for capturing the likeness of different animals. By the end of 2023, I had produced these three creatures and the ability to interpolate between them:

Simple 3D models of a bull elk, a coyote and a cat.

This was based on very granular parameters, essentially a list of bones with alignment and thickness data. Each bone's length, rotation values, and skin distances from the bone in various directions were input parameters, totaling 503 parameters.
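
To give a sense of how the parameters add up, here's a hypothetical sketch of such a low-level parameter layout; the names and fields are my own illustration, not the actual data structure:

```python
from dataclasses import dataclass

@dataclass
class BoneParams:
    length: float      # bone length
    rotation: tuple    # alignment, e.g. Euler angles (x, y, z)
    thickness: dict    # skin distance per direction, e.g. {"up": 0.1, ...}

def parameter_count(bones: list) -> int:
    # One scalar for length, three for rotation, one per thickness entry.
    # Across a full skeleton this adds up quickly (503 in my case).
    return sum(1 + 3 + len(b.thickness) for b in bones)

tail_bone = BoneParams(0.3, (0.0, 15.0, 0.0),
                       {"up": 0.05, "down": 0.05, "left": 0.04, "right": 0.04})
```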

Creating creatures from scratch using these parameters was impractical, so I developed a tool to extract data from skinned meshes. The tool identified which vertices belonged to which bones and derived the thickness values from that. However, it was error-prone, partly due to inconsistent skinning of the reference models. For example, in the cat model, one of the tail bones wasn’t used in the skinning, which confused my script. This is why part of the cat’s tail is missing in the image above.

I tried to implement workarounds for such edge cases, and various ways to manually guide the script towards better results for each creature, but each new reference 3D model seemed to come with new types of quirks to handle. After resuming work in 2024, I had these seven creatures, with their reference 3D models shown below them:

Seven generated creatures. Five of them have hand-crafted reference 3D models below.

(I lost two of my original reference 3D models — the coyote and bull elk — because they were in Maya format. Since I don’t have Maya installed, when a project reimport was triggered, Unity couldn’t import the models, and they vanished. Since it's not standard practice to commit Unity’s imported data (the Library folder) to source control, I couldn’t recover them.)

Anyway, so far all my efforts had been focused on representing diverse creatures through a uniform structure that could be interpolated, but I hadn't yet made progress on defining higher-level parameters.

Failed attempts at automatic parametrization

Once I shifted my focus to defining higher-level parameters, the first thing I tried out was using Principal Component Analysis (PCA) (Wikipedia) to automatically identify these parameters based on my seven example animals. In technical terms, it worked. The PCA algorithm created a parametrization of seven parameters, and here's a video where I manipulate them one at a time:

As I suspected though, the results weren't useful because each parameter influenced many traits at once, with lots of overlap, making the parameters not meaningful.

Why did it create seven parameters? Well, when you have X examples, it's easy to create a parametrization with X parameters that can represent all of them. You essentially assign parameter 1 to represent 'contribution from example 1', parameter 2 to represent 'contribution from example 2', and so on. This is essentially a weighted average or interpolation. While this isn't exactly what Principal Component Analysis does, for my purposes it was just as unhelpful. Manipulating the parameters still felt more like interpolating between examples than controlling specific traits.
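For reference, the core of such a PCA setup is only a few lines. Here's a minimal sketch with stand-in data, not my actual code:

```python
import numpy as np
from sklearn.decomposition import PCA

# Each row: one example creature's low-level parameter vector.
examples = np.random.rand(7, 503)      # stand-in for my 7 creatures

pca = PCA().fit(examples)

# A "slider" moves along one principal component, but each component
# touches many low-level parameters at once, which is exactly the
# problem described above.
sliders = np.zeros(pca.n_components_)
sliders[0] = 1.5
creature = examples.mean(axis=0) + sliders @ pca.components_
```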

When I talk about "meaningful parameters", I mean parameters I can understand — something I could use in a "character creator" to easily adjust the traits of a creature to achieve the results I want. Say, parameters such as:

  • Bulkiness
  • Tallness (relative to back spine length)
  • Head length (relative to back spine length)
  • How pointy the ears are
  • Thickness of the tail at the base (relative to torso thickness)

However, PCA doesn’t work that way. Each parameter it produces influences many traits at once, making it impossible to control one trait at a time. I encountered the same issue in the academic research project The SMAL Model. While this project is far more sophisticated than what I could do, and is based on a much larger set of example animals, their PCA-based parametric model (which you can try interactively here) suffers from the same problem. Even though it can represent a wide range of animals, I wouldn't know how to adjust the parameters to create a specific animal without excessive trial and error.

I'm also convinced that throwing modern AI at the problem wouldn't work either. Not only would it require far more example models (which I don't have) and AI training expertise (which I have no interest in acquiring); it still wouldn't address the fundamental issue: An automated process can't understand what correlations and parameters are meaningful to a human.

Another problem with automated parameters is that they don't seem to guarantee valid results. When experimenting with my own PCA setup, or with the SMAL Model I linked to, it's easy to come across parameter combinations that produce deformed glitchy models. Part of finding meaningful parameters is figuring out what constitutes meaningful ranges for them. For example, the parameter "Thickness of the tail at the base" might range from 0 (a minimal tail thickness, like a cow's) to 1 (a tail as thick as the torso, like a crocodile's or sauropod's).

This ensures that the tail thickness can't accidentally exceed the torso’s. It also means that changing the torso thickness may affect the tail thickness too. While this technically involves "affecting multiple things at once", it’s done in a way that’s sensible and meaningful to a human (specifically, me). An automated process can't know which of these "multiple things at once" relationships feel meaningful or arbitrary.
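In code, such a meaningful range can be as simple as remapping a normalized parameter relative to another body part. A minimal sketch, with a made-up constant for the lower end:

```python
def lerp(a: float, b: float, t: float) -> float:
    return a + (b - a) * t

def tail_base_thickness(torso_thickness: float, t: float) -> float:
    # t in [0, 1]: 0 = minimal tail (cow-like), 1 = as thick as the
    # torso (crocodile-like). The tail can never exceed the torso,
    # and scaling the torso scales the tail along with it.
    t = min(max(t, 0.0), 1.0)
    minimal = 0.05 * torso_thickness    # made-up floor for illustration
    return lerp(minimal, torso_thickness, t)
```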

Manual parametrization work

Having concluded there was no way around doing a parametrization manually, I began writing a script with high-level parameters, which would produce the values for the low-level parameters (bone alignments and thicknesses) as output.

I made a copy of all my example creatures, so now I had three rows: The original reference models, the extracted creatures (automatically created based on analyzing the skinned meshes of the references) and the new sculpted creatures that I would try to recreate using high-level parameters.

Three rows of creatures.

Initially, the high-level parameters controlled only the torso thickness (both horizontal and vertical) and tapering, while the other features remained as defined by the extracted creatures. Gradually, I expanded the functionality of the high-level parameters, ensuring that the results didn't deviate too much from the extracted models — unless they were closer matches to the reference models.

Why keep the extracted models around at all instead of directly comparing with the original reference models? Well, I was still working with only extruded rectangles, and there's only so much detail that approach can capture compared to the high-definition reference models. The extracted models provided a realistic target to aim for, at least for the time being.

From there, my methodology for moving towards more high-level parameters was, and still is:

  1. Gradually identify correlations between parameters across the example creatures, and encode those into the generation code. The goal is to reduce the number of parameters by combining multiple lower-level parameters into fewer higher-level ones, all while ensuring that all the example creatures can still be generated using these higher-level parameters.
  2. As the parameters get fewer and more high-level, it becomes easier to add more example creatures, which provides more data for further study of correlations.

I repeat these steps as long as rolling random parameter values still doesn't produce sensible results.

Here's a messy work in progress:

To help me spot correlations between parameters, I wrote a tool that visualizes the correlation coefficients between all pairs of parameters, and shows the raw data in the corner when hovering the cursor over a specific pair. (In this video, the tool did not yet display all parameter types, so a lot of parameters are missing.)
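The core computation behind the tool is straightforward. A sketch with stand-in array shapes:

```python
import numpy as np

# One row per example creature, one column per parameter.
data = np.random.rand(7, 503)           # stand-in shapes

# Pearson correlation coefficients between all pairs of parameters.
corr = np.corrcoef(data, rowvar=False)

# The value shown when hovering over the pair (i, j) in the grid.
i, j = 3, 17
print(f"corr(p{i}, p{j}) = {corr[i, j]:+.2f}")
```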

Focus on joint placement within the body

In 2024, most of my focus on parametrization revolved around the sensible placement of joints within creatures. For instance, in all creatures with knees, the knee joint is positioned closer to the front of the leg than the back. While the knee placement was fairly straightforward, determining the placement of hip, shoulder, and neck joints across creatures with vastly different proportions proved significantly more challenging.

Many of my reference 3D models looked sensible externally, but had questionable and inconsistent rigging (placement of joints) from an anatomical perspective.

Comparison of anatomy of cat 3D model with anatomical picture of a cat's skeleton.

Anatomical reference images of bones and joints in various animals were also frequently inconsistent. I could find detailed references from multiple angles for dogs, cats and horses, but not much else. For instance, depictions of crocodile anatomy varied greatly: Some showed the spine in the neck centered, while others placed it near the back of the neck.

Comparison of four different reference images showing the skeleton within a silhouette of a crocodile. The spine in the neck is inconsistent across them.

All of this uncertainty meant I was constantly second-guessing joint placement — both when contemplating how the procedural generation should work and when adding new example creatures. I wanted to solve this once and for all, so I could stop worrying about it.

Solving the joint placement would also simplify the process of adding additional example creatures, since I could just focus on making them look right "from the outside" without worrying about joint placement. Eventually, I did largely solve it. By this point, I had established 106 high-level parameters that controlled all 503 low-level parameters.

Speeding up creation of additional example creatures

Once joint placement was mostly automated, I came up with the idea of accelerating the creation of example creatures using Gradient Descent (Wikipedia). My goal was to implement a Gradient Descent-based tool that could automatically adjust creature parameters to make the creature match the shape of a reference 3D model.

To my surprise, the approach actually worked. In the video below, the tool I created adjusts the 106 parameters of a creature to align its shape with the silhouettes of a giraffe:

The tool works by capturing silhouettes from multiple angles of the reference model (once) and of the procedural model (at each step of the iterative Gradient Descent process). It calculates how different the procedural silhouettes are from the reference silhouettes, using this difference as the penalty function for the Gradient Descent.

To measure the difference between two silhouettes, the method creates a Signed Distance Field (SDF) from each of the silhouettes being compared. To speed up the process, I made my SDF-generator of choice Burst-compatible (available here). For each pixel near the silhouette border (pixels with distance values close to zero), the process retrieves the corresponding pixel's distance value from the other SDF to determine how far away the other silhouette border is at that point. The penalty function sums these distances across all tested pixels in all silhouette pairs, yielding the overall penalty.
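
Here's a rough sketch of that penalty, assuming the SDFs have already been generated as 2D arrays. Measuring in both directions is my own assumption here:

```python
import numpy as np

def silhouette_penalty(sdf_a: np.ndarray, sdf_b: np.ndarray,
                       border_eps: float = 1.0) -> float:
    # Pixels whose distance value is close to zero lie on the border
    # of silhouette A; |sdf_b| at those pixels tells how far away the
    # border of silhouette B is.
    border = np.abs(sdf_a) < border_eps
    return float(np.abs(sdf_b[border]).sum())

def total_penalty(reference_sdfs, procedural_sdfs) -> float:
    # Sum over all camera angles, measuring in both directions so
    # neither silhouette can hide inside the other.
    return sum(silhouette_penalty(a, b) + silhouette_penalty(b, a)
               for a, b in zip(reference_sdfs, procedural_sdfs))
```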

This explains why, in the video, the legs are the first feature to change: growing them longer reduces the overall distance penalty the most. After that, extending the neck has the greatest impact.

Notably, the process doesn't get the ears of the giraffe right. The reference model has both ears and horns, while my generator can't create horns yet. So the Gradient Descent process, which aims to match the silhouettes as closely as possible, made the ears large and round so they effectively served double duty as both ears and horns. I later worked around this by hiding the horns of the reference model.

I also experimented with a standard optimization method called the "Adam" optimizer, but the results were not good. Ultimately, the automated process wasn't perfect, but it complemented my manual tweaks to speed up the creation of example creatures to some extent.

By this point, I had spent three months in 2024 working on procedural creature generation and had developed eleven example creatures covering a variety of shapes. I eventually scrapped the "extracted creatures" because the combination of high-level parametrization and Gradient Descent tooling made them unnecessary as an intermediate step.

Eleven creatures.

However, I was not even close to a sufficient high-level parametrization yet. Feeling the need for a change of pace, I decided to shift my focus to the procedural animation of the creatures.

Intermission

As I was writing this, I realized that while I’m convinced I haven’t yet achieved the goal of ensuring that "any random values for the parameters will always create valid creatures", I hadn’t actually put it to the test. So, I quickly wrote a script to assign random values to all the high-level parameters. These are the results:

While a few of the results look cool, most are not close to meeting my standards, as I expected. Still, this randomizer could prove useful going forward. It might help me identify which aspects of the generation are still insufficient and why, guiding my further refinement of the procedural generation.

Anyway, back to what happened in 2024.

Procedural animation of creatures

When it comes to procedural animation, I have the advantage that I wrote my Master's Thesis in 2009 about Automated Semi-Procedural Animation for Character Locomotion, accompanied by an implementation called the Locomotion System. That was based on modifying hand-crafted animations to adapt to arbitrary paths and uneven terrain.

Even before that, I implemented fully procedural (pre-rendered) animations as far back as 2002, when I was 18.

In the summer of 2022, I tried to essentially recreate that, but this time in real-time and interactive. As a starting point, I used my 2009 Locomotion System, stripping out the parts related to traditional animation clips. Instead, I programmed the feet to simply move along basic arcs from one footstep position to the next. To test it, I manually created some programmer-art "creatures" with various leg configurations.
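
The arcs themselves can be as simple as straight-line travel plus a vertical lift curve. A minimal sketch (y-up, with a sine lift as a stand-in shape, not necessarily the curve I used):

```python
import math

def foot_position(prev_step, next_step, t: float, lift: float):
    # t goes from 0 (lift-off at the previous footstep) to 1
    # (touch-down at the next). Positions are (x, y, z) with y up.
    pos = [p + (n - p) * t for p, n in zip(prev_step, next_step)]
    pos[1] += lift * math.sin(math.pi * t)   # simple vertical arc
    return pos
```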

In 2024, I resumed this work, now applying it to my procedurally generated creature models.

The results looked somewhat nice but were rather stiff, and the approach only works for small steps. As I've touched on before, animating procedural, mammal-like creatures to look natural is a tough challenge:

  • Quadruped mammals like cats, dogs, and horses move their limbs in more complex ways. Or at least, we're so familiar with their movements that any inaccuracies make the movements look weird to us.
  • Fast gaits, such as galloping, are more complicated than walking.
  • The animation must control not only the legs, but also the spine, tail, neck, etc.
  • Since the creatures are generated at runtime, the animation must work out of the box without any manual tweaks for individual creatures.

That's a tall order even with my experience, but hey, I like a good challenge.

My approach to procedural animation is purely kinematic, relying on forward kinematics (FK) and inverse kinematics (IK). This means I write code to directly control the motion of bodies and limbs without considering the forces that drive their movement. In other words, there’s no physics simulation involved and no training or evolutionary algorithms (like this one).

A training-based approach isn't viable for creatures generated at runtime, and frankly, I have no interest in pursuing it. The only way the animation interacts with the game engine’s physics system is by using ray casts to determine where footsteps should be placed on the ground.

Hilarity ensues

As an aside, one of the fun parts of working on procedural animation is that works in progress often look goofy. When I first applied the procedural animation to my procedurally generated creatures, I was impatient and ran it before implementing an interface to specify the timing of the different legs relative to each other. As I wrote on social media, "It might be hard to believe, but this animation is actually 100% procedural."

The system itself already supported leg timing, as it was inherited from the Locomotion System, where timing was automatically set based on analyzing animation clips. However, in the modified fully procedural version, I hadn't yet implemented a way to manually input the timing data. Once I did, things looked a lot more sensible, as shown in the previous video.
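
The timing data itself boils down to a phase offset per leg within the gait cycle. Here's an illustrative sketch with textbook-style values, not my actual numbers:

```python
# Fraction of the cycle at which each leg starts its step.
# LF/RF = left/right front, LH/RH = left/right hind.
GAIT_PHASES = {
    "walk": {"LH": 0.0, "LF": 0.25, "RH": 0.5, "RF": 0.75},
    "trot": {"LF": 0.0, "RH": 0.0, "RF": 0.5, "LH": 0.5},
}

def leg_phase(gait: str, leg: str, cycle_time: float) -> float:
    # cycle_time is the normalized gait cycle time in [0, 1)
    return (cycle_time + GAIT_PHASES[gait][leg]) % 1.0
```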

Other people's comments about the absurd animations sometimes inspire me to create more silly animations, just for fun. "Somebody commented that the procedural spider and the procedural dog are destined to fight, but actually they are friends, they are best pals and like to go on adventures together."

Around this time, I wanted to better evaluate my procedural animation by directly comparing it to hand-crafted animation. To do this, I applied the procedural animation to 3D models I had purchased from the Asset Store, comparing it to the included animation clips that came with those models.

Continuing the theme of goofiness, my first attempt at this comparison had a bug so hilariously absurd that it became my most viral post ever on Twitter, Bluesky and Mastodon. (The added music adds a lot to this one, so consider playing with sound on!)

People would inevitably suggest, and sometimes even plead, that I bring this absurd silliness into the game I’m making, for fun and profit. While I understand the sentiment, the reality is that my vision for The Big Forest is not that kind of game. There's definitely room for some light-hearted moments, but that’s not the primary focus. For reference, think of a Ghibli movie — there’s a certain kind of silliness that would fit well there, but it would need to align with the overall tone.

Incremental progress and study

I started studying my procedural animation alongside the handcrafted animations to better understand the subtle but important differences. Here's the comparison tool I made again, this time without the hilarious bug.

Even without the bug, it still looks very rough. At higher speeds, it becomes clear that the approach of simply moving the feet from one footstep position to the next doesn't work.

In real life (and in the reference animations), at high speeds like galloping, the feet don't stay in contact with the ground for most of their backward trajectories. They only touch the ground for short durations. My procedural animation didn't account for this yet. Instead, the hips got pulled down to stay within reach of the feet, and that's what caused the torsos of the procedurally animated creatures to drag along the ground.

Once I tried to account for this, things improved slightly — at least in that the torsos no longer dragged along the ground.

To make better-informed decisions about how far up the feet should lift, how much time they should spend touching the ground, and similar animation aspects, I developed a tool to plot various properties of the reference animations. For example, the plot below shows that as the stride distance (relative to the length of the leg) increases, the proportion of time the foot is in the air also increases. The white dotted curve represents my own equation that I attempted to fit to the data.

A scatter plot with colored dots and a white curve approximating them.

Below is another plot that shows the signed horizontal distance between the hip and the foot (actually, the footstep position) along the horizontal axis, and the foot's lift on the vertical axis. In the center, where the foot is directly below the hip, the lift is generally zero. Notably, the plot includes data from gaits at different speeds (walking, trotting, galloping), but the distance at which the foot lifts off the ground is fairly consistent across those speeds. So while higher speeds are associated with longer step lengths, the "distance" (relative to the creature) that a foot is in contact with the ground is not proportional to the step length. Instead, it's closer to a constant value relative to the leg length.

A scatter plot with colored dots and a white curve approximating them.
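
Put another way (this is my reading of the plot, with a made-up constant): since the contact distance is roughly fixed relative to leg length, the fraction of the cycle a foot spends on the ground shrinks as strides get longer:

```python
def ground_contact_fraction(stride_length: float, leg_length: float,
                            contact_factor: float = 0.8) -> float:
    # contact_factor is illustrative: the distance (relative to the
    # creature) a foot stays planted, as a fraction of leg length.
    contact_distance = contact_factor * leg_length
    return min(1.0, contact_distance / max(stride_length, 1e-6))
```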

I made many plots with this tool, visualizing data from the reference animations in all kinds of ways. Some of the plots revealed clear trends, like the two above. Others resulted in a jumble of unrelated curves or dots that I couldn't use for anything. In this way, my process was (and is) very much a classic research approach: Formulating hypotheses, testing them, discarding those that don't pan out, and building upon the ones that do.

Inverse kinematics overhaul

I had noticed that many animals kind of bend or curl their feet — and sometimes the entire lower part of their front legs — when lifting them. However, my procedural animation didn't capture this behavior at all. I could hack around this by dynamically adjusting the foot rotations over the walk cycle, but it often resulted in unnatural poses.

Eventually, I concluded that I needed to revamp the inverse kinematics (IK) algorithm I was using, with focus on better handling foot roll and bending scenarios. My old Locomotion System employed a two-pass IK approach, where the foot rotation was given as input to the first IK pass. Based on the orientation of the lower leg found by the IK, the foot rotation would be adjusted — rotated around either the heel or toe — to ensure a more natural ankle angle, followed by a second IK pass to make the leg match the new ankle position. This two-pass approach worked all right for the Locomotion System, which was applied on top of hand-crafted animation. However, I found it insufficient for the fully procedural animation I was working on now.
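
To make the two-pass idea concrete, here's a heavily simplified 2D sketch: a two-bone leg with the hip at the origin, a foot segment from ankle to toe, and an ankle-angle clamp between the two passes. The real version pivots around either heel or toe; this one only pivots around the toe, and all geometry and limits are illustrative, not the Locomotion System's actual code:

```python
import math

def solve_leg(ankle, l1, l2):
    """Analytic two-bone IK in 2D; returns thigh and shin angles."""
    x, y = ankle
    d = max(min(math.hypot(x, y), l1 + l2 - 1e-6), abs(l1 - l2) + 1e-6)
    # Law of cosines: angle at the hip between thigh and hip-to-ankle line
    cos_a = (l1 * l1 + d * d - l2 * l2) / (2 * l1 * d)
    a = math.acos(max(-1.0, min(1.0, cos_a)))
    thigh = math.atan2(y, x) + a          # pick one of the two knee branches
    knee = (l1 * math.cos(thigh), l1 * math.sin(thigh))
    shin = math.atan2(y - knee[1], x - knee[0])
    return thigh, shin

def two_pass_leg_ik(toe, foot_angle, l1, l2, foot_len,
                    max_ankle_bend=math.radians(60)):
    # Pass 1: foot rotation is a fixed input; the ankle position follows
    # from the planted toe and the given foot angle.
    ankle = (toe[0] - foot_len * math.cos(foot_angle),
             toe[1] - foot_len * math.sin(foot_angle))
    _, shin = solve_leg(ankle, l1, l2)

    # Adjust: clamp the ankle angle relative to the solved lower leg,
    # pivoting the foot around the toe (which stays planted).
    bend = max(-max_ankle_bend, min(max_ankle_bend, foot_angle - shin))
    new_foot_angle = shin + bend
    new_ankle = (toe[0] - foot_len * math.cos(new_foot_angle),
                 toe[1] - foot_len * math.sin(new_foot_angle))

    # Pass 2: re-solve the leg so it matches the adjusted ankle position.
    return solve_leg(new_ankle, l1, l2), new_foot_angle
```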

In principle, this two-pass approach could be changed to run iteratively, rather than just twice. However, this would be computationally expensive, since the IK algorithm itself is already iterative. Instead, I implemented a new IK algorithm where the foot rotation is not a static input, but is controlled by the IK itself.

Having the IK handle the foot rotation is a bit tricky, as it must behave quite differently depending on how much weight is on the foot, ranging from none to full weight.

I made significant progress with this approach, although there's a tricky issue: there are edge cases where multiple solutions can satisfy the given constraints. This makes the leg poses, which are stateless, sometimes snap from one configuration to another based on even tiny changes in the input positions. I have some ideas on how to address this, but I haven't tested them yet.

After incorporating the new IK system and adding logic to make the feet bend when lifted, my results looked like this at the end of 2024:

While it's still a bit glitchy and far from perfect, the bending of the feet and legs is at least a step in the right direction.

And that's how far I got so far

Both the procedural model generation and the procedural animation still have a long way to go after the roughly three months I've spent on each, and that can feel a bit demotivating. On the other hand, I've been making steady progress, even if it's slow. Writing this post has actually helped me realize just how much I've accomplished after all.

That said, I feel it's time for a break from the creatures. When I return to them later, I'll hopefully do so with renewed energy.

I wish I could have wrapped up this post with a satisfying milestone or a neat conclusion, but there was already so much to cover, and I didn't want to delay this write-up any further. I also think there's value in showing the messiness of creative processes and research. Let's see where I'm at when I write about the procedural creatures next time!
