Rune Skovbo Johansen
Creative Programmer & Designer

The quest for automatic smooth edges for 3d models

I'm currently learning simple 3D modeling so I can make some models for my game. I'm using Blender for modeling.

The models I need to make are fairly simple shapes depicting man-made objects made of stone and metal (though until I get it textured it will look more like plastic). There are a lot of flat surfaces.

The end result I want is these simple shapes with flat surfaces - and smooth edges. In the real world, almost no objects have completely sharp edges, so 3d models without smooth edges tend to look like they're made of paper, like this: What I want instead is the same shapes but with smooth edges, like this: Here, some edges are very rounded, while others have just a little bit of smoothness in order to not look like paper. No edges here are actually completely sharp. The two images above show the end result I wanted. It turns out it was much harder to get there than I had expected! Here's the journey of how I got there.

How are smooth edges normally obtained? By a variety of methods. The Blender documentation page on the subject is a bit confusing, talking about many different things without clear separation and with inconsistent use of images.

Edge loops plus subdivision surface modifier

From my research I have gathered that a typical approach is to add edge loops near edges that should be smooth, and then use a Subdivision Surface modifier on the object. This is also mentioned on the documentation page above. This has several problems.

First of all, subdivision creates a lot of polygons which is not great for game use.

Second, adding edge loops is a manual process, and I'm looking for a fully automatic solution. It's important for me to have quick iteration times - to be able to fundamentally change the shape and then shortly after see the updated end result inside the game. For this reason I strongly prefer a non-destructive editing workflow. This means that the parts that make up the model are kept as separate pieces and not "baked" into one model such that they can no longer be separated or manipulated individually.

Adding edge loops means adding a lot of complexity to the model just for the sake of getting smooth edges, which then makes the shape more cumbersome to make major changes to afterwards. Additionally, edge loops can't be added around edges resulting from procedures such as boolean subtraction (carving one object out of another) and similar, at least not without baking/applying the procedure, which is a destructive editing operation.

Edge loops and subdivision is not the way to go then.

Bevel modifier

Some posts on the web suggest using a Bevel modifier on the object. This modifier can automatically add bevels of a specified thickness for all edges (or selectively if desired). The Bevel modifier in Blender does what I want in the sense that it's fully automatic and creates sensible geometry without superfluous polygons. However, by itself the bevel either requires a lot of segments, which is not efficient for use in games (I'd want one to two segments only to keep the poly count low), or, when fewer segments are used, it creates a segmented look rather than smooth edges, as can be seen below.

Baking high-poly details into normal maps of low-poly object

Another common approach, especially for games, is to create both a high-poly and a low-poly version of the object. The high-poly one can have all the detail you want, so for example a bevel effect with tons of segments. The low-poly one is kept simple but has the appearance from the high-poly one baked into its normal maps.

This is of course a proven approach for game use, but it seems overly complicated to me for the simple things I want to achieve. Though I haven't tried it out in practice, I suspect it doesn't play well with a non-destructive workflow, and that it adds a lot of overhead and thus reduces iteration time.

Bevel and smooth shading

Going back to the bevel approach, what I really want is the geometry created by the Bevel modifier but with smooth shading. The problem is that smooth shading also makes the original flat surfaces appear curved.

Here is my model with bevel and smooth shading. The edges are smooth sure enough, but all the surfaces that were supposed to be flat are curvy too. Smooth shading works by pretending the surface at each point is facing in a different direction than it actually does. For a given polygon, the faked direction is defined at each of its corners in the form of a normal. A normal is a vector that points out perpendicular to the surface. Only, we can modify normals to point in other directions for our faking purposes.

The way that smooth shading typically calculates normals makes all the surfaces appear curved. (There is typically a way to selectively make some surfaces flat, but then they will have sharp edges too.) The diagram below shows the normals for flat shading, for typical smooth shading, and for a third way that is what I would need for my smooth edges. So how can the third way be achieved? I found a post that asks essentially the same question. The answers there don't really help. One incorrectly concludes that Blender's Auto Smooth feature gives the desired result - it actually doesn't, but the lighting in the posted image is too poor to make that obvious. The other is the usual edge loop suggestion.

When I posted a question myself requesting clarification on the issue, I was pointed to a Blender add-on called Blend4Web. It has a Normal Editing feature with a Face button that seems to be able to align the normals in the desired way - however, as a manual workflow, not an automated process. I also found other forum threads discussing the technique.

Using a better smoothing technique

At this point I got the impression there was no way to get the smooth edges I wanted in an automated way inside of Blender, at least without changing the source code or writing my own add-on. Instead I considered an alternative strategy: Since I ultimately use the models in Unity, maybe I could fix the issue there instead.

In Unity I have no way of knowing which polygons are part of bevels and which ones are part of the original surfaces. But it's possible to take advantage of the fact that bevel polygons are usually much smaller.

There is a common technique for calculating averaged smooth normals called face weighted normals or area weighted normals (explained here): the contributing normals are weighted according to the surface areas of the faces (polygons) they belong to. This means that the curvature will be distributed mostly on small polygons, while larger polygons will be more flat (but still slightly curved).

From the discussions I've seen, there is general consensus that this usually produces better results than a simple average (here's one random thread about it). It sounds like Maya has used this technique by default since at least 2014, but smooth shading in Blender doesn't use it or support it (even though people discussed it and made custom add-ons for it back in 2008), nor does the model importer in Unity (when it's set to recalculate normals).

Custom smoothing in Unity AssetPostprocessor

In Unity it's possible to write AssetPostprocessors that can modify imported objects as part of the import process. This can also be used for modifying an imported mesh. I figured I could use this to calculate the smooth normals in an alternative way that produces the results I want.

I started by implementing just area weighted normals. This technique still makes the large faces slightly curved. Here is the result. Honestly, the slight curvature on the large faces can be hard to spot here. Still, I figured I could improve upon it.
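To illustrate the idea, here's a minimal sketch of such a postprocessor. The class and method names are my own invention, and it's simplified compared to the actual code in the Gist linked further below:

using System.Collections.Generic;
using UnityEditor;
using UnityEngine;

public class AreaWeightedNormalsPostprocessor : AssetPostprocessor
{
    void OnPostprocessModel(GameObject gameObject)
    {
        foreach (MeshFilter filter in gameObject.GetComponentsInChildren<MeshFilter>())
            RecalculateNormals(filter.sharedMesh);
    }

    static void RecalculateNormals(Mesh mesh)
    {
        Vector3[] vertices = mesh.vertices;
        int[] triangles = mesh.triangles;
        Vector3[] normals = new Vector3[vertices.Length];

        // The cross product of two triangle edges points along the face
        // normal and has a length of twice the triangle's area, so summing
        // raw cross products weights each face by its area for free.
        for (int i = 0; i < triangles.Length; i += 3)
        {
            int a = triangles[i], b = triangles[i + 1], c = triangles[i + 2];
            Vector3 weighted = Vector3.Cross(
                vertices[b] - vertices[a], vertices[c] - vertices[a]);
            normals[a] += weighted;
            normals[b] += weighted;
            normals[c] += weighted;
        }

        // Imported meshes often split vertices along hard edges. To smooth
        // across those splits, also sum the contributions of all vertices
        // that share a position.
        var sums = new Dictionary<Vector3, Vector3>();
        for (int i = 0; i < vertices.Length; i++)
        {
            Vector3 sum;
            sums.TryGetValue(vertices[i], out sum);
            sums[vertices[i]] = sum + normals[i];
        }
        for (int i = 0; i < vertices.Length; i++)
            normals[i] = sums[vertices[i]].normalized;

        mesh.normals = normals;
    }
}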

I also implemented a feature to let weights smaller than a certain threshold be ignored. For each averaged normal, all the contributing normals are collected in a set, and the largest weight is noted. Any weight smaller than a certain percentage of the largest weight can then be ignored and not included in the average. For my geometry, this worked very well and removed the remaining curvature from the large faces. Here is the final result again. The code is available here as a GitHub Gist. Part of the code is derived from code by Charis Marangos, aka Zoodinger.
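The filtering step itself can be sketched in isolation like this - a hypothetical helper, simplified compared to what the Gist actually does:

using System.Collections.Generic;
using UnityEngine;

static class NormalFilter
{
    // Each contribution is a raw (unnormalized) cross product, so its
    // magnitude is its weight: twice the area of the face it came from.
    public static Vector3 FilteredAverage(List<Vector3> contributions, float threshold)
    {
        float largestWeight = 0f;
        foreach (Vector3 n in contributions)
            largestWeight = Mathf.Max(largestWeight, n.magnitude);

        // Ignore contributions whose weight is below the given
        // percentage of the largest weight in the set.
        Vector3 sum = Vector3.zero;
        foreach (Vector3 n in contributions)
            if (n.magnitude >= largestWeight * threshold)
                sum += n;

        return sum.normalized;
    }
}

With a threshold of 0, this reduces to plain area weighting.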

Future perspectives

The technique of aligning smooth normals on beveled models with the original (pre-bevel) faces seems to be well understood when you dig a bit, but poorly supported in software. I hope Blender and other 3D software will one day have a "smooth" option for their Bevel modifier which keeps the outer-most normals undisturbed.

A simpler prospect is adding support for area weighted normals. This produces almost as good a result for smooth edges, and is a much more widely applicable technique, not specific to bevels or smooth edges at all. That Blender, Unity and other 3D software that support calculating smooth normals don't include this as an option is even more mind-boggling, particularly given how trivial it is to implement. Luckily there are workarounds for it in the form of AssetPostprocessors for Unity and custom add-ons for Blender.

If you do 3D modeling, how do you normally handle smooth edges? Are you happy with the workflows? Do some 3D software have great (automatic!) support for it out of the box?


Spin-off game: Whip Arena

I've spent the past few days at Exile (like a game jam but not a jam this year). I've envisioned having a whip in Eye of the Temple for a long time, and at Exile I began developing this whip mechanic.

In the main game, the whip is meant to be just one element in the gameplay, aiding in puzzles, like being able to grab and switch levers from a distance, and grabbing objects to pull towards you. However, for now I started out making a little self-contained game based just around the whip, so that I could focus on getting the feel right first.

At the end of Exile I had "whipped up" a little game I might call "Eye of the Temple - Whip Arena". Here's a video of the gameplay:



Implementing the whip physics was rather tricky. I've ended up with something that doesn't work quite like a real-world whip - you can't really make it crack in mid-air - but it feels very responsive in its own way. The sound and haptics are based directly on the simulation, and I found it quite satisfying to use.

Now I'm wondering if I should take a little break from developing the full Eye of the Temple game and try to get this little arena game (which is much smaller in scope) finished and released first. I wonder if it's something people might be interested in? It definitely got positive reactions and feedback at Exile.

I'm thinking it would work well as an infinite game with high scores. There's no fail condition yet though - I'm trying to think what might work well for that. I probably also need to implement some kind of bonuses and multipliers in the scoring.

As a side note - after spending several days prototyping and implementing this whip mechanic at Exile, I ended up with quite sore shoulders from whipping so much. ;)

If you have a Vive and would like to be an early tester of Whip Arena, let me know.

Announcing Eye of the Temple

I've been working on a VR game for the past months. It's called Eye of the Temple and it's a Vive game quite unlike any other.

Here's a pre-alpha trailer for it!



That's the first time I've made a trailer by the way. Quite challenging but also fun! It's got some placeholder models in it and it's made from playtest footage rather than clips made specifically for this trailer, but I tried to make the best of what I got.

Eye of the Temple is currently at an early stage in development.

Speaking of placeholder models, I'm just beginning now to look into working with contractors to have some nice art created for the game. If you know of any skilled concept artists or 3d modelers who'd like to design ancient contraptions for a game like this, let me know!

I previously worked on the game jam game Chrysalis Pyramid. Eye of the Temple is based on a similar core mechanic, but expands upon it in scope and variety, and is completely rewritten and redesigned from scratch. The goal is a commercial release in 2017.

Playtesting so far has been very promising and it's a lot of fun developing the game. Follow the development here and on Twitter, and let me know what you think!

Development of The Cluster put on hold

The Cluster is an exploration platformer game I've been developing in my spare time for some time. You can see all posts about it here.

For the time being, I've put development of The Cluster on hold indefinitely.

Burdens

After having worked on the game for more than ten years (!) I finally came to the conclusion that the overall design and constraints of the game are making it very hard for me to finish it, while also holding me back from trying out a lot of otherwise interesting ideas.
  • The game is procedural and the same every time you play, which makes it super important to guarantee you can't get stuck.
  • There is gravity and no ability to fly, which means you will get stuck unless the algorithms are carefully designed to make it impossible. It has to be more or less mathematically provable.
  • The AI enemies can follow you practically everywhere, which means every obstacle needs to be coded to also be usable by AI.
  • The world is constructed from a rather large-scale grid, which makes things like large ramps and hills impossible (or at the least a bit boring).
  • The third-person perspective requires art and animation resources for the avatar and enemies.
  • The side-scrolling format makes it hard to wordlessly attract the player to interesting things in the far distance, because only the immediate environment can be seen. This creates more of a reliance on maps and narratives to explain the reasoning for seeking out those far away things, and that plays against the procedural nature of the game.
Now, none of those things are impossible to overcome, and I did have a playable demo at the end of 2015 that worked under all of those constraints. But the game is not yet nearly varied enough, and every new feature is very tough to pull off under the burden of all those constraints.

Focusing on my strengths

I've switched focus to working on projects that I hope both play more to my strengths, and should be possible to pull off in a shorter time frame.
  • First-person perspective without jumping, so no fiddly physics.
  • No humanoid characters, which decreases the demands for art, animation and "behaviors".
  • No ambition of a story (some of my favorite games don't have one anyway).
This frees me up to focus more exclusively on making interesting environments, which is what I'm most passionate about.

The good news is that the time has not been entirely wasted. For one, I've learned a LOT. And for another I've developed several pieces of substantive tech that are fully reusable in my future projects.

Speaking of future projects, I'm working on a few already.

One is The Big Forest, though I'm not actively working on it right now. The focus here is to create nice, interesting and varied environments with very simple gameplay on top. I want it to be my own favorite walking simulator. In its current early stage it's already one of the best virtual forest experiences I've had.

The project I'm actively working on is a VR game for Vive that I've tweeted a little bit about and will soon announce properly here on the blog.
Stay tuned!

The Big Forest

I've been continuing my work on the procedural terrain project I wrote about here. I added grass, trees and footstep sounds (crucial!) and it's beginning to really come together as a nice forest to spend some time in.

I made a video of it here. Enjoy!



If you want to learn more about it, have a look at this thread on the procedural generation subreddit.

PuzzleGraph - puzzle state space visualization tool

Working with puzzle design through state space analysis and visualization.

In the beginning of 2014 I was interested in procedurally generating computer game puzzles with typical elements like toggles, gates that can be triggered to open, boxes or boulders that can be moved onto pressure plates, etc. Many games contain elements like these, and I took inspiration in particular from the game Lara Croft and the Guardian of Light.

To better understand these puzzles, and understand what makes a puzzle interesting or boring, I started creating a tool for analyzing and visualizing the state space of the puzzle. In the Procedural Content Generation mailing list I discussed the approach here. I've worked on it on and off since, and while I still don't have an algorithm for procedurally generating the puzzles, the tool itself is interesting in its own right. It's called PuzzleGraph and I've just released it for free.
Get PuzzleGraph at itch.io

You can set up and connect puzzle elements like gates, toggles, pressure plates and boulders, and see the state space of the puzzle visualized, including solution paths, dead ends and fail states.

If you make some puzzles with PuzzleGraph, I'd love to see them!

The best demonstration is this video I made. If you already saw the video, skip down a bit for some new info and announcements.



When I announced PuzzleGraph on Twitter, a lot of people seemed to be excited. Besides re-tweets, I also saw many people forwarding and CC'ing each other, particularly people in academia.

It makes sense. As a practical tool for everyday work, PuzzleGraph may only be useful to few people since it's tied to specific (although fairly common and generic) puzzle mechanics. However, as a fully implemented proof of concept and as a research project, it's showing an interesting way of thinking about and interacting with puzzle design that seems to capture the imagination of a lot of people.

Analysis and visualization of game state space is also a field already researched in academia from somewhat different angles. Research-wise I may not have done anything groundbreaking with PuzzleGraph, but its polished and highly accessible form with no barriers to entry probably makes the idea of working with state space interesting and accessible to a wider audience.

PuzzleGraph is now open source

In order to maximize the usefulness that PuzzleGraph and its approach may provide to others, I've decided to open-source it. This way people can adapt it to include specific mechanics of their choice, or pull out specific parts of it and integrate them into other tools, or just have a look at the code structures and algorithms for reference.

If anyone wants to make improvements that might be a good fit for the original version of the tool, I'll be happy to discuss including them there as well.

The code repository is located at https://bitbucket.org/runevision/puzzlegraph and is licensed under the Mozilla Public License version 2.0.

Puzzle elements in PuzzleGraph

Here's a list of all the puzzle elements in PuzzleGraph version 1.1. Some elements belong to locations (nodes) while others belong to connections (edges). Since the initial version was released and I made the video, I've added a few more features in version 1.1. All the node and edge types now have tooltips to make it more clear what exactly they do, and there is a help screen with an overview like the above. I also added three new puzzle elements: the one-way edge, the blockable hazard edge, and the ball track edge, as described above.

Further development

I'm not yet sure to what extent I'll keep developing PuzzleGraph. I've honestly had very little time to actually use it, and gathering more experience with it through my own usage or other people using it will probably be the focus at first.

So if you use PuzzleGraph - either purposefully or just messing around - please tell me about your experience. And I'd love to see puzzles you make with it, and potentially include them with the distribution if you want.

Apart from that, I guess I'll see if anything comes out of open sourcing it as well.

In the next section I'll go over the techniques used for the state space visualization, which you might use if you want to implement it for your own tools.

The technology behind state space visualization

There are probably many ways of implementing state space analysis and visualization, but I can tell a bit about the approach I used for PuzzleGraph. You can check out the source code for more details. I'll assume a rudimentary familiarity with terminology from graph theory in this section.

Suitability for state space analysis

Before you even begin implementing the state space representation you need to be sure your state space is succinct enough to be useful to visualize. What this means is that every change in state should be significant and correspond to a choice made by the player. Furthermore, the choices available at any given point should ideally be limited in number. If you have an exponential explosion, visualizations of the state space are not going to be of much help.

In PuzzleGraph I built this into the puzzle format itself. By building up the puzzles around discrete puzzle locations, I avoided a continuous or detailed representation of space. Most of the remaining simplicity in states followed directly from this, and I simply avoided supporting gameplay features highly tied to space or time, such as timer based mechanics or physically simulated interactions between objects.

In some cases it's possible to begin with a puzzle form with a somewhat richer state space and simplify or collapse it into fewer states by ignoring irrelevant details. For example, the game of Sokoban has a very high number of states if the states directly capture which grid cell the player is in. However, this can be abstracted away by only considering which grid cells the player currently has access to (without moving boxes) and recording that into states instead of the exact player position. It's conceivable that decent methods for state collapse can be employed for many cases of full 3D worlds as well. For example, there are methods to automatically generate navigation meshes (nav-meshes) from arbitrary 3D geometry, and the way walkable areas are split into convex polygons might well in some cases be a useful abstraction of player locations for the purposes of state space analysis.
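Here's a minimal sketch of that reachability abstraction for a simple walkable-cell grid (hypothetical code, not from any actual Sokoban solver):

using System.Collections.Generic;

static class StateCollapse
{
    // Flood fill from the player's cell through walkable cells. Two
    // Sokoban states where the player stands in different cells of the
    // same reachable region collapse into one state.
    public static HashSet<(int x, int y)> ReachableCells(bool[,] walkable, (int x, int y) player)
    {
        var reachable = new HashSet<(int x, int y)>();
        var stack = new Stack<(int x, int y)>();
        stack.Push(player);
        while (stack.Count > 0)
        {
            var (x, y) = stack.Pop();
            if (x < 0 || y < 0 || x >= walkable.GetLength(0) || y >= walkable.GetLength(1))
                continue;
            if (!walkable[x, y] || !reachable.Add((x, y)))
                continue;
            stack.Push((x + 1, y));
            stack.Push((x - 1, y));
            stack.Push((x, y + 1));
            stack.Push((x, y - 1));
        }
        return reachable;
    }
}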

Separating state from statics

Once you have a state representation where every difference in state corresponds to a meaningful player choice, you can begin the implementation. The first step here is to separate the state that changes from anything that doesn't change.

This means a departure from normal object oriented design where you'd have a toggle object that contains its own state as a member variable. Instead you have two different main objects.
  • One is the static puzzle design, that contains information about the nodes (type and location), edges (type and which nodes they connect), and the initial state of each dynamic object.
  • The other is the puzzle state, which contains all data that can change while the puzzle is being played.
With this separation, to evaluate the current state of the toggle element, the puzzle state object is given to the toggle object, and the toggle knows how to find its own state. If the toggle is stored as the fourth element in the puzzle, it finds its state by looking at the fourth element in the array of bools in the puzzle state given to it.
While the current state is separated from the puzzle design, the initial state is encoded directly in the design. This is necessary since the dynamic state objects are highly reliant on a puzzle design that doesn't change. If the puzzle design is changed, things may no longer match up. So whenever the design is changed, any puzzle state objects must be discarded and new ones constructed from scratch based on the initial state properties of the elements in the puzzle design object.
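Here's a minimal sketch of that separation, with hypothetical, simplified types (the real PuzzleGraph source differs in the details):

using System.Collections.Generic;

// The design never changes during play; only the state does.
public class PuzzleDesign
{
    public List<string> nodeTypes = new List<string>(); // element type per location
    public bool[] initialToggleStates;                  // initial state per dynamic element

    // States are always constructed fresh from the design's initial values.
    public PuzzleState CreateInitialState()
    {
        return new PuzzleState { playerNode = 0, toggleStates = (bool[])initialToggleStates.Clone() };
    }
}

// All data that can change while the puzzle is being played.
public class PuzzleState
{
    public int playerNode;      // which location the player is at
    public bool[] toggleStates; // one entry per dynamic element, by index

    public PuzzleState Clone()
    {
        return new PuzzleState { playerNode = playerNode, toggleStates = (bool[])toggleStates.Clone() };
    }

    // Value equality, so identical states count as the same dictionary key
    // during the state space search described below.
    public override bool Equals(object obj)
    {
        var other = obj as PuzzleState;
        if (other == null || playerNode != other.playerNode)
            return false;
        for (int i = 0; i < toggleStates.Length; i++)
            if (toggleStates[i] != other.toggleStates[i])
                return false;
        return true;
    }

    public override int GetHashCode()
    {
        int hash = playerNode;
        foreach (bool b in toggleStates)
            hash = hash * 31 + (b ? 1 : 0);
        return hash;
    }
}

A toggle stored as the fourth element in the design then finds its state as state.toggleStates[3].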

You can test and verify your separation of puzzle design and puzzle state before doing any state space analysis or visualization. A first suitable step is implementing functionality to "play" the puzzle based on these separated states.

Searching the state space

Once you have your separate state objects, you need a way to evaluate which new states it's possible to go to from a given state, and you need to keep track of that and build up a graph of the ways that the different states are connected. If you already implemented play functionality, you are already part way there.

Before you can construct your graph, you need objects corresponding to nodes and edges in the graph. The puzzle state objects are not themselves nodes. The states should be entirely self-contained, so we need wrapper state nodes that contain both the state of that node and information about which other states it's connected to. The state nodes don't contain references directly to other state nodes, but rather to state edge objects. Besides having references to the state nodes it connects, a state edge also stores which kind of action the edge corresponds to. This can't always be derived just by looking at a before-state and an after-state, so we need to store it explicitly.

To explore the state space you need these data structures:
  • A dictionary with states as keys and state nodes as values. This is needed to check if a new state is in fact the same as an existing state, and retrieve the state node of that existing state if so.
  • A queue for state nodes to be processed.

You can then explore the state space according to this algorithm:
  • Extract the initial state from the puzzle and create a state node for it. Add this state node to the dictionary and to the queue.
  • Process state nodes according to the queue as long as it isn't empty. Let a state node popped from the queue be A.
    • Look at the state of the state node A and figure out all the actions it's possible to perform. For each of those actions:
      • Make a clone of the state of state node A.
      • Perform the action on the cloned state such that it changes into a different state.
      • Check in the dictionary if the new state is the same as an existing state in your graph.
        • If so, let the state node of that existing state be B.
        • Otherwise, create a new state node B, store the new state in it, and add B to the dictionary and to the queue.
      • Connect state nodes A and B with a state edge that stores which kind of action was taken.
The state nodes in PuzzleGraph have both a list of outgoing and incoming state edges. Outgoing edges are the actions that can be taken to go from the current state to other states. Incoming are the actions that can be taken to go from other states to this state. Having both makes the state space visualization code simpler, and also helps when implementing e.g. undo for the play functionality of the puzzle.
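Put together, the search can be sketched like this, continuing the hypothetical types from the earlier sketch. How actions are enumerated and performed is abstracted into delegates here, since that part is specific to the puzzle mechanics:

using System;
using System.Collections.Generic;

public class StateNode
{
    public PuzzleState state;
    public List<StateEdge> outgoing = new List<StateEdge>(); // actions leading away from this state
    public List<StateEdge> incoming = new List<StateEdge>(); // actions leading into this state
}

public class StateEdge
{
    public StateNode from, to;
    public string action; // which kind of action this edge corresponds to
}

public static class StateSpaceSearch
{
    public static Dictionary<PuzzleState, StateNode> Explore(
        PuzzleDesign design,
        Func<PuzzleState, IEnumerable<string>> getPossibleActions,
        Action<PuzzleState, string> performAction)
    {
        var nodes = new Dictionary<PuzzleState, StateNode>();
        var queue = new Queue<StateNode>();

        // Seed the graph with the initial state.
        var initial = new StateNode { state = design.CreateInitialState() };
        nodes[initial.state] = initial;
        queue.Enqueue(initial);

        while (queue.Count > 0)
        {
            StateNode a = queue.Dequeue();
            foreach (string action in getPossibleActions(a.state))
            {
                // Clone the state, then mutate the clone into the new state.
                PuzzleState newState = a.state.Clone();
                performAction(newState, action);

                // Reuse the existing node if this state was seen before;
                // otherwise create one and queue it for processing.
                StateNode b;
                if (!nodes.TryGetValue(newState, out b))
                {
                    b = new StateNode { state = newState };
                    nodes[newState] = b;
                    queue.Enqueue(b);
                }

                // Record the transition, including the kind of action taken.
                var edge = new StateEdge { from = a, to = b, action = action };
                a.outgoing.Add(edge);
                b.incoming.Add(edge);
            }
        }
        return nodes;
    }
}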

Visualizing the graph

Now you have a graph data structure representing the puzzle state space, but no easy way to inspect it. All that's really needed here is figuring out at which position each state space node in the graph should be drawn. From there on it's easy to draw each node at its given position, and draw connections between connected nodes.

Figuring out the positions is the tricky part then. There are algorithms and even frameworks available for this, but I ended up not using any of them and just went with my own implementation.

The overall trick is to do a little almost-physical simulation where nodes have spring-like connections between them, with each spring connection having an ideal distance it attempts to pull or push towards. The two main challenges here are:
  • Figuring out the ideal distance between every pair of nodes.
  • Figuring out which force to apply to node pairs given the ideal distance between them.

The first of those is only done once when the state node graph layout is initialized. The ideal distance is calculated as the number of state changes required to go between the two nodes, either one way or the other. In PuzzleGraph every type of state change adds the same amount of distance, but this could be made different if some state changes are regarded as more significant than others.

The second part is done iteratively, stopping perhaps when the nodes don't seem to move much anymore. It took quite some experimentation to get something that worked reliably for a wide variety of graphs. I calculate an adjustment length as the difference between the current distance and the ideal distance, divided by the squared ideal distance. This is then multiplied by a constant of 0.1, which seemed to be about the highest value I could use that converges fast without causing oscillation, explosions, or other instabilities.

The division by the ideal distance is because it's less important to maintain ideal distance to far-off nodes than to close-by ones. And also because there are more far-off nodes than close-by ones, so they often have a large aggregate force. The choice of dividing by exactly the squared ideal distance was largely arrived at experimentally. The graph will look a bit different depending on which power is used.
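A single iteration of that layout step might look like this sketch (simplified compared to the actual source; idealDist holds the precomputed pairwise graph distances from the step above):

using UnityEngine;

static class GraphLayout
{
    // One iteration of the spring-like layout. positions is updated in
    // place; idealDist[i, j] is the precomputed number of state changes
    // needed to get between state nodes i and j.
    public static void Iterate(Vector2[] positions, float[,] idealDist)
    {
        int n = positions.Length;
        var offsets = new Vector2[n];

        for (int i = 0; i < n; i++)
        {
            for (int j = i + 1; j < n; j++)
            {
                Vector2 delta = positions[j] - positions[i];
                float current = delta.magnitude;
                float ideal = idealDist[i, j];

                // Difference between current and ideal distance, divided by
                // the squared ideal distance so far-off nodes matter less.
                float adjustment = (current - ideal) / (ideal * ideal);

                // 0.1 was about the highest factor that converged without
                // oscillation or other instabilities.
                Vector2 move = delta.normalized * (adjustment * 0.1f);
                offsets[i] += move;
                offsets[j] -= move;
            }
        }

        for (int i = 0; i < n; i++)
            positions[i] += offsets[i];
    }
}

Iterate is then called repeatedly until the total movement becomes negligible.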

Extra stuff

That's the basics! Some other things to do are marking state nodes that are part of the shortest solution paths, or that are fail states from which it isn't possible to reach a goal state. See the source code for more details. :)

Creating natural paths on terrains using pathfinding

Pathfinding is not just for finding paths for AI agents/NPCs and similar. It’s also for procedurally creating paths.

While working on path creation for a procedural terrain, I had the problem that the generated paths would keep having too steep sections, like this:
It was too steep and also didn’t look natural at all. I kept increasing the cost multiplier for slopes, but it didn’t help. Then it hit me: My function just penalized the vertical distance, but it didn’t make any difference if this vertical distance came gradually or all at once. So in fact, my cost function wasn’t trying to avoid steepness - rather it was trying to make a path where the altitude changes monotonically. It would do everything it could to avoid the path first going up and then down again.

But I didn’t care if the path went a little up and down, as long as it wasn’t too steep at any point. To achieve this behavior, I needed to penalize steep slopes more than linearly, for example by squaring the steepness. This has the effect of penalizing abrupt changes in altitude and rewarding gradual changes. And sure enough, after the change the generated paths avoided steepness and looked much more like what humans would create.
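As a sketch, with hypothetical names rather than my exact code, the change amounts to something like this:

using UnityEngine;

static class PathCost
{
    // Cost of traversing one edge of the path graph. The steepness is
    // penalized more than linearly, so one steep segment costs more than
    // the same altitude change spread over several gentle segments.
    public static float EdgeCost(Vector2 from, Vector2 to, float multiplier)
    {
        float horizontal = Vector2.Distance(from, to);
        float vertical = Mathf.Abs(SampleAltitude(to) - SampleAltitude(from));
        float steepness = vertical / horizontal;

        // A linear penalty only discourages non-monotonic altitude change;
        // squaring the (multiplied) steepness discourages abruptness.
        float penalty = Mathf.Pow(steepness * multiplier, 2f);
        return horizontal * (1f + penalty);
    }

    // Stand-in for the terrain height function.
    static float SampleAltitude(Vector2 pos) { return 0f; }
}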
Tweaking pathfinding cost functions is one of those odd little things that I really enjoy. Seeing a little change in a formula imbue a generated creation with what could be mistaken for human design is really fascinating and fun!

Further notes

The observation-turned-blog-post in its original form ended there, but I've compiled some further notes for those who want to dig into the details.

Pathfinding method

Let me tell you about the method I use for the pathfinding, since I was asked about this. I won't do an introduction to pathfinding in general here, but will assume understanding of the principles of performing pathfinding using an algorithm like A* or similar on a graph of nodes connected by edges.

Such a graph can be represented in many ways. Sometimes it's a data structure with explicit data representing all the nodes and edges and how they're connected. Other times it can just be a grid (like a 2D array) and each cell is implicitly connected to the neighboring cells - maybe the 4 or 8 neighbors, depending on whether diagonal connections are used.

It doesn't really matter, and a pathfinding algorithm can typically easily be modified to work with one or the other. Apart from giving it a start node and goal node, the important part is that you can tell the algorithm:
  • Which other nodes is a given node connected to?
  • What is the real or estimated cost of getting from one node to another?
You might have data for this stored in advance or you might calculate the answers on the fly when asked.

For the terrain pathfinding I do the latter. In fact, there is neither an explicit graph, nor any 2D array. There is no data structure at all for the pathfinding to happen inside. Instead, it's all implicit, based solely on coordinates. I tell the pathfinder what the start and end coordinates are, and there's a function for getting "neighbor coordinates" to a given coordinate. There's also a function for getting the cost of getting from one coordinate to another, as you saw earlier.

This may sound completely free-form and magical at first, but there is still a structure that must be adhered to. You must imagine a grid structure and ensure the coordinates always fall within this grid. In my case it's a plain square cell grid where each cell has a size of 5. So the start coordinate, goal coordinate, and every provided neighbor coordinate must always have x and y values that are multiples of 5. You also shouldn't use floating-point numbers for this, since they can produce floating-point precision errors, and even the slightest imprecision can cause the pathfinding to completely fail.

I wanted my paths to be able to have a natural, non-jagged look, so I wanted edges to come in 16 directions instead of the basic 8. The 8 additional directions are achieved by going two cells out and one to the side. This provided me with an interesting choice of whether the 8 other directions should also go two cells out, or just one. In theory the second option can be weird, since if you need to move just one cell, the path has to first side-step by two cells. But in practice both seem to work fine. The first images in this post were made with the first option, but I later decided to use the second to avoid the path having very small-scale zig-zags.
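Here's a sketch of what the implicit neighbor function can look like (hypothetical code; the coordinates are integers that are always multiples of the cell size):

using System.Collections.Generic;

struct Coord
{
    public int x, y;
    public Coord(int x, int y) { this.x = x; this.y = y; }
}

static class ImplicitGrid
{
    const int CellSize = 5;

    // 16 directions: the 8 knight-like "two out, one to the side" moves
    // plus the 8 basic directions, here also going two cells out (the
    // second option mentioned above).
    static readonly int[,] directions =
    {
        {  2,  0 }, {  2,  1 }, {  2,  2 }, {  1,  2 },
        {  0,  2 }, { -1,  2 }, { -2,  2 }, { -2,  1 },
        { -2,  0 }, { -2, -1 }, { -2, -2 }, { -1, -2 },
        {  0, -2 }, {  1, -2 }, {  2, -2 }, {  2, -1 }
    };

    public static IEnumerable<Coord> GetNeighbors(Coord c)
    {
        for (int i = 0; i < directions.GetLength(0); i++)
            yield return new Coord(
                c.x + directions[i, 0] * CellSize,
                c.y + directions[i, 1] * CellSize);
    }
}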

Effects of different parameter values

I got asked on the proceduralgeneration sub-reddit:
How rapidly does the path change as you increase the power from 1.0 to 2.0? What happens if you go past 2.0? Does the path eventually have to spiral around the mountain or something?
I had actually been content just using the values I had found, which seemed to work well enough, but now that I'd been asked, of course I wanted to find the answers too! I tried doing the pathfinding with the power values 1.5, 2.0 and 3.0, and with the multiplier values 1, 2, 3, 4, 5, 6, 7, 10, 15, 20, and 25. I had moved the multiplier inside the power function since the original version of the code, so those multipliers are multiplied onto the steepness before the resulting value is raised to the given power. Here's a table of the results. Some notes on the results follow.

Overall the common theme is that the results range from straight and boring to wildly tortuous. At the extreme end, the path begins to create loops - something that should be impossible with path-finding. However, the edges I use in the path-finding can cross each other. This means that they can appear to loop even though they never pass through the same points from the perspective of the path-finding. They only begin to do this once the path-finding is so extremely sensitive to tiny changes in altitude that the small difference in altitude between two crossing edges is enough for it to exploit in order to loop around. I should say that I only sample the altitude at the end-points of each edge, which appears to be fully sufficient except in extreme cases like this.

Note that there are very similar occurrences across the different power values. For example, multiplier 10 at power 1.5 looks just like multiplier 6 at power 3.0, and multiplier 7 at power 2.0 looks just like multiplier 5 at power 3.0. Does this mean that any result you can get with one power value, you can also get with another? That there exists a transformation that makes them mathematically equivalent? No, I don't believe so. It feels like there are subtle differences. For example, the progression of paths with power value 3 begins to do loops before it begins to take a major detour, while for power value 2, the loops only start happening after it has begun doing a large detour. The differences are very subtle though, and hard to put the finger on exactly.

One thing that's tempting to look for is qualitative differences, such as some paths doing many small zig-zags and others doing larger swoops. However, I think that's a red herring. The algorithm doesn't measure sharpness of turns or frequency of turns in any way and shouldn't be able to tell one from the other. I think that the seeming qualitative differences are thus up to unpredictable aspects of how the path-finding interacts with the terrain height function, which sometimes just happen to produce one result or the other. To answer in terms of the original question: if the terrain was perfectly conical, a path from the base to the top might equally well form a spiral, a zig-zag pattern, or any other combination of left-winding and right-winding sections.

My own pragmatic takeaway from this is that sticking with just a power value of 2 seems fine and I'd then use multiplier values between 3 and 15 depending on how straight or tortuous I want the path to be.

Perspectives

These generated paths were a proof-of-concept for a new project I'm working on and I'm sure I'll learn more as I go along. For example, already I'm learning some things about approaches for flattening the terrain around the paths which I may share at a later point when I'm a bit more sure of my findings. For now, I hope you found this useful!