Rune Skovbo Johansen
Creative Programmer & Designer
runevision

On the difference between individuals and statistical trends

Aug 8, 2017
There has been a lot of controversy recently about the famed Googler and his anti-diversity manifesto. He wrote a lot of things that there is good reason to be upset about. Yet, the one thing people have kept referring to as the main issue is something he did not actually say.

Edit: Doug Binks (@dougbinks) pointed out, "The manifesto states that Google has lowered the bar for diversity candidates, thus implying his fellow colleagues may be sub-par." That is correct and I stand corrected on that one. Original post below.

But first, a diversion to a fictional funfair with two fun houses, the Little Funhouse and the Eerie Funhouse.

The Little Funhouse is a popular workplace, among both women and men. Employees must be at most 160 cm (5 feet 3 inches) tall in order to function well and avoid injuries, and there is a test for it in the hiring process. There are, however, more women than men who work there, and the company is doing various things to try to attract more men.

One day one of the employees, Mary, declares: "It's natural that there's more women than men, because men are on average taller than women." Her statement is generally accepted as uncontroversial. No one takes her statement as an insult towards the men working at the Little Funhouse, because a casual glance around the place easily reveals that they are just as short, and thus just as qualified, as the women there. This is how they were able to get the job in the first place, after all.

In the neighboring Eerie Funhouse, employees need to have good hearing because the work involves reacting to subtle audio cues. There is a test for it in the hiring process. More women than men work here too, and the company is trying to do something about that as well.

One day one of the employees, Patricia, declares: "It's natural that there's more women than men, because men on average have worse hearing than women." This statement creates a stir. It's not clear that Patricia's statement is based on sound science, but this is not what gets people worked up the most. Rather, the fact that it's not evident from a glance whether the men working at the company have as good hearing as the women gets people all confused about the nature of Patricia's statement. Word gets around, and people both inside and outside the company take Patricia's statement to mean that she claimed her male colleagues had worse hearing and were less qualified for the job. Patricia tried to argue that that was not what she had said, but it was to no avail, and she was fired.

Did Patricia make a claim that her male colleagues were unqualified for their jobs? Of course not; no more than Mary made a claim that her male colleagues were unqualified. Making a statement about trends and averages at the population level is entirely separate from making a claim about people at a workplace who went through a hiring process and were already found qualified for the job. This holds regardless of whether the claims about the population-level trends and averages are correct or incorrect. It's not a statement about the people working at the company in either case.

We return from our fictional examples to the real world. Our infamous Googler was clearly concerned people might mistake claims about the population as being claims about individuals, and so he included this clarification early on in his manifesto.
When addressing the gap in representation in the population, we need to look at population level differences in distributions. (...) Many of these differences are small and there’s significant overlap between men and women, so you can’t say anything about an individual given these population level distributions.
He also included the word "on average" whenever he made statements about men versus women.

Yet, the responses were all primarily about how he had indicated that his colleagues were unqualified for their job.

From "So, about this Googler’s manifesto." by Yonatan Zunger:
What you just did was incredibly stupid and harmful. You just put out a manifesto inside the company arguing that some large fraction of your colleagues are at root not good enough to do their jobs, and that they’re only being kept in their jobs because of some political ideas.
From Erica Joy's piece
Employees cannot feel included in an environment where their peers believe they aren’t worthy of being there and will say so, freely. Employees cannot advance in a system that is built on peer evaluation if their peers believe them to be fundamentally sub-par.
And lastly, from Google's CEO Sundar Pichai, justifying the firing of the employee:
To suggest a group of our colleagues have traits that make them less biologically suited to that work is offensive and not OK.
To make statements about population-wide statistical trends, whether these are true or false, is not the same as making statements about one's colleagues, who have all been deemed qualified already through the hiring process. Claiming that it is the same is as absurd as if Mary from our fictional Little Funhouse were criticized for claiming her male coworkers were unqualified just because she said that men in the overall population are on average taller.

There are plenty of things to be upset about in the Googler's manifesto, from claims based on questionable or outdated science, and what appear to be made-up claims, to calls to de-humanize the company culture. For those reasons I don't agree with him at all. But the misunderstanding is bigger than just this current controversy, and going forward we need to be able to tell the difference between claims about statistical trends and claims about individuals if we want to have intelligent debate about diversity and not just fetch our pitchforks at any given opportunity.

If anything I have written is ill-informed, misleading or offensive, please let me know so I can correct it. Thanks.

July update: Trials and triumphs of whips and levers

Here are the latest updates on the development of my Vive VR game Eye of the Temple.

For the past several months I've been working on improving the whip I prototyped last year. In the last post, I showed how it could grab levers, but there were a lot of issues, and the whip and lever didn't exactly look pretty. Here's what it looks like now:



This feels really good to use now. It didn't get to this point without a lot of issues along the way, though.

The whip

A bit of background on how the whip is implemented, in broad strokes. Using physics joints and the like quickly turned out to be infeasible when I did the prototype last fall. Instead, I'm keeping track of positions and velocities of "links" in arrays in my own scripts and doing a very custom simulation with lots of tweaks and workarounds. Collision with level geometry works by doing sphere-casts, one per whip link per frame, of which there are around 30.
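My actual simulation code isn't public, so here is only a minimal sketch of the general idea in Python: a Verlet-style chain of links with distance constraints, plus a crude sphere push-out standing in for the per-link sphere-casts the game does. All names and constants here are illustrative, not taken from the game.

```python
import math

# Minimal Verlet-style chain. Each link stores its position, and velocity is
# implicit in the previous position. Constants are illustrative guesses.
LINKS = 30
REST_LEN = 0.05   # distance between neighboring links
GRAVITY = -9.8
DT = 1.0 / 90.0   # typical VR frame time

pos = [[i * REST_LEN, 1.0, 0.0] for i in range(LINKS)]
prev = [p[:] for p in pos]

def step(handle_pos, obstacles):
    """Advance the chain one frame. obstacles: list of (center, radius) spheres."""
    # Verlet integration: velocity is inferred from the previous position.
    for i in range(1, LINKS):
        vx = pos[i][0] - prev[i][0]
        vy = pos[i][1] - prev[i][1]
        vz = pos[i][2] - prev[i][2]
        prev[i] = pos[i][:]
        pos[i] = [pos[i][0] + vx,
                  pos[i][1] + vy + GRAVITY * DT * DT,
                  pos[i][2] + vz]
    pos[0] = list(handle_pos)  # the first link is pinned to the handle
    # Distance constraints keep neighboring links REST_LEN apart.
    for _ in range(4):  # a few relaxation iterations per frame
        for i in range(1, LINKS):
            dx = pos[i][0] - pos[i-1][0]
            dy = pos[i][1] - pos[i-1][1]
            dz = pos[i][2] - pos[i-1][2]
            d = max(math.sqrt(dx*dx + dy*dy + dz*dz), 1e-9)
            k = 0.5 * (d - REST_LEN) / d
            if i == 1:
                # Link 0 is pinned, so link 1 absorbs the full correction.
                pos[i][0] -= 2*k*dx; pos[i][1] -= 2*k*dy; pos[i][2] -= 2*k*dz
            else:
                pos[i-1][0] += k*dx; pos[i-1][1] += k*dy; pos[i-1][2] += k*dz
                pos[i][0] -= k*dx; pos[i][1] -= k*dy; pos[i][2] -= k*dz
    # Stand-in for the per-link sphere-casts: push links out of sphere obstacles.
    for i in range(1, LINKS):
        for (c, r) in obstacles:
            dx = pos[i][0]-c[0]; dy = pos[i][1]-c[1]; dz = pos[i][2]-c[2]
            d = math.sqrt(dx*dx + dy*dy + dz*dz)
            if 0.0 < d < r:
                s = r / d
                pos[i] = [c[0]+dx*s, c[1]+dy*s, c[2]+dz*s]
```

A real implementation would replace the push-out loop with sphere-casts against arbitrary level geometry, plus the many tweaks mentioned above; this only shows the basic array-of-links structure.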

There's special logic that makes the stick of the levers "sticky" and "unsticky" at specific times, which aids the behavior, but the way the whip curls around the stick (or fails to curl, sometimes) is still driven by the regular simulation apart from that. For all other surfaces, there's no special logic. It uses the sphere-cast based collision avoidance I mentioned above.

I should say there's a glaring issue in my collision approach which isn't shown in the video: collision fails against moving surfaces, such as the moving platforms. I'm not sure I want to solve that, because it would add tons of complexity to the code, while probably also degrading performance significantly. I've chosen to ignore this for now, since there's no lack of other, more critical things that need to be done.

The lever

The lever has caused me all kinds of problems. Doing a lever that works properly, particularly for VR, is apparently a complicated problem. I made a video about my woes here:



I found out that levers could be made to avoid sliding out of their joints given three criteria are met:

First, the collider of the lever handle must not overlap with any other colliders in the world. The tricky thing here is that it's not easy to see that overlapping colliders might affect the handle, since the handle is firmly locked in place. But they do affect it in very non-obvious ways. So I ensured the handle collider doesn't overlap with any other colliders.

Secondly, the rigidbody must have its position set to locked.

Thirdly, the center of mass of the rigidbody must be overwritten in script to be set to the pivot that the handle should rotate around. Unfortunately, this leads to another problem: sometimes the lever handle would get completely stuck, and no amount of force would make it move one bit. After some experimentation, this seemed to happen if the handle is subjected to forces while the connected rigidbody (which is kinematic) simultaneously moves. (Some levers in my game sometimes get moved around.) I worked around this by disabling the rigidbody position locking at strategic times and then reenabling it again. This seemed to fix the issue.

Polishing it up

After I had gotten most of the technical issues resolved, I set out to create proper 3d models for the whip and lever to replace the simple cylinder placeholders I had before.

And as the last step, I added the ability for the whip to be rolled up (which it now is by default). The whip is still fully simulated while rolled up, which is what gives the rolled up whip its nice juicy appearance. There's no animation or pre-canned movements involved in the whip at all.

The transition where the whip gets rolled up is done by pulling at specific segments of the whip towards a specific point on the handle. This happens to also be how the whip remains rolled up in general.

In the video I do a little upwards flick and then the whip rolls up. This is purely "role playing" though. The rolling up is actually triggered just by pressing a button on the controller. ;)

If you've been following the development of Eye of the Temple, does the whip related gameplay change how you view the game? What do you think it adds to it?

June update: Verticality, puzzles, whip

Here are the latest updates on the development of my Vive VR game Eye of the Temple.

For the past month I've been mainly working on improving the whip I prototyped last year. It can now be used to grab levers at a distance, and then you can yank the whip backwards to activate the lever.
There's still some way to go, especially with getting the audio cues right. The physics will never be quite like a real whip, but making it satisfying to use is the top priority.

Apart from this I've been looking into designing more puzzles for the game. I'm no expert puzzle designer, but bit by bit I come up with some that I think work well. The latest involve tall rotating towers, activated by levers (no whip use necessary for these), where you need to step around on and inside them at two different levels.

This also marks my increased effort in making better use of verticality in the level design. Experiencing the great heights is a draw of the game, and I'm figuring out how to use that optimally. I don't have a new build with these new things yet. The work right now is on smaller isolated pieces and puzzles, and once I have a set of those that fit nicely together, I'll begin integrating it all back into the overall world design.

April update: Fire, blades, speedrun mode

Here are the latest updates on the development of my Vive VR game Eye of the Temple. New additions:
  • Fire! One challenge tunnel now has fire hazards.
  • Blades! One challenge tunnel now has swinging blades.
  • Speedrun mode! A more challenging way to play the game. More notes below.
  • Hat! You're now wearing a hat. Hope you like hat.
  • Experimental spectator camera. 3rd person view. More notes below.
  • Field of view is now restricted when close to falling and when falling in order to further reduce risk of motion sickness.
  • Placeholder ambient soundscape taken out of the game for now since it had confusing footstep sounds.

Speedrun mode

For those of you who wanted more challenge in the game, there is a new speedrun mode. This mode times your play-through but also speeds up the platform movements as long as you can keep up.

This mode has a higher risk of being uncomfortable, causing motion sickness, or falling over, so engage at your own risk.
  • Each time you take a perfectly timed step onto a new platform, the game will speed things up a little bit.
  • Each time you miss an opportunity to step onto a new platform, the game will slow things down a little bit. (This can occasionally happen through no fault of your own.)
  • When you die, the speed is reset, so it's recommended to keep to a speed you can handle in order not to lose momentum in your speedrun. You can avoid speeding things up further by taking steps in a slightly slower way.
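The adjustment rules above amount to a clamped speed multiplier that nudges up on perfect steps and down on misses. Here's a minimal sketch; the class name, factor values, and speed range are my own guesses, not the game's actual tuning:

```python
class SpeedrunPacer:
    """Adaptive speed multiplier for a speedrun mode. Illustrative values only."""

    def __init__(self, min_speed=1.0, max_speed=3.0):
        self.min_speed = min_speed
        self.max_speed = max_speed
        self.speed = min_speed

    def perfect_step(self):
        # Perfectly timed step onto a new platform: speed things up a little.
        self.speed = min(self.speed * 1.05, self.max_speed)

    def missed_step(self):
        # Missed opportunity to step onto a new platform: slow things down a little.
        self.speed = max(self.speed * 0.9, self.min_speed)

    def died(self):
        # Death resets the speed, losing the accumulated speedrun momentum.
        self.speed = self.min_speed
```

The clamping means a player who keeps up converges on the maximum speed, while occasional misses only partially undo the gains.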
I don't recommend this mode to people who haven't already played through the game at least once, so in the final game the speedrun mode will probably only be unlocked by completing the game.

How to use: For now, you start a speedrun by first starting a new game and then pressing Shift+R on the keyboard.

Experimental spectator camera

The gameplay in Eye of the Temple can be hard for others to get an impression of just by watching the first-person view. I've experimented with an alternative camera angle shown on the monitor that shows the action from a 3rd person perspective.

How to use: Activate/toggle 3rd person spectator camera by pressing X on the keyboard.
This view requires extra resources from your computer, so if you get performance problems, turn it off.

What do you think of 3rd person spectator camera? Is it something you might use for streaming, videos, or for people watching you play? It's still a bit buggy and has room for improvement, but I'm curious what you think of the overall idea.

February update: Gems

Here are the latest updates on the development of my Vive VR game Eye of the Temple. New features:
  • There are now gems throughout the temple that you can collect.
  • Moving platforms have glowing symbols on them.
  • Visuals: Intro area has some red stones and some of the dungeons have grittier gray stones and spikes.
  • The way the platforms move has been tweaked, hopefully to further reduce potential for dizziness.

Notes on gems

The gems are found throughout the temple. The exact placement tries to take player proportions into account so that they are at a comfortable distance for reaching. I haven't tested this on different people yet though. If you could let me know how it works for you and how tall you are, that would be helpful. If you don't want to share that, that's ok too.

Right now the gems don't do anything yet. Later I will implement at the minimum a way for you to see how many you collected.

Beyond that I need to decide if the gems have a critical or non-critical function:

A critical function of the gems could be if they are used to unlock new areas in the game and thus are needed to progress. Or an almost-critical function would be to unlock alternative paths or secret rooms not otherwise accessible. This is still fairly critical, because it would be annoying if you're trying to see 100% of a game's content only to find out you can't, due to some mistake made earlier that's too late to do anything about. Currently there are one-way platforms that you can take which will prevent you from going back to collect any gems you might have missed. If I make the gems critical, I'd have to find a way to make it possible to always go back to all areas of the temple.

Non-critical functions of the gems could be high scores, achievements, and, I dunno, unlockable hats if I get a selfie stick implemented for the game. :P Old games would typically grant you extra lives, but that doesn't work for modern games with infinite lives.

For now I refrained from placing gems at platforms that only go one way. If there were gems there and you failed to pick one up, you wouldn't have a second chance and I thought that might feel unfair or frustrating.

Early testers online forum

In order to try to get faster feedback and shorter iteration cycles, I opened up for people to sign up online to be early testers of the game. If you have access to a Vive (and 2.2 by 2.2 meters space) and would like to try out the game and provide detailed feedback based on your experience, please don't hesitate to join!

Sign up to provide feedback on early builds of Eye of the Temple

January update: Visuals, usability and early testing

For a while, my focus for my Vive VR game Eye of the Temple has been not to expand on gameplay right now but rather to improve what I've got, in order to make it as presentable as possible.

That has meant:
  • Improving visuals.
  • Addressing usability issues found in play-testing.
(If anybody wonders what happened to the Whip Arena spin-off game, I put that on hold after it became clear it only worked well with a quite large physical VR space, which very few people have available.)

3D models

Gate model: Two keys must be inserted above the gate to unlock and open it.
Stone torch model: You light these with your torch to trigger things happening.
Cliffs model: The temple used to just float in the air; now it's grounded.
For a long time the game was full of placeholder models made of simple boxes and cylinders. There's still some of those left, but I've been working on replacing them all with proper models.

After briefly planning to work with contractors for 3D models, I decided to learn 3D modeling myself instead (and deal with the various challenges that come with it).

The models I need have highly specific requirements (they need to have very exact measurements and functionality to fit into the systems of the game) yet in the end they are quite simple models (man-made objects with no rigging).

With this combination it turned out that back-and-forth communication even with a very skilled artist took as much time as just doing the work myself. I'll still be working with artists for the game, just not for the simple 3d models I need.

Several of the models still have placeholder texturing. I have an idea for a good texture creation workflow for them, but it will take a little while to establish, so I'm postponing that while there's more pressing issues.

Intro section

My goal is that Eye of the Temple should be a rather accessible game. You need a body able to walk and crouch, and not be too afraid of heights, but I want it simple enough to play that people who don't normally play computer games can get into it without problems.

This has largely been a success. Gamers or not, I normally just let people play without instructions, and they figure things out. My dad completed the whole thing in one hour-long session when he was visiting.

The game did throw people in at the deep end though, asking them right from the start to step between moving platforms four meters above the ground. Some people would hesitate enough to end up mis-timing their step and stumble, making the experience even more extreme right from the beginning.

To ease people a bit more in, I've worked on an intro section that starts out with only a 0.75 meter drop, and the first two platforms have no timing requirement. I have yet to get wide testing of this to see if it helps.

There is one particular problem I've toiled with for a while, which is to design a platform that bridges two spots in a compact manner. Why this is tricky relates to how the game lets you explore a large virtual space using just a small physical space.

Originally I had platforms rotating around a center axis, but that made some people motion sick who otherwise didn't have problems with the rest of the game. I tried various contraptions to replace it, but they were complicated and awkward to use. My latest idea is using just a barrel-like rolling block, which is nice in its simplicity, and also a fun little gimmick to balance on once you understand how to use it.

Figuring out what you're meant to do is easy to miss though, as I found out with the first tester trying it. I have some ideas for a subtle way to teach it, but that will take quite some time to implement. For now I settled for slapping a sign up that explains it.

Early testers online forum

There is no substitute for directly observing people playing a game, but this is impractical for me to do frequently when I also have a full-time job. I'm lucky if I get to do it two times a month.

In order to try to get faster feedback and shorter iteration cycles, I've now opened up for people to sign up online to be early testers of the game. If you have access to a Vive and would like to try out the game and provide detailed feedback based on your experience, please don't hesitate to join!

Sign up to provide feedback on early builds of Eye of the Temple


The quest for automatic smooth edges for 3d models

I'm currently learning simple 3D modeling so I can make some models for my game. I'm using Blender for modeling.

The models I need to make are fairly simple shapes depicting man-made objects made of stone and metal (though until I get it textured it will look more like plastic). There are a lot of flat surfaces.

The end result I want is these simple shapes with flat surfaces - and smooth edges. In the real world, almost no objects have completely sharp edges, so 3d models without smooth edges tend to look like they're made of paper, like this:

What I want instead is the same shapes but with smooth edges, like this:

Here, some edges are very rounded, while others have just a little bit of smoothness in order not to look like paper. No edges here are actually completely sharp. The two images above show the end result I wanted. It turns out it was much harder to get there than I had expected! Here's the journey of how I got there.

How are smooth edges normally obtained? By a variety of methods. The Blender documentation page on the subject is a bit confusing, talking about many different things without clear separation and with inconsistent use of images.

Edge loops plus subdivision surface modifier

From my research I have gathered that a typical approach is to add edge loops near edges that should be smooth, and then use a Subdivision Surface modifier on the object. This is also mentioned on the documentation page above. This has several problems.

First of all, subdivision creates a lot of polygons which is not great for game use.

Second, adding edge loops is a manual process, and I'm looking for a fully automatic solution. It's important for me to have quick iteration times: to be able to fundamentally change the shape and shortly after see the updated end result inside the game. For this reason I strongly prefer a non-destructive editing workflow. This means that the parts that make up the model are kept as separate pieces and not "baked" into one model such that they can no longer be separated or manipulated individually.

Adding edge loops means adding a lot of complexity to the model just for the sake of getting smooth edges, which then makes the shape more cumbersome to make major changes to afterwards. Additionally, edge loops can't be added around edges resulting from procedures such as boolean subtraction (carving one object out of another) and similar, at least not without baking/applying the procedure, which is a destructive editing operation.

Edge loops and subdivision is not the way to go then.

Bevel modifier

Some posts on the web suggest using a Bevel modifier on the object. This modifier can automatically add bevels of a specified thickness to all edges (or selectively, if desired). The Bevel modifier in Blender does what I want in the sense that it's fully automatic and creates sensible geometry without superfluous polygons. However, by itself the bevel either requires a lot of segments, which is not efficient for use in games (I'd want only one or two segments to keep the poly count low), or, when fewer segments are used, it creates a segmented look rather than smooth edges, as can also be seen below.

Baking high-poly details into normal maps of low-poly object

Another common approach, especially for games, is to create both a high-poly and a low-poly version of the object. The high-poly one can have all the detail you want, so for example a bevel effect with tons of segments. The low-poly one is kept simple but has the appearance from the high-poly one baked into its normal maps.

This is of course a proven approach for game use, but it seems overly complicated to me for the simple things I want to achieve. Though I haven't tried it out in practice, I suspect it doesn't play well with a non-destructive workflow, and that it adds a lot of overhead and thus reduces iteration time.

Bevel and smooth shading

Going back to the bevel approach, what I really want is the geometry created by the Bevel modifier but with smooth shading. The problem is that smooth shading also makes the original flat surfaces appear curved.

Here is my model with bevel and smooth shading. The edges are smooth, sure enough, but all the surfaces that were supposed to be flat are curvy too.

Smooth shading works by pretending the surface at each point is facing in a different direction than it actually does. For a given polygon, the faked direction is defined at each of its corners in the form of a normal. A normal is a vector that points out perpendicular to the surface; however, we can modify normals to point in other directions for our faking purposes.

The way that smooth shading typically calculates normals makes all the surfaces appear curved. (There is typically a way to selectively make some surfaces flat, but then they will have sharp edges too.) The diagram below shows the normals for flat shading, for typical smooth shading, and for a third way that is what I would need for my smooth edges.

So how can the third way be achieved? I found a post that essentially asks the same question. The answers there don't really help. One incorrectly concludes that Blender's Auto Smooth feature gives the desired result - it actually doesn't, but the lighting in the posted image is too poor to make it obvious. The other is the usual edge loop suggestion.

When I posted the question myself requesting clarification on the issue, I was pointed to a Blender add-on called Blend4Web. It has a Normal Editing feature with a Face button that seems to be able to align the normals in the desired way - however, as a manual workflow, not an automated process. I also found other forum threads discussing the technique.

Using a better smoothing technique

At this point I got the impression there was no way to get the smooth edges I wanted in an automated way inside of Blender, at least without changing the source code or writing my own add-on. Instead I considered an alternative strategy: Since I ultimately use the models in Unity, maybe I could fix the issue there instead.

In Unity I have no way of knowing which polygons are part of bevels and which ones are part of the original surfaces. But it's possible to take advantage of the fact that bevel polygons are usually much smaller.

There is a common technique called face weighted normals / area weighted normals (explained here) for calculating averaged smooth normals, which is to weight the contributing normals according to the surface areas of the faces (polygons) they belong to. This means that the curvature will be distributed mostly on small polygons, while larger polygons will be more flat (but still slightly curved).

From the discussions I've seen, there is general consensus that this usually produces better results than a simple average (here's one random thread about it). It sounds like Maya uses this technique by default since at least 2014, but smooth shading in Blender doesn't use it or support it (even though people have discussed it and made custom add-ons for it back in 2008), nor does the model importer in Unity (when it's set to recalculate normals).

Custom smoothing in Unity AssetPostprocessor

In Unity it's possible to write AssetPostprocessors that can modify imported objects as part of the import process. This can also be used for modifying an imported mesh. I figured I could use this to calculate the smooth normals in an alternative way that produces the results I want.

I started by implementing just area weighted normals. This technique still makes the large faces slightly curved. Here is the result. Honestly, the slight curvature on the large faces can be hard to spot here. Still, I figured I could improve upon it.

I also implemented a feature to let weights smaller than a certain threshold be ignored. For each averaged normal, all the contributing normals are collected in a set, and the largest weight is noted. Any weight smaller than a certain percentage of the largest weight can then be ignored and not included in the average. For my geometry, this worked very well and removed the remaining curvature from the large faces. Here is the final result again. The code is available here as a GitHub Gist. Part of the code is derived from code by Charis Marangos, aka Zoodinger.
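My actual implementation is the C# Gist linked above; as a language-neutral sketch, the thresholding step could look like this in Python (the function name and the threshold value are illustrative):

```python
import math

def filtered_average_normal(contributions, threshold=0.25):
    """Average contributing normals, ignoring contributions with small weights.

    contributions: list of (weight, (nx, ny, nz)) pairs, e.g. the face areas
    and face normals of all faces meeting at one vertex. Any contribution
    whose weight is below `threshold` times the largest weight is discarded,
    which removes the residual curvature that tiny bevel faces would
    otherwise add to large flat faces.
    """
    max_w = max(w for w, _ in contributions)
    total = [0.0, 0.0, 0.0]
    for w, n in contributions:
        if w < threshold * max_w:
            continue  # too small relative to the largest face: ignore it
        total[0] += w * n[0]
        total[1] += w * n[1]
        total[2] += w * n[2]
    # Normalize the weighted sum to a unit normal.
    l = max(math.sqrt(total[0]**2 + total[1]**2 + total[2]**2), 1e-12)
    return (total[0] / l, total[1] / l, total[2] / l)
```

When a large flat face meets only tiny bevel faces at a vertex, the bevel contributions fall below the threshold and the vertex normal ends up exactly perpendicular to the flat face, which is what removes the last bit of curvature.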

Future perspectives

The technique of aligning smooth normals on beveled models with the original (pre-bevel) faces seems to be well understood when you dig a bit, but poorly supported in software. I hope Blender and other 3D software one day will have a "smooth" option for their Bevel modifier which retains the outer-most normal undisturbed.

A simpler prospect is adding support for area weighted normals. This produces almost as good a result for smooth edges, and is a much more widely applicable technique, not specific to bevels or smooth edges at all. That Blender, Unity and other 3D software that support calculating smooth normals don't include this as an option is even more mind-boggling, particularly given how trivial it is to implement. Luckily there are workarounds in the form of AssetPostprocessors for Unity and custom add-ons for Blender.

If you do 3D modeling, how do you normally handle smooth edges? Are you happy with the workflows? Do some 3D software have great (automatic!) support for it out of the box?
