The reason I’m writing about the book is that while these are great memos about writing, the more I think about them, the more they also apply to programming. Which is a weird coincidence, because they were meant to be memos for writers in the next millennium, and programming is a new form of writing that’s becoming more important in this millennium.

Being a game developer, I also can’t help but apply these to game design. So I will occasionally talk about games in here, but I’ll try to keep it mostly about programming.

The memos are about

- Lightness
- Quickness
- Exactitude
- Visibility
- Multiplicity
- Consistency

The sixth memo was never written, but I think consistency is very important in programming, so I’ll write about it a little bit.

Italo Calvino refers to lightness as opposed to weight. In all his memos he only says that he prefers the thing he talks about over its opposite. So he prefers to take weight out of his stories, but that doesn’t mean there isn’t also value in weight, and other authors may prefer weight. But I agree with Calvino: I prefer lightness in my stories. A specific kind of lightness, that is. Calvino says:

I hope that what I have said so far shows that there is a lightness that is thoughtful and that is different from the frivolous lightness we all know. Indeed, thoughtful lightness can make frivolity seem heavy and opaque.

He has lots of examples for this in the book, but the one that I connected most with (maybe because it’s from a book I actually read) is Milan Kundera’s book *The Unbearable Lightness of Being*. It’s a book about very, very heavy topics. But it feels like a light book. Calvino says it like this:

His novel shows us how everything in life that we choose and value for its lightness quickly reveals its own unbearable heaviness. Perhaps nothing escapes this fate but the liveliness and nimbleness of the mind – the very qualities with which the novel is written

And as a game developer, the obvious example to mention is Jenova Chen’s work. His first famous work was literally titled *Cloud* and what could be lighter than that? He later created *Flower*, which is a game about being a flower petal floating in the wind, and *Journey*, which has similarly light movement. The game that he is currently working on is called *Sky*. I absolutely loved Flower and Journey, I think Journey in particular is one of the masterpieces of our medium. And I think its lightness has a lot to do with that. There is a simple delight in weightlessness, and it’s rare that someone manages to turn that delight into a full game.

A small interesting aside: when Jenova Chen spoke at the NYU Game Center last month, he said that in the dichotomy between light and dark games, he prefers to make light games. He still values dark games, because his favorite game is Dark Souls, but in his personal work he prefers light. It’s a quirk of the English language that light and lightness are so closely related, but it’s also fitting, because weight drags you down, away from the light.

Anyways, I was going to talk about programming. What is lightness in programming? I think lightness is related to “simplicity,” but it explains certain choices I make that “simplicity” alone doesn’t. I have had arguments about which solution to pick, and which one would be simpler, and in hindsight I was actually arguing for the lighter solution, not the simpler one.

For example it explains why I still like to write code in C++. While C++ is an incredibly heavy language, it allows me to write light code. The idea of “you don’t pay for what you don’t use” is very important in C++, and is very rarely violated. In other languages I always feel like little weights are added to my program when I have to do something in a way that would be more efficient or more elegant in C++. (like having to heap-allocate something that could just be stored inside of another object, or having classes be bigger in memory than just the members that I added, or having overhead for virtual function calls that I don’t need)
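As a small illustration of what those little weights look like, here is a sketch in C++. All the types are made up for the example; the point is just that an object can be exactly as big as its members, and that indirection or a vtable is a cost you only pay when you ask for it:

```cpp
#include <memory>

// Made-up types to illustrate "you don't pay for what you don't use".
struct Vec3 { float x, y, z; };

// Stored inline: the object is exactly as big as its members.
struct InlineTransform {
    Vec3 position;
};

// Forced indirection: you pay for a pointer plus a separate heap block,
// which is what some languages make you do for every sub-object.
struct HeapTransform {
    std::unique_ptr<Vec3> position;
};

struct Plain { int value; };

// One virtual function adds a vtable pointer to every instance,
// whether or not you ever call anything dynamically.
struct WithVtable {
    int value;
    virtual void update() {}
};
```

On a typical 64-bit compiler, `InlineTransform` is the size of three floats, while `WithVtable` is bigger than `Plain` purely because of the vtable pointer.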

The “you don’t pay for what you don’t use” concept gives us an idea of where weight comes from in code: it comes from dependencies, and especially dependencies that we didn’t want or aren’t even using. It’s the old joke about object oriented programming: you wanted the banana, but what you got was the gorilla holding the banana. (and sometimes the whole jungle) To further support that meaning of the word, when people talk about a “lightweight” library or framework, they usually mean a library that doesn’t have many dependencies itself, and that doesn’t force you to use more than what you need.

When I think about heavy objects, I immediately think of the “character” or “vehicle” classes of most game engines. For some reason these two classes in particular are usually messes. I’ve seen character classes that contain code for animation, model rendering, IK, skeleton queries, interactions with environment objects (like doors, chairs etc.), inventory, damage handling, effect playing, physics interactions, weapon handling, melee combat, camera code, ragdoll code, bone-attachments (like playing an effect attached to the hand), UI checks, AI code, random other game-specific logic, random file IO, lots of player-specific code (that’s still being paid for by all non-player-characters), special code for quadrupeds, water interactions, ammo handling, LODing, transform math, vehicle interactions, sound code, auto aim code, and more.

All that in one class that’s thousands of lines of code. How does that happen? Well, the quadruped code gives us a hint: at some point people decided that they needed animals in the game. Animals need a lot of the code that characters already have: animation code, damage handling, model handling, and random other interactions. Just turn off half of the other code and you’re fine. And then add some more code for quadrupeds.

It’s clear that this results in very heavy classes, and it’s also clear to the programmers who make those decisions. It’s one of those “yes, we’re concerned about it, but we can’t deal with it right now” situations that keeps on not getting dealt with for years, until you have a monster class, and now you can do even less about the problem.

The more I program, the more I think that inheritance is a big source of this. In that same engine, I once tried to create a new object that had overlap with characters, but it needed to be flying. And I didn’t want to add logic for flying to the messy character class. It was actually possible to write a new class that used a bunch of the existing logic, because most of the logic actually didn’t live in the character class. Usually the way it worked was “if you want to be able to receive damage, inherit from this base class and override this function.” The new class started off really clean, but because of this inheritance, we immediately started accumulating little bits of functionality for totally unrelated topics in the same class. Damage handling was now in the same class as animation code and effects code. And soon we had another monster. And once you have a monster, it’s easy to justify adding a little bit more logic to it; after all, all of the other logic that you need to talk to is already in the monster class.

The only way I have ever seen this handled well is in an entity component system. An entity component system takes “prefer composition over inheritance” to an extreme: you don’t even have a central class in which you compose all of the functionality. You just add components to a collection, and all components in the collection are part of the same logical object. Just the fact that you don’t have a central object forces you to be a bit more disciplined: if you want to add logic for foot IK, you’ll probably add a new component which can talk to your animation component. You can’t be lazy and say “I’ll just add it to the character class for now, and we can move it out later.” For an entity component system to work, it’s very important that components can easily communicate with their siblings, meaning it has to be easy to get a pointer to a sibling to talk to it. If you still have to coordinate through a central class, that central class tends to accumulate behavior like our character class above.
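To make the idea concrete, here is a minimal C++ sketch of such a system. None of this is from a real engine; the component types and the `get<T>()` sibling lookup are purely illustrative:

```cpp
#include <memory>
#include <string>
#include <typeindex>
#include <typeinfo>
#include <unordered_map>
#include <utility>

struct Entity;

// Every component knows which entity it belongs to, so it can find siblings.
struct Component {
    Entity* owner = nullptr;
    virtual ~Component() = default;
};

// An entity is nothing but a flat collection of components.
struct Entity {
    std::unordered_map<std::type_index, std::unique_ptr<Component>> components;

    template<typename T, typename... Args>
    T& add(Args&&... args) {
        auto component = std::make_unique<T>(std::forward<Args>(args)...);
        component->owner = this;
        T& ref = *component;
        components[std::type_index(typeid(T))] = std::move(component);
        return ref;
    }

    // Sibling lookup: this has to be easy, or people fall back to a god class.
    template<typename T>
    T* get() {
        auto found = components.find(std::type_index(typeid(T)));
        return found == components.end() ? nullptr
                                         : static_cast<T*>(found->second.get());
    }
};

// Hypothetical components, just for illustration.
struct AnimationComponent : Component {
    std::string current_clip = "idle";
};

// Foot IK lives in its own component and talks to its animation sibling.
struct FootIKComponent : Component {
    std::string active_clip() {
        AnimationComponent* anim = owner->get<AnimationComponent>();
        return anim ? anim->current_clip : "";
    }
};
```

There is no character class anywhere: `FootIKComponent` asks for its `AnimationComponent` sibling directly, so new functionality goes into new components instead of into a central monster.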

I personally like the system that the Unity game engine has. (Whenever I bring it up as a model, people complain that Unity is slow. You could make their entity component system very fast; they just can’t, because old code has to keep working, which makes it hard to make things data-oriented or multi-threaded.) One additional thing that engine does well is make it super easy to add glue code between components: it’s very easy to add small scripts that handle the communication between two different components on the same object. I think the ease of adding scripts is one of the reasons why they have few clever components that try to magically figure out the right thing to do based on their siblings. If you want custom behavior, just add a small script.

Talking about lightness, an interesting case is the C++ class std::function. It’s a heavy object, but it allows you to get away from inheritance, and as such it makes it easier to keep code light. For example you can write an update loop that doesn’t require inheriting from a base class. I have seen engines with requirements like “you have to inherit from CGameObject to be in the update loop,” where CGameObject also has other things on it, like a transform matrix. That’s another pitfall where inheritance leads to weight in the long term: The more classes something is inherited by, the more people will want to add functionality to it. You always get cases where there is common functionality between ten derived classes, so why not just add it to the base class? Well because there are another five objects that don’t need the shared functionality. std::function is an escape hatch out of this surprisingly often. This is actually the main reason why I should really learn Rust: I think their “Traits” form of polymorphism looks like it’ll be much better about this, while being lighter weight than std::function.
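Here is a small sketch of the update-loop idea from above: the loop stores `std::function` callbacks, so any object can participate without inheriting from a `CGameObject`-style base class. The types are made up for the example:

```cpp
#include <functional>
#include <utility>
#include <vector>

// An update loop that takes std::function callbacks, so objects can
// participate without inheriting from anything.
struct UpdateLoop {
    std::vector<std::function<void(float)>> updates;

    void add(std::function<void(float)> update) {
        updates.push_back(std::move(update));
    }

    void tick(float dt) {
        for (auto& update : updates)
            update(dt);
    }
};

// A made-up object with no base class at all: no vtable, no inherited
// transform matrix, nothing it didn't ask for.
struct Particle {
    float x = 0.0f;
};
```

Registering is just `loop.add([&p](float dt) { p.x += dt; });` — the `std::function` itself is heavy, but it keeps the weight out of `Particle`.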

Italo Calvino starts by telling an old legend:

Late in life the emperor Charlemagne fell in love with a German girl. The court barons were dismayed to see that their sovereign, overcome by ardent desire and forgetful of royal dignity, was neglecting imperial affairs. When the girl suddenly died, the dignitaries sighed with relief – but only briefly, for Charlemagne’s love did not die with the girl. The emperor had the embalmed body brought to his chamber and refused to leave its side. Archbishop Turpin, alarmed by this morbid passion and suspecting some enchantment, decided to examine the corpse. Hidden beneath the dead tongue he found a gemstone ring. As soon as he took possession of it, Charlemagne hastened to have the corpse buried and directed his love toward the person of the archbishop. To extricate himself from that awkward situation, Turpin threw the ring into Lake Constance. Charlemagne fell in love with the lake and refused to leave its shores.

Calvino goes on to tell how, since it’s such an old legend, there are many different versions of it. Most of them are longer, but somehow the length doesn’t actually make them better. He likes the quick version.

As a game developer, the value of quickness is most obvious when comparing the Blizzard games Warcraft III and Starcraft 2. Quickness alone probably explains why I preferred the story of Warcraft III. To illustrate, watch the introduction of two missions early in Warcraft III. Just watch until the cutscene is over and gameplay starts:

Now I already set you up to expect “quickness,” so maybe you were expecting something shorter. But to make the point, here is someone playing Starcraft 2:

I’m not going to link a second video for Starcraft 2, because this one is already long enough. This is the second level in the game, and the periods between the levels only get longer from here. The person who plays this talks a little bit, but even without that it takes almost seven minutes to get through all the things between missions. And if you don’t talk to everyone between missions, you’re missing out on the story.

When Calvino talks about “quickness,” he doesn’t talk about rushing the story. The old legend he is telling doesn’t feel rushed, even though it’s certainly quick. The Warcraft III introductions don’t feel rushed either. But in Starcraft 2 in comparison it can feel like people are just rambling on and on, and the story suffers.

In general I feel like quickness is a property that games have lost. I got a SNES classic last year, and I had forgotten how quickly those games get to the action. No long tutorials or slow intro missions. The first level is always a good level. And getting to the first level doesn’t take long. Take F-Zero for example:

You’re in the game, racing at high speeds within seconds of turning on the game. I don’t know precisely when we turned away from this, but I feel like it must have been with the advent of 3D gaming. Controls are much more complicated in 3D games, so suddenly you needed tutorials. And maybe an intro level that’s not too hard to allow you to get used to the controls. But often it just ends up feeling like the game doesn’t respect my time.

So what about programming? Quickness matters most for me in iteration times. As an example, let’s take how long it takes to write a unit test. I strongly believe that unit tests should live in the same file as the code that they’re testing. The reason is quickness. I’m not a TDD zealot, but I do write some of my code in a TDD style. When I do, my speed of programming is partially determined by how long it takes me to write and run a test. So you want to avoid adding steps to that. Like having to create a new file. Or having to launch a separate executable. If the environment is set up correctly, I find that coding with tests can be faster than coding without tests, simply because tests can have faster iteration times. Without tests you often have to start the game and load a level where you can test your content. That can add more than a minute to iteration times. I personally like to just scroll to the bottom of a file and add this code:

```cpp
#ifdef TESTS_ENABLED
#include "gtest/gtest.h"

TEST(foo, bar)
{
    ASSERT_EQ(5, foo(bar(10)));
}
#endif
```

This is pretty low-friction for me. I have the ifdef and the include memorized, so I can quickly add tests to any file. But I recently realized that even this may be too much. The video games industry is oddly backwards about testing, so I constantly try to push people to write more tests. Recently, I told someone who was writing some math code “you know, you would be much more productive if you just wrote a small test for this, to iterate more quickly” and he mumbled something about wanting to write more tests but never getting around to it. I realized that he didn’t know how simple it is to write tests. I showed him the above block with an example call to his function, and suddenly he was much more open.

So how could this be even easier? We need at least one include, and we need the ifdef check if we want the tests in the same file as the production code. We need to be able to compile this without the tests. We also need the macros because of C++ complexity reasons. It’s hard to see how to make this any easier. (neither the preprocessor nor templates are powerful enough to make any of this go away) The D programming language shows how to make it much easier. In D you can use the unittest keyword in pretty much any scope to quickly write a test:

```d
unittest
{
    assert(foo(bar(10)) == 5);
}
```

This takes away all excuses. It’s even important that you can use the normal assert macro here, as opposed to the custom ASSERT_EQ macro of googletest. In my C++ example above the other programmer didn’t immediately start writing tests, because his math code was buried deep in the internals of a class. He had to pull it out to make it testable. That wasn’t hard, but it was something he had to do. Just being able to write unittest at any scope in D makes that part much easier. So this matters.

In fact, iteration time is one of those areas where it’s never really fast enough. I use the tup build system because it adds less than a tenth of a second of overhead to a build. Does it really make a difference whether the build system adds a second of overhead or a tenth of a second? Yes, you bet it does. When it comes to iteration times, a quantitative difference quickly turns into a qualitative difference. If a build takes a fraction of a second, I can recompile and reload code every time I save a file. That may not be doable when compiling C++, but it is doable when compiling shaders. Before using tup, I had to hack that workflow by doing a custom build from game code whenever the file watcher detected a change to the file. That’s ugly because you’re going around the build system, and you may be putting your build into an inconsistent state. If the build system overhead is near zero, suddenly you can just run a normal build on every save. And if there are derived build steps from your shaders, those will get compiled correctly, too. The moment you have shaders that reload in a fraction of a second, you start working differently. You experiment a lot more. You tweak numbers more. You investigate small oddities of behavior to find the underlying cause and to get a better understanding of the code. Things that you needed to schedule time for (even if just mentally: “I’ll get to that later”) you suddenly just do.

So is a fraction of a second fast enough for iteration times? I’d like to think that there are more qualitative differences for even faster compiles. Bret Victor’s talk Inventing on Principle comes to mind. Just like his visualization of “here are many possibilities for a jump,” I feel like there should be similar visualizations like “here are many possibilities for this shader.”

As is appropriate for the topic of exactitude, Calvino is very precise in what he means with the word:

For me, exactitude means above all three things:

- a well-defined, well-considered design for the work;
- the evocation of clear, sharp, memorable images;
- a language that is as precise as possible in its choice of words and in its expression of the nuances of thought and imagination.

The value of these things might be self-evident, but there are many, many works for which the above does not apply. I once gave a talk about The Game Outcomes Project, which looked into what made successful games and what made unsuccessful games. And one of the main things they found was that clarity and communication of vision was incredibly important. Of course “clear vision” is one of those things that is hard to get if you don’t already have it. But I think it can serve as a useful filter for ideas.

In programming I value exactitude most when it comes to robustness and reliability. As such, I very much disagree with Postel’s law: “be conservative in what you do, be liberal in what you accept from others.” That law is for internet connections. If somebody sends you a packet that isn’t 100% correct, you should still accept it, but you should only send out data that is 100% correct yourself. I understand why people do this on the internet: if your browser can’t load a website because the website doesn’t send out 100% correct HTML, people won’t blame the website, they will blame the browser. For that reason it’s more an unfortunate practical matter than a law. When it comes to your own data, or your own internal network, accepting bad data is exactly the wrong thing to do.

The big problem with Postel’s law is that it leads to weird semi-standards being established. Yes, you’re not supposed to do X, but in practice it works out. And you get weird edge case behavior because things aren’t properly specified in the “liberal” area of data. There have been security bugs because different software interprets the edge cases differently.

In your own code things get even worse, because now you are responsible for fixing the weird edge case bugs. In my experience, the more layers follow Postel’s law, the more difficult the bugs you end up with. If a designer does something weird in the editor of your game (like, say, accidentally using a naughty string) but the editor allows the designer to save it, and then the compiler allows it, and the loading code of your game allows it, the resulting bug can be really confusing. All you see is bad data in the game. You have to first debug the loading code to figure out if it messed with your data. You then find the edge case that should never be hit, which came from bad data in the compiled file. So you logically conclude that the bad data must have come from the compiler. When you finally figure out what happened in the compiler, you see that a certain pattern in the source format can cause the compiler to write bad data. So how did we get that pattern in the source file? Off we go to the revision control history to figure things out. Was it a merge error? No, it was actually saved out that way from the editor. So we open the file in the editor, and can’t find the problem there. In fact, when we save it out again, the problem goes away, because we hit a weird edge case. So we ask the designer what they were doing, and finally get a clue about what caused the issue.

How do you solve issues like this? Verify data every step along the way. Verify on save in the editor. Verify in the compiler. Verify on load. These edge cases that were caused by earlier edge cases are the worst bugs, in fact they can often be weird one-off bugs that go away by themselves. (as in the above example where the problem goes away if you just save the file again) Be precise about what you accept, and accept nothing else.
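A sketch of what that discipline might look like in code. The `Asset` type and the pipeline stages are hypothetical; the point is that one `validate` function defines what “correct” means, and every stage calls it instead of being liberal in what it accepts:

```cpp
#include <stdexcept>
#include <string>

// Hypothetical asset type for the example.
struct Asset {
    std::string name;
    int health = 0;
};

// The single definition of what "correct" means for this data.
void validate(const Asset& asset) {
    if (asset.name.empty())
        throw std::runtime_error("asset has no name");
    if (asset.health < 0)
        throw std::runtime_error("asset has negative health");
}

// Each stage verifies instead of passing bad data along the pipeline.
Asset save_from_editor(Asset asset) { validate(asset); return asset; }
Asset compile_asset(Asset asset)    { validate(asset); return asset; }
Asset load_into_game(Asset asset)   { validate(asset); return asset; }
```

Now bad data fails loudly at the first stage that sees it, instead of surfacing three layers later as an inexplicable glitch in the game.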

This is also one of the reasons why I prefer statically typed languages. The messier you allow a programmer to be, the more likely it is that you will get really weird edge cases like this. I’m not saying that programmers will be messy all the time. 99% of the time they will behave well and not put in weird edge cases. But then something comes along that is causing a big problem but is really hard to change, so they put in a little weirdness just this once. And if you’re maintaining that software over years, you will eventually accumulate enough of these little weirdnesses that making any large change becomes a giant pain. I’m not saying it’s easy to change large projects in statically typed languages, just that it’s easier compared to dynamically typed languages.

As we’re progressing through the book, the chapters are becoming less and less finished. In Visibility, Calvino talks mostly about the interaction between visual imagination and written words.

[…] the only thing I was certain of was that all of my stories originated with a visual image. One of these images, for example, was that of a man cut in two halves, each of which goes on living independently; another was a boy who climbs up a tree and travels from tree to tree without ever coming back down; yet another was an empty suit of armor that moves and speaks as if someone was inside it.

For me, then, the first step in the conception of a story occurs when an image that comes to mind seems, for whatever reason, charged with meaning, even if I can’t explain that meaning in logical or analytical terms. As soon as the image has become clear enough in my mind, I begin developing it into a story, or rather the images themselves give rise to their own implicit potentialities, to the story they carry within them. Around each image others come into being, creating a field of analogies, symmetries, juxtapositions. At this point my shaping of this material – which is no longer purely visual but conceptual as well – begins to be guided by my intent to give order and sense to the story’s progression. Or, to put it another way, what I’m doing is trying to establish which meanings are compatible with the general design I want to give the story and which are not, always allowing for a certain range of possible alternatives. At the same time, the writing, the verbal rendering, assumes ever-greater importance. I would say that as soon as I begin to put black on white, the written word begins to take over, first in an attempt to match the visual image, then as a cohesive development of the initial stylistic impulse, until little by little it rules the whole field. Then it is the writing that must guide the story toward its most felicitous verbal expression, and all that’s left for the visual imagination is to keep up.

For me as a programmer, I thought “Visibility” was going to be about something else entirely. The word makes me think of software that shows how it works. I don’t do web programming often, but when I do, I still use Knockout as my library of choice. And the main reason is visibility. Knockout shows how it works. Knockout has a simple promise: it will re-evaluate code automatically when data changes. Like an Excel spreadsheet, except for UI code. With that alone, a huge number of state propagation and cache invalidation bugs go away. How does it do that? It has two simple concepts: observable variables and computed variables. Computed variables are like formulas in Excel spreadsheets: they get re-evaluated when one of the input variables changes. But I can write arbitrary code in the computed variables, so how does it know which observables I used? Easy: observables are not normal variables. You have to call a function to access them. So it just has to remember which observables I called that function on. The model makes perfect sense, and I can see how the whole library is built on top of that. I don’t need to know the details because I can imagine how they work myself.
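To show that the model really is that simple, here is a rough sketch of Knockout-style dependency tracking transplanted into C++ (Knockout itself is JavaScript; all the names here are mine). Reading an observable goes through a function call, which is exactly what lets a computed record its dependencies:

```cpp
#include <functional>
#include <utility>
#include <vector>

struct ComputedBase {
    virtual void reevaluate() = 0;
    virtual ~ComputedBase() = default;
};

// The computed value currently being evaluated, if any.
static ComputedBase* current_computed = nullptr;

// Reading an observable goes through operator(), so it can record who
// is reading it. A real implementation would also deduplicate and
// unsubscribe; this is just the core idea.
template<typename T>
struct Observable {
    T value;
    std::vector<ComputedBase*> subscribers;

    T operator()() {
        if (current_computed)
            subscribers.push_back(current_computed);
        return value;
    }

    void operator()(T new_value) {
        value = std::move(new_value);
        // Dependencies are re-recorded during reevaluation, so hand off
        // the current subscriber list before notifying.
        std::vector<ComputedBase*> to_notify;
        to_notify.swap(subscribers);
        for (ComputedBase* computed : to_notify)
            computed->reevaluate();
    }
};

// A computed value evaluates its formula once up front, and again
// whenever one of the observables it read gets written to.
template<typename T>
struct Computed : ComputedBase {
    std::function<T()> formula;
    T value{};

    explicit Computed(std::function<T()> f) : formula(std::move(f)) {
        reevaluate();
    }

    void reevaluate() override {
        ComputedBase* previous = current_computed;
        current_computed = this; // observables read by formula() subscribe us
        value = formula();
        current_computed = previous;
    }

    T operator()() const { return value; }
};
```

Writing `a(5)` re-runs every computed that read `a`, just like editing a cell re-runs the formulas that reference it.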

The last time I tried web-programming I did the tutorials for Vue.js and for React. Vue looks very similar to Knockout, except somehow everything works magically. Normal Javascript variables are suddenly observable. I don’t know how it works. Is it super expensive internally? Does it magically transform my structs? It doesn’t show how it works. React is even worse. The tutorial starts off with several layers of deep magic, including writing html tags in the middle of Javascript code.

Why do I care how it works internally? Who cares if the internals require a huge amount of code and weird magic to work, as long as they work, right? But I’m concerned about the law of leaky abstractions. While doing the React tutorials I got some error messages that looked truly terrifying. I’m concerned that if things go wrong, I won’t be able to fix the issue. Or if things are slow, it will be very tricky to figure out how to make them fast. I don’t want to have to make random changes until things are working better.

Some of this can just be solved with good tutorials. Maybe Vue.js actually works exactly like Knockout, just using automatically created getters and setters. Just explain that in the tutorial, and I’d be happy.

The error messages of React reminded me of C++ templates. Templates were created with visibility in mind: nobody could debug macros, and templates were supposed to be a more easily understandable alternative. What went wrong? When stepping through STL code, you often come across functions that seem to do nothing, like this one:

```cpp
template <class _Ty>
inline _Ty * _Unfancy(_Ty * _Ptr)
{   // do nothing for plain pointers
    return (_Ptr);
}
```

What does this function do? It just returns its input. Why have such a function, and why call it _Unfancy? The answer lies in how you do conditionals in template code. This is actually part of the template equivalent of a switch/case, and this is the “do nothing” default case. Templates are a weird accidental functional programming language, so people often write really weird code like this to do branching logic like switch/case or if/else. Visibility is definitely low here. There is talk of making “if constexpr” more powerful, so that this could be an if/else. The C++ community is oddly resistant to that, though. The other planned thing that will help here is that modules will remove some of the symbol salad, reducing the number of underscores and removing those parens around the result. (those are often defensive coding to protect against insane preprocessor use)
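For comparison, even today’s `if constexpr` (C++17) can already express this particular case as visible branching logic instead of a do-nothing overload. This is just an illustrative sketch, not how the STL actually does it:

```cpp
#include <memory>
#include <type_traits>

// With if constexpr, the "plain pointer" case and the "fancy pointer"
// case are visible branches in one function, instead of an overload
// that only makes sense once you know the whole dispatch scheme.
template<typename Ptr>
auto unfancy(Ptr& ptr) {
    if constexpr (std::is_pointer_v<Ptr>)
        return ptr;                  // plain pointer: do nothing
    else
        return std::addressof(*ptr); // fancy pointer: unwrap to a raw pointer
}
```

You can read the branching logic directly, which is the whole point of visibility.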

Another weird one is how customization points are done in C++ templates. Those also often end up as functions that seem to do nothing. But at compile time, depending on your type, the compiler might have chosen a different function to call that actually does something. Talking about these as problems in visibility makes it obvious what the problem is: If this is a customization point, it should look like a customization point. Not like a function that does nothing. But I shouldn’t be complaining too much. Templates are better than macros, and it looks like things are slowly getting better.

In the final chapter Calvino talks about a certain kind of richness or complexity that is inherent to life, and he calls it multiplicity. The word is used for example in this quote:

He would assert, among other things, that an unforeseen disaster is never the consequence or, if you prefer, the effect of a single factor, of only one cause, but rather is like a whirlwind, a point of cyclonic depression in the consciousness of the world, toward which a whole multiplicity of converging causalities have conspired.

Calvino uses other books to illustrate this as well, like a book that has one chapter for every room in a Parisian apartment complex. Or here is another quote I like:

The best example of these networks that radiate from every object is the episode from chapter 9 of *Quer pasticciaccio brutto de via Merulana*, in which the stolen jewels are recovered. We get accounts of every precious stone – its geological history, its chemical composition, its artistic and historical connections, along with all its potential uses and the associated images they evoke.

In game design, the closest thing I can think of that explores this richness and complexity of everything is a design approach that Jonathan Blow and Marc ten Bosch talked about in this talk:

In that talk they explain a design philosophy in which you focus on one idea, and you explore all of its implications and its richness. You can see this most easily in Jonathan Blow’s game Braid, in which he explores the implications of the ability to control time.

I also think that Nintendo games follow a similar design philosophy. Here is Game Maker’s Toolkit talking about how it applies to Super Mario Odyssey:

And the shrines in Breath of the Wild also allowed the designers to fully explore their gameplay mechanics.

How does this apply to programming? I think certain parts of programming have this richness, this good kind of complexity. The unix shell for example. Many separate programs that do one task, and they do that one task so well that they have explored every single angle of it. But most important is that those programs can talk to each other through a shared interface. This kind of richness is often missing in programming. What is the unix shell equivalent in C++? Maybe the STL containers and algorithms, but they’re not nearly as rich as the unix shell.

I also feel like many concepts aren’t sufficiently explored. I said above that Knockout is following the same state propagation model as Excel spreadsheets: If any of your inputs change, you get updated automatically. That simple model makes a huge number of bugs and complexity go away. Try to open a random file in your codebase, and measure the fraction of code that is responsible for pushing state around: Making sure to recompute X whenever Y changes, making sure that the new value of a variable gets pushed to all the places that use it, and making sure to invalidate all the caches that have duplicated some state for performance reasons. (like how the transform of a game object is usually cached in at least the physics system and the renderer, but usually in many other places as well) I bet you it’s more than half of your code, and in Knockout it’s so nice that all of that goes away. I feel like it should be used for more than just Excel spreadsheets and UI code. I’ve tried to explore it more fully, but it’s slow work that I can only really do in my spare time, where I already have dozens of other projects.

I think programmers are hesitant to do this work because our main problem is complexity. And when I say things like “explore a concept fully” I am also immediately thinking of over-engineering and unnecessary complexity that’s going to come back and bite me. Calvino also talks about how this is a problem in the books that he is referring to: A lot of them are unfinished works. Sometimes the author spent decades on them before he passed away, and the unfinished book was published posthumously. It’s also a problem that Jonathan Blow ran into: The Witness took him seven years to complete, and Marc ten Bosch has been working on Miegakure for something like ten years. In that sense it’s amazing that Nintendo can release these very rich games at a regular schedule. I should look into how they do that…

Calvino hadn’t started on his talk about consistency when he passed away, so we don’t know what he was going to say about it. But Jonathan Blow actually does a great job talking about the value of consistency in game design in the talk that I embedded at the top of this blog post. So I will just tell you to watch that.

In programming, consistency is such a commonly voiced value that I almost don’t have to say anything. The one thing I’ll add to the usual is that I think we’re pretty good about presenting consistency to other programmers (avoid surprises in interfaces etc.) but we’re not that good at presenting consistency to the users of our software. Dan Luu has two blog posts that seem relevant here. Here is one on the huge number of bugs that we expose our users to, and here is one about UI backwards compatibility. Here is an example quote:

Last week, Facebook changed its interface so that my normal sequence of clicks to hide a story saves the story instead of hiding it. Saving is pretty much the opposite of hiding! It’s the opposite both from the perspective of the user and also as a ranking signal to the feed ranker. The really “great” thing about a change like this is that it A/B tests incredibly well if you measure new feature “engagement” by number of clicks because many users will accidentally save a story when they meant to hide it. Earlier this year, twitter did something similar by swapping the location of “moments” and “notifications”.

I honestly don’t know what to do about that. In video games, it feels like our games are now complex enough that lots of developers simply can’t manage the complexity any more. Features just stop working, even if no one removed them intentionally. See for example this video comparing Far Cry 2 to Far Cry 5:

I can tell you exactly how the above happened: some of these features just broke during development and nobody noticed until it was too late. Other features were re-written, maybe when porting the engine to a new console version, or just because somebody wanted to rewrite them. The new version then didn’t have all the features of the old one, either because there was no time, or because the person doing the rewrite didn’t know about all the features that the old version had. Some things were just redone for the sake of redoing them, like the damage animations. Animators always want to redo all the animations, just like artists want to redo all the art, designers want to completely change the design and programmers want to rewrite all the code. Sometimes people just delete old stuff with the intent that somebody will redo it, and it never gets redone. (like the underwater death animation that was “replaced” by a “stumble and fall to the ground” animation that makes no sense underwater) And of course the projects are always under time pressure, and features like “destructible vegetation” get down-prioritized early on after a rewrite of the vegetation system. After all there are several other new features that are high priority, not to mention all the bugs that were caused by the rewrite. So “destructible vegetation” becomes a down-prioritized “nice to have” feature, never mind the fact that the engine used to have it.

How do you solve it? I don’t know. I mean clearly more automated tests would help, but the difficulty lies in convincing people to actually write tests, learn how to write good tests, and to maintain their tests. I once wrote a list of potential improvements that we could do at work, and it had dozens of entries, ranging from “more automated tests” to “more static analysis” and “write postmortems for crashes” and “put a sticker on the programmer’s desk every time they introduce a crash, to see who collects the most” (a form of public shaming), “more strict code reviews” and others. But the problem obviously doesn’t lie in lack of ideas, since we have known most of this for decades.

I have a hunch that whoever solves this will make a lot of money. It does seem to me like Blizzard has much more solid game development practices. I remember watching the development of Starcraft 2, and even early versions of it looked super solid. Nintendo also clearly has this figured out: They used the same Mario engine for many years, and they also used the same Zelda engine for many years. It helped that the Gamecube, Wii and Wii U had very similar hardware.

It’s important to not confuse consistency with stagnation. Developers who ship the same game (or nearly identical games) multiple times will soon die. But I do think that if you have a development process where things don’t break all the time, it would free up a lot of time that we could put into making better games.

Italo Calvino was clearly a genius, and it shows in his choice of topics. Which game designer has ever said that they want to make a game that has “Lightness, Quickness, Exactitude, Visibility, Multiplicity and Consistency”? We strive for much simpler goals like “fun” or “intense” or “scary”. I like Calvino’s choice of topics because they make me think. Thinking about lightness revealed what I actually like when I talk about simplicity. And thinking about quickness revealed why the story in Starcraft 2 had always bothered me. I like the topic of multiplicity most though, perhaps because it leaves me with the most open, intriguing questions. How do you get that kind of richness in a reasonable amount of time? It’s something I’ll spend time investigating, hopefully without falling into an impossible-to-finish mega-project…

Turns out I was wrong. This is a big one. And everyone should be using it. Hash tables should not be prime number sized and they should not use an integer modulo to map hashes into slots. Fibonacci hashing is just better. Yet somehow nobody is using it and lots of big hash tables (including all the big implementations of std::unordered_map) are much slower than they should be because they don’t use Fibonacci Hashing. So let’s figure this out.

First of all, how do we find out what this Fibonacci Hashing is? Rich Geldreich called it “Knuth’s multiplicative method,” but before looking it up in The Art of Computer Programming, I tried googling for it. The top result right now is this page, which is old, with a copyright from 1997. Fibonacci Hashing is not mentioned on Wikipedia. You will find a few more pages mentioning it, mostly from universities that present it in their “introduction to hash tables” material.

From that I thought it’s one of those techniques that they teach in university, but that nobody ends up using because it’s actually more expensive for some reason. There are plenty of those in hash tables: Things that get taught because they’re good in theory, but they’re bad in practice so nobody uses them.

Except somehow, on this one, the wires got crossed. Everyone uses the algorithm that’s unnecessarily slow and leads to more problems, and nobody is using the algorithm that’s faster while at the same time being more robust to problematic patterns. Knuth talked about integer modulo and about Fibonacci hashing, and the takeaway should have been that you should use Fibonacci hashing, but it wasn’t, and everybody uses integer modulo.

Before diving into this, let me just show you the results of a simple benchmark: Looking up items in a hash table:

In this benchmark I’m comparing various unordered_map implementations. I’m measuring their lookup speed when the key is just an integer. On the X-axis is the size of the container, the Y-axis is the time to find one item. To measure that, the benchmark is just spinning in a loop calling find() on this container, and at the end I divide the time that the loop took by the number of iterations in the loop. So on the left hand side, when the table is small enough to fit in cache, lookups are fast. On the right hand side the table is too big to fit in cache and lookups become much slower because we’re getting cache misses for most lookups.

But the main thing I want to draw attention to is the speed of ska::unordered_map, which uses Fibonacci hashing. Otherwise it’s a totally normal implementation of unordered_map: It’s just a vector of linked lists, with every element being stored in a separate heap allocation. On the left hand side, where the table fits in cache, ska::unordered_map can be more than twice as fast as the Dinkumware implementation of std::unordered_map, which is the next fastest implementation. (this is what you get when you use Visual Studio)

So if you use std::unordered_map and look things up in a loop, that loop could be twice as fast if the hash table used Fibonacci hashing instead of integer modulo.

So let me explain how Fibonacci Hashing works. It’s related to the golden ratio, which is related to the Fibonacci numbers, hence the name. One property of the golden ratio is that you can use it to subdivide any range roughly evenly without ever looping back to the starting position. What do I mean by subdividing? For example if you want to divide a circle into 8 sections, you can just make each step around the circle be an angle of 360°/8 = 45°. And after eight steps you’ll be back at the start. And for any number n of steps you want to take, you can just change the angle to be 360°/n. But what if you don’t know ahead of time how many steps you’re going to take? What if the value is determined by something you don’t control? Like maybe you have a picture of a flower, and you want to implement “every time the user clicks the mouse, add a petal to the flower.” In that case you want to use the golden ratio: Make the angle from one petal to the next 360°/φ ≈ 222.5° and you can loop around the circle forever, adding petals, and the next petal will always fit neatly into the biggest gap and you’ll never loop back to your starting position. Vi Hart has a good video about the topic:

(The video is part two of a three-part series, part one is here)

I knew about that trick because it’s useful in procedural content generation: Any time that you want something to look randomly distributed, but you want to be sure that there are no clusters, you should at least try to see if you can use the golden ratio for that. (if that doesn’t work, Halton Sequences are also worth trying before you try random numbers) But somehow it had never occurred to me to use the same trick for hash tables.

So here’s the idea: Let’s say our hash table is 1024 slots large, and we want to map an arbitrarily large hash value into that range. The first thing we do is we map it using the above trick into the full 64 bit range of numbers. So we multiply the incoming hash value with 2^64 / φ ≈ 11400714819323198485. (the number 11400714819323198486 is closer, but we don’t want multiples of two because that would throw away one bit) Multiplying with that number will overflow, but just as we wrapped around the circle in the flower example above, this will wrap around the whole 64 bit range in a nice pattern, giving us an even distribution across the whole range from 0 to 2^64 - 1. To illustrate, let’s just look at the upper three bits. So we’ll do this:

    size_t fibonacci_hash_3_bits(size_t hash) {
        return (hash * 11400714819323198485llu) >> 61;
    }

This will return the upper three bits after doing the multiplication with the magic constant. And we’re looking at just three bits because it’s easy to see how the golden ratio wraparound behaves when we just look at the top three bits. If we pass in some small numbers for the hash value, we get the following results from this:

    fibonacci_hash_3_bits(0) == 0
    fibonacci_hash_3_bits(1) == 4
    fibonacci_hash_3_bits(2) == 1
    fibonacci_hash_3_bits(3) == 6
    fibonacci_hash_3_bits(4) == 3
    fibonacci_hash_3_bits(5) == 0
    fibonacci_hash_3_bits(6) == 5
    fibonacci_hash_3_bits(7) == 2
    fibonacci_hash_3_bits(8) == 7
    fibonacci_hash_3_bits(9) == 4
    fibonacci_hash_3_bits(10) == 1
    fibonacci_hash_3_bits(11) == 6
    fibonacci_hash_3_bits(12) == 3
    fibonacci_hash_3_bits(13) == 0
    fibonacci_hash_3_bits(14) == 5
    fibonacci_hash_3_bits(15) == 2
    fibonacci_hash_3_bits(16) == 7

This gives a pretty even distribution: The number 0 comes up three times, all other numbers come up twice. And every number is far removed from the previous and the next number. If we increase the input by one, the output jumps around quite a bit. So this is starting to look like a good hash function. And also a good way to map a number from a larger range into the range from 0 to 7.

In fact we already have the whole algorithm right here. All we have to do to get an arbitrary power of two range is to change the shift amount. So if my hash table is size 1024, then instead of just looking at the top 3 bits I want to look at the top 10 bits. So I shift by 54 instead of 61. Easy enough.

Now if you actually run a full hash function analysis on this, you find that it doesn’t make for a great hash function. It’s not terrible, but you will quickly find patterns. But if we make a hash table with a STL-style interface, we don’t control the hash function anyway. The hash function is being provided by the user. So we will just use Fibonacci hashing to map the result of the hash function into the range that we want.

So why is integer modulo bad anyways? Two reasons: 1. It’s slow. 2. It can be real stupid about patterns in the input data. So first of all how slow is integer modulo? If you’re just doing the straightforward implementation like this:

    size_t hash_to_slot(size_t hash, size_t num_slots) {
        return hash % num_slots;
    }

Then this is real slow. It takes roughly 9 nanoseconds on my machine. Which, if the hash table is in cache, is about five times longer than the rest of the lookup takes. If you get cache misses then those dominate, but it’s not good that this integer modulo is making our lookups several times slower when the table is in cache. Still the GCC, LLVM and boost implementations of unordered_map use this code to map the hash value to a slot in the table. And they are really slow because of this. The Dinkumware implementation is a little bit smarter: It takes advantage of the fact that when the table is sized to be a power of two, you can do an integer modulo by using a binary and:

    size_t hash_to_slot(size_t hash, size_t num_slots_minus_one) {
        return hash & num_slots_minus_one;
    }

Which takes roughly 0 nanoseconds on my machine. Since num_slots is a power of two, this just chops off all the upper bits and only keeps the lower bits. So in order to use this you have to be certain that all the important information is in the lower bits. Dinkumware ensures this by using a more complicated hash function than the other implementations use: For integers it uses an FNV1 hash. It’s much faster than an integer modulo, but it still makes your hash table twice as slow as it could be since FNV1 is expensive. And there is a bigger problem: If you provide your own hash function because you want to insert a custom type into the hash table, you have to know about this implementation detail.

We have been bitten by that implementation detail several times at work. For example we had a custom ID type that’s just a wrapper around a 64 bit integer which is composed from several sources of information. And it just so happens that that ID type has really important information in the upper bits. It took surprisingly long until someone noticed that we had a slow hash table in our codebase that could literally be made a hundred times faster just by changing the order of the bits in the hash function, because the integer modulo was chopping off the upper bits.

Other tables, like google::dense_hash_map, also use a power of two size to get the fast integer modulo, but it doesn’t provide its own implementation of std::hash&lt;int&gt; (because it can’t) so you have to be real careful about your upper bits when using dense_hash_map.

Talking about google::dense_hash_map, integer modulo brings even more problems with it for open addressing tables. Because if you store all your data in one array, patterns in the input data suddenly start to matter more. For example google::dense_hash_map gets really, really slow if you ever insert a lot of sequential numbers. Because all those sequential numbers get assigned slots right next to each other, and if you’re then trying to look up a key that’s not in the table, you have to probe through a lot of densely occupied slots before you find your first empty slot. You will never notice this if you only look up keys that are actually in the map, but unsuccessful lookups can be dozens of times slower than they should be.

Despite these flaws, all of the fastest hash table implementations use the “binary and” approach to assign a hash value to a slot. And then you usually try to compensate for the problems by using a more complicated hash function, like FNV1 in the Dinkumware implementation.

Fibonacci hashing solves both of these problems. 1. It’s really fast. It’s an integer multiplication followed by a shift. It takes roughly 1.5 nanoseconds on my machine, which is fast enough that it’s getting real hard to measure. 2. It mixes up input patterns. It’s like you’re getting a second hashing step for free after the first hash function finishes. If the first hash function is actually just the identity function (as it should be for integers) then this gives you at least a little bit of mixing that you wouldn’t otherwise get.

But really it’s better because it’s faster. When I worked on hash tables I was always frustrated by how much time we were spending on the simple problem of “map a large number to a small number.” It’s literally the slowest operation in the hash table. (outside of cache misses of course, but let’s pretend you’re doing several lookups in a row and the table is cached) And the only alternative was the “power of two binary and” version which discards bits from the hash function and can lead to all kinds of problems. So your options are either slow and safe, or fast and losing bits and getting potentially many hash collisions if you’re ever not careful. And everybody had this problem. I googled a lot for this thinking “surely somebody must have a good method for bringing a large number into a small range” but everybody was either doing slow or bad things. For example here is an approach (called “fastrange”) that almost re-invents Fibonacci hashing, but it exaggerates patterns where Fibonacci hashing breaks up patterns. It’s the same speed as Fibonacci hashing, but when I’ve tried to use it, it never worked for me because I would suddenly find patterns in my hash function that I wasn’t even aware of. (and with fastrange your subtle patterns suddenly get exaggerated into huge problems) Despite its problems it is being used in Tensorflow, because everybody is desperate for a faster solution to this problem of mapping a large number into a small range.

So why is nobody using Fibonacci hashing? That’s a tricky question, because there is so little information about Fibonacci hashing on the Internet, but I think it has to do with a historical misunderstanding. In The Art of Computer Programming, Knuth introduces three hash functions to use for hash tables:

- Integer Modulo
- Fibonacci Hashing
- Something related to CRC hashes

The inclusion of the integer modulo in this list is a bit weird from today’s perspective because it’s not much of a hash function. It just maps from a larger range into a smaller range, and doesn’t otherwise do anything. Fibonacci hashing is actually a hash function, not the greatest hash function, but it’s a good introduction. And the third one is too complicated for me to understand. It’s something about coming up with good coefficients for a CRC hash that has certain properties about avoiding collisions in hash tables. Probably very clever, but somebody else has to figure that one out.

So what’s happening here is that Knuth uses the term “hash function” differently than we use it today. Today the steps in a hash table are something like this:

1. Hash the key
2. Map the hash value to a slot
3. Compare the item in the slot
4. If it’s not the right item, repeat step 3 with a different item until the right one is found or some end condition is met

We use the term “hash function” to refer to step 1. But Knuth uses the term “hash function” to refer to something that does both step 1 and step 2. So when he refers to a hash function, he means something that both hashes the incoming key, and assigns it to a slot in the table. So if the table is only 1024 items large, the hash function can only return a value from 0 to 1023. This explains why “integer modulo” is a hash function for Knuth: It doesn’t do anything in step 1, but it does work well for step 2. So if those two steps were just one step, then integer modulo does a good job at that one step since it does a good job at our step 2. But when we take it apart like that, we’ll see that Fibonacci Hashing is an improvement compared to integer modulo in both steps. And since we’re only using it for step 2, it allows us to use a faster implementation for step 1 because the hash function gets some help from the additional mixing that Fibonacci hashing does.

But this difference in terms, where Knuth uses “hash function” to mean something different than “hash function” means for std::unordered_map, explains to me why nobody is using Fibonacci hashing. When judged as a “hash function” in today’s terms, it’s not that great.

After I found that Fibonacci hashing is not mentioned anywhere, I did more googling and was more successful searching for “multiplicative hashing.” Fibonacci hashing is just a simple multiplicative hash with a well-chosen magic number. But the language that I found describing multiplicative hashing explains why nobody is using this. For example Wikipedia has this to say about multiplicative hashing:

Multiplicative hashing is a simple type of hash function often used by teachers introducing students to hash tables. Multiplicative hash functions are simple and fast, but have higher collision rates in hash tables than more sophisticated hash functions.

So just from that, I certainly don’t feel encouraged to check out what this “multiplicative hashing” is. Or to get a feeling for how teachers introduce this, here is MIT instructor Erik Demaine (whose videos I very much recommend) introducing hash functions, and he says this:

I’m going to give you three hash functions. Two of which are, let’s say common practice, and the third of which is actually theoretically good. So the first two are not good theoretically, you can prove that they’re bad, but at least they give you some flavor.

Then he talks about integer modulo, multiplicative hashing, and a combination of the two. He doesn’t mention the Fibonacci hashing version of multiplicative hashing, and the introduction probably wouldn’t inspire people to go seek out more information about it.

So I think people just learn that multiplicative hashing is not a good hash function, and they never learn that multiplicative hashing is a great way to map large values into a small range.

Of course it could also be that I missed some unknown big downside to Fibonacci hashing and that there is a real good reason why nobody is using this, but I didn’t find anything like that. But it could be that I didn’t find anything bad about Fibonacci hashing simply because it’s hard to find anything at all about Fibonacci hashing, so let’s do our own analysis:

So I have to confess that I don’t know much about analyzing hash functions. It seems like the best test is to see how close a hash function gets to the strict avalanche criterion which “is satisfied if, whenever a single input bit is changed, each of the output bits changes with a 50% probability.”

To measure that I wrote a small program that takes a hash value H, and runs it through Fibonacci hashing to get a slot in the hash table S. Then I change a single bit in H, giving me H′, and after I run that through Fibonacci hashing I get a slot S′. Then I measure, depending on which bit I changed in H, which bits are likely to change in S′ compared to S and which bits are unlikely to change.

I then run that same test every time after doubling the hash table in size, because with different size hash tables there are more bits in the output: If the hash table only has four slots, there are only two bits in the output. If the hash table has 1024 slots, there are ten bits in the output. Finally I color code the result and plot the whole thing as a picture that looks like this:

Let me explain this picture. Each row of pixels represents one of the 64 bits of the input hash. The bottom-most row is the first bit, the topmost row is the 64th bit. Each column represents one bit in the output. The first two columns are the output bits for a table of size 4, the next three columns are the output bits for a table of size 8 etc. until the last 23 columns are the output bits for a table of size eight million. The color coding means this:

- A black pixel indicates that when the input bit for that row changes, the output bit for that column has a 50% chance of changing. (this is ideal)
- A blue pixel means that when the input bit changes, the output bit has a 100% chance of changing.
- A red pixel means that when the input bit changes, the output bit has a 0% chance of changing.

For a really good hash function the entire picture would be black. So Fibonacci hashing is not a really good hash function.

The worst pattern we can see is at the top of the picture: The last bit of the input hash (the top row in the picture) can always only affect the last bit of the output slot in the table. (the last column of each section) So if the table has 1024 slots, the last bit of the input hash can only determine the bit in the output slot for the number 512. It will never change any other bit in the output. Lower bits in the input can affect more bits in the output, so there is more mixing going on for those.

Is it bad that the last bit in the input can only affect one bit in the output? It would be bad if we used this as a hash function, but it’s not necessarily bad if we just use this to map from a large range into a small range. Since each row has at least one blue or black pixel in it, we can be certain that we don’t lose information, since every bit from the input will be used. What would be bad for mapping from a large range into a small range is if we had a row or a column that has only red pixels in it.

Let’s also look at what this would look like for integer modulo, starting with integer modulo using prime numbers:

This one has more randomness at the top, but a clearer pattern at the bottom. All that red means that the first few bits in the input hash can only determine the first few bits in the output hash. Which makes sense for integer modulo. A small number modulo a large number will never result in a large number, so a change to a small number can never affect the later bits.

This picture is still “good” for mapping from a large range into a small range because we have that diagonal line of bright blue pixels in each block. To show a bad function, here is integer modulo with a power of two size:

This one is obviously bad: The upper bits of the hash value have completely red rows, because they will simply get chopped off. Only the lower bits of the input have any effect, and they can only affect their own bits in the output. This picture right here shows why using a power of two size requires that you are careful with your choice of hash function for the hash table: If those red rows represent important bits, you will simply lose them.

Finally let’s also look at the “fastrange” algorithm that I briefly mentioned above. For power of two sizes it looks really bad, so let me show you what it looks like for prime sizes:

What we see here is that fastrange throws away the lower bits of the input range. It only uses the upper bits. I had used it before and I had noticed that a change in the lower bits doesn’t seem to make much of a difference, but I had never realized that it just completely throws the lower bits away. This picture totally explains why I had so many problems with fastrange. Fastrange is a bad function to map from a large range into a small range because it’s throwing away the lower bits.

Going back to Fibonacci hashing, there is actually one simple change you can make to improve the bad pattern for the top bits: Shift the top bits down and xor them once. So the code changes to this:

    size_t index_for_hash(size_t hash) {
        hash ^= hash >> shift_amount;
        return (11400714819323198485llu * hash) >> shift_amount;
    }

It’s almost looking more like a proper hash function, isn’t it? This makes the function two cycles slower, but it gives us the following picture:

This looks a bit nicer, with the problematic pattern at the top gone. (and we’re seeing more black pixels now which is the ideal for a hash function) I’m not using this though because we don’t really need a good hash function, we need a good function to map from a large range into a small range. And this is on the critical path for the hash table, before we can even do the first comparison. Any cycle added here makes the whole line in the graph above move up.

So I keep on saying that we need a good function to map from a large range into a small range, but I haven’t defined what “good” means there. I don’t know of a proper test like the avalanche analysis for hash functions, but my first attempt at a definition of “good” would be that every value in the smaller range is equally likely to occur. That test is very easy to fulfill though: all of the methods (including fastrange) fulfill that criterion. So how about we pick a sequence of values in the input range and check if every value in the output is equally likely. I had given the examples for numbers 0 to 16 above. We could also do multiples of 8 or all powers of two or all prime numbers or the Fibonacci numbers. Or let’s just try as many sequences as possible until we figure out the behavior of the function.

Looking at the above list we see that there might be a problematic pattern with multiples of 4: fibonacci_hash_3_bits(4) returned 3, fibonacci_hash_3_bits(8) returned 7, fibonacci_hash_3_bits(12) returned 3 again and fibonacci_hash_3_bits(16) returned 7 again. Let’s see how this develops if we print the multiples of four from 0 to 64:

Here are the results:

    0 -> 0
    4 -> 3
    8 -> 7
    12 -> 3
    16 -> 7
    20 -> 2
    24 -> 6
    28 -> 2
    32 -> 6
    36 -> 1
    40 -> 5
    44 -> 1
    48 -> 5
    52 -> 1
    56 -> 4
    60 -> 0
    64 -> 4

Doesn’t look too bad actually: Every number shows up twice, except the number 1 shows up three times. What about multiples of eight?

    0 -> 0
    8 -> 7
    16 -> 7
    24 -> 6
    32 -> 6
    40 -> 5
    48 -> 5
    56 -> 4
    64 -> 4
    72 -> 3
    80 -> 3
    88 -> 3
    96 -> 2
    104 -> 2
    112 -> 1
    120 -> 1
    128 -> 0

Once again doesn’t look too bad, but we are definitely getting more repeated numbers. So how about multiples of sixteen?

    0 -> 0
    16 -> 7
    32 -> 6
    48 -> 5
    64 -> 4
    80 -> 3
    96 -> 2
    112 -> 1
    128 -> 0
    144 -> 7
    160 -> 7
    176 -> 6
    192 -> 5
    208 -> 4
    224 -> 3
    240 -> 2
    256 -> 1

This looks a bit better again, and if we were to look at multiples of 32 it would look better still. The reason why the number 8 was starting to look problematic was not because it’s a power of two. It was starting to look problematic because it is a Fibonacci number. If we look at later Fibonacci numbers, we see more problematic patterns. For example here are multiples of 34:

0 -> 0

34 -> 0

68 -> 0

102 -> 0

136 -> 0

170 -> 0

204 -> 0

238 -> 0

272 -> 0

306 -> 0

340 -> 1

374 -> 1

408 -> 1

442 -> 1

476 -> 1

510 -> 1

544 -> 1

That’s looking bad. And later Fibonacci numbers will only look worse. But then again how often are you going to insert multiples of 34 into a hash table? In fact if you had to pick a group of numbers that’s going to give you problems, the Fibonacci numbers are not the worst choice because they don’t come up that often naturally. As a reminder, here are the first couple Fibonacci numbers: 1, 1, 2, 3, 5, 8, 13, 21, 34, 55, 89, 144, 233, 377, 610, 987, 1597, 2584… The first couple numbers don’t give us bad patterns in the output, but anything bigger than 13 does. And most of those are pretty harmless: I can’t think of any case that would give out multiples of those numbers. 144 bothers me a little bit because it’s a multiple of 8 and you might have a struct of that size, but even then your pointers will just be eight byte aligned, so you’d have to get unlucky for all your pointers to be multiples of 144.

But really what you do here is that you identify the bad pattern and you tell your users “if you ever hit this bad pattern, provide a custom hash function to the hash table that fixes it.” I mean people are happy to use integer modulo with powers of two, and for that it’s ridiculously easy to find bad patterns: Normal pointers are a bad pattern for that. Since it’s harder to come up with use cases that spit out lots of multiples of Fibonacci numbers, I’m fine with having “multiples of Fibonacci numbers” as bad patterns.

So why are Fibonacci numbers a bad pattern for Fibonacci hashing anyways? It’s not obvious if we just have the magic number multiplication and the bit shift. First of all we have to remember that the magic constant came from dividing by the golden ratio: 2^64/φ = 11400714819323198485. And then since we are truncating the result of the multiplication before we shift it, there is actually a hidden modulo by 2^64 in there. So whenever we are hashing a number x, the slot is actually determined by this:

(x * 2^64/φ) mod 2^64

I’m leaving out the shift at the end because that part doesn’t matter for figuring out why Fibonacci numbers are giving us problems. In the example of stepping around a circle (from the Vi Hart video above) the equation would look like this:

(x * 360/φ) mod 360

This would give us an angle from 0 to 360. These functions are obviously similar: we just replaced 2^64 with 360. So while we’re in math-land with infinite precision, we might as well make the function return something in the range from 0 to 1, and multiply the constant (2^64 or 360) back in afterwards:

fract(x/φ)

Where fract(x) returns the fractional part of a number. So fract(5.5) = 0.5. In this last formulation it’s easy to find out why Fibonacci numbers give us problems. Let’s try putting in a few Fibonacci numbers:

fract(8/φ) ≈ fract(4.9443) ≈ 0.9443

fract(34/φ) ≈ fract(21.0132) ≈ 0.0132

fract(144/φ) ≈ fract(88.9969) ≈ 0.9969

fract(610/φ) ≈ fract(377.0007) ≈ 0.0007

What we see here is that if we divide a Fibonacci number by the golden ratio, we just about get the previous Fibonacci number. There is (almost) no fractional part, so we always end up near 0. So even if we multiply the full range of 2^64 back in, we still get (nearly) 0. But for smaller Fibonacci numbers there is some imprecision, because the Fibonacci sequence is just an integer approximation of golden ratio growth. That approximation gets more exact the further along we get into the sequence, but for the number 8 it’s not that exact. That’s why 8 was not a problem, 34 started to look problematic, and 144 is going to be real bad.

Except that when we talk about badness, we also have to consider the size of the hash table. It’s really easy to find bad patterns when the table only has eight slots. If the table is bigger and has, say 64 slots, suddenly multiples of 34 don’t look as bad:

0 -> 0

34 -> 0

68 -> 1

102 -> 2

136 -> 3

170 -> 4

204 -> 5

238 -> 5

272 -> 6

306 -> 7

340 -> 8

374 -> 9

408 -> 10

442 -> 10

476 -> 11

510 -> 12

544 -> 13

And if the table has 1024 slots we get all the multiples nicely spread out:

0 -> 0

34 -> 13

68 -> 26

102 -> 40

136 -> 53

170 -> 67

204 -> 80

238 -> 94

272 -> 107

306 -> 121

340 -> 134

374 -> 148

408 -> 161

442 -> 175

476 -> 188

510 -> 202

544 -> 215

At size 1024 even the multiples of 144 don’t look scary any more because they’re starting to be spread out now:

0 -> 0

144 -> 1020

288 -> 1017

432 -> 1014

576 -> 1011

720 -> 1008

864 -> 1004

1008 -> 1001

1152 -> 998

So the bad pattern of multiples of Fibonacci numbers goes away with bigger hash tables. Because Fibonacci hashing spreads out the numbers, and the bigger the table is, the better it gets at that. This doesn’t help you if your hash table is small, or if you need to insert multiples of a larger Fibonacci number, but it does give me confidence that this “bad pattern” is something we can live with.

So I am OK with living with the bad pattern of Fibonacci hashing. It’s less bad than making the hash table a power of two size. It can be slightly more bad than using prime number sizes, as long as your prime numbers are well chosen. But I still think that on average Fibonacci hashing is better than prime number sized integer modulo, because Fibonacci hashing mixes up sequential numbers. So it fixes a real problem I have run into in the past while introducing a theoretical problem that I am struggling to find real examples for. I think that’s a good trade.

Also prime number integer modulo can have problems if you choose bad prime numbers. For example boost::unordered_map can choose size 196613, which is 0b110000000000000101 in binary, which is a pretty round number in the same way that 15000005 is a pretty round number in decimal. Since this prime number is “too round of a number” this causes lots of hash collisions in one of my benchmarks, and I didn’t set that benchmark up to find bad cases like this. It was totally accidental and took lots of debugging to figure out why boost::unordered_map does so badly in that benchmark. (the benchmark in question was set up to find problems with sequential numbers) But I won’t go into that and will just say that while prime numbers give fewer problematic patterns than Fibonacci hashing, you still have to choose them well to avoid introducing hash collisions.

Fibonacci hashing may not be the best hash function, but I think it’s the best way to map from a large range of numbers into a small range of numbers. And we are only using it for that. When used only for that part of the hash table, we have to compare it against two existing approaches: Integer modulo with prime numbers and Integer modulo with power of two sizes. It’s almost as fast as the power of two size, but it introduces far fewer problems because it doesn’t discard any bits. It’s much faster than the prime number size, and it also gives us the bonus of breaking up sequential numbers, which can be a big benefit for open addressing hash tables. It does introduce a new problem of having problems with multiples of large Fibonacci numbers in small hash tables, but I think those problems can be solved by using a custom hash function when you encounter them. Experience will tell how often we will have to use this.

All of my hash tables now use Fibonacci hashing by default. For my flat_hash_map the property of breaking up sequential numbers is particularly important because I have had real problems caused by sequential numbers. For the others it’s just a faster default. It might almost make the option to use the power of two integer modulo unnecessary.

It’s surprising that the world forgot about this optimization and that we’re all using prime number sized hash tables instead. (or use Dinkumware’s solution, which uses a power of two integer modulo, but spends more time on the hash function to make up for the problems of the power of two integer modulo) Thanks to Rich Geldreich for writing a hash table that uses this optimization and for mentioning it in my comments. This is an interesting example of academia having had a solution to a real problem in existing hash tables without realizing it. The most likely reason for that is that it’s not well known how big the problem of “mapping a large number into a small range” is and how much time an integer modulo takes.

For future work it might be worth looking into Knuth’s third hash function: the one that’s related to CRC hashes. It seems to be a way to construct a good CRC hash function if you need an n-bit output for a hash table. But it was too complicated for me to look into, so I’ll leave it as an exercise for the reader to find out whether that one is worth using.

Finally here is the link to my implementation of unordered_map. My other two hash tables are also there: flat_hash_map has very fast lookups and bytell_hash_map is also very fast but was designed more to save memory compared to flat_hash_map.

The main benefit of Google’s hash table over mine was that Google’s has less memory overhead: It has a higher max_load_factor (meaning how full the table can get before it grows to a bigger array) and it has only 1 byte overhead per entry, where the overhead of my table depended on the alignment of your data. (if your data is 8 byte aligned, you’ll have 8 bytes overhead)

So I spent months working on that conference talk, trying to find something that would be a good response to Google’s hash table. Surprisingly enough I ended up with a chaining hash table that is almost as fast as my hash table from last year, while having even less memory overhead than Google’s hash table. It also has the really nice property of stable performance: every hash table has some performance pitfalls, but this one has fewer than most and will cause problems less often than others will. That makes it a hash table that’s really easy to recommend.

The main trick of my fastest hash table was that it relied on this upper bound for the number of lookups. That allowed me to write a really tight inner loop. However when I brought that into work and told other people to use it, I quickly ran into a problem: When people gave the hash table a bad hash function, the hash table would often hit the upper bound and would often have to re-allocate, wasting lots of memory.

Writing a good hash function is a really, really tricky undertaking. It actually depends on the specific hash table that you’re writing for. For example if you want to write a hash function for a std::pair<int,int>, then you would probably want to write a different hash function for std::unordered_map than you would use for ska::flat_hash_map, and that one would be different from what you would use for google::dense_hash_map which again would be different from google::flat_hash_map. You could come up with one hash function that works for everything, but it would be unnecessarily slow. The easiest hash function to write for std::pair<int,int> would probably be this:

size_t hash_pair(const std::pair<int, int> & v)
{
    return (size_t(uint32_t(v.first)) << 32) | size_t(uint32_t(v.second));
}

So since we have two 32 bit ints, and have to return one 64 bit int, we just put the first int in the upper 32 bits of the result, and the second int in the lower 32 bits of the result.

Having just done a huge investigation into hash tables for my talk about hash tables, here's what I would tell you about this hash function: It would work great for the GCC version and the Clang version of std::unordered_map, it would work terribly for the Visual Studio version of std::unordered_map, it would cause ska::flat_hash_map to re-allocate unnecessarily, but not by much, and it would be terrible for google::dense_hash_map.

What's wrong with it? A few things: Half the information is in the upper 32 bits. The Visual Studio implementation of std::unordered_map and google::dense_hash_map use a power of two size for the hash table, meaning they chop off the upper bits. So you just lost half of your information. Oops. ska::flat_hash_map however would run into problems if the v.second member has sequential integers in it. Meaning for example if it just counts up from 0. In that case you get long sequential runs, which can sometimes cause problems in ska::flat_hash_map. (usually they don't, but sometimes they do and then the table will re-allocate a lot and waste memory)

The best way to fix this properly is to use a real hash function. FNV-1 is an easy choice here and it would make the hash work well for all hash tables. Except that using FNV-1 will make all your find() calls more than twice as slow, because a real hash function takes time to compute…

So writing a good hash function is really tricky and it's probably the easiest way to mess up your performance. When I said that my new hash table has stable performance, among other things I meant that it's robust against hash functions like this one. As long as your hash function isn't discarding bits, it'll probably be OK for my hash table.

The table is called ska::bytell_hash_map, and it’s a chaining hash table. But it’s a chaining hash table in a flat array. Which means it has all the same memory benefits of open addressing hash tables: Few cache misses and no need to do an allocation on every insert. Turns out if you’re really careful about your memory, chaining hash tables can be really fast.

The name “bytell” stands for “byte linked list” which comes from the idea that I implemented a linked list with only 1 byte overhead per entry. So instead of using full pointers to create a linked list, I’m using 1 byte offsets to indicate jumps.

I won’t go into more detail here, mainly because I’m a little bit burned out on this hash table right now. I just spent literally months working on hash tables for this conference talk, and a good blog post about this would take me more months. (my blog post about the fastest hash table last year definitely took more than a month of free time) So what I’ll do is I’ll link to the talk once it’s online (the first C++Now talks were uploaded last week, so it shouldn’t be too long for the talk to be available) and otherwise keep the blog post short.

So for now here are two graphs that show the performance of this hash table. First for successful lookups (meaning looking up an item that’s in the table):

This is the graph for a benchmark that’s spinning in a loop, looking up random items in the table. On the left side of the graph the table is small and fits in cache, on the right side the table is large and doesn’t fit in cache. In this graph we mostly just see that std::unordered_map is slow (this is the GCC version of std::unordered_map) so let me remove that:

This one I’ll talk about a little bit. The hash tables I’m comparing here are google::dense_hash_map, ska::flat_hash_map (my fastest table from last year), bytell_hash_map (my new one from this blog post) and google_flat16_hash_map. This last one is my implementation of Google’s new hash table. Google hasn’t open-sourced their hash table yet, so I had to implement their hash table myself. I’m 95% sure that I got their performance right.

The main thing I want to point out is that my new hash table is almost as fast as ska::flat_hash_map. But this new hash table uses far less memory: It has only 1 byte overhead per entry (ska::flat_hash_map has 4 byte overhead because ints are 4 byte aligned) and it has a max_load_factor of 0.9375, where ska::flat_hash_map has a max_load_factor of 0.5. Meaning ska::flat_hash_map re-allocates when it’s half full, and the new hash table only reallocates when it’s almost full. So we get nearly the same performance while using less memory.

Here we can also see the second thing that I meant with more stable performance: This new hash table is much more robust to higher max load factors. If I had cranked up the max_load_factor of flat_hash_map this high, it would be running much slower. So stable performance leads to memory savings because we can let the table get more full before it has to grow the internal array.

Otherwise I’d just like to point out that this new table easily beats Google’s hash tables both on the left, when the table is in cache and instructions matter, and on the right when cache performance matters.

The second graph I’m going to show you is for unsuccessful lookups. This time I’m going to skip the step of showing you unordered_map:

In unsuccessful lookups (looking up an item that’s not in the container) we see that Google’s new hash table really shines. My new hash table also does pretty well here, beating ska::flat_hash_map. It doesn’t do as well as Google’s. That’s probably OK though, for two reasons: 1. This hash table does well in both benchmarks, even if it isn’t the best in either. 2. Google’s hash table actually becomes kinda slow when it’s really full (the spikiness in the graph just before the table re-allocates), so you have to always watch out for that. Bytell_hash_map however has less variation in its performance.

I will end the discussion here because I don’t have the mental energy to do a full discussion like I did last time. I need a rest from this topic after just having spent lots of energy on the talk. But I ran a lot more benchmarks than this, and this thing usually does pretty well. And sometimes all you want is a hash table that’s an easy, safe choice that people can’t mess up too badly which is still really fast.

I’ve added bytell_hash_map to my github repository. Check it out.

This seems to be a fundamental law of economics. It’s not something we have constructed. It’s even true in primitive societies: if there are two families, one owning two cows and one owning ten, the family with ten cows doesn’t just make five times as much money as the family with two cows, it makes more than that. That’s because it can more easily survive bad times (like a cow getting sick), and it can invest in better tools to take care of cows, and those tools pay off more (like fences or a cow shed).

The book traces that out with lots of numbers: Bill Gates keeps on getting richer even though he hasn’t worked in years. It was very hard for anyone else to overtake him as the richest person in the world. Also the richest universities make more interest on their endowments than less rich universities. Why? Because they can pay better people to manage those endowments.

This is usually put as the most important lesson of the book (the famous r > g inequality) but this one is only important if you also state that people who have more money make more money.

What does it mean if return on capital is higher than economic growth? It means if you rent out a house, you will probably make a higher percentage return on that (say 5%) than the economy grows (say 2%). This is not just true for houses, but for many forms of capital: Factories, bonds, stocks, farms, you name it. The return on all of those is different, but it’s usually bigger than the growth of the economy. And once again this seems to be a fundamental law. It’s always been true throughout human history (with very brief exceptions).

So if the economy only grows by 2% a year and people who own capital make 5% (or more) a year, what does that result in? Well let’s say the whole economy has a value of 10000. And after one year it has grown to a value of 10200 (2% growth). And let’s say the owner of your apartment has a total wealth of 100, and after one year his wealth has grown to 105 (5% growth) then he now has a bigger part of the total wealth than he had the year before. He started off with 1% of the total economic pie, and after one year he had 1.03% of the total economic pie. What does that do in the long term?

We live in a very unusual time in history, because inequality isn’t that high right now. Inequality is getting bigger, but it’s still not where it has been historically.

Historically what happened is that you had a few very rich people (kings, nobles) but 90% of the population was living in complete poverty. They owned basically nothing and barely survived. The middle class was tiny (lesser nobles, merchants).

This third point follows directly from the first two points. Since those two points seem to be fundamental (they’ve always been true, through all of human history) this third point is also a fundamental thing.

So if you’re ever wondering how much worse inequality can get: It is going to get a whole lot worse. Right now we don’t have anyone who is as important as kings used to be. Until that happens, inequality is not as bad as it usually is.

I said that we live in an unusual time in that inequality is fairly low. How did we get here? Four things: the Great Depression, the two World Wars, and the New Deal. The Great Depression and the two World Wars destroyed a lot of wealth. The high taxes of the New Deal prevented the rich from taking off again.

Since then we have reduced taxes on the rich and had a long period of peace and stability. So inequality is going up again.

It’s hard for us to imagine what life was like before modern technology. For example when David Foster Wallace talks about Dostoyevsky novels, he complains about how the novels are made harder to understand because the society was so different, and among other things he says this:

Obscure military ranks and bureaucratic hierarchies abound; plus there are rigid and totally weird class distinctions that are hard to keep straight and understand the implications of, especially because the economic realities of old Russian society are so strange (as in, e.g., the way even a destitute “former student” like Raskolnikov or an unemployed bureaucrat like the Underground Man can somehow afford to have servants)

(quote from David Foster Wallace’s “Consider the Lobster” page 263)

Capital in the Twenty-First Century was the first book I read that gave an explanation for this weirdness. The book explains how in former times, you were considered a poor person if you made five times the average income. To be considered middle class you had to make 20 to 30 times the average income. Why? Because to do anything you needed servants. Everything was hand-made and everything had to be done by hand. Piketty uses nineteenth century novelists to illustrate this point:

one can read between the lines an argument that without such inequality it would have been impossible for a very small elite to concern themselves with something other than subsistence: extreme inequality is almost a condition of civilization.

In particular, Jane Austen minutely describes daily life in the early nineteenth century: she tells us what it cost to eat, to buy furniture and clothing, and to travel about. And indeed, in the absence of modern technology, everything is very costly and takes time and above all staff. Servants are needed to gather and prepare food (which cannot easily be preserved). Clothing costs money: even the most minimal fancy dress might cost several months’ or even years’ income. Travel was also expensive. It required horses, carriages, servants to take care of them, feed for the animals, and so on. The reader is made to see that life would have been objectively quite difficult for a person with only 3-5 times the average income, because it would then have been necessary to spend most of one’s time attending to the needs of daily life. If you wanted books or musical instruments or jewelry or ball gowns, then there was no choice but to have an income 20-30 times the average of the day.

(quote from page 415 of Capital in the Twenty-First Century)

So you were considered poor if you had 3 to 5 times the average income. And the 90% of the population that didn’t have that kind of money weren’t really part of society. So the poor student from David Foster Wallace’s quote probably still had an income of several times the average income of the time (maybe from his family, but I haven’t read the book).

The above points are what I consider to be the most important points. So what’s going to happen? This part will be my own theories. First of all inequality is going to get a lot worse again. Rich people will always make more money, and their income will always grow faster than the economy, so their share of the pie will get larger and larger. In theory we can control that through higher taxes, but in the US rich people seem to be really good at influencing politics to their advantage. They now pay lower taxes than the average American, and they can now donate unlimited money for political causes and thus further change rules in their favor.

How far will that go? Much further than most people imagine. There will be people as powerful as kings used to be. Of course we won’t call them kings (maybe we’ll call them “oligarchs” or “trillionaires”). But how did you get to be a king in the old days? You got to be king by having enough money to hire the largest army. It was more complicated than that, but that’s the basic idea. Hiring a big army probably won’t work in modern times but they will find other ways to be influential. Rules and laws will be changed to allow for more influence of rich people. There will be a new aristocracy where people live in a parallel world of inherited wealth with its own social norms, fashions, manners etc.

Is there any way that that won’t happen? Hard to say. We live in a very unusual time where inequality isn’t that big, so for us it just seems unimaginable, but extreme inequality was the norm for all of human history so why wouldn’t we be moving back to that?

We got to this low inequality through destruction and high taxes. We probably don’t want another world war, so high taxes would be a way to keep high inequality in check. And it could happen. Bernie Sanders was surprisingly popular in the last election, so it’s not inconceivable that we’ll get somebody like that to be president at some point. But those things can only ever delay the rise of the rich people. There are fundamental laws at work here. You’d have to either change it so that the people who have more money don’t make more money (change lesson 1 above) or so that return on capital is not greater than the growth of the economy (change lesson 2 above). If you attempt to change either of those, you will have very, very strong opponents fighting against you.

Despite all that I’m also optimistic, because I think it’ll never be as bad again as it was before modernity. Thanks to modern technology, even poor people are actually living fairly good lives. This is not to say that there aren’t people out there who work very hard and still barely make enough to get by, but those are no longer the norm. And those people don’t die at the same rate as people did in the middle ages. Now as a counterpoint, I do think that there are plausible scenarios where we end up back in crazy unequal societies. For example the back story of *Horizon: Zero Dawn* has a fairly consistent view of this, with rich people who never work, and poor workers who don’t even make enough money to pay for their rent, so they just fall further and further into debt. But scenarios like that just seem unlikely. Why wouldn’t somebody come along in that situation and build cheap housing? (on the other hand we already prevent construction of cheap housing…)

In any case we are currently watching the conflict between people who like things to be as equal as they have been recently, and rich people who like to be richer and more influential. Rich people seem to be winning, and Capital in the Twenty-First Century helped to illustrate that one of the reasons for that is that rich people have fundamental laws of economics on their side. And it helped to illustrate that we are currently living in abnormal times, and having crazy rich people around (like kings and queens) is much more normal, and we’ll probably move back to that. It takes work to keep inequality low. By default it will always rise.

The use case here was for A* path finding, but this may also be applicable in other situations. I’ll explain a bit more about A* later, but as motivation you just have to know that you can speed up A* a lot if you can reason about all the nodes in your graph that have the same “cost.” But since the “cost” is a sum of a lot of different floats, they will rarely have exactly the same value. At that point you could fudge numbers and say “if two numbers are equal to within 0.0001, treat them as equal,” but when you do that it’s easy to accidentally define an ordering that’s not a strict weak ordering. And I have literally seen a crash caused by an incorrect float ordering that was very rare and very hard to track down. So once bitten, twice shy: I wanted to see if I couldn’t just force floating point math to be exact. And it turns out you often can.

To illustrate the problem in A* path finding, let’s start with a picture:

Here I want to walk on a grid from the bottom left to the top right. There are three equivalent short paths I could take, two of which are visualized here. In one I walk diagonally up-right, then up-right again, then right. In the other path I walk right first, then diagonally up-right, then up-right again. These two should have exactly the same cost, but whether they do or not depends on which constants you pick.

Let’s say we pick a cost of 1 to go north, east, west or south, and we pick a cost of 1.41 to go diagonally. (the real cost is sqrt(2), but 1.41 should be close enough) Then the first path adds up to 1.41 + 1.41 + 1 = 3.82, and the second path adds up to 1 + 1.41 + 1.41 = 3.82. Except since these are floating point numbers, they don’t add exactly to 3.82, and of course these two actually give slightly different results. The first one comes out to 3.8199999332427978515625, and the second comes out to 3.81999969482421875. Can you spot the difference? Computers certainly can. Now usually you would just say that this is why you don’t compare floats exactly. You always put in a little wiggle room. But as I said above, I have been bitten by that in the past because that can break in really subtle ways.

So how would we find numbers that add exactly in cases like this? The solution is to round the floats. But clearly rounding sqrt(2) to 1.41 didn’t work. So how would we round floating point numbers in a way that’s friendly to floating point math? For that we have to see how floating point numbers work. A 32-bit floating point number consists of

- 1 sign bit
- 8 bits for the exponent
- 23 bits for the mantissa

The sign bit is clearly not interesting for us. So what about the exponent? When you increase the exponent on a number like 1.41, you get 2.82. When you decrease the exponent, you get 0.705. (except you never really get exactly these numbers) So we clearly don’t want to mess with the exponent when rounding floats. That leaves the mantissa. To illustrate how we’re going to round the mantissa, I’m going to print the 23 bits of the mantissa on the left, and the corresponding float value on the right. Starting with sqrt(2):

01101010000010011110011: 1.41421353816986083984375

01101010000010011110010: 1.4142134189605712890625

01101010000010011110000: 1.4142131805419921875

01101010000010011100000: 1.414211273193359375

01101010000010011000000: 1.41420745849609375

01101010000010010000000: 1.4141998291015625

01101010000010000000000: 1.4141845703125

01101010000000000000000: 1.4140625

01101000000000000000000: 1.40625

01100000000000000000000: 1.375

01000000000000000000000: 1.25

00000000000000000000000: 1

In each of these steps I changed a 1 to a 0. This rounds the number down. We could also round up, but rounding down is easier and we don’t really care which way we lose precision. So what we see here is how floating point numbers change as we round them to have more zero bits at the end.

And that gives us a bunch of floating point numbers that are going to guarantee associativity for longer. To test that I wrote a small simulation that randomly walks straight or diagonally on a grid, and adds up the cost. For the number in the first row, I very quickly found paths like in my picture above where two paths with identical costs actually don’t end up with identical floating point numbers. In fact I could do it in just three steps, just like in the picture. For the third row I already needed 32 steps. For the fifth row I needed 128 steps, for the seventh row I needed 2048 steps, and for the eighth row I needed more than 100,000 steps on a grid before I lost associativity. Meaning if you use the number 1.4140625 as a constant for doing diagonal steps, you can have grids with a diameter of 100,000 cells before two paths that should have identical cost will ever show up as having different costs. Which is good enough for me.

So my recommendation is that you use 1.4140625, then you can compare your floats directly without needing to use any wiggle room.

How do we find numbers like this? Here is a small chunk of C++ code that sets one bit at a time to zero:

```cpp
#include <iostream>

union FloatUnion
{
    float f;
    struct
    {
        unsigned mantissa : 23;
        unsigned exponent : 8;
        unsigned sign : 1;
    };
};

void TruncateFloat(float to_print)
{
    FloatUnion f = { to_print };
    std::cout << f.f << std::endl;
    for (int bit = 0; bit < 23; ++bit)
    {
        // clear mantissa bits from least significant to most significant
        f.mantissa = f.mantissa & ~(1 << bit);
        std::cout << f.f << std::endl;
    }
}
```

Feel free to use this code however you like. (MIT license, copyright 2018 Malte Skarupke)

So how do we use this to speed up A* path finding?

The idea isn’t mine; I got it from Steve Rabin. But these rounded floating point numbers make it easier. Before we get there, I have to walk through one run of A* to explain how it works. (because I can’t assume that you know A*) In A* you keep track of two numbers for each node: 1. how long it took you to get to that node (usually called g) and 2. an estimate of the distance from that node to the goal. (usually called h) Then when we decide on which node to explore next, we pick the one where g+h is the smallest number. So for example let’s say we start A* and after looking at all the neighbors of the start node, we have this state:

In this picture I made the start node green because we’re already finished with it. The blue nodes are the nodes that we consider next. The top left node has a total value of 1+3.41 = 4.41. The other two nodes have a total value of 1.41+2.41=1+2.82=3.82. The g distance is the distance from the start to this node, the h distance is the estimated distance to the goal. Since there are no obstacles in the way on this grid, the h value is actually always the real distance to the goal. But that won’t matter for this example. (oh and I’m using Steve Rabin’s Octile heuristic to compute the h value)
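For reference, the octile heuristic is simple to write down. This is my own sketch (the function name and grid-coordinate parameters are mine), using the rounded constant 0.4140625 in place of sqrt(2)−1:

```cpp
#include <algorithm>
#include <cstdlib>

// Octile distance on a grid that allows diagonal movement: walk
// diagonally until level with the goal, then walk straight. That costs
// max(dx, dy) + (sqrt(2) - 1) * min(dx, dy), with sqrt(2) - 1 replaced
// here by the rounded constant 0.4140625.
float OctileHeuristic(int x, int y, int goal_x, int goal_y)
{
    int dx = std::abs(goal_x - x);
    int dy = std::abs(goal_y - y);
    return std::max(dx, dy) + 0.4140625f * std::min(dx, dy);
}
```

This gives the values in the picture: a node two cells across and one cell up from the goal gets h = 2 + 0.4140625 = 2.4140625, which rounds to the 2.41 shown above.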

What this means is that when exploring the next node, both of the nodes on the right are equally good choices because they have the same value. So let’s explore the one at the bottom:

That gives us one new node with a bad total value (4.41) and one new node with a good total value (3.82) so we can choose between the two upper right values. I will expand the one on the left:

Once again we have two choices with a total value of 3.82. I will expand the top one:

With that we have the goal within reach. We can’t end the search here though, because we have hit exactly the example that I started the blog post with: The path we chose rounds off to a slightly higher cost than the other path. So we also have to explore that one:

And with that we have now found the shortest path to the goal: Walk right, then walk diagonally up-right twice. It’s ever so slightly shorter than going diagonally up-right twice and then going right.

Having exact floating point math would allow us to skip that last step, but it would also allow another optimization:

**When choosing between two nodes that have the same total value, explore the one with a larger g value first.**
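As a sketch of what that tie-break looks like in code (my own illustration, assuming a binary-heap open list; the Node struct and names are not from any particular A* implementation):

```cpp
#include <queue>
#include <vector>

struct Node
{
    float g; // cost from the start to this node
    float h; // heuristic estimate from this node to the goal
    int index;
};

// Comparator for a min-heap on g+h that breaks exact ties
// in favor of the node with the larger g value.
struct CompareNode
{
    bool operator()(const Node & l, const Node & r) const
    {
        float lf = l.g + l.h;
        float rf = r.g + r.h;
        if (lf != rf)
            return lf > rf; // smaller g+h is explored first
        return l.g < r.g;   // on exact ties, larger g is explored first
    }
};

using OpenList = std::priority_queue<Node, std::vector<Node>, CompareNode>;
```

Note that the exact comparison lf != rf only does what we want because of the rounded constants: two equally good paths produce bit-identical floats, so the g tie-break fires exactly when it should.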

That trick speeds A* up a lot when walking across open terrain. Let’s go through it again, using this trick now:

We’re back at the start and we have two equal choices: Right and top-right both add up to 3.82. But top-right has a larger g value, so we choose that one:

Now we have three possible choices that add up to 3.82. Once again we choose the one with the largest g value and go top-right again:

And with that we’re at the goal. And we don’t need to explore any alternatives because they all have the same values. We simply walked straight there. The previous search had to explore 6 nodes (including the goal); this one only had to explore 4. On bigger grids the difference gets bigger.

The trick of “among equal choices, prefer higher g” makes the A* path finding walk directly across open fields. And we don’t lose any correctness. If there is an obstacle in the way it will still explore the other paths that add up to the same number.

It’s a great trick. And it’s made easier by having floating point numbers that don’t have rounding errors when added. So choose 1.4140625 for diagonal steps. And if you’re using the octile heuristic, choose 0.4140625 for the constant in there. With those two numbers your grid would have to be more than 100,000 cells in width or height before you would ever get different costs for nodes that should have the same cost. So all equally good cells will actually have the exact same float representation.

Also if you can come up with other use cases for rounding floating point numbers like this, I would be curious to hear about them.

It’s a very interesting idea: The problem is that states often lower taxes with the hope of attracting business or talent, but there is very little evidence about whether that actually works. So the authors of that paper decided to find a group of influential people who are somewhat easy to track: people who apply for lots of patents, the so-called “star scientists” from the title. So the authors built a huge database, tracking where the top 5% of scientists who applied for the most patents had moved to over the years.

And the authors claim that they found pretty clear evidence that people like to move from high-tax states to low-tax states, so the conclusion is that if you want to attract top scientists, you should lower taxes.

Except, I dug through the data and I found the opposite. Yes, top scientists do move to states that have lower taxes, but high tax states have such a large lead in the number of scientists that that little bit of migration doesn’t matter. But we’ll have to get to that conclusion one step at a time.

First some doubt: The “lower taxes attract business and talent” argument is clearly true at some point: If in one state you had to pay 90% taxes and in another 10% taxes, of course you’d move to the state where you have to pay 10% taxes. But if in one state you’d have to pay 35% and in the other you’d have to pay 32%, would you really decide based on that? Or would you decide based on something else, like the price of real estate, the quality of life, quality of schools or quality of the work that you’d be doing?

Reading the above article, there are a few reasons to be suspicious: The top two Hacker News comments about the article point out two big mistakes: 1. The authors ignore sales tax. 2. The authors didn’t measure where people choose to locate, they measured where people migrate. Which is a subtle but very important difference.

The other reason to be suspicious is that the article reminded me of this piece, which talks about the “Rich states, poor states” index by a conservative think tank. In that index the authors find a bunch of indicators that allow them to claim that states that tax highly are not doing well and states that have low taxes are doing great. For example they claim that New York and California have been doing terribly for the last ten years, because lots of people are moving away from those states. They apparently never paused to think that if New York and California are doing terribly in an index called “Rich states, poor states”, you probably have it exactly backwards…

So when I saw that same measure of “migration” being used again to claim that states with low taxes attract lots of scientists, I became suspicious…

But I do have to say one very positive thing about the authors: They made their data available. Unfortunately the paper is behind a paywall, but the data is available for download for free. So I looked at the data.

One thing I’ll say at the beginning is that for some reason the authors’ data seems to go wonky somewhere around the year 2003. At that point the number of scientists takes a downward turn in all states until it approaches zero in the year 2009. I don’t know why that is. Maybe something in the way they collected the data. Because of that I chose to put the end point for all my analysis at the year 2003.

With that out of the way, let’s plot where all the scientists were in 1977:

OK in 1977 California had the most scientists, followed by New Jersey, New York, Illinois and Pennsylvania. I was very surprised to see New Jersey up there. I don’t think of it as a center of science these days. But I did expect California and New York to be up high.

Now for comparison here is the same graph for 2003:

In 2003 California completely dominates. It has by far the most scientists. New York is a distant second, followed by Texas.

So obviously the big story over the observed time period is the rise of Silicon Valley in California. This is our first indication that something is not right about the story of scientists moving to low-tax states. California is not exactly known for having low taxes, and seeing how most of the growth happened in California, where is this supposed migration of scientists to low-tax states? Sure there are lots more scientists in Texas and Washington, but then we also saw healthy growth in New York, another high-tax state.

So what did they actually claim in the article? Their main claim is:

Our empirical analysis uncovers large effects of personal and business taxes on star scientists’ migration patterns. The probability of moving from an origin state to a destination state increases when the ‘net-of-tax rate’ (after-tax income) in the latter increases relative to the former.

And they use these plots to back that claim up:

The description in the article reads “Each panel plots out-migration (measured as the log of the ratio of out-migrants to non-movers) against the net-of-tax rate (which is simply one minus the tax rate, thereby representing the share of a dollar’s income that is retained after taxes). The data underlying the figures represent changes in out-migration and changes in net-of-tax rates relative to their average levels over 1977-2000.”

Now I have looked at these graphs for quite a while, and I can honestly say that I have no idea what they mean. The first thing they do is measure out-migration by measuring how many scientists move compared to how many stay, and then they take the log of that. Why the log? I have no idea, but it makes the y-axis completely unreadable. They also claim to have taken the log of the net-of-tax rate, but if you take the log of something and arrive at 0.02, that means that the original value was greater than 1. How can the net-of-tax rate be greater than 1? There are no states with negative tax rates. So I have no idea what the x-axis is either… I tried looking at the outliers to see if I could identify them by looking at the data. Surely California has to be an outlier on one of these graphs. But I honestly don’t know which one it is. Also there are only something like 40 dots. Shouldn’t there be one per state? What are these dots?

They make one other big claim in the article, which is that when you lower taxes, scientists will come to your state:

if after-tax income in a state increases by 1% due to a cut in personal income taxes, the stock of scientists in the state experiences a percentage increase of 0.4% per year until relative taxes change again.

[…]

when we focus on the timing of the effects, we find that changes in mobility follow changes in taxes, rather than preceding them. The effect on mobility tends to grow over time, presumably because it takes time for firms and workers to relocate.

In a sense this claim is even more important than the first one, because it gives us actionable advice: Lower taxes and you will get top scientists. Because of the paywall I don’t have access to the paper behind this article, but the appendix for the paper is available. Luckily the appendix contains a graph that backs up this claim:

So what this displays is that at the time of the red line the taxes decreased, and the number of scientists went up. (or the taxes increased and the number of scientists went down. Both of those events would make the line go up)

These are pretty strong effects. They claim that if you lower your taxes, you will get, on average, 100 star scientists coming into your state over the next ten years. I’m a bit worried about the size of the whiskers on those plots, but the measured effect is so big that I’m inclined to believe them. (also you would expect states to promote the fact that they lowered taxes, so you would expect business to move there)

So the first graph didn’t make sense to me, but this second one does. Before I investigate this claim though, I want to make a detour, because this data is just too much fun not to dig into.

I mean really, somebody did the work of tracking the population and movement of scientists over several decades, and they make all that data available. How can you not just want to dig in?

The first question that my graph from 2003 above raises is this: Sure, California has a lot of scientists, but California is also the biggest state. Number 2 and 3 are New York and Texas, which are also the two next biggest states. So isn’t this just “states with a high population have lots of scientists”? I visualized that relationship in Gapminder, but I can’t embed custom HTML in this blog, so here is a picture:

So that’s a pretty clear relationship: The smaller your population, the fewer scientists you have. Sure, Florida is punching a bit below its weight and Massachusetts is punching above its weight, but overall there is a very clear trend here: If you’re further to the right, you tend to be higher up. And in the corner we see the giant outlier that is California. If you were to draw a trend-line through all other states, California would be way above it.

What if we account for this population/scientist correlation, and instead of drawing total number of star scientists, we draw number of scientists per million inhabitants?

Now we see which states are really doing a good job at this science thing: Idaho actually had a huge number of star scientists in 2003 relative to its population. It had 1004 star scientists for a population of 1.36 million, giving it more than 700 star scientists per million inhabitants. California is still clearly above average, but New York and Texas now look less impressive. They’re still doing well, but plenty of other states are doing better.

But I’m now convinced that this number is the right choice for the Y-axis. We need to remove the effect that the population has on the number of scientists. If we do that we can see which states are doing a good job or a bad job. With that, let’s try plotting the number of scientists against the taxes.

On the x axis I have the average income tax that somebody in the top 1% had to pay in that state. Why the top 1%? Because that’s what the paper chose to do. Their reasoning is that top scientists are more likely to be in the top 1%. In their own data, they found that 14% of the top scientists are in the top 1% in terms of income. Should we therefore use the tax rate that the top 1% has to pay? It’s not clearly the right choice, but it’s also not clearly wrong. If you’re going to make a choice of moving somewhere based on taxes, is it plausible that you’d look at the top tax rate? I think it is. Also one nice thing about using the tax rate for the top 1% is that the difference in taxes is bigger there. (for reference: To be in the top 1% you had to make more than $380,000 in the year 2011, the last year that they have data for. Apparently 14% of star scientists make that much money)

In any case if we sort these by tax rate, we see that a lot of states with high taxes are punching above their weight. Idaho, Vermont, Minnesota and California all have a lot of scientists relative to their population. In the low tax states we see that Washington state is doing pretty well. But overall there seems to be an upward trend here.

Ah, I hear your objection: I’m not looking at the sales tax here. Sure, Washington and Texas have no income tax, but they get a lot of their income from the sales tax. So what happens if we factor in the sales tax?

I added the sales tax to the income tax. That seems like a good approximation of what actually happens. When I do that the upward trend becomes even clearer. Washington is not as big of an outlier anymore.

I should mention that there are a few problems with this graph: The choice of using the income tax of the top 1% is now more questionable, because it exaggerates the size of the income tax relative to the sales tax. Also the sales tax is actually too small here because many cities have sales tax on top of that: Seattle, New York City, Houston, Austin, San Antonio and others have sales taxes on top of the state sales tax. But to be consistent with the original paper I chose to just use the same numbers.

At this point I would say that there is a correlation between having high taxes and having lots of scientists. Clearly just having high taxes is not enough. Look at Arkansas and Maine down there: High taxes but no science. Also I should say that this is still an approximation. New Hampshire is still listed as 0 taxes on the left, because it has no income and no sales tax. But clearly New Hampshire has to have some taxes. I just don’t have the numbers for that because I only have the data that the original authors collected. (or maybe they have the total tax numbers somewhere, but I don’t know what all their numbers mean) But with all these caveats out of the way, it is clear that the states with lots of scientists tend to have lots of taxes.

Now could this be a fluke because Idaho and Vermont are very small states, and there tend to be more outliers in small samples? Let’s find out by plotting the number of scientists again, as opposed to the number of scientists per million inhabitants:

Now California is our big outlier again, but even without it I still think there is a slight upward trend. Clearly we still have the high-tax-no-scientists states at the bottom right, but if you read from top to bottom, you get California, New York, Texas, Massachusetts and Minnesota as the top 5. Four out of five have high taxes, and Texas has a large population.

So now we know what the situation was at the end of the observed time period. But that doesn’t disprove the paper: It could still be that these states got all their scientists when they had lower taxes, and that the states have since raised the taxes.

That is actually the main reason why I did the work of adding the data to Gapminder: In Gapminder it’s very easy to find out which came first: Did the number of scientists go up first, or did the taxes go up first? Here is what that last graph looks like animated:

Watch this a couple times and see what you notice. It’s very noisy movement at first, but after a while you start to see patterns.

The first big pattern is that taxes used to be lower on average. Most states moved to the right. New York is unusual in that it moved to the left. The second pattern is that in the 90s, science went up pretty much across the board. It seems like there is more movement on the right side of the graph than on the left, but not by much.

To try to figure out the question of “does lowering taxes attract scientists,” let’s look at a couple of the big movers: California, Texas and Washington moved right before they moved up. New York moved left before it moved up. Massachusetts and Minnesota seem to walk left and right, but mostly stay in place while they go up. New Jersey moved to the right and down. The trends are ambiguous. This is weird because this is the graph where we should see the “if you lower taxes, you will get on average 100 scientists, if you raise taxes you will lose on average 100 scientists” result from the appendix of the paper that I posted above. A result that strong should be pretty clear, but the data is ambiguous at best, and maybe even points in the other direction.

Let’s look at the graph again where we measure scientists per million inhabitants:

In this one we can see the smaller states that are doing well again. Idaho and Vermont got lots of scientists in the 90s. That was buried in the graph above because these are small states. Both of these states moved to the right before moving up. On the other side we see Delaware, which moved to the left, then hopped up and down a bit but generally seemed to move up.

So overall there is just no clear trend that would say that you lose scientists if you raise taxes and you gain scientists if you lower taxes. So what happened here? This is probably the most important claim from the article, and it should have been very clear, but it’s just not there.

The answer is in the number that they chose for their “Number of Stars Before and After Tax Change Event” graph: They chose the total income tax, not just the state income tax. Let me show you what it looks like when we plot the total income tax:

This looks pretty funny because all the dots are moving around together. Why? Because the total income tax is dominated by the federal income tax. Any time that the federal income tax went up or down, all the points move left or right.

With that knowledge we can explain the finding of the paper: In general the federal income tax went down over the observed period, and the number of scientists went up. With that alone you would find “after lowering taxes, the number of scientists goes up.” You’re measuring a time period where tax rates went down and US population went up. Those two have nothing to do with each other though. You’re just seeing baby boomers becoming great scientists at the same time as the tax rate goes down. (and the tax rate went down because it had been at historic highs after the world wars and several New Deal governments, none of which has anything to do with lowering taxes to attract science) So does lowering taxes lead to more scientists? I don’t know, but I do know that we can not infer it from this last animation, and unfortunately that’s what they did in that graph from the appendix.

So how do we figure out if people move from one state to another because of taxes? Well the authors of the paper collected thousands of migrations between states. So if we plot those against the difference in tax rate, we might be able to see a pattern. Let’s try to do that:

Here I plotted one dot for each pair of states where people moved from one state to the other. For example that highest dot is the migration of 112 scientists from New Jersey to California in the year 2000. It’s to the right of the center because they moved to a state with higher taxes. The second highest point is the migration of 104 star scientists from California to Texas in the year 2001. It’s to the left of the center because they moved to a state with lower taxes. All the other dots are similar pairs of migration. Most of them are pretty far down, because usually just a couple scientists move per year for any given pair of states.

So what can we see from this? It’s hard to see, but there is a slight tilt to the left. Since it’s so hard to see, I added a dot where the average value is. It’s slightly to the left of center. On average people move to a state with a tax rate that’s 0.3 percentage points lower than the state that they came from. So if they came from a state where they paid 9% taxes, on average people will move to a state where they pay 8.7% taxes.

So I finally showed that yes, there is a tendency for people to move to states where they pay less taxes. But it also presents us with a problem: How can this be true at the same time as California has by far the most scientists while also having a high tax rate?

The answer is that those California scientists don’t come from migration. By the year 2003, a total of 11610 scientists had moved to California, and a total of 11048 scientists had moved from California. Which means net-migration to California over the entire observed time period is 562 scientists. California started with 3151 scientists in 1977, yet somehow California had 14428 top scientists in 2003. Which means we have 10715 unaccounted scientists that had to have come from somewhere. And that brings us to the title of this blog post: Where do top scientists come from?

I think the answer lies in the definition chosen for the paper. They used the top 5% of scientists who submitted the most patents. How do you get to be in that group? Well you start off with no patents, then with your first patent you get to be in the bottom 95% and then you work your way up. So if you are a state that wants to have a top scientist, what is the best way to get one? You could either try to entice one to come from a different state, or you could look at some of the bottom 95% of scientists that already live in your state, (or students or young scientists without any patents) and you could try to turn one of them into a top scientist.

Which of those paths is more common? For the California example above clearly the second path is much more common. California got more than 10,000 top scientists from within California, and only 562 scientists from outside California. But maybe the low tax states got lots of scientists from migration. And which state lost the most to migration? New Jersey maybe? Let’s find out:

Here I plotted the number of top scientists who migrated to or from a state on the y axis, against the taxes on the x axis. And wow, New York lost a lot of scientists. It lost nearly 1000 scientists to migration over the observed time period. New Jersey was also hit pretty bad, losing 748 top scientists between 1977 and 2003. California, with a gain of 562, gained the most. But with the exception of California there seems to be a downward trend here: The more you’re to the left, the more likely you are to have gained scientists from migration. The more you’re to the right, the more likely you are to have lost scientists to other states. The top five states are now California, Texas, Florida, Washington and North Carolina. Three out of those five have low taxes.

Let’s also plot the migration per million inhabitants, so that small states can get a chance to shine if they did well relative to their size:

New Jersey does even worse now. It lost 87 scientists per million inhabitants. But we have a new worst performer in Delaware, which lost 130 top scientists per million inhabitants. California isn’t looking all that impressive now, and the states that are doing best are New Hampshire, New Mexico, Wyoming, Nevada and Washington. Four out of five are low tax states. We also have a new badly performing high tax state in Idaho, which lost 55 scientists per million inhabitants in the time period from 1977 to 2003. So it seems pretty clear at this point that top scientists do migrate from tax-heavy states to low tax states.

But wait a second, wasn’t Idaho doing very well above? How can it lose scientists to migration and still be one of the top states in terms of scientist per million inhabitants? To resolve that discrepancy we have to look at a second number: The number of homegrown scientists. For that I take the number at the end of the observation, subtract the number at the beginning of the observation and subtract the number gained from migration. I ran through the example for California above and ended up at 10715 homegrown scientists. Here is what it looks like for all states:

Now New York is looking pretty good again. It had 2490 scientists in 1977, grew to 4260 scientists in 2003, all while losing 998 scientists to migration to other states. That means that 2768 new top scientists came from New York in the time period from 1977 to 2003. That’s very impressive. The top five states that generated the most scientists are California, New York, Minnesota, Texas and Washington. Once again we see that big states are over-represented, so here is the same graph for “homegrown scientists per million inhabitants”:

Idaho does great again now. So do Vermont, Minnesota, Oregon and Washington. Comparing this to the graph of migration per million (two pictures up) we can see that many of the states that do very well here did poorly there: Idaho, Minnesota and New York for example. They generated a lot of scientists, and also lost a lot of scientists to migration. Ohio on the other hand is a state that isn’t doing well: It lost a lot of scientists to migration, and didn’t grow many.

Except Ohio is actually doing alright: It generated 34 top scientists per million inhabitants, and only lost 30 per million. Why does it look so bad in this graph? Because the scales are all different. Look at the difference in scales compared to two pictures up: The top states generated 200 to 800 scientists per million inhabitants, but in the migration graph higher up the top states only managed to attract 40 to 80 scientists per million inhabitants through migration.

So what does that mean? It means if you want top scientists, you’re probably better off helping your local scientists to become top scientists, rather than trying to attract them through migration. You simply can’t get the same numbers through migration that you can get from homegrown scientists.

So the data would suggest that

- High tax states seem to generate more scientists
- Some of those scientists move to lower tax states

So does lowering taxes attract scientists? Yes, but not many. States that have higher taxes seem to be able to get more scientists. Why? I don’t know. I mean there are some high tax states that are doing really badly. There are many possible theories that would lead from high taxes to many scientists. California for example has a really strong network of universities. One thing I need to point out though before we make theories is that this phenomenon of high tax states having more scientists is a recent development. At the start of the observation, we don’t see this. I showed the animation above, but let me freeze frame the “scientists per million” on the first year, 1977:

In 1977 there is no clear upward trend. Maybe there is a really slight upward trend, but really if there is a trend here I would say that states that are in the middle of the pack do well. If you have low taxes you do badly, and if you have high taxes you do well, but not as well as the states in the middle. But there are also a few things misleading about this graph. Delaware for example had less than 600,000 inhabitants at the time. It’s easier to be an outlier in the “scientists per million inhabitants” category when you’re small. Also the X axis is misleading, because federal taxes are missing. The income tax rate in New Jersey in 1977 was actually higher than in California in 2003. You can kinda see that in the animation higher up where I plot the “total income tax.” It’s the animation where the bubbles bounce left and right a lot. The only conclusion I can draw from this is that in the 70s high tax states didn’t have such a clear lead over the other states as they did in 2003, so for some reason whatever benefit that you get from higher taxes really took off in those 26 years.

At this point I could try to make theories for why high tax states generated more top scientists from 1977 to 2003, but I’ll leave that for others. I set out to investigate the question of whether lower taxes lead to more top scientists, and I disproved that theory. There is clearly no positive correlation, and therefore no support for that causation. In fact the correlation seems to go the other way, but theories for why that is will have to wait for another time.

In conclusion what I found is that the states that are best at generating scientists tend to have higher taxes. Some of the top scientists move from the generating states to lower tax states, but not nearly enough to make up the difference.

So if you want top scientists in your state, should you lower taxes to attract scientists? Maybe, but you can probably get more if you can figure out what those states on the right, the ones with higher taxes, have done between 1977 and 2003. It seems like you can get literally ten times more top scientists from growing scientists in your own state (200 to 800 scientists per million inhabitants) than you can get from migration from other states. (40 to 80 scientists per million inhabitants)

Whether you decide to raise taxes and invest them in science funding, or to lower taxes to try to get people to migrate to your state, be careful: there are low-tax states that didn’t manage to attract migration, and there are high-tax states that didn’t manage to generate scientists. The situation is more complex than that, and it can’t be controlled through taxes alone.

For further reading I have this article, which claims that most science gets done by big corporations. That would certainly explain why California and New York are doing so well, but it just raises the next question: why are there so many big companies in Silicon Valley and New York City? Here is another piece that claims that 76% of venture capital funding goes to California, New York and Massachusetts. I have no idea why those three states in particular, but it also seems relevant… And finally, I work in computer science, a field where about half of the really big inventions came from governments or universities, (or government-funded research at universities) so that would be the clearest link between taxes and top scientists. I’m sure somebody has numbers on where most government funding for research goes, and we could correlate that with this database of top scientists.

But for now I end this investigation here and I thank you for reading this.

I exported the Gapminder graph and uploaded the file here. You have to extract that zip archive somewhere and then open index.html. This file is probably what you want if you just want to play around with the data and look for correlations. For example, one fun analysis that I didn’t include above is “maybe the higher-tax states have a higher GDP per person, and because of that they have more scientists.” That is interesting because there is no correlation between taxes and GDP, but there is a correlation between taxes and scientists, and a correlation between GDP and scientists. Meaning A correlates with B and B correlates with C, but A does not correlate with C.
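As a quick sanity check that this pattern is even possible, here is a minimal synthetic sketch. (made-up data, not the actual state numbers: "taxes" and "GDP" are just stand-in names here) It shows how A can correlate with B, and B with C, while A and C stay uncorrelated:

```python
import numpy as np

rng = np.random.default_rng(0)
n = 10_000

# Made-up stand-ins: "taxes" (A) and "GDP" (C) are independent of
# each other, but "scientists" (B) depends on both of them.
taxes = rng.normal(size=n)
gdp = rng.normal(size=n)
scientists = taxes + gdp

print(np.corrcoef(taxes, scientists)[0, 1])  # clearly positive
print(np.corrcoef(gdp, scientists)[0, 1])    # clearly positive
print(np.corrcoef(taxes, gdp)[0, 1])         # close to zero
```

So "correlates with" is not a transitive relation, which is why the GDP analysis is worth running separately rather than inferring it from the other two correlations.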

If you want more data, the authors of the original paper have made their data available here. There is much, much more data in that dataset than what I included in the Gapminder export.

I used this R script to generate pictures from the data and to convert it for Gapminder. To run the script you have to change the “base_folder” variable at the top of the file to point to where you extracted the data from the paper. Also, after running the script, you have to manually open the file “for_gapminder.csv” that the script writes to the data/ folder and remove the first column. I did that using Sublime Text, but you can also remove it using LibreOffice Calc. There is probably also some way to do it in code. After you have removed the first column, you can open the file in Gapminder.
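One way to do the column removal in code is a few lines of Python. This is just a sketch; the demo below runs on an in-memory string, and for the real thing you would open data/for_gapminder.csv and write a new file instead:

```python
import csv
import io

def drop_first_column(src, dst):
    """Copy CSV rows from src to dst, dropping the first column of each row."""
    rows = [row[1:] for row in csv.reader(src)]
    csv.writer(dst).writerows(rows)

# Demo on an in-memory CSV. For the actual file you would use
# open("data/for_gapminder.csv") as src and
# open("data/for_gapminder_fixed.csv", "w", newline="") as dst.
src = io.StringIO("row,state,scientists\n1,NJ,200\n2,CA,800\n")
dst = io.StringIO()
drop_first_column(src, dst)
print(dst.getvalue())
```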

I’m going to talk about video games, but this is also about games in general. Why do kids play with dolls? Because they want to learn about family life. (or about conflicts when playing with action figures) This is not explicit learning like we learn from a teacher; you act out situations and adjust your behavior depending on how your play partner reacts. Why do we send our kids to football practice? Not because we think they need to learn the valuable skill of kicking a ball into a net. No, it’s because we want them to learn about working in teams, about pacing themselves, about playing fair and all that.

The things we learn are obvious in those scenarios. It’s well known that it’s important for kids to play in order to figure out how to act in the world in a safe environment. But I claim that the same thing is true for video games, and my example will be Super Mario World.

Why Super Mario World? Because I bought a SNES classic and I’ve been playing it a lot. Going back to it, I’m actually very impressed with the game, but that’s for another blog post.

Let me talk about a specific situation where I learned something small from Super Mario World: my girlfriend and I currently have a friend staying with us for a week. Last Friday night we went out separately, and when my girlfriend and I came home we found our friend playing Super Mario World in the living room. We briefly made fun of him but then joined him. Very quickly you had three thirty-year-old adults sitting on the floor in front of the TV playing SNES like six-year-olds.

Our friend had remembered a secret that allows you to skip half of world 3 and all of world 4 of the game. You just have to beat two really tough levels. Our friend had a very hard time with those levels, going through several continues and not making much progress. I tried to give him advice, but naturally I also started making fun of him: “this shortcut seems to take longer than going the normal route.” This went on for a while, and I kept finding new jokes to make because our friend kept trying the same level without getting anywhere. At some point my girlfriend got upset and told me to stop nagging him. So I scaled it back a little but didn’t completely stop. By the end of the evening my girlfriend was quite upset with me, so we had to have a conversation about it.

There’s a lot to learn in this simple social interaction, so let me walk through it.

First, why do we nag people? I was nagging my friend because I thought he was employing a bad strategy. I have played a lot of games, and I know from experience that if you are stuck on something like that you can easily burn out on the game. Even if you eventually overcome the challenge, you might never turn on the game again because you’re burned out. The smarter thing to do is to go do something else for a while and then come back to the challenge, so I was trying to get my friend to take the “normal” route instead of the difficult shortcut. My friend thought he was showing perseverance, but I thought he was just being stubborn. And stubbornness leads to burnout, which is the opposite of perseverance.

So why nag him instead of just telling him this outright? Because you can’t just tell people “you’re doing this wrong, you should do X instead.” That would introduce power dynamics, because suddenly I’d be claiming that I’m better than him, and the other person might get defensive. So instead you do a subtle intervention and nag him a little bit. He can either respond to it or ignore it. And he can also easily tell me to stop at any time without anyone losing face. (and in this case it actually turned out that I was wrong, since he has since beaten the game. So a subtle intervention like that is good: if the thing turns out to be a non-issue, it’s good that you didn’t escalate it)

We never think about it this explicitly, (I didn’t think of this while I was nagging him) but those are the kinds of dynamics going on here. It’s awkward to state all this, and it’s even more awkward to say it in person. Since my girlfriend was upset about the nagging (this friend I’m talking about is more her friend than my friend) I thought that our friend might also be upset about it, so I tried explaining things the next morning. Turns out he wasn’t upset, but when I explained that I had learned the “difference between perseverance and stubbornness” that I mentioned above, he got defensive. He said things like “I’m not trying to beat the game, I’m just playing for nostalgia.” I tried explaining that what I had learned wasn’t about that, but it wasn’t a good conversation to have, so we dropped it. You can’t say these things directly, so the slight nagging I employed the night before was the best strategy.

Social rules like this are really complex and you can’t teach the subtleties directly, because we can’t even articulate all the subtleties. It depends so much on the situation. I claim that we learn these things from playing games, and that these things are the reason why we play games. We never learn them as explicitly as I stated them here, but we try to figure these things out. When should you be stubborn? When should you nag, when should you say things directly? The rules for these are complex, require a lot of experience, and you can’t transfer them directly, so you learn them through play.

In the conversation that my girlfriend and I had that night, she was upset at me: “Why did you have to nag him like that? He was just playing a game.” But my point is that the nagging has to happen precisely because he is playing a game. We all know people that are too stubborn in their job or in other competitive situations, but you can’t necessarily tell them that in those situations. Because in the real world the stakes are often higher, so if you come along and tell them that they are doing something wrong, there is even more tension. And if you nag them, they might snap back at you.

But then we also play games, and the hope is that if a person has a personality like that, you would also see that personality in the game. And since it’s “just a game” this is the perfect time to try to improve on these things. It’s a low-risk environment where we can easily practice our social interactions, and if there is an uneasy interaction, we will all forget about it within a week. Because it really wasn’t that bad, and that’s the whole point of doing this in the context of a game. All that happened was a bit of fine tuning in everybody’s interactions.

So then if games are about learning about ourselves like that, why is Super Mario World a good game? Because there is so much to learn there. It’s an amplified version of the real world: a world rich in challenges, opportunities, risks and rewards.

You always encounter new levels and new enemy types, so you learn about how to approach new situations. How to be careful in a new situation. How to incrementally learn more about it and to explore safely. (but you also learn that often if you’re trying to be too safe, you don’t succeed either, so you can’t be too timid) There are levels where you have to be patient, levels where you have to be quick, levels where you have to be creative and levels that are about perseverance. There are situations where you have to grab an opportunity, and there are situations where the “opportunity” is actually a trap.

So you can learn and experiment with all those skills and you can try to fine-tune them. You can do this much more quickly than you can in real life, because there simply isn’t that much going on in real life. On top of that you can learn a whole different layer of skills when playing with friends. Whether you’re competing, cooperating, or taking turns, there is a lot of social behavior to practice.

These are all really valuable life skills and we learn them in games. This may sound too grandiose, because after all it’s “just a game” and I do agree that it’s weird to state these things so explicitly. It’s not like you’re going to become a really smart person because you found a smart trick in Mario to get extra lives. But if you watch kids play, you will see how bad they are at a lot of these things and how they get better the more they play. And I claim that the things you learn in the game transfer into real life. Obviously something like “here is a trick for easily beating enemy X” is not going to transfer into real life, but the creativity that you employed in finding that trick is going to transfer.

Our subconscious knows this, and this is why it likes playing these games. To beat Super Mario World, you need to be good at a lot of high-level skills like “perseverance,” “patience,” “curiosity” and “creativity,” because otherwise you’re going to get stuck on a difficult level half-way through. When a game stops teaching us those meta-skills, it becomes less interesting. If all it presents us with is harder versions of the same problems that we have already solved, we get frustrated or bored. We don’t actually care about getting better at jumping or at killing enemies, so don’t just give us harder versions of that. We care about getting better at the higher-level skills, so instead we want new situations where we can try what we have learned and refine it further.

When thinking about a new theory like this, it’s always fun to try to pattern-match it to see if it also explains other things.

One thing it explains is why we play less as we get older: As kids we have to learn so much about the world. All these high level skills have to be learned and fine tuned. But at some point we’re pretty good at them, and we’re learning less from playing new games. At that point we can either play more complex games like Dark Souls or Europa Universalis, or if we don’t have time for that we stop playing. The normal games stop being interesting for us for the same reason that playgrounds stopped being interesting for us: We have learned all we can learn from playing on a playground, and we have learned all we can learn from playing simple games.

This theory also explains why so many educational games are bad. They assume that for a game to be educational, it has to teach you something directly. So they make you do math exercises or something stupid like that. But they lose all the much richer learning about ourselves that happens in a game like Super Mario World. It’s been said before, but “The Sims” is how you make a good educational game: the thing you’re learning is intrinsically interesting. It starts off with the same appeal as playing with dolls, where you learn about family life. And then you also learn about having a job, using money, building a house, getting better at skills etc. None of this is taught explicitly, and you’re not going to get any great wisdom out of it, but you are going to learn and tune some of those implicit social rules that everybody has to learn to live in modern society.

So how do we make games like that? I don’t have the answer to that question yet. A few games came to mind while writing this blog post, but I don’t think the creators of any of them set out to make games where you develop as a person. What I do think is that if at some point during the development of your game you realize “this is a game about breaking through your perceived limits” (as in Dark Souls), then you can use that realization to make choices that make the game better. One example I came up with: if The Sims is a game about learning how to use money (among many other things), then it should probably contain credit cards or student loans or some other way to learn about debt and interest on debt. Obviously you don’t want to allow players to get themselves into really bad situations, but you could for example have debt collectors come to the house and take the most valuable furniture. Would that be fun for the player? Maybe not. Would players enjoy the game more because they have more possibilities in the game? I think they might.

One thing I tried while writing this blog post is to use this angle to understand a game that I didn’t understand before. I chose Candy Crush Saga. Playing it while thinking “what do you have to learn to beat this game?” made me realize that Candy Crush is a game about being lucky. It’s a game about setting up situations where luck can strike, and about spotting lucky opportunities where you didn’t expect them. You know all those theories about slot machines being “due to hit,” or lucky streaks and all that? In Candy Crush these really do happen, because the game is about setting up the board so that they happen. Is that a valuable skill to learn? I think it’s a very valuable skill. I think being lucky is half of my success in life. You should read this article about the difference between people who consider themselves lucky and people who don’t. When I first read that article I realized how much of the good in my life happened because of habits like those described in it. Now, is Candy Crush a good game for learning these skills? Not necessarily. I think it’s better at it than other match-3 games like Puzzle Quest, but all the free-to-play mechanics in there, with the mind tricks that try to get you to pay money, are… problematic. But I now think that I could make a game based on Candy Crush that’s more explicitly about teaching luck to people, and I think it would be a good game.

So I think this is a useful theory. And hey, isn’t it great to realize that video games teach important life skills? If somebody ever complains about you “wasting time” because you’re playing video games, you can say “no I’m learning about my role in society, about balancing work, leisure and family and I’m experimenting with different life trajectories” (when playing The Sims) or “no I’m learning about dealing with frustration, exploring novel situations and finding alternate solutions when I’m stuck” (when playing Super Mario World) or “no I’m learning about focus, creative thinking and how to use both my active mind and my background mind to solve difficult problems” (when playing The Witness) or “no I’m learning about my limits and how seemingly impossible things can be achieved with practice, perseverance and attention.” (when playing Dark Souls)

The main part of this blog post is over, but I think it’s interesting that I can’t find people talking about this theory before me. It is well known that as kids, animals (including humans) play games in order to learn how to act. And they do this through play because it’s a safer environment where you can gather a lot of experience in a short amount of time. But for some reason that thinking hasn’t been extended to video games.

I was reading the book “The Art of Failure” by Jesper Juul while writing this blog post, because I thought it might have some thinking in this direction. The book description starts off with

We may think of video games as being “fun,” but in The Art of Failure, Jesper Juul claims that this is almost entirely mistaken. When we play video games, our facial expressions are rarely those of happiness or bliss. Instead, we frown, grimace, and shout in frustration as we lose, or die, or fail to advance to the next level. Humans may have a fundamental desire to succeed and feel competent, but game players choose to engage in an activity in which they are nearly certain to fail and feel incompetent. So why do we play video games even though they make us unhappy? Juul examines this paradox.

And in the book he talks about the paradox of failure, which is that we generally avoid failure, but we also seek out games where we fail. One thing the book talks about is how people generally like a game less if they beat it on their first try, compared to if they fail at least once and then beat the game. It all sounds like my theory of “we play games to learn” is a perfect explanation for this paradox. And the book talks about it, for example it has this quote from Ben Franklin:

The game of Chess is not merely an idle amusement. Several very valuable qualities of the mind, useful in the course of human life, are to be acquired or strengthened by it, so as to become habits, ready on all occasions… we learn by Chess the habit of not being discouraged by present appearances in the state of our affairs, the habit of hoping for a favourable change, and that of persevering in the search of resources.

But somehow, even though the book has that paradox of failure, and it has this quote by Ben Franklin, it never seems to say that the solution is that we play video games to learn these skills, to develop as a person. (just as that is the reason why animals play) This was really puzzling to me. Even at the end of the book it feels like the author is unsatisfied because he hasn’t really solved the paradox. I wanted to shout at him: “you are so close, why don’t you just take this tiny step and state that this skill development is why we play games?”

I think one part of the book gives away why he doesn’t think this. Whenever the author talks about “learning” from video games, he only means getting better at the game. He doesn’t think that you can develop as a person through a video game; you can only develop to get better at the game.

This became clear to me when the author talks about the difference between games of chance and games of skill. In that chapter he also talks about a third category, games of labor, which are games that primarily reward time invested. In those games the actions you perform are not particularly challenging, and you will mostly succeed if you invest a lot of time. Examples are World of Warcraft or Farmville. (it should be noted that most games are a mix of skill, chance and labor; these are just examples where the labor part is particularly strong) After explaining the distinction, the book states this:

For those who are afraid of failure, this is close to an ideal state. For those who think of games as personal struggles for improvement, games of labor are anathema.

This sentence only makes sense if the “improvement” the author has in mind is getting better at the game. If the “improvement” were about developing as a person, then games of labor would be as valid as games of skill or games of chance. Because in the real world there are a lot of activities that are simply a lot of work. Manual labor on a farm, like in Farmville, is a lot of repetitive work, and not everyone can do that kind of job. But you can learn to get better at something like that by playing a game like Farmville, which teaches that for some things it’s not about your skill or intelligence but about putting in the time necessary to do a good job. (but of course Farmville is probably not the best game to learn this from, as it has all the same problematic elements as Candy Crush)

So I think part of the reason why people don’t think that games are about personal development is that the fact that you’re getting better at the game is so obvious. And then they miss the less obvious effect that you’re also getting better at higher level skills that you’ll need in life.

Games are great, folks. And we shouldn’t feel bad for playing them. Just, you know, don’t let this be an excuse to play too much. After all the reason why you’re learning and developing in games is to use what you learned in the real world. (and to then get better than you could get just from playing games)

People who saw the talk said that they really liked it, and they keep telling me how much they liked it. So I decided to record the talk again and upload it.

The pitch for the talk is that the results of the Game Outcomes Project are the best evidence we have for what makes great game development teams and what makes bad ones. I think every game developer should know this stuff. So I talk about what you should focus on when making a game, and I give advice for how to get there. The Game Outcomes Project found that “really successful teams do X,” and I present that; then there is a section at the end of the talk where I say “here is how you can actually get good at doing X.” Here is the talk:


The idea is that this should be something similar to George Polya’s “How to Solve It,” but for doing research instead of solving problems. There is a lot of overlap between the two, so I will quote Polya a lot, but I will also add ideas from other sources. I should say, though, that my sources are mostly from computer science, math and physics, so this list will be biased towards those fields.

My other background here is that I work in video game AI so I’ve read a lot of AI literature and have found parallels between solving AI problems and solving research problems. So I will try to generalize patterns that AI research has found about how to solve hard problems.

A lot of practical advice will be for getting you unstuck. But there will also be advice for the general approach to doing research.

The general framework is that of exploration and exploitation. Exploitation means you are getting more out of old ideas. Exploration means you are looking for entirely new ideas. You may be thinking that doing research is more exploration than exploitation, but it’s actually a mix that contains more exploitation than exploration. Really new ideas are discovered rarely, and most of the work is realizing all the consequences of existing ideas.

The two analogies I like for this are hill climbing and exploring the ocean.

Exploitation is like hill climbing. Hill climbing comes from the family of AI problems that deal with search, where search means “I’m at point A, I want to get to point B.” It’s a very general problem that applies to more than finding your way on a map. For research, point A is “here is what I know now” and point B is “here is what I would like to find out/prove/demonstrate/get to work/make happen etc.”

There is a large number of search algorithms, and you only really use hill climbing if your problem has the following properties: you can’t see very far, the problem is very complex, you don’t know where the goal is, and progress is slow. Meaning there is heavy fog in the mountains, the mountains are crazy complex, you may end up at a different peak than you had planned (or maybe you just want to get out of a valley and don’t know ahead of time where that will take you), and to top it off it’s all covered in snow, making progress very slow. So you can’t just say “I’m going to explore a thousand paths,” because exploring one path might take you a week before you find out that it leads to a dead end.

At that point all the fancy AI techniques are out the door, and we’re left with simple hill climbing. Luckily AI has several improvements over the naive “go up” approach, which would just get you stuck on the first small hill.
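To make this concrete, here is a minimal sketch on a made-up 1-D landscape (toy function and numbers invented for illustration): plain hill climbing gets stuck on whatever hill it starts on, and random restarts are the simplest of those improvements.

```python
import math
import random

def hill_climb(f, x0, step=0.1, tries=200, rng=random):
    """Greedy local search: keep any nearby step that improves f."""
    x = x0
    for _ in range(tries):
        candidate = x + rng.uniform(-step, step)
        if f(candidate) > f(x):
            x = candidate
    return x

# Toy landscape: a small hill at x=0 and a taller one at x=4.
def f(x):
    return math.exp(-x * x) + 2 * math.exp(-((x - 4) ** 2))

rng = random.Random(0)
local = hill_climb(f, 0.0, rng=rng)  # stuck on the small hill near 0

# Improvement: climb from several random starts and keep the best peak.
best = max((hill_climb(f, rng.uniform(-2.0, 6.0), rng=rng) for _ in range(20)),
           key=f)
print(round(local, 2), round(best, 2))
```

The restart version is the code equivalent of "when you're stuck, go back down and try a completely different path up."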

For the “exploration” part of exploration and exploitation I want to use the analogy of exploring the ocean. You obviously can’t do hill climbing there. You could try to bring a long rope and measure the depth of the ocean, but then you would always just move straight back to the island that you came from. Because if you just left the harbor, in which direction does the floor of the ocean go up? Back into the harbor. You have to cover a large distance before you can do hill climbing.

The “exploring the ocean” analogy is not perfect, because there is a property to this kind of research where the more you’re trying to reach a goal, the less likely you’re going to get there. I guess it works if you have a wrong idea of where the goal is. Like Columbus thinking that India was much closer, and accidentally discovering America.

The best explanation I have found for this is by Kenneth Stanley in his talk The Myth of the Objective – Why Greatness Cannot be Planned. I recommend watching the talk, but if you don’t want to do that I will mention the main points further down.

For now the main point is that there are some discoveries that can only come from free exploration. You find a topic that’s interesting and you go and explore in that direction, without any specific aim other than to find what’s over there. Then at some point you start doing hill climbing to actually get results, but you can’t start off with it.

In this section I’ll cover the general approach to doing research. You’re probably doing many of these things already because they’re common sense, but it’s still worth pointing them out once. Students especially often get these things wrong, and then it’s useful to be able to recognize what they do differently from you, and to have words for the common sense.

When doing research it’s easy to fool yourself, so it is very important that you go out of your way to prove yourself wrong. Feynman stressed this when talking about Cargo Cult Science. I’m slightly misrepresenting him here, because he doesn’t just talk about proving yourself wrong but about a broader scientific honesty:

But there is one feature I notice that is generally missing in Cargo Cult Science. That is the idea that we all hope you have learned in studying science in school – we never explicitly say what this is, but just hope that you catch on by all the examples of scientific investigation. It is interesting, therefore, to bring it out now and speak of it explicitly. It’s a kind of scientific integrity, a principle of thought that corresponds to a kind of utter honesty – a kind of leaning over backwards. For example, if you’re doing an experiment, you should report everything that you think might make it invalid – not only what you think is right about it: other causes that could possibly explain your results; and things you thought of that you’ve eliminated by some other experiment, and how they worked – to make sure the other fellow can tell they have been eliminated. Details that could throw doubt on your interpretation must be given, if you know them. You must do the best you can – if you know anything at all wrong, or possibly wrong – to explain it. If you make a theory, for example, and advertise it, put it out, then you must also put down all the facts that disagree with it, as well as those that agree with it.

He goes on to talk about more things that you should do, but for now I just want to talk about the part of proving yourself wrong, because it is genuinely helpful when doing research.

It’s also the main difference between real research and pseudo-science: people in pseudo-sciences never try to prove themselves wrong. It’s the main difference between real medicine and alternative medicine: people who promote crystal healing never seriously try to prove themselves wrong. Same thing between real journalism and conspiracy theories: a real journalist will try to prove their theories wrong many times before publishing, especially if it’s about a conspiracy.

But there is a more pragmatic reason: Proving yourself wrong early will save you from wasting time. Of course if you try to do research into a crazy theory like crystal healing, you’re wasting time. But there are also many reasonable cases where the fastest way to find out if an approach will work is to try to prove it wrong.

Now this can be a little bit tricky, because as Feynman also said, “The first principle is that you must not fool yourself – and you are the easiest person to fool.” Meaning it’s hard for you to prove yourself wrong; it’s easier to fool yourself. But once you get better at proving yourself wrong, you tend to find shortcuts: ways to rule out in a day what would have taken you a month to confirm. It turns out that often you only need rough heuristics to prove yourself wrong, whereas proving yourself right requires actually working all the way through the problem.

You want to be a little bit careful with this because sometimes good ideas lurk in areas that most people stay far away from because of some “probably won’t work” heuristic, but usually those heuristics are a good idea. And even if you don’t want to use heuristics, it’s still often faster to prove yourself wrong than to prove yourself right, so the advice stands.

For some people it’s very hard to prove themselves wrong. There is one final trick that can even help those people: For some reason we are really good at proving other people wrong. If somebody else comes to you with a crazy idea you can immediately tell that it’s a crazy idea. Much more quickly than if it was your own idea. So the final trick is to ask others to prove you wrong. Meaning just ask a colleague to run an idea by them. And then listen to what they say.

I don’t know a good name for this, so I’ll use the name of the AI technique. You probably do this automatically, but it’s worth pointing out, because sometimes I see people who don’t do this, and they are really screwed.

Simulated Annealing is a very general approach which roughly says that you should figure out the big picture before you figure out the details. It does that by prescribing what your response should be when you’ve walked all the way up in hill climbing and gotten stuck. Getting stuck means you’re at some local peak and it seems like you can’t see any paths that take you higher. It seems like there are only steep cliffs, or options that take you back down the hill. And ideally you’re not just stuck for five minutes, but for an hour or a day or more.

The general pattern is that every time you get stuck, you do a reset. But the size of the reset becomes smaller and smaller. At first you reset your progress completely and start over from the very beginning and try an entirely different approach. After you’ve tried a few different approaches, the next time that you reset yourself, do a smaller reset so that you stay in the area that took you the furthest. Don’t try new approaches any more, but try different variations of one approach. Later you do smaller resets still and maybe just try a few different solutions to specific problems. And at the end you do really small resets and just tweak some numbers.
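The shrinking-reset pattern is essentially the cooling schedule of the algorithm. Here is a minimal sketch of textbook simulated annealing in Python (the function and parameter names are my own, not from any particular library): at high temperature the search makes big jumps and accepts large downhill moves, which corresponds to the big resets, and as the temperature cools it only fine-tunes around one peak.

```python
import math
import random

def simulated_annealing(f, x0, step=2.0, t0=5.0, cooling=0.99, iters=2000, rng=None):
    """Maximize f. Early on (high temperature) the search jumps around and
    accepts big downhill moves -- the "large resets". As the temperature
    cools, it settles into one region and only tweaks the details."""
    rng = rng or random.Random(0)
    x, fx = x0, f(x0)
    best_x, best_f = x, fx
    t = t0
    for _ in range(iters):
        candidate = x + rng.uniform(-step, step) * t  # move size shrinks as t cools
        fc = f(candidate)
        # always accept uphill moves; accept downhill ones with probability exp(delta/t)
        if fc >= fx or rng.random() < math.exp((fc - fx) / t):
            x, fx = candidate, fc
        if fx > best_f:
            best_x, best_f = x, fx
        t *= cooling  # the cooling schedule: resets get smaller over time
    return best_x, best_f
```

Because downhill moves are still accepted while the temperature is high, the search can escape local peaks that plain hill climbing would get stuck on.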

What this means is that when you first work on a problem, you shouldn’t spend too much time fiddling around with the details. Instead try a different approach.

And then later, when you’re fiddling around with the details, you should not go back and try a whole different approach.

There is a progression to this. You often see inexperienced researchers spend too much time on the details early on. Or maybe they have come very far and are already twiddling with the numbers when they find that their whole approach was wrong and they have to start over with an entirely different approach. That is very demoralizing. You have to do that exploration early on, before you ever get to the details.

Or sometimes people are just trying lots of different approaches and never pursuing any one approach seriously. Simulated Annealing says that you should walk up until you’re stuck. Don’t switch to a different approach until you’ve gotten stuck. (sometimes that takes too long and you end up spending weeks on one approach. In that case set a time limit and do a reset once a week or so)

So when you first get stuck, do big resets and try entirely different approaches. Then over time do smaller and smaller resets.

This also gives a natural end to your research. Once you’re done twiddling with the details you’re done, period. (you don’t need to go back and try an entirely different approach since you already explored those earlier)

This one is not an AI technique but my own observation. It’s also something that all good researchers do automatically, but it’s worth pointing out explicitly.

You want to be incremental. In the hill climbing analogy, imagine that there are several paths already carved into the mountains where previous researchers have made progress before you. You almost always want to start off from one of those paths. In fact some of those paths have become very wide because there are lots of researchers doing work up at the end of those paths, so the path is well-trod.

You may actually want to avoid paths that are too wide, but only if you are experienced already. If you are a grad student doing your first research, don’t stray too far from where others are.

The advice to be incremental may be disappointing because you want to invent the next big theory like general relativity or the next Internet or whatever. But the more you read about those inventions and how they came about, the more you realize that they were actually quite incremental. There are very few inventions that cannot be traced back to ideas that slowly accumulated and evolved over many years. Sometimes an idea seems really impressive to the outside world because to them it’s all new. But then you look at the author’s work and find that they had been quietly working on it incrementally for the last ten years.

You may think that the “be incremental” advice does not apply to the “ocean explorer” analogy of research, but you’d be wrong. Few good things have come from just setting off into completely uncharted territory. Usually you want to hop from island to island. The “Myth of the Objective” talk that I mentioned above strongly emphasizes how important stepping stones are for this kind of research. The results in their program couldn’t have come about if people hadn’t been able to build on top of each other’s results.

The AI technique for this is called Local Beam Search (the link is to “Beam Search” because it seems like Local Beam Search is never mentioned online…) which is a variant of hill climbing where we do several searches at the same time. That’s the whole trick. Programmers are not good at naming things.

Doing several searches at the same time is an easy thing to do for a computer, but it’s hard for a person. Still, I think we can get the same benefits without literally doing the searches at the same time. I’m going to quote from the book “Artificial Intelligence – A Modern Approach” (second edition) by Russell & Norvig to list the benefits:

In a local beam search, useful information is passed among the parallel search threads. […] The algorithm quickly abandons unfruitful searches and moves its resources to where the most progress is being made.
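The behavior the book describes can be sketched in a few lines (this is my own toy formulation, not code from the book): keep the k best states, expand all of them, then keep the overall k best successors, so effort automatically flows to whichever line of search is making progress.

```python
import heapq

def local_beam_search(f, starts, neighbors, iters=50):
    """Run len(starts) searches 'in parallel': expand every state in the
    beam, then keep only the best k states overall. Searches that stop
    producing good successors are abandoned automatically."""
    k = len(starts)
    beam = list(starts)
    for _ in range(iters):
        candidates = beam + [n for s in beam for n in neighbors(s)]
        beam = heapq.nlargest(k, candidates, key=f)
    return max(beam, key=f)
```

With three starting points and a single peak, e.g. f(x) = -(x - 10)², the two searches that started far from the peak are dropped as soon as the nearby one produces better successors, and all the beam slots go to climbing the good hill.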

So how do we get these benefits as a simple human who can’t do multiple searches at the same time? One thing we can do is keep track of the points where we walked down one path but could have taken another, and explore those other paths every once in a while. Since humans have to do this sequentially, it’s actually similar to the simulated annealing I mentioned above. But the idea would be to do multiple searches at once, where each of them follows the simulated annealing approach.

One other approach to this is to always have more than one project. Here is Robert Gallager talking about this in the context of a talk about Claude Shannon:

Be interested in several interesting problems at all times. Instead of just working intensely on one problem, have ten problems in the back of your mind. Be thinking about them, be reading things about them, wake up in the morning, and review whether there’s anything interesting about any of them. And invariably […] something triggers one of those problems and you start working on it.

Why is that so important? I would say that one of the most difficult things in trying to do research is “how do you give up on a problem?” So many students doing a thesis just beat themselves over the head for year after year after year saying “I’ve got to finish this problem, I said I was going to do it and I’m going to do it.” If it’s an experiment you’re going to do, yes you can do that. You can do it more and more carefully if something isn’t working you can fix it to the point where it works. [But] if you’re trying to prove some theorem and the theorem happens to be not true, then your chances of success are very low.

If you have these ten problems in the back of your mind and there is one problem that’s been driving you crazy, what are you going to do? It’s going to sink further and further back in your mind and just because you’ve had more interesting things to do, you’ve gotten away from it. After a year you won’t even remember you were working on it. It will have disappeared. That’s a far better thing than to reason it out and say “I don’t think I can go on any further with this because of this, this and this reason.” Because your problem is you don’t understand the problem well enough to understand why you oughta give up on it. So you just find that you can do other things which become more interesting temporarily.

I think this quote is spot on, but there is an additional benefit to working on several things at the same time: there is lots of cross-pollination between ideas. Also talking about Claude Shannon, this article has another good quote about this:

His information theory paper drew on his fascination with codebreaking, language, and literature. As he once explained to Bush:

“I’ve been working on three different ideas simultaneously, and strangely enough it seems a more productive method than sticking to one problem.”

A final way in which this helps is that research just takes time. Some things can’t be hurried. Working on multiple things at the same time allows you to work on one thing for a longer time. If one project needs to take ten years, there is no way that you can work on it full time for ten years. But if you also work on other things during those ten years, all of a sudden it’s doable.

I’m using Barbara Oakley’s terms from her Learning How to Learn online course.

The idea is that the brain has two distinct ways of working: The focused mode where you actively work on a problem, and the diffuse mode where you’re doing something else entirely but your subconscious is working on the problem. The diffuse mode is responsible for a lot of eureka stories, including the original one: Archimedes was stuck on a problem, trying to figure out whether a crown was pure gold or not. Then on a trip to a bath he is relaxing, mind drifting off, watching the water move, when suddenly the answer jumps into his head.

For me I also often get ideas like this while taking a shower. Some people say that physical exercise helps them get into the right mode. That doesn’t work for me, but long walks certainly do work. Some people say that they only get into this mode when sleeping and that they wake up with good ideas. That works very rarely for me, but maybe you have more luck with it. It seems like you need to be somewhat relaxed, your mind mostly idle. Then the background processes in your brain get to work and can form new connections that weren’t clear before. You have to stop thinking about the problem for a while and then an answer may drift back up from some deeper part of your brain that you don’t have direct access to.

The tricky part is that you can’t easily schedule what your brain is going to work on in the diffuse mode. Distractions like smartphones are really harmful, but even if you turn your phone off you can easily get into a mode where all you can think of is the latest controversy in the news. Rich Hickey talks about how he deals with this issue in his talk Hammock Driven Development: (he talks about sleep because for him a lot of this thinking happens while sleeping)

So imagine somebody says “I have this problem…” and you look at it for ten minutes and go to sleep. Are you going to solve that problem in your sleep? No. Because you didn’t think about it hard enough while you were awake for it to become important to your mind when you’re asleep. You really do have to work hard, thinking about the problem during the day, so that it becomes an agenda item for your background mind. That’s how it works.

For me sleeping doesn’t work, but I’ve certainly found the same thing to be true when going for long walks. If the last thing that I did before the walk was check the news, my brain will keep on going back to whatever I was reading. If the last thing was that I worked really hard on a problem, I may have a chance of finding a solution to the problem while going for a walk. (can’t force it though, you need to allow your mind to drift off and then drift back to the topic)

This is no guarantee of success. Oftentimes the solution of the diffuse mind doesn’t actually work. Or it’s just one step of the solution, and after you take that one step you’re just as stuck as you were before. But any time that I’m making really good progress, it’s a combination of focused mode and diffuse mode work.

This is another one that is so basic that it’s rarely stated, but I sometimes see people confused about this. For example I was going through the book reviews of Kenneth Stanley’s book about how objectives can be harmful, and one person said that the book is clearly wrong because there are studies about how helpful goal setting is, with a link to this article. And since I believe in proving myself wrong, I naturally read that article. It turns out that the professor mentioned in that article, Jordan Peterson, has made his course available on Youtube. And if you listen to what he actually says about setting goals, it’s a lot more compatible with Kenneth Stanley’s idea:

You don’t get something you don’t aim at. That just doesn’t work out. So lots of people aim at nothing and that’s what they get. So if you aim at something you have a reasonable crack at getting it. You tend to change what you’re aiming at a bit along the way, because like, what do you know? You aim there, you’re wrong. But you get a little closer. And then you aim there, and you’re still wrong. You get a little closer and you aim there, and as you move towards what you’re aiming at, your characterization of what to aim at becomes more and more sophisticated. So it doesn’t really matter if you’re wrong to begin with as long as you’re smart enough to learn on the way, and as long as you specify a goal.

This is spot on as far as I’m concerned. You need a goal to start with. But as you travel towards that goal, you may find reasons to change the goal and you shouldn’t be afraid to do that if you have a good reason. He goes on to say that it’s OK to specify a vague goal as long as you’re going to refine that goal along the way. (I think there is also some connection here to Scott Adams’ theory of “using systems instead of goals” but I haven’t thought that through)

Kenneth Stanley developed an algorithm called “Minimal Criterion Novelty Search” after his discovery about how harmful it can be to aim for a goal too rigidly. Novelty Search just tries to visit as many different places in the search space as possible, meaning it generates novel approaches to whatever problem you’re working on. It doesn’t matter if those novel approaches don’t look like they would solve the problem. The “Minimal Criterion” part says that the novel behaviors should still perform above some minimum threshold like “don’t get eaten by predators before you reproduce.” You can define your own minimal criterion for your problem, but it shouldn’t be very challenging to overcome. He has then shown that for tricky problems, novelty search is better than goal-oriented search because novelty search doesn’t go for the goal and doesn’t get stuck on whatever the “trick” is. It just tries to reach as many different points as possible and will eventually automatically find its way around the trick.
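A rough sketch of the idea (a heavily simplified toy version of my own, not Stanley’s actual implementation): score candidates by how far their behavior is from everything in an archive of behaviors seen so far, and only admit candidates that pass the minimal criterion. The objective never appears anywhere.

```python
import random

def novelty(behavior, archive, k=3):
    """Novelty = mean distance to the k nearest behaviors seen so far."""
    if not archive:
        return float("inf")
    dists = sorted(abs(behavior - b) for b in archive)
    return sum(dists[:k]) / min(k, len(dists))

def mc_novelty_search(mutate, behavior_of, meets_criterion, seed, generations=100, rng=None):
    """Never look at the objective. Each generation, keep whichever viable
    offspring is most novel and record its behavior in the archive."""
    rng = rng or random.Random(0)
    archive, genome = [], seed
    for _ in range(generations):
        offspring = [mutate(genome, rng) for _ in range(10)]
        viable = [g for g in offspring if meets_criterion(g)]  # minimal criterion
        if not viable:
            continue
        genome = max(viable, key=lambda g: novelty(behavior_of(g), archive))
        archive.append(behavior_of(genome))
    return archive
```

Run on a trivial one-dimensional “behavior” (genome and behavior being the same number, mutation adding a little noise, and the criterion just keeping the value in bounds), the archive steadily spreads out over the space instead of converging on any one point.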

The diffuse mode from the last section is a good way to get unstuck. But it’s a bit unreliable, and it needs to be fed. You need to work on the problem hard before the diffuse mode can provide you with a solution. But how do you do that if you’re stuck? And are there any more direct ways to get unstuck? In the hill climbing analogy, imagine you have come across a steep cliff and all you can see is ways back down or sideways.

Some of the advice here is to find good sideways steps that have helped others; other advice is about discovering as many sideways steps as you can; still other advice is about finding new starting points. It’s all about making movement in the hope that you will come across some hidden path that can take you up again.

This is the Feynman Algorithm for solving problems:

- Write down the problem.
- Think real hard.
- Write down the solution.

The algorithm is of course a joke, because Richard Feynman made physics look so easy. Except that a friend of mine once said that the Feynman algorithm actually worked for him. Since then I have tried it a few times and it has really helped me, too. The important step seems to be step 1: write down the problem. Sometimes we seem to be stuck, but we’re not actually all that clear about what exactly we’re stuck on. Putting it into writing forces us to consider what exactly the problem is, and sometimes just doing that is enough. If it’s not, step 2 has also brought me to the solution: literally just sitting there staring at the formulation of the problem on the paper. It seems unlikely, but sometimes it works. (because you never actually thought about the explicitly stated problem)

A related solution from computer science is Rubber Duck Debugging. The idea is that if you’re completely stumped trying to figure out a bug in your code, sometimes it helps to explain it to somebody else. That other person doesn’t actually have to understand what you’re talking about. It just helps to talk through the problem. So a rubber duck is good enough for this.

I have to confess that I don’t find it easy to talk to a rubber duck, so I usually try explaining my current problem to my girlfriend. She doesn’t know a whole lot about computer science, but if I say that “I just need to talk through this problem once” then she will usually make an effort to listen. It’s also a good exercise to try to explain the problem in a way that somebody who is not familiar with algorithms and data structures can understand. The goal isn’t really to get her to understand it, but to get myself to talk about the problem fully enough that she could understand it.

Oftentimes that’s all it takes to find the thing that you forgot to check.

One thing that has really helped me on this is George Polya’s book “How to Solve It” which makes you ask yourself these questions:

What is the unknown? What are the data? What is the condition? Is it possible to satisfy the condition? Is the condition sufficient to determine the unknown? Or is it insufficient? Or redundant? Or contradictory?

Draw a figure. Introduce suitable notation.

Separate the various parts of the condition. Can you write them down?

It’s an exercise in getting good at “stating the problem,” and going through it explicitly somehow helps. Polya also points out that sometimes you’re stuck simply because you lost sight of the goal, which is another explanation for why stating the problem helps you get unstuck.

Polya also has a list of proverbs in his book that I will quote sometimes, here are his proverbs for this one:

Who understands ill, answers ill. (who understands the problem badly, answers it badly)

Think on the end before you begin.

A fool looks to the beginning, a wise man regards the end.

A wise man begins in the end, a fool ends in the beginning.

Here is Claude Shannon about simplifying problems:

Almost every problem that you come across is befuddled with all kinds of extraneous data of one sort or another; and if you can bring this problem down into the main issues, you can see more clearly what you’re trying to do and perhaps find a solution. Now, in so doing, you may have stripped away the problem that you’re after. You may have simplified it to a point that it doesn’t even resemble the problem that you started with; but very often if you can solve this simple problem, you can add refinements to the solution of this until you get back to the solution of the one you started with.

And here is Robert Gallager again, about an experience when he had a complex problem and asked Claude Shannon for help:

He looked at it, sort of puzzled, and said, ‘Well, do you really need this assumption?’ And I said, well, I suppose we could look at the problem without that assumption. And we went on for a while. And then he said, again, ‘Do you need this other assumption?’ And he kept doing this, about five or six times. At a certain point, I was getting upset, because I saw this neat research problem of mine had become almost trivial. But at a certain point, with all these pieces stripped out, we both saw how to solve it. And then we gradually put all these little assumptions back in and then, suddenly, we saw the solution to the whole problem.

Another thing I like to do for this is to solve the problem for one case. Instead of trying to attack the general problem, pick a simple case and solve it. Then another. Then another. Then another. Then look for patterns. Don’t look for patterns until you’ve solved three or four specific cases. The cases I usually look at are “what if these are all zero?” Or “what if this always takes the same amount of time?” Or “what if everybody wants the exact same thing?” And then further questions are small variations on that like “what if these are all 1? Or what if these are all zero except for that variable?” Or “what if these take different amounts of time but they start at regular intervals?” Or “what if everybody wants the exact same thing except for that one special case?”
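As a toy illustration of that workflow (my own example, not one from the text): brute-force a few specific cases first, only look for the pattern afterwards, and then check the conjecture against many more cases.

```python
def brute(n):
    # the "specific case" solver: no cleverness, just add the first n odd numbers
    return sum(2 * i + 1 for i in range(n))

# solve three or four specific cases before looking for patterns...
cases = [brute(n) for n in range(1, 6)]   # [1, 4, 9, 16, 25]

# ...then conjecture a closed form and test it against many more cases
conjecture = lambda n: n * n
assert all(brute(n) == conjecture(n) for n in range(1, 200))
```

The point is the order of operations: the conjecture only becomes obvious once a handful of concrete cases are sitting in front of you.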

Polya in “How to Solve It” of course also has questions for this:

If you cannot solve the proposed problem try to solve first some related problem. Could you imagine a more accessible related problem? A more general problem? A more special problem? Keep only a part of the condition, drop the other part; how far is the unknown then determined, how can it vary? Could you change the unknown or the data, or both if necessary, so that the new unknown and the new data are nearer to each other?

There is a subtle benefit to simplifying the problem which I’ll explain using the concept of “overfitting” from machine learning. Overfitting happens when your algorithm didn’t really learn the underlying pattern, but just memorized all the training examples. (and then doesn’t work on new examples) Overfitting means that you learned both the signal and the noise. One way to make overfitting less likely is to simplify or to generalize because simplifying the problem reduces the noise in the problem. (I will talk about generalizing further down) This is a bit of an abstract concept and probably deserves a fuller discussion (particularly because some simplifications actually increase your risk of overfitting) but for now I just want to say that solving a simplified problem can reveal broader truths than solving a complex problem, so don’t feel bad for simplifying. It can have real benefits.
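To make the overfitting point concrete, here is a deliberately extreme toy example (entirely my own, not from any referenced work): a model that memorizes every training example gets a perfect training score because it learned the noise along with the signal, while a much simpler model generalizes to new data.

```python
import random

rng = random.Random(0)
# the underlying signal is y = 2x; gaussian noise is added on top
train = [(x, 2 * x + rng.gauss(0, 1)) for x in range(10)]
test = [(x, 2 * x + rng.gauss(0, 1)) for x in range(10, 20)]

# overfit model: memorize every training example (learns signal *and* noise)
table = dict(train)
memorizer = lambda x: table.get(x, 0.0)

# simple model: y = a*x with a single least-squares coefficient
a = sum(x * y for x, y in train) / sum(x * x for x, _ in train)
rule = lambda x: a * x

def mse(model, data):
    return sum((model(x) - y) ** 2 for x, y in data) / len(data)

# the memorizer has zero training error but fails badly on new data;
# the simple rule has some training error but generalizes
```

The memorizer is the extreme case of learning "both the signal and the noise": its training error is exactly zero, and that is precisely why it tells you nothing about unseen data.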

We already had Polya’s advice of “Draw a figure. Introduce suitable notation” above, but this goes further. We can often use the visual processes in our brain to solve problems.

This applies to many more problems than math problems. Lots of math has geometric interpretations, but so do other fields. You can draw diagrams or plots or maps or simplified sketches or any number of other things.

One trick is to try to visualize as much data as possible. Draw scatter plots. Then draw small multiples of scatter plots. Add layers, colors, work at different scales, anything that allows you to show more data without confusion. Let your eye do the filtering later. While we would have a hard time dealing with thousands of numbers in writing, we have a very easy time finding patterns in thousands of numbers in a scatter plot. Here is a quote from Edward Tufte’s Envisioning Information (page 50):

We thrive in information-thick worlds because of our marvelous and everyday capacities to select, edit, single out, structure, highlight, group, pair, merge, […] focus, organize, condense, reduce, […] categorize, catalog, […] isolate, discriminate, distinguish, […] filter, lump, skip, smooth, chunk, average, approximate, cluster, aggregate, outline, summarize, itemize, review, dip into, flip through, browse, […].

Visual displays rich with data are not only an appropriate and proper complement to human capabilities, but also such designs are frequently optimal. If the visual task is contrast, comparison and choice – as it so often is – then the more relevant information within eyespan, the better. Vacant, low-density displays, the dreaded posterization of data spread over pages and pages, require viewers to rely on visual memory – a weak skill – to make a contrast, a comparison, a choice.

A common theme of Tufte is that we are really good at looking at lots of data. It’s not good to only show parts of the data at a time. Better to show as much as possible, and people will focus on what they want.

Except of course it’s not quite that simple, because if you just show as much as possible, you often end up with an unreadable mess. Edward Tufte’s books are all about showing as much as possible without having a mess on your hands. But really you can get pretty far by just trying and iterating on your visualizations. Try combining visualizations, then try separating them. Try looking at multiple things next to each other. Try zooming out or zooming in, etc.

This is one of the main points in Polya’s “How to Solve It.” He thinks mobilizing prior knowledge is one of the most important things you can do. To do this you of course have to be fluent in the field that you’re researching. Here are his questions related to this:

Have you seen it before? Or have you seen the same problem in a slightly different form?

Do you know a related problem? Do you know a theorem that could be useful?

Look at the unknown! And try to think of a familiar problem having the same or a similar unknown.

Here is a problem related to yours and solved before. Could you use it? Could you use its result? Could you use its method? Should you introduce some auxiliary element in order to make its use possible?

This is also a point where it’s useful to work on several things at the same time. Because somehow it seems that formulas or methods or insights from one area often apply in a different area. I don’t know why that is. Maybe there are only a finite number of concepts and connections between them, so we see the same concepts in several fields. (whatever explanation we come up with would also have to explain why garbage-can decision making works so well, so my explanation isn’t very good…) Here is Feynman talking about this:

[After deriving the conservation of angular momentum from the laws of gravity]. And thus we can roughly understand the qualitative shape of the spiral nebulae. We can also understand in the same way the way a skater spins when he starts with his leg out, moving slowly, and as he pulls the leg in he spins faster.

But I didn’t prove it for the skater. The skater uses muscle force. Gravity is a different force. Yet it’s true for the skater. Now we have a problem. We can deduce, often, from one part of physics, like the law of gravitation, a principle which turns out to be much more valid than the derivation.

[…]

So we have these wide principles which sweep across all the different laws. And if one takes too seriously these derivations, and feels that “this is only valid because this is valid” you can not understand the interconnections of the different branches of physics. Some day, when physics is complete, then all the deductions will be made. But while we don’t know all the laws, we can use some to make guesses at the theorems which extend beyond the proof. So in order to understand the physics one must always have a neat balance and contain in his head all the various propositions and their interrelationships because the laws often extend beyond the range of their deductions.

(edited heavily for brevity)

This is true across different parts of physics, and it’s also true across entirely different fields. But I should also state that most of the time, the “related problems” you want to look at are going to be pretty close by. Polya gives examples like “to find the center of mass of a tetrahedron, see if you can use the method of the simpler related problem of finding the center of mass of a triangle.”

But I also want to bring it back to Polya’s idea of mobilizing prior knowledge: There is a lot of evidence that in most fields, the main difference between experts and novices is how much experience or knowledge of the field they have, and how good they are at organizing this knowledge. This comes out of Kahneman’s and Tversky’s work with expert firemen, but also from research about chess grandmasters. The better somebody gets at chess, the more they use their memory. (as measured by brain activity) So you have to build that pool of knowledge. You have to know lots of related problems and you have to be able to draw connections to them.

This is closely related to the previous point, and it’s also something that I can find plenty of quotes for. Here is Claude Shannon for example:

Another approach for a given problem is to try to restate it in just as many different forms as you can. Change the words. Change the viewpoint. Look at it from every possible angle. After you’ve done that, you can try to look at it from several angles at the same time and perhaps you can get an insight into the real basic issues of the problem, so that you can correlate the important factors and come out with the solution.

Polya’s questions about this topic are simpler in that they are simply “Can you restate the problem? Could you restate it still differently? Go back to definitions.”

Polya then goes on to list several reasons for why this helps. One is that a different approach to the problem might reveal different associations, allowing us to find other related problems. (see the point above) A second reason I will just quote:

We cannot hope to solve any worth-while problem without intense concentration. But we are easily tired by intense concentration of our attention upon the same point. In order to keep the attention alive, the object on which it is directed must unceasingly change.

If our work progresses, there is something to do, there are new points to examine, our attention is occupied, our interest is alive. But if we fail to make progress, our attention falters, our interest fades, we get tired of the problem, our thoughts begin to wander, and there is danger of losing the problem altogether. To escape from this danger we have to

set ourselves a new question about the problem. The new question unfolds untried possibilities of contact with our previous knowledge, it revives our hope of making useful contacts. The new question reconquers our interest by varying the problem, by showing some new aspect of it.

See also the point about Simulated Annealing above which says that you should frequently try new approaches. But the size of the change that you make should differ over time.

And finally here is Feynman talking about the same idea. When talking about what if several theories have the same mathematical consequences, he says that “every theoretical physicist that’s any good knows six or seven different theoretical representations for exactly the same physics and knows that they’re all equivalent, and that nobody is ever going to be able to decide which one is right, but he keeps them in his head hoping that they will give him different ideas.” As for how they may help he says that a simple change in one approach may be a very different theory than a simple change in a different approach. And that changes which look natural in one theory may not look natural in another.

This one is related to some of the points that I made in “draw a picture” above, but it’s also worth talking about the data separately, without the context of a picture. To start off with here are Polya’s questions related to this topic:

Did you use all the data? Did you use the whole condition? Have you taken into account all essential notions involved in the problem?

Could you derive something useful from the data? Could you think of other data appropriate to determine the unknown? Could you change the unknown or the data, or both if necessary, so that the new unknown and the new data are nearer to each other?

One thing I would like to point out here is that there are many ways to organize data. I have literally given talks where all I did was take existing data and organize it in a different way to put emphasis on different conclusions. The original authors had organized their data by category; I organized it by strength of correlation. There are many ways to sort, filter, group or abstract data, and there are often many different insights to be gained depending on how you go about doing this.

Polya is referring to something else here though. For him the “data” are the information given for a problem. His example problem for this question is “We are given three points A, B and C. Draw a line through A which passes between B and C and is at equal distance to B and C.” And his point is that after drawing a picture of the dots with the desired line, the solution comes almost automatically if you just draw lines using all the available data. (the points A, B and C, as well as the desired line) So for your problem the data may just be any available information.

Changing the data can mean a lot of things from “collect more information” to “if I assume that this variable is always 0, would that simplify the problem?” And changing the unknown means that if the data suggests a different goal, at least consider that other goal. Maybe it’s a better goal than what you were looking for.

Richard Feynman was famous for this because he said that his Nobel prize came directly from playing around with physics. Here is the quote for that and it’s a great read. (unfortunately too long to be included in this blog post)

In the Feynman quote playing around means investigating a problem that has no practical applications. But you can even do this within a problem. You can play around with equations. Do random substitutions. See what the consequences would be if you cube a variable rather than squaring it. Take the equations in a circle and back to where they started. Do anything that you are curious about. You can play around with experiments. If you are working on some variable and normal values are in the range from 20 to 30, then try the values 1 and 100, just to see what happens. If nothing bad happens, try the values 0.1 and 1000. Antibiotics were discovered because an experiment went wrong and Alexander Fleming reacted with curiosity rather than frustration.
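The “try values far outside the normal range” idea can be sketched in a few lines. This is a hypothetical illustration, not a recipe: `run_experiment` stands in for whatever measurement you actually make, and the widening factor is arbitrary.

```python
def probe_extremes(run_experiment, low, high, factor=10, rounds=2):
    """If normal values are in [low, high], probe far outside that range,
    widening by `factor` each round, just to see what happens."""
    results = {}
    for r in range(1, rounds + 1):
        for value in (low / factor ** r, high * factor ** r):
            results[value] = run_experiment(value)
    return results

# Normal range 20..30: round 1 probes 2 and 300, round 2 probes 0.2 and 3000.
observations = probe_extremes(lambda v: v * v, 20, 30)
```

The point is not the code but the habit: make the out-of-range probe cheap enough that curiosity wins.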

Here is a quote from Carver Mead that is in a similar vein to the Feynman story above:

John Bardeen was just the most unassuming guy. I remember the second seminar Bardeen gave at Caltech — I think it was just after he got his second Nobel Prize for the BCS theory, and it was some superconducting thing he was doing. He had one grad student working on it and they were working on this little thing, and he gave his whole talk on this little dipshit phenomenon that was just this little thing. I was sitting there in the front row being very jazzed about this, because it was great; he was still going strong.

So on the way out, people were talking and one of my colleagues was saying, “I can’t imagine, here’s this guy that has two Nobel Prizes and he’s telling us about this dipshit little thing.” I said, “Don’t you understand? That’s how he got his second Nobel Prize.” Because most people, one Nobel Prize will kill them for life, because nothing would be good enough for them to work on because it’s not Nobel Prize–quality stuff, whereas if you’re just doing it because it’s interesting, something might be interesting enough that it’s going to be another home run. But you’re never going to find out if all you think about is Nobel prizes.

This one is connected to the point about “use a related problem” above, but there is additional value to be gained from reading a related paper that I haven’t talked about above.

Reading a related paper is especially valuable if you try to reproduce the related paper. For me that’s often easy to do in computer science because I can implement the program. If it’s hard to do in your field, don’t be afraid to take shortcuts. (potentially huge shortcuts) You’re not trying to verify the paper, the value of the exercise actually comes from walking in other people’s shoes for a while. See what they did and why they did it. Criticize their ideas and their approach.

If you start from the other paper’s starting point, you will come across plenty of opportunities to do things differently. Maybe one of those different paths can give you an idea for your problem. And different starting points run into different problems, which sometimes allows you to dodge a problem that you ran into. Meaning the problem literally doesn’t even show up just because you came from a different angle.

Another thing I like to do is read old papers. You will be surprised at which alternatives they explored back then. (whatever “back then” means for your field) When a field is young, people are more open-minded. Often, the old alternative theories are obviously ridiculous now, but sometimes there are ideas there that should be revisited. Even if you don’t come across anything like that, I still just get random ideas from exposing myself to naive (but smart) ways of thinking about the problem.

Just as reading a paper is a good exercise for getting a different viewpoint, so is starting from the end. The AI method for this is called bidirectional search, and there are real mathematical reasons for why this helps. Here is the picture for bidirectional search from Russell & Norvig’s “Artificial Intelligence – A Modern Approach”:

To explain this image, imagine we have no idea where the goal is. So we start branching out from the start point, exploring all directions. The longer this keeps going, the bigger the area we have to explore and the more we slow down. If we also search from the goal, we can cut that time down dramatically. Instead of having to make one very big circle, we can make two small circles. In this picture the circles are about to touch, and as soon as they touch it’s an easy exercise to connect them and draw a single path from the start to the goal.

With this picture in mind you can also see why so much of the advice above is about finding different starting points: with multiple starting points, chances are good that the circles can be even smaller. The further we move from a starting point, the more the expansion slows down (because the number of paths grows proportionally to the area, which grows with the square of the radius), so you want to be incremental (pick a goal that’s not too far away) and you may want to try multiple starting points.
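To make the two-circles idea concrete, here is a minimal sketch of bidirectional breadth-first search. This is my own toy version, not the book’s code: grow one BFS “circle” from each end, one layer per round, and stop as soon as a node shows up on both sides.

```python
from collections import deque

def bidirectional_search(start, goal, neighbors):
    """Grow one BFS circle from each end; stop when they touch."""
    if start == goal:
        return [start]
    from_start = {start: None}   # node -> parent on the start side
    from_goal = {goal: None}     # node -> parent on the goal side
    frontier_s, frontier_g = deque([start]), deque([goal])

    def expand_one_layer(frontier, seen, other):
        for _ in range(len(frontier)):
            node = frontier.popleft()
            for nxt in neighbors(node):
                if nxt in seen:
                    continue
                seen[nxt] = node
                if nxt in other:     # the two circles just touched
                    return nxt
                frontier.append(nxt)
        return None

    while frontier_s and frontier_g:
        meet = expand_one_layer(frontier_s, from_start, from_goal)
        if meet is None:
            meet = expand_one_layer(frontier_g, from_goal, from_start)
        if meet is not None:
            # Stitch the two half-paths together at the meeting point.
            path, node = [], meet
            while node is not None:
                path.append(node)
                node = from_start[node]
            path.reverse()
            node = from_goal[meet]
            while node is not None:
                path.append(node)
                node = from_goal[node]
            return path
    return None  # the two searches can never meet

# Toy graph: the integers, where each number connects to its two neighbors.
path = bidirectional_search(0, 10, lambda n: [n - 1, n + 1])
```

Each side only has to grow a circle of roughly half the radius, which is exactly where the savings in the picture come from.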

Now strictly speaking, bidirectional search is not a valid thing to do when hill climbing, because in hill climbing we have no idea where the goal is. But usually when doing research you have at least some idea of what you’re looking for or what you expect to find. Or you have some idea of what would overcome the current thing that you’re stuck on. Sometimes it helps to just make goals up. Meaning literally say “it would be really helpful if X was true” and then work backwards, trying to figure out what you would need to make X true. Making good guesses about which points are worth working backwards from takes practice.

We all work at some level of abstraction, but sometimes you need to dive deeper and get into the lower levels. Meaning you need to take apart the machine that you’re working with and put it back together. Hook the sensors up directly to your computer instead of a separate display. (so you can write your own display code) Step through the lower level code. Write your own version of the lower level code. Multiply the equations all the way out. Run through them with real world numbers instead of using abstract symbols.

Meaning do the work that the people who provided you with your tools did.

Here is Bob Johnstone talking about Nobel laureate Shuji Nakamura:

Modifying the equipment was the key to his success… For the first three months after he began his experiments, Shuji tried making minor adjustments to the machine. It was frustrating work… Nakamura eventually concluded that he was going to have to make major changes to the system. Once again he would have to become a tradesman, or rather, tradesmen: plumber, welder, electrician — whatever it took. He rolled up his sleeves, took the equipment apart, then put it back together exactly the way he wanted it. …

Elite researchers at big firms prefer not to dirty their hands monkeying with the plumbing: that is what technicians are paid for. If at all possible, most MOCVD researchers would rather not modify their equipment. When modification is unavoidable, they often have to ask the manufacturer to do it for them. That typically means having to wait for several months before they can try out a new idea.

The ability to remodel his reactor himself thus gave Nakamura a huge competitive advantage. There was nothing stopping him; he could work as fast as he wanted. His motto was: Remodel in the morning, experiment in the afternoon. …

Previously he had served a ten-year self-taught apprenticeship in growing LEDs. Now he had rebuilt a reactor with his own hands. This experience gave him an intimate knowledge of the hardware that none of his rivals could match. Almost immediately, Nakamura was able to grow better films of gallium nitride than anyone had ever produced before.

(from Brilliant!: Shuji Nakamura And the Revolution in Lighting Technology, p 107)

I want to caution against being too eager about this. You can waste huge amounts of time diving into the deeper levels. There is an infinite amount of work down there, and there are reasons why we work at higher levels. The approach for this is to do the smallest dive possible. Only if that doesn’t work should you dive into the lower levels for longer amounts of time. (the quote above mentions that Shuji Nakamura was frustrated for three months before he decided to dive deeper. That sounds like a reasonable amount of time)

A related problem is that sometimes you need to doubt the lower levels, though you have to be especially careful about this. It does happen that the lower-level formulas are wrong about something. Even the laws of physics still have holes in them which we have to fill up with Dark Matter and Dark Energy. That doesn’t mean that you should immediately question those laws of physics. You should do the smallest intervention possible and dive one level down. Don’t ever skip levels. Meaning first question whether something in your experiment is wrong. Then question whether your equipment is wrong, then maybe question whether a formula from a previous paper is wrong, then slowly work your way down. Only if no higher-level mistake can explain your observations should you keep on diving deeper. Think of it as detective work. There are heuristics for what to doubt (“how many known problems does this have?” “how much would break if this changed?”) but you will often follow the heuristics automatically if you just work one level at a time. In computer science this still happens with some regularity, and here is a good read about somebody who did this properly and slowly worked their way through every level until they could conclude that they had found a hardware bug.

Sometimes it helps to show your unfinished idea to someone who is going to hate it. It’s a very unpleasant experience. But if you do this you will hear all the many reasons why your idea can’t possibly work and why you should just abandon it right now. This can do two things: 1. It can actually increase your resolve to fix the problem. (I’ll show that idiot who thinks this can’t be done) 2. It brings up areas that you have avoided so far. Somehow, people who hate your idea are really good at finding open wounds that they can drive their thumb into to hurt you. Often those open wounds are what you have avoided even though it’s exactly what you should be working on, as unpleasant as that may be. It sucks when somebody tells you “your idea sucks because it can’t deal with X” because you suspect that it’s true and you have unconsciously avoided dealing with X so far. But it can feel great when you then go back and finally tackle X, and it turns out that you find a really elegant way to solve that problem, proving the idiot hater wrong and making some progress while you’re at it.

The Internet is a great source for this kind of negativity. Sometimes coworkers and friends can identify your problem spots in a nicer way, but the problem with coworkers and friends is that they often have the same mindset as you. You can avoid that by asking new people in your group for advice. You have to get them when they’re still in the “why the heck do we do it like this?” stage, before they have advanced to the acceptance of the “this is just how we do things here” stage. So it’s tricky. (the two stages may not be this obvious) The most reliable way to get criticism is to ask someone who will hate your idea.

I started this section off by making this sound totally sucky. Because it usually is, and to do this you have to be ready for the unpleasant emotions. But this can actually be a more or less sucky experience, depending on who you get the criticism from. When you’re on one side of an argument, it’s easy to find someone on the other side who is a bit of an idiot, and then you point and laugh and say “look at how much of an idiot they are on the other side.” That is easier for you to do, but it’s harder to learn from. You have to put in more work to understand their point. And even though you’ll dismiss it, it will still negatively affect your mood. The better way to go is to find a smart person on the other side who can articulate themselves well. Ideally they can even state your viewpoint pretty well and still tell you why their side is right. It’s easier to learn from those people, but you won’t naturally seek them out because you’ll learn all the parts where you are wrong.

If you’re out of all other options, sometimes you should just do something stupid.

Do something that would never work. Do something that might work, but it’s obviously inefficient or inelegant. Add five special cases. Do something hand-wavy that would never survive peer-review. Assume something that you can’t justify assuming. Do something where you already know three cases where it won’t work. Sometimes those surprise you by unexpectedly working or by giving you an answer that is almost right.

Do you have an idea that probably won’t help and it involves going through fifty cases that take an hour of tedious work each? Sometimes you just gotta do it, even if it probably won’t help. Repetition helps understanding, so maybe you will discover a new angle. Or maybe you will find ways to automate the work.

If all of the other advice for getting unstuck hasn’t helped, doing something stupid can help. In the hill climbing analogy it’s taking a step downhill. Or spending way too much work on a sideways step. The idea is to specifically do what you have tried to avoid doing. Obviously don’t do this as your first attempt at getting unstuck.

If after this last point you’re still stuck, maybe try being more incremental. Maybe the thing you’re trying to do is just not ready to be tackled yet. Find a half-way goal and aim for that. Otherwise I’ll talk about making progress next, and there may be more hints there.

In this part I will talk about the normal day to day things that you should do all the time. Why didn’t I put this before the “getting unstuck” section? Because getting unstuck is more interesting and now that I have your attention, I can spend it on making you read things that you should do every day.

Polya’s book “How to Solve It” has a chapter called “Wisdom of Proverbs” in which he talks about some of these always applicable things using proverbs. I kinda like that. It’s cute. So I will quote his proverbs when appropriate.

This one is trivial, because this is what we have been talking about for the whole list. Going up means taking one step towards your goal.

Even though this is obvious, I often catch myself doing this wrong. I’ll be thinking so much about all the possible paths I could take and which problems I would encounter where that I never actually end up taking a step. For me as a programmer a step may just mean “start writing some code.” (and don’t worry too much about organizing for now) Or it may just mean “work through a few cases,” or anything that gets you to actually do something as opposed to just thinking about it. Doing helps with thinking. I’ve found that solutions just come automatically as soon as I start working. Half the problems I worried about never actually show up. Half of the remaining problems end up being simple. Just start with a step that seems to go uphill. (there’s actually an AI technique for this called Stochastic Hill Climbing which relies on the same insight: sometimes it’s too much work to find the best path and you should just choose any path)
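Here is a quick sketch of what stochastic hill climbing looks like (my own toy illustration, not a reference implementation): instead of evaluating every neighbor to find the steepest uphill move, take the first randomly chosen neighbor that improves at all.

```python
import random

def stochastic_hill_climb(f, x, neighbors, max_steps=10_000):
    """Climb by taking the first uphill neighbor found in random order,
    rather than searching for the best one."""
    for _ in range(max_steps):
        candidates = list(neighbors(x))
        random.shuffle(candidates)
        for c in candidates:
            if f(c) > f(x):
                x = c        # any step uphill will do
                break
        else:
            return x         # no uphill neighbor left: a (local) maximum
    return x

# Toy example: maximize -(x - 7)^2 over the integers, starting at 0.
best = stochastic_hill_climb(lambda x: -(x - 7) ** 2, 0,
                             lambda x: [x - 1, x + 1])
```

The inner loop never asks “which step is best?”, only “does this step go up?” — which is exactly the “just start doing something” advice in algorithmic form.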

Being lucky is a skill that you can learn. And it’s actually a fairly easy skill to learn. That may sound surprising to some people (especially to unlucky people) but it’s true. Here, Richard Wiseman writes about his research into luck. He found people who thought of themselves as especially lucky or especially unlucky and asked them a lot of questions. Here are a few excerpts from the article that give you a good idea of what he found:

I gave both lucky and unlucky people a newspaper, and asked them to look through it and tell me how many photographs were inside. On average, the unlucky people took about two minutes to count the photographs, whereas the lucky people took just seconds. Why? Because the second page of the newspaper contained the message: “Stop counting. There are 43 photographs in this newspaper.” This message took up half of the page and was written in type that was more than 2in high. It was staring everyone straight in the face, but the unlucky people tended to miss it and the lucky people tended to spot it.

For fun, I placed a second large message halfway through the newspaper: “Stop counting. Tell the experimenter you have seen this and win £250.” Again, the unlucky people missed the opportunity because they were still too busy looking for photographs.

[…]

And so it is with luck – unlucky people miss chance opportunities because they are too focused on looking for something else. They go to parties intent on finding their perfect partner and so miss opportunities to make good friends. They look through newspapers determined to find certain types of job advertisements and as a result miss other types of jobs. Lucky people are more relaxed and open, and therefore see what is there rather than just what they are looking for.

My research revealed that lucky people generate good fortune via four basic principles. They are skilled at creating and noticing chance opportunities, make lucky decisions by listening to their intuition, create self-fulfilling prophesies via positive expectations, and adopt a resilient attitude that transforms bad luck into good.

[…]

In the wake of these studies, I think there are three easy techniques that can help to maximise good fortune:

- Unlucky people often fail to follow their intuition when making a choice, whereas lucky people tend to respect hunches. Lucky people are interested in how they both think and feel about the various options, rather than simply looking at the rational side of the situation. I think this helps them because gut feelings act as an alarm bell – a reason to consider a decision carefully.
- Unlucky people tend to be creatures of routine. They tend to take the same route to and from work and talk to the same types of people at parties. In contrast, many lucky people try to introduce variety into their lives. For example, one person described how he thought of a colour before arriving at a party and then introduced himself to people wearing that colour. This kind of behaviour boosts the likelihood of chance opportunities by introducing variety.
- Lucky people tend to see the positive side of their ill fortune. They imagine how things could have been worse. In one interview, a lucky volunteer arrived with his leg in a plaster cast and described how he had fallen down a flight of stairs. I asked him whether he still felt lucky and he cheerfully explained that he felt luckier than before. As he pointed out, he could have broken his neck.

I can’t overstate how important this stuff is. Half of the advice from this blog post is due to me being lucky. For example, the way I found Kenneth Stanley’s great talk “Why Greatness Cannot Be Planned: The Myth of the Objective” was that I was following Bret Victor on Twitter (or maybe it was from the RSS feed of his quotes page) because he is a constant source of new perspectives. That led me to watch this talk by Carver Mead about a new theory of gravity. Which I watched even though I have no reason at all to look into this. I barely know any physics. But come on, a new theory of gravity. And it’s supposed to be simpler than Einstein’s theory while still making all the same predictions. That’s interesting. Then I went to find out more about the conference where that talk was given and finally stumbled onto Kenneth Stanley’s talk.

None of these steps had any obvious practical benefit for me, but they led me to a great talk, which coincidentally has the best demonstration I have ever seen of why you should behave in exactly this way.

Being lucky can mean that you never actually find what you’re looking for. You may find something else entirely. The list of scientific discoveries that were made “accidentally” is long. But you need to learn to be lucky, otherwise you will miss those chances when you encounter them.

Here are Polya’s proverbs for this topic:

Arrows are made of all sorts of wood.

As the wind blows you must set your sail.

Cut your cloak according to the cloth.

We must do as we may if we can’t do as we would.

A wise man changes his mind, a fool never does.

Have two strings in your bow.

A wise man will make more opportunities than he finds.

A wise man will make tools of what comes to hand.

A wise man turns chance into good fortune.

The title of this section is referring to the Woody Allen quote “80 percent of success is showing up”. This means showing up to work every day and working on a problem. Thomas Edison is supposed to have said that “ninety per cent of a man’s success in business is perspiration.”

“Showing up” can be more broadly applied: Show up to conferences. Show up to lunch with coworkers because that’s where you will have good discussions. Show up to dinner parties because that’s where you might meet people who can give you fresh ideas. Write the papers you’re supposed to write. Read the papers you’re supposed to read.

Part of this is to “be lucky” as in the point above. You can’t be lucky if you don’t show up. So you also want to get yourself into environments where you can show up to all these events. There is a reason why good research rarely comes out of some small town in the middle of nowhere: There are not enough opportunities to show up to out there. You want to at least live in a college town or a big city.

Here is Polya’s list of proverbs for this section:

Diligence is the mother of good luck.

Perseverance kills the game.

An oak is not felled at one stroke.

If at first you don’t succeed, try, try again.

Try all the keys in the bunch.

One final thing that I should point out is that I intentionally didn’t call this section “work hard.” I think that “show up” is better advice. This is not about working 80-hour weeks. It’s about showing up to work on a problem every day.

The term “iteration time” is a standard term in video game development which roughly measures how much time passes between being finished with a change and seeing the change in the game. So for me as a programmer I make a change, then I have to compile the code, launch the game, get to a point where I can test my change and then test my change. Let’s say compiling takes ten seconds, launching the game takes twenty seconds, and getting to my test setup takes another ten seconds, then my iteration time is 40 seconds. So if I decide to make another small change, I have to wait another 40 seconds before I can see the result. If I can cut the compile time in half then my iteration time is just 35 seconds, which is a good improvement. If I can create a test setup that doesn’t require the whole game to boot then maybe I can get my iteration time down to just 15 seconds.

At the beginning of this blog post I talked about how research is often characterized by slow progress. Exploring one path might take you a week before you find out that it’s a dead end. You shouldn’t just accept that. You should find ways to reduce that time.

Improving iteration time helps in many non-obvious ways: If you can improve iteration times, you can make it cheaper to make mistakes. If an experiment takes you two hours, you probably don’t want to make a mistake and you’ll be very careful. If you can do the experiment in a minute, then some mistakes are OK and you can play around more. But even if you just reduce it from two hours to one hour and 45 minutes, that will still improve your work a little bit. And maybe you can find more improvements after that.

Now you have to invest time to save time, so sometimes it’s not worth it. But sometimes you’ll be surprised. I’ve had an argument about improving iteration times below two seconds. The other person argued that if your iteration time is only two seconds, how much time are you going to save by reducing the iteration time to one second? (and how much effort do you need to invest to achieve a 50% reduction?) But what happens is that when you reduce iteration times, you work differently. If your iteration time is milliseconds, all of a sudden you can work entirely differently. You can try several alternatives per second and create an interactive animation showing the alternatives. You can try different parameters in real time and see what happens. You can show several different variations of the problem on the screen at the same time. At some point you can write a program that just explores a million options and gives out the best one. (but then ironically that program would have slow iteration times, so maybe an interactive tool would be better)

Improving iteration times is a lot about automation, but often it’s also just about being observant as to where you are losing time. You can apply a lot of lessons from factories here: standardize processes, specialize, batch your work, etc. Also, if you don’t know how to program, you should probably learn. It’s easier now than it ever was. And to automate simple tasks like “entering numbers into an Excel sheet” you don’t need a full computer science education.
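As a tiny, hypothetical example of that last point: a few lines of script can replace hours of retyping. Here, everything is made up for illustration (the directory layout, the `*.log` naming, the `result = <number>` line format), but the shape of the task is the common one of gathering scattered numbers into one spreadsheet-friendly file.

```python
import csv
import pathlib
import re

def collect_results(log_dir, out_csv):
    """Pull every 'result = <number>' line out of *.log files into one CSV."""
    pattern = re.compile(r"result\s*=\s*([-+]?\d+(?:\.\d+)?)")
    with open(out_csv, "w", newline="") as f:
        writer = csv.writer(f)
        writer.writerow(["file", "result"])
        for path in sorted(pathlib.Path(log_dir).glob("*.log")):
            for match in pattern.finditer(path.read_text()):
                writer.writerow([path.name, match.group(1)])
```

Once something like this exists, re-running an experiment and re-collecting its numbers costs seconds instead of an afternoon, which is the whole iteration-time argument in miniature.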

Here is something you should do whenever you’re finished with a step: See what the implications of that step are beyond the specific step. See if it has broader applications. Here is Claude Shannon talking about this:

Another mental gimmick for aid in research work, I think, is the idea of generalization. This is very powerful in mathematical research. The typical mathematical theory developed in the following way to prove a very isolated, special result, particular theorem – someone always will come along and start generalizing it. He will leave it where it was in two dimensions before he will do it in N dimensions; or if it was in some kind of algebra, he will work in a general algebraic field; if it was in the field of real numbers, he will change it to a general algebraic field or something of that sort. This is actually quite easy to do if you only remember to do it. If the minute you’ve found an answer to something, the next thing to do is to ask yourself if you can generalize this anymore – can I make the same, make a broader statement which includes more – there, I think, in terms of engineering, the same thing should be kept in mind. As you see, if somebody comes along with a clever way of doing something, one should ask oneself “Can I apply the same principle in more general ways? Can I use this same clever idea represented here to solve a larger class of problems? Is there any place else that I can use this particular thing?”

In this talk Clay Christensen points out that generalizing makes it easier to prove yourself wrong. When you generalize your concept, you have more examples to test it against, and you can use those examples to improve your theory. If you test it against a new example and your theory doesn’t work, you have to either define the limits of your theory, or explain why it sometimes behaves differently. (and then sometimes these new explanations help explain oddities in your original data)

There is also a quote by Feynman which I can’t find right now where he essentially says “if you’re not generalizing, then what’s the point?” With the reason being that the only way that science progresses is to make guesses beyond the specifics of what we observed.

One word of caution about this is that you can also be over-eager about this. Don’t try to find a pattern if all you have is two examples. (or god forbid only one example) You usually want to generalize after you’ve seen three or four examples of something. Of course the trick is in recognizing that three apparently different things are actually examples of the same thing.

This is another thing that you probably do automatically, but it’s worth pointing out: When you’re moving forward, you should try hard to keep moving forward. The usual example of this is when you were stuck for a while: Once you’re over the hurdle, you should keep working on the thing that got you over the hurdle, because you can probably make more progress there.

Csikszentmihalyi talks about the concept of “Flow” in relation to this, which is a highly focused mental state that you enter when you’re doing concentrated work. You want to stay in that state.

The easiest way to get this wrong is to get stuck on small bumps. There are plenty of small speed bumps along the way that will just slow you down. If there is something in a paper that you don’t understand, ignore it and keep reading. Maybe it will become clear later. If you already know something to be true, but proving it is tricky, skip over the proof. You can fill in the gaps later. If you’re working on an algorithm and an edge case is driving you nuts, don’t handle the edge case now. Just solve the cases that you actually need.

It’s important that you revisit each of these points later to fill in the gaps (because sometimes good discoveries hide in small irregularities) but you shouldn’t let a small speed bump stop you when you were making good progress before.

Here are Polya’s proverbs for this, the first one being ironic:

Do and undo, the day is long enough.

If you will sail without danger you must never put to sea.

Do the likeliest and hope the best.

Use the means and God will give the blessing.

This is the opposite advice of the previous point, but what can I say. Sometimes you gotta keep on moving forward, sometimes you have to be careful. Often you have to do both.

You can waste a huge amount of time if you mess up a step and never notice.

Polya’s questions for this are “Carrying out the plan of the solution, check each step. Can you see clearly that the step is correct? Can you prove it?”

You get better at this with experience. As you gain more experience, you will just intuitively avoid problems. So if it looks like a very experienced person isn’t checking every step, it may just be that they have taken steps like this a thousand times before.

On the other hand for me personally I feel like I’ve become more and more careful the more I have programmed. My changelists these days tend to be smaller than they used to be. I rarely make huge changes nowadays. Instead I try to make many smaller steps, each of which I can reason about.

The other thing I’d like to mention in this context is that sometimes slowing down can help. Sometimes if you have to make a decision, it’s best to wait for a while before making it. Try to work around it and get a better lay of the land. This is why procrastination sometimes works. Sometimes with delay the correct choice becomes clear. Sometimes all you’re doing is delaying though…

Polya’s proverbs for this section are these:

Look before you leap.

Try before you trust.

A wise delay makes the road safe.

Step after step the ladder is ascended.

Little by little as the cat ate the flickle.

Do it by degrees.

This is a point that I can’t possibly do justice to. Whole books have been written about how to form effective teams, so my advice in a blog post like this has to be hopelessly incomplete.

Research has the curious character where it’s often better when done by yourself. Kenneth Stanley has an amazing illustration of the damage that committees can do to research in his talk. (same talk that I keep referring to) If you have to constantly justify what you’re doing, you won’t do the exploration that’s necessary to actually get anywhere. Yet at the same time none of the pictures that he shows in his talk are the result of people working alone. So how do we square that circle?

Research about effective teams has shown that one of the most important things is emotional safety. You should be safe to speak up, safe to ask stupid questions, safe to follow hunches, safe to take a risk, and safe to admit mistakes. If you make decisions by committee, none of these things are true because you have to constantly justify what you are doing, and you have to constantly compete with others to make sure that your priority is still everyone’s priority.

One piece of advice that I like for this is the practice of “Yes, And” from improv comedy. If somebody has an idea, you can’t say “no, that’s stupid.” (or use a more subtle way to shut it down) You have to say “yes”, and you have to add something to it to keep the idea alive. I got this idea from a talk by Uri Alon, who gives the following example:

We were stuck for a year trying to understand the intricate biochemical networks inside our cells, and we said, “We are deeply in the cloud,” and we had a playful conversation where my student Shai Shen Orr said, “Let’s just draw this on a piece of paper, this network,” and instead of saying, “But we’ve done that so many times and it doesn’t work,” I said, “Yes, and let’s use a very big piece of paper,” and then Ron Milo said, “Let’s use a gigantic architect’s blueprint kind of paper, and I know where to print it,” and we printed out the network and looked at it, and that’s where we made our most important discovery, that this complicated network is just made of a handful of simple, repeating interaction patterns like motifs in a stained glass window.

(the term “being in the cloud” is what I would call being stuck in a local maximum using the hill climbing analogy)

Another important thing is having a clear, well-communicated vision for what you’re trying to do. This doesn’t have to be a specific goal, but it should at least be a direction. That way all the creative attempts that people are taking in your group (because it’s safe for them to do so) will automatically work together. Competing goals within the group can be really harmful here, so you want to resolve disagreements. Changing the vision can also be really harmful. If you have to change direction, you have to communicate that very well.

The final thing is that diversity has been shown to help. Which makes sense if you look at how much of the advice above is about finding different viewpoints.

Whew, you’ve made it to the end and you’ve made a discovery. Now make sure to look back. Polya has these questions for you:

Can you derive the result differently? Can you see it at a glance? Can you use the result, or the method, for some other problem?

The last question aims at the “generalizing” point I have talked about above. But the moment just after you have finished is often the moment where you can do your best work. You can flatten out all the bumps that accumulated in your work over time. You can straighten out the lines, clean up the formulas. Maybe something that seemed odd before now makes a lot of sense and offers a hint for further research. This is the time where you can turn this result into something really good that others will actually want to use. Take some extra time here.

Polya’s proverbs are “He thinks not well that thinks not again.” And “Second thoughts are best.” He also says that it’s really good if you can, with the benefit of hindsight, find a second way to derive the result. “It is safe riding at two anchors.”

You have reached the end of my list. If you still haven’t had enough, here are some of my sources. Otherwise the conclusion is below.

George Polya – How to Solve It – A New Aspect of Mathematical Method

This book is written by someone who has thought carefully about how we solve problems. If I hadn’t read this book, I wouldn’t have noticed the other patterns.

Kenneth Stanley – Why Greatness Cannot Be Planned – The Myth of the Objective

I love this talk because he explains everything with pictures. For example when he shows the pictures that you get from voting compared to the pictures you get from individual exploration, it really is better than a thousand words about the subject could be.

Rich Hickey – Hammock Driven Development

This has more insights about focused mode and diffuse mode than I actually used in this blog post. I think this is also the place where I first heard about “How to Solve It.”

Claude Shannon – Creative Thinking

This is a transcript of a talk that Claude Shannon gave. The good section is the part about his tricks for doing research. I suspect that the text got messed up by some kind of automatic digitization method, so if somebody has a better source, I would be very thankful.

Robert Gallager talking about Claude Shannon

This talk builds on the above list and adds more tricks that Claude Shannon used. Some of those I didn’t mention because I didn’t talk about how to find good topics.

One thing I wish I could link to is a talk or article that generalizes from AI methods to scientific research. I did some of that above, but I have no sources for that other than my own interpretations. I could link you to AI books but they typically spend a very small amount of time on hill climbing.

I don’t think my list is complete, but I think I have a pretty good sample. For example I have not read any of Csikszentmihalyi’s work. I’m sure I could add at least one or two points to my list if I did. But as I kept adding things over the years, I was frustrated by how few people seem to know these things. For example I referred to a TED talk above that talks about being stuck, and the guy doesn’t refer to Polya. And Polya’s “How to Solve It” simply has the best list for getting unstuck, so it should always be mentioned when you’re talking about being stuck. After I saw a few incomplete opinions like that, I decided I had to write this blog post, even if my own list was also incomplete.

The list is necessarily short because it’s a blog post and it’s intended as something that you can re-read the next time that you’re having problems.

There are several directions that a list of “advice for doing research” could be expanded. For example I could talk about heuristics for identifying good research, (it seems solvable, the old theory has known problems, it would simplify things, the underlying conditions have changed, it would help someone, your unconscious keeps on drawing you back to it…) or I could talk about progress and about what you should do in which stage of research (Clay Christensen talks about that here) but I had to stop at some point, and having a list of tricks and habits seems like a good thing to have.

If you’ve made it to the end of this blog post, then I thank you very much for reading. I recommend that you come back here every once in a while to re-read the list. It’s what I’m doing with Polya’s book.

That is until recently, when I came across the paper Imaginary Numbers are not Real – the Geometric Algebra of Spacetime which arrives at quaternions using only 3D math, using no imaginary numbers, and in a form that generalizes to 2D, 3D, 4D or any other number of dimensions. (and quaternions just happen to be a special case of 3D rotations)

In the last couple weeks I finally took the time to work through the math enough that I am convinced that this is a much better way to think of quaternions. So in this blog post I will explain…

- … how quaternions are 3D constructs. The 4D interpretation just adds confusion
- … how you don’t need imaginary numbers to arrive at quaternions. The term will not come up (other than to point out the places where other people need it, and why we don’t need it)
- … where the double cover of quaternions comes from, as well as how you can remove it if you want to (which makes quaternions a whole lot less weird)
- … why you actually want to keep the double cover, because the double cover is what makes quaternion interpolation great

Unfortunately I will have to teach you a whole new algebra to get there: Geometric Algebra. I only know the basics though, so I’ll stick to those and keep it simple. You will see that the geometric algebra interpretation of quaternions is much simpler than the 4D interpretation, so I can promise you that it’s worth spending a little bit of time to learn the basics of Geometric Algebra to get to the good stuff.

OK so what is this Geometric Algebra? It’s an alternative to linear algebra. Instead of matrices, there are multiple kinds of vectors, and there is a more powerful vector multiplication.

Let’s start with vector multiplication. In linear algebra we know two ways to multiply vectors: the dot product (producing a scalar) and the cross product (producing a vector). The dot product works in any number of dimensions, whereas the cross product only works in 3D. Geometric algebra also uses the dot product, but it adds a new product, the wedge product: a ∧ b. The result of the wedge product is not a vector or a scalar, but a plane. Specifically it’s the plane spanned by the two vectors. This plane is called a bivector because it’s the result of the wedge product of two vectors. There is also a trivector, which describes a volume. The general principle is that the wedge product increases the dimension of the vectors by one: vectors (lines) turn into bivectors (planes), and bivectors turn into trivectors (volumes). When we do math in more than 3 dimensions, we can go even higher, but I’ll stick to 2D and 3D for this blog post.

Before I tell you how to actually evaluate the wedge product, I first have to tell you the properties that it has:

- It’s anti-commutative: a ∧ b = -b ∧ a
- The wedge product of a vector with itself is 0: a ∧ a = 0

The first property will make sense when we talk about rotations. The second property should already make sense if we just think of a bivector as a plane. There is no plane between a vector and itself, so it’s 0.

The other thing I have to explain is how vector multiplication works: In geometric algebra, the vector product is defined as the dot product plus the wedge product: ab = a · b + a ∧ b

The result of the dot product is a scalar, and the result of the wedge product is a bivector. So how do we add a scalar to a bivector? We don’t, we just leave them as is. It works the same way as when adding polynomials or when adding apples and oranges or when working with complex numbers: a + bi. We just leave both terms.

Note that usually I will leave out the star and just write ab for the product of a and b.

In 3D space we have three basis vectors: x, y and z, each of length 1 and orthogonal to the other two.

When multiplying these with each other we notice three properties of this new way of multiplying:

- xx = yy = zz = 1, because the wedge product of a vector with itself is 0 and the dot product of a unit vector with itself is 1
- xy = x ∧ y (and likewise for the other pairs), because the dot product of orthogonal vectors is 0
- xy = -yx, because the wedge product is anti-commutative

So when multiplying the basis vectors with each other, either the dot product or the wedge product is zero. We are left only with one of the two.

All other vectors can be expressed using the basis vectors. So the vector (a, b, c) can also be written as ax + by + cz, and I will use the second notation more often, because it makes multiplication easier.

With that out of the way, we can finally give one real example of how vector multiplication works in geometric algebra. It’s actually pretty simple because we just multiply every component with every other component:

Let’s walk through a few of the steps I did there:

- because .
- because , so the scalar part is zero, and we can write the wedge product of basis-vectors shorter as . This short-hand notation is only valid for vectors which are orthogonal to each other.
- because

So as promised the result of multiplying two vectors is a scalar () and a bivector (). A sum of different components like this is called a multivector.

When doing these multiplications you quickly notice that just as all vectors can be represented as combinations of x, y and z, all bivectors can be represented as combinations of xy, yz and zx. So I’ll just use these as my basis-bivectors. We could make different choices here, for example we could use xz instead of zx, but I like how the bivectors circle around like that. The choice of bivectors doesn’t really matter, just as the choice of basis-vectors doesn’t really matter. We could also have chosen other basis vectors. All the math works out the same, we just get different signs in a few places.

Once we have three basis-vectors and three basis-bivectors, we notice that we can represent all 3D multivectors as combinations of 8 numbers: 1 scalar, 3 vector-coefficients, 3 bivector-coefficients and 1 trivector-coefficient. If we did the same exercise in a different number of dimensions, we would find similar sets of numbers. In 2D space for example we have 1 scalar, 2 vector-coefficients and 1 bivector-coefficient. That makes sense, because in 2D there are only 2 directions, only 1 plane and no trivector because there is no volume. If we went to 4D we would have 1 scalar, 4 vector-coefficients, 6 bivector-coefficients, 4 trivector-coefficients and 1 quadvector-coefficient. I’m sure you can spot the pattern that would allow you to go to any number of dimensions. (but really these come out naturally depending on how many orthogonal basis-vectors you start with)
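This bookkeeping is mechanical enough to sketch in code. Here is a minimal Python sketch (my own, not from any particular library) that stores a multivector as a dictionary from basis blades to coefficients and implements the geometric product by sorting basis vectors while counting sign flips:

```python
from itertools import product as cartesian

def simplify(blade):
    """Bring a product of basis vectors into canonical order.

    A blade is a tuple of axis indices: () is the scalar, (0,) is x,
    (0, 1) is xy, (0, 1, 2) is xyz. Swapping two different basis
    vectors flips the sign (xy = -yx) and a repeated basis vector
    cancels to 1 (xx = 1)."""
    axes, sign = list(blade), 1
    swapped = True
    while swapped:
        swapped = False
        for n in range(len(axes) - 1):
            if axes[n] > axes[n + 1]:
                axes[n], axes[n + 1] = axes[n + 1], axes[n]
                sign = -sign
                swapped = True
    out = []
    for a in axes:
        if out and out[-1] == a:
            out.pop()  # xx = 1
        else:
            out.append(a)
    return tuple(out), sign

def gp(u, v):
    """Geometric product of two multivectors ({blade: coefficient} dicts)."""
    result = {}
    for (b1, c1), (b2, c2) in cartesian(u.items(), v.items()):
        blade, sign = simplify(b1 + b2)
        result[blade] = result.get(blade, 0) + sign * c1 * c2
    return {b: c for b, c in result.items() if c != 0}

x, y = {(0,): 1}, {(1,): 1}
xy, I = {(0, 1): 1}, {(0, 1, 2): 1}

print(gp(x, y))    # {(0, 1): 1}: xy, a pure bivector
print(gp(y, x))    # {(0, 1): -1}: -xy, the wedge product anti-commutes
print(gp(xy, xy))  # {(): -1}: a unit bivector squares to -1
print(gp(I, I))    # {(): -1}: the trivector also squares to -1
```

This is only a sketch of the bookkeeping; a real implementation would store the 8 coefficients in a flat array, but the dictionary version makes the sorting-and-cancelling rules visible.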

We’re almost finished with our introduction to geometric algebra, so I need to mention one final important property: vector multiplication is associative. Meaning (ab)c = a(bc), so we can choose which multiplication we want to do first.

OK with that we’re finished with the introduction, but I want to practice a few more multiplications so that you get the hang of it. Maybe do a few yourself. It takes a couple minutes, but then you have the rules ingrained into muscle memory. This practice section is optional though.

Let’s do some practice runs to build up an intuition for how these vectors and bivectors behave. You can skip this section entirely if you don’t care about geometric algebra and just want to get to rotations.

What happens if we multiply two similar bivectors?

So what I did there is I used yx = -xy to re-order the basis-elements. Then everything collapses down because xx = yy = 1. So what we see here is that the product of a bivector with itself is a negative number. Isn’t that interesting? In particular if we have a bivector of length 1 and multiply it with itself, we see that (xy)(xy) = -1. Remember how in quaternions there are these three components i, j and k which have i² = j² = k² = -1? We’re going to be using the bivectors for that. However it just so happens that the bivector is a mathematical construct whose square is -1. That does not mean that it is the square root of -1. I could build any number of mathematical constructs that square to -1, (for example trivectors also square to minus one) that doesn’t mean that they are all the square root of -1. How many square roots is -1 supposed to have?

Speaking of squaring a trivector, let’s try that to get practice at re-ordering these components:

Getting the hang of it yet? It’s all about re-ordering components until things collapse.

Let’s try multiplying two different bivectors:

The product of two different basis-bivectors is another bivector. If we have more complicated bivectors that are made up of multiple basis-bivectors, the result is a scalar plus a bivector:

So this is a scalar () plus quite a complicated bivector ().

What happens if we multiply across dimensions, like multiplying a vector with a bivector?

If we multiply the plane with a vector that’s on the plane, we get another vector on the plane. In fact if we do this a few more times:

We notice that after four multiplications we are back at the original vector . So every multiplication with a bivector rotates by 90 degrees. If we multiply on the left side instead of multiplying on the right side, we would rotate in the other direction.

What if we multiply the plane with a vector that’s orthogonal to it?

Well that’s disappointing, we just get the trivector. What if we multiply the trivector with the plane?

If we multiply the trivector with the plane, the plane collapses and we’re left with just the vector that’s normal to the plane. This works even for more complicated bivectors:

Which is the normal of the original plane. What if we multiply a vector with the trivector?

If we multiply a vector with the trivector, the vector part collapses out and we’re left with the plane that the vector is normal to. This works even for more complicated vectors:

And with that we’re back at the original plane. Almost. The sign got flipped. If we had multiplied by we would have been back at the original plane.

So multiplying with the trivector turns planes into normals and normals into planes, because the other dimensions collapse out. This also allows us to define the cross product in geometric algebra: a × b = (a ∧ b)I⁻¹, where I = xyz is the unit trivector. So first we build a plane by doing the wedge product, then we get the normal by multiplying with the trivector.
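That identity can be checked numerically. The following Python sketch (mine; the sign convention follows from I² = -1, so I⁻¹ = -xyz) computes the wedge product componentwise and then relabels the bivector components into vector components:

```python
def wedge(a, b):
    """Bivector a ∧ b of two 3D vectors, as its (xy, yz, zx) components."""
    ax, ay, az = a
    bx, by, bz = b
    return (ax * by - ay * bx,  # xy component
            ay * bz - az * by,  # yz component
            az * bx - ax * bz)  # zx component

def times_trivector_inverse(biv):
    """Multiply a bivector by I⁻¹ = -xyz.

    With these conventions xy·I⁻¹ = z, yz·I⁻¹ = x and zx·I⁻¹ = y, so the
    bivector components just get relabeled as vector components."""
    b_xy, b_yz, b_zx = biv
    return (b_yz, b_zx, b_xy)

def cross(a, b):
    return times_trivector_inverse(wedge(a, b))

print(cross((1, 0, 0), (0, 1, 0)))  # (0, 0, 1): x × y = z
```

The cross product falls out as a relabeling because in 3D every plane has exactly one normal direction; this is also why the construction doesn’t generalize past 3D while the wedge product does.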

If you went through the practice chapter you will have already seen places where geometric algebra does rotations: bivectors rotate vectors on their plane by 90 degrees. It’s not quite clear how we can build arbitrary rotations with that though.

One thing that’s a little bit easier to do is reflections, and we will see that we can get from reflections to rotations.

Let’s say we want to reflect the vector a in the picture below on the normalized vector r, to get the resulting vector b:

To do that it’s useful to break the vector a into two parts: the part that’s parallel to r, a∥, and the part that’s perpendicular to r, a⊥:

(forgive my crappy graphing skills)

These have a few properties:

a∥r = ra∥ (the result is a scalar and we can flip the order)

a⊥r = -ra⊥ (the result is a bivector and flipping the order flips the sign)

From the picture it should be clear that if we subtract a⊥ instead of adding it, we should get to b. Or in other words: b = a∥ - a⊥

So how do we get these a∥ and a⊥ vectors? You may already know how to do it, but we actually never need to explicitly calculate them. Because we can actually represent this reflection as b = rar

How do we get to that magical formula? Let’s multiply it out:

The important step is that a⊥r = -ra⊥, allowing us to re-order the elements until we’re left with rr, which is just 1 as long as r is normalized.
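The decomposition itself can be checked with plain vector algebra, since the parallel part is just the projection of a onto r. A small Python sketch (my own) that keeps the parallel part and flips the perpendicular part, as described above:

```python
def dot(u, v):
    return sum(ui * vi for ui, vi in zip(u, v))

def reflect(a, r):
    """Reflect a on the normalized vector r: keep the part of a that is
    parallel to r, flip the part that is perpendicular to r."""
    a_par = tuple(dot(a, r) * ri for ri in r)            # projection onto r
    a_perp = tuple(ai - pi for ai, pi in zip(a, a_par))  # the rest
    return tuple(p - q for p, q in zip(a_par, a_perp))   # parallel minus perpendicular

# Reflecting (1, 1, 0) on the x axis flips the y component:
print(reflect((1, 1, 0), (1, 0, 0)))  # (1, -1, 0)
# Reflecting twice on the same vector gives back the original:
print(reflect(reflect((3, 4, 5), (0, 0, 1)), (0, 0, 1)))  # (3, 4, 5)
```

The second print illustrates why a reflection is its own inverse, which we’ll rely on when we chain two reflections together.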

The reflections above look kinda like rotations. In fact if all we want to do is rotate a single vector, we can always do that with a reflection. The problem is if we want to rotate multiple vectors, like in a 3d model, then the rotated model would be a mirror version of the original model.

The solution to that is to do a second reflection. There are many possible pairs of reflections that we could choose, but here is an easy one. First we reflect on the half-way vector v = (a + b)/|a + b| between a and b, (where writing pipes around a vector like |a + b| means the length of the vector, so (a + b)/|a + b| is a normalized vector):

So in this picture I am reflecting a on the vector v, which is half-way between a and b, landing us at the reflected vector vav. To get from vav to b we just have to do a second reflection with the vector b itself. (which is a bit weird, but if you follow the equations it works out) Given that vav is one reflection, bvavb is two reflections. First we reflect on v, then we reflect on b.

Earlier we chose v = (a + b)/|a + b|. We can multiply this out and define q = bv.

Then the rotation is written as qaq⁻¹ (where you could work out q⁻¹ = vb by multiplying out the other side, or you can just flip the sign on the bivector parts of q), and the inverse rotation is written as q⁻¹aq.
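To see numerically that two reflections really make a rotation, here is a Python sketch (my own construction following the recipe above): reflect on the normalized half-way vector, then reflect on the normalized target vector, and check what happens to a, to a vector orthogonal to both, and to the target itself:

```python
import math

def dot(u, v):
    return sum(ui * vi for ui, vi in zip(u, v))

def normalized(v):
    length = math.sqrt(dot(v, v))
    return tuple(vi / length for vi in v)

def reflect(a, r):
    """Reflect a on the normalized vector r (keep the parallel part,
    flip the perpendicular part)."""
    d = dot(a, r)
    return tuple(2 * d * ri - ai for ri, ai in zip(r, a))

def rotate_a_to_b(v, a, b):
    """Apply to v the rotation that takes a to b: first reflect on the
    half-way vector between a and b, then reflect on b."""
    half = normalized(tuple(ai + bi for ai, bi in zip(a, b)))
    return reflect(reflect(v, half), normalized(b))

a, b = (1.0, 0.0, 0.0), (0.0, 1.0, 0.0)
print(rotate_a_to_b(a, a, b))                # ~(0, 1, 0): a lands on b
print(rotate_a_to_b((0.0, 0.0, 1.0), a, b))  # ~(0, 0, 1): the axis is unchanged
print(rotate_a_to_b(b, a, b))                # ~(-1, 0, 0): a proper 90 degree rotation
```

The last print is the important one: a single reflection would mirror the model, but the pair of reflections moves every vector consistently, the way a rotation does.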

And just like that we have quaternions. How? Where? I hear you asking. That q in the last equation is a quaternion. If you multiply it all out, you will find that all the vector parts and trivector parts collapse to 0, and you’re just left with the scalar part and the bivector coefficients. And it just so happens that if you have a multivector which consists of only a scalar and the bivectors, multiplication behaves exactly like multiplication of quaternions.

Now isn’t that interesting? All we did was work out the math for reflections, and if we do two of those we get quaternions? No imaginary numbers, no fourth dimension, just 3D vector math. All we had to do was introduce that wedge product.

And you’ll notice that the way we apply q, by doing qaq⁻¹, looks an awful lot like how we multiply quaternions with vectors. To multiply a quaternion with a vector we do v′ = qvq⁻¹.

OK so let’s convince ourselves that these really are quaternions and work out the quaternion equations. They are i² = j² = k² = ijk = -1. Our quaternion consists of a scalar and three bivectors, yz, zx, and xy. (I use them in this order because the yz plane rotates around the x axis, so it should come first). So let’s try this:

.

Seems to work so far. But I actually don’t fulfill the equation ijk = -1, because for me ij = -k. I could fix that by choosing a different set of basis-bivectors. For example if I chose yz, zx and yx, then this would work out because (yz)(zx) = yx. But I kinda like my choice of basis-bivectors, and all the rotations work out the same way. If this bothers you, just choose different basis-bivectors.
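That sign business can be checked mechanically. The sketch below (mine, reusing the sort-and-count-swaps trick for blades) identifies i = yz, j = zx and k = xy and verifies that each squares to -1, but that ij comes out as -k with this choice of basis-bivectors:

```python
def simplify(blade):
    """Canonicalize a product of basis vectors: swapping two different
    axes flips the sign (xy = -yx), repeated axes cancel (xx = 1)."""
    axes, sign = list(blade), 1
    swapped = True
    while swapped:
        swapped = False
        for n in range(len(axes) - 1):
            if axes[n] > axes[n + 1]:
                axes[n], axes[n + 1] = axes[n + 1], axes[n]
                sign = -sign
                swapped = True
    out = []
    for a in axes:
        if out and out[-1] == a:
            out.pop()
        else:
            out.append(a)
    return tuple(out), sign

def gp(u, v):
    """Geometric product of multivectors stored as {blade: coefficient}."""
    result = {}
    for b1, c1 in u.items():
        for b2, c2 in v.items():
            blade, sign = simplify(b1 + b2)
            result[blade] = result.get(blade, 0) + sign * c1 * c2
    return {b: c for b, c in result.items() if c != 0}

# axes: x = 0, y = 1, z = 2; zx is stored canonically as -xz
i, j, k = {(1, 2): 1}, {(0, 2): -1}, {(0, 1): 1}

print(gp(i, i))  # {(): -1}
print(gp(j, j))  # {(): -1}
print(gp(k, k))  # {(): -1}
print(gp(i, j))  # {(0, 1): -1}, i.e. ij = -k with this basis choice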

One super cool thing is that when doing the derivations using reflections, I never had to specify the number of dimensions. We could use 3D vectors or 2D vectors or any number of dimensions. So if we work out the math in 2D, what do you think we get? That’s right, we get complex numbers: One scalar and one bivector. Because that’s how you do rotations in 2D. But we could go to any number of dimensions using this method. (except in 1D this kinda collapses, because you can’t really rotate things in 1D)
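The 2D case can be verified in a couple of lines: a 2D rotor is a scalar s plus one bivector coefficient b, and since (xy)(xy) = -1, multiplying two of them follows exactly the complex multiplication rule. A quick sketch (mine):

```python
def gp2d(p, q):
    """Product of 2D rotors (s, b), meaning s + b·xy, using (xy)(xy) = -1."""
    (s1, b1), (s2, b2) = p, q
    return (s1 * s2 - b1 * b2, s1 * b2 + b1 * s2)

print(gp2d((1, 2), (3, 4)))           # (-5, 10)
print(complex(1, 2) * complex(3, 4))  # (-5+10j), the same numbers
```

The bivector xy plays exactly the role of i, without ever having to claim that anything is the square root of -1.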

Also we didn’t specify what we are rotating. We assumed that it was a vector, but we never required that. So this can rotate bivectors and it can rotate other quaternions.

So we found a new way to derive quaternions. This new way is neat because we don’t need 4 dimensions and we don’t need imaginary numbers. But can we learn anything new from this? Already we have two possible new interpretations:

- A quaternion is the result of two reflections
- A quaternion is a scalar plus three bivectors

Maybe one of these has some interesting conclusions.

Before that I want to kill the 4D interpretation properly: There are two reasons why people say quaternions are 4D: The fact that quaternions have four numbers, and the fact that quaternions have double cover. I’ll talk about the double cover separately later, but here I briefly want to talk about the four numbers thing. There are lots of 3D constructs that have more than three numbers. For example a plane equation has four numbers: ax + by + cz = d. Or if we want to do rotations using matrices in 3D, we need a 3×3 matrix. That’s 9 numbers. But nobody would ever suggest that we should think of a rotation matrix as a 9 dimensional hyper-cube with rounded edges of radius 3. So don’t think of quaternions as a 4 dimensional hypersphere of radius 1. Yes, there are some useful conclusions to draw from that interpretation (for example it explains why we have to use slerp instead of lerp) but it’s such a weird interpretation that it should come up very rarely.

With that out of the way let’s get to these two new interpretations:

1. Interpreting quaternions as two reflections. I couldn’t get much useful out of this. The first reflection is always on the vector half-way between the start of the rotation and the end of the rotation. The second reflection is always on the end of the rotation. I’ve played around with visualizing that, but the visualizations always looked predictable and didn’t offer any insights.

2. Interpreting quaternions as a scalar plus three bivectors. This interpretation on the other hand turned out to be a goldmine. Not only can you get an intuitive feeling for how this behaves, you can also get visualizations from this. This interpretation also allowed me to get rid of the double cover of quaternions.

So even though we have derived quaternions using reflections above, I will actually spend the rest of the blog post talking about quaternions as scalars and bivectors.

A quaternion is made up of a scalar and three bivectors. We all know what a scalar does: Multiplying with a scalar makes a vector longer or shorter. I said above that multiplying with a bivector rotates a vector by 90 degrees on the plane of the bivector.

So how can we build up all possible rotations if all we have is a scalar and three rotations of exactly 90 degrees? The answer is that a bivector actually does slightly more: It rotates by 90 degrees, and then scales the vector.

I said that a bivector is a plane. But because of its rotating behavior, I actually like to visualize it as a curved line. So I visualize a vector as a straight line, and a bivector as a 90 degree curve. So here is a visualization of three different bivectors:

These are the bivectors (bottom), (middle) and (top). It’s a 90 degree rotation followed by a scale. I find this visualization particularly useful when chaining a bunch of operations together.

For example let’s say we want to rotate by 45 degrees on the xy plane. To do that we can multiply a vector with the quaternion 0.707 + 0.707xy. (that 0.707 is actually 1/√2, but I’ll truncate it to 0.707 here) Now let’s multiply the vector with that quaternion. That gives us

Here’s how I would visualize that:

First we rotate by the bivector to get :

So the bivector is a rotation by 90 degrees followed by a scale of 0.707.

Next we multiply the original vector with the scalar to get the vector , which we add to the previous result:

Which then gives us the final vector of :

Which is the original vector rotated by 45 degrees.

This way of visualizing makes it very clear that multiplication with a quaternion is just multiplication with a scalar and multiplication with a bivector. And this also shows how we got a 45 degree rotation, even though all we can do is 90 degree rotations followed by scaling. It also explains why we need the single scalar value, and why the three bivectors are not enough: We sometimes want to add some of the original vector back in to get the desired rotation.

One thing to note is that in here I chose to do the bivector multiplication first, and the scalar multiplication second. But the choice is kinda arbitrary as both of these happen at the same time, and they don’t depend on each other.
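For a vector lying on the xy plane, this one-sided multiplication is easy to write out in components. The sketch below (mine; the rotation direction depends on which side the quaternion multiplies on, so take the sign of the 90 degree step as a convention) applies cos(θ) + sin(θ)·xy exactly as described: rotate the vector by 90 degrees and scale by sin(θ), then add cos(θ) times the original vector:

```python
import math

def rotate_on_xy(v, angle):
    """Apply the quaternion cos(angle) + sin(angle)·xy to a vector that
    lies on the xy plane, by one-sided multiplication."""
    s, b = math.cos(angle), math.sin(angle)
    vx, vy = v
    rotated_90 = (-vy, vx)  # the bivector's 90 degree rotation (one convention)
    return (s * vx + b * rotated_90[0],  # scalar contribution + bivector contribution
            s * vy + b * rotated_90[1])

print(rotate_on_xy((1.0, 0.0), math.radians(45)))  # ~(0.707, 0.707)
```

Both contributions use only the original vector as input, which mirrors the point above: the scalar part and the bivector part don’t depend on each other.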

Let’s rotate that same vector again to show what this looks like when we didn’t start off with one of our basis vectors:

So let’s visualize that:

First we rotate with the bivector, which puts us at :

So once again this does a 90 degree rotation followed by a scale of 0.707.

Next we multiply the original vector by 0.707 and add the resulting vector :

Which then gives us the final vector of :

Which is exactly what we would expect after rotating by 45 degrees twice.

I think these visualizations also explain how we can get arbitrary rotations: For bigger rotations we just have to make the scalar component smaller as the bivector component gets bigger.

So far we have only looked at the xy plane. To visualize this in 3D, I wrote a small program in Unity that can do the above visualization for all three bivectors. Here is what that looks like for rotating from the vector to the vector . That gives me the particularly nice quaternion .

This is going to be hard to do in pictures because it’s a 3D construct, but I’ll give it a shot. Here is what the two vectors look like:

So I want to rotate from the vector on the left to the vector on the right.

Here is what the contribution of the bivector looks like:

So this bivector is rotating on the xy plane. It takes the end point of the vector and rotates it 90 degrees down on the xy plane. It may be a bit hard to see, but imagine all the yellow lines lying on a xy plane.

The result of that 90 degree rotation is the vector . (the lower edge of the plane) I used the end of that rotation to start our result vector. (see how I have a third short vector sticking out at the bottom now? That’s )

Next I’m doing the contribution of the bivector:

The original vector was already rotated 45 degrees on the yz plane, so this rotation started off at a 45 degree angle and it rotated 90 degrees on the yz plane. Then it scaled the result by 0.5, giving us the result vector . (the bottom of the teal plane)

I also added the result of that rotation to the result vector. (the shorter vector that was sticking out now has a corner in it, indicating that I added the new )

Next we add the contribution of the bivector:

This took the end point of the original vector, and rotated it by 90 degrees on the zx plane. Then it scaled the result by 0.5, giving us the new vector (the end of the purple plane). The reason why the purple plane is floating above the other planes is an artifact of my visualization: I start at the end point and then I only move on the zx plane, so I end up floating above everything else. I also added this to our result vector at the bottom there.

Finally I’m going to add the scalar component into this:

This just took the original vector and scaled it by 0.5, giving us . I then added that to the results of the three bivector rotations. And as we can see, if we add up the contributions of the three bivectors and of the scalar part, we end up exactly at the end point of the vector that we were rotating into. (it may look like the last part is longer than 0.5 times the original vector, but that’s a trick of the perspective. The reason I picked this perspective is that you can see all three rotations from this angle)

So the rotation happened by doing three bivector multiplications and one scalar multiplication and adding all the results up.

Once again I want to point out that the order in which I added these up is arbitrary. All of these multiplications happen at the same time and don’t depend on each other, since they all just use the original vector as input. I chose to do this in the order xy, yz, zx, scalar, because that gave me a nice visualization.

I wanted to make the above visualization available for you to play with. I thought I could be really cool and upload a webgl version so that you can just play with it in your browser. So I built a webgl version, but then I found out that I can’t upload that to my wordpress account. So… I just put it in a zip file which you have to download and then open locally… Here it is.

There is an alternate visualization for the above rotation: Just as we would think of the vector as a single vector, we can also think of the bivector as a single bivector. It’s the plane whose normal is the rotation axis, which is the plane spanned between the start vector and the end vector of the rotation. Then the visualization shows a 90 degree rotation on that plane, followed by a scaling by the length of this bivector. (which is 0.866) That visualization looks like this:

So we rotate on this shared plane, then scale by 0.866, and finally add the original vector scaled by 0.5. This visualization as a single 90 degree rotation by the sum-bivector is equally valid as the visualization of the component bivectors. Just as we can visualize vectors either by their components, or as one line, we can visualize bivectors either by their components or as a single plane.

That finishes the part about visualization. As far as I know this is the first quaternion visualization that doesn’t try to visualize them as 4D constructs, and I think that really helps. Every component now has a distinct meaning and a picture. And we can see how the behavior of the whole quaternion is a sum of the behavior of its components.

One quick aside I want to make is that sometimes people say that quaternions are related to the axis/angle representation of rotations. That is a good way to get people started with quaternions, but then it breaks down relatively quickly because the equations don’t make sense and the numbers behave weirdly. The scalar & bivector interpretation is actually related to the axis/angle interpretation, and it explains what’s really going on there. Because when I say that something rotates 90 degrees on a plane, we can also say that it rotates 90 degrees around the normal of the plane. So in this interpretation quaternions do two things: first, rotate the vector 90 degrees around the normal and scale it down, and second, multiply the original vector by a scalar and add that back in. It’s not quite axis/angle, but we can see how it’s related and why the axis/angle interpretation sometimes seems to work.

With the scalar & bivector interpretation of quaternions, we have a good idea of what quaternions do. With that, we’re ready to tackle the final quaternion mystery:

When I was working on this, a few friends asked me how the “scalar and bivector” explanation explains the double cover of quaternions. If you’re not familiar, the double cover means that for any desired rotation, there are actually two quaternions that represent that rotation. For example the quaternion with 1 in the scalar part and the quaternion with -1 in the scalar part (and 0 for all the bivector parts) both represent a rotation by 0 degrees. (or by 360 degrees, depending on how you look at it)

At first I responded that I hadn’t gotten to that part yet, but as I was working on this, the double cover just never came up. So eventually I decided to go looking for it, and… I couldn’t find it. It seemed like my quaternions didn’t have double cover. So I double checked everything and noticed that I have one difference: Remember how in order to multiply a quaternion with a vector we did the sandwich multiplication qvq̄, with the quaternion on one side and its conjugate on the other. I accidentally didn’t do that. I just did qv.

And the simple multiplication actually works as long as you’re only rotating vectors on a plane that they actually lie on. For example rotating the vector e₁ on the e₁₂ plane works out: e₁₂e₁ = -e₂. The problems start if we’re rotating a vector that doesn’t completely lie on the plane that we’re rotating on. So let’s say I’m rotating the vector e₁ + e₃ on the e₁₂ plane:

e₁₂(e₁ + e₃) = -e₂ + e₁₂₃

That’s strange: Some of our vector part has disappeared, and instead we have a trivector. This is not good. You don’t want part of the vector to disappear after a rotation. Rotating with the sandwich product qvq̄ fixes the problem, because the trivector part cancels out (the conjugate of e₁₂ is -e₁₂):

e₁₂(e₁ + e₃)(-e₁₂) = -e₁ + e₃

So now the part that’s on the plane (the e₁ component) got rotated, but the part that’s not on the plane (the e₃ component) was left unchanged. This is exactly what we want.
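These products are easy to verify mechanically. Here is a minimal geometric-product implementation I sketched just for checking (basis blades encoded as bitmasks, bit 0 = e₁, bit 1 = e₂, bit 2 = e₃; my own throwaway code, not a real GA library):

```python
def reorder_sign(a, b):
    # sign of the geometric product of two basis blades, from counting
    # how many basis-vector swaps are needed to sort the product
    a >>= 1
    swaps = 0
    while a:
        swaps += bin(a & b).count("1")
        a >>= 1
    return -1 if swaps % 2 else 1

def gp(m1, m2):
    # geometric product of two multivectors, stored as {blade_bitmask: coefficient}
    out = {}
    for b1, c1 in m1.items():
        for b2, c2 in m2.items():
            blade = b1 ^ b2  # shared basis vectors square to +1 and drop out
            out[blade] = out.get(blade, 0.0) + reorder_sign(b1, b2) * c1 * c2
    return {b: c for b, c in out.items() if c}

E1, E2, E3, E12, E123 = 0b001, 0b010, 0b100, 0b011, 0b111
q = {E12: 1.0}                    # the rotation bivector
v = {E1: 1.0, E3: 1.0}            # a vector partly off the plane
print(gp(q, v))                   # -e2 + e123: part of the vector became a trivector
print(gp(gp(q, v), {E12: -1.0}))  # sandwich with the conjugate: -e1 + e3, trivector gone
```

The single multiplication produces the stray trivector, and sandwiching with the conjugate cancels it while leaving the off-plane e₃ part alone.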

But look at what happened: The first rotation was a 90 degree rotation, and the part that’s on the plane ended up at -e₂. And now we did a full 180 degree rotation and that part ended up at -e₁. How did that happen?

Well it actually makes sense. We are multiplying with the quaternion twice after all. Of course it would do a double rotation. It’s clearest if you multiply it all out, but the short explanation is that the conjugate allows us to rotate in roughly the same direction while multiplying from the other side: qv = vq̄ for vectors on the plane. And we went ahead and just multiplied on both sides: qvq̄. So if we multiply on both sides, of course we get twice the rotation.

This is literally where the half-angles of quaternions and the double cover come from: From the way we multiply quaternions with vectors. Internally quaternions actually don’t have double cover. If you multiply one 90 degree quaternion with a different quaternion, then after four rotations that second quaternion will end up exactly where it started. But then we chose a vector multiplication function that applies the quaternion twice. So we have to change the interpretation and that 90 degree quaternion becomes a 180 degree quaternion. And actually my visualizations above don’t make sense any more because the vector multiplication always does that operation twice.
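We can check the “no double cover internally” claim on the scalar + e₁₂ subalgebra: since e₁₂ squares to -1, that subalgebra multiplies exactly like the complex numbers, so a quick sketch suffices:

```python
# The scalar + e12 parts of a quaternion multiply like complex numbers
# (e12 * e12 = -1), so we can model a pure e12 bivector as the imaginary unit.
q90 = 1j              # one multiplication with this = one 90 degree step
other = 0.6 + 0.8j    # some other unit quaternion on the same plane
p = other
for _ in range(4):
    p = q90 * p       # four 90 degree steps...
print(p)              # ...and we're back at 0.6 + 0.8j exactly
```

Multiplied once, the 90 degree quaternion moves the other quaternion a quarter turn, and after four applications it is back where it started. The double cover only appears once we apply the quaternion twice per vector rotation.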

So if the vector multiplication is the problem, could we define a vector multiplication that doesn’t lead to double cover? That would make quaternions much simpler.

And the answer is that yes, we can. Remember that rotating vectors that lie on the plane already worked correctly. The problem was that rotating an orthogonal vector would turn it into a trivector. (but rotations should leave orthogonal vectors unchanged) The solution is to first project the vector down onto the plane, then rotate within the plane, and then add the original offset back. Here is an outline of the algorithm:

- Compute the normal of the plane by multiplying the bivector part with the trivector e₁₂₃ (very fast)
- Project the vector onto that normal (fast, as long as you use the version without a square root)
- Subtract that projected part (very fast)
- Multiply the remaining in-plane vector with the quaternion
- Add the projected part (very fast)

So now we only have to do a single multiplication instead of two multiplications. And since all other operations are fast, this might even be faster than the double-cover-giving quaternion/vector multiplication.
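Here is how that outline might look in code. This is a sketch with names I made up: the quaternion is given as a scalar w plus a bivector written via its dual axis, scaled so that w = cos(angle) and |axis| = sin(angle), i.e. no half angles, and the bivector part is assumed to be nonzero:

```python
def rotate_single_cover(w, axis, v):
    # w = cos(angle), axis = sin(angle) * rotation_axis: a single-cover
    # quaternion that rotates in-plane vectors by the full angle in one go
    ax, ay, az = axis
    # project v onto the normal of the rotation plane (no square root needed)
    d = (v[0]*ax + v[1]*ay + v[2]*az) / (ax*ax + ay*ay + az*az)
    par = (d*ax, d*ay, d*az)
    # subtract the projected part, leaving only the in-plane component
    perp = (v[0]-par[0], v[1]-par[1], v[2]-par[2])
    # one quaternion multiplication: the vector part of q * perp,
    # which for an in-plane vector is w*perp + axis x perp
    rx = w*perp[0] + ay*perp[2] - az*perp[1]
    ry = w*perp[1] + az*perp[0] - ax*perp[2]
    rz = w*perp[2] + ax*perp[1] - ay*perp[0]
    # add the untouched off-plane part back
    return (rx+par[0], ry+par[1], rz+par[2])

print(rotate_single_cover(0.0, (0.0, 0.0, 1.0), (1.0, 0.0, 1.0)))
# (0.0, 1.0, 1.0): the in-plane part rotated 90 degrees, the z part unchanged
```

Note there really is only one quaternion-style multiplication in there; everything else is dot products, subtractions and additions.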

And yes, this totally works and it’s faster and it’s less confusing. But you don’t want to use it. The reason is that as soon as I didn’t have double cover in my quaternions, I discovered why double cover is actually awesome.

Double cover is what makes quaternion interpolation so great. (by interpolation I mean getting from rotation a to rotation b in multiple small steps as opposed to one large step) Without double cover, there are some quaternions that you can not interpolate between. Having to worry about those special cases makes interpolation a giant pain and defeats the whole point of why we used quaternions to begin with.

To explain what the problem is, let’s do a couple of 90 degree rotations on the e₁₂ plane, once using double cover and once not using double cover:

Rotation | Single Cover | Double Cover |
---|---|---|
0° | 1 | 1 |
90° | e₁₂ | 0.707 + 0.707e₁₂ |
180° | -1 | e₁₂ |
270° | -e₁₂ | -0.707 + 0.707e₁₂ |
360° | 1 | -1 |

If we interpreted these two numbers as vectors, the double cover version would do a 45 degree rotation of the vector each time. But since the double cover quaternion will rotate twice, this will actually give us a 90 degree rotation from one row to the next.

Here is a visualization of the same numbers. The idea here is that I put the scalar value on the x axis and the bivector on the y axis:

I drew the double cover as two lines, and the single cover as one line. Once again we see that a quaternion that uses double cover rotation is simply half-way towards the quaternion that uses single cover rotation.

I said that double cover is what makes quaternion interpolation so great. To see why, let’s try interpolating between these. To keep it simple I won’t do a slerp, but I’ll just try to find the rotation half-way between any of these rotations. We do that by adding the quaternions and then renormalizing them. Interpolating from the 0 degree rotation to the 90 degree rotation is pretty easy in both cases:

For single cover: 1 + e₁₂, and after normalization that comes out to be 0.707 + 0.707e₁₂, which is a 45 degree rotation.

For double cover: 1 + (0.707 + 0.707e₁₂) = 1.707 + 0.707e₁₂, and after normalization that comes out to be 0.924 + 0.383e₁₂, which is a 22.5 degree rotation, or with the double cover it’s a 45 degree rotation.

So interpolating a 90 degree rotation works just fine in both cases.
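The midpoint computation is easy to play with in code. A sketch with my own names, restricting quaternions to a scalar plus a single e₁₂ bivector component, stored as a pair:

```python
import math

def midpoint(a, b):
    # add the two (scalar, e12) quaternions and renormalize
    w, xy = a[0] + b[0], a[1] + b[1]
    n = math.hypot(w, xy)
    if n < 1e-9:
        raise ValueError("opposite quaternions: no unique midpoint")
    return (w / n, xy / n)

# single cover: 0 degrees = (1, 0), 90 degrees = (0, 1)
print(midpoint((1, 0), (0, 1)))          # ~(0.707, 0.707): a 45 degree rotation
# double cover: 0 degrees = (1, 0), 90 degrees = (0.707, 0.707)
print(midpoint((1, 0), (0.707, 0.707)))  # ~(0.924, 0.383): 22.5 degrees, i.e. 45 with double cover
# single cover 0 to 180 degrees would be midpoint((1, 0), (-1, 0)),
# which raises, because the sum is zero
```

The same add-and-renormalize step that works here is exactly what breaks down for the single-cover 180 degree case discussed next.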

However we run into problems when interpolating from the 0 degree rotation to the 180 degree rotation:

For single cover: 1 + (-1) = 0. Huh. We can’t find the half-way rotation between these two because we just get 0, which we can’t normalize. You may think that this is just a problem because I chose to find the exact midpoint between these two vectors. But this is also a problem if we want to slerp from one to the other. It all collapses and we’re left with a zero vector.

So let’s reason through this manually. How would we interpolate from +1 to -1? We could rotate on the xy plane or on the yz plane or on the zx plane, or on any combined bivector. How do we know which bivector to choose? They’re all zero in both of our inputs. We’re missing information. In order to interpolate between two rotations, we need to know a plane on which we want to interpolate.

Let’s see how the double cover solves this: 1 + e₁₂, and after normalization we’re left with 0.707 + 0.707e₁₂, which was our 90 degree rotation, which is exactly the half-way point between the 0 degree rotation and the 180 degree rotation.

Isn’t that neat? In the double cover version one of our quaternions had an e₁₂ component, so we could interpolate on that plane. In fact you can build many possible 180 degree rotations in the double cover version. We could build a 180 degree rotation that rotates on the e₂₃ plane, or on a linear combination of the e₁₂ and e₂₃ planes, or on any arbitrary plane. They all look different and they all interpolate differently. That’s a great property, because we want to be able to interpolate on any plane of our choosing. In the single cover version, however, there is only one way to rotate 180 degrees: the quaternion -1, which looks the same no matter which plane you’re rotating on. That works fine if all you want to do is rotate 180 degrees, but it doesn’t work if you want to interpolate from one rotation to the other.

One way of thinking of this is that the trick of double cover is that you can express any rotation as a rotation of no more than 90 degrees, applied twice. We already saw that if we want to go 180 degrees, we just go 90 degrees twice. Want to go 270 degrees? Just go -45 degrees twice. Like that we always stay far away from the problem point, the 180 degree rotation, that we would run into often if we used the single cover version of quaternions. And like that we always keep the information of which plane we are rotating on, making interpolation easy.

Another way of thinking of this is that the double cover version always gives us a midpoint of the rotation which we can use to interpolate. For some pairs of rotations, there are a lot of possible midpoints depending on which plane we want to interpolate on. Double cover solves that problem by giving us one midpoint, which narrows our choices down to one plane. And we can derive any other desired interpolation if we have the midpoint.

You may be wondering if there is a problem point where the double cover breaks down. Looking at the table above, we can find one: rotating by 360 degrees, which is the quaternion -1. Interpolating between that and the 0 degree rotation gives 1 + (-1) = 0, which we can not renormalize. But that case is easy to handle, and in fact every slerp implementation already handles it: We detect if the dot product of the two quaternions is negative, and if it is we flip the target quaternion. So then we interpolate from 1 to 1, which is just a 0 degree rotation. Which is exactly what we wanted. So as long as we handle the “negative dot product” case in our interpolation function, we can handle all possible rotations. Because there are two possible ways to express every rotation, and if we run into one that’s inconvenient, we just switch to the other one.
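A minimal normalized-lerp sketch with that flip (my own names, plain (w, x, y, z) tuples, not any particular library’s API):

```python
import math

def nlerp(a, b, t):
    # a, b: quaternions as (w, x, y, z) tuples, t in [0, 1].
    # If the dot product is negative, flip the target: of the two quaternions
    # that express rotation b, pick the one on the same side as a.
    if sum(x * y for x, y in zip(a, b)) < 0:
        b = tuple(-c for c in b)
    mixed = tuple((1 - t) * x + t * y for x, y in zip(a, b))
    norm = math.sqrt(sum(c * c for c in mixed))
    return tuple(c / norm for c in mixed)

# 0 degrees to 360 degrees: the flip turns (-1,0,0,0) into (1,0,0,0),
# so the midpoint is just the identity rotation
print(nlerp((1, 0, 0, 0), (-1, 0, 0, 0), 0.5))  # (1.0, 0.0, 0.0, 0.0)
```

The flip is the only special case needed; after it, the add-and-renormalize midpoint always exists for quaternions that represent actual rotations.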

So I hope I have convinced you that you want to have double cover. It’s a neat trick that makes interpolation easy. Quaternions do not “naturally” have double cover, but the double cover comes from the way we define the vector multiplication. If we used a different algorithm to multiply a quaternion with a vector (I outlined one above) then we could get rid of the double cover, but we would be making interpolation more difficult. I actually think that the double cover trick is not unique to quaternions. I think we could also apply it to rotation matrices to make them easier to interpolate. I haven’t done the math for that though.

So in summary I hope that I was able to make quaternions a whole lot less weird. The geometric algebra interpretation of quaternions shows us that they are normal 3D constructs, not weird four-dimensional beasts. They consist of a scalar and three bivectors. Bivectors do 90 degree rotations followed by scaling, and we saw how we can create any rotation just from those 90 degree rotations and linear scaling. The rules that govern these constructs are simple, making the equations easy to derive and understand. (as opposed to the quaternion equations which can only be memorized) Also quaternions do not naturally have a double cover. The double cover comes from the way we define the multiplication of vectors and quaternions. We could get rid of it, but the double cover is a great trick for making interpolations easier.

Unfortunately this still only makes it slightly easier to understand the numbers in a quaternion. The double cover makes it so that each rotation actually gets applied twice, so my visualizations above only show half of what’s going on. This also makes it difficult to interpret the numbers, because you have to know what happens if a rotation gets applied twice, which is a whole lot harder to do in your head than a single rotation. But still, I now have a picture of quaternions, and I know what each component means and why they behave the way they do. I hope I was able to do something similar for you.

I also think that Geometric Algebra is a very interesting field that merits further study. The fact that quaternions came out so naturally (in fact they almost don’t even need a special name) and that if we do the same derivation in 2D we end up with complex numbers is fascinating to me. The paper I linked at the beginning, *Imaginary Numbers are not Real*, spends a lot of time talking about how various equations in physics come out much simpler if we use geometric algebra instead of imaginary numbers and matrices. Simplicity like that is a good hint that there is something good going on here. If you’re interested in this for doing 3D math, there is something called Conformal Geometric Algebra which adds translation to quaternions. I didn’t look too much into it, but a brief glance shows that it might be related to dual quaternions. So there’s much more to discover.
