Probably Dance

I can program and like games

Games Are About Personal Development

Here’s an angle on the fundamental reason why we play games: They are about personal development, about learning about ourselves and about the world. This may not be a new angle, but I haven’t heard it stated this explicitly. Instead I have heard people say stupid things like “games teach hand-eye coordination,” which is true, but also bullshit, because why would you spend this much time training your hand-eye coordination? No, I claim that games teach important life lessons, and that that is the fundamental reason why we play games.

I’m going to talk about video games, but this is also about games in general. Why do kids play with dolls? Because they want to learn about family life. (or about conflicts, when playing with action figures) This is not explicit learning like we get from a teacher: you act out situations and adjust your behavior depending on how your play partner reacts. Why do we send our kids to football practice? Not because we think they need to learn the valuable skill of kicking a ball into a net. No, it’s because we want them to learn about working in teams, about pacing themselves, about playing fair and all that.

The things we learn are obvious in those scenarios. It’s well known that it’s important for kids to play in order to figure out how to act in the world in a safe environment. But I claim that the same thing is true for video games, and my example will be Super Mario World.


Evidence For How To Make Great Games

Earlier this year I gave a talk about the Game Outcomes Project. I called that talk “Evidence For How To Make Great Games” because I think the Game Outcomes Project is the best data we have on what teams that make great games do differently. I wasn’t involved in the Game Outcomes Project; I just gave a talk about it because I really like it, and because I wanted to focus on different things than they did in their own write-ups and talks.

People who saw the talk really liked it, and they keep telling me how much they liked it. So I decided to record the talk again and upload it.

The pitch for the talk is that the results of the Game Outcomes Project are the best evidence we have for what makes great game development teams and what makes bad ones, and I think that every game developer should know this stuff. So I talk about what you should focus on when making a game, and I give advice for how to get there. Where the Game Outcomes Project found “really successful teams do X,” I present that, and then I also have a section at the end of the talk where I say “here is how you can actually get good at doing X.”


Collected Advice for Doing Scientific Research

I’ve been collecting advice from various sources about how to do research. I can’t find a good place that collects this, so let’s see if I can create one.

The idea is that this should be something similar to George Polya’s “How to Solve It,” but for doing research instead of solving problems. There is a lot of overlap between the two, so I will quote a lot from Polya, but I will also add ideas from other sources. I should say, though, that my sources are mostly from Computer Science, Math and Physics, so this list will be biased towards those fields.

My other background here is that I work in video game AI, so I’ve read a lot of AI literature and have found parallels between solving AI problems and solving research problems. So I will try to generalize patterns that AI research has found for how to solve hard problems.

A lot of practical advice will be for getting you unstuck. But there will also be advice for the general approach to doing research.


Less Weird Quaternions

I’ve always been frustrated by how mysterious quaternions are. They arise from weird equations that you just have to memorize, and they are difficult to debug, because as soon as you deviate too far from the identity quaternion the numbers are really hard to interpret. Most people implement quaternions once and then treat them as a black box forever after. So I had written quaternions off as one of those weird, complicated 4D mathematical constructs that mathematicians sometimes invent, which magically work as long as I don’t mess with them.

That is, until recently, when I came across the paper Imaginary Numbers are not Real – the Geometric Algebra of Spacetime, which arrives at quaternions using only 3D math and no imaginary numbers, and in a form that generalizes to 2D, 3D, 4D or any other number of dimensions. (quaternions just happen to be the special case for rotations in 3D)

In the last couple weeks I finally took the time to work through the math enough that I am convinced that this is a much better way to think of quaternions. So in this blog post I will explain…

  • … how quaternions are 3D constructs. The 4D interpretation just adds confusion
  • … how you don’t need imaginary numbers to arrive at quaternions. The term \sqrt{-1} will not come up (other than to point out the places where other people need it, and why we don’t need it)
  • … where the double cover of quaternions comes from, as well as how you can remove it if you want to (which makes quaternions a whole lot less weird)
  • … why you actually want to keep the double cover, because the double cover is what makes quaternion interpolation great

Unfortunately I will have to teach you a whole new algebra to get there: Geometric Algebra. I only know the basics though, so I’ll stick to those and keep it simple. You will see that the geometric algebra interpretation of quaternions is much simpler than the 4D interpretation, so I can promise you that it’s worth spending a little bit of time to learn the basics of Geometric Algebra to get to the good stuff.
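
To give a taste of where this is going (sign conventions vary between texts, so treat this as a sketch): in geometric algebra the product of two vectors splits into a symmetric scalar part and an antisymmetric bivector part,

    ab = a \cdot b + a \wedge b

and a rotation by an angle \theta in the plane of a unit bivector B is performed by the rotor

    R = e^{-B\theta/2} = \cos(\theta/2) - B \sin(\theta/2)

applied as the sandwich product v' = R v \tilde{R}. The half angle is exactly where the double cover comes from: R and -R rotate vectors identically. In 3D the unit bivectors e_{12}, e_{23}, e_{31} play the roles of the quaternion units (up to sign), which is how the familiar four numbers fall out of purely 3D math.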


I Wrote The Fastest Hashtable

I had to get there eventually. I had a blog post called “I Wrote a Fast Hashtable” and another blog post called “I Wrote a Faster Hashtable.” Now I finally wrote the fastest hashtable. And by that I mean that I have the fastest lookups of any hashtable I could find, while my inserts and erases are also really fast. (but not the fastest)

The trick is to use Robin Hood hashing with an upper limit on the number of probes. If an element has to be more than X positions away from its ideal position, you grow the table and hope that with a bigger table every element can be close to where it wants to be. Turns out that this works really well. X can be relatively small which allows some nice optimizations for the inner loop of a hashtable lookup.
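
To illustrate (a sketch of the principle with made-up names, not the actual code from my implementation), a lookup with a bounded probe count looks roughly like this:

    #include <cstdint>
    #include <functional>
    #include <vector>

    // Sketch: every element remembers its distance from its ideal slot, and
    // no element is ever allowed to sit more than max_probes slots away.
    template<typename K, typename V>
    class bounded_robin_hood_map
    {
        struct Slot { int8_t distance = -1; K key; V value; }; // -1 means empty
        std::vector<Slot> slots_ = std::vector<Slot>(16);
        size_t mask_ = 15;
        static constexpr int8_t max_probes = 4; // the X from above, kept small

    public:
        V * find(const K & key)
        {
            size_t index = std::hash<K>{}(key) & mask_;
            // The loop bound is a small constant, so the compiler can unroll
            // it and a miss never scans a long collision chain.
            for (int8_t d = 0; d <= max_probes; ++d)
            {
                Slot & s = slots_[(index + d) & mask_];
                if (s.distance == d && s.key == key)
                    return &s.value;
            }
            return nullptr; // insert() grows the table rather than exceed max_probes
        }
    };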

If you just want to try it, here is a download link. Or scroll down to the bottom of the blog post to the section “Source Code and Usage.” If you want more details read on.


Faster Sorting Algorithm Part 2

This is a follow up to my previous blog post about writing a faster sorting algorithm. I’m using this as a chance to go into detail on topics that I was asked about in the comments: I’ll clear up some misunderstandings and go into future work that needs to happen with this algorithm.

Somebody was nice enough to link my blog post on Hacker News and Reddit. While I didn’t submit those links myself, I still read most of the comments on those websites. For some reason the comments I got on my own website were much better than the comments on either of those sites. But there seem to be some common misunderstandings underlying the bad comments, so I’ll try to clear them up.


random_seed_seq: A small utility to properly seed random number generators in C++

Some libraries are so small they’re almost not worth sharing. But the C++ standard has a giant hole: it doesn’t provide an easy way to quickly generate truly random numbers. It has std::mt19937_64, which quickly generates pseudo-random numbers, and it has std::random_device, which slowly generates truly random numbers.

The easiest way to quickly generate truly random numbers is to use a std::random_device to seed a std::mt19937_64. That way we pay the one-time cost of using random device to generate a seed, and then have quick random numbers after that. Except that the standard doesn’t provide a way to do that. In fact it’s more dangerous than that: It provides an easy wrong way to do it (use a std::random_device to generate a single int and use that single int as the seed) and it provides a slow, slightly wrong way to do it. (use a std::random_device to fill a std::seed_seq and use that as the seed) There’s a proposal to fix this (that link also contains reasons why the existing methods are wrong), but in the meantime I’ve actually been using a tiny class for this:
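
Roughly, the idea looks like this (a sketch of the concept; the class in the full post may differ in details):

    #include <algorithm>
    #include <functional>
    #include <random>

    // A seed-sequence-like type whose generate() fills the entire range
    // directly from std::random_device, with none of std::seed_seq's bias.
    struct random_seed_seq
    {
        template<typename It>
        void generate(It begin, It end)
        {
            std::generate(begin, end, std::ref(device));
        }
        std::random_device device;
    };

    int main()
    {
        random_seed_seq seed_source;
        // This seeds all 2496 bytes of mt19937_64 state with true randomness,
        // instead of the single int that people usually pass.
        std::mt19937_64 engine(seed_source);
        return static_cast<int>(engine() & 0x7f);
    }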


I Wrote a Faster Sorting Algorithm

These days it’s a pretty bold claim if you say that you invented a sorting algorithm that’s 30% faster than state of the art. Unfortunately I have to make a far bolder claim: I wrote a sorting algorithm that’s twice as fast as std::sort for many inputs. And except when I specifically construct cases that hit my worst case, it is never slower than std::sort. (and even when I hit those worst cases, I detect them and automatically fall back to std::sort)

Why is that an unfortunate claim? Because I’ll probably have a hard time convincing you that I really did speed up sorting by a factor of two. But this will be quite a lengthy blog post, and all the code is open source for you to try out on whatever your domain is. So I might either convince you with lots of arguments and measurements, or you can just try the algorithm yourself.

Following up from my last blog post, this is of course a version of radix sort. Meaning its complexity is lower than O(n log n). I made two contributions:

  1. I optimized the inner loop of in-place radix sort. I started off with the Wikipedia implementation of American Flag Sort and made some non-obvious improvements. This makes radix sort much faster than std::sort, even for relatively small collections. (starting at 128 elements)
  2. I generalized in-place radix sort to work on arbitrary sized ints, floats, tuples, structs, vectors, arrays, strings etc. I can sort anything that is reachable with random access operators like operator[] or std::get. If you have custom structs, you just have to provide a function that can extract the key that you want to sort on. This is a trivial function which is less complicated than the comparison operator that you would have to write for std::sort.

If you just want to try the algorithm, jump ahead to the section “Source Code and Usage.”
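
To give a sense of the interface, usage looks roughly like this (a sketch based on the description above; see the section “Source Code and Usage” for the real header and exact API):

    #include <string>
    #include <vector>
    #include "ska_sort.hpp" // single-header source release, name assumed

    struct Person
    {
        std::string name;
        int age;
    };

    int main()
    {
        std::vector<Person> people = { { "Grace", 85 }, { "Ada", 36 }, { "Alan", 41 } };
        // No comparison function: you provide a function that extracts the
        // sort key. A plain int here; floats, tuples, strings etc. also work.
        ska_sort(people.begin(), people.end(),
                 [](const Person & p) { return p.age; });
    }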


Investigating Radix Sort

I recently learned how radix sort works, and in hindsight it’s weird that I never really learned about it before, and that it doesn’t seem to be widely used. In this blog post I claim that std::sort should use radix sort for large arrays, and I will provide a simple implementation that does that.

But first an explanation of what radix sort is: Radix sort is an O(n) sorting algorithm that works on integer keys. I’ll explain below how it works, but the claim that there’s an O(n) sorting algorithm was surprising to me the first time I heard it. I always thought there were proofs that sorting had to be O(n log n). It turns out sorting has to be O(n log n) only if you sort using comparisons. Radix sort does not use the comparison operator, and because of that it can be faster.

The other reason why I never looked into radix sort is that it only works on integer keys. Which is a huge limitation. Or so I thought. Turns out all this means is that your struct has to be able to provide something that acts somewhat like an integer. Radix sort can be extended to floats, pairs, tuples and std::array. So if your struct can provide for example a std::pair<bool, float> and use that as a sort key, you can sort it using radix sort.
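
To make the core idea concrete, here is a bare-bones least-significant-digit radix sort on unsigned 32-bit keys, one byte per pass. This is a sketch of the principle, not an optimized implementation:

    #include <array>
    #include <cstdint>
    #include <vector>

    // Each pass is a stable counting sort on one byte, so after four passes
    // the array is fully sorted. No comparisons anywhere: O(n) overall.
    void radix_sort_u32(std::vector<uint32_t> & values)
    {
        std::vector<uint32_t> buffer(values.size());
        for (int shift = 0; shift < 32; shift += 8)
        {
            std::array<size_t, 257> counts = {};
            for (uint32_t v : values)
                ++counts[((v >> shift) & 0xff) + 1]; // histogram of this byte
            for (size_t i = 1; i < counts.size(); ++i)
                counts[i] += counts[i - 1]; // prefix sum: start offset per bucket
            for (uint32_t v : values)
                buffer[counts[(v >> shift) & 0xff]++] = v; // stable scatter
            values.swap(buffer);
        }
    }

Floats can reuse the same loop once their bits are remapped so that they compare like integers. (flip the sign bit of positive floats, flip all bits of negative ones)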


Lessons Learned from Shenzhen I/O

Shenzhen I/O is a brilliant game. In case you haven’t heard of it, it’s a game about programming micro-controllers. It distills programming down to the fun parts, removing the inertia, self-inflicted complexity, overhead, uncertainty and drag of real programming. It’s just about coming up with clever tiny algorithms and micro-optimizing the heck out of them. It’s great alone, but it’s even better if you have a friend who’s playing at the same time: competing on the leaderboards for puzzles is enormous fun. Here are a couple of lessons from playing the game:

1. There is no optimal code. There is only code that’s faster than the code that you’re comparing to

Shenzhen I/O shows you a histogram of all the scores that other people have reached. If my solution fell on the right of the bell curve, I would optimize it until it was on the left. After a lot of work I would usually arrive at an “optimal” solution that put me in the best bracket on the histogram. Those solutions were always far from optimal.
