Probably Dance

I can program and like games

C++11 Completed RAII, Making Composition Easier

The addition of move semantics in C++11 is not just a performance and safety improvement. It’s also the feature that completed RAII. And as of C++11 I believe that RAII is absolutely necessary to make object composition easy in the language.

To illustrate let’s look at how objects were composed before C++11, what problems we ran into, and how everything just works automatically since C++11. Let’s build an example of three objects:

#include <vector>

struct Expensive
{
    std::vector<float> vec;
};
struct Group
{
    Group();
    Group(const Group &);
    Group & operator=(const Group &);
    ~Group();
    int i;
    float f;
    std::vector<Expensive *> e;
};
struct World
{
    World();
    World(const World &);
    World & operator=(const World &);
    ~World();
    std::vector<Group *> c;
};

Before C++11, composition looked something like this. It was OK to have a vector of floats, but you’d never have a vector of more expensive objects, because any time that vector reallocates, you’d have a very expensive operation on your hands. So instead you’d write a vector of pointers. Let’s implement all those functions:

Read the rest of this entry »

Neural Networks Are Impressively Good At Compression

I’m trying to get into neural networks. There have been a couple of big breakthroughs in the field in recent years, and suddenly my side project of messing around with programming languages seemed short-sighted. It almost seems like we’ll have real AI soon, and I want to be working on that. While making my first couple of steps into the field, it’s hard to keep that enthusiasm. A lot of the field is still kind of hand-wavy: when you want to find out why something is done the way it’s done, the only answer you can get is “because it works like this, and it doesn’t work if we change it.”

At least that’s my first impression. Still just dipping my toes in. But there is one thing I am very impressed with: How much data neural networks can express in how few connections.

Read the rest of this entry »

VR Will Be About Using Your Hands

I have been cautiously optimistic about VR for a while: the DK1 and DK2 made me feel motion sick pretty quickly, but I could see that there was something neat there. Being able to effortlessly look around changes how the game feels. When you could lean forward in the DK2, it highlighted how constrained our camera has always been in games. Still, I didn’t quite get what this would be useful for other than puzzle games.

The obvious use case is for news reports and videos. When I now see pictures from Syria I want to have a 360 degree picture to be able to look around and get a better feel for the situation. But for video games, VR didn’t quite click for me.

At GDC I played two games that used the Oculus and Vive controllers, and now I finally get what you can do with VR in games that you can’t do otherwise: you can use your own hands to interact with things in the virtual world.

Read the rest of this entry »

Functional Programming Is Not Popular Because It Is Weird

I’ve seen people be genuinely puzzled about why functional programming is not more popular. For example, I’m currently reading “Out of the Tar Pit”, where, after arguing for functional programming, the authors say:

Still, the fact remains that such arguments have been insufficient to result in widespread adoption of functional programming. We must therefore conclude that the main weakness of functional programming is the flip side of its main strength – namely that problems arise when (as is often the case) the system to be built must maintain state of some kind.

I think the reason for the lack of popularity is much simpler: Writing functional code is often backwards and can feel more like solving puzzles than like explaining a process to the computer. In functional languages I often know what I want to say, but it feels like I have to solve a puzzle in order to express it to the language. Functional programming is just a bit too weird.

Read the rest of this entry »

Quickly Loading Things From Disk

Loading things from disk is a surprisingly unsolved problem in C++. If you have a class that you want to save to JSON or binary and load back later, you’ll probably end up writing custom parsing code. Solutions like Protobuf or FlatBuffers generate custom classes that are… how should I put it… bad code. They force you to work in a parallel type system that’s not quite as powerful as C++ and that doesn’t allow you to write custom member functions. You pretty much have to wrap everything that they generate. Boost serialization is a better solution in that it allows you to load and save existing classes, which lets you keep your code quality higher, but it’s unable to write human-readable files. The default implementation it provides is also surprisingly slow. I blame that on the fact that it loads using std::istream.

Based on frustrations with existing solutions I decided to write something that can

  1. read and write existing, non-POD classes. No custom build step should be required
  2. write human readable formats like JSON
  3. read data very quickly (I care less about how fast it writes stuff. I work in an environment where there is a separate compile step for all data)

I came up with something that is faster than any of the libraries mentioned above.

Read the rest of this entry »

A surprisingly useful little class: TwoWayPointer

In graph-like data structures (explicit or implicit) there is a problem of memory safety when you remove a node: you have to clean up all the connections pointing to the removed node. If your graph structure is explicit this is easy, but otherwise it can be annoying to find all the pointers pointing at a node. A while ago I thought you could solve that by introducing a two-way pointer, which has to be the same size as a normal pointer, but which knows how to disconnect itself on the other side, so that you don’t have to write any cleanup code.

The reason why it turned out surprisingly useful is that if both ends of the connection know about each other, I can make them keep the connection alive even if one of them is moved around. So if one of them lives in a std::vector and the vector reallocates, the TwoWayPointer can handle that case and just change the pointer on the other end to point to the new location. That can dramatically simplify code.

One example is that I’m now using it in my signal/slot class which I’ve also included in this blog post. It allowed me to simplify the signal/slot implementation a lot. Compare it to your favourite signal implementation if you want. There is usually a lot of code overhead associated with memory safety. That all went away in this implementation.

Read the rest of this entry »

Ideas for a Programming Language Part 5: More Flexible Types

Type systems tend to be too rigid. And dynamic typing just throws the baby out with the bathwater. So here are a few things I want:

1. I want to add information to an object at compile time without having to give it a new type. Is an integer in seconds or kilometers or kilometers per hour? Is a string SQL-escaped or URL-escaped or just plain text? Is a pointer known to not be null? I also want to write logic based on this: if I multiply seconds with kilometers per hour, I want a result that makes sense. Most importantly, I want to do this easily. There is nothing stopping me from having a “url_escaped_string” type in C++, but then that string doesn’t work with any of my existing functions. I can add an implicit conversion operator, but then I lose my compile-time information as soon as I use a function that accepts only one type of string. Or everything has to be templates and everything has to be duck-typed.

So what I envision is that every object has runtime properties and compile-time properties. A string might have a runtime property called size(), and it might have a compile-time property called language() where I can set the language to “SQL” or “URL” or similar, and could then make sure at compile time that everything is escaped correctly. An int would not have any runtime properties, but it would still have compile-time properties like its unit of measurement. We’ll start with that and then see what other features we’d want from it (like functions being able to write conditions on their input parameters that check the compile-time properties of those objects, and similar).

Read the rest of this entry »

Another explanation for the Fermi Paradox

I just finished a science fiction book (The Algebraist by Iain M. Banks) in which people travel through space, but there is no faster-than-light travel. That is, except when there is a wormhole connecting the solar system that you’re in with where you want to go. If there isn’t, it will probably take hundreds of years to transport the other end of a wormhole to where you want to go.

The long travel times offer another explanation for the Fermi Paradox. (in addition to all the explanations already in that article) If you lived in a spaceship and heard that there might be a young civilization on a rocky planet somewhere that is just emerging out of their evolutionary phase, would you spend hundreds of years to have a look? (While knowing that that new species is probably pretty boring and that you’ve probably seen five other species like it before and you already don’t like hanging out with those all that much)

You probably wouldn’t. Nobody would. And the proof lies in the fact that there are still hundreds of uncontacted tribes around the world. (Wikipedia) Now that I’ve told you that there are hundreds of uncontacted tribes around the world, will you go out and try to contact them? Probably not. Too many reasons against it, too few reasons for it.

And in space everything takes a heck of a lot longer, so no wonder that nobody has bothered to say hi.

Ideas for a Programming Language Part 4: Reactive Programming

Reactive programming has been around for a long time. It’s a paradigm oriented around data flows. The usual example is a spreadsheet: if cell B refers to cell A, then cell B updates automatically when I change the value in cell A. If ten cells refer to cell A, then all ten of those cells change automatically. Contrast that with C++: if I compute ten variables using variable X and then change variable X, I have to manually write code to make sure that the other ten variables get updated.

What’s changed recently is that there are a few interesting approaches that make this paradigm more widely useful. Functional Reactive Programming (FRP) is a useful approach which is spreading, with popular libraries available for pretty much all languages. For an illustration of FRP and how it can be used in an imperative style I recommend reading Deprecating the Observer Pattern.

What really convinced me, though, that a lot more code should be reactive is a JavaScript library called Knockout, which makes it so easy to write reactive code that you’d be stupid to ever write manual state-propagation code.

Read the rest of this entry »

On Self-Improving Intelligence

I just watched this TED talk by Nick Bostrom which is about what happens when computers get smarter than we are. It follows the theory that once we have created AI that is smarter than us, that AI will be able to create AI that is even smarter than itself. So at that point we’ll be left behind very quickly. Intelligence will develop a dynamic of self-improvement that can’t be stopped.

What he doesn’t realize is that we are already in that mode. We have already created people that are smarter than we used to be. And those people have created people who are smarter than themselves. That process has been going on for hundreds, if not thousands of years. The human brain is not the limit for human intelligence.

Read the rest of this entry »