When tinkering with my particles, I thought it would be nice to distribute them in various ways. I wanted to add a ‘normal’ distribution alongside my existing linear one. C++11 has a bunch of fancy classes for generating random values like this, but they seemed a little heavyweight for me. I just wanted a crappy function that does the job for my little particles.

The first thing I did was scour the internet for clues and answers. The traditional way of generating normally distributed random values is the ‘Box-Muller’ transformation. I believe it has nothing to do with yoghurts. The code below is actually its polar variant (often credited to Marsaglia), which dodges the trig calls. It looks like this:

float r, x, y;
do {
    x = randf(-1.0f, 1.0f); // linear random between -1 and 1
    y = randf(-1.0f, 1.0f);
    r = x*x + y*y;
} while (r == 0.0f || r >= 1.0f);
return sqrt(-2.0f * log(r) / r) * x; // or y
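For completeness, here's the same fragment wrapped up into a self-contained function. The `randf` helper is my assumption about how the linear random above works; swap in your own generator.

```cpp
#include <cmath>
#include <cstdlib>

// Linear (uniform) random float in [lo, hi] -- a stand-in for the
// randf used above; substitute whatever generator you prefer.
static float randf(float lo, float hi)
{
    return lo + (hi - lo) * (float(rand()) / float(RAND_MAX));
}

// Polar-method normal generator: returns a normally distributed
// float with mean 0 and standard deviation 1.
float random_normal()
{
    float r, x, y;
    do {
        x = randf(-1.0f, 1.0f);
        y = randf(-1.0f, 1.0f);
        r = x*x + y*y;
    } while (r == 0.0f || r >= 1.0f);
    return sqrtf(-2.0f * logf(r) / r) * x; // or y
}
```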

This does work… so, uh, you can probably stop reading. I’ve made a little graph of what it looks like right here. It’s nice.

HOWEVER, I wanted something simpler than that and I didn’t even care if it was quite crap.

You can convert from a linear distribution to any distribution you have a density function for, provided you can do two simple steps. First, integrate the distribution’s function; that gives you the cumulative distribution function. Then invert that function. Now you have a generator. Use that generator on a linear distribution and hey presto, you have your desired distribution of random values.
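As a concrete example of the integrate-and-invert recipe (my example, not part of the original): the exponential distribution has density λ·e^(-λx), which integrates to the CDF 1 − e^(-λx), and inverting that gives x = −ln(1 − u)/λ. Feed that a linear random value and exponentially distributed values fall out:

```cpp
#include <cmath>
#include <cstdlib>

// Linear random in [0, 1) -- a stand-in for randf(0.0f, 1.0f).
static float randf01()
{
    return float(rand()) / (float(RAND_MAX) + 1.0f);
}

// Inverse-transform sampling for the exponential distribution:
// integrate the density lambda*e^(-lambda*x) to get the CDF
// 1 - e^(-lambda*x), invert it, and apply it to a linear value.
float random_exponential(float lambda)
{
    float u = randf01();              // linear in [0, 1)
    return -logf(1.0f - u) / lambda;  // inverted CDF
}
```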

Okay, so integrating and inverting aren’t the easiest things to do. In fact, they’re often not possible. The normal distribution’s function has the form e^(-x^2), which has no integral in elementary functions. That’s why there isn’t a simple generator for the normal distribution.
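To spell that out a little (this detail is mine, not the original’s): the integral of e^(-x²) only exists as a special function, the error function erf, which the standard library does expose. It’s the *inverse* of that, the bit a generator needs, that has no simple form. A quick sketch of the normal CDF via `std::erf`:

```cpp
#include <cmath>

// Cumulative distribution function of the standard normal.
// The integral of its density has no elementary form; it is
// written in terms of the error function erf instead.
float normal_cdf(float x)
{
    return 0.5f * (1.0f + erff(x / sqrtf(2.0f)));
}
```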

However, the graph of the integral of the normal distribution function *looks like* the graph of hyperbolic tan. They’re not the same, but they’re pretty much the same. Close enough. Now, the inverse of tanh is arctanh, which has a really simple expression: arctanh(x) = 0.5·ln((1+x)/(1−x)). Turned into code, it’s just this:

return 0.5f * log(1.0f / randf(0.0f, 1.0f) - 1.0f); // equals arctanh(1 - 2u) for linear u in (0, 1)

Yes! That is reasonably similar to the normal distribution and much, much simpler to spit out.

Compare it to the graph above and you’ll be pleasantly surprised, I’m sure. It is a little leaner down the middle but the overall feel is the same. That satiated my need for normally distributed data.
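Wrapped into a callable function for tinkering (the `randf01_open` helper is my assumption, standing in for `randf(0.0f, 1.0f)` while keeping the value strictly inside (0, 1) so the log stays finite):

```cpp
#include <cmath>
#include <cstdlib>

// Linear random in the open interval (0, 1) -- a stand-in for
// randf(0.0f, 1.0f) that never returns 0 or 1 exactly.
static float randf01_open()
{
    return (float(rand()) + 1.0f) / (float(RAND_MAX) + 2.0f);
}

// Arctanh-based approximation to a normal distribution:
// the inverse of tanh applied to a linear random value.
float random_almost_normal()
{
    return 0.5f * logf(1.0f / randf01_open() - 1.0f);
}
```

One thing worth knowing: this distribution is symmetric around zero like a normal, but its spread is a little under 1, which matches the “leaner down the middle” look.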