Mostly Harmless

A blog about science, art, coding, life,... mostly harmless stuff.

PCG Terrain II: The Mesh Generator

In the previous post we made good progress on the first (and most important) of the three components of a procedural terrain generator. We'll get back to that later on, but for now I want to focus on the second most important component: the mesh generator. Let's recap where we are now:

Gaussian mountains

So, what's wrong with this island? Mainly, two things:

  • It looks too smooth.
  • It has too little detail.

Here is a close up, where both issues can be seen clearly:

Gaussian mountains closeup

Perhaps in the early 2000s we could have gotten away with this huge flat polygonal landscape, but not in 2014.

Perlin powered details

To add some detail, we can use our old friend, multilayered Perlin noise. Simply modify the SampleHeight function to add a few levels of Perlin noise after the Gaussians:

public float SampleHeight(float x, float y) {
    float h = 0f;

    // Sum the contribution of every Gaussian island.
    for (int i = 0; i < islands.Length; i++) {
        h += SampleHeight(x, y, islands[i]);
    }

    // Layer the Perlin detail on top.
    return h + PerlinInfluence * SamplePerlin(x, y);
}

The PerlinInfluence parameter can be fine-tuned to make sharper or smoother looking mountains. This is the same old SamplePerlin function:

private float SamplePerlin(float x, float y) {
    float width = Width / 10f;
    float result = 0f;

    // Each level samples at ten times the frequency and a tenth
    // of the amplitude of the previous one.
    for (int i = 0; i < PerlinLevels; i++) {
        result += Mathf.PerlinNoise(x / width, y / width) * width / 2f;
        width /= 10f;
    }

    return result;
}

And this is what we can achieve with a 100x100 terrain mesh and four levels of Perlin noise.

Gaussian and Perlin

It looks a bit better, but closeups still suck:

Gaussian and Perlin closeup

And the problem is, of course, the mesh resolution. We simply have a terrain that is way too big with way too few vertices. Right now this mesh has only 10K vertices, about 160KB worth of memory. We could certainly store and render a much bigger mesh on modern hardware; however, Unity allows at most 65K vertices per mesh, so if we want bigger meshes we have to split them into smaller parts.

Mesh subdivision

While we are at it, we can make another observation. In the last closeup, the far away mountains look pretty good with that level of detail. However, the closest parts, right in front of the camera, look awful. It would be a waste of resources to have a uniformly dense mesh with enough detail to correctly represent the closest parts of the terrain, when just a few vertices suffice for the far away ones.

What we need is a way to subdivide the mesh into patches, and dynamically create more vertices in those closest to the camera. As the distance increases, we (almost) won't notice the loss in detail; hence, we might as well save some bytes. Let's aim for something like this:

Closeup with details

The basic idea is to represent the terrain as a quadtree. A quadtree is a tree where each node is either a leaf or has exactly four children. It is very useful for representing and storing 2-dimensional spatial data in a compact and fast structure. If the tree is well built (i.e., balanced), the cost of accessing a given node (i.e., finding which node corresponds to a given point in the 2D plane) is logarithmic in the number of nodes.

However, we won't be using quadtrees for storing points, but for building meshes. We don't care about accessing a node; instead, what we want is to exploit the spatial coherence of the structure of the tree to quickly create patches of terrain with a desired level of detail, using what Wikipedia calls a region quadtree.
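As a sketch of the idea (the names TerrainNode, Region, Split and so on are illustrative, not the actual project's API), a region quadtree node for terrain patches might look like this:

```csharp
// Hypothetical region quadtree node for terrain patches.
// Each node covers a square region; a leaf owns a 20x20-vertex mesh.
public class TerrainNode {
    public Rect Region;            // the square of terrain this node covers
    public TerrainNode[] Children; // null while this node is a leaf
    public Mesh Mesh;              // the patch mesh built for this region

    public bool IsLeaf { get { return Children == null; } }

    // Create four children, each covering one quadrant of Region.
    public void Split() {
        if (!IsLeaf) return;
        float w = Region.width / 2f, h = Region.height / 2f;
        Children = new TerrainNode[4];
        for (int i = 0; i < 4; i++) {
            Children[i] = new TerrainNode {
                Region = new Rect(Region.x + (i % 2) * w,
                                  Region.y + (i / 2) * h,
                                  w, h)
            };
        }
    }
}
```

Each child covers a quarter of the parent's area but gets a mesh of the same vertex count, which is exactly what gives us the extra detail up close.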

We'll begin with a single mesh of, let's say, 10000 game units, and only 20x20 vertices. We build this mesh and assign heights to the vertices by sampling from our heightmap. This mesh becomes the root of our quadtree. We then perform a recursive subdivision of the mesh until we reach the desired level of detail: a node in the quadtree is divided if its distance to the camera falls below some threshold that depends on its size.

Dividing a node creates four new child nodes, laid out to cover exactly the region the parent used to cover. The children's meshes are created with the same resolution (20x20 vertices) but half the dimensions of the parent, effectively quadrupling the number of vertices in the region and hence increasing the level of detail. The parent node's mesh is then hidden.
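The two paragraphs above can be sketched as a recursive update, run once per frame from the root. Everything here is hypothetical scaffolding (SplitFactor, MinPatchSize, HideMesh, ShowMesh and DiscardChildren are assumed helpers, not the actual project's code); the point is only the shape of the recursion:

```csharp
// Hypothetical LOD update. A node splits when the camera is closer than
// SplitFactor times its size, and collapses (showing its own mesh again)
// otherwise. MinPatchSize bounds the recursion depth.
void UpdateLOD(TerrainNode node, Vector3 cameraPos) {
    float distance = Vector3.Distance(cameraPos, node.Center);
    bool shouldSplit = distance < SplitFactor * node.Size
                       && node.Size > MinPatchSize;

    if (shouldSplit) {
        if (node.IsLeaf)
            node.Split();          // build the four children's meshes
        node.HideMesh();           // the children replace the parent's mesh
        foreach (var child in node.Children)
            UpdateLOD(child, cameraPos);
    } else {
        node.DiscardChildren();    // or just hide them, to avoid rebuilding
        node.ShowMesh();
    }
}
```

Making the threshold proportional to the node's size is what keeps the on-screen triangle size roughly constant: big, far-away patches stay coarse, while the same rule forces nearby patches to keep splitting.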

Here is a wireframe view of the previous scene:

Closeup in wireframe

You can see that the triangles stay (more or less) the same size with respect to the screen, no matter the distance.

And in the scene view, you can see how the mesh is denser closer to the camera (the little white line).

Mesh subdivision

There is no simple snippet I can post to cover this part, so if someone really wants it, I can share the Unity 3D project. Leave a comment if that is the case.

There are a few things left. Next time I want to take a stab at improving the water, perhaps adding waves and some detail near the shores. We'll also look at adding some props (trees, rocks, etc.) to breathe some life into our world.