Dev Notes: Seamless LODs, pt. 2 – Normal interpolation

It’s been a while since the last post in this series, in which I described how to gradually blend geometric levels of detail. Now that we have seamless geometry, it’s time to compute the geometry normals, so we can tell which way the terrain is oriented relative to a light source and calculate its brightness.

Now, this whole article is really just a workaround for a very basic problem – a problem that might not even exist if it weren’t for the strange ways the OpenGL API has grown and mutated over the years.

Once upon a time…

…drawing stuff with OpenGL worked a bit differently from how it does now. You would put polygons on the screen one by one:

glBegin(GL_TRIANGLES);
    glVertex3f( 1.0f,  1.0f, 0.0f);
    glVertex3f(-1.0f, -1.0f, 0.0f);
    glVertex3f( 1.0f, -1.0f, 0.0f);
glEnd();

This is called “immediate mode”. Each of those glVertex calls is essentially an entire chain of initialising a vertex buffer, pushing data into it, drawing the vertex buffer and closing it again. Of course, this is very inefficient, so today what we do instead is push many polygons into a single buffer, draw them all in one go, and keep the buffer around for as long as we want to draw it again, so it doesn’t have to be flushed and refilled every frame.

However, immediate mode had a nice feature that’s no longer supported by the modern drawing pipeline:

glBegin(GL_QUADS);

What this line did was tell the graphics card that the vertices you were about to send to the GPU should be assembled and drawn as quads.

But when it was time to abandon the old immediate mode of OpenGL 1.0, the ARB/Khronos Group decided that explicitly asking for quads was a bit silly. The graphics card turned them into sets of triangles anyway, since there is no such thing as a three-dimensional quad with four arbitrary points.

So from then on, the GPU itself was supposed to stay ignorant about all things quadrilateral, and it became the programmer’s job to take care of assembling their quads.

Three isn’t a big enough crowd

But of course, graphics programming isn’t that simple. It’s certainly true that the idea of a quad doesn’t make much sense in a geometrical way. However, 3D graphics isn’t just about geometry. Consider the following:

On the left is a quad with color interpolation. On the right are two triangles with color interpolation. The obvious problem is that the lower triangle has one white and two black corners and doesn’t know about the red corner of the upper triangle.

These are two different interpolation methods. The method on the left is simple “bilinear interpolation”: to get the color value of an arbitrary point on the quad, you blend the colors of the two upper corners and the colors of the two lower corners according to the x-offset, then blend the two resulting colors according to the y-offset.
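In code, that’s just two horizontal blends followed by one vertical one. Here’s a minimal GLSL sketch – the corner names and the assumption that both offsets are normalised to [0, 1] are just placeholders:

// Bilinear interpolation of four corner colors.
// c00 = lower left, c10 = lower right, c01 = upper left, c11 = upper right;
// uv.x and uv.y are the offsets inside the quad, both in [0, 1].
vec3 bilinear(vec3 c00, vec3 c10, vec3 c01, vec3 c11, vec2 uv)
{
    vec3 bottom = mix(c00, c10, uv.x); // blend the two lower corners along x
    vec3 top    = mix(c01, c11, uv.x); // blend the two upper corners along x
    return mix(bottom, top, uv.y);     // blend the two results along y
}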

The method on the right is called “barycentric interpolation” and blends the final color value from the three corners of each triangle. This is how all fragment shader interpolation is done in modern OpenGL.
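For comparison, barycentric interpolation boils down to a weighted sum of the three corners. The GPU computes the weights for every fragment by itself, so this is only a sketch of the maths, not something you would ever write out yourself:

// Barycentric interpolation of three corner colors.
// bary holds the barycentric coordinates of the fragment inside its
// triangle; the components are >= 0 and sum to 1.
vec3 barycentric(vec3 c0, vec3 c1, vec3 c2, vec3 bary)
{
    return bary.x * c0 + bary.y * c1 + bary.z * c2;
}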

This method is sufficient for coordinate interpolation, so any sort of texture coordinate (UV) mapping will be interpolated correctly. This works because coordinates form a global value system – both the lower and the upper triangle of a quad have a relative position on the same absolute coordinate map. But for any sort of interpolation with relative values – like the vertex colors above, or normals – this method will fail.

Which is why lighting computation with triangle-interpolated normals gives obvious hints about the underlying geometry (the triangles are pointing to the lower right corner):

This would totally ruin the plan to smoothly blend between different levels of geometry resolution. It doesn’t matter if the geometry itself is perfectly seamless if the stupid interpolation immediately gives away which way the triangles are aligned, and thus the triangle count.

Here’s what the same terrain would look like with bilinear interpolation.

This is essentially the same as the “old” immediate mode interpolation. It certainly isn’t perfectly smooth either: two-dimensional (image) interpolation is still an ongoing topic of research, and there are many different algorithms with varying degrees of smoothness and efficiency.

However, there’s a big difference now: the triangle seams have disappeared. A quad will now always render the same way, no matter whether it’s composed of only two or an arbitrarily high number of triangles. (I know that this isn’t obvious from the screenshot, but you’ll have to take my word on this one.)

Implementation

Here is my implementation of bilinear interpolation in the fragment shader. I sincerely hope that the way I’ve solved this isn’t the only one, because I think it’s overly complicated.

This is based on the idea that interpolation is broken for relative values but works for coordinates.

In the fragment shader:
1. Use the built-in interpolation to generate UV (texture) coordinates (“smooth in”)
2. Disable interpolation to copy the raw corner values from the previous shader stage (“flat in”)
3. Use the UV coordinates to manually interpolate the raw corner values (sketched below).
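Here is roughly what that “simple” plan would look like in GLSL. The names are placeholders, and – as you’ll see in a moment – this naive version doesn’t actually work as written, because each fragment would somehow need to know the four corners of its own quad:

#version 400 core

smooth in vec2 quadUV;          // 1. interpolated by the hardware as usual
flat   in vec3 cornerNormal[4]; // 2. raw per-corner values, not interpolated
                                //    [0] lower left, [1] lower right,
                                //    [2] upper left, [3] upper right

out vec4 fragColor;

void main()
{
    // 3. manual bilinear blend of the four corner normals
    vec3 bottom = mix(cornerNormal[0], cornerNormal[1], quadUV.x);
    vec3 top    = mix(cornerNormal[2], cornerNormal[3], quadUV.x);
    vec3 n      = normalize(mix(bottom, top, quadUV.y));
    fragColor   = vec4(n * 0.5 + 0.5, 1.0); // visualise the normal for now
}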

Sounds simple enough, but of course, we can never have simple things.

The problem lies in copying the corner values from the previous shader stage in “flat mode”. In this case, the previous stage happens to be the tessellation evaluation shader. A flat input only carries the values of a single vertex (the provoking vertex), so all the fragment shader knows is that it has been called on behalf of “some vertex point” – how do we interpolate anything with that?

The ugly answer: When all we have is the red point, the fragment could be part of any of the four quads that surround that red point. Which means that every fragment call requires all 9 corners that might surround the fragment.

We have to figure out which quad we’re in by looking at the UV coordinates – where are we, relative to our red dot? Once we know whether we’re in quad 1, 2, 3 or 4, we can finally interpolate the value from the 4 of the 9 dots that make up that quad. Well, I warned you it would be ugly!
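Here’s a sketch of how that selection could look in the fragment shader. It assumes the red dot’s vertex exports its own normal plus those of its eight neighbours in row-major 3×3 order, and that the UV coordinate spans that 3×3 patch with the red dot sitting at its centre – the names, layout and indexing are placeholders, not the exact code from my project:

// The 9 corner normals around the provoking vertex (the red dot),
// row-major: index = row * 3 + column, red dot at index 4 (centre).
flat   in vec3 patchNormal[9];
smooth in vec2 patchUV;   // (0,0) = lower left of the 3x3 patch,
                          // (1,1) = upper right, red dot at (0.5, 0.5)

vec3 interpolateNormal()
{
    // Which of the four quads around the red dot is this fragment in?
    vec2  local = patchUV * 2.0;                        // 0..2 across the patch
    ivec2 cell  = ivec2(min(floor(local), vec2(1.0)));  // 0 or 1 per axis
    vec2  f     = local - vec2(cell);                   // 0..1 offset inside the quad

    // Bilinear blend of the 4 corners of that quad, picked from the 9 dots.
    int  i00    = cell.y * 3 + cell.x;                  // lower-left corner index
    vec3 bottom = mix(patchNormal[i00],     patchNormal[i00 + 1], f.x);
    vec3 top    = mix(patchNormal[i00 + 3], patchNormal[i00 + 4], f.x);
    return normalize(mix(bottom, top, f.y));
}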

The irony is that bilinear interpolation for quads is, of course, still built into OpenGL – as long as you use the legacy immediate mode pipeline. Apparently there’s also an extension available on NVIDIA chipsets that enables bilinear interpolation for fragments that are produced by the tessellation shader.

Here’s an idea, dear Khronos Group: if the tessellation shader allows us to explicitly output quads, there needs to be a way to interpolate fragment values for those quads!


So much for today. The next part will be about textures! This time, there will be actual content and not just OpenGL hacks! Stay tuned…
