Palette lighting tricks on the Nintendo 64


This article is a continuation of my Bluesky thread from April.

We made a Nintendo 64 demo for Revision 2025!

It has baked lighting with normal mapping and real-time specular shading, ahem, well sort of. More on that later. The beautiful song was made by noby with guitar performed by Moloko (https://soundcloud.com/sou_andrade).

Below I have some notes on the directional ambient and normal mapping techniques I developed. They are both pretty simple in the end but I haven’t seen them used elsewhere.

But wait, normal mapping on the N64?

I knew normal mapping on the N64 was possible due to earlier experiments by fellow homebrew developers WadeTyhon and Spooky Iluha. I had also done some emboss bump mapping hacks myself.

The basic approach used in this article is not new: the renderer computes lighting directly into textures at runtime. It’s great because no specialized hardware support is needed and you can run arbitrary shading code on the CPU. Too bad it’s so slow…

Shading a palette instead

So the idea is to do texture-space shading on the CPU. But what if it’s a palette texture we’re shading? Those are very common on the N64 anyway. In that case it’s enough to update only the palette and the texture will respond as if we computed lighting for each texel. Instant savings!

A demonstration of “palette-space” shading. When the palettes update, the full textures update too. When mapped to an object, it looks like the shading changed.

The original palette is replaced with a shaded palette and the palette texture is applied as a regular texture to an object. With just diffuse “dot(N, L)” lighting, the results look pretty good:

Another view of the above potato-shaped rock mesh.

In the above example I also did shading in linear space by undoing the gamma correction of the color texture :) In the final demo that wasn’t possible because I split the ambient and direct light terms to be combined by the N64’s RDP unit in hardware.

Object-space normal mapping

Usually normal mapping is done in tangent space. This way you can use repeating textures, and the fine normals can modulate smoothly varying vertex normals. A tangent-space normal map of a single color represents a smooth surface.

An object-space normal map is simpler but more constrained. Now the normal map’s texels don’t represent deviations from the vertex normals but absolute surface normals instead. The runtime math becomes simpler – just read a color from a texture – but every surface point now needs a unique texel, like in lightmaps.

An early experiment to validate the approach on a high-res normal map.
Left: The original object-space normal map. Right: Compressed to a 32-color palette.

The objects have both a diffuse texture (basecolor * ao) and a normal map. Both textures actually share the same palette indices, which I generated with scikit-learn’s K-means clustering. For that to work, the two images were interpreted as a single six-channel image.
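As a rough illustration, the shared-palette quantization can be sketched in Python like this. The array names and the 16-color count are placeholders, and the random arrays stand in for the real textures; the demo’s actual preprocessing may differ in the details.

import numpy as np
from sklearn.cluster import KMeans

# Stand-ins for the real textures: a basecolor*AO map and an object-space
# normal map of the same resolution, both as floats in [0, 1].
H, W = 64, 64
albedo = np.random.rand(H, W, 3)
normals = np.random.rand(H, W, 3)

# Treat the pair as one six-channel image: every texel becomes a 6-vector
# (r, g, b, nx, ny, nz) and all texels are clustered together.
samples = np.concatenate([albedo, normals], axis=2).reshape(-1, 6)

# 16 clusters -> a 4-bit index texture whose indices are shared by a
# 16-entry color palette and a 16-entry normal palette.
kmeans = KMeans(n_clusters=16, n_init=4, random_state=0).fit(samples)

indices = kmeans.labels_.reshape(H, W)            # the shared index map
color_palette = kmeans.cluster_centers_[:, :3]    # 16 RGB entries
normal_palette = kmeans.cluster_centers_[:, 3:]   # 16 normal entries
# Re-normalizing keeps the clustered normals valid unit vectors.
normal_palette /= np.linalg.norm(normal_palette, axis=1, keepdims=True)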

Below is an example of how the compression looks with a tangent-space normal map.

A roof tile texture compression example. An RGB diffuse texture and a normal map are both compressed to a 16-color palette image in a way that palette indices are shared. Therefore the actual image data has to be stored just once at 4 bits per pixel.

At shading time, which can happen on load or on each frame, each palette color is processed in a for loop. A single index is used to fetch a normal and a surface diffuse color. The CPU-side shader code then produces a new RGB color for that index. The result of the loop is a new palette but with shading applied.
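In Python-flavoured pseudocode (the demo itself runs this on the N64’s CPU, not in Python), the loop looks roughly like the sketch below, including the linear-space round-trip mentioned earlier. The function and argument names are illustrative.

import numpy as np

def shade_palette(color_palette, normal_palette, light_dir, light_color):
    # color_palette:  (N, 3) basecolor*AO entries, sRGB in [0, 1]
    # normal_palette: (N, 3) unit normals, same space as light_dir
    # One lighting evaluation per palette entry instead of per texel.
    albedo_lin = color_palette ** 2.2                    # undo gamma
    ndotl = np.clip(normal_palette @ light_dir, 0.0, 1.0)
    shaded = albedo_lin * light_color * ndotl[:, None]   # diffuse N.L
    return np.clip(shaded, 0.0, 1.0) ** (1.0 / 2.2)      # back to sRGB

# Example with a 16-entry palette and a warm directional light.
palette16 = np.random.rand(16, 3)
normals16 = np.random.rand(16, 3) - 0.5
normals16 /= np.linalg.norm(normals16, axis=1, keepdims=True)
new_palette = shade_palette(palette16, normals16,
                            light_dir=np.array([0.0, 0.707, 0.707]),
                            light_color=np.array([1.0, 0.95, 0.8]))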

Unfortunately, this approach only really works with directional lights. It’s also difficult to represent any kind of shadows with just the palette alone. That’s why I started looking into how baked lighting could fit into the equation.

Baked directional ambient and sun light

I wanted the demo to have a building with realistic lighting. Perhaps it was a bit too ambitious😅 After a lot of deliberation, I put ambient and direct sun lighting in vertex color RGB and alpha channels, respectively. The ambient term is further split into a directional intensity (a greyscale environment map) and color (vertex RGB with a saturation boost). The sun is a directional light whose visibility is transmitted in vertex alpha.

The shading formula is therefore this:

ambient = vertex_rgb * grey_irradiance_map(N)
direct = vertex_alpha * sun_color * dot(N, sun_dir)
color = diffuse_texture * (ambient + direct)

Here’s how the different terms look:

Lighting decomposition.

Note how the messy “Sun visibility” vertex colors get neatly masked out by the sun (N.L) computation in the bottom right corner. In the end the ambient and direct terms are summed to get the shaded result below.

Shaded result.

The thing about directional ambient is that even if the baked lighting is rough, the details in the textures still make it look pretty high-end. Consider this scene that has just a colored, blurred environment map and per-vertex ambient occlusion:

Image-based ambient lighting. In this image only an ambient sky light is enabled. Also shows the palettes used (top left corner).

It really pops! I love image-based lighting.

For the blurred environment maps, I used an equirectangular projection for simplicity. Polyhaven’s HDRIs already use that projection. Since I precomputed the shading at load time, the complex sampling math wasn’t an issue.

Visualization of a 64x32 environment map (right) before it gets blurred to an irradiance map. The dot sphere on the left shows the image pixels mapped to unit sphere directions.
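Roughly, the pixel-to-direction mapping and a brute-force blur to an irradiance map look like the sketch below. The cosine weighting and normalization here are one reasonable choice, not necessarily exactly what the demo’s tooling does, and the random array stands in for a real downsampled HDRI.

import numpy as np

def equirect_directions(width, height):
    # Unit direction and solid angle for every pixel of an equirectangular
    # map, with +Y up (like the dot-sphere visualization above).
    u = (np.arange(width) + 0.5) / width      # azimuth, 0..1
    v = (np.arange(height) + 0.5) / height    # inclination, 0..1
    phi = u * 2.0 * np.pi
    theta = v * np.pi
    theta, phi = np.meshgrid(theta, phi, indexing="ij")
    dirs = np.stack([np.sin(theta) * np.cos(phi),
                     np.cos(theta),
                     np.sin(theta) * np.sin(phi)], axis=-1)
    # Per-pixel solid angle; pixels cover less of the sphere near the poles.
    solid_angle = (2.0 * np.pi / width) * (np.pi / height) * np.sin(theta)
    return dirs, solid_angle

def irradiance_map(env):
    # Brute-force cosine-weighted blur, normalized so a constant environment
    # stays constant. O(pixels^2), but fine for a 64x32 map at load time.
    h, w, _ = env.shape
    dirs, dw = equirect_directions(w, h)
    flat_dirs = dirs.reshape(-1, 3)
    flat_env = env.reshape(-1, 3) * dw.reshape(-1, 1)
    out = np.zeros_like(env)
    for y in range(h):
        for x in range(w):
            cosine = np.clip(flat_dirs @ dirs[y, x], 0.0, None)
            out[y, x] = (flat_env * cosine[:, None]).sum(axis=0) / np.pi
    return out

env = np.random.rand(32, 64, 3)   # stand-in for a downsampled HDRI
irradiance = irradiance_map(env)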

Shading a larger model with repeating textures

I designed the original shading algorithm for single objects and only tested it with the potato_rock.obj you saw in the beginning. For the demo, the castle mesh’s repeating textures posed a problem. As a workaround, I split the large mesh into submeshes that each conceptually share the same object-space normal map.

The task was done primarily by yours truly, manually in Blender, by grouping geometry by material and surface direction. The computer did its part by calculating a world-to-model matrix for each group based on its polygon normals. That is pretty much an approximate tangent space. So I couldn’t escape them in the end!
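As a sketch, one way to build such an approximate per-group basis from the face normals looks like this. The helper-axis choice and the row-vector convention are illustrative; the actual tooling may construct the matrix differently.

import numpy as np

def group_basis(face_normals):
    # One shared "tangent space" for a whole group of polygons, built
    # from the average of its face normals.
    n = np.asarray(face_normals, dtype=float).mean(axis=0)
    n /= np.linalg.norm(n)
    helper = np.array([0.0, 1.0, 0.0])        # any axis not parallel to n
    if abs(n @ helper) > 0.9:
        helper = np.array([1.0, 0.0, 0.0])
    tangent = np.cross(helper, n)
    tangent /= np.linalg.norm(tangent)
    bitangent = np.cross(n, tangent)
    # Rows are the basis vectors: multiplying by this matrix takes a
    # world-space direction into the group's local space.
    return np.stack([tangent, bitangent, n])

# Example: three roughly upward-facing polygons share one basis.
basis = group_basis([[0.0, 1.0, 0.1], [0.1, 1.0, 0.0], [0.0, 0.9, 0.0]])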

Each of these groups shares a palette, so as a whole their lighting will be correct only in an average sense.

Tangent-space basis vector visualization for a simple cube. In the final model many polygons that point roughly in the same direction have to share the same tangent space.

The tangent spaces are not interpolated at runtime, which shows up as faceted lighting. This is perhaps the biggest downside of this technique.

Lighting isn’t interpolated smoothly on this arch because the tangent spaces are constant over polygons, unlike in proper tangent-space normal mapping.

Specular shading

Since many surface points now share the same shaded color, computing point light or specular shading correctly is not possible. The “palette-space” approach only really works for diffuse directional lights because the shading formulas don’t need a “to camera” vector V, which depends on the position of the shaded surface point. Yet still I tried to hack it for the speculars :)

If we approximate the object being shaded as a sphere, then the point p being shaded is simply p = radius * normal. We also have to accept that the result will look faceted, since many surface points share the same palette index.
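A sketch of what that per-palette-entry specular hack can look like follows. The Blinn-Phong highlight and Schlick-style Fresnel terms here are illustrative choices, not necessarily the exact formulas used in the demo.

import numpy as np

def palette_specular(normal_palette, camera_pos, light_dir,
                     radius=1.0, f0=0.04, shininess=32.0):
    # Treat the object as a sphere: the surface point for each palette
    # entry is approximated as radius * normal, which gives a per-entry
    # view vector V even though many texels share the entry.
    n = normal_palette
    p = radius * n                              # fake surface position
    v = camera_pos - p                          # towards the camera
    v /= np.linalg.norm(v, axis=1, keepdims=True)
    h = light_dir + v                           # Blinn-Phong half vector
    h /= np.linalg.norm(h, axis=1, keepdims=True)
    spec = np.clip((n * h).sum(axis=1), 0.0, 1.0) ** shininess
    fresnel = f0 + (1.0 - f0) * (1.0 - np.clip((n * v).sum(axis=1), 0.0, 1.0)) ** 5
    return spec, fresnel

normals16 = np.random.rand(16, 3) - 0.5
normals16 /= np.linalg.norm(normals16, axis=1, keepdims=True)
spec, fresnel = palette_specular(normals16,
                                 camera_pos=np.array([0.0, 0.0, 3.0]),
                                 light_dir=np.array([0.0, 0.707, 0.707]))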

Fresnel shading. The sculpt was approximated as a stretched sphere in lighting calculations.

In the demo, the specular highlights looked a bit funny but still they seemed to fool most people. I count this as a success.

Is this the future?

In the demo I tried to hide the main limitations of the technique: shading discontinuities, only greyscale textures supported (!), no point lights. So it really only works with elaborate preprocessing. I’d love to see the shading discontinuity issue solved somehow (Spooky Iluha’s techniques don’t have it) without losing support for both ambient and direct lighting. I don’t know if it’s possible but that’s what makes this hobby so fun :)

A PAL-compatible N64 ROM is available but note that it crashes a lot.


I’m also thinking of writing a book. Sign up here if you’re interested.
