How does light baking work in realtime rendering?

How does light baking work in realtime rendering? Is there any resource I can reference for light-baking techniques? e.g. how to turn baked light into a texture.

How does light baking work in realtime rendering?

That’s too open a question; be more specific. It varies a bit based on the technique:

  • Baking direct lighting only
    • almost always the first step
  • Radiosity
    • stored as flat light
      • Quake / old Unreal / Half-Life 1
    • stored as ambient + directional (2 lightmaps per bake)
      • Last of Us, several others
    • stored as a radiosity normal basis (3 lightmaps per bake; a runtime evaluation sketch follows this list)
      • Half-Life 2, current Unreal
    • spherical harmonic (9 lightmaps per bake)
      • Halo Reach and onward
  • Substructuring and lattice interpolation
  • Pseudo-realtime radiosity like Enlighten?
    • coarse emitter to fine lumel relationship
    • form-factors
    • point-clouds
  • Packing?
  • Generating lightmap UV coordinates?
    • Use Thekla or Microsoft UV-Atlas
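
As a concrete example of one storage format from the list above, here’s a hedged sketch of how the radiosity normal-map layout (the Half-Life 2 style: three lightmaps baked along three fixed tangent-space basis directions) is commonly evaluated at runtime. The sampler names and the squared-weight refinement are my assumptions, not pulled from any particular engine:

```hlsl
// Sketch: runtime evaluation of Half-Life-2-style radiosity normal maps.
// Three lightmaps store light baked along three fixed tangent-space directions.
sampler Lightmap0 : register(s1);
sampler Lightmap1 : register(s2);
sampler Lightmap2 : register(s3);

// The commonly cited HL2 basis (tangent space).
static const float3 Basis[3] =
{
    float3( 0.8164966,  0.0,        0.5773503),
    float3(-0.4082483,  0.7071068,  0.5773503),
    float3(-0.4082483, -0.7071068,  0.5773503)
};

// tangentNormal usually comes from the material's normal map.
float3 EvaluateRadiosityNormalMap(float3 tangentNormal, float2 lightmapUV)
{
    // Weight each baked direction by how much the per-pixel normal faces it.
    float3 w;
    w.x = saturate(dot(tangentNormal, Basis[0]));
    w.y = saturate(dot(tangentNormal, Basis[1]));
    w.z = saturate(dot(tangentNormal, Basis[2]));
    w *= w; // squaring the weights is a common refinement; plain weights also work

    float3 light = w.x * tex2D(Lightmap0, lightmapUV).rgb
                 + w.y * tex2D(Lightmap1, lightmapUV).rgb
                 + w.z * tex2D(Lightmap2, lightmapUV).rgb;
    return light / max(dot(w, float3(1, 1, 1)), 1e-4); // renormalize the weights
}
```

The flat, ambient + directional, and spherical-harmonic variants follow the same pattern: bake light into whatever basis you chose, then project the per-pixel normal into that basis at runtime.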

That all gets nasty because there can be a lot of crossover. For instance you can render as spherical harmonic maps but then write out the final data as ambient + directional or a radiosity normal map.


The short summary is that the lightmap stores only the light that will reach the surface; you then use that to light the surface according to the technique.

In the crudest implementation the light at the surface will be lightmap * colorTexture, as in the sketch below.
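
In shader terms that crudest case looks something like this; the sampler names and the second UV channel are my assumptions, but any lightmapped renderer ends up with the same shape:

```hlsl
// Sketch: runtime shading with a baked lightmap (sampler/UV names are assumptions).
sampler ColorTexture : register(s0); // albedo, regular UVs
sampler Lightmap     : register(s1); // baked light, second UV set

float4 PS_Lightmapped(float2 uv : TEXCOORD0, float2 lightmapUV : TEXCOORD1) : COLOR0
{
    float3 albedo = tex2D(ColorTexture, uv).rgb;
    float3 light  = tex2D(Lightmap, lightmapUV).rgb; // only the light reaching this surface
    return float4(albedo * light, 1.0);
}
```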

Baking the lightmap is basically just rendering backwards: you render to UV coordinates, interpolating the position and normal per vertex of the mesh, instead of rendering to world coordinates, interpolating the UV coordinates and normal per vertex (not wholly accurate, but close enough for a summary).
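
Here is a minimal sketch of that backwards rendering for a single directional light, under my own naming (this isn’t anyone’s production bake shader): the vertex shader emits the lightmap UV as the clip-space position, and the pixel shader lights the world-space point that each lumel covers.

```hlsl
// Sketch: bake one directional light into a lightmap by rasterizing in UV space.
// World, LightDirection and LightColor are assumed parameters, not from the thread.
float4x4 World;
float3   LightDirection; // unit vector pointing towards the light, world space
float3   LightColor;

struct VSIn  { float4 Position : POSITION0; float3 Normal : NORMAL0; float2 LightmapUV : TEXCOORD1; };
struct VSOut { float4 Position : POSITION0; float3 WorldPos : TEXCOORD0; float3 WorldNormal : TEXCOORD1; };

VSOut VS_Bake(VSIn input)
{
    VSOut o;
    // Rasterize into lightmap space: UV (0..1) remapped to clip space (-1..1).
    // Depending on the API you may also need a Y flip and/or half-texel offset here.
    o.Position    = float4(input.LightmapUV * 2.0 - 1.0, 0.0, 1.0);
    // Position and normal just ride along as plain interpolators.
    o.WorldPos    = mul(input.Position, World).xyz;
    o.WorldNormal = mul(input.Normal, (float3x3)World);
    return o;
}

float4 PS_Bake(VSOut input) : COLOR0
{
    // Light the world-space point this lumel covers and store the result.
    // input.WorldPos is where you'd sample a shadow map or compute attenuation.
    float ndotl = saturate(dot(normalize(input.WorldNormal), LightDirection));
    return float4(LightColor * ndotl, 1.0);
}
```

Point this at a render target the size of your lightmap and the rasterizer fills in every covered lumel for you.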

Radiosity is too long winded to explain without knowing that’s what you’re after.

Is there any resource that I can reference for light baking techniques?

Hugo Elias’ old page is still the best reference for radiosity:

http://web.archive.org/web/20071001024020/http://freespace.virgin.net/hugo.elias/radiosity/radiosity.htm

Direct-light baking is the inverse of usual rendering, as described above.

e.g. how to turn baked light into a texture

It should already be a texture if it’s baked.


You really need to be more specific on what you want to know.


I should add that I have no problem explaining what you need to know, but you need to give me some specifics for me to even approach it.

Lightmapping is such a massive subject that there are even niche books on the little details like Practical Global Illumination with Irradiance Caching, which is 148 pages of zero pandering to unskilled or unfamiliar readers. One of my favorites on the subject.

You have to at least explain where you are as a programmer, your understanding of geometry and rendering, and what it is that you want to do with lightmapping. The last two are mandatory, the first not so much (I can infer it).

At the simplest it may just be “follow this tutorial on baking lightmaps in Blender”, and at the worst you could be getting ready for occupancy maps, how to actually use barycoords, hemicubes, and lattice interpolation.


If you have written a software renderer then you’re ready to implement a lightmapper. If not, then you’re not ready for it and should rely on other tools to do it for you.

I’m also interested in this subject. I managed to load a DeleD map with lightmaps, though the lightmap was baked in DeleD and exported as a texture file, and I use a second texture to render the lightmap in additive mode, using a second UV set also retrieved from the DeleD map file, as shown below.

I’m playing with shadow maps and I tried rendering the shadow to a texture only once and reusing it, and it works… but I’m also in the middle of figuring out how to save the render-to-texture result to a file and assign UVs to the affected triangles.

But ATM, to minimize hair loss :-D, I stick with DeleD to lightmap my levels ^_^y

You just render to a texture, sending your UV coords * 2 - 1 (because clip space is -1 to 1) as the vertex position, and passing your actual world position in a separate interpolator to the pixel shader. Your pixel shader then uses that world position instead of the usual position you’d think of using. Everything then just works …

… until you go for a low lumel-to-pixel ratio with texture filtering on, because you need to expand a gutter so that both filtering and mipping don’t wreck the lightmap. That’s a fairly long-winded can of worms with 4 decent solutions; a simple dilation pass is sketched below.
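
One of the simpler of those solutions is a post-bake dilation pass over the lightmap. Here’s a hedged sketch, assuming the bake wrote coverage into the alpha channel (that convention, and the names, are my assumptions):

```hlsl
// Sketch: one dilation pass that grows a gutter around covered lumels.
// Assumes the bake wrote coverage into alpha (1 = covered lumel, 0 = empty).
sampler Lightmap : register(s0);
float2  TexelSize; // 1 / lightmap resolution

float4 PS_Dilate(float2 uv : TEXCOORD0) : COLOR0
{
    float4 center = tex2D(Lightmap, uv);

    // Average every covered neighbour of this lumel.
    float4 sum = float4(0, 0, 0, 0);
    [unroll]
    for (int y = -1; y <= 1; ++y)
    {
        [unroll]
        for (int x = -1; x <= 1; ++x)
        {
            float4 n = tex2D(Lightmap, uv + float2(x, y) * TexelSize);
            if (n.a > 0.0)
                sum += float4(n.rgb, 1.0);
        }
    }
    float4 filled = sum.a > 0.0 ? float4(sum.rgb / sum.a, 1.0) : float4(0, 0, 0, 0);

    // Keep covered lumels as-is, flood the neighbour average into empty ones.
    return center.a > 0.0 ? center : filled;
}
```

Run it a few times, ping-ponging between two render targets, until the gutter is wide enough for your filtering and the mip levels you keep.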

Here’s a basic gist: Effect baking interpolated vertex normals to texture; OffsetScale is {2, 2, -1, -1} for render-to-texture. I think that’s simple enough that you can see how to add whatever interpolators to the vertex output and use the pixel shader to bake whatever you need in UV space. You can just as easily send the true world coordinates along to the PS and then evaluate your shadowmap like usual.


Yo’ man! Thanks, points taken and link copied and noted on my review list… appreciated much ^_^y