Light map generation

I want to come up with a way of doing nice lighting for indoor scenes with fixed geometry

So think a room with a bunch of meshes and a bunch of lights

I can see a way of generating a 3D texture from all the lights by rendering into 16 textures then combining them

So render the lights onto a plane at various positions in the volume with additive rendering
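To make that concrete, one slice of the volume could be filled with something like this (a rough sketch only; every parameter name here is invented for the example). Each of the 16 slices is a full-screen quad at a fixed world-space height, drawn once per light with additive blending:

```hlsl
// Sketch only: evaluating one point light's contribution for one horizontal
// slice of the volume. All names (VolumeMin, SliceY, LightPosition, etc.)
// are made up for the example.

float3 VolumeMin;       // world space corner of the lit volume
float3 VolumeSize;      // world space extent of the lit volume
float  SliceY;          // world space height of this slice plane
float3 LightPosition;
float3 LightColor;
float  LightRadius;

float4 SlicePS(float2 uv : TEXCOORD0) : COLOR0
{
    // Texel uv maps to a world position on the slice plane
    float3 worldPos = float3(
        VolumeMin.x + uv.x * VolumeSize.x,
        SliceY,
        VolumeMin.z + uv.y * VolumeSize.z);

    // Simple distance based attenuation, no shadowing (that is the hard part)
    float dist  = length(LightPosition - worldPos);
    float atten = saturate(1.0 - dist / LightRadius);

    return float4(LightColor * atten * atten, 1.0);
}
```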

This would work fine until you think about shadows: objects can block the light within the volume, and I can’t think of a good way of fixing that.

I looked at other techniques, but they seem to only be per-mesh and that won’t work for me (my scenes are built in a Lego style).

Has anyone seen anything along these lines I can look at?

Deferred lighting; I have a sample in my git repo here.

It also does shadows.

I have my own deferred renderer; that is not what I want.

I want to pre-generate complex lighting and shadows

So, light map baking?

This will only work with non-dynamic scenes. Do what you would do with deferred, but store the light and shadow maps to file, then just use those at runtime rather than the dynamic ones…

Yes, but that won’t work for me.

The indoor scenes are static, but they are built from blocks so I can put lots of them together quickly

So a wall 12 metres long is actually six 2-metre wall sections, instance rendered

So anything that relies on storing textures per object will not work

I have a plan, or rather a thought experiment

For each light I render the scene from the light’s point of view and extract the linear depth value and the world x and z coordinates

I combine all of these into a Vector3 float texture, which stores a start and end y value for the light, and a single lighting value

In the object’s pixel shader, I scale the world position x and z and use them to sample this texture

If the y coordinate of this pixel is within the range in the texture, then it is lit and the lighting factor from the texture can be used.
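In shader terms, the lookup I have in mind would look something like this (just a sketch; the parameter and sampler names are made up, and it assumes one such texture per light):

```hlsl
// Rough sketch of the lookup described above, assuming the per light float
// texture stores (startY, endY, lightValue). All names are invented.

float3 VolumeMin;
float3 VolumeSize;

texture LightVolumeMap;
sampler LightVolumeSampler = sampler_state
{
    Texture   = <LightVolumeMap>;
    MinFilter = Point;
    MagFilter = Point;
    AddressU  = Clamp;
    AddressV  = Clamp;
};

float SampleBakedLight(float3 worldPos)
{
    // Scale world x/z into 0..1 to address the light texture
    float2 uv = (worldPos.xz - VolumeMin.xz) / VolumeSize.xz;

    float3 entry = tex2D(LightVolumeSampler, uv).rgb; // startY, endY, lightValue

    // Lit only if this pixel's height falls inside the stored range
    float lit = step(entry.x, worldPos.y) * step(worldPos.y, entry.y);
    return lit * entry.z;
}
```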

Am I mad? Or will this work?

I still don’t get why a single deferred lighting pass to bake the lighting won’t work… But you know what you need, I guess. It will be interesting to see what you come up with 🙂

Because the camera can move. The objects are static, but the camera isn’t.

But if the scene is still, and so are the light sources, then the stored maps from the single deferred light pass should be enough, shouldn’t they?

Not looked at deferred lighting for a while, so forgive my ignorance if I am missing a fundamental point here.

Deferred lighting is efficient because you only do the lighting calculation for VISIBLE pixels.

In a forward renderer you do the lighting calculation for every pixel in the scene regardless of visibility

When the camera moves, the visible pixels change

I have had a look at some other techniques; the one I was most interested in works by taking all the geometry in the scene and rendering all the triangles into a texture.

Then it does the lighting calculations, which often have a lot of terms, using this as the output, and creates a scene-wide texture.

The vertices of all the geometry are then modified to include an extra texture coordinate that indexes into this texture.

The problem with that is that I cannot see any way of mapping all the tris in a scene into this texture, and I cannot find an example of how they do this.

OK, I get you now. I was thinking more of the shadows; the maps are generated from the PoV of the light source, so that should work.

As to the lighting itself, I would probably (off the top of my head) have a light baking pass and render the scene from the PoV of each light, and on the CPU calculate the light term per vertex, storing that either in the vertex buffer of the geometry or in a texture that can be given to the model at runtime.
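The light term itself would just be the usual diffuse and attenuation maths, something like this (written as HLSL only to show the formula; the names are invented, and in the bake it would be evaluated once per vertex on the CPU):

```hlsl
// Sketch of the kind of per vertex light term I mean (names invented).
// In the baking pass this would be evaluated once per vertex and stored,
// rather than run every frame.
float BakedLightTerm(float3 vertexPos, float3 vertexNormal,
                     float3 lightPos, float lightRadius)
{
    float3 toLight = lightPos - vertexPos;
    float  dist    = length(toLight);

    float ndotl = saturate(dot(vertexNormal, toLight / dist));
    float atten = saturate(1.0 - dist / lightRadius);

    return ndotl * atten;
}
```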

I have not really thought about light baking, but I think I would like to have it as an option on my engine.

I think if it is possible to render into a 3D texture, it would be easy

Sanity check this for me.

Since each object has texture coordinates, which are always 0-1 (at least in my world), it would be possible to pass in a cell x and cell y for each object. Use these to generate an output mapping in a 2D texture.

So object 0 draws to coordinates 0,0, object 1 draws to xstep * 1, 0, object 2 draws to xstep * 2, 0, and object N draws to (object number % size) * xstep, (object number / size) * ystep

Doing this all objects can render into a single large texture

With this mapping, you loop over all the lights, rendering the result of the lighting equation into the texture using additive blending

Then run a blur over it and use it as the lighting value in the real time code
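As a sketch, the bake pass I'm imagining would look roughly like this (completely untested, and every name in it is made up just to show the idea):

```hlsl
// Sketch of the bake pass described above (all names invented).
// Object N gets CellOffset = float2((N % size) * xstep, (N / size) * ystep)
// and CellScale = float2(xstep, ystep), set per draw call on the CPU side.

float2 CellOffset;      // where this object's region starts in the atlas (0..1)
float2 CellScale;       // size of the region (0..1)
float3 LightPosition;
float  LightRadius;
float3 LightColor;
float4x4 World;

struct VSIn
{
    float4 Position : POSITION0;
    float3 Normal   : NORMAL0;
    float2 TexCoord : TEXCOORD0;   // the usual 0..1 coords
};

struct VSOut
{
    float4 Position : POSITION0;
    float3 WorldPos : TEXCOORD0;
    float3 Normal   : TEXCOORD1;
};

VSOut BakeVS(VSIn input)
{
    VSOut o;

    // Place the vertex in the atlas using its UV, not its 3D position
    float2 atlasUV = CellOffset + input.TexCoord * CellScale;

    // Map 0..1 atlas coords to clip space (-1..1, y flipped)
    o.Position = float4(atlasUV.x * 2.0 - 1.0, 1.0 - atlasUV.y * 2.0, 0.0, 1.0);

    o.WorldPos = mul(input.Position, World).xyz;
    o.Normal   = normalize(mul(input.Normal, (float3x3)World));
    return o;
}

// One pass per light, additive blending, so contributions accumulate
float4 BakePS(VSOut input) : COLOR0
{
    float3 toLight = LightPosition - input.WorldPos;
    float  dist    = length(toLight);

    float ndotl = saturate(dot(normalize(input.Normal), toLight / dist));
    float atten = saturate(1.0 - dist / LightRadius);

    return float4(LightColor * ndotl * atten, 1.0);
}
```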

The real-time code will be REALLY quick. I am just worried that, since we won’t be using the normals anymore, they will be optimized out of the shader and suddenly we can’t render anything.

It does mean that the more objects in the scene, the worse the lighting as the region in the light map used for each object will get smaller.

64 objects will give you a 64 * 64 pixel area for the lighting, 128 objects a 32 by 32 area… but that may be enough for me

I’m cooking dinner for the family, so I just skim-read your post. You can have more than one UV channel, remember, and colour too…

Interesting stuff. I noticed something called “Bakery GPU Lightmapper” and was tempted and curious about examining the source just to see how it works. It is more complex than what I need, technically. Right now, my strategy/idea is this: if I alter the game map or terrain in the editor, I save the entire scene block/cell to file, import it into 3ds, and then bake the entire scene with the works as a complete map, which I then apply as the actual texture to wrap the entire scene (4096x4096). If more detail is needed, I use smaller cells and do multiple bakes. I don’t know if this is a smart way to do it, but it works. I suppose if the user can modify the scene, like altering walls or blocks or something, then an in-game light-map thing would be necessary; technically it’s probably more ideal for most situations.

This is very interesting; it does everything I want to do, but uses texture atlases to get around the problem I am struggling with.

hmmm

Well I am still struggling.

For each object I render a 256 by 256 lightmap as part of a 4k texture, so I can get a lot in a single texture.

This texture looks fine.

But the results are underwhelming.

You can see on the floor that you get dark areas; these are the seams in the tiles, but they shouldn’t be visible

I think this is because the tile at 256 by 256 is too small, unless the 3D objects are specifically textured to give space between different faces

Not sure how to continue.

Not the best solution, but I use the 3D modeler “Delgine 3D” to handle the light map generation.

It can generate lightmaps with unlimited lights and shadows baked into the texture.

I can send the DMX loader and parser if you’re interested; it’s a pretty simple parser : )

Thanks, that’s an interesting tool, but I am writing the editor for my engine.

So another tool isn’t the answer.

Would love to know how it works though


Hi Stainless,

I’m just using forward rendering for this with a custom VertexType that can store ColorMap UV, lightmap UV and the other normal stuff. I render the model using the colormap texture UV and then render it again using the lightmap UV. To make a long story short, this is dual texturing: one texture is the ColorMap and the other the LightMap, rendered in transparent mode.
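A single-pass version of the same idea looks roughly like this (just a sketch with invented names; in practice I render it as two passes with blending, as described above):

```hlsl
// Sketch of the dual texture idea (single pass version, names invented).
// ColorMap is sampled with the first UV set, LightMap with the second,
// and the two are simply multiplied together.

texture ColorMap;
texture LightMap;

sampler ColorSampler = sampler_state { Texture = <ColorMap>; };
sampler LightSampler = sampler_state { Texture = <LightMap>; };

float4 DualTexturePS(float2 colorUV : TEXCOORD0,
                     float2 lightUV : TEXCOORD1) : COLOR0
{
    float4 albedo = tex2D(ColorSampler, colorUV);
    float4 light  = tex2D(LightSampler, lightUV);
    return albedo * light;
}
```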

1). This is the link to DELGINE 3D Tools. It’s an old tool but an open source 3D modeling application, and it’s very easy to use : )

http://www.delgine.com/

2). The actual DMX file is a very simple text file, pretty much just like an OBJ file with lightmap info

3). DeleD can output the DMX file, its color texture, and the generated lightmap texture.

I’ll prepare the loader and parser so that anyone can study, enhance, and use it, and hopefully someone can convert it to create a pipeline tool for DMX files : )

^_^y
