This will only work with non-dynamic scenes. Do what you would do with deferred rendering, but store the light and shadow maps to file, then just use those at runtime rather than the dynamic ones…
I still don’t get why a single deferred lighting pass to bake the lighting won’t work… But you know what you need, I guess. It will be interesting to see what you come up with.
I have had a look at some other techniques. The one I was most interested in works by taking all the geometry in the scene and rendering all of its triangles into a texture.
Then it does the lighting calculations, which often have a lot of terms, using this texture as the output, producing a scene-wide lighting texture.
The vertices of all the geometry are then modified to include an extra texture coordinate that indexes into this texture.
The problem with that is I cannot see any way of mapping all the triangles in a scene into this texture, and I cannot find an example of how they do it.
OK, I get you now. I was thinking more of the shadows: shadow maps are generated from the PoV of the light source, so that should work.
As to the lighting itself, I would probably (off the top of my head) have a light baking pass: render the scene from the PoV of each light, and on the CPU calculate the light term per vertex, storing that either in the vertex buffer of the geometry or in a texture that can be given to the model at runtime.
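A minimal sketch of that CPU-side baking step, assuming a simple Lambert term with no shadowing (all names here are illustrative, not from the poster's engine):

```python
import math

def normalize(v):
    """Return v scaled to unit length (leaves a zero vector unchanged)."""
    length = math.sqrt(sum(c * c for c in v)) or 1.0
    return tuple(c / length for c in v)

def bake_vertex_lighting(positions, normals, lights):
    """positions/normals: lists of (x, y, z) tuples; lights: list of
    dicts with 'pos' and 'intensity'. Returns one baked scalar light
    term per vertex, ready to store in the vertex buffer or a texture."""
    baked = [0.0] * len(positions)
    for i, (p, n) in enumerate(zip(positions, normals)):
        n = normalize(n)
        for light in lights:
            # Direction from the vertex toward the light.
            to_light = normalize(tuple(l - c for l, c in zip(light["pos"], p)))
            # Clamped Lambert (N . L) term; contributions add up per light.
            lambert = max(0.0, sum(a * b for a, b in zip(n, to_light)))
            baked[i] += lambert * light["intensity"]
    return baked
```

At runtime the baked values replace the per-frame diffuse calculation, so the shader only has to sample or interpolate them.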
I have not really thought about light baking, but I think I would like to have it as an option on my engine.
Since each object has texture coordinates, which are always 0-1 (at least in my world), it would be possible to pass in a cell x and cell y for each object, and use these to generate an output mapping into a 2D texture.
So object 0 draws to coordinates (0, 0), object 1 draws to (xstep * 1, 0), object 2 draws to (xstep * 2, 0), and object N draws to ((N % size) * xstep, (N / size) * ystep).
Doing this, all objects can render into a single large texture.
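The mapping above can be sketched like this, assuming a square grid of `size` cells per side (the names are illustrative):

```python
def cell_offset(object_number, size, xstep, ystep):
    """Map an object index to the top-left corner of its atlas cell,
    filling the grid row by row as described above."""
    cell_x = (object_number % size) * xstep
    cell_y = (object_number // size) * ystep
    return cell_x, cell_y

def atlas_uv(u, v, object_number, size):
    """Remap an object's own 0-1 UV into its cell of the shared atlas,
    so every object lands in a disjoint region of one large texture."""
    xstep = 1.0 / size
    ystep = 1.0 / size
    ox, oy = cell_offset(object_number, size, xstep, ystep)
    return ox + u * xstep, oy + v * ystep
```

In practice this remapping would happen when generating the extra texture coordinate per vertex, so the runtime shader just samples the atlas directly.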
With this mapping, you loop over all the lights, rendering the result of the lighting equation into the texture using additive blending.
Then run a blur over it and use it as the lighting value in the real-time code.
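A CPU-side sketch of that bake loop, with a stand-in `shade` function in place of the real lighting equation (on the GPU the additive step would be a blend state, not a Python loop):

```python
def bake_lightmap(width, height, lights, shade):
    """Accumulate every light's contribution into one lightmap.
    shade(light, x, y) stands in for the real lighting equation."""
    texel = [[0.0] * width for _ in range(height)]
    for light in lights:                        # one pass per light
        for y in range(height):
            for x in range(width):
                texel[y][x] += shade(light, x, y)   # additive blend
    return texel

def box_blur(texel):
    """Simple 3x3 box blur over the baked result, clamped at edges."""
    h, w = len(texel), len(texel[0])
    out = [[0.0] * w for _ in range(h)]
    for y in range(h):
        for x in range(w):
            samples = [texel[cy][cx]
                       for cy in range(max(0, y - 1), min(h, y + 2))
                       for cx in range(max(0, x - 1), min(w, x + 2))]
            out[y][x] = sum(samples) / len(samples)
    return out
```

The blur hides the seams and quantization between cells before the texture is handed to the runtime shader.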
The real-time code will be REALLY quick. I am just worried that since we won’t be using the normals anymore, they will be optimized out of the shader and suddenly we can’t render anything.
It does mean that the more objects in the scene, the worse the lighting, as the region of the lightmap used for each object gets smaller.
64 objects will give you a 64 x 64 pixel area for the lighting, 128 objects 32 x 32… but that may be enough for me.
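Those numbers work out if you assume a 512x512 lightmap and round the grid side up to the next power of two (an assumption on my part; the post doesn't state the atlas size):

```python
def cell_size(atlas_size, object_count):
    """Pixels per side of each object's cell, for a square atlas whose
    grid side is rounded up to the next power of two."""
    side = 1
    while side * side < object_count:
        side *= 2
    return atlas_size // side

# With atlas_size = 512:
#   64 objects  -> 8x8 grid  -> 64x64 px per object
#   128 objects -> 16x16 grid -> 32x32 px per object
```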
Interesting stuff. I noticed something called “Bakery GPU Lightmapper” and was tempted and curious about examining the source just to see how it works. It’s more complex than what I need, technically. Right now, my strategy/idea was: if I alter the game map or terrain in the editor, I save the entire scene block/cell to file, import it into 3ds, and then bake the entire scene with the works as a complete map, which I then apply as the actual texture to wrap the entire scene (4096x4096); if more detail is needed, I use smaller cells and do multiple bakes. I don’t know if this is a smart way to do it, but it works. I suppose if the user can modify the scene - like alter walls or blocks or something - then an in-game lightmap thing would be necessary; I suppose technically it’s more ideal for most situations.
I’m just using forward rendering for this, with a custom VertexType that can store a ColorMap UV, a LightMap UV, and the other normal stuff. I render the model using the ColorMap texture UV, then render it again using the LightMap UV. To make a long story short, this is dual texturing: one texture is the ColorMap and the other is the LightMap, rendered in transparent mode.
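One way such a two-UV vertex might be laid out in memory (an illustrative sketch, not the poster's actual VertexType; the field order and float layout here are assumptions):

```python
import struct

# Position (3 floats) + normal (3 floats) + ColorMap UV (2 floats)
# + LightMap UV (2 floats), packed as little-endian 32-bit floats.
VERTEX_FORMAT = "<3f3f2f2f"
VERTEX_STRIDE = struct.calcsize(VERTEX_FORMAT)  # bytes per vertex

def pack_vertex(pos, normal, color_uv, light_uv):
    """Pack one vertex into the binary layout above, ready to append
    to a vertex buffer."""
    return struct.pack(VERTEX_FORMAT, *pos, *normal, *color_uv, *light_uv)
```

The two UV channels are what let the first pass sample the ColorMap and the second (blended) pass sample the LightMap from the same vertex stream.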
1) This is the link for DELGINE 3D Tools. It’s an old tool, but it’s an open-source 3D modeling application, and it’s very easy to use : )
I’ll prepare the loader and parser so that anyone can study, enhance, and use it, and hopefully someone can convert it to create a pipeline tool for DMX files : )