Sample currently set RenderTarget2D

If you use multiple render targets you can increase the number of RGB components. I think MonoGame has a max of 4 simultaneous render targets, giving you 16 separate channels (if you include the alpha channels). That max could probably be raised, as modern GPUs support more than 4 render targets at once.
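
In MonoGame that would look something like this (the four light target variables here are just placeholders):

// Bind up to four render targets at once; the pixel shader then writes to
// COLOR0..COLOR3, giving 4 x RGBA = 16 channels to pack data into.
GraphicsDevice.SetRenderTargets(lightTarget0, lightTarget1, lightTarget2, lightTarget3);
// ... draw the lighting pass here ...
GraphicsDevice.SetRenderTarget(null); // back to the back buffer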

You’re right. I don’t know the performance consequences of this on the GPU, but it is at least going to lead to some if-statements in the lighting shader. This technique could also be used with my existing solution, though. I’m currently using a render target with SurfaceFormat.Color plus an extra render target with SurfaceFormat.HalfSingle for the shadow map depth. That still leaves me with 2 extra render targets, which would give me 12 channels. I could use SurfaceFormat.Bgra4444 instead now to save space. Too bad there is no Rgba2222 surface format.
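
For reference, creating those targets looks roughly like this (sizes and names are placeholders):

// Main color target plus a half-precision single-channel target for shadow map depth.
var colorTarget = new RenderTarget2D(GraphicsDevice, width, height, false,
                                     SurfaceFormat.Color, DepthFormat.None);
var shadowDepthTarget = new RenderTarget2D(GraphicsDevice, width, height, false,
                                           SurfaceFormat.HalfSingle, DepthFormat.None);
// A 4-bit-per-channel target would save space where that precision is enough.
var packedTarget = new RenderTarget2D(GraphicsDevice, width, height, false,
                                      SurfaceFormat.Bgra4444, DepthFormat.None);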

So you’re actually doing shadow mapping and you want 24 lights to be able to cast shadows. That would normally mean using 24 (different) shadow maps and therefore 24 passes (1 per light) - you then supply 24 textures to the composition shader, which checks each of them to see whether a pixel is lit.
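
Per frame that would be roughly the following (the shadow map array, effect and draw calls are placeholders, not code from any particular project):

// One shadow map and one pass per light, then all of them go to the composition shader.
for (int i = 0; i < lights.Count; i++)
{
    GraphicsDevice.SetRenderTarget(shadowMaps[i]);
    GraphicsDevice.Clear(Color.White);
    // ... render the scene's occluders from light i's point of view ...
}
GraphicsDevice.SetRenderTarget(null);
for (int i = 0; i < lights.Count; i++)
    compositionEffect.Parameters["ShadowMap" + i].SetValue(shadowMaps[i]);
// ... draw the composition pass that checks each map to see whether a pixel is lit ...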

With quite a bit of optimisation (static lights: yes/no, etc.) it’s doable, yet not advised, and you should think about other techniques to accomplish the result. Think of geometry shadows, where you compute the rays of each light, do some collision checks against geometry and pass the new geometry as a shadow overlay.

My game is a 2D platformer, so I don’t know if those techniques are applicable.

Sure, why not - a rectangle is just a primitive, the same as a cube. Even better, because you only need collision checks for 2 axes :slight_smile:
If you google “2D Shadow Volume” you’ll find examples of the outcome; maybe it fits your needs. It just sounds to me like you’re trying to force a solution onto something which is not suitable for it.

Oh, I see what you mean. The kind of shadows I have are in the shape of the sprites they come from, kind of like this: http://static1.gamespot.com/uploads/original/1547/15470456/2885713-screen+shot+2015-06-15+at+5.15.39+pm.png

Yeah, if this were for 3D, 24 dynamic lights at the same time is not going to be realistic performance-wise, I think, in any scenario on most people’s computers - maybe not even the best computers.

2D is another matter though, and maybe it is not so unrealistic in that case.

Now, I have never tried this and this is off the top of my head.
But thinking about it, this should technically be feasible in 2D, as way less data is needed.
This would not in any way simulate height for these lights and shadows, though.
It would be god-awfully complicated, mainly for the indexing, but I think doable.

If you were to create a single texture as a shadow map for all the lights, in a somewhat unorthodox way,
and then draw the scene one time to it, I think I can see a way to achieve it.
But it would involve some mind-numbing math logic due to the way pixel shaders work.

Let’s say you have an array of lights (really, the positions of those lights) passed to a shader.
Let’s say you also have a render target (or a pair acting in tandem) to be used as the depth buffer for all lights.
Let each pixel row in this depth render target represent a light.
Let each pixel column in that row represent an angle around that light.
As you draw each object to this depth texture:
You calculate the distance from the pixel position to the light position.
You calculate the normalized direction from the current pixel to the light position.
You calculate the angle with atan2 from that direction.
You normalize that radian value to a range of 0 to 1 for texture indexing purposes (for the column draw validation later).
At this point you need to ensure your writing to the texture is extremely selective.
Here goes…
You need to do the following…
Make sure row 1 only tests against light index 1 and ignores everything else.
Column 1 only writes if the above is true and the angle returned by atan2 matches a predefined angle column index (mapping angle to index) from 0 to 360 (normalized to 0 to 1), etc.
That match has to be within some exact tolerance, determined by the width (number of angle columns) of the texture passed in, for precision.
You then, and only then, write the distance to that pixel - but only if it is less than the distance already at that position; otherwise you keep the distance that is already there.
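
A CPU-side C# reference of that write rule, just to make the indexing concrete (WritePolarDepth and the 2D array are my own stand-ins; in practice this lives in the pixel shader, and the depth map starts out cleared to a large value):

using System;
using Microsoft.Xna.Framework;

static class PolarDepth
{
    // depthMap[row, column]: row = light index, column = angle step around that light.
    public static void WritePolarDepth(float[,] depthMap, Vector2 pixelPos, Vector2 lightPos, int lightIndex)
    {
        int lightRows = depthMap.GetLength(0);  // one row per light
        int angleCols = depthMap.GetLength(1);  // one column per angle step
        if (lightIndex >= lightRows)
            return; // this light has no row in the map

        Vector2 toLight = lightPos - pixelPos;
        float distance = toLight.Length();

        // atan2 gives [-pi, pi]; remap to [0, 1] so it can index the columns.
        float angle01 = (float)((Math.Atan2(toLight.Y, toLight.X) + Math.PI) / (2.0 * Math.PI));
        int column = Math.Min((int)(angle01 * angleCols), angleCols - 1);

        // Only keep the nearest occluder seen so far for this light/angle.
        depthMap[lightIndex, column] = Math.Min(depthMap[lightIndex, column], distance);
    }
}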

Which of course means you would have to flip-flop two render targets, or feed the same render target back in as a texture to a shader while it is also set as the render target - though I have never tried the latter and it’s probably not possible. The first option is possible, though I still don’t know how feasible it is in this case: 100 render target sets in one frame is a hell of a lot.
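
The flip-flop version would look something like this (depthEffect, occluders and the "PreviousDepth" parameter are assumptions):

// Two depth targets used in tandem: sample one, write the other, then swap.
RenderTarget2D depthRead  = new RenderTarget2D(GraphicsDevice, mapWidth, mapHeight, false,
                                               SurfaceFormat.Single, DepthFormat.None);
RenderTarget2D depthWrite = new RenderTarget2D(GraphicsDevice, mapWidth, mapHeight, false,
                                               SurfaceFormat.Single, DepthFormat.None);

foreach (var occluder in occluders)
{
    GraphicsDevice.SetRenderTarget(depthWrite);
    depthEffect.Parameters["PreviousDepth"].SetValue(depthRead); // last pass's result as input
    // ... draw this occluder into the depth map with depthEffect here ...

    // Swap so the texture we just wrote becomes the input for the next occluder.
    var tmp = depthRead;
    depthRead = depthWrite;
    depthWrite = tmp;
}
GraphicsDevice.SetRenderTarget(null);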

This would then result in an output where, after drawing all objects in the scene,
you would have a depth map where
each row of the map equates to a specific light, and
each column equates to a specific angle around that light, holding its shadow depth.

This means if you had 100 entities you would need 100 draws for the depth map.
From there it would be another 100 draws to render the scene with depth.
For that you would need another shader, which is a bit simpler - the indexing is again the hardest part.
For each object you draw, you pass its centroid, get the light position, normalize, use atan2 for the column access, and read the depth for the shadow at the drawn pixel’s position.
Do that for each light, sum the light intensity for all pixels not in shadow, multiply by the pixel’s actual image color, and return that color from the pixel shader function.
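
As a CPU-side reference of that second pass (the names and the linear falloff are my own assumptions; in practice this is the lighting pixel shader):

// Sums every light's contribution for one pixel, skipping lights the pixel is shadowed from.
static Color ShadePixel(Color texel, Vector2 pixelPos, Vector2[] lightPositions,
                        float[,] depthMap, float lightRange)
{
    int angleCols = depthMap.GetLength(1);
    float intensity = 0f;

    for (int i = 0; i < lightPositions.Length; i++)
    {
        Vector2 toLight = lightPositions[i] - pixelPos;
        float distance = toLight.Length();

        // Same row/column lookup as in the depth pass.
        float angle01 = (float)((Math.Atan2(toLight.Y, toLight.X) + Math.PI) / (2.0 * Math.PI));
        int column = Math.Min((int)(angle01 * angleCols), angleCols - 1);

        bool inShadow = depthMap[i, column] < distance; // an occluder sits between pixel and light
        if (!inShadow)
            intensity += MathHelper.Clamp(1f - distance / lightRange, 0f, 1f); // simple linear falloff
    }

    return texel * MathHelper.Clamp(intensity, 0f, 1f);
}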

In both shaders, I think you would also need a vertex output struct that copies the actual pixel position into an extra variable and passes that along to the pixel shader.

This is all just theory though.

That sounds pretty impressive, but it doesn’t sound like it’s applicable to drop shadows like this.

No that wouldn’t handle drop shadows.

Is that all you are trying to achieve?

Where are all the lights in this, though?

The above example only has one static global light source. Mine has 24 dynamic lights. This means each entity could have up to 24 shadows that are placed in relation to the light source and shadows projected by one light can be illuminated by other lights.

The requirement is that each character can be carrying a light and moving, right?

Yes, but static lights are also in the scene.

So essentially this really is a 3D lighting situation in a 2D scene.
24 dynamic point lights with shadow mapping, hmm.

I think you are making it more complicated than it needs to be; this is relatively easy, especially if all objects are in a single plane and the background is in a second. Do it in a deferred manner: render the silhouettes of all objects (just accumulate them during rendering - you will have two RTs open at the same time, one for diffuse, a second one for the silhouettes). After that, open a third render target where you accumulate lights: iterate through all lights and render your shadows, then compose it with the diffuse RT while rendering the final result into the back buffer.
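
Something like this flow in MonoGame, with my own placeholder names for the targets and effects (diffuseRt, silhouetteRt, lightRt, composeEffect):

// Pass 1: diffuse + silhouette into two render targets at once.
GraphicsDevice.SetRenderTargets(diffuseRt, silhouetteRt);
GraphicsDevice.Clear(Color.Transparent);
// ... draw the scene with an effect writing color to target 0 and the silhouette/alpha to target 1 ...

// Pass 2: accumulate lights (and their shadows) additively into a third render target.
GraphicsDevice.SetRenderTarget(lightRt);
GraphicsDevice.Clear(Color.Black);
spriteBatch.Begin(SpriteSortMode.Deferred, BlendState.Additive);
foreach (var light in lights)
{
    // ... draw this light's contribution, using silhouetteRt to produce its shadows ...
}
spriteBatch.End();

// Pass 3: compose diffuse * accumulated light into the back buffer.
GraphicsDevice.SetRenderTarget(null);
composeEffect.Parameters["LightMap"].SetValue(lightRt);
spriteBatch.Begin(SpriteSortMode.Deferred, BlendState.Opaque, null, null, null, composeEffect);
spriteBatch.Draw(diffuseRt, Vector2.Zero, Color.White);
spriteBatch.End();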

If your objects are in different planes then this is still easy: render them as 3D billboards with a single plane as the shadow receiver. You can use general shadow mapping in this case, although I am sure you can simplify it greatly.

It might sound complex at first, but step by step I am sure you will get to the desired result - feel free to ask.

Could you elaborate on “render silhouette”? You are aware that objects can have multiple shadows and shadows coming from one light can still be lit up by other lights, right?

Ofc, the silhouette is going to be their alpha, and you will use that data to render shadows as you go through all your light sources. There is a chance that you might save that RT by clearing the diffuse to 0,0,0,0 and using its alpha channel for that purpose, but it depends on what you are actually rendering and what exactly you are trying to achieve. Think of that silhouette RT as the simplified scene depth you would use in a full 3D case when doing shadow mapping. Also, if you decide to have objects in different planes, it will help you with more complex shadows later. Most of your simplification comes from the fact that you don’t need to ray march as in the case of per-pixel 2D shadows:

https://devmatt.files.wordpress.com/2013/04/screen-shot-2013-04-03-at-10-01-50-am.png?w=334 (that requires knowledge of polar mapping and needs quite a bit of optimization and it will still always be relatively expensive)

And you don’t need to deal with the projection from light space to camera space as you would with shadow mapping. So you will be offsetting silhouettes according to the direction between a given light and a given pixel, and if you have attenuation (fall-off) for each light then you will include that as well.

This is not what I’m trying to achieve; I hope this is just some unrelated example.

Assuming the above, could you please explain to me exactly what information is in each color channel in the shadow map render target?

Also, I have no idea what you mean by “There is a chance that you might save that Rt by clearing Diffuse to 0,0,0,0”

I know you are not trying to achieve it; I just wanted to point out that you won’t have to ray march etc. as in that case - you are just offsetting said silhouettes. All you need for your drop shadows should be:

A single render target with the silhouettes of all objects that are going to drop shadows; very likely it will be just the opacity of their textures.

A list of all lights with all their parameters; I would say it will be at least: Position (Vec2 or Vec3), Attenuation (float), Color (Color).

A single parameter that tells you the distance between your objects and the plane onto which you are dropping your shadows.

I just woke up, so I hope I am not forgetting something.
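
For reference, those inputs could look something like this (the names and struct layout are just my assumptions):

// The per-light data suggested above, plus the shared silhouette target and plane distance.
public struct DropShadowLight
{
    public Vector3 Position;   // Vec2 works too if all your lights sit in the scene plane
    public float Attenuation;  // fall-off distance/strength
    public Color Color;
}

public class DropShadowInputs
{
    public DropShadowLight[] Lights;     // all static and carried lights
    public RenderTarget2D SilhouetteRt;  // opacity of every shadow-casting object
    public float ObjectToPlaneDistance;  // e.g. the "15" used in the offset code further down
}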

I’ll second this - if you just want drop shadows (no perspective), just render all entities in black, then offset that texture for each light and render.
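
Something along these lines, as a sketch (the entity/light fields and shadowOffsetScale are assumptions, not code from either project):

// Draw every entity tinted black, offset away from each light, then the entities themselves on top.
spriteBatch.Begin(SpriteSortMode.Deferred, BlendState.NonPremultiplied);
foreach (var light in lights)
{
    foreach (var entity in entities)
    {
        Vector2 away = entity.Position - light.Position;  // direction the shadow falls
        Vector2 offset = away * shadowOffsetScale;         // scale by the gap to the shadow plane
        spriteBatch.Draw(entity.Texture, entity.Position + offset, Color.Black * 0.5f);
    }
}
foreach (var entity in entities)
    spriteBatch.Draw(entity.Texture, entity.Position, Color.White);
spriteBatch.End();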

That sounds very interesting, guys. But I don’t know if this is possible, because the offset depends on the distance between the light and the entity. This is how I calculate the offset for each shadow:

var offset = shadowOffset;
var lightDirection = middlePosition - lightPosition; // vector from the light to the entity's centre
// Calculate shadow offsetX
if (lightDirection.X == 0)
    offset.X = 0;
else
{
    // 15 = z distance from player to wall/surface the light hits
    float lightPositionZ = lightArgs.lightPosition.Z;
    // Project the direction onto the wall plane and take only the extra horizontal displacement.
    float directionX = (lightDirection.X * ((lightPositionZ + 15) / lightPositionZ)) - lightDirection.X;
    // Clamp the offset to shadowOffset.X while keeping the sign of the projected direction.
    offset.X = Math.Sign(directionX) * Math.Min(Math.Abs(directionX), offset.X);
}