Yeah, if this was for 3D, 24 dynamic lights at the same time is not going to be realistic performance-wise, I think, in any scenario for most people's computers, maybe not even the best ones.
2D is another matter though, and maybe this is not so unrealistic in that case.
Now, I have never tried this, and this is off the top of my head.
But thinking about it, this should technically be feasible in 2D, as way less data is needed.
This would not in any way simulate height for these lights and shadows though.
It would be god-awful complicated, mainly for the indexing, but I think doable.
If you were to create a single texture as a shadow map for all the lights, in a somewhat unorthodox way, and then draw the scene to it once, I think I can see a way to achieve it.
But it would involve some mind-numbing math logic, due to the way pixel shaders work.
Let's say you have an array of lights (really just the positions of those lights) passed to a shader.
Let's say you also have a render target (or a pair acting in tandem) to be used as the depth buffer for all lights.
Let each pixel row in this depth render target represent a light.
Let each pixel column in that row represent an angle around that light.
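To pin the layout down, here is roughly what I imagine the shader-side declarations looking like; every name and number here is made up for illustration, not from any real engine:

```hlsl
// Hypothetical layout constants for the combined shadow map.
#define MAX_LIGHTS   24     // texture height: one row per light
#define SHADOW_WIDTH 1024   // texture width: one column per angle step

float2 LightPositions[MAX_LIGHTS]; // world-space 2D positions of the lights
int    NumLights;                  // how many of those rows are actually in use
```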
As you draw each object to this depth texture:
You calculate the distance from the pixel position to the light position.
You calculate the normalized direction from the current pixel to the light position.
You calculate the angle with atan2 from that direction.
You normalize that radian value to a range of 0 to 1 for texture indexing purposes (for the column draw validation later).
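In shader form that per-pixel math would be something like this little helper (hedged sketch, all names are my own placeholders):

```hlsl
float2 LightPositions[24]; // hypothetical light array, as above

// Returns (distance, angle normalized to 0..1) from a drawn pixel to one light.
float2 DistanceAndAngle(float2 worldPos, int lightIndex)
{
    float2 toLight = LightPositions[lightIndex] - worldPos;
    float  dist    = length(toLight);                 // pixel-to-light distance
    float2 dir     = normalize(toLight);              // normalized direction
    float  rad     = atan2(dir.y, dir.x);             // radians, -PI to PI
    float  angle01 = (rad + 3.14159265) / 6.28318531; // remapped to 0..1
    return float2(dist, angle01);
}
```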
At this point you need to ensure your writes to the texture are extremely selective.
You need to do the following...
Make sure row 1 only tests against light index 1 and ignores everything else.
Column 1 only writes if the above is true and the angle returned by atan2 matches that column's predefined angle (mapping angles from 0 to 360, normalized to 0 to 1, onto column indices), etc.
That match has to be within some exact tolerance, which is determined by the width of the texture passed in, for precision.
Then, and only then, you write the distance to that pixel, and only if it is less than the distance already at that position; if not, you keep the distance that is already there.
Which of course means you would have to flip-flop two render targets, or feed the same render target back into your shader as a texture while it is also set as the render target. I have never tried the latter and it's probably not possible; the first option is possible, though I still don't know how feasible it is in this case, since 100 render target sets in one frame is a hell of a lot.
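Putting the selective-write rules together, a sketch of the whole depth-pass pixel shader might look roughly like this. Again, everything here is assumed naming; `PreviousDepth` stands for the other target of the flip-flopped pair, bound as a texture so the already-written distances can be read back:

```hlsl
#define MAX_LIGHTS   24
#define SHADOW_WIDTH 1024
#define PI           3.14159265

float2    LightPositions[MAX_LIGHTS];
sampler2D PreviousDepth; // the other target of the pair, holding earlier writes

struct VSOutput
{
    float4 Position : POSITION0; // consumed by the rasterizer
    float2 WorldPos : TEXCOORD0; // occluder pixel's world position (copied in the VS)
    float2 TargetUV : TEXCOORD1; // this fragment's 0..1 position in the target
};

float4 DepthPassPS(VSOutput input) : COLOR0
{
    // Row selects the light, column selects the angle slot.
    int   row      = (int)floor(input.TargetUV.y * MAX_LIGHTS);
    float colAngle = input.TargetUV.x; // this column's predefined angle, 0..1

    // Distance and normalized angle from this occluder pixel to that light.
    float2 toLight = LightPositions[row] - input.WorldPos;
    float  dist    = length(toLight);
    float  angle01 = (atan2(toLight.y, toLight.x) + PI) / (2.0 * PI);

    // Reject the fragment unless its angle matches this column's angle,
    // within a tolerance of half a column (set by the texture width).
    clip(0.5 / SHADOW_WIDTH - abs(angle01 - colAngle));

    // Keep the nearer of the new distance and the one already stored.
    float stored = tex2D(PreviousDepth, input.TargetUV).r;
    return float4(min(dist, stored), 0, 0, 1);
}
```

The clip is the "extremely selective" part: each fragment only survives in the one column whose angle it actually belongs to.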
This would then result in an output whereby, after drawing all objects in the scene, you would have a depth map where:
each row of the map equates to a specific light, and
each column equates to a specific angle around that light, for its shadow depth.
This means if you had 100 entities, you would need 100 draws for the depth map.
From there it would be another 100 draws to render the scene with depth.
For that you would need another shader, which becomes a bit simpler; the indexing is again the hardest part.
You pass in each drawn object's centroid, get the direction to the light position, normalize it, take the atan2 for column access, and fetch the depth for the shadow at this drawn pixel's position.
Do that for each light, sum the light intensity for all pixels not in shadow, multiply by the pixel's actual image color, and return that color from the pixel shader function.
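So the second shader might look roughly like this; same disclaimer, every name is a placeholder, and the small bias is just an assumption to dodge precision issues in the depth compare:

```hlsl
#define MAX_LIGHTS 24
#define PI         3.14159265

float2    LightPositions[MAX_LIGHTS];
float     LightIntensities[MAX_LIGHTS];
int       NumLights;
sampler2D ShadowMap;    // the depth map built above: rows = lights, columns = angles
sampler2D SceneTexture; // the object's actual image

float4 LightingPS(float2 worldPos : TEXCOORD0,
                  float2 texCoord : TEXCOORD1) : COLOR0
{
    float lit = 0;
    for (int i = 0; i < MAX_LIGHTS; i++)
    {
        if (i < NumLights)
        {
            float2 toLight = LightPositions[i] - worldPos;
            float  dist    = length(toLight);
            float  angle01 = (atan2(toLight.y, toLight.x) + PI) / (2.0 * PI);

            // Row i holds light i's depths; the angle picks the column.
            float2 shadowUV = float2(angle01, (i + 0.5) / MAX_LIGHTS);
            float  stored   = tex2D(ShadowMap, shadowUV).r;

            // Lit if nothing nearer to the light was recorded at this angle.
            if (dist <= stored + 0.001) // small bias, an assumption
                lit += LightIntensities[i];
        }
    }
    return tex2D(SceneTexture, texCoord) * float4(lit, lit, lit, 1);
}
```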
In both shaders, I think you would also need a vertex output struct that copies the actual pixel position into another variable and then passes that to the pixel shader.
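The reason for that copy is that the POSITION semantic written by the vertex shader cannot be read back in the pixel shader (at least not in the older shader models), so you duplicate it into a TEXCOORD. Something like this, with the matrix name being an assumption:

```hlsl
float4x4 WorldViewProjection; // hypothetical transform name

struct VSOutput
{
    float4 Position : POSITION0; // eaten by the rasterizer, not readable in the PS
    float2 WorldPos : TEXCOORD0; // the duplicate copy the pixel shader CAN read
};

VSOutput PassThroughVS(float4 pos : POSITION0)
{
    VSOutput o;
    o.Position = mul(pos, WorldViewProjection);
    o.WorldPos = pos.xy; // copy the actual position into the extra variable
    return o;
}
```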
This is all just theory though.