Sample currently set RenderTarget2D

Thanks, I’ll keep that in mind of course.

If you need help removing distortion from the “shadows”, let me know; all you should need is a projection along the ray onto the target plane.

Not sure what you mean by that.

Is this what you mean by distortion?

Interesting. Well, see how that edge is skewed? I suppose this will also remove that artifact. Just think of it as the full 3D case: you calculate the distance between the planes along a ray cast from the source to the further plane. This was mainly to show how to handle the general render-target approach for this case.
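Roughly what I mean, sketched in plain Python just to check the numbers (the function name and the values are mine): cast a ray from the light through a point on the caster plane and intersect it with the background plane.

```python
def project_to_plane(light, point, plane_z):
    """Project 'point' away from 'light' onto the plane z = plane_z,
    following the ray from the light through the point."""
    lx, ly, lz = light
    px, py, pz = point
    # Parametric ray: light + s * (point - light); solve for z = plane_z.
    s = (plane_z - lz) / (pz - lz)
    return (lx + s * (px - lx), ly + s * (py - ly), plane_z)

# Light at z = 10, caster point on the plane z = 5, background plane at z = 0.
shadow = project_to_plane((0.0, 0.0, 10.0), (1.0, 1.0, 5.0), 0.0)
```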

This is how the offset is currently calculated. Could you write what you had in mind in pseudocode or something, please?

float2 lightVec = lightPosSS.xy - PSIn.TexCoord.xy;
float2 lightDir = normalize(lightVec);
float2 offset = (lightDir * planeDistance) / viewport;
// We sample at the offset coordinates and get a simplified shadow term
float shadowTerm = tex2D(Sampler, PSIn.TexCoord + offset).a;

Alright, I am not sure I will get to it today, but I might.

Much appreciated

Are you sure? The size of each vertex is going to be quite large: position: 12 bytes, light position: 12 bytes, texture coordinates: 8 bytes, light color: 4 bytes, light strength + decay: 8 bytes = 44 bytes per vertex. All of that gets passed from the CPU to the GPU, to the vertex shader, and on to the pixel shader. I’m not sure what the performance consequences are.
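For reference, here is that byte count sketched with Python’s struct module (the field order and packing are my assumption; '<' means no padding):

```python
import struct

# position (3 floats) + light position (3 floats) + texcoords (2 floats)
# + packed color (4 bytes) + strength and decay (2 floats)
fmt = "<3f 3f 2f 4B 2f"
vertex_size = struct.calcsize(fmt)
```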

At this count… absolutely, ridiculously none. You are going to pass those parameters to the GPU in any case.

Yes, you’re right. But what about vertex shader to pixel shader? The vertex count is going to be very small, but the number of pixels processed could be tremendous.

You might want to do some research for yourself on that.

I actually tried batching lights in a previous solution a while ago. As I remember it, GPU usage went up and the vertex struct started generating garbage when building draws on the CPU; I don’t recall a significant CPU boost. Besides, by the point where the number of draw calls from lights starts to throttle the CPU, the GPU has long since been set on fire, unless the lights are very small.

Also, it is recommended here that the amount of data passed to the pixel shader be kept to a minimum, at least that’s how I interpreted it (Pack variables and Interpolants):


I have to run for about an hour; then I will show you the underlying math / pictures for this. I believe it will be more helpful than just sharing code.

As far as performance goes: you can also batch lights by doing several light passes per draw call (say, four, by feeding the shader with arrays).

Anyway, some advice (ignore this if you are working on an Android game, though): the performance of modern computers is insane, and perfect optimization costs a ridiculous amount of time. Especially if you are a single developer working on a 2D title, there is no point investing too much time into optimizing every draw call. Obviously you have to avoid doing absolutely stupid stuff, but don’t worry about doing everything in the most efficient way. A lot of optimizations can be done later. Don’t burn yourself out over every single detail; when you are not sure, just profile your game and then decide whether the performance cost is worrisome.


If you can’t read my handwriting, for which I absolutely won’t blame you, I did a bit of googling and found:
Which will get you to the same result while being readable…
The optimization for our case comes from the fact that the normal of our plane is (0, 0, 1), so the dot product is simply the z coordinate of the given vector, and that we can choose our point on the plane to be directly under the light (so the vector from that point to the light is (0, 0, Lpos.z)).
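To see the simplification numerically, here is the general ray–plane parameter s = dot(n, w) / dot(n, v) next to the z-only version it collapses to when n = (0, 0, 1) (Python sketch; the specific values are made up):

```python
def dot(a, b):
    return sum(x * y for x, y in zip(a, b))

normal = (0.0, 0.0, 1.0)       # plane normal
light = (3.0, 4.0, 10.0)
w = (0.0, 0.0, -light[2])      # light -> point on the plane directly below it
v = (2.0, 1.0, -10.0)          # some ray direction from the light

# General ray-plane parameter vs. the simplified z-only version:
s_general = dot(normal, w) / dot(normal, v)
s_simple = w[2] / v[2]
```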

And an important thing: it is better to do this in world space and then get screen-space coords for the sampling point. Attenuation will also become much less clunky, so it will be just
saturate(1.0f - length(lightVec) / attenuation)

-w = (0, 0, Lpos.z), so the last expression doesn’t make sense, because only the z component will be non-zero.

So I had to guess your intentions as this, where 15 is the plane distance.

float2 lightDirection = lightPosition.xy - pixelPosition;
float2 offset = lightDirection * (15 / lightPosition.z) * screenMultiplier;

Is that correct?

My intentions are correct; that’s why w.z is used, which is the z component of w.

Ah, I see. I thought the “.” was something else.

Sorry about the misunderstanding.

Here is the pixel shader:

    float4 OmniLightPS(VertexToPixel PSIn) : COLOR0
    {
        // We calculate in world space.
        // Simple transform into world space (where one pixel is one world unit);
        // this is our current point on the background plane.
        float3 pointWS;
        pointWS.xy = PSIn.TexCoord.xy * viewport;
        pointWS.z = -planeDistance;

        // Vector between light and point
        float3 lightVec = lightPos - pointWS;
        // Vector from the light to the plane we are projecting onto. I leave this as a
        // whole float3 so it's obvious what it was in the equation; reduce it to a float.
        float3 w = float3(0, 0, -lightPos.z);

        // lightVec goes from point to light, so the whole thing is * -1, which removes the negative signs.
        float s = w.z / lightVec.z;
        float3 samplePoint = lightPos + s * lightVec;
        // Sample the texture; don't forget to bring the sample point back to screen space.
        float shadowTerm = tex2D(Sampler, samplePoint.xy / viewport).a; // samplePoint.z is 0, as that was the plane we were targeting
        // Calculate attenuation
        float intensity = saturate(1.0f - length(lightVec) / attenuation); // sorry about the typo in attenuation earlier

        return float4(PSIn.Color.rgb * intensity * (1 - shadowTerm), 1);
    }
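A quick CPU-side re-check of the shader math in Python (the function name and the numbers are mine): whatever the inputs, the projected sample point should land on z = 0.

```python
def sample_point(light, texcoord, viewport, plane_distance):
    """Mirror of the shader math: project the current background-plane pixel
    along the ray through the light onto the plane z = 0."""
    px = texcoord[0] * viewport[0]
    py = texcoord[1] * viewport[1]
    pz = -plane_distance
    light_vec = (light[0] - px, light[1] - py, light[2] - pz)
    s = -light[2] / light_vec[2]
    return (light[0] + s * light_vec[0],
            light[1] + s * light_vec[1],
            light[2] + s * light_vec[2])

pt = sample_point((100.0, 100.0, 20.0), (0.5, 0.5), (800.0, 600.0), 15.0)
```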

Okay, thanks. So the expression I wrote before is correct, right?

No, the offset itself is (-light.z / lightVector.z) * lightVector.
lightVector is the whole vector, not just the direction (“direction” implies a normalized vector).
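A quick numeric check of that form (values are made up): adding the offset to the light position lands exactly on the z = 0 plane.

```python
light = (50.0, 60.0, 25.0)
point = (10.0, 20.0, -15.0)  # pixel on the background plane
light_vec = tuple(l - p for l, p in zip(light, point))  # point -> light
s = -light[2] / light_vec[2]
offset = tuple(s * c for c in light_vec)
sample = tuple(l + o for l, o in zip(light, offset))
```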

I’m calculating the offset from the destination pixel, not the light position; sorry, I forgot to mention that.