2D 360-degree depth buffer and image outline point list in sequence.

Ever since the discussion the other day about 2D shadowing, I have been trying to get the actual depth buffer to work for taking a real-time snapshot of a scene by placing it perpendicular to the camera.

It seems that while I can set a z bias on a card, no one ever thought of a y bias. Even if they had, I still don't think it would work.

To that end, I decided to write an algorithm that turns an image, even a complicated one, into a series of point lists in sequence that traverse the outer edges of the opaque parts of the image.
This way I can build up a z-depth vertex list that mirrors the image and then apply the shadow technique.
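
Roughly, the vertex data that implies looks something like this; an illustrative HLSL sketch, not the exact struct, where each outline vertex also carries the next point in the sequence as a partner (this shows up later in the shader as PositionPartner):

    struct VsInputOutline2D
    {
        float4 Position        : POSITION0; // this outline point
        float3 PositionPartner : NORMAL0;   // next sequenced point, packed into a spare semantic
    };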

It sounds a lot easier than it is, but I think I just cleared the most difficult parts, fingers crossed.

Anyway, I want to show off a hot dog GIF.

In the far top-left corner is an image I drew; the outlines are the program output, and the moving line visualizes the sequence order of the outline points. Having a sequenced list of points means tangents and normals can be generated as well, hence collision info.
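
For instance, a minimal sketch of that tangent and normal math, assuming a consistent winding order for the sequenced points (helper names are illustrative):

    // tangent at a point from its sequenced neighbours.
    float2 OutlineTangent(float2 prevPt, float2 nextPt)
    {
        return normalize(nextPt - prevPt);
    }
    // the normal is just the tangent rotated 90 degrees;
    // which perpendicular is "outward" depends on the winding.
    float2 OutlineNormal(float2 prevPt, float2 nextPt)
    {
        float2 t = normalize(nextPt - prevPt);
        return float2(t.y, -t.x);
    }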


As a byproduct, this algorithm can double as a replacement for per-pixel collision that is not only faster but far cheaper, and it can return far more information.
It's not quite there yet; the sequencing here isn't really complete and I still need to remove collinear points, but I'm over the hump and heading downhill now.
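
The collinear-point removal is at least simple math; a sketch of the test, with an assumed epsilon, that would run while building the list:

    // three sequenced points are collinear when the 2D cross product of
    // the two segments is near zero; the middle point b can then be dropped.
    bool IsCollinear(float2 a, float2 b, float2 c, float epsilon)
    {
        float cross2d = (b.x - a.x) * (c.y - b.y) - (b.y - a.y) * (c.x - b.x);
        return abs(cross2d) < epsilon;
    }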


OK, I just wanted to show this off a bit more; I've got a proof-of-concept test.

Getting a single-pass, 360-degree depth buffer algorithm made was hard as hell.
It isn't completely done, and I was sick for about a week so it took even longer.
But the hardest part is done.

The red square represents a light source.
The blue circle is a refined version of the previous algorithm.
The teal circle spinning around represents an image, or rather the polygonal outline of the image under motion in 2D; this is just to make things clearer.
The reddish-bluish bars moving from side to side represent the depth to the image in 2D as seen from the light: red means farther, blue means closer.

So basically this is a single-pass 360-degree depth buffer; only one pixel line is needed per light. When the scene is drawn again later, it can use that depth buffer for true 2D shadowing or lighting, since each light has its own 360-degree depth map of everything else.
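
As a rough sketch of that second pass (DepthStrip, LightPos2D, and SampleShadow are illustrative names, not the project's actual ones), each pixel maps its direction from the light back into the one-pixel strip with the same angle mapping the depth pass used, then compares distances:

    Texture2D DepthStrip;       // the 1-pixel-high depth strip written earlier
    SamplerState StripSampler;
    float2 LightPos2D;

    float SampleShadow(float2 pixelWorldPos)
    {
        float2 dir = pixelWorldPos - LightPos2D;
        // same atan2 mapping as the depth pass, remapped from [-1, 1] to [0, 1].
        float u = (atan2(dir.x, dir.y) / 3.14159265f) * 0.5f + 0.5f;
        float storedDist = DepthStrip.SampleLevel(StripSampler, float2(u, 0.5f), 0).r;
        // lit if nothing nearer was recorded at this angle (a small bias helps in practice).
        return (length(dir) <= storedDist) ? 1.0f : 0.0f;
    }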

I sort of wish I didn't need to generate an outline of an image to do it, but depth biasing won't work in this particular case. I also wanted all the lights to run in a single pass; I still have to add that, which may be a little more work, but probably not as difficult.

It took me about eight test-class attempts, eight shaders, I don't know how many thousands of typed lines, and tons of extra tests, primarily to get this:

    // simultaneously non intersecting: the plain angle-to-strip mapping, x lands in [-ResolutionX, ResolutionX].
    wrldPos.x = (atan2(worldDirection.x, worldDirection.y) / 3.14159265f) * ResolutionX;
    // simultaneously intersecting right vertice: an edge that straddles the atan2
    // seam above the light gets shifted a full period so it doesn't sweep the strip.
    wrldPos.x = wrldPos.x - (ResolutionX * 2.0f * isUp * isRight * isIntersecting);
    // simultaneously intersecting left vertice: the same fix for the other endpoint.
    wrldPos.x = wrldPos.x - (ResolutionX * 2.0f * isUp * isLeft * isIntersecting);

Really, this is very raw vertex shader code; it can be trimmed down more.

    VsOutputCalcSceneDepth2D output;

    float4 wrldlight = float4(WorldLightPosition, 1.0f);
    float4 wrldPos2 = mul(input.Position, World);
    float4 wrldPosPartner = mul(float4(input.PositionPartner, 1.0f), World);
    // 1 when this edge's endpoints straddle the light's x (the sign product goes negative).
    float isIntersecting = 1.0f - saturate(sign((wrldPos2.x - wrldlight.x) * (wrldPosPartner.x - wrldlight.x)));
    // 1 when this vertex is above the light, i.e. on the seam side of the atan2 mapping.
    float isUp = saturate(sign(wrldlight.y - wrldPos2.y));
    // 1 when this vertex is the right endpoint of its edge, 0 when it is the left.
    float isRight = saturate(sign(wrldPos2.x - wrldPosPartner.x));
    float isLeft = (1.0f - isRight);
    float2 worldDirection = wrldPos2.xy - wrldlight.xy;
    float dist = distance(wrldPos2.xy, wrldlight.xy);
    // build the strip-space position: y from the vertex's own z, z = the distance to the light.
    float4 wrldPos;
    wrldPos.y = abs(wrldPos2.z); // +200.0f;
    wrldPos.z = dist;
    wrldPos.w = 1.0f;
    
    // simultaneously non intersecting.
    wrldPos.x = (atan2(worldDirection.x, worldDirection.y) / 3.14159265f) * ResolutionX;
    // simultaneously intersecting right vertice.
    wrldPos.x = wrldPos.x - (ResolutionX * 2.0f * isUp * isRight * isIntersecting);
    // simultaneously intersecting left vertice.
    wrldPos.x = wrldPos.x - (ResolutionX * 2.0f * isUp * isLeft * isIntersecting);

    // normalize into the strip's render-target space.
    wrldPos = float4(wrldPos.x / (ResolutionX), wrldPos.y / (ResolutionY), wrldPos.z / (ResolutionY), 1.0f);
    output.Position = wrldPos;
    output.Position3D = wrldPos;
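
And for completeness, a guess at the matching pixel-shader end of this pass; the strip ends up holding the nearest occluder distance per angle, with the depth test doing the "nearest wins" part (the function name is illustrative):

    // write the normalized light-to-edge distance out to the strip.
    float4 PsCalcSceneDepth2D(VsOutputCalcSceneDepth2D input) : COLOR0
    {
        return float4(input.Position3D.z, 0.0f, 0.0f, 1.0f);
    }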

The orthographic camera used to draw the image polygons is placed in the screen plane at the bottom, at (0, viewport.Height, 0), facing (0, 0, 0); the polygons are then drawn exactly where the image destination rectangle would be.
