Deferred Engine Playground - download

Loome Animator is nothing special; it's my own project. I ported the code from a few different XNA applications.

//Only skinned model is visible ->
Please uncomment and add the other Entities in MainLogic.cs. I commented them out to increase my framerate :slight_smile:

Please let me know if you have any more queries!

Thanks,
Bala

Yeah, I did that and they don’t show, no idea why.

I mean it’s nice and all to get 1000 fps but I’d still like to see some scenery :stuck_out_tongue:

The model itself is pretty distorted anyway. I’ve looked through some earlier threads, and it may be because it’s an old format.
I don’t have any 3D animation software right now, but it would be nice to have some animation / model in a newer format that we know works in other engines.

Maybe try this: can you copy the content folder from your working version to this folder and let the dude folder remain?

After that, Clean and Build the content. Let’s see, it should work. :slight_smile:

I don’t have any other animated model right now either. Let me check. :smile:

Hi @kosmonautgames

Please change this in the DrawSkinned_VertexShader function in Gbuffer.fx:

// Skin the position with the blended bone matrix
float4 skinPos = mul(input.Position, skinTransform);
// Rotate normal, tangent and binormal; the (float3x3) cast drops the
// translation part of the bone matrix (and silences the implicit-truncation warning)
float3 skinNormal = mul(input.Normal, (float3x3)skinTransform);
float3 skinTangent = mul(input.Tangent, (float3x3)skinTransform);
float3 skinBinormal = mul(input.Binormal, (float3x3)skinTransform);

// Project to clip space and apply the temporal AA jitter offset
Output.Position = mul(skinPos, WorldViewProj) + float4(TemporalDisplacement, 0, 0);
// Build the tangent-to-world basis for normal mapping
Output.WorldToTangentSpace[0] = mul(normalize(float4(skinTangent, 0)), World).xyz;
Output.WorldToTangentSpace[1] = mul(normalize(float4(skinBinormal, 0)), World).xyz;
Output.WorldToTangentSpace[2] = mul(normalize(float4(skinNormal, 0)), World).xyz;

Thanks,
Bala

Hi, here is an optimized GGX lighting approach. But compared to yours, which one is the better technique?

http://www.filmicworlds.com/2014/04/21/optimizing-ggx-shaders-with-dotlh/

I am using “GGX”, if that is the term you want to describe it with. In your post there is a link to Brian Karis’s overview of different terms and approximations, some of which I am using (edit: that link is only to the UE4 course; here is the real thing: http://graphicrants.blogspot.de/2013/08/specular-brdf-reference.html?m=1 ). I’m on mobile, so I don’t remember exactly which ones.

The variant in the blog uses some precomputation into a texture, so in theory it should be faster but less accurate.

You can plug his code into the shaders though and see how it turns out.
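For context, the kind of shortcut that article trades accuracy for is replacing the `pow(1 - dot(L, H), 5)` term in Schlick’s fresnel with a single `exp2`. Here is a small Python sketch; the constants are the spherical-Gaussian approximation from Karis’s course notes, not from this engine’s shaders:

```python
def fresnel_schlick(f0: float, ldoth: float) -> float:
    """Exact Schlick fresnel, evaluated with the half-vector dot product."""
    return f0 + (1.0 - f0) * (1.0 - ldoth) ** 5

def fresnel_schlick_fast(f0: float, ldoth: float) -> float:
    """Cheaper variant: replaces pow() with a single exp2.
    exp2((-5.55473 * x - 6.98316) * x) ~= (1 - x)^5 for x in [0, 1]."""
    return f0 + (1.0 - f0) * 2.0 ** ((-5.55473 * ldoth - 6.98316) * ldoth)

# The two stay within about 0.003 of each other over the whole range:
for i in range(11):
    x = i / 10.0
    assert abs(fresnel_schlick(0.04, x) - fresnel_schlick_fast(0.04, x)) < 0.01
```

On GPUs a `pow` expands to `log`/`mul`/`exp`, so collapsing it to one `exp2` saves a couple of instructions per pixel at a small accuracy cost.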

I went ahead and reimplemented Screen space reflections.

They are calculated in full resolution, so super expensive obviously.

Also added a slight vignette and adjustable chromatic aberration.


I fixed the temporal antialiasing stability issue! Sort of…


This looks awesome! I would like to know what the framerate is on your side.

One more question: why is the directional light not illuminating the walls and other props on Sponza, only the floor?

Thanks and Regards,
Bala

150 fps, 100 with emissive objects in the scene

it does

Great! Thanks. I am waiting for the skinned animation update. :slight_smile:

Screenspace reflections are looking really nice. Could you explain in short, how the algorithm works?

Yes, in fact, I have replicated the basic algorithm with the same data in a C# function so I could debug it.

In the engine you would press “k” and it will spawn a ray at your mouse position, going out from the camera.

It’s basically ray tracing: we spawn a ray at the current pixel (we don’t need to trace the ray from the camera to the pixel, since we already know that’s where it’s going to hit) and then we calculate the reflection from said pixel.

If we have the camera-to-pixel vector and the normal vector of the pixel we can simply calculate its reflection.
In the image above you can see this reflection from the side.
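In code, that reflection is the standard reflect formula, the same thing HLSL’s `reflect` intrinsic computes. A minimal plain-Python sketch, not the engine’s shader code:

```python
def reflect(v, n):
    """Reflect incident vector v off the surface normal n (n must be unit length):
    r = v - 2 * dot(v, n) * n."""
    d = 2.0 * sum(vi * ni for vi, ni in zip(v, n))
    return tuple(vi - d * ni for vi, ni in zip(v, n))

# A camera-to-pixel ray hitting a floor (normal pointing up) bounces upward:
assert reflect((1.0, -1.0, 0.0), (0.0, 1.0, 0.0)) == (1.0, 1.0, 0.0)
```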

Then we step a certain distance and check in our depth map whether our ray is “behind” something. If it is, then yay, we don’t have to trace any further; we can return the pixel we are currently examining. In the picture you can see the violet lines, which mark the distance between where the ray currently is and the depth we measured at that position in screen space.
If not, we continue and check the next position.

Of course once we hit something we don’t have to continue following the ray; that’s just for debugging purposes in the picture above.
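The stepping loop described above can be sketched like this. It’s a toy 1D version in Python rather than the shader: screen positions are indices into a depth buffer, the ray carries its own depth, and the names and `thickness` threshold are made up for illustration:

```python
def march_ray(depth_buffer, start_x, start_depth, dx, ddepth,
              max_steps=64, thickness=0.5):
    """Step along the ray; at each step compare the ray's depth to the depth map.
    If the ray has gone just behind the stored surface (within `thickness`),
    report a hit at that screen position."""
    x, depth = float(start_x), start_depth
    for _ in range(max_steps):
        x += dx
        depth += ddepth
        ix = int(x)
        if ix < 0 or ix >= len(depth_buffer):
            return None                      # ray left the screen: no hit
        scene_depth = depth_buffer[ix]
        if scene_depth <= depth <= scene_depth + thickness:
            return ix                        # ray is just behind the surface: hit
    return None

# A "wall" at x = 5 with depth 3.0; everything else is far away:
buf = [100.0] * 10
buf[5] = 3.0
assert march_ray(buf, start_x=0, start_depth=0.0, dx=1.0, ddepth=0.6) == 5
```

The `thickness` band is what keeps a ray that passes far behind an object from registering a false hit; real implementations tune it per scene.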


And how do you apply screenspace reflections to a model only and not the whole screen? Is it using the material id?

I apply it to the whole screen

Ah OK, I thought it was only applied to the dragon. Thank you for the explanation.

Is this the physically based SSR?

No, not at all. I scale the effect based on roughness, that’s it. I don’t even consider fresnel. So it’s basically on the level of the Crysis 2 SSR, maybe a bit more accurate and expensive.
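That roughness scaling amounts to a simple fade between the scene color and the reflection. A hypothetical sketch of the idea; the linear weight is my guess, not the engine’s actual code:

```python
def blend_ssr(scene_color, reflection_color, roughness):
    """Fade the reflection out as the surface gets rougher: a plain lerp
    with no fresnel term, matching the Crysis-2-level approach described above."""
    strength = max(0.0, 1.0 - roughness)
    return tuple(s + (r - s) * strength
                 for s, r in zip(scene_color, reflection_color))

# A mirror (roughness 0) shows the reflection; a fully rough surface shows the scene:
assert blend_ssr((0.2, 0.2, 0.2), (1.0, 0.0, 0.0), 0.0) == (1.0, 0.0, 0.0)
assert blend_ssr((0.2, 0.2, 0.2), (1.0, 0.0, 0.0), 1.0) == (0.2, 0.2, 0.2)
```

A physically based version would additionally weight `strength` by a fresnel term and blur the reflection with roughness instead of just fading it.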

The problem with MonoGame is that I can’t write to individual mip levels of textures, therefore the good ole’ cone tracing is not really viable. I could do stochastic sampling and recombine with temporal reprojection, but honestly, reimplementing stuff that has already been done is only interesting to a certain degree.

Spending a day implementing ray marching is pretty nice, but spending some weeks to make it good is not what I’m looking for right now.

That said, I have no real idea how much my algorithm deviates from existing ones; it may be that the approach is different in key details. I just went ahead and implemented it the way I thought it would work.

Here is an open-source implementation for Unity. Maybe you can take a look at the shader to improve your SSR algorithm.

It’s interesting to see that he doesn’t credit Playdead’s Mikkel Gjøl, or their team in general, on the main readme page, but then has basically their SSR code used and acknowledged in the SSR shader itself (with the MIT licence in there).