yeah I did that and they don't show, no idea why
I mean it's nice and all to get 1000 fps, but I'd still like to see some scenery
The model itself is pretty distorted anyway. I've looked through some earlier threads, and it may be because it's an old format.
I don't have any 3D animation software right now, but it would be nice to have some animations / models in newer formats which we know work in other engines.
I am using "GGX", if that is the term you want to describe it with. In your post there is a link to Brian Karis' overview of different terms and approximations, some of which I am using (edit: that link only points to the UE4 course, here is the actual overview: http://graphicrants.blogspot.de/2013/08/specular-brdf-reference.html?m=1 ). I'm on mobile, so I don't remember exactly which ones.
This variant in the blog uses some precomputation into a texture, so in theory it should be faster but less accurate.
You can plug his code into the shaders though and see how it turns out.
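For reference, this is what the standard GGX / Trowbridge-Reitz distribution term from Karis' list looks like, written here as a plain C++ sketch rather than a copy of my shader (the function name and the roughness-to-alpha remapping are just the common conventions):

```cpp
// GGX / Trowbridge-Reitz normal distribution term, as listed in Karis'
// "Specular BRDF Reference". NoH = saturate(dot(N, H)); alpha = roughness^2
// is the usual remapping from perceptual roughness.
float D_GGX(float NoH, float roughness)
{
    const float PI = 3.14159265f;
    float alpha  = roughness * roughness;
    float alpha2 = alpha * alpha;
    float denom  = NoH * NoH * (alpha2 - 1.0f) + 1.0f;
    return alpha2 / (PI * denom * denom);
}
```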
In the engine you would have to press 'k' and it will spawn a ray at your mouse position, going out from the camera.
It's basically ray tracing: you spawn a ray at the current pixel (you don't need to trace the ray from the camera to the pixel, since we already know that's where it's going to hit) and then we calculate the reflection from said pixel.
If we have the camera-to-pixel vector and the normal vector of the pixel, we can simply calculate its reflection.
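(For completeness: with v being the camera-to-pixel direction and n the pixel normal, that reflection is just r = v - 2(v·n)n, which is what HLSL's reflect() computes.)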
In the image above you can see this reflection from the side.
Then we step a certain distance and check in our depth map whether our ray is "behind" something. If it is, then yay: we don't have to trace any further and can return the pixel we are currently examining. In the picture you can see the violet lines, which mark the distance between where the ray currently is and the depth we measured at that position in screen space.
If not, we continue and check the next position.
Of course, once we hit something we don't have to continue following our ray; it only keeps going in the picture above for debugging purposes.
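If it helps, here is a rough C++-style sketch of that loop. It is not the actual shader, and the projection and depth-lookup functions are just stand-ins for whatever the engine provides:

```cpp
#include <functional>
#include <optional>

struct float2 { float x, y; };
struct float3 { float x, y, z; };

static float3 operator+(float3 a, float3 b) { return {a.x + b.x, a.y + b.y, a.z + b.z}; }
static float3 operator-(float3 a, float3 b) { return {a.x - b.x, a.y - b.y, a.z - b.z}; }
static float3 operator*(float3 a, float s)  { return {a.x * s, a.y * s, a.z * s}; }
static float  dot(float3 a, float3 b)       { return a.x * b.x + a.y * b.y + a.z * b.z; }

// r = v - 2 * dot(v, n) * n, with v the camera-to-pixel direction and n the normal.
static float3 Reflect(float3 v, float3 n) { return v - n * (2.0f * dot(v, n)); }

// The marching loop described above, as a CPU-side sketch rather than shader code.
// 'project' maps a view-space position to its screen UV plus its depth along the
// view axis; 'sampleDepth' reads the depth buffer at that UV. Both are stand-ins.
// Returns the UV of the hit, if any.
static std::optional<float2> MarchReflection(
    float3 pixelPosVS,   // view-space position of the pixel we are shading
    float3 viewDirVS,    // normalized camera-to-pixel direction
    float3 normalVS,     // normalized surface normal at that pixel
    const std::function<void(float3, float2&, float&)>& project,
    const std::function<float(float2)>& sampleDepth,
    int maxSteps = 64, float stepSize = 0.1f)
{
    float3 rayDir = Reflect(viewDirVS, normalVS);
    float3 rayPos = pixelPosVS;

    for (int i = 0; i < maxSteps; ++i)
    {
        rayPos = rayPos + rayDir * stepSize;  // step a certain distance

        float2 uv;
        float  rayDepth;
        project(rayPos, uv, rayDepth);        // where is the ray in screen space?

        float sceneDepth = sampleDepth(uv);   // what the depth buffer says there

        // The violet lines in the picture: the gap between the ray's own depth
        // and the depth stored at the same screen position. Once the ray ends
        // up behind the stored surface we have our hit and can stop marching.
        if (rayDepth > sceneDepth)
            return uv;
    }
    return std::nullopt;                      // nothing hit on screen
}
```

A real implementation usually adds a thickness test and some refinement around the hit; the sketch above only shows the basic march.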
No, not at all. I scale the effect based on roughness, that's it. I don't even consider Fresnel. So it's basically on the level of Crysis 2 SSR, maybe a bit more accurate and expensive.
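Something along these lines, purely as an illustration of what "scale the effect based on roughness" could mean (the cutoff value is made up, not the one in the engine):

```cpp
#include <algorithm>

// Purely illustrative: fade the SSR contribution out with roughness and skip
// any Fresnel weighting.
float SsrWeight(float roughness)
{
    // Full strength on mirror-like surfaces, nothing left by roughness 0.8.
    return 1.0f - std::min(roughness / 0.8f, 1.0f);
}
```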
The problem with Monogame is that I can't write to individual mip levels of textures, so the good ole' cone tracing is not really viable. I could do stochastic sampling and recombine with temporal reprojection, but honestly, reimplementing stuff that has already been done is only interesting to a certain degree.
Spending a day implementing ray marching is pretty nice, but spending some weeks to make it good is not what I'm looking for right now.
That said, I have no real idea how much my algorithm deviates from existing ones; it may be that the approach differs in key details. I just went ahead and implemented it the way I thought it would work.
It's interesting to see that he doesn't credit Playdead's Mikkel Gjøl, or their team in general, on the main readme page, but then basically uses their SSR code and acknowledges it in the SSR shader itself (with the MIT licence in there).