After playing Dead Island Riptide and UT4, I was wondering how the chromatic aberration was made. It seems to be several things working together:
the current image, shifted on the blue and red channels
the amount of shift depending on the depth of the pixel, or on discontinuities?
I would say the naive method does not use depth, maybe just a Sobel filter. In UT4 it is clearly linked to depth.
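To make the Sobel guess concrete: the idea would be to compute a per-pixel edge magnitude and use it to scale the channel shift, so only the edges of shapes get fringed. A minimal sketch of the edge-magnitude part in plain Python (the tiny grayscale test image is made up for illustration, nothing here is taken from either game):

```python
import math

def sobel_magnitude(img):
    """Per-pixel gradient magnitude of a grayscale image (list of rows).

    Border pixels are left at 0 for simplicity.
    """
    h, w = len(img), len(img[0])
    out = [[0.0] * w for _ in range(h)]
    for y in range(1, h - 1):
        for x in range(1, w - 1):
            # Horizontal and vertical 3x3 Sobel kernels.
            gx = (img[y-1][x+1] + 2*img[y][x+1] + img[y+1][x+1]
                  - img[y-1][x-1] - 2*img[y][x-1] - img[y+1][x-1])
            gy = (img[y+1][x-1] + 2*img[y+1][x] + img[y+1][x+1]
                  - img[y-1][x-1] - 2*img[y-1][x] - img[y-1][x+1])
            out[y][x] = math.hypot(gx, gy)
    return out

# A vertical edge between a dark and a bright region:
img = [[0, 0, 1, 1]] * 4
mag = sobel_magnitude(img)  # large values only along the edge
```

The magnitude (optionally normalized) would then multiply the channel offset, leaving flat areas untouched.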
I’m just trying to figure it out by reverse engineering.
Does anyone have an idea?
Could you give an example image of the effect?
I have a basic one in my engine; maybe the one in Riptide is different, but usually you just scale the texture coordinates for one channel, so it’s like a color shift in the direction away from the middle.
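In shader terms that is just one extra texture fetch per channel with slightly scaled UVs. A rough CPU-side sketch of the same idea in Python (the channel images, scale factors and sampling helper are all made up for illustration; a real version would live in a fragment shader):

```python
def sample(img, u, v):
    """Nearest-neighbour sample of a channel image at normalized, clamped UVs."""
    h, w = len(img), len(img[0])
    x = min(w - 1, max(0, int(u * w)))
    y = min(h - 1, max(0, int(v * h)))
    return img[y][x]

def chromatic_aberration(r, g, b, u, v, strength=0.02):
    """Shift red inward and blue outward relative to the image center (0.5, 0.5)."""
    cu, cv = u - 0.5, v - 0.5  # offset from center
    red   = sample(r, 0.5 + cu * (1 - strength), 0.5 + cv * (1 - strength))
    green = sample(g, u, v)    # green stays put
    blue  = sample(b, 0.5 + cu * (1 + strength), 0.5 + cv * (1 + strength))
    return red, green, blue

# Horizontal gradient as a stand-in for all three channels:
img = [[x for x in range(10)] for _ in range(10)]
```

At the center the three channels sample the same texel; towards the border the red and blue fetches drift apart, which is the fringing you see.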
As you can see here (I did not take this screenshot myself; the game is installed on my other computer, which is more powerful than this one):
http://i.picpar.com/9qpb.png
It is really subtle, and does not seem to be only a shift based on the distance from the center. On the bottom and bottom left it seems to be blurred with a sort of DOF effect. Sort of…
After some reading here and there, it seems to give a nice render when using a fresnel term/refraction shader.
It is exactly that: the shift grows the further it gets from the center. You can additionally modulate it depending on luma, but that may be a problem if the AA is not good.
Since you posted an Unreal Engine screenshot, here is the official doc about it.
You can see it more clearly there.
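The luma modulation mentioned above boils down to scaling the per-pixel shift by perceived brightness, so dark areas fringe less. A tiny sketch, assuming Rec. 709 luma coefficients (that particular choice of coefficients is my assumption, not something from the UE doc):

```python
def luma_weight(r, g, b):
    """Rec. 709 luma of an RGB triple in [0, 1], usable as a shift multiplier."""
    return 0.2126 * r + 0.7152 * g + 0.0722 * b

# Bright pixels get the full shift, dark pixels much less:
shift_base = 0.02
bright = shift_base * luma_weight(1.0, 1.0, 1.0)  # full strength
dark   = shift_base * luma_weight(0.1, 0.1, 0.1)  # 10% strength
```

The AA caveat is that luma changes sharply on aliased edges, so the modulation can flicker there.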
Hum… I thought it was more accurate than this, shifting not everything but only the edges of shapes. Why didn’t I think of checking the UE docs ^^ Maybe too tired. Nonetheless, thanks.
In the end it uses the same effect I use in my lens flare. Almost all the work is already done for me, then.
I always find it intriguing that game developers spend so much effort trying to simulate the same thing that photographers and cinematographers spend so much effort trying to avoid.
You know how Alien and Star Wars made sci-fi much more interesting and realistic when they made it dirty?
A perfect image looks uncanny, if you want to go for photo-realism or at least realistic rendering you’d want to account for some camera deficiencies, since that is “photo”-realistic as of today.
Even if you take a current high-end DSLR with a $3000 lens, you will have CAs and barrel distortion if the image is not post-processed heavily.
But that’s the thing - if you have product photography it’s often hard to tell whether the image is actually “real” or CGI. With cars and electronics I’m often surprised to learn about the lengths photographers went to produce certain shots, when I just assumed it was a 3d render.
That’s not something that is desirable for most games. And since most of their assets and environments still lack detail (we don’t have endless polygons, decal textures and artist time), at least some “dirt”/unclean imagery like chromatic aberration and vignette effects goes a long way in making it more believable to the eye.
A common argument against these post-processes is that “the eye is not a camera”, which of course is incorrect, but fair up to a point: we shouldn’t introduce additional artifacts when our eyes are pretty faulty already. I think that in the end, when these effects are applied carefully, the image looks more life-like, and that’s what matters.
With VR interestingly a lot of effort is going into rendering the sides of the image in a lower resolution for performance reasons, since our eyes lose a lot of accuracy to the sides anyways.
In the same vein, effects like motion blur etc. are “artifacts”, and interestingly many in cinema believe in high-framerate movies, which are, after all, just a brute-force attempt at minimizing said artifacts. Food for thought, really.
I am always happy with 23.976/24FPS… and 48FPS videos… they just feel, sweeter…
As photorealism has “always”/most of the time been the goal in 3D, it is normal to try to reproduce the imperfections of lenses.
But I agree this is not an effect the human eye encounters.
I don’t like this effect, but as I am making an engine and releasing it, I want to provide some trendy effects.
Ooh, cool! Is it just post-processing or a complete rendering solution?
I didn’t say it was a bad thing. Just intriguing. I’ve used various effects in some games I’ve been involved with, such as vignette, noise and lens flares.
A complete solution, I hope; that supposes a lot of work, and it is. I’m ~60% done.
I’m wondering if I’ll implement advanced editors like a particle editor, node-based scripting like UE’s, etc.
Post-processes so far: DOF, bloom, HDR, chromatic aberration, vignette.
I’m currently working on lighting/shadowing particles.
But there will be no networking and no AI; it is up to the developer to program those. No time for that, and it is not required to render something, after all.
@Alkher Would be nice to see some screenshots of the engine