Intel ASSAO?

Has anybody been able to use Intel’s ASSAO implementation with XNA/MonoGame?

I’m trying to replace my current SSAO implementation. Intel ASSAO seems to be very flexible, but I’m having a hard time even understanding what the C++ part is doing (I haven’t touched C++ in 7 years, and DX11 has never been my friend).


I had a quick look at it; the C++ is pretty easy.

To understand it, just look for every call to FullscreenPassDraw and you should be able to get your head around what it is doing.

You will find a hell of a lot of them though :smiley:
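
In MonoGame terms, each of those calls boils down to the same pattern: bind a render target, bind one or two input textures, run a single pixel shader over a fullscreen quad. A minimal sketch of that pattern, purely for orientation (the class, effect and parameter names here are placeholders of mine, nothing from the ASSAO source):

```cs
using Microsoft.Xna.Framework;
using Microsoft.Xna.Framework.Graphics;

public class FullscreenPassHelper
{
    // Built once; positions are already in clip space, so the vertex
    // shader only has to pass them through.
    private readonly VertexPositionTexture[] _quad =
    {
        new VertexPositionTexture(new Vector3(-1,  1, 0), new Vector2(0, 0)),
        new VertexPositionTexture(new Vector3( 1,  1, 0), new Vector2(1, 0)),
        new VertexPositionTexture(new Vector3(-1, -1, 0), new Vector2(0, 1)),
        new VertexPositionTexture(new Vector3( 1, -1, 0), new Vector2(1, 1)),
    };

    // Rough equivalent of one FullscreenPassDraw call: run `effect`
    // over the whole of `target`, reading from `input`.
    public void Draw(GraphicsDevice device, RenderTarget2D target,
                     Effect effect, Texture2D input)
    {
        device.SetRenderTarget(target);
        effect.Parameters["InputTexture"].SetValue(input);
        effect.CurrentTechnique.Passes[0].Apply();
        device.DrawUserPrimitives(PrimitiveType.TriangleStrip, _quad, 0, 2);
    }
}
```

Once you can read the C++ as a list of those passes, the rest is mostly bookkeeping about which texture feeds which pass.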

To be honest, though, I would look at something else. Intel’s own performance figures for ASSAO are scary.

Over 4 milliseconds per frame if you are running at 4K with everything turned on; that’s just mad.

Over 1 millisecond at a reasonable resolution and quality, which is still very slow.

When you factor in the memory usage and the fact that it uses over 20 pixel shaders…

I would sit back and look at your renderer and see what information you already have available.

Have you got a normal map available?
Do you have a full-screen depth buffer available?

Then think about which graphics cards you are targeting. Yes, it makes a difference; sadly, HDAO and nVidia don’t get along very well :confused:

Once you know what you have available, it is easier to decide which of the many SSAO techniques works for you.
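
If you do have both, the CPU side of a basic hemisphere SSAO really isn’t much code; the interesting part lives in the pixel shader. Just as a sketch, assuming a deferred setup with a normal target and a linear depth target (the parameter names are made up, not from any particular implementation):

```cs
using System;
using Microsoft.Xna.Framework;
using Microsoft.Xna.Framework.Graphics;

public static class SsaoSetup
{
    // Random sample offsets inside the +Z hemisphere (tangent space),
    // biased towards the centre so nearby occluders count more.
    public static Vector3[] BuildSampleKernel(int sampleCount, int seed = 1234)
    {
        var rng = new Random(seed);
        var kernel = new Vector3[sampleCount];
        for (int i = 0; i < sampleCount; i++)
        {
            var v = new Vector3(
                (float)(rng.NextDouble() * 2.0 - 1.0),
                (float)(rng.NextDouble() * 2.0 - 1.0),
                (float)rng.NextDouble());
            v.Normalize();

            float scale = i / (float)sampleCount;
            kernel[i] = v * MathHelper.Lerp(0.1f, 1.0f, scale * scale);
        }
        return kernel;
    }

    // The shader would reconstruct view-space position from the linear
    // depth and test each kernel sample against the depth buffer.
    public static void Bind(Effect ssaoEffect, Texture2D normalTarget,
                            Texture2D linearDepthTarget, Vector3[] kernel)
    {
        ssaoEffect.Parameters["NormalMap"].SetValue(normalTarget);
        ssaoEffect.Parameters["DepthMap"].SetValue(linearDepthTarget);
        ssaoEffect.Parameters["SampleKernel"].SetValue(kernel);
        ssaoEffect.Parameters["SampleCount"].SetValue(kernel.Length);
    }
}
```

The sample count is the obvious speed/quality knob: halving it roughly halves the cost of the AO pass.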

Thanks for the answer, Stainless.

I’m using a heavily modified kosmonaut’s engine, which is deferred, so the normals and the (linear) screen depth are available. Unfortunately I don’t know enough to roll my own SSAO and I’m short on time, so porting or “vampirizing” an existing SSAO is my only option. Do you know of anything like this as an alternative to the Intel ASSAO?

I’m using the SSAO that came with kosmonaut’s engine, but the time impact is huge (we’re talking about 4 ms @ 1920x1080 on a 1050 Ti, which is 33% of the total render time), so I’m looking for something scalable and faster. I’m not happy with the results either. The ASSAO is faster and looks better, at least on my rig.

Another thing I don’t understand is why the SSAO takes longer in some scenes; it should be approximately constant, but it varies ~20–30% depending on the scene. Maybe it’s Nvidia Nsight doing stupid things, though.

I’ve spent the whole morning looking at the Intel ASSAO and trying to understand it. I understand most of it, but there are some things I can’t figure out at all (it was definitely not created for newbies to modify and port). I’ll spend a few more hours on it, but I doubt I’ll be able to port it to MonoGame (I hate porting things I don’t fully understand). At least this experience will make me enjoy C# coding even more. I can’t believe I lived with that C++/DX crap for 15 years :slight_smile:

Look at this for a brief overview of the most common techniques.

And I would advise you to consider porting this rather than ASSAO.

I am going to be looking at this myself soon; if I get around to it in a reasonable amount of time, I will share what I do with you.

Hi Stainless, thanks for the answer.

I have some doubts about gl_ssao and my project. It’s hard to compare because I haven’t been able to compile it yet, but looking at the published numbers (and considering a Quadro M6000 is roughly twice as powerful as my 1050 Ti), the time spent at 1080p seems about the same as the Intel ASSAO.

Yes, the quality is a lot better (no arguing about that), but I’m more concerned about speed than quality (that’s why the Intel ASSAO is so attractive: it has several quality profiles to choose from).

The best choice would be allowing the user to choose between both, but that would be twice the work in a field I’m not comfortable with : (

I don’t know what decision I’ll make yet (porting one or the other, or keeping the current one with a further downscale of resolution), but I’m not in a hurry, so I’ll move on to another issue for a while and tackle the SSAO later.
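
If I go down the downscale route, this is roughly what I have in mind: render the AO into a half-resolution target and upsample it with bilinear filtering when it gets combined. A sketch with placeholder names from my own setup, nothing from either library:

```cs
using Microsoft.Xna.Framework;
using Microsoft.Xna.Framework.Graphics;

public static class HalfResAo
{
    // Created once (e.g. at load time), sized from the back buffer.
    public static RenderTarget2D CreateTarget(GraphicsDevice device)
    {
        return new RenderTarget2D(device,
            device.PresentationParameters.BackBufferWidth / 2,
            device.PresentationParameters.BackBufferHeight / 2,
            false, SurfaceFormat.Single, DepthFormat.None);
    }

    // Upsample the half-res AO into the full-res target; LinearClamp
    // hides most of the resolution loss since AO is fairly low frequency.
    public static void Upsample(GraphicsDevice device, SpriteBatch spriteBatch,
                                RenderTarget2D halfResAo, RenderTarget2D fullResAo)
    {
        device.SetRenderTarget(fullResAo);
        spriteBatch.Begin(SpriteSortMode.Immediate, BlendState.Opaque,
                          SamplerState.LinearClamp, DepthStencilState.None,
                          RasterizerState.CullNone);
        spriteBatch.Draw(halfResAo, fullResAo.Bounds, Color.White);
        spriteBatch.End();
    }
}
```

The usual price is some bleeding around depth discontinuities, so it will depend on how visible that is in my scenes.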

If you do port gl_ssao in the end and feel like sharing, it’d be great :slight_smile: