Does anyone know how to implement importance sampling for a cubemap?

Does anyone know how to implement importance sampling for a cubemap? MonoGame does not support geometry shaders at the moment.

Here is a reference, but it requires a geometry shader.

You don’t need geometry shaders for that; just render a fullscreen quad (per cubemap face) with a pixel shader that uses 1024 samples per texel. I implemented importance sampling successfully almost a year ago :slight_smile:, here’s a shot:
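To make “importance sampling in a pixel shader” concrete: the usual scheme (GGX importance sampling driven by a Hammersley sequence, as described in the split-sum resources linked further down the thread) picks sample directions like this. This is a sketch of the math in Python purely for illustration, not the poster’s actual shader:

```python
import math

def hammersley(i, n):
    """2D low-discrepancy point i of n (van der Corput radical inverse, base 2)."""
    bits = 0
    x = i
    for _ in range(32):
        bits = (bits << 1) | (x & 1)
        x >>= 1
    return i / n, bits / 2.0**32

def cross(a, b):
    return (a[1]*b[2] - a[2]*b[1], a[2]*b[0] - a[0]*b[2], a[0]*b[1] - a[1]*b[0])

def normalize(v):
    length = math.sqrt(sum(c * c for c in v))
    return tuple(c / length for c in v)

def importance_sample_ggx(xi, roughness, n):
    """Half-vector around unit normal `n`, distributed according to the GGX NDF."""
    a = roughness * roughness
    phi = 2.0 * math.pi * xi[0]
    cos_theta = math.sqrt((1.0 - xi[1]) / (1.0 + (a * a - 1.0) * xi[1]))
    sin_theta = math.sqrt(1.0 - cos_theta * cos_theta)
    # Sample direction in tangent space (z = up).
    h = (sin_theta * math.cos(phi), sin_theta * math.sin(phi), cos_theta)
    # Build an orthonormal basis around n, then rotate h into world space.
    up = (0.0, 0.0, 1.0) if abs(n[2]) < 0.999 else (1.0, 0.0, 0.0)
    tx = normalize(cross(up, n))
    ty = cross(n, tx)
    return tuple(h[0] * tx[k] + h[1] * ty[k] + h[2] * n[k] for k in range(3))
```

The pixel shader then averages environment-map samples taken along directions reflected about these half-vectors, weighted by N·L; the course notes linked below give the full weighting.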


Would you mind sharing your source code?

Well, maybe in the near future. I don’t know just yet; I’d have to check my code for any mistakes, add some documentation and then post it on GitHub. :slight_smile:

The basic principle of prefiltering cubemaps using MonoGame/XNA is (in four very simplified steps):

  1. For each cubemap face, generate n 2D render targets, where each target is half the size of the previous one and n is the number of mipmap levels your cubemap should have.
    The mipmap chain length is very easy to calculate:
    int mipCount = (int)Math.Log(cubeMap.Size, 2) + 1;

  2. Now set every texture of the mipmap chain on the GraphicsDevice in sequence, from largest to smallest, and perform importance filtering (links to some very useful resources are posted below).
    This step must be done six times, as there are six cubemap faces (six mipmap chains).

  3. Create a new RenderTargetCube and copy the data from every mipmap texture into it.
    This step is also done six times, just like step 2, except for the creation of the RenderTargetCube (you need only one).

  4. Save your complete cubemap to file.
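The bookkeeping in step 1 is easy to sanity-check on its own. Here is a small sketch (Python, purely for illustration) of the mipCount formula above together with the halving rule:

```python
import math

def mip_chain(size):
    """Render-target sizes for one cubemap face, mirroring
    int mipCount = (int)Math.Log(cubeMap.Size, 2) + 1;"""
    mip_count = int(math.log2(size)) + 1
    return [size >> level for level in range(mip_count)]

# For a 512-pixel face this yields 10 render targets: 512, 256, ..., 2, 1.
```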

Now comes the tough part, where you need a basic understanding of computer graphics.
My importance-sampling prefilterer uses PBR (Physically Based Rendering).
In short, Physically Based Rendering makes rendered objects look like they do in real life by modeling physical phenomena: light diffusion and reflection, material properties, the Fresnel term, etc.
To properly prefilter your cubemap you need to take those phenomena into account.
This topic is too vast to explain in this short reply.
If you need more help just let me know, I’d be glad to answer your questions. :slight_smile:

Presentation on this topic (with some images):

And here with more code and implementation details:

And the same code as in the second link, put together by Bart Wronski:

How do you write to the mipmap data of the cubemap?

In XNA there are two methods:
TextureCube.SetData(CubeMapFace cubeMapFace, int level, Nullable<Rectangle> rect, T[] data, int startIndex, int elementCount) (ref: https://msdn.microsoft.com/en-us/library/ff434428.aspx) and
TextureCube.GetData(CubeMapFace cubeMapFace, int level, Nullable<Rectangle> rect, T[] data, int startIndex, int elementCount) (ref: https://msdn.microsoft.com/en-us/library/bb197085.aspx).
There you can specify the mipmap level and copy the mipmap texture data to the cubemap; it’s very easy. :slight_smile:
BTW: the second method is missing from the original MonoGame source, but it’s pretty straightforward to implement.

EDIT: Updated the post, because the method TextureCube.GetData(…) is missing, not TextureCube.SetData(…), my mistake. :slight_smile:

Ah, OK. I thought something had changed, since I knew there was no manual mipmapping in MonoGame.

Unlucky for me, since I can’t compile and use the MonoGame source or use any newer MonoGame versions anymore (there is a thread about SharpDX CreatePixelShader crashes; I have the same problem).

thanks for your reply :slight_smile:

Can that be added as an issue so we don’t forget it?

You mean the geometry shader?

Added an issue. This method is a really trivial task to add, but I’m not so familiar with GitHub pull requests, so maybe someone else will pick it up. :smile:

No, it’s about TextureCube.GetData(…). :slight_smile:

Hi, can you share how you perform importance sampling in 2D quad space? I can’t get the right coordinates.

Just create four vertices with positions in the range [-1, 1] and UV coordinates in the range [0, 1] (screen-space coordinates).
Here’s a great article about screen-aligned quads:
http://www.john-chapman.net/content.php?id=6
:slight_smile:
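A minimal sketch of that quad data (the triangle-strip winding here is an assumption; match it to your own CullMode and primitive type):

```python
def fullscreen_quad():
    """Four (position, uv) vertices for a screen-aligned quad, triangle-strip order.
    Positions are in NDC [-1, 1]; UVs are in [0, 1] with (0, 0) at the top-left."""
    return [
        ((-1.0,  1.0), (0.0, 0.0)),  # top-left
        (( 1.0,  1.0), (1.0, 0.0)),  # top-right
        ((-1.0, -1.0), (0.0, 1.0)),  # bottom-left
        (( 1.0, -1.0), (1.0, 1.0)),  # bottom-right
    ]
```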

Tried it, but got nothing.

VShader

Out.Normal = input.Normal;
or this?
Out.Normal = normalize(mul(input.Normal, (float3x3)(World)).xyz);

PShader

float3 importanceSampled = ImportanceSample( input.Normal);
irradianceSampledColor.xyz = importanceSampled.xyz;

Here is my shader (I’m including only a snippet with some comments) for rendering a face quad:

// Calculates a normal vector from the specified texture coordinates and cube face.
float3 UVIndexToCubeCoord(float2 uv, uint face)
{
    float3 n = 0; // This is a normal vector.
    float3 t = 0; // This is a tangent vector.

    if (face == 0)
    {
        n = float3(1, 0, 0);
        t = float3(0, 1, 0);
    }
    else if (face == 1)
    {
        n = float3(-1, 0, 0);
        t = float3(0, 1, 0);
    }
    else if (face == 2)
    {
        n = float3(0, -1, 0);
        t = float3(0, 0, -1);
    }
    else if (face == 3)
    {
        n = float3(0, 1, 0);
        t = float3(0, 0, 1);
    }
    else if (face == 4)
    {
        n = float3(0, 0, -1);
        t = float3(0, 1, 0);
    }
    else
    {
        n = float3(0, 0, 1);
        t = float3(0, 1, 0);
    }

    // Calculate a binormal vector.
    float3 b = cross(n, t);

    // Convert the texture coordinates from [0, 1] to [-1, 1] range.
    uv = uv * 2 - 1;

    // Calculate a new normal vector for this pixel (current texture coordinates).
    n = n + t * uv.y + b * uv.x;
    n.y = -n.y;
    n.z = -n.z;

    return n;
}

struct VSInput
{
    float3 Position : POSITION0; // SV_POSITION is only valid as a pixel-shader input; vertex input uses POSITION0.
    float2 TexCoord : TEXCOORD0;
};

struct VSOutput
{
    float4 Position : SV_POSITION;
    float3 Normal : TEXCOORD0;
};

VSOutput VSMain(VSInput input)
{
    VSOutput output = (VSOutput)0;

    // Pass the vertex position with no transformations.
    output.Position = float4(input.Position, 1);

    // Calculate a normal vector needed for Importance Sampling.
    // CurrentFace is a parameter that is being set from the outside of the shader. 
    // It is just an index of the current face we want to prefilter.
    output.Normal = UVIndexToCubeCoord(input.TexCoord, CurrentFace);

    return output;
}

float4 PSMain(VSOutput input) : COLOR0
{
    // Normalize the normal vector on a per-pixel basis.
    float3 normal = normalize(input.Normal);

    // Calculate the roughness value the way you want.
    // In my case I am just using: Roughness = CurrentMip / MipChainLength.
    float roughness = saturate(CurrentMipLevel / log2(EnvMapSize));

    // Perform the filtering.
    return float4(PrefilterEnvMap(roughness, normal), 1); // Put here your prefiltering function with those variables.
}
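If you want to debug the face math on the CPU, here is a direct Python port of the UVIndexToCubeCoord function above (for illustration only; note the direction it returns is unnormalized, which is why PSMain normalizes per pixel). At the centre of a face (uv = (0.5, 0.5)) it collapses to a single signed axis:

```python
FACES = [  # (normal, tangent) per cube face, same order as in the shader.
    (( 1, 0, 0), (0, 1, 0)),
    ((-1, 0, 0), (0, 1, 0)),
    (( 0,-1, 0), (0, 0,-1)),
    (( 0, 1, 0), (0, 0, 1)),
    (( 0, 0,-1), (0, 1, 0)),
    (( 0, 0, 1), (0, 1, 0)),
]

def cross(a, b):
    return (a[1]*b[2] - a[2]*b[1], a[2]*b[0] - a[0]*b[2], a[0]*b[1] - a[1]*b[0])

def uv_index_to_cube_coord(uv, face):
    """Python port of the HLSL UVIndexToCubeCoord (returns an unnormalized direction)."""
    n, t = FACES[face]
    b = cross(n, t)                      # binormal
    u, v = uv[0] * 2 - 1, uv[1] * 2 - 1  # [0, 1] -> [-1, 1]
    d = tuple(n[k] + t[k] * v + b[k] * u for k in range(3))
    return (d[0], -d[1], -d[2])          # flip Y and Z as in the shader
```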

I got your UVIndexToCubeCoord working, but what if I have a spherical map like this? How do I calculate that?