Is it possible to get a texture's width and height in pixels from within the shader?

Say I have a shader and I load a texture into register t0.

Texture2D Texture : register(t0);

Surely the card knows the actual pixel size of the texture.
So is it possible to get that in the pixel shader, instead of passing the texture's size to a shader variable myself?

refractionEffect.Parameters["TextureSize"].SetValue(texture.Bounds.Size.ToVector2());

The intent is to get the reciprocal of the texture's width and height so I can step across the texture precisely, texel by texel.
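For reference, the shader side of what I'm doing now looks roughly like this (a minimal sketch; `TextureSize` is just the name I'm using for the parameter set from C#):

```hlsl
// Set from the C# side via effect.Parameters["TextureSize"].SetValue(...)
float2 TextureSize;

// One texel step in texture-coordinate (0..1) space.
// Adding inc.x to a u coordinate moves exactly one pixel to the right.
float2 TexelSize()
{
    return float2(1.0f / TextureSize.x, 1.0f / TextureSize.y);
}
```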

So I'm wondering if it's possible to access the texture size in the shader.

If anyone knows how to get the actual screen resolution from within the shader, from the projection matrix or some other way, that would also be great. I don't like the idea of passing a ton of stuff into the GPU that I know it already has.
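One partial answer I'm aware of (hedged, since it depends on the shader model): in SM4+ the `SV_Position` input to a pixel shader already holds window-space pixel coordinates, so you can at least know *where* on screen you are without passing anything in. The total resolution itself still has to come in as a parameter as far as I know. A sketch:

```hlsl
// SV_Position in a pixel shader input is in pixels, not 0..1 coordinates.
// (On shader model 3 the equivalent is the VPOS semantic.)
float4 PsShowPixelPos(float4 position : SV_Position) : COLOR0
{
    float2 pixelPos = position.xy; // pixel-center coordinates of this fragment
    // Without knowing the render-target size you can't normalize this,
    // which is why the resolution usually still gets passed in.
    return float4(frac(pixelPos * 0.01f), 0.0f, 1.0f);
}
```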

Hmm, would it be recoverable from the view or projection matrix?

You could probably get the viewport bounds out of the projection matrix, but I'm not sure how. More specifically, when using a BasicEffect I'm not sure how to even get at the projection matrix, but I'm guessing the GPU at least knows the texture size intrinsically.
So I'm wondering if there is an HLSL command to return it, or its reciprocal.

What I'm trying to do at the moment is a box blur, and stuff like this keeps coming up.

float4 FuncBoxBlur(float2 texCoord, float halfRange)
{
    int range = halfRange; // truncate to an integer sample radius
    float2 inc = float2(1.0f / TextureSize.x, 1.0f / TextureSize.y); // one texel step
    float4 col = float4(0.0f, 0.0f, 0.0f, 1.0f);
    float total = 0.0f;
    for (int x = -range; x <= range; x++)
    {
        for (int y = -range; y <= range; y++)
        {
            // offset is in whole texels, scaled into texture-coordinate space
            col += tex2D(TextureSampler, texCoord + float2(x, y) * inc);
            total += 1.0f;
        }
    }
    col /= total; // average over the full (2 * range + 1)^2 box
    col.a = 1.0f;
    return col;
}

// where CycleTime is a variable coming into the shader, ranging from 0 to 1.
float4 PsBoxBlur(float4 position : SV_Position, float4 color : COLOR0, float2 texCoord : TEXCOORD0) : COLOR0
{
    float t = saturate(CycleTime) * 4.0f + 1.0f;
    float4 col = FuncBoxBlur(texCoord, t);
    return col;
}

I also just realized that the compiler is unrolling the steps of the loop; if I let the range run unbounded and don't saturate CycleTime, this fails to compile. Guess for loops are pretty tricky for the GPU.
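For what it's worth, HLSL has loop attributes that control this behavior. A small sketch (hedged; `count`, `sum`, and `inc` here are just illustrative names, and `[loop]` needs a shader model with real flow control, e.g. ps_3_0+):

```hlsl
// [unroll] forces the compiler to flatten the loop into straight-line code,
// which requires the trip count to be determinable at compile time.
// [loop] asks for an actual dynamic loop instead, so a runtime-driven
// bound can work where unrolling would fail to compile.
[loop]
for (int i = 0; i < count; i++)
{
    sum += tex2D(TextureSampler, texCoord + float2(i, 0) * inc);
}
```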

There's Texture.GetDimensions for the texture size. It's shader model 4.0+, though. https://docs.microsoft.com/en-us/windows/win32/direct3dhlsl/dx-graphics-hlsl-to-getdimensions

No idea about the screen resolution.

Yeah, this doesn't appear to work on OpenGL.

float w;
float h;
Texture.GetDimensions(w, h);
float2 inc = float2(1.0f / w, 1.0f / h);

No syntax errors, but on compile it says it cannot map the expression to the pixel shader instruction set.
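In case it helps, there's also a fuller overload that takes a mip level and returns the mip count (hedged; I haven't tried whether this fares any better on the GL backend):

```hlsl
// GetDimensions(MipLevel, Width, Height, NumberOfLevels):
// mip level 0 gives the full-size width/height.
uint w, h, levels;
Texture.GetDimensions(0, w, h, levels);
float2 inc = float2(1.0f / w, 1.0f / h);
```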

Yeah, for loops can be a problem lol

The GPU also has access to the depth buffer, but (in DX9, to my limited knowledge) you can't get at it in HLSL. So there may not be an intrinsic way to do it.

I am on holiday in Holland at the moment or I would look it up in my HLSL reference book lol