[SOLVED] Shader issue with Intel/AMD IGPs

Hello,

I need help with my shader; for some reason it doesn’t work on Intel and AMD integrated GPUs.

To build my shader I started from the default MonoGame template.

Vertex Shader:

VSOutputPixelLightingTx VSBasicPixelLightingTx(VSInputNmTx vin) {
	VSOutputPixelLightingTx vout;

	CommonVSOutputPixelLighting cout = ComputeCommonVSOutputPixelLighting(vin.Position, vin.Normal, vin.Tangent, vin.Binormal);
	SetCommonVSOutputParamsPixelLighting;

	vout.Diffuse = float4(1, 1, 1, DiffuseColor.a);
	vout.TexCoord = vin.TexCoord;

	return vout;
}
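SetCommonVSOutputParamsPixelLighting is a helper macro from the template’s common include. From memory it expands to roughly the following (a sketch; cout.Tangent_ws and cout.Binormal_ws are my guesses for the names of the tangent-frame fields):

#define SetCommonVSOutputParamsPixelLighting \
	vout.PositionPS = cout.Pos_ps; \
	vout.PositionWS = float4(cout.Pos_ws, ComputeFogFactor(vin.Position)); \
	vout.NormalWS   = cout.Normal_ws; \
	vout.TangentWS  = cout.Tangent_ws; \
	vout.BinormalWS = cout.Binormal_ws;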

Pixel Shader:

float4 PSBasicPixelLightingTx(
	float4 PositionPS : SV_Position,
	float2 TexCoord   : TEXCOORD0,
	float4 PositionWS : TEXCOORD1,
	float3 NormalWS   : TEXCOORD2,
	float3 TangentWS  : TEXCOORD3,
	float3 BinormalWS : TEXCOORD4,
	float4 Diffuse    : COLOR0
	) : SV_Target0 {

	float4 color = SAMPLE_TEXTURE(Texture, TexCoord) * Diffuse; 
	float3 eyeVector = normalize(EyePosition - PositionWS.xyz);

	clip(color.a - .001);
		
	float4 bump = SAMPLE_TEXTURE(NormalMap, TexCoord) - float4(0.5, 0.5, 0.5, 0);

	float3 bumpNormal = NormalWS + (bump.x * TangentWS + bump.y * BinormalWS);
	bumpNormal = normalize(bumpNormal);

	ColorPair lightResult = ComputeLights(eyeVector, bumpNormal, 3);

	color.rgb *= lightResult.Diffuse;

	AddSpecular(color, lightResult.Specular);
	ApplyFog(color, PositionWS.w);

	return color;
}
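AddSpecular and ApplyFog are the stock template helpers; if I remember them right they are roughly:

// Stock helpers (from memory; may differ slightly from the template):
void AddSpecular(inout float4 color, float3 specular) {
	color.rgb += specular * color.a;
}

void ApplyFog(inout float4 color, float fogFactor) {
	color.rgb = lerp(color.rgb, FogColor * color.a, fogFactor);
}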

The VSOutputPixelLightingTx struct:

struct VSOutputPixelLightingTx {
	float4 PositionPS : SV_Position;
	float2 TexCoord   : TEXCOORD0;
	float4 PositionWS : TEXCOORD1;
	float3 NormalWS   : TEXCOORD2;
	float3 TangentWS  : TEXCOORD3;
	float3 BinormalWS : TEXCOORD4;
	float4 Diffuse    : COLOR0;
};
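The matching vertex input struct, which is where the tangent frame comes in, looks roughly like this (a sketch; the exact semantics in my ZIP may differ):

struct VSInputNmTx {
	float4 Position : SV_Position;
	float3 Normal   : NORMAL0;
	float3 Tangent  : TANGENT0;
	float3 Binormal : BINORMAL0;
	float2 TexCoord : TEXCOORD0;
};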

I use vs_3_0 and ps_3_0 to compile them.
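The technique block is the standard one (the technique and pass names here are just illustrative):

technique PixelLightingTx {
	pass P0 {
		VertexShader = compile vs_3_0 VSBasicPixelLightingTx();
		PixelShader  = compile ps_3_0 PSBasicPixelLightingTx();
	}
}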

Result on NVIDIA and AMD dedicated cards: (screenshot)

Result on Intel and AMD integrated chips: (screenshot)

Over the last two days I have tried every solution I could find online, without success.
I think I am doing something wrong…

You can download my complete shader here: https://drive.google.com/file/d/0BzptdjKIFeflV0g4SVBNY3JJRVk/view?usp=sharing

Isn’t it because there are too many instructions for the shader model (which one is it…?) on these cards?

Do you mean IGPs have a lower maximum instruction count per shader?

If I try to compile with vs_2_0 and ps_2_0, the content pipeline tells me I am above the SM 2.0 instruction limit (ps_2_0 only guarantees 64 arithmetic + 32 texture instructions, versus a minimum of 512 slots for ps_3_0). That is why I moved to 3.0, but now it only works on dedicated graphics cards.

I am actually running my tests on an Intel HD Graphics 3000 (DirectX 10.1, OpenGL 3.3, OpenGL ES 3.0, and Shader Model 4.1).

OK, I cleaned up my files and removed everything I do not use in the shader.

I have done more tests.

Here is the new pixel shader:

float4 PSBasicPixelLightingTx(VSOutputPixelLightingTx pin) : SV_Target0 {
	// Texture
	float4 color = SAMPLE_TEXTURE(  Texture, pin.TexCoord) * pin.Diffuse; 
	clip(color.a - .001);
	// Normal
	float4 bump  = SAMPLE_TEXTURE(NormalMap, pin.TexCoord) - float4(0.5, 0.5, 0.5, 0);
	float3 bumpNormal = pin.NormalWS + (bump.x * pin.TangentWS + bump.y * pin.BinormalWS);	
	bumpNormal = normalize(bumpNormal);	
	// Lights	
	float3 eyeVector = normalize(EyePosition - pin.PositionWS.xyz);
	ColorPair lightResult = ComputeLights(eyeVector, bumpNormal, 3);
	// Apply Lights
	color.rgb *= lightResult.Diffuse;
	color.rgb += lightResult.Specular * color.a;
	// return
	return color;
}

I tried returning the color at each step; everything works as expected except the lights.

ComputeLights returns a black pixel, but only on the integrated graphics cards.
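For example, short-circuiting the shader to show only the lighting term (debug sketch):

	// Debug output: visualize just the diffuse lighting term.
	// Dedicated cards show the expected shading here; the
	// Intel HD 3000 outputs solid black.
	return float4(lightResult.Diffuse, 1);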

ColorPair ComputeLights(float3 eyeVector, float3 worldNormal, uniform int numLights) {
	float3x3 lightDirections = 0;
	float3x3 lightDiffuse = 0;
	float3x3 lightSpecular = 0;
	float3x3 halfVectors = 0;
	
	// Gather the three directional lights into rows of 3x3 matrices so
	// one mul can evaluate the N·L / N·H terms for all lights at once.
	[unroll]
	for (int i = 0; i < numLights; i++) {
		lightDirections[i] = float3x3(DirLight0Direction,     DirLight1Direction,     DirLight2Direction)    [i];
		lightDiffuse[i]    = float3x3(DirLight0DiffuseColor,  DirLight1DiffuseColor,  DirLight2DiffuseColor) [i];
		lightSpecular[i]   = float3x3(DirLight0SpecularColor, DirLight1SpecularColor, DirLight2SpecularColor)[i];
		
		halfVectors[i] = normalize(eyeVector - lightDirections[i]);
	}

	// dotL / dotH hold N·L and N·H for the three lights in one vector each.
	float3 dotL = mul(-lightDirections, worldNormal);
	float3 dotH = mul(halfVectors, worldNormal);
	
	// Mask out lights that face away from the surface.
	float3 zeroL = step(float3(0, 0, 0), dotL);

	float3 diffuse  = zeroL * dotL;
	float3 specular = pow(max(dotH, 0) * zeroL, SpecularPower);

	ColorPair result;
	
	result.Specular = mul(specular, lightSpecular) * SpecularColor;
	
	// Celshading -> result.Diffuse
	float3 intensity = diffuse;	
	// X
	if 		(intensity.x > 0.10)	intensity.x = 1.0;
	else if (intensity.x > 0.05)	intensity.x = 0.8;
	else							intensity.x = 0.5;
	// Y
	if 		(intensity.y > 0.10)	intensity.y = 1.0;
	else if (intensity.y > 0.05)	intensity.y = 0.8;
	else							intensity.y = 0.5;
	// Z
	if 		(intensity.z > 0.10)	intensity.z = 1.0;
	else if (intensity.z > 0.05)	intensity.z = 0.8;
	else							intensity.z = 0.5;
	// 60% celshading
	intensity = (mul(intensity,  lightDiffuse)*.6) + (mul(diffuse,  lightDiffuse)*.4);	
	result.Diffuse  = intensity  * DiffuseColor.rgb + EmissiveColor;
	//
	return result;
}
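Incidentally, the three if/else chains can also be written branchless with step(), which some older IGP compilers may handle better (a sketch; it only differs when a component equals a threshold exactly):

// Quantize each diffuse component to the same three bands (0.5 / 0.8 / 1.0):
float3 intensity = 0.5
                 + 0.3 * step(0.05, diffuse)
                 + 0.2 * step(0.10, diffuse);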

The only modification made here is applying cel shading to result.Diffuse,
but I also tried replacing it with the original ComputeLights() from MonoGame’s GitHub; same result.
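For comparison, the stock version computes the diffuse term without any banding, roughly:

result.Diffuse = mul(diffuse, lightDiffuse) * DiffuseColor.rgb + EmissiveColor;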

That makes me think something is wrong in the arguments I pass to ComputeLights, or there is a problem with MonoGame on the Intel HD Graphics 3000. (I use MG 3.5.)

The shader as a ZIP file: https://drive.google.com/file/d/0BzptdjKIFeflV0g4SVBNY3JJRVk/view?usp=sharing

I finally found it!

I just forgot to set “Generate Tangent Frames” to true in the MonoGame pipeline…

For whatever reason it works anyway on most NVIDIA cards but causes issues on Intel and AMD cards…
Practically a week of work just for this… I feel dumb…
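In hindsight, a small guard in the pixel shader would have made the missing tangent data show up as “no normal mapping” instead of black pixels (a sketch, not what is in the ZIP):

	// If the tangent channels are missing they typically come through as
	// zero, so skip the bump math then and fall back to the vertex normal.
	float3 bumpNormal = pin.NormalWS;
	if (dot(pin.TangentWS, pin.TangentWS) > 1e-4)
		bumpNormal += bump.x * pin.TangentWS + bump.y * pin.BinormalWS;
	bumpNormal = normalize(bumpNormal);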


Something to save in a cheat sheet :wink: