HLSL losing vertex elements?

I’m converting an XNA project to MonoGame, and I have just one more issue to sort! However, I’ve been poring over it for a few hours and cannot work out why it’s doing this.

I have a vertex structure defined as follows (note the s in Normals, to distinguish it from the built-in VertexPositionNormalTexture):

        public struct VertexPositionNormalsTexture : IVertexType
        {
            public Vector4 Position;
            public Vector3 Normal;
            public Vector3 Binormal;
            public Vector3 Tangent;
            public Vector2 TextureCoordinates;
            static VertexDeclaration vertexDeclaration;
            static VertexPositionNormalsTexture()
            {
                vertexDeclaration = new VertexDeclaration(
                    new VertexElement(0, VertexElementFormat.Vector4, VertexElementUsage.Position, 0),
                    new VertexElement(16, VertexElementFormat.Vector3, VertexElementUsage.Normal, 0),
                    new VertexElement(28, VertexElementFormat.Vector3, VertexElementUsage.Normal, 1),
                    new VertexElement(40, VertexElementFormat.Vector3, VertexElementUsage.Normal, 2),
                    new VertexElement(52, VertexElementFormat.Vector2, VertexElementUsage.TextureCoordinate, 0)
                    );
            }
            public VertexDeclaration VertexDeclaration { get { return vertexDeclaration; } }
            public VertexPositionNormalsTexture(Vector3 position, Vector3 normal, Vector3 binormal, Vector3 tangent, Vector2 texCoord)
            {
                Position = new Vector4(position, 1);
                Normal = normal;
                Binormal = binormal;
                Tangent = tangent;
                TextureCoordinates = texCoord;
            }
        }

When I draw using a custom effect, it acts as though the incoming vertex data is all zeroes, except for the position. Even if I try outputting any of the other elements as a colour in the pixel shader, it’s black. However, if I assign a literal value to them in the vertex shader, it works just fine. So the problem seems to lie in how the data is passed into the vertex shader. There are no error messages, warnings, or exceptions.

Here are the relevant bits of my effect:

float4x4 World;
float4x4 View;
float4x4 Projection;
//other parameters here

struct VertexShaderInput
{
    float4 Position : POSITION0;
    float3 Normal : NORMAL0;
    float3 Binormal : NORMAL1;
    float3 Tangent : NORMAL2;
    float2 TexCoord : TEXCOORD0;
};

struct VertexShaderOutput
{
    float4 Position : POSITION0;
    float3 Normal : TEXCOORD0;
    float3 Binormal : TEXCOORD1;
    float3 Tangent : TEXCOORD2;
    float2 TexCoord : TEXCOORD3;
    float2 ScreenPosition : TEXCOORD4;
};

//sampler states here

VertexShaderOutput VertexShaderFunction(VertexShaderInput input)
{
    VertexShaderOutput output;
    output.Normal = normalize(mul(input.Normal, (float3x3)World));
    output.Binormal = normalize(mul(input.Binormal, (float3x3)World));
    output.Tangent = normalize(mul(input.Tangent, (float3x3)World));
    output.TexCoord = input.TexCoord;
    float4 worldPosition = mul(input.Position, World);
    float4 viewPosition = mul(worldPosition, View);
    output.Position = mul(viewPosition, Projection);
    output.ScreenPosition = output.Position.xy * .5 * float2(1, -1) + .5;
    return output;
}

//other pixel shaders here

float4 LightMapPixelShaderFunction(VertexShaderOutput input) : COLOR0
{
    return float4(input.Tangent, 1);
    //even this fails, displaying just black, even though the incoming vertex data has a Tangent of (1, 0, 0).
}

//other techniques here

technique LightMap
{
    pass Pass1
    {
        // TODO: set renderstates here.

        VertexShader = compile vs_2_0 VertexShaderFunction();
        PixelShader = compile ps_2_0 LightMapPixelShaderFunction();
    }
}

So, am I just overlooking something really stupid? Did I miss something in the conversion? Or is MonoGame perhaps doing some sort of unfortunate optimisation?

Maybe change NORMAL1, NORMAL2, etc. to BINORMAL, TANGENT, etc.? Although that shouldn’t be it.

Make sure that in the content pipeline you have “binormals/tangents” ticked on for the models, but that should be a given; otherwise the program would crash anyway, I think.
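For reference, on the C# side that suggestion would mean giving the binormal and tangent elements their own usages rather than extra Normal indices. A rough sketch only, assuming the same layout as the VertexPositionNormalsTexture struct above:

    // Sketch only: same offsets as in the struct above, but with Binormal/Tangent
    // usages so they line up with BINORMAL0 / TANGENT0 semantics in the shader.
    static VertexPositionNormalsTexture()
    {
        vertexDeclaration = new VertexDeclaration(
            new VertexElement(0, VertexElementFormat.Vector4, VertexElementUsage.Position, 0),
            new VertexElement(16, VertexElementFormat.Vector3, VertexElementUsage.Normal, 0),
            new VertexElement(28, VertexElementFormat.Vector3, VertexElementUsage.Binormal, 0),
            new VertexElement(40, VertexElementFormat.Vector3, VertexElementUsage.Tangent, 0),
            new VertexElement(52, VertexElementFormat.Vector2, VertexElementUsage.TextureCoordinate, 0)
            );
    }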

EDIT: Here is my struct for non-terrain meshes, without colors.

struct DrawWithShadowMapNormals_VSIn
{
    float4 Position : SV_POSITION0;
    float3 Normal : NORMAL0;
    float3 Binormal : BINORMAL0;
    float3 Tangent : TANGENT0;
    float2 TexCoord : TEXCOORD0;
};

////////////
I use a similar version, but also with vertex color, so that’s the one difference.

struct DrawTerrain_VSIn
{
    float4 Position : SV_POSITION0;
    float3 Normal : NORMAL0;
    float3 Tangent : TANGENT0;
    float3 Binormal : BINORMAL0;
    float2 TexCoord : TEXCOORD0;
    float4 Color : COLOR0;
};

This works for me; the vertex declaration is:

public struct VertexPositionColorNormal : IVertexType
    {
        public Vector3 Position;
        public Color Color;
        public Vector3 Normal;
        public Vector2 TextureCoordinate;
        public Vector3 Tangent;
        public Vector3 Binormal;

        public VertexPositionColorNormal(Vector3 position, Color color, Vector3 normal) : this()
        {
            Position = position;
            Color = color;
            Normal = normal;
        }

        public VertexPositionColorNormal(Vector3 position, Color color, Vector3 normal, Vector2 textureCoordinate)
            : this()
        {
            Position = position;
            Color = color;
            Normal = normal;
            TextureCoordinate = textureCoordinate;
        }

        public VertexDeclaration VertexDeclaration
        {
            get
            {
                return new VertexDeclaration
                    (
                    new VertexElement(0, VertexElementFormat.Vector3, VertexElementUsage.Position, 0),
                    new VertexElement(sizeof(float) * 3, VertexElementFormat.Color, VertexElementUsage.Color, 0),
                    new VertexElement(sizeof(float) * 3 + 4, VertexElementFormat.Vector3, VertexElementUsage.Normal, 0),
                    new VertexElement(sizeof(float) * 3 + 4 + sizeof(float) * 3, VertexElementFormat.Vector2, VertexElementUsage.TextureCoordinate, 0),
                    new VertexElement(sizeof(float) * 3 + 4 + sizeof(float) * 3 + sizeof(float) * 2, VertexElementFormat.Vector3, VertexElementUsage.Tangent, 0),
                    new VertexElement(sizeof(float) * 3 + 4 + sizeof(float) * 3 + sizeof(float) * 2 + sizeof(float) * 3, VertexElementFormat.Vector3, VertexElementUsage.Binormal, 0)
                    );
            }
        }
    }

After playing with it some more, I figured I’d try this suggestion, although - like you said - it shouldn’t be that. But apparently it was. O.o

Or at least, making that change seemed to sort it. In messing around with it, though, I discovered a few interesting points:

  1. NORMAL is only a valid semantic in vertex shader input. I knew this one already.
  2. NORMAL does not have to be followed by any number, nor does any other input semantic.
  3. Input semantics can, however, be followed by any number. Any number! I tried NORMAL100 and it worked the same way.

It seems the reason it wasn’t working at first was that I had three input normals - which, as you can see, would effectively all share the same semantic because the index is ignored - and I guess that caused confusion.

Is this expected behaviour, or is this a glitch?

Nope, I fear I spoke too soon.

So, even with these semantics changed as described above, it seems like the only element that the vertex shader is receiving is Position; all the others appear to be zero. Once again, I can set them to a literal value in the vertex shader, and that works fine, but it doesn’t seem to want to read them from the incoming vertex data.

Could it have anything to do with the “location” field of the VertexAttributes? When I poked around at the internal variables in the GraphicsDevice and shader, I found that the VertexAttributes have the following “location” field values:

Position: 0
Normal: 3
Binormal: -1
Tangent: -1
TextureCoordinate: -1

I have no idea what these values are used for, nor whence they came, but none of the other effects I’m using have a negative location (or anything other than 0, really). Could this have something to do with it?

Edit: Ran it again, and found these locations:

Position: 0
Normal: -1
Binormal: -1
Tangent: -1
TextureCoordinate: 4

It will use 0 as the index if you don’t specify any.

Regarding “even with these semantics changed as described above”: it’s really weird that that makes a difference. There should be no problem with using three normal semantics with different indices.

Try adding

    [StructLayout(LayoutKind.Sequential, Pack = 1)]

above the definition of your custom VertexType. It might fix the ordering, though I thought the layout was sequential by default, so it might not help.
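Applied to the struct from your first post, that would look something like this (just a sketch; the constructor and VertexDeclaration stay as they are):

    // Requires: using System.Runtime.InteropServices;
    [StructLayout(LayoutKind.Sequential, Pack = 1)]
    public struct VertexPositionNormalsTexture : IVertexType
    {
        public Vector4 Position;
        public Vector3 Normal;
        public Vector3 Binormal;
        public Vector3 Tangent;
        public Vector2 TextureCoordinates;
        // ...constructor and VertexDeclaration unchanged...
    }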

Nah, that didn’t make any difference. =/ But thanks for the suggestion.

I don’t know for sure whether the -1 location is indeed the cause, but it seems like a likely culprit. Looking at MonoGame’s source, the value seems to be read directly from the compiled shader code. I also don’t know where else it is used. However, if that is indeed the problem, it’s coming from the shader compiler.

I removed all but two of the vertex attributes (in both the shader and the vertex declaration): Position and TexCoord. The location issue (if it was an issue at all) went away, but my shader is still not working in general; it’s as though the texture coordinates are (0, 0) at every point, even though the incoming vertex data has valid coordinates.

Now I’ve replaced my custom vertex structure with the default VertexPositionNormalTexture, and the same things are happening, both in terms of the VertexAttribute’s “location” field (if relevant) and the data coming in as zero. Therefore, I infer that it must be a problem with the shader or its compilation, as these are where both of these remaining issues would reside.
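For what it’s worth, the failing draw boils down to something like this (a sketch with illustrative vertex data; effect and graphicsDevice stand in for my ObjectEffect and Camera.GraphicsDevice):

    // Sketch of the kind of draw that misbehaves; the quad data here is illustrative.
    var vertices = new[]
    {
        new VertexPositionNormalTexture(new Vector3(-1, -1, 0), Vector3.UnitZ, new Vector2(0, 1)),
        new VertexPositionNormalTexture(new Vector3(-1,  1, 0), Vector3.UnitZ, new Vector2(0, 0)),
        new VertexPositionNormalTexture(new Vector3( 1,  1, 0), Vector3.UnitZ, new Vector2(1, 0)),
        new VertexPositionNormalTexture(new Vector3( 1, -1, 0), Vector3.UnitZ, new Vector2(1, 1)),
    };
    var indices = new short[] { 0, 1, 2, 0, 2, 3 };

    foreach (EffectPass pass in effect.CurrentTechnique.Passes)
    {
        pass.Apply();
        graphicsDevice.DrawUserIndexedPrimitives(
            PrimitiveType.TriangleList, vertices, 0, vertices.Length, indices, 0, indices.Length / 3);
    }
    // The TexCoord (and Normal) values come through as zero in the shader,
    // even though the arrays above clearly contain non-zero data.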

Are you using a non-ancient version of MG? If so, the issue is most likely on your side. Could you show the parts of your code where you set the parameters for the shader, set the data in the vertex buffer, and draw your model? I think there must be some kind of mistake in one of those parts of your code.

Hmm, alright, thanks, I’ll poke at it some more. I’ll try isolating the draw to see if there’s a conflict arising somewhere else.

Ok, I’m still utterly baffled, perhaps now more than before. It seems that something decidedly bizarre is going on.

I’ve traced it down to a single line of code - a GraphicsDevice.DrawUserIndexedPrimitives call. If I comment it out, everything works just fine; subsequent draws have the proper incoming TexCoords, Normals, etc. However, if I leave that line in, the later draw in question seems to have all of its incoming vertex data as zero, even though no other code changes, the offending line is never called again in between, and there are three other sets of SetRenderTarget and Clear between the two. How can this be? =O The only connection I can see between these two seemingly conflicting draws is that they’re using the same effect, but why would a DrawUserIndexedPrimitives call change its behaviour? Even the EffectPass.Apply doesn’t seem to affect it.

DirectX or OpenGL? It would be useful if you could show a bit more code of how you handle the effect and drawing, like the failing draw calls and the initialization you do before them.
Seems this might be a bug after all…

OpenGL.

My code is spread out over several classes, methods, and files, but here is the sequence of events (so I don’t have to send my entire program):

Exhibit A:

        public override void Prepare(Level level)
        {
            Camera.GraphicsDevice.SetRenderTarget(backbuffer);
            Camera.GraphicsDevice.Clear(Color.Black);
            level.DrawHighlight();
            base.Prepare(level);
        }

Within level.DrawHighlight:

            foreach (IDrawable d in AllObjectsToBeDrawn(0, float.PositiveInfinity))
            {
                Camera.GraphicsDevice.DepthStencilState = DepthStencilState.None;
                Camera.GraphicsDevice.BlendState = BlendState.AlphaBlend;
                d.DrawSilhouette(Camera, this);
                Camera.GraphicsDevice.DepthStencilState = DepthStencilState.None;
                Camera.GraphicsDevice.BlendState = BlendState.Additive;
                d.DrawHighlight(Camera, this);
            }
            Camera.GraphicsDevice.DepthStencilState = DepthStencilState.Default;

In one of the IDrawables to be drawn (which, mind, is not an XNA IDrawable, but rather an interface of my own):

        public override void DrawSilhouette(Camera camera, Level environment)
        {
            Vector3 backup = DiffuseColour;
            DiffuseColour = Vector3.Zero;
            ApplyStandardParameters(effect, environment);
            effect.World = World;

            camera.ApplyParameters(effect);
            foreach (EffectPass pass in effect.CurrentTechnique.Passes)
            {
                pass.Apply();
                //This is the line that, if uncommented, prevents vertex data from working later on
                //Camera.GraphicsDevice.DrawUserIndexedPrimitives<VertexPositionNormalsTexture>(PrimitiveType.TriangleList, vbuf, 0, vbuf.Length, ibuf, 0, ibuf.Length / 3);
            }
            DiffuseColour = backup;
        }
        protected void ApplyStandardParameters(ObjectEffect effect, Level environment)
        {
            effect.Mode = ObjectEffect.Modes.Standard;
            effect.World = World;
            effect.Alpha = Alpha;
            effect.DiffuseColour = DiffuseColour;
            effect.FogColour = environment.FogColour;
            effect.FogDistance = environment.FogDistance;
            effect.FogEnabled = environment.FogEnabled;
        }

In the Camera class:

        public void ApplyParameters(ObjectEffect effect)
        {
            effect.View = View;
            effect.Projection = Projection;
        }

And later, Exhibit B:

        public void PrepareLightMap()
        {
            Camera.GraphicsDevice.BlendState = BlendState.AlphaBlend;
            Camera.GraphicsDevice.SetRenderTarget(Camera.LightMap);
            Camera.GraphicsDevice.Clear(new Color(LightColour));

            IEnumerable<IDrawable> objects = AllObjectsToBeDrawn(0, float.PositiveInfinity);
            foreach (IDrawable d in objects)
                d.DrawToLightMap(Camera, this);
        }

And in the same IDrawable’s DrawToLightMap method:

            ApplyLightMapParameters(effect, Level);
            effect.NormalMap = normalmap;
            effect.LightDirection = new Vector3(0, .5f, .866f);
            camera.ApplyParameters(effect);
            foreach (EffectPass pass in effect.CurrentTechnique.Passes)
            {
                pass.Apply();
                //This is the call that seems to be lacking its incoming vertex data (apart from Position)
                Camera.GraphicsDevice.DrawUserIndexedPrimitives<VertexPositionNormalsTexture>(PrimitiveType.TriangleList, vbuf, 0, vbuf.Length, ibuf, 0, ibuf.Length / 3);
            }

And its ApplyLightMapParameters:

        protected void ApplyLightMapParameters(ObjectEffect effect, Level environment)
        {
            effect.Mode = ObjectEffect.Modes.LightMap;
            effect.World = World;
            effect.Alpha = Alpha;
            effect.LightColour = environment.LightColour + EmissiveColour;
            effect.ShadowColour = environment.ShadowColour;
            effect.ShadowMap = environment.Camera.GetShadowMap(Z);
        }

I think that’s all the relevant bits. O.o But as you can see, in between the two draws I’m setting effect parameters, setting the CurrentTechnique (which happens via the effect.Mode property), setting the RenderTarget, and clearing the graphics device; I figured this would be enough to completely separate the two draws, so the odd vertex behaviour shouldn’t carry over.

The shader code is effectively the same as in my original posts, but I now have it just returning float4(input.TexCoord, 0, 1), which comes out in many colours when it’s working properly, and solid blue when it’s not.

This is really weird… Are you doing this in DirectX or OpenGL? Edit: Oops, didn’t see you already mentioned that!
So it looks like some state of the GraphicsDevice, or something else, gets changed by the first DrawUserIndexedPrimitives and not overwritten by the second one, though that’s very weird… You could try debugging the VertexDeclaration.Apply calls. As the DrawUserIndexedPrimitives call is causing the issue, that looks like the place where something might go wrong.

How do we go about debugging this? Is it something I should do personally, or can we somehow raise the issue to those managing the project? As you can see, and as it seems you agree, I really don’t think it’s something that I’m doing wrong. =I

You can debug this by getting the source code and switching out the MG reference in your game for the source project, then you can step through and put breakpoints. You’re right that this might be a bug, so it seems like a good idea to open an issue on GitHub. It’s probably fastest if you can figure it out yourself though.

Ok, thanks. =)
I’ll confess that I’m still a bit of a noob when it comes to MonoGame (I’ve been developing in XNA for nearly a decade, though)… can you point me to where/how I can obtain the source?

There’s a guide in the readme on GitHub. :)

This issue just reared its ugly head once again in a different place. Calling DrawUserIndexedPrimitives in one method whilst preparing back textures prevented the TexCoords from being properly passed into another technique of the same effect later on, even though the later draw goes to a different (and cleared) RenderTarget2D. That seems to be the common cause. I’d propose that perhaps it was optimising out those vertex elements, but they’re used in both techniques; besides, that wouldn’t make sense anyway…

I opened an issue on GitHub. Any idea how long it takes for them to notice these?