[SOLVED] Strange custom vertex data fx problem

I'm seeing something that I really don't understand with my shader. When I add a normal to my custom vertex data structure and then send it to my .fx shader shown below (which doesn't really do anything), I get problems.

//_______________________________
// PNCMT PosNormalColorMultiTexture fx
#if OPENGL
#define SV_POSITION POSITION
#define VS_SHADERMODEL vs_3_0
#define PS_SHADERMODEL ps_3_0
#else
#define VS_SHADERMODEL vs_4_0_level_9_1
#define PS_SHADERMODEL ps_4_0_level_9_1
#endif

float4x4 gworldviewprojection;

Texture2D TextureA;
sampler2D TextureSamplerA = sampler_state
{
    Texture = <TextureA>;
};

Texture2D TextureB;
sampler2D TextureSamplerB = sampler_state
{
    Texture = <TextureB>;
};

struct VertexShaderInput
{
    float4 Position : POSITION0;
    float4 Normal : NORMAL0;
    float4 Color : COLOR0;
    float2 TexureCoordinateA : TEXCOORD0;
    float2 TexureCoordinateB : TEXCOORD1;
};
struct VertexShaderOutput
{
    float4 Position : SV_Position;
    float4 Normal : NORMAL0;
    float4 Color : COLOR0;
    float2 TexureCoordinateA : TEXCOORD0;
    float2 TexureCoordinateB : TEXCOORD1;
};
struct PixelShaderOutput
{
    float4 Color : COLOR0;
};

VertexShaderOutput VertexShaderFunction(VertexShaderInput input)
{
    VertexShaderOutput output;
    output.Position = mul(input.Position, gworldviewprojection);
    output.Normal = input.Normal;
    output.Color = input.Color;
    output.TexureCoordinateA = input.TexureCoordinateA;
    output.TexureCoordinateB = input.TexureCoordinateB;
    return output;
}

PixelShaderOutput PixelShaderFunction(VertexShaderOutput input)
{
    PixelShaderOutput output;
    // test
    float4 A = tex2D(TextureSamplerA, input.TexureCoordinateA) * input.Color;
    float4 B = tex2D(TextureSamplerB, input.TexureCoordinateA) * input.Color;
    A.a = B.a;
    float4 C = tex2D(TextureSamplerA, input.TexureCoordinateB) * input.Color;
    float4 D = tex2D(TextureSamplerB, input.TexureCoordinateB) * input.Color;
    if (C.a < D.a) A.b = .99;
    // normal use it to get the shader working
    float4 norm = input.Normal;
    //if (norm.x >0) A.b = .99;
    output.Color = A;
    //
    return output;
}

technique BlankTechniqueA
{
    pass
    {
        VertexShader = compile VS_SHADERMODEL VertexShaderFunction();
        PixelShader = compile PS_SHADERMODEL PixelShaderFunction();
    }
}

Ok, first, to explain this problem clearly I'll show a couple of screenshots to illustrate.

If I use a data structure without the normal passed to the above shader (Position, Color, Texture coordinates):

    public struct VertexPositionColorUvTexture : IVertexType
    {
        public Vector3 Position; // 12 bytes
        public Color Color; // 4 bytes
        public Vector2 TextureCoordinate; // 8 bytes
        public Vector2 WhichTexture; // 8 bytes

        public static int currentByteSize = 0;
        public static int Offset(float n) { var s = sizeof(float); currentByteSize += s; return currentByteSize - s; }
        public static int Offset(Vector2 n) { var s = sizeof(float) * 2; currentByteSize += s; return currentByteSize - s; }
        public static int Offset(Color n) { var s = sizeof(int); currentByteSize += s; return currentByteSize - s; }
        public static int Offset(Vector3 n) { var s = sizeof(float) * 3; currentByteSize += s; return currentByteSize - s; }
        public static int Offset(Vector4 n) { var s = sizeof(float) * 4; currentByteSize += s; return currentByteSize - s; }

        public static VertexDeclaration VertexDeclaration = new VertexDeclaration
        (
          new VertexElement(Offset(Vector3.Zero), VertexElementFormat.Vector3, VertexElementUsage.Position, 0),
          new VertexElement(Offset(Color.White), VertexElementFormat.Color, VertexElementUsage.Color, 0),
          new VertexElement(Offset(Vector2.Zero), VertexElementFormat.Vector2, VertexElementUsage.TextureCoordinate, 0),
          new VertexElement(Offset(Vector2.Zero), VertexElementFormat.Vector2, VertexElementUsage.TextureCoordinate, 1)
        );
        VertexDeclaration IVertexType.VertexDeclaration { get { return VertexDeclaration; } }
    }

Then I pass it to the same .fx shader shown above, via the draw call shown below, and it works fine…

        public void DrawCustomVertices()
        {
            SimplePNCT.Parameters["gworldviewprojection"].SetValue(worldviewprojection);
            foreach (EffectPass pass in SimplePNCT.CurrentTechnique.Passes)
            {
                pass.Apply();
                GraphicsDevice.DrawUserIndexedPrimitives(
                    PrimitiveType.TriangleList,
                    verticesTer, 0,
                    2,
                    indicesTer, 0,
                    quads *2,
                    VertexPositionColorUvTexture.VertexDeclaration
                    );
            }
        }

Producing the result shown.

However… and here is the strange part.

If I use the same shader with the same draw call, changing nothing but the vertex format passed in to the format below, then it goes haywire.

        public struct PositionNormalColorUvMultiTexture : IVertexType
        {
            public Vector3 Position;
            public Vector3 Normal;
            public Color Color;
            public Vector2 TextureCoordinateA;
            public Vector2 TextureCoordinateB;

            public static int SizeInBytes = (3 + 3 + 1 + 2 +2) * sizeof(float);
            public static VertexDeclaration VertexDeclaration = new VertexDeclaration
            (
                  new VertexElement(0, VertexElementFormat.Vector3, VertexElementUsage.Position, 0),
                  new VertexElement(sizeof(float) * 3, VertexElementFormat.Vector3, VertexElementUsage.Normal, 0),
                  new VertexElement(sizeof(int) * 1, VertexElementFormat.Color, VertexElementUsage.Color, 0),
                  new VertexElement(sizeof(float) * 2, VertexElementFormat.Vector2, VertexElementUsage.TextureCoordinate, 0),
                  new VertexElement(sizeof(float) * 2, VertexElementFormat.Vector2, VertexElementUsage.TextureCoordinate, 1)
            );
            VertexDeclaration IVertexType.VertexDeclaration { get { return VertexDeclaration; } }
        }

Even though I'm not really using the normal at all, I'm just pulling in the normal data.

It's like scrambled eggs all of a sudden.

Anyone have any clue at all why in the world I'd get that?

I think your SizeInBytes doesn't add up; you multiply by the size of floats for everything, but you have Color, which just has 4 bytes.

Plus, I'm not sure about how the element sizes have to be ordered; I thought they have to be added to the previous value.

That's what I was thinking, that the byte sizes were off, but I don't see where or how. Color is actually just stored as an int, via 4 packed bytes for ARGB; I'm pretty sure they are.

I didn't think there was a required order, is there?

It really looks like the positions are being thrown off somehow, like the data is wrong for the byte sizes.

I'll try to change the order around and see what happens. This has had me stumped for hours now.

With order I meant that you have to say at which byte each element starts, but that information is missing from your declaration.

The first one is at 0 - good.
The second one is at sizeof(float) * 3 - fine.
The third one is at sizeof(int) * 1 - won't work.
The fourth one is at sizeof(float) * 2 - won't work.

I thought sizeof(int) was different from sizeof(float), but they seem to be the same in C# (32 bit).
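The cumulative offsets for the Position/Normal/Color/TexCoord layout under discussion can be checked with plain arithmetic; this is just a sketch of the byte math (no MonoGame types involved), not the actual declaration:

```csharp
using System;

class OffsetCheck
{
    static void Main()
    {
        // Element sizes in bytes for the PositionNormalColorUvMultiTexture layout.
        int position = sizeof(float) * 3; // Vector3 = 12 bytes
        int normal   = sizeof(float) * 3; // Vector3 = 12 bytes
        int color    = sizeof(int);       // packed ARGB = 4 bytes
        int texA     = sizeof(float) * 2; // Vector2 = 8 bytes
        int texB     = sizeof(float) * 2; // Vector2 = 8 bytes

        // Each element's offset is the running total of the sizes before it.
        Console.WriteLine(0);                                       // Position  -> 0
        Console.WriteLine(position);                                // Normal    -> 12
        Console.WriteLine(position + normal);                       // Color     -> 24
        Console.WriteLine(position + normal + color);               // TexCoordA -> 28
        Console.WriteLine(position + normal + color + texA);        // TexCoordB -> 36
        Console.WriteLine(position + normal + color + texA + texB); // stride    -> 44
    }
}
```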


Oh, lol, duh, they are offsets, damnit, lol. That's gotta be it.

I should have copy-pasted the way I did it in the first one. Every time I try to shortcut that, god.

The way it is done in the first attempt is not great though. Maybe put all these functions in a helper class.

Especially since the code doesn't allow reuse, since currentByteSize is not reset. So if you used the declaration again it would be all wrong (n times the size).

public static VertexDeclaration VertexDeclaration = new VertexDeclaration
(
  // currentByteSize = 0;  <- a reset like this is needed first (a statement can't
  // go inside the argument list, so it would have to happen somewhere earlier)
  new VertexElement(Offset(Vector3.Zero), VertexElementFormat.Vector3, VertexElementUsage.Position, 0),
  new VertexElement(Offset(Color.White), VertexElementFormat.Color, VertexElementUsage.Color, 0),
  new VertexElement(Offset(Vector2.Zero), VertexElementFormat.Vector2, VertexElementUsage.TextureCoordinate, 0),
  new VertexElement(Offset(Vector2.Zero), VertexElementFormat.Vector2, VertexElementUsage.TextureCoordinate, 1)
);
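The reuse problem can be seen in isolation with a minimal sketch of the same static-counter pattern (hypothetical names; only the arithmetic matters): the second "declaration" built from the never-reset counter starts at 16 instead of 0.

```csharp
using System;

class StaticOffsetBug
{
    static int currentByteSize = 0;

    // Same pattern as the Offset(...) helpers: a shared static running total.
    static int Offset(int sizeInBytes)
    {
        currentByteSize += sizeInBytes;
        return currentByteSize - sizeInBytes;
    }

    static void Main()
    {
        // First "declaration": offsets come out right.
        Console.WriteLine(Offset(12)); // Position -> 0
        Console.WriteLine(Offset(4));  // Color    -> 12

        // Second "declaration", counter never reset: everything is shifted.
        Console.WriteLine(Offset(12)); // Position -> 16 (should be 0)
        Console.WriteLine(Offset(4));  // Color    -> 28 (should be 12)
    }
}
```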

So if you would use the declaration again it would be all wrong (n-times the size)

Ya, you're right, I never really thought about that.

It was a quick hack from the last time I forgot they were offsets, lol.
I can't believe I did that again.

Have all these functions in a helper class maybe.

I can't really think of a good way to do it.

Humm… it's not elegant or the greatest solution, but whatever works, right… edit: whoops, there we go.

    public struct PositionNormalColorUvMultiTexture : IVertexType
    {
        public Vector3 Position;
        public Color Color;
        public Vector3 Normal;
        public Vector2 TextureCoordinateA;
        public Vector2 TextureCoordinateB;

        public static VertexDeclaration VertexDeclaration = new VertexDeclaration
        (
              new VertexElement(VertexElementByteOffset.PositionStartOffset(), VertexElementFormat.Vector3, VertexElementUsage.Position, 0),
              new VertexElement(VertexElementByteOffset.OffsetColor(), VertexElementFormat.Color, VertexElementUsage.Color, 0),
              new VertexElement(VertexElementByteOffset.OffsetVector3(), VertexElementFormat.Vector3, VertexElementUsage.Normal, 0),
              new VertexElement(VertexElementByteOffset.OffsetVector2(), VertexElementFormat.Vector2, VertexElementUsage.TextureCoordinate, 0),
              new VertexElement(VertexElementByteOffset.OffsetVector2(), VertexElementFormat.Vector2, VertexElementUsage.TextureCoordinate, 1)
        );
        VertexDeclaration IVertexType.VertexDeclaration { get { return VertexDeclaration; } }
    }

    public struct VertexElementByteOffset
    {
        public static int currentByteSize = 0;
        public static int PositionStartOffset() { currentByteSize = 0; var s = sizeof(float) * 3; currentByteSize += s; return currentByteSize - s; }
        public static int Offset(float n) { var s = sizeof(float); currentByteSize += s; return currentByteSize - s; }
        public static int Offset(Vector2 n) { var s = sizeof(float) * 2; currentByteSize += s; return currentByteSize - s; }
        public static int Offset(Color n) { var s = sizeof(int); currentByteSize += s; return currentByteSize - s; }
        public static int Offset(Vector3 n) { var s = sizeof(float) * 3; currentByteSize += s; return currentByteSize - s; }
        public static int Offset(Vector4 n) { var s = sizeof(float) * 4; currentByteSize += s; return currentByteSize - s; }
        
        public static int OffsetFloat() { var s = sizeof(float); currentByteSize += s; return currentByteSize - s; }
        public static int OffsetColor() { var s = sizeof(int); currentByteSize += s; return currentByteSize - s; }
        public static int OffsetVector2() { var s = sizeof(float) * 2; currentByteSize += s; return currentByteSize - s; }
        public static int OffsetVector3() { var s = sizeof(float) * 3; currentByteSize += s; return currentByteSize - s; }
        public static int OffsetVector4() { var s = sizeof(float) * 4; currentByteSize += s; return currentByteSize - s; }
    }

You're not resetting it to 0 again, I think.

Provided I always call PositionStartOffset() first, I am; though forgetting to do so would be bad, even though you would normally start with the position:

public static int PositionStartOffset() { currentByteSize = 0; …

I should probably stick a comment in there.

I really can't think of a much better way to do it at the moment.

It's somewhat of an edge-case problem structurally: the VertexDeclaration statically returns an array from within a struct, and you have to perform the calculation within the element definitions in a conditional and contiguous manner.

I'm open to alternatives if you had something in mind.

Thanks for the prior help, kosmo; I forgot to tell you thanks, I appreciate it. It was driving me batty. The simpler the mistake is, I swear, the harder it is for me to catch.

:slight_smile: Try

// 0 - 11 = (3*float)12bytes
new VertexElement( 0, VertexElementFormat.Vector3, VertexElementUsage.Position, 0),

// 12 - 23 = (3*float)12bytes
new VertexElement( 12, VertexElementFormat.Vector3, VertexElementUsage.Normal, 0),

// 24 - 27 = 4 bytes
new VertexElement( 24, VertexElementFormat.Color, VertexElementUsage.Color, 0),

// 28 - 35 = (2*float)8bytes
new VertexElement( 28,VertexElementFormat.Vector2,VertexElementUsage.TextureCoordinate, 0),

// 36 - 43 = (2*float)8bytes
new VertexElement( 36, VertexElementFormat.Vector2, VertexElementUsage.TextureCoordinate, 1)

Total of 44 bytes in Size

Sorry, I did not read everything in detail.

Just noticed that you have written in the first post:

float4 Normal : NORMAL0;
float4 Color : COLOR0;

for the VertexShaderOutput

I don't know if this is an issue in up-to-date versions of MonoGame or shaders, but in the past these semantics were only allowed for the VertexShaderInput. For the VertexShaderOutput you would just use the TEXCOORD0…TEXCOORDn semantics.

This is not a problem; you can use whatever semantics you want.

Thanks guys, the problem was solved; kosmo caught it.

I'm not going to post the entirety of the code yet because it is far too large. My NURBS functions are in it, it's not straightened out, it's pretty messy, and I literally haven't worked on it in years; it would just confuse people. At best, until I really pick it up again, it's only good for generating terrain. I'll make a version to effortlessly generate terrain in code pretty soon and post it up when I get the time to fully encapsulate it into a new class. It's based on David F. Rogers' NURBS code, found here: http://www.nar-associates.com/nurbs/c_code.html
I rewrote some of it from C code into C#; actually, I had to alter most everything I rewrote substantially, but hey, you can see the results. NURBS are just plain awesome.

We were simply talking about the offsets after that. I always forget that those are supposed to be byte offsets, not element sizes, within each VertexElement parameter.
I have had this screw me up many times; it's just easy to forget those are offsets.

It's really a convoluted thing to have to manually count offsets for each element of a VertexDeclaration's array of elements, so I was trying to think of a solution to automate that, shown above. But ya, the main problem is solved; it works now.
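One way to sketch the automation without any shared static state: give each declaration its own running-offset instance, so there is no counter to forget to reset. `OffsetBuilder` is a hypothetical name, and in real code the returned offsets would feed the `VertexElement` constructor exactly like the hand-counted values; this sketch only shows the byte bookkeeping.

```csharp
using System;

// Hypothetical helper: each declaration gets its own instance,
// so there is no static counter that needs resetting between uses.
class OffsetBuilder
{
    private int current; // running byte offset, starts at 0

    // Reserve sizeInBytes bytes and return the offset where they start.
    public int Next(int sizeInBytes)
    {
        int offset = current;
        current += sizeInBytes;
        return offset;
    }

    public int Vector3() => Next(sizeof(float) * 3); // 12 bytes
    public int Vector2() => Next(sizeof(float) * 2); // 8 bytes
    public int Color()   => Next(sizeof(int));       // 4 packed ARGB bytes
    public int Stride    => current;                 // total vertex size so far
}

class Demo
{
    static void Main()
    {
        var b = new OffsetBuilder();
        Console.WriteLine(b.Vector3()); // Position  -> 0
        Console.WriteLine(b.Vector3()); // Normal    -> 12
        Console.WriteLine(b.Color());   // Color     -> 24
        Console.WriteLine(b.Vector2()); // TexCoordA -> 28
        Console.WriteLine(b.Vector2()); // TexCoordB -> 36
        Console.WriteLine(b.Stride);    // stride    -> 44
    }
}
```

Since the calls sit inside the `new VertexDeclaration(...)` argument list, they evaluate left to right, the same ordering the static `Offset(...)` pattern relies on.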

But ya, I'm not so great with semantics. Like, I'm not even sure why I have to write Color : COLOR0; where I get that the left-hand side is like a reference of type float4, I guess the right-hand side is like calling new to instantiate on the type?

Not too good with MonoGame's fx files either; the overaggressive optimization, which throws errors instead of just ignoring unused input and compiling, is fairly annoying.

This one's easy :slight_smile:
Semantics are used for binding your vertex channels to the vertex shader input, your pixel shader output to a render target, and the output of one shader stage to the input of the next.
E.g. if you return something with the TEXCOORD0 semantic in your vertex shader, whatever variable in your pixel shader has that same semantic will get the value of your output (though by default it will be interpolated between the values from the three verts of the triangle the pixel is on).
It's also why you need to specify the semantics in your VertexDeclaration.