Is it possible to have a second semantic like SV_POSITION?

Hi guys and gals,

I’m trying to figure out why SV_POSITION and POSITION1 are not calculated and interpolated the same way between vertex and pixel shader.

I’ve been looking for hours now, and I guess it’ll be pretty easy if you know it already :D. I understand that SV_POSITION is given as normalized coordinates between {-1,-1} and {1,1}. What would I have to do to get the same for both semantics?

struct VIn
{
    float4 position : POSITION0;
};

struct VOut
{
    float4 positionA : SV_POSITION;
    float4 positionB : POSITION1;
};

VOut VS( in VIn input )
{
    // Do usual calculations like World, View, Projection

    VOut output;
    output.positionA = input.position;
    output.positionB = input.position;
    return output;
}

float4 PS( VOut input ) : COLOR
{
    // Here input.positionA and input.positionB differ.
    // What do I have to do to get the same values?
}

Thank you for having a look : )


You can use TEXCOORD1 to pass through a secondary position.
Say for example if you need the world position in the shader.

Here is an example where I’m doing it now in one of my shaders.
Some of it is omitted so it’s not confusing.

struct VsInTexNorm
{
    float3 Position : POSITION0;
    float3 Normal : NORMAL0;
    float2 TextureCoordinate : TEXCOORD0;
    //float4 BlendIndices : BLENDINDICES0;
    //float4 BlendWeights : BLENDWEIGHT0;
};

struct VsOutTexNorm
{
    float4 Position : SV_POSITION;
    float2 TextureCoordinate : TEXCOORD0;
    float3 Position3D : TEXCOORD1;
    float3 Normal3D : TEXCOORD2;
};

Then the vertex shader:

VsOutTexNorm VsSkinned(VsInTexNorm input)
{
    VsOutTexNorm output;
    float4 pos = ... ;  // world-space position, built from float4(input.Position, 1.0f); skinning math omitted
    float4x4 vp = mul(View, Projection);
    output.TextureCoordinate = input.TextureCoordinate;
    output.Normal3D = norm;        // norm = world-space normal, calculation omitted
    output.Position3D = pos.xyz;
    output.Position = mul(pos, vp);
    return output;
}

I could then use those in the pixel shader if I were so inclined.

float4 PsTextureNorm(VsOutTexNorm input) : COLOR0
{
    // float3 pos = input.Position3D;   // <<<<<<<<<<<<<<
    // float3 norm = input.Normal3D;    // <<<<<<<<<<<<<<

    float4 result = tex2D(TextureSamplerBaseColor, input.TextureCoordinate) + float4(0.5f, 0.5f, 0.5f, 1.0f);
    result.a = 1.0f;
    return result;
}

If, however, you mean you need extra coordinates passed in per vertex (because you’re doing something really complicated), you must create a vertex declaration, and the struct in the shader must match it. I’m guessing you aren’t, though, and the above is what you need.

Hi @willmotil,
unfortunately that’s not what I was looking for. I need something that’s transformed exactly the same way as the 3D position. I want to change the vertex in the vertex shader but carry the original vertex and its transformation with me, because I need the depth value at the position of the original vertex.

Please have a look into the code example given above.

Ok, to narrow it down: what would I have to do, when using POSITION0 instead of SV_POSITION, to get the same result?


Found out that doing:

input.positionB.y *= -1;
input.positionB = (input.positionB + 1.0f) * 0.5f;

brings me almost there. It converts my coordinates to texture coordinates, which seems to be pretty much the same as what happens to positionA between vertex and pixel shader.
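For anyone comparing the two later: this flip-and-scale matches the full SV_POSITION mapping only as long as the w-component is 1 (e.g. before a perspective projection has been applied). A small Python sketch (the function names and sample values are mine, purely illustrative) shows where the two agree and where they drift apart:

```python
# Compare the flip-and-scale workaround with the full mapping
# that also includes the perspective divide by w.

def workaround(p):
    """positionB.y *= -1; positionB = (positionB + 1) * 0.5 -- no divide by w."""
    x, y, z, w = p
    return ((x + 1) * 0.5, (-y + 1) * 0.5)

def full_mapping(p):
    """Divide by w first (what the hardware does to SV_POSITION), then remap."""
    x, y, z, w = p
    return ((x / w + 1) * 0.5, (-y / w + 1) * 0.5)

p1 = (0.5, 0.5, 0.0, 1.0)   # w == 1: both give the same texture coordinate
print(workaround(p1) == full_mapping(p1))   # True

p2 = (0.5, 0.5, 0.0, 2.0)   # w != 1 (after a perspective projection): they differ
print(workaround(p2) == full_mapping(p2))   # False
```

So the workaround looks right exactly because positionB here never went through a projection; once it does, the divide by w becomes necessary.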

Normal vertex shader outputs like texcoords just get interpolated in the pixel shader, and nothing else happens to them. SV_POSITION is special: it’s used to calculate screen-space positions, which is why it goes through one extra processing step automatically. To get screen-space positions you have to divide by what’s in the w-component. In your case, to take PositionB to screen space you have to do the same thing:

float3 screenSpacePos = input.PositionB.xyz / input.PositionB.w;

If you then want to go to texture coordinates, you have to map the -1…1 screen-space range to the 0…1 texture range and flip the y-coordinate.

float2 texCoord = (screenSpacePos.xy + 1) / 2;
texCoord.y = 1 - texCoord.y;
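Put together, the two steps can be checked numerically. A small Python sketch (illustrative only; the function name and sample values are mine, not from the shader code above):

```python
# Walk a clip-space position through the two steps:
# perspective divide, then NDC -> texture-coordinate remap.

def clip_to_texcoord(clip):
    x, y, z, w = clip
    # Step 1: perspective divide -- the extra processing SV_POSITION gets.
    ndc_x, ndc_y = x / w, y / w          # now in the -1..1 screen-space range
    # Step 2: map -1..1 to 0..1 and flip y (texture v grows downward).
    u = (ndc_x + 1) / 2
    v = 1 - (ndc_y + 1) / 2
    return u, v

# A vertex projected to the top-left of the screen (NDC x = -1, y = +1):
print(clip_to_texcoord((-2.0, 2.0, 1.0, 2.0)))   # (0.0, 0.0)
# Screen centre:
print(clip_to_texcoord((0.0, 0.0, 0.5, 1.0)))    # (0.5, 0.5)
```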

Thank you @markus & @willmotil : )

That solved it.