Continuing the discussion from [solved] How to achieve this type of camera:
Linking the above topic instead of replying directly because my question isn’t really related to the original camera question of that topic, but it uses almost exactly the same code described there.
I’m doing pretty much exactly what is described in the linked topic - rendering 2D sprites into a 3D world by taking the 3D point each sprite is “anchored” to, converting it to 2D pixel space, and drawing with SpriteBatch calls and a custom effect. Things are actually working as I intend, but I’d like to move the code from the CPU into the vertex shader, both for efficiency and to better support some custom work I’m doing (such as simulating lighting on the 2D sprites).
Here is the code I’m currently using:
//Set up MatrixTransform field in the shader
var viewport = GraphicsDevice.Viewport;
var projection = Matrix.CreateOrthographicOffCenter( 0, viewport.Width, viewport.Height, 0, 0, 1 );
renderEffect.Parameters["MatrixTransform"].SetValue( projection );
// Needed for lighting until we get everything reworked to do these calculations in the shader
renderEffect.Parameters["WorldPos"].SetValue( _WorldPosition );
// Project the 3D anchor point to clip space, then divide by W to get normalized device coordinates
Vector4 screenPos4 = Vector4.Transform( new Vector4( _WorldPosition, 1.0f ), viewMat * projMat );
Vector2 screenPos = new Vector2( screenPos4.X, screenPos4.Y ) / screenPos4.W;
// Map NDC (-1..1, Y up) to pixel coordinates (0..width/height, Y down)
Vector2 pixelPos = ( new Vector2( screenPos.X, -screenPos.Y ) + Vector2.One ) / 2 * new Vector2( viewport.Width, viewport.Height );
// Anchor the sprite so the bottom-center of the texture sits on the projected point
Point rectPosition = new Point( (int)pixelPos.X - _ModelTexture.Width / 2, (int)pixelPos.Y - _ModelTexture.Height );
Rectangle rectangle = new Rectangle( rectPosition.X, rectPosition.Y, _ModelTexture.Width, _ModelTexture.Height );
spriteBatch.Begin( SpriteSortMode.Deferred, null, null, DepthStencilState.DepthRead, RasterizerState.CullNone, renderEffect );
spriteBatch.Draw( _ModelTexture, rectangle, Color.White );
spriteBatch.End();
And the vertex shader:
VSOutput output;
// input.position arrives in pixel space (SpriteBatch builds the quad from the destination rectangle),
// so the orthographic off-center MatrixTransform takes it straight to clip space
output.position = mul( input.position, MatrixTransform );
output.color = input.color;
output.texCoord = input.texCoord;
return output;
I’d like to move the bulk of what I’m currently doing in C# into the vertex shader, but I’m having trouble wrapping my head around the exact relationship between the vertex data that gets built when I call SpriteBatch.Draw and what actually arrives in the vertex shader.
Notes:
-I am only using SpriteBatch Begin and End for each Draw call temporarily while I get this sorted. Once I get things working properly I’ll be doing proper batching.
-I do need the “raw” world position at some point in the vertex shader because it drives the lighting on these objects; that’s why I’m currently passing the “WorldPos” parameter. It’s also part of the reason I want to move everything into HLSL - if the verts arrive in world space I can use them for the lighting directly (rough sketch of what I’m aiming for below).
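To make the goal concrete, here’s a rough, untested sketch of the vertex shader I’m aiming for. The parameter and struct names are just placeholders, the pixel shader and technique are omitted, and I’m ignoring for now how the four quad corners would actually get offset from the anchor point:

float4x4 ViewProjection;    // viewMat * projMat, set once per frame from C#

struct VSInput
{
    float4 position : POSITION0;    // ideally already in world space
    float4 color    : COLOR0;
    float2 texCoord : TEXCOORD0;
};

struct VSOutput
{
    float4 position : SV_Position;
    float4 color    : COLOR0;
    float2 texCoord : TEXCOORD0;
    float3 worldPos : TEXCOORD1;    // handed to the pixel shader for lighting
};

VSOutput VSMain( VSInput input )
{
    VSOutput output;
    output.worldPos = input.position.xyz;                      // keep the raw world position for lighting
    output.position = mul( input.position, ViewProjection );   // world space -> clip space
    output.color    = input.color;
    output.texCoord = input.texCoord;
    return output;
}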