Issues using SpriteBatch and custom Vertex Shader to simulate 2D objects in 3D space

Continuing the discussion from [solved] How to achieve this type of camera:

Linking the above topic instead of replying directly because my question isn’t really related to the original camera question in that topic, but it uses almost exactly the same code described there.

I’m doing pretty much exactly what is described in the linked topic - rendering 2D sprites into a 3D world by taking the 3D point each sprite is “anchored” to, converting it to 2D pixel space, and drawing with SpriteBatch calls and a custom effect. Things are actually working as I intend, but I’d like to move the code from the CPU into the vertex shader, both for efficiency and to better support some custom work I’m doing (such as simulating lighting on the 2D sprites).

Here is the code I’m currently using:

// Set up the MatrixTransform field in the shader: an orthographic projection
// that maps pixel coordinates to clip space, matching SpriteBatch's default SpriteEffect
var viewport = GraphicsDevice.Viewport;
var projection = Matrix.CreateOrthographicOffCenter( 0, viewport.Width, viewport.Height, 0, 0, 1 );
renderEffect.Parameters["MatrixTransform"].SetValue( projection );

// Needed for lighting until we get everything reworked to do these calculations in the shader
renderEffect.Parameters["WorldPos"].SetValue( _WorldPosition );

// Project the 3D anchor point to clip space, then perspective-divide
// to get normalized device coordinates in [-1, 1]
Vector4 screenPos4 = Vector4.Transform( new Vector4( _WorldPosition, 1.0f ), viewMat * projMat );
Vector2 screenPos = new Vector2( screenPos4.X, screenPos4.Y ) / screenPos4.W;

// Flip Y (NDC is Y-up, pixel space is Y-down) and map [-1, 1] to pixel coordinates
Vector2 pixelPos = ( new Vector2( screenPos.X, -screenPos.Y ) + Vector2.One ) / 2 * new Vector2( viewport.Width, viewport.Height );

// Anchor the sprite so its bottom-center sits on the projected point
Point rectPosition = new Point( (int)pixelPos.X - _ModelTexture.Width / 2, (int)pixelPos.Y - _ModelTexture.Height );

Rectangle rectangle = new Rectangle( rectPosition.X, rectPosition.Y, _ModelTexture.Width, _ModelTexture.Height );

spriteBatch.Begin( SpriteSortMode.Deferred, null, null, DepthStencilState.DepthRead, RasterizerState.CullNone, renderEffect );
spriteBatch.Draw( _ModelTexture, rectangle, Color.White );
spriteBatch.End();

And the vertex shader:

VSOutput MainVS( VSInput input )
{
    VSOutput output;

    // SpriteBatch has already placed the vertices in screen-pixel space;
    // MatrixTransform is the orthographic projection set from C#
    output.position = mul( input.position, MatrixTransform );
    output.color = input.color;
    output.texCoord = input.texCoord;

    return output;
}

I’d like to do the bulk of the C# code above in the vertex shader, but I’m having trouble wrapping my head around the exact relationship between the vertex data SpriteBatch builds for the Draw call and what actually arrives in the vertex shader.
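For reference, my understanding (worth verifying against the MonoGame source) is that SpriteBatch submits VertexPositionColorTexture vertices whose positions are already in screen-pixel space, with Z taken from the layerDepth argument. So something roughly like this is what the custom vertex shader actually receives for the Draw call above:

// Approximation of the four corner vertices SpriteBatch generates for
// spriteBatch.Draw( _ModelTexture, rectangle, Color.White ).
// Positions are pixels, not the 3D anchor point - the anchor is already
// gone by the time the vertex shader runs.
var corners = new VertexPositionColorTexture[]
{
    new VertexPositionColorTexture( new Vector3( rectangle.Left,  rectangle.Top,    0f ), Color.White, new Vector2( 0f, 0f ) ),
    new VertexPositionColorTexture( new Vector3( rectangle.Right, rectangle.Top,    0f ), Color.White, new Vector2( 1f, 0f ) ),
    new VertexPositionColorTexture( new Vector3( rectangle.Left,  rectangle.Bottom, 0f ), Color.White, new Vector2( 0f, 1f ) ),
    new VertexPositionColorTexture( new Vector3( rectangle.Right, rectangle.Bottom, 0f ), Color.White, new Vector2( 1f, 1f ) ),
};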

Notes:
- I am only using a SpriteBatch Begin and End for each Draw call temporarily while I get this sorted. Once I get things working properly I’ll do proper batching.
- I do need the “raw” world position at some point in the vertex shader because I use it for lighting on these objects. This is why I’m currently using the “WorldPos” parameter. It’s also part of the reason I want to move everything into HLSL: if the shader receives the verts in world space, I can use them for lighting.

I’m curious to see if anyone else has a solution, but I suspect the real answer here is to abandon SpriteBatch and handle everything manually by rendering a quad in 3D space myself. Then I can transform all the vertices by _WorldPosition before passing them to the shader, which gives me world-space values I can use to calculate lighting before transforming to screen space. Doing this I should be able to avoid setting the “WorldPos” parameter per draw, which would let me batch things properly.
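To make that concrete, here’s a rough sketch of what I’m imagining (untested; spriteWorldWidth, spriteWorldHeight, and the ViewProjection parameter name are placeholders):

// Build the quad in world space so the vertex shader receives world
// positions it can use for lighting, then project with a single
// ViewProjection parameter set once per batch instead of per object
Vector3 right = Vector3.Right * spriteWorldWidth * 0.5f; // or camera-derived axes for billboarding
Vector3 up    = Vector3.Up * spriteWorldHeight;

var verts = new VertexPositionColorTexture[]
{
    new VertexPositionColorTexture( _WorldPosition - right + up, Color.White, new Vector2( 0f, 0f ) ),
    new VertexPositionColorTexture( _WorldPosition + right + up, Color.White, new Vector2( 1f, 0f ) ),
    new VertexPositionColorTexture( _WorldPosition - right,      Color.White, new Vector2( 0f, 1f ) ),
    new VertexPositionColorTexture( _WorldPosition + right,      Color.White, new Vector2( 1f, 1f ) ),
};
short[] indices = { 0, 1, 2, 2, 1, 3 };

renderEffect.Parameters["ViewProjection"].SetValue( viewMat * projMat ); // once per batch
foreach ( var pass in renderEffect.CurrentTechnique.Passes )
{
    pass.Apply();
    GraphicsDevice.DrawUserIndexedPrimitives(
        PrimitiveType.TriangleList, verts, 0, 4, indices, 0, 2 );
}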

You cannot modify the vertex information sent by SpriteBatch afaik.

What you can do is use the effect parameters to hand over control variables and do more calculation in the vertex shader based on them. But I have never tried this - in such cases I normally abandon SpriteBatch and just render on my own.
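Something like this, maybe (untested, parameter names made up):

// Set the camera matrices and per-object anchor as effect parameters
// and let the custom vertex shader re-project from them. Note that
// per-object parameters like WorldPos force one Begin/End per object,
// which defeats batching - part of why I'd render on my own instead.
renderEffect.Parameters["View"].SetValue( viewMat );
renderEffect.Parameters["Projection"].SetValue( projMat );
renderEffect.Parameters["WorldPos"].SetValue( _WorldPosition );

spriteBatch.Begin( SpriteSortMode.Deferred, null, null,
    DepthStencilState.DepthRead, RasterizerState.CullNone, renderEffect );
spriteBatch.Draw( _ModelTexture, rectangle, Color.White );
spriteBatch.End();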

Build MonoGame from source and modify SpriteBatch. I personally wrote a thing based on SpriteBatch (named it QuadBatch). SpriteBatch already uses a vertex with a Vector3 for position, so don’t use an extra world-position parameter - throw the world position into the vertex of the modified SpriteBatch and transform it to screen position in the vertex shader, through a camera or whatnot.
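For example, the vertex layout for such a QuadBatch could look something like this (illustrative only - mine differs in the details):

// Carry the full 3D world position per vertex instead of a per-draw
// effect parameter, and let the vertex shader project it. This mirrors
// the layout of the built-in VertexPositionColorTexture.
public struct QuadBatchVertex : IVertexType
{
    public Vector3 WorldPosition; // bytes 0-11
    public Color Color;           // bytes 12-15
    public Vector2 TexCoord;      // bytes 16-23

    public static readonly VertexDeclaration Declaration = new VertexDeclaration(
        new VertexElement( 0,  VertexElementFormat.Vector3, VertexElementUsage.Position, 0 ),
        new VertexElement( 12, VertexElementFormat.Color,   VertexElementUsage.Color, 0 ),
        new VertexElement( 16, VertexElementFormat.Vector2, VertexElementUsage.TextureCoordinate, 0 ) );

    VertexDeclaration IVertexType.VertexDeclaration { get { return Declaration; } }
}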

It appears what you are doing is bypassing the world-space and view-space transforms altogether and directly transforming from local space to screen space. While that works, it’s not very intuitive.

I guess you could modify MonoGame… but I don’t think that’s really the right way to go.

I think you should modify this so you are assigning positions in world space.
You might not need to dump SpriteBatch, but honestly I would dump it, just as you suggested above.
You could use a single vertex buffer for the map.
Then a second one for the billboarded sprites, which would only need to be updated if the sprites or the camera moved.

That way you wouldn’t even need much extra work in the vertex shader either.
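Rough sketch of the two-buffer idea (names are illustrative):

// A static buffer for the map geometry, written once
var mapBuffer = new VertexBuffer( GraphicsDevice,
    typeof(VertexPositionColorTexture), mapVerts.Length, BufferUsage.WriteOnly );
mapBuffer.SetData( mapVerts );

// A dynamic buffer for the billboarded sprites, rewritten only when
// the sprites or the camera actually move
var spriteBuffer = new DynamicVertexBuffer( GraphicsDevice,
    typeof(VertexPositionColorTexture), maxSpriteVerts, BufferUsage.WriteOnly );

if ( spritesOrCameraMoved )
    spriteBuffer.SetData( billboardVerts, 0, billboardVertCount );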