I think I can use two RenderTargets, one to draw the playing area and one to draw the information area, but I'm not sure whether that's the way to go in this scenario.
So the question is: when splitting the screen, what technique is used to draw each area?
I don't think there is one specific way it should be done. You can experiment and see what works for you. If you use render targets, you have the advantage that it's easy to rescale the whole thing afterwards. The problem is that scaling fonts this way will look pretty ugly, unless you're using 8-bit fonts.
Whether you use RenderTargets with specific sizes or just render the information area on top of the playing area, everything has advantages and disadvantages.
For example, if you want the information area to be toggleable, rendering it on a higher layer overlapping the playing area would be one idea. If you don't plan to do that, it could be considered a waste of rendering effort.
Yeah, I know that everything has advantages and disadvantages; that's why I'm asking here, so someone who has implemented it before can give some tips, like you're doing now.
In this game the information area is not toggleable; it will be static and visible all the time. That's why I think I can use two different RenderTargets and avoid overlapping.
Well, then just have two different render targets that don't overlap.
You can draw render targets the same way you draw any other texture with SpriteBatch, so in that case just use the appropriate destination rectangles for the two of them and you're good to go.
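For example, something along these lines (a rough sketch; playArea, infoArea and the sizes are just placeholders):

// Compose both render targets onto the back buffer; the destination
// rectangles put the playing area on the left and the info panel on the right.
GraphicsDevice.SetRenderTarget(null); // null = draw to the back buffer
GraphicsDevice.Clear(Color.Black);
spriteBatch.Begin();
spriteBatch.Draw(playArea, new Rectangle(0, 0, 1024, 768), Color.White);
spriteBatch.Draw(infoArea, new Rectangle(1024, 0, 256, 768), Color.White);
spriteBatch.End();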
If you need help setting up and writing to RenderTargets, feel free to ask.
Before using RenderTargets, this was how I drew sprites:
protected override void Draw(GameTime gameTime)
{
    GraphicsDevice.Clear(Color.Black);

    spriteBatch.Begin();
    // Everything is drawn here
    spriteBatch.End();

    base.Draw(gameTime);
}
This is Game1.Draw(), the main Draw method. But I realized that when using RenderTargets I need to call spriteBatch.Begin() after calling graphicsDevice.SetRenderTarget(…); otherwise the sprites aren't drawn into the RenderTarget, they are drawn into the back buffer. Before RenderTargets I had only one spriteBatch.Begin() and one spriteBatch.End(), but now I'll have many more calls to these methods.
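So now my Draw has to look roughly like this (playAreaTarget, infoAreaTarget and the rectangle sizes are just made-up placeholders):

protected override void Draw(GameTime gameTime)
{
    // Draw the playing area into its own render target
    GraphicsDevice.SetRenderTarget(playAreaTarget);
    GraphicsDevice.Clear(Color.Black);
    spriteBatch.Begin();
    // ... playing-area sprites ...
    spriteBatch.End();

    // Draw the information area into its own render target
    GraphicsDevice.SetRenderTarget(infoAreaTarget);
    GraphicsDevice.Clear(Color.Black);
    spriteBatch.Begin();
    // ... information sprites ...
    spriteBatch.End();

    // Compose both targets onto the back buffer
    GraphicsDevice.SetRenderTarget(null);
    GraphicsDevice.Clear(Color.Black);
    spriteBatch.Begin();
    spriteBatch.Draw(playAreaTarget, new Rectangle(0, 0, 1024, 768), Color.White);
    spriteBatch.Draw(infoAreaTarget, new Rectangle(1024, 0, 256, 768), Color.White);
    spriteBatch.End();

    base.Draw(gameTime);
}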
Is there a way to use the same SpriteBatch.Begin() to draw in every RenderTarget?
How can I measure the time needed to call SpriteBatch.Begin()?
The other advantage of the Viewport approach is that it allows you to use full-screen anti-aliasing. This won't matter for a sprite-based game, but it will make a difference if you're doing 3D.
Huh, this thread turned out pretty good! So when you draw in these viewports, does each one start a new coordinate system? Is drawing at 0,0 in any viewport the top-left corner of that viewport? And can you have any number of viewports?
Yes. The Viewport is a property on the graphics device, so you can only ever have a single one active at any time, but you can freely change it as often as you want during your drawing. And yes, each has its own coordinate system with 0,0 in the upper-left corner.
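A quick sketch of what I mean (the sizes and sprite names here are made up):

// Playing area: left part of the screen
GraphicsDevice.Viewport = new Viewport(0, 0, 1024, 768);
spriteBatch.Begin();
spriteBatch.Draw(playerSprite, Vector2.Zero, Color.White); // 0,0 = top-left of this viewport
spriteBatch.End();

// Information area: right part of the screen, with its own coordinate system
GraphicsDevice.Viewport = new Viewport(1024, 0, 256, 768);
spriteBatch.Begin();
spriteBatch.Draw(infoBackground, Vector2.Zero, Color.White); // 0,0 = top-left of the info panel
spriteBatch.End();

Note that the viewport has to be set before spriteBatch.Begin(), since SpriteBatch picks up the viewport dimensions for its projection when the batch starts.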
The viewport transformation is just the final stage of vertex transformation in the graphics pipeline. Its task is to map points from the normalized view volume you end up with after applying the world, view and projection transforms to their final position on screen. The normalized view volume is a cube with side length 2, centered around the origin. So to map it to screen coordinates, you need a transformation consisting of a scaling followed by a translation, so that everything fits inside your destination rectangle on the screen. The z-coordinate holds the depth; it gets mapped from the normalized range (-1, 1) to (0, 1). (That's the OpenGL convention; in Direct3D the normalized depth range is already 0 to 1.)
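Concretely, for a viewport with top-left corner $(x_0, y_0)$, width $w$ and height $h$, the standard mapping looks like this (note the $y$-flip, because the normalized volume has $+y$ up while screen coordinates have $+y$ down):

$$x_{screen} = x_0 + \frac{w}{2}\,(x_{ndc} + 1), \qquad y_{screen} = y_0 + \frac{h}{2}\,(1 - y_{ndc}), \qquad z_{screen} = \frac{z_{ndc} + 1}{2}$$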