Transforming objects for a RenderTarget2D

I have a question about render targets:
If I wanted my game to have a virtual resolution of, say, 480x270 for a pixelated look, but actually run at 1080p, would I have to run my Vector2s and Rectangles through some sort of transform method when I use them for logic rather than drawing, or is there an easier way to handle that?

It’s great that you mentioned render targets, since that’s exactly how I’ve been managing this kind of scaling. Create a RenderTarget2D with the desired “virtual” resolution (480x270 in your case). In your Draw method, set it as the active render target via GraphicsDevice.SetRenderTarget before you draw the game. After the game is done drawing and spriteBatch.End() has been called, set the render target back to null. Then draw again, but this time draw only the RenderTarget2D you just rendered the game to, stretched to the full size of the display window (1080p in this case). Hopefully this explanation and the code below illustrate what I mean clearly enough.

    // Draw everything to the render target first
    GraphicsDevice.SetRenderTarget(renderTarget);
    GraphicsDevice.Clear(/*Background Color*/);
    spriteBatch.Begin();

    // Call all game drawing code here

    spriteBatch.End();
    GraphicsDevice.SetRenderTarget(null);

    // Then draw the render target to the screen, scaled to the window
    GraphicsDevice.Clear(/*Background Color*/);
    spriteBatch.Begin();
    spriteBatch.Draw(renderTarget, /*Actual Window Bounds*/, Color.White);
    spriteBatch.End();

I should also mention how to set the display size of the window and how to create a RenderTarget2D, just in case. The following code would ideally go in the Initialize() method of your Game class.

    graphics.PreferredBackBufferWidth = /*Desired Window Width*/;
    graphics.PreferredBackBufferHeight = /*Desired Window Height*/;
    graphics.ApplyChanges();

    renderTarget = new RenderTarget2D(GraphicsDevice, /*"Virtual" Width*/, /*"Virtual" Height*/, false, GraphicsDevice.PresentationParameters.BackBufferFormat, DepthFormat.Depth24);

Thanks for the explanation. I actually know how to use render targets; what I meant was: what is the best way to do logic in 1080p while the screen looks like it’s in 270p? My problem is basically as follows: I have a sprite class with a draw method that draws the sprite at a Vector2. If I draw it at, say, (240, 135), it’s in the middle of the screen, which makes sense given the size of the render target. The problem is that this position doesn’t work outside of drawing, because the window is in 1080p. Essentially, the position of my sprite on the render target, once the render target is scaled to the size of the back buffer, isn’t consistent with the rest of the game. If I create a hitbox for my sprite at (240, 135, width, height), it wouldn’t actually be around the sprite; it would be closer to the top-left, because the hitbox is based on the screen resolution instead of the render target. I feel like I’m rambling.

My question is: how do I keep the position (Vector2 or Rectangle) of a sprite the same value whether I’m coding logic for it or drawing it, if I’m using a scaled render target? Do I need to write a method that transforms it based on the render target, or am I overthinking this?

Ah, I see, thanks for the clarification. It’d be easiest to just code all your game logic in terms of 270p, so that no conversions are needed. If you absolutely need to do game logic as if the game were running at native 1080p, you could put conversion functions in a static class, but it’s probably best to avoid that headache if possible.
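If you did go the conversion route, a minimal sketch of such a static class might look like the following (the class and method names here are made up for illustration, assuming a 480x270 virtual resolution and a 1920x1080 window):

    // Hypothetical static conversion class; the names are made up for illustration.
    public static class ResolutionConverter
    {
        // Virtual (render target) resolution and actual window resolution.
        public static Vector2 VirtualSize = new Vector2(480, 270);
        public static Vector2 WindowSize = new Vector2(1920, 1080);

        // Convert a position in window (1080p) coordinates
        // to render target (270p) coordinates.
        public static Vector2 WindowToVirtual(Vector2 position)
        {
            return position * (VirtualSize / WindowSize);
        }

        // Convert a render target position back to window coordinates.
        public static Vector2 VirtualToWindow(Vector2 position)
        {
            return position * (WindowSize / VirtualSize);
        }
    }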

Thanks. How would I go about coding it in 270p but have it run in 1080p without making conversion functions? Or are those the only two options? (Besides making a shader)

    // When you're setting up the game, store this somewhere easily accessible,
    // like a static InputManager class or something of that nature
    Vector2 cursorScaling = new Vector2(renderTarget.Width / (float) windowWidth, renderTarget.Height / (float) windowHeight);

    // When querying the cursor position (e.g. Mouse.GetState().Position)...
    Vector2 cursorPosition = CursorPosition.ToVector2() * cursorScaling;
    // After it has been scaled, use cursorPosition as you would normally
// After it has been scaled, use cursorPosition as you would normally

I just remembered that there’s actually one piece of game logic that must change as a result of the scaling. Mouse input won’t map to the correct in-game coordinates if the render target scaling isn’t accounted for, so the code above should help remedy that.
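To see where that fits, here’s a rough sketch of an Update method using it (assuming MonoGame’s Mouse.GetState(), with cursorScaling stored as above):

    // Rough sketch: scaling the raw mouse position each frame.
    protected override void Update(GameTime gameTime)
    {
        // The raw position is in window (1080p) coordinates.
        MouseState mouse = Mouse.GetState();

        // After scaling, the position lines up with render target (270p)
        // coordinates, so it can be tested against hitboxes directly.
        Vector2 cursorPosition = mouse.Position.ToVector2() * cursorScaling;

        base.Update(gameTime);
    }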

But for everything else, no conversions are needed. If you tell a rectangle to be drawn at (100, 100), it’ll be displayed on screen at a larger coordinate, (400, 400) at the 4x scale from 270p to 1080p, since everything drawn to the render target gets scaled when the render target itself is drawn. If you give that object a bounding box in code, for collision or whatever, it would also have its corner set at (100, 100), and it would behave accordingly. Just try it out and what I’m saying should become more apparent. Think of the render scaling like a post-processing effect: it doesn’t affect the game logic at all. One thing to consider is how to tell your game logic what the size of the screen is, since it’s important for aligning UI and such. Storing the size of the render target somewhere easily accessible is probably your best bet.
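As a sketch of that last point (the Screen class here is just a hypothetical name), something like this keeps the logical resolution in one easily accessible place:

    // Hypothetical holder for the logical ("virtual") resolution, so UI
    // code can align against it instead of the real window size.
    public static class Screen
    {
        public const int Width = 480;
        public const int Height = 270;

        // Example: a rectangle centered on the virtual screen.
        public static Rectangle Centered(int width, int height)
        {
            return new Rectangle((Width - width) / 2, (Height - height) / 2, width, height);
        }
    }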

Thanks! I’ll let you know how it goes.

It was actually much easier than I thought. I basically just scaled positions down when drawing, rather than scaling everything else up:

    // Position is stored in screen (1080p) coordinates; dividing by the
    // ratio of screen size to render target size converts it to render
    // target (270p) coordinates for drawing.
    protected Vector2 DrawPosition
    {
        get { return Position / (screenSize / renderTargetSize); }
    }
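To make the conversion concrete: assuming screenSize and renderTargetSize are Vector2 fields such as (1920, 1080) and (480, 270), a Position of (960, 540) yields a DrawPosition of (240, 135), which is only ever used in the draw call:

    // Hypothetical usage: logic reads and writes Position in screen coordinates,
    // while drawing divides back down to render target coordinates.
    spriteBatch.Draw(texture, DrawPosition, Color.White);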