Resize a Texture2D

Does anyone know how I could resize a Texture2D object? Not draw it at a larger scale but actually resize it with a new width and height?

Here’s the why if anyone cares:
I render my entire scene to a renderTarget which copies to a Texture2D object. I scale it to the size of the user’s screen when they are running fullscreen. I have a scanline shader I apply to give it an old CRT look. This works great if the user is running in Windowed mode, but when I apply the scanlines and then draw that to the size of the screen, the scanlines are a mess because of resizing. I think if I could resize the Texture2D object to be the size of the screen and THEN apply the scanlines, they would look correct, but I’m not seeing how anyone does this :confused:

You don’t have to do this, by the way: a RenderTarget2D is a Texture2D anyway. You can just draw it like any texture or pass it to the shader like a normal texture.
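For example, the render target can be drawn straight to the backbuffer like any other texture (a minimal sketch; sceneTarget is a placeholder name for whatever render target you draw your scene into):

// sceneTarget is the RenderTarget2D the scene was rendered into.
// Because RenderTarget2D derives from Texture2D, it can be drawn directly
// or handed to an Effect as a texture parameter.
spriteBatch.Begin();
spriteBatch.Draw(sceneTarget, GraphicsDevice.Viewport.Bounds, Color.White);
spriteBatch.End();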

I assume you are talking about your shader in the other post.

So first of all, the question is: do you change the size of the backbuffer when the window is resized?

That’s not done by default and if you don’t do it you can’t make the final image appear sharp.

If you don’t already have this set up, here is my code.

In my main game.cs (so the first file you have along with program.cs)

public Game1()
{
    graphics = new GraphicsDeviceManager(this);
    Content.RootDirectory = "Content";

    [other stuff ... ]

    Window.ClientSizeChanged += ClientChangedWindowSize;

    Window.AllowUserResizing = true;
    Window.IsBorderless = false;
}

private void ClientChangedWindowSize(object sender, EventArgs e)
{
    if (GraphicsDevice.Viewport.Width != graphics.PreferredBackBufferWidth ||
        GraphicsDevice.Viewport.Height != graphics.PreferredBackBufferHeight)
    {
        // Ignore the event when the client area has collapsed (e.g. minimized)
        if (Window.ClientBounds.Width == 0) return;

        // Match the backbuffer to the new client size
        graphics.PreferredBackBufferWidth = Window.ClientBounds.Width;
        graphics.PreferredBackBufferHeight = Window.ClientBounds.Height;
        graphics.ApplyChanges();

        screenManager.UpdateResolution();
    }
}

My screenManager.UpdateResolution() also recreates all the render targets I use at the correct new resolution.
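For reference, a minimal sketch of what such an UpdateResolution might look like, assuming a single scene render target (all names here are placeholders, not the actual screenManager code):

public void UpdateResolution()
{
    int width = graphicsDevice.PresentationParameters.BackBufferWidth;
    int height = graphicsDevice.PresentationParameters.BackBufferHeight;

    // Throw away the old target and recreate it at the new backbuffer size.
    if (sceneTarget != null)
        sceneTarget.Dispose();

    sceneTarget = new RenderTarget2D(graphicsDevice, width, height, false,
        SurfaceFormat.Color, DepthFormat.Depth24);
}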

Ok, but let’s presume you don’t want your rendertarget rescaled. That’s fine, too, but you still have to resize the backbuffer for the scanlines not to be blurry.

So ideally your setup should be something like this

… draw stuff to my rendertarget

… redraw my rendertarget to the backbuffer or another rendertarget and apply shader

Important: If you apply your shader, your input for the height of the image must be your game resolution, not the resolution of the image!

So if your image is 800x640 and your screen resolution is 1280x800, you want to use 800 as the input instead of 640.
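In code that might look something like this (the "ScreenHeight" parameter name is just a stand-in for whatever your scanline shader actually expects):

// Feed the backbuffer height to the shader, not the render target height,
// so the generated scanlines line up with physical screen pixels.
scanlineEffect.Parameters["ScreenHeight"].SetValue(
    (float)GraphicsDevice.PresentationParameters.BackBufferHeight);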

Yes this is for that shader.

My rendertarget texture is 1280x720 I believe. I was hoping I could avoid multiple steps of drawing to a buffer and getting the new rendertarget at the correct size by simply resizing the existing Texture2D that I already have. Is that not possible to do?

It’s possible, but not efficient to do continuously in real time. If you look at the methods of Texture2D you will see that…

You can GetData a texture into an array of Color.
Then SetData that (rescaled) Color array into a new texture.

But that all gets done CPU-side.

Doing it every frame would be a massive slowdown for your entire app.

So if you’re planning to do it just one time, like when your app starts up, ok, I’ll drop you my ColorArray class if need be, as I’m not sure MonoGame actually has a resize function in there like XNA did. But if this is for real time, you’re going to need to work with the render target in some way.

What do you mean? There’s no way to stretch or shrink the data to make it fit in a differently sized texture. To anyone considering using getdata on one texture and setting that data on another texture, don’t do it! It has the same effect as drawing a texture to a rendertarget except it’s a bunch slower.

There’s no resize function for textures/rendertargets in XNA…

It’s not possible to resize a texture, at least with the MG API. Maybe it’s possible with DirectX or OpenGL directly, but I didn’t find anything in a quick search.

That’s pretty much what I said.

I’m a little confused as to what he is saying when he says ‘his texture’. I’m not sure exactly if he means this is just some loaded image that needs to be resized one time, or if it’s his screen being snapped off to a render target each frame that for some reason needs to be resized, which really makes no sense to me.

What do you mean? There’s no way to stretch or shrink the data to make it fit in a differently sized texture.

What I meant is that if this is just an image you’re loading from disk that needs to be resized on the way in, one time, then yeah, it’s possible to do it manually: put it into an array of colors, scale it manually with an algorithm into a new array of a different size, then send that data to a newly created Texture2D.
I actually wrote a scaling algorithm in a class that will do just that… but like you said, it would be really slow… which it really is.
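For illustration, a bare-bones nearest-neighbour version of that idea might look something like this (a sketch, not the linked class, and as noted it is far too slow to run per frame):

// One-off CPU-side resize: GetData into a Color[], remap, SetData into a new texture.
public static Texture2D ResizeOnCpu(GraphicsDevice device, Texture2D source,
    int newWidth, int newHeight)
{
    Color[] src = new Color[source.Width * source.Height];
    source.GetData(src);

    Color[] dst = new Color[newWidth * newHeight];
    for (int y = 0; y < newHeight; y++)
    {
        int srcY = y * source.Height / newHeight;   // nearest-neighbour row
        for (int x = 0; x < newWidth; x++)
        {
            int srcX = x * source.Width / newWidth; // nearest-neighbour column
            dst[x + y * newWidth] = src[srcX + srcY * source.Width];
        }
    }

    Texture2D result = new Texture2D(device, newWidth, newHeight);
    result.SetData(dst);
    return result;
}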

So I’m saying it depends on what he is actually trying to do.

It can be used for a single save or load from disk to resize an image/texture.
Not for real-time resizing each frame; your frame rate would be like fps = 2 lol.

But why would you need to resize a render of your screen per frame anyways?

by simply resizing the existing Texture2D that I already have.

I was thinking maybe he means he is using a Texture2D that has scanlines to shade his current screen to a new render target, but the image could be any size on each person’s monitor?

Anyways, trust me when I say that if he tries to use this in real time he will give up on the idea fast. He will have to pull out what he needs or possibly make some minor alterations, but the scaler is at the bottom: https://drive.google.com/open?id=0B1zD887frY04c0pZUnFPeXJkeGc

Any suggestion or idea of using GetData()/SetData() to do this is just crazy. Stop that right now. There is no “resizing” of a Texture2D aside from rendering that texture as a full-screen quad to another render target of the desired size or to the back buffer.

@kosmonautgames has described the correct way to do this. Your render target remains at its current size. You use the render target as a texture input to the shader that generates the CRT scanline effect and draw a full-screen quad to the backbuffer. The game’s vertical resolution is also used as an input to the shader so you can generate the scanlines at a suitable size for the screen resolution.

Thanks for all the responses. Looks like I’ll have to set up a second render target then, as resizing isn’t going to be a good option performance-wise since I do need to do this for the full screen every frame.

I draw everything to a render target and then resize that to the size of the screen. When I apply the scanline shader before resizing, the lines are a mess once I do resize (instead of being 1px they are all wider and not consistently wider) so that is why I have to resize first and then apply the shader to the resized texture.

You need to get the notion of resizing textures out of your head. You can’t resize and then apply the shader; what you would do is render your scene to the render target and then apply the shader while drawing that target to the backbuffer, as described above.

The only render target/texture you need to create for this process is the one you render your scene to.

Not sure if it works for you, but you can draw a texture at a different size with a VertexPositionTexture[] quad and BasicEffect. But I don’t think this is what you want, or that the quality will be what you expect.
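If anyone is curious, a rough sketch of that approach (it still only draws the texture at a different size, it does not resize the texture itself; texture, targetWidth and targetHeight are placeholders):

int targetWidth = 1280, targetHeight = 800; // whatever size you want to draw at

// Orthographic projection so quad coordinates are in pixels, origin top-left.
BasicEffect effect = new BasicEffect(GraphicsDevice)
{
    TextureEnabled = true,
    Texture = texture,
    World = Matrix.Identity,
    View = Matrix.Identity,
    Projection = Matrix.CreateOrthographicOffCenter(0, targetWidth, targetHeight, 0, 0, 1)
};

// Full quad as a two-triangle strip, UVs stretched over the whole texture.
VertexPositionTexture[] quad =
{
    new VertexPositionTexture(new Vector3(0, 0, 0), new Vector2(0, 0)),
    new VertexPositionTexture(new Vector3(targetWidth, 0, 0), new Vector2(1, 0)),
    new VertexPositionTexture(new Vector3(0, targetHeight, 0), new Vector2(0, 1)),
    new VertexPositionTexture(new Vector3(targetWidth, targetHeight, 0), new Vector2(1, 1)),
};

foreach (EffectPass pass in effect.CurrentTechnique.Passes)
{
    pass.Apply();
    GraphicsDevice.DrawUserPrimitives(PrimitiveType.TriangleStrip, quad, 0, 2);
}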

You do not want to resize anything at all; instead:

  1. Render to a target that fills the game window at exactly a 1:1 ratio.
  2. Render that target to your game window / screen with a shader.

To clarify, that’s what you need to focus on: how to avoid any resizing in the first place.
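Put together, a Draw pass along those lines might look roughly like this (placeholder names; DrawScene stands in for whatever renders your scene):

protected override void Draw(GameTime gameTime)
{
    // 1. Render the scene into the render target (1:1 with the window).
    GraphicsDevice.SetRenderTarget(sceneTarget);
    GraphicsDevice.Clear(Color.Black);
    DrawScene();

    // 2. Draw that target to the backbuffer with the scanline shader applied.
    GraphicsDevice.SetRenderTarget(null);
    spriteBatch.Begin(SpriteSortMode.Immediate, BlendState.Opaque,
        SamplerState.PointClamp, null, null, scanlineEffect);
    spriteBatch.Draw(sceneTarget, GraphicsDevice.Viewport.Bounds, Color.White);
    spriteBatch.End();

    base.Draw(gameTime);
}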

So first of all, the question is: do you change the size of the backbuffer when the window is resized?
That’s not done by default

In that regard,
you should note that there is a difference between the ideas of

the rendertarget’s width and height
the back buffer
the client size window bounds
the title safe area
the current mode’s resolution
the monitor’s resolution itself.

I’m not sure a 1-pixel scanline is as simple as it sounds, I mean, if this really has to be 1 pixel.
I can imagine a lot of ways this could get messed up.

MonoGame has been a little quirky on this stuff, to the point that you will need to check these things for equality yourself and possibly adjust them at runtime, not just at construction.

Any suggestion or idea of using GetData()/SetData() to do this is just crazy. Stop that right now.

Yep, just wanted him to be able to see it for himself.

It’s not unusual for new people to have to see it fail to fully believe it, or in general for people to want to know why something won’t work; in fact it’s expected. The quicker he is 100% on that, the better.

He could literally test it with that class in about 15 minutes; there wasn’t much to tweak to make it work. He really would get like 1 to 3 fps tops; it’s simply not going to work anyways.

That class I showed was originally made for XNA, specifically for programmatic paint operations to and from disk only, and for operations on multiple enormous bitmap sprite-sheets in automated editing. Hence why there are methods for cutting and placing parts of a color array in and out of a texture, which is all it’s really useful for. Not real-time rendering in any way.

as resizing isn’t going to be a good option performance wise as I do need to do this
for the full screen every frame.

The heavy iterative nature of these operations (and other similar ones) is why we have GPUs. Hardware scaling is much, much faster than CPU-side scaling, but scaling has other considerations GPU-side as well, such as aliasing (which is still being explored to this day). This particular shader problem specifically hits on another one, known as artifacts, but in this case it can generate entire lines of them instead of a pixel here or there.

Consider the following, which can apply to even a simple painting operation.
You have just 3 white lines, at rows y0, y2 and y4, in a texture that is 5 lines tall in total.
You have to expand this to a texture that is 8 lines tall, up to row 7.
Do you now put them at y0, y3 and y6, leaving a 2-line gap between each, or do you add an extra line and have lines at y0, y2, y4 and y6 with an extra empty line at the end?
Or do you keep the same number of lines, but make some of them two pixels tall, or antialias some to a shade of grey?
Can you tell me exactly how you make the original texture proportionally identical to the destination, with integer-based, real-world physical screen elements that may not match your final position, width and height, say when the destination has a total of 10 lines, i.e. this: https://images.duckduckgo.com/iu/?u=http%3A%2F%2Fecx.images-amazon.com%2Fimages%2FI%2F41J73haYBKL.jpg&f=1 ? And will your app automatically set the user’s screen resolution to his monitor’s maximum? The problem with scaling is that there is a lot of context, and the more detailed you get, the more complex it is to control; some of it you might not even realize is not fully in your control.
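A quick way to see the problem is to map 8 destination rows back onto those 5 source rows with plain nearest-neighbour (just an illustration, not anyone's actual code):

// Map 8 destination rows back onto 5 source rows with nearest-neighbour.
for (int dstY = 0; dstY < 8; dstY++)
{
    int srcY = dstY * 5 / 8;
    Console.WriteLine("dest row " + dstY + " samples source row " + srcY);
}
// Sampled source rows come out as 0,0,1,1,2,3,3,4: the white source rows
// 0, 2 and 4 end up 2, 1 and 1 destination rows tall respectively,
// so the "1 px" scanlines are no longer a consistent width.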

When you talk about a one-pixel scanline, that’s pretty deep; there is a lot of context to consider.

So the simple solution here is to make sure you avoid allowing any scaling at all, or avoid any solution that depends on it, in combination with being assured that the physical monitor or screen won’t also be showing a scaled version of things that affects what you expect to see.

It’s not like the screen will change its physical size because you change the backbuffer height.