# [SOLVED] UV scale for 2D sprite

Hello everyone, I’m currently working on a 2D game and I would like to stretch a 2D sprite without deforming the texture it uses. The texture is a repeatable pattern, so I’m wondering if it’s possible to play with the UV scale of the texture using the repeat wrap mode for texture sampling. Does that make sense? Is there another way to do what I want?

Thanks in advance for your answers!

If I follow, you mean to map the position as if it were a window directly onto the image, as if that image repeated virtually in some imaginary space filled with these tiles end to end?

As far as I know (unless there is another way using the minification filter and wrap mode, or something similar), you would have to do those calculations yourself to map each rectangle.

It’s not a trivial thought process either.

The mapping itself is actually simple enough, i.e. you simply base the UV coordinates on your actual draw position.

However, it gets complicated because each rectangle may have to be split into as many as four parts, each with its own re-mapping of draw positions and texture coordinates into the imaginary texture space, for both the start and end coordinates.

In short, you must keep track of the actual rectangle in this imaginary texture space; as you move the destination position, you have to calculate new drawing rectangles and UV areas.

Shwooo… if you’re going to seriously go with this, I might have a class or two you can look at, if I can find them.

But basically it goes something like this.

Let’s say your texture is 100x100 (width x height).
(It lives in an imaginary space filled with these textures back to back.)

(We’ll just pick a random draw area.)

You want to draw at position (250, 250) with a width/height of (125, 125), giving a right/bottom of (375, 375).
(We’ll just calculate x; you would do the same for y.)
Let’s call the above draw destination rectangle the… drawArea.

This is just a pseudo-algorithm:

```
dx = 250;
a = (int)(dx / 100);     // a = x / texture.Width, rounded down: a = 2
b = a * 100;             // 2 * 100 = 200
u = dx - b;              // 250 - 200 = 50; this is where on the texture we start our drawing
uw = texture.Width - u;  // the width of our source draw
// The above u is the source rectangle's x position on the 100x100 texture.
// Since b was the start of the texture in the imagined texture space,
// we find the end to compare where our destination draw ends,
// i.e. whether it extends past this single texture area.
e = b + texture.Width;   // b is the start (200), e is the end (200 + 100 = 300)
if (drawArea.Right > e)
{
    // We're going to need another tile as well.
    // You would need some sort of loop here to track how much of the drawArea
    // you have drawn, whittling down that area until you know it's all drawn.
    // We already have our first set of uv coordinates and our destination coordinates.
    // To review: drawArea is where we want to draw to on screen;
    // to draw one-to-one we need to split that up.
    // Here d denotes a SpriteBatch destination rectangle and s denotes a source rectangle.
    x = drawArea.X;
    xw = e - drawArea.X;
    Rectangle d0 = new Rectangle(x, ..., xw, ...);
    Rectangle s0 = new Rectangle(u, ..., uw, ...);
    // This process is the same for y, so doing it with vectors may be simpler.
    // Our original drawArea is not completed, because we are in this if statement.
    x = drawArea.Right - e;
    u = 0;
    // etc... etc...

    // Of course this is just scratched out, but this is the gist of it.
    // It isn't fully done either; the process basically just repeats at this point.
    // You'll have to devise methods to break up each part to keep it simple.
    // It's possible to do this in a loop as well.
}
```
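To make the splitting idea concrete, here is a minimal, self-contained sketch of the one-dimensional case (the helper name `SplitAxis` and the tuple layout are my own, not from the post above); it cuts a destination span into per-tile segments the way the pseudo-algorithm describes:

```csharp
using System;
using System.Collections.Generic;

static class TileSplit
{
    // Splits a destination span [start, start + length) into segments, one per
    // tile it crosses in the imaginary texture space. Each segment holds the
    // destination start, the source offset on the texture, and the segment length.
    // Run the same routine for x and for y to build the final rectangles.
    public static List<(int dest, int src, int len)> SplitAxis(int start, int length, int tileSize)
    {
        var segments = new List<(int dest, int src, int len)>();
        int pos = start;
        int remaining = length;
        while (remaining > 0)
        {
            int src = ((pos % tileSize) + tileSize) % tileSize; // u: where on the texture we start
            int len = Math.Min(tileSize - src, remaining);      // clip the segment at the tile edge (e)
            segments.Add((pos, src, len));
            pos += len;
            remaining -= len;
        }
        return segments;
    }
}
```

With the worked example above (draw at x = 250, width 125, texture width 100) this yields two segments: one sourced at u = 50 with width 50, and a second at u = 0 with width 75.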

Once it’s all prototyped, it can probably be compressed down to a small bit of code.
Simple but math-heavy logic.


Thank you very much @willmotil for this complete answer! I didn’t think of doing it directly in my code instead of in a shader, and it’s actually pretty easy to do what I wanted.

```
var scale = new Vector2(2.5f);
var drawCount = new Point(1 + (int)Math.Floor(scale.X), 1 + (int)Math.Floor(scale.Y));

for (int x = 0; x < drawCount.X; x++)
{
    for (int y = 0; y < drawCount.Y; y++)
    {
        var positionOffset = new Point(texture.Width * x, texture.Height * y);
        var destinationRectangle = new Rectangle(300 + positionOffset.X, 50 + positionOffset.Y, texture.Width, texture.Height);

        if (x == drawCount.X - 1)
            destinationRectangle.Width = (int)(destinationRectangle.Width * (scale.X - x));
        else if (scale.X < 1f)
            destinationRectangle.Width = (int)(destinationRectangle.Width * scale.X);

        if (y == drawCount.Y - 1)
            destinationRectangle.Height = (int)(destinationRectangle.Height * (scale.Y - y));
        else if (scale.Y < 1f)
            destinationRectangle.Height = (int)(destinationRectangle.Height * scale.Y);

        var sourceRectangle = new Rectangle(0, 0, destinationRectangle.Width, destinationRectangle.Height);

        spriteBatch.Draw(texture, destinationRectangle, sourceRectangle, Color.White);
    }
}
```

And here is the result:

The first image on the left is the original; the second is the scaled one but without “scaling the UV”; and the third is the final result, which is what I wanted to obtain.

Thanks again for your help!

Alternatively, you can set the TextureAddressMode of the sampler state to Wrap or Mirror and draw vertices with texture coordinates equal to your scale. You can’t use SpriteBatch to do this, so you’d need to apply the same projection SpriteBatch uses. There has been talk of SpriteBatch.Draw overloads that let you specify the vertices directly, though!
It probably won’t matter much which of the two methods you use unless you repeat the pattern thousands of times. I previously used this method to combine neighboring faces with the same texture in a voxel engine.
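As a sketch of the coordinates that approach would use (plain float arrays here instead of MonoGame’s VertexPositionTexture, and the quad layout is only an assumption): the four corners of the quad get UVs running from (0, 0) to (scale.X, scale.Y), so a Wrap sampler repeats the texture that many times across the quad.

```csharp
using System;

static class WrapQuad
{
    // Builds { x, y, u, v } corners for a quad at (x, y) of size (w, h), with
    // UVs scaled so a Wrap (repeat) sampler tiles the texture scaleX by scaleY
    // times. In MonoGame you would put these into VertexPositionTexture vertices
    // and draw with SamplerState.LinearWrap; plain floats keep the math standalone.
    public static float[][] Build(float x, float y, float w, float h, float scaleX, float scaleY)
    {
        return new[]
        {
            new[] { x,     y,     0f,     0f     }, // top-left
            new[] { x + w, y,     scaleX, 0f     }, // top-right
            new[] { x,     y + h, 0f,     scaleY }, // bottom-left
            new[] { x + w, y + h, scaleX, scaleY }, // bottom-right
        };
    }
}
```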
