SpriteBatch with PointWrap stretches the texture rather than tiling it

Hi

For my game's UI I have been drawing lots of tiles using SpriteBatch at 1:1 scale (source and destination rectangles are the same size). I calculate how many of these I need to fill the larger area, and use % to fit the last tile to the remaining gap.
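Roughly what that looks like (a simplified sketch; `DrawTiled`, `tileSource` and `area` are placeholder names, types are from Microsoft.Xna.Framework):

```csharp
// One Draw call per tile, clipping the last tile in each row/column.
void DrawTiled(SpriteBatch spriteBatch, Texture2D texture, Rectangle tileSource, Rectangle area)
{
    for (int y = area.Top; y < area.Bottom; y += tileSource.Height)
    {
        for (int x = area.Left; x < area.Right; x += tileSource.Width)
        {
            // Shrink the final tile to whatever gap is left (the % part).
            int w = System.Math.Min(tileSource.Width, area.Right - x);
            int h = System.Math.Min(tileSource.Height, area.Bottom - y);
            var src = new Rectangle(tileSource.X, tileSource.Y, w, h);
            spriteBatch.Draw(texture, new Rectangle(x, y, w, h), src, Color.White);
        }
    }
}
```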

This is working well, but I want to reduce the draw calls by making a single draw call with the full destination rectangle. I was under the impression that all I needed to do for this was set the SamplerState to PointWrap and the texture would tile.

My issue is that even with SamplerState.PointWrap (or any other, I've tried them all), the texture just gets stretched to fill the larger destination rectangle. What could I be missing?
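For reference, this is roughly what I'm doing (simplified; `tileTexture` stands in for my 64x64 tile):

```csharp
spriteBatch.Begin(SpriteSortMode.Deferred, BlendState.AlphaBlend,
                  SamplerState.PointWrap, null, null);
// One draw call over the whole area - but the 64x64 tile just gets
// stretched to 640x320 instead of repeating.
spriteBatch.Draw(tileTexture, new Rectangle(0, 0, 640, 320), Color.White);
spriteBatch.End();
```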

SpriteBatch (internally just quads batched into a single VertexBuffer) will assign UVs from 0-1. No matter what you set in SamplerState, as long as the UVs stay within 0-1 (and what you want is 0-x in order to wrap), the texture just gets stretched (not sure if there is a way around that).

If you want to stay with SpriteBatch, you can render your desired "whatever" to a Texture2D first and just render that Texture2D later in the GUI, to spare some draw calls or expensive calculations. (This is what I did for the speech bubble in Robo Miner 2.)
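A rough sketch of that idea (sizes and names here are just examples):

```csharp
// Draw the tiles once into an off-screen render target ...
var panel = new RenderTarget2D(GraphicsDevice, 640, 320);
GraphicsDevice.SetRenderTarget(panel);
GraphicsDevice.Clear(Color.Transparent);
spriteBatch.Begin();
// ... all the individual tile draws go here, like before ...
spriteBatch.End();
GraphicsDevice.SetRenderTarget(null);

// ... afterwards the whole panel is a single draw call per frame.
spriteBatch.Begin();
spriteBatch.Draw(panel, new Vector2(10, 10), Color.White);
spriteBatch.End();
```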

Thanks reiti. So it seems MonoGame is different from the original XNA behaviour here, I guess. The link I was reading that said SamplerState would tile textures in XNA is here…

You can try setting the source rectangle width/height to something larger than the size of the texture. If for example the texture is 64 pixels, set it to 640 to tile it 10 times.
I am not sure if this will work, but it's worth a try.
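Something like this (an untested sketch, assuming a standalone 64x64 texture):

```csharp
spriteBatch.Begin(SpriteSortMode.Deferred, BlendState.AlphaBlend,
                  SamplerState.PointWrap, null, null);
// The source rectangle is 10x the 64-pixel texture, so the resulting UVs
// run from 0 to 10 and the wrap sampler repeats the texture 10 times.
spriteBatch.Draw(texture, destinationRectangle,
                 new Rectangle(0, 0, 640, 640), Color.White);
spriteBatch.End();
```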

Indeed, I think it should work if you set the source rectangle bigger than the texture; it should actually wrap. Try it.

SamplerState works the same in MonoGame and XNA - your problem is not the sampler state but how the UV coordinates are set. Behind the scenes the sprites are just quads rendered with a regular shader, using UVs to map pixels to the texture.

The source rectangle is part of a single UI texture, with images for other parts of the UI packed into the same texture, so that would not be an option even if it did work. Good idea though, if it wasn't for that issue.

Wrap does not work in that case anyway. Wrap just means that a UV of 1.5 is wrapped around to 0.5 (while clamp means it's clamped to 1). [That's actually a hardware limitation.]

…well, you could hardcode it in a shader - but I doubt that would be efficient :)

I believe the problem here is that XNA changes the UV coords on the target quad based on the difference between source and target rectangles. MonoGame does not; it has a fixed UV of 0-1 on the quad. Also, the wrapping should be based on the source rectangle, so when it goes over 1 it wraps back into that source rectangle, and any information outside that source rectangle should not matter.

Basically, MG should change the quad's UVs based on the difference in scale between target and source rectangles, as XNA does, and not just leave them fixed as it currently does.
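To put numbers on that (hypothetical values):

```csharp
// Tiling a 64x64 source over a 640x320 destination would need quad UVs
// that run past 1, e.g. on the four vertices:
float uMax = 640f / 64f;   // 10 horizontal repeats
float vMax = 320f / 64f;   // 5 vertical repeats
// UVs: (0,0), (uMax,0), (0,vMax), (uMax,vMax)
// SamplerState.PointWrap would then fold each repeat back into 0-1.
```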

General: each vertex has a UV attached as information (which is 0-1), and this UV is relative to the texture supplied to the shader - that's how the hardware/GPU works.

If you have a single texture with many sprites in it and you define a source rectangle which is only a part of that texture, what happens is that SpriteBatch does not supply 0-1 but - for example - 0-0.5 (when there are 4 images in the texture and you want to draw one of them).

So just imagine you have a 256x256 texture with 4 images in a 2x2 grid. If you want to draw the top-right one, you would supply 128,0,128,128 as the source rectangle. But what happens is that SpriteBatch builds quads with 4 vertices, and those vertices will have the UVs 0.5/0, 1/0, 0.5/0.5 and 1/0.5.

Each vertex is then processed in the vertex shader - they basically don't know of each other - and passed on to the pixel shader. The pixel shader reads the interpolated UV of every pixel (of the triangle) and samples the texture with it (each pixel falls anywhere between 0.5 and 1 in our example). The pixel has no idea where it started; it only knows its own UV, so how should it wrap somewhere in between? (This is actually done by a sampler, which has no idea about the vertex data anyway and just delivers a texel of the texture at the given UV coordinate - this sampler does the wrapping and has no information about start and end beyond the texture as a whole.)

TL;DR: wrap/clamp is only applied to the texture as a whole; it's the same in XNA.

You CAN (theoretically) write a shader where you pass additional information about the wrapping and do the wrapping in the pixel shader on your own - but I don't think that would be anywhere near efficient :)
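For illustration only, here is the math such a shader would have to do per pixel, written out in C# (in a real effect this would live in the pixel shader; all names are made up):

```csharp
// uv          = interpolated quad UV, which may run past 1 to tile
// atlasOrigin = top-left of the sub-image inside the atlas, in UV space
// atlasSize   = size of the sub-image, in UV space
Vector2 WrapIntoAtlasRegion(Vector2 uv, Vector2 atlasOrigin, Vector2 atlasSize)
{
    // Keep only the fractional part (what frac() does in HLSL),
    // wrapping e.g. 1.5 back to 0.5 ...
    var wrapped = new Vector2(uv.X - (float)System.Math.Floor(uv.X),
                              uv.Y - (float)System.Math.Floor(uv.Y));
    // ... then map the wrapped 0-1 coordinate back into the sub-image.
    return atlasOrigin + wrapped * atlasSize;
}
```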

So your options are:

  • Write your own shader to work with inner wrapping of a texture atlas (as sketched above)
  • Use single textures/images for your sprites (then wrapping works just fine - or should)
  • Draw the sprites individually (like you already do)

Ok, I guess right now my best option is to keep things as they are and maybe consider render targets or individual textures in the future to optimise. Right now I will just concentrate on getting new stuff added; optimisation can wait. It would be useful if MG had a method to draw textures directly to the backbuffer and skip the added overhead of using vertices etc. - it would be a lot faster for 2D UI work.

Thanks for your input on this matter :)

Using vertices is actually much faster, because the GPU is capable of calculating thousands of verts/pixels in parallel. It's basically no real issue (gfx-wise) whether you draw 10 or 10,000 sprites, because SpriteBatch will pack it into a single draw call for you - it doesn't get faster than that. A 2D library writing pixels sequentially into a buffer (and copying that buffer to the GPU every frame) would be way slower. (SpriteBatch only sends some bytes of vertex data each frame.)

Hello, I know this is an old topic, but I ran into a similar issue: I have different textures in the same PNG file (a big texture atlas) and want to draw a specific texture from it onto a destination rectangle, with the texture repeating itself. This works with LinearWrap if the texture is in its own file, but it doesn't work with the atlas, as mentioned in the answers above. My question is: couldn't we just create a new Texture2D using the existing atlas texture and the source rectangle? I haven't found any Texture2D constructor that takes another texture and a source rectangle, so I was wondering if there is some way to do that. I may just split my atlas into a lot of small PNGs if it's impossible, but that wouldn't be very good performance-wise.

You can use GetData to get the pixel data from the source rectangle and copy it to a new texture using SetData. The data will be duplicated, though.
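A minimal sketch of that, assuming the atlas uses SurfaceFormat.Color (`atlas` and `source` are placeholders):

```csharp
// Copy the source rectangle's pixels out of the atlas ...
var pixels = new Color[source.Width * source.Height];
atlas.GetData(0, source, pixels, 0, pixels.Length);

// ... into a standalone texture that the wrap sampler can tile.
var tile = new Texture2D(GraphicsDevice, source.Width, source.Height);
tile.SetData(pixels);
```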

If the textures in your atlas are all the same size, texture arrays might be the best solution. They only work with DirectX right now; for OpenGL there's a pull request on GitHub, but it's not merged yet.

There was a discussion about how to create texture arrays just recently: How to create a texture2d-array in monogame? - #8 by 3r0rXx
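Roughly how creating one looks (an untested sketch for DirectX, assuming four 64x64 Color slices):

```csharp
// One Texture2D with several array slices of identical size.
var textureArray = new Texture2D(GraphicsDevice, 64, 64,
                                 false, SurfaceFormat.Color, arraySize: 4);
var pixels = new Color[64 * 64];
for (int slice = 0; slice < 4; slice++)
{
    // ... fill `pixels` with the image for this slice ...
    textureArray.SetData(0, slice, null, pixels, 0, pixels.Length);
}
```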


Is this for SpriteBatch? SpriteBatch doesn’t support texture arrays, so you would need a custom shader for that.

Yup, it's for SpriteBatch, and no, my textures aren't the same size. Thanks for the insight anyway! I might try the duplication approach and see how it goes.