Using Tiled I want to make a game in the same graphical style as the Game Boy, where there’s a viewport resolution of 160 x 144. With 8 x 8 tiles, that works out to a map of 20 x 18 tiles.
These are the 8 x 8 tiles I made in Aseprite to be used as Tilesets in Tiled:
Arranging the tiles a certain way in Tiled I would like the tilemap to look like this:
Hmm, that looks right. You’re scaling by a whole number and you’re using PointClamp. I wonder if the resulting scale factors of BoxingViewportAdapter.GetScaleMatrix aren’t what you expect them to be. That’s the only explanation I can offer at the moment, sorry!
However, when I close that file in Visual Studio and run the application it works fine, so I usually ignore it. I’m throwing that out there in case it has anything to do with this, but I’m not sure.
I agree that a render target is probably a better way to do it, but what he’s doing should still work. If you look at his resulting image, the aspect ratio is off. The only thing that would cause this is the scale matrix he’s passing into his SpriteBatch in Begin.
That comes from the BoxingViewportAdapter, which is a part of MonoGame.Extended and I don’t really have much knowledge about it.
@ReadrX - Try something for me… stop using BoxingViewportAdapter for a bit. Manually set your screen resolution to something like 1024x768 and then replace your scale assignment with the following…
var scale = Matrix.CreateScale(4f);
You’ll have excess space on the right and bottom, but I suspect the pixels will be square.
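In context, that test might look something like this (a sketch assuming a typical MonoGame `Game` class; `_graphics` and `_spriteBatch` are the usual template field names, so adjust to your own):

```csharp
// Hard-code the back buffer instead of letting BoxingViewportAdapter pick the scale.
// 160 * 4 = 640 and 144 * 4 = 576, so a 4x scale fits inside 1024x768.
_graphics.PreferredBackBufferWidth = 1024;
_graphics.PreferredBackBufferHeight = 768;
_graphics.ApplyChanges();

// In Draw: a known-integer scale with point sampling.
var scale = Matrix.CreateScale(4f);
_spriteBatch.Begin(samplerState: SamplerState.PointClamp, transformMatrix: scale);
// ... draw the 160x144 scene here ...
_spriteBatch.End();
```

If the pixels come out square with this, the problem is in the scale matrix the adapter is producing, not in your drawing code.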
*Edit: Oh! I just noticed something. Not only are you passing scale into your SpriteBatch.Begin call, you’re passing it to the tiledMapRenderer.Draw call. Again, I don’t know what this does, but I would only expect scale to be required once. Either on the SpriteBatch.Begin call, which will scale everything that SpriteBatch draws, or on the map draw, which would presumably scale the things that it draws.
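In other words, something like one of these (a sketch; `tiledMapRenderer.Draw` accepting a view matrix is based on my reading of MonoGame.Extended, so double-check the signature against your version):

```csharp
// Option A (hypothetical): scale only in SpriteBatch.Begin.
// Everything this batch draws is scaled; don't also scale the map draw.
_spriteBatch.Begin(samplerState: SamplerState.PointClamp, transformMatrix: scale);
// ... sprite draws here ...
_spriteBatch.End();

// Option B (hypothetical): pass the matrix only to the map renderer,
// which draws on its own and doesn't go through the SpriteBatch at all.
tiledMapRenderer.Draw(scale);
```

Applying the same scale in both places would double it up for anything drawn through both paths.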
The Tiled map is not rendered using SpriteBatch. However, yes, it is still an orthographic projection, and yes the view matrix may have a scale.
The problem is due to how the texels are mapped to pixels during magnification, which happens whenever texels and pixels are not 1:1. As mentioned earlier, a work-around is to first render with texels and pixels 1:1 to a framebuffer (RenderTarget2D), then scale that up.
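A sketch of that work-around (field names and the 4x scale are illustrative):

```csharp
// Create once, e.g. in LoadContent: a target at the native 160x144 resolution.
RenderTarget2D _nativeTarget = new RenderTarget2D(GraphicsDevice, 160, 144);

// In Draw: render the whole scene at 1:1 into the target.
GraphicsDevice.SetRenderTarget(_nativeTarget);
GraphicsDevice.Clear(Color.Black);
// ... draw the map and sprites here at native resolution, no scale matrix ...
GraphicsDevice.SetRenderTarget(null);

// Then scale the finished frame up to the screen as a single sprite.
_spriteBatch.Begin(samplerState: SamplerState.PointClamp);
_spriteBatch.Draw(_nativeTarget, new Rectangle(0, 0, 160 * 4, 144 * 4), Color.White);
_spriteBatch.End();
```

Because the scene itself is drawn 1:1, only the final quad is magnified, and with an integer scale plus PointClamp every source pixel maps to an exact block of screen pixels.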
Dell, Windows 10, 64-bit operating system and x64-based processor, Intel® Core™ i5-3470 CPU @ 3.20GHz,16GB RAM, OpenGL, MonoGame 3.8.
Here’s the MonoGame Extended files concerning the Viewport Adapters, which will show what it does with scaling. BoxingViewportAdapter inherits from ScalingViewportAdapter, which has the code on how it scales.
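For reference, the core of ScalingViewportAdapter.GetScaleMatrix is roughly the following (paraphrased from memory, so check the linked source for the exact code):

```csharp
// Roughly what ScalingViewportAdapter does: scale the virtual resolution
// up to the actual viewport. Nothing forces these factors to be integers
// or even equal, which is where non-square pixels can sneak in.
public override Matrix GetScaleMatrix()
{
    var scaleX = (float)Viewport.Width / VirtualWidth;
    var scaleY = (float)Viewport.Height / VirtualHeight;
    return Matrix.CreateScale(scaleX, scaleY, 1.0f);
}
```

BoxingViewportAdapter letterboxes the viewport to preserve the aspect ratio, but the resulting scale factors can still be fractional, which is enough to distort pixel art.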
Hmmm interesting. As @LithiumToast pointed out, the tiledMapRenderer call doesn’t even take the SpriteBatch. I looked at the code and it looks like it’s drawing a textured quad with a cached texture that it scales using the matrix you pass in. It looks like it renders this with a default effect, or one that you can override. I’m not sure if that has its sampler state set to point clamp by default.
If the quad that gets drawn is the same size as the scaled viewport coordinates, I wouldn’t expect to see this… but the bottom line is that I don’t know as I don’t have experience with MonoGame.Extended.
I know that if you render the tiles yourself via the SpriteBatch, or if you draw at 1x scale to a render target and then draw that scaled up to the screen with a SpriteBatch as Lithium suggested, it will work, because I’ve done both of those things.
Yea I figured it was for optimization; however, there would have to be a way to control the sampler state on the effect used to render the quad, wouldn’t there?
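One thing that might be worth trying (purely a guess on my part, since I haven’t checked whether the renderer’s effect overrides the device’s sampler state):

```csharp
// Hypothetical: force point sampling on the device right before the map
// renderer draws its quad, in case its effect picks up whatever sampler
// state is currently bound on the GraphicsDevice.
GraphicsDevice.SamplerStates[0] = SamplerState.PointClamp;
tiledMapRenderer.Draw(viewportAdapter.GetScaleMatrix());
```

If the effect explicitly sets its own sampler in its passes, this would have no effect, but it’s a cheap experiment.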
I dug a bit through the code and saw where the effect was created, but didn’t see where any sampler state was set. I’m not even sure if that’s why his rendering was incorrect. A quad mapped to the screen space should yield the same aspect ratio. That looked off and so I think something’s up.
He’s solved his issue with a render target. I’m more curious than anything else, really
It has to do with the half-pixel offset that is added to the UV coordinates. It was originally added to correct for texture bleeding. The value used for the half-pixel is most likely not 100% correct: instead of adding a half-pixel offset to the UV coordinates, the position coordinates should probably have half a pixel subtracted. It is also likely that instead of applying the offset only to the top/left positions of the quad, it should apply to the bottom/right positions as well. When mapping samples of a texture to fragments (pixels), the problem gets worse when scaling up, because the UV mapping is not exactly right, so anything other than 1:1 won’t be pixel-perfect. Anyway, the half-pixel offset was removed in the master branch of MonoGame.Extended because people no longer seem to have issues with texture bleeding; MonoGame likely fixed that issue internally.
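To illustrate the texel-center convention behind this (this is a generic sketch, not MonoGame.Extended’s actual code):

```csharp
// To sample the texel at integer coordinates (x, y) exactly, the UV must
// point at the texel's center, not its corner.
static float U(int x, int textureWidth) => (x + 0.5f) / textureWidth;
static float V(int y, int textureHeight) => (y + 0.5f) / textureHeight;

// Example: an 8x8 tile starting at x = 16 in a 128-wide atlas.
// Corner-based UVs span [16/128, 24/128]. Adding a half-texel offset to
// only the left edge narrows and shifts the sampled region, and that
// error is multiplied by whatever scale factor you magnify by.
```

That’s why a mapping that looks fine at 1:1 visibly distorts once the quad is scaled up.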