Render the Background of a Large 2D Map

Hello everybody,

I am working on a MonoGame project with a few people for university. It's basically a colony simulator similar to RimWorld.
The map is separated into tiles with a resolution of 32px × 32px.
In the GameRenderer I am rendering each layer to a RenderTarget using SpriteBatch and then just combining them in the final Draw call. I am also using texture atlases.

Now here's the problem:
Until now I have rendered the ground textures of the tiles on the map into one big RenderTarget (since they don't change), which I could then just render using a translation matrix. This worked well for small map sizes (128 × 128), but since our goal is more around 1024 × 1024 or even 2048 × 2048, I ran into the problem that the ground is simply not displayed anymore at those map sizes. I imagine this is because the RenderTarget is simply getting too big.

I tried to switch it so only the textures currently on screen get rendered one by one, which works great when zoomed in but gives massive spikes when zooming out.

So in short, I am trying to figure out the best way to draw the ground textures for large maps.
I am quite new to working with graphics and optimizing them for performance.

One idea I had was to create a collection of RenderTargets containing prerendered chunks of ground textures, and only draw those visible under the view matrix. But I don't know if this will impact memory usage negatively.
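For what it's worth, the chunk idea could look roughly like this in MonoGame. This is a minimal sketch with hypothetical names (ChunkTiles, DrawGround, the chunks array are all mine, not from the thread): prerender the ground into fixed-size chunk targets once, then each frame draw only the chunks intersecting the camera's view rectangle.

```csharp
// Hypothetical sketch: fixed-size ground chunks, drawing only the visible ones.
const int ChunkTiles = 64;                     // 64 x 64 tiles per chunk
const int TileSize = 32;                       // 32px tiles
const int ChunkPixels = ChunkTiles * TileSize; // 2048px, safe on most GPUs

RenderTarget2D[,] chunks;                      // prerendered once at map load

void DrawGround(SpriteBatch spriteBatch, Rectangle viewWorldRect, Matrix viewMatrix)
{
    // Clamp the visible chunk range to the map bounds.
    int x0 = Math.Max(0, viewWorldRect.Left / ChunkPixels);
    int y0 = Math.Max(0, viewWorldRect.Top / ChunkPixels);
    int x1 = Math.Min(chunks.GetLength(0) - 1, viewWorldRect.Right / ChunkPixels);
    int y1 = Math.Min(chunks.GetLength(1) - 1, viewWorldRect.Bottom / ChunkPixels);

    spriteBatch.Begin(transformMatrix: viewMatrix, samplerState: SamplerState.PointClamp);
    for (int y = y0; y <= y1; y++)
        for (int x = x0; x <= x1; x++)
            spriteBatch.Draw(chunks[x, y],
                new Vector2(x * ChunkPixels, y * ChunkPixels), Color.White);
    spriteBatch.End();
}
```

One caveat on memory: keeping *every* chunk resident at full resolution costs the same as one giant target (a 2048 × 2048-tile map at 32px is 65,536px per axis, roughly 17 GB at 4 bytes per pixel), so for the larger map sizes you would want to render chunks lazily and cache/evict them around the camera rather than prerender everything.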

If anybody has any input on how to work with this kind of issue, it would be greatly appreciated.

(The game is supposed to run on a PC with a dedicated graphics card.)

@willmotil I think you know this one

@Superschnizel Hi, welcome to the forums.

Isn’t your username a kind of humorous thing in Dutch?

Happy Coding!

One idea I had was to create a collection of RenderTargets containing prerendered chunks of ground textures, and only draw those visible under the view matrix. But I don't know if this will impact memory usage negatively.

For a static map I think render targets would work well either way, zoomed in or zoomed out.

However, mip maps are typically used for zooming out to avoid downsampling artifacts (minification). When tiles must be dynamic you can cheat by just using the default point-clamp filter, or point sampling in a shader, as the minification filter.
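As a concrete illustration (my sketch, not willmotil's example): MonoGame exposes both of these directly, mip maps via the `RenderTarget2D` constructor's mip-map flag and point sampling via `SamplerState.PointClamp`.

```csharp
// A ground render target with mip maps, so zooming out minifies cleanly.
// Assumes an existing GraphicsDevice; the 2048 x 2048 size is a placeholder.
var groundTarget = new RenderTarget2D(
    graphicsDevice, 2048, 2048,
    true /* mipMap */, SurfaceFormat.Color, DepthFormat.None);

// Point-clamp sampling avoids bilinear bleeding between adjacent tiles.
spriteBatch.Begin(samplerState: SamplerState.PointClamp, transformMatrix: view);
spriteBatch.Draw(groundTarget, Vector2.Zero, Color.White);
spriteBatch.End();
```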

I actually made a rough basic example for dynamic tile mapping recently (below); this, however, has nothing to do with a render-target solution, which for a static map is probably the better choice.
There are other solutions on that post as well, some more relatable to what you are doing.


I am working on a tile shader; you can take the source from here.
What is missing is the importer.

The shader takes two textures: the first is the TileMap, the second the TileAtlas.
The values in the TileMap are not pixels; the R,G values map to a tile in the TileAtlas.

The limitations are those of the GPU: on a lower-end card (DX9.1) you can have a map size of 2048×2048. On modern DX11 hardware you can go up to 16k×16k. The map values go from 0–255, therefore the TileAtlas can be a grid of up to 256×256 tiles. Here's how I set up a 4×2 map. The A value is used as transparency.

The size of the tiles depends on the hardware and how many tiles you want. Assuming the lowest-end hardware with a 2048 texture size for the TileAtlas and 32px tiles, you get a grid of 64×64, or 4096 individual tiles in total. That means only the values 0–63 will be valid in the TileMap texture. The example/test app uses an 8×4 TileAtlas.

You render the entire map as a single quad; performance shouldn't be an issue no matter how big a map you have. In the example I'm rendering it with a SpriteBatch.Draw() call.
With a map of 2048×2048 tiles and tiles of 32×32, the entire map will span 65k×65k pixels.
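Spelled out, the sizing arithmetic above works out as follows:

```csharp
// Atlas capacity and map span under the post's lowest-end assumptions.
int atlasTextureSize = 2048;                     // max texture size (DX9.1)
int tileSize = 32;                               // pixels per tile
int gridPerAxis = atlasTextureSize / tileSize;   // 2048 / 32 = 64
int totalAtlasTiles = gridPerAxis * gridPerAxis; // 64 * 64 = 4096 tiles

int mapTilesPerAxis = 2048;                      // TileMap texture size
int mapSpanPixels = mapTilesPerAxis * tileSize;  // 65,536 px (~65k) per axis
```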


The first is the TileMap, the second the TileAtlas.
You render the entire map in a single quad,

Nkast, is this based on using vertex texture fetching?

Or are you doing this purely in the pixel shader, with some sort of view bounding rectangle sent in for the tilemap image?

Or something else ?

There’s nothing fancy like that; all the work is done in the PS.
It’s a ps_4_0_level_9_1 shader that takes a vanilla VertexPositionColorTexture as input.
Evidence of this is that you can draw using the SpriteBatch as if the entire map were a single sprite.

The shader first samples the TileMap (POINT sampled), derives from the UV and the map size the position within the current tile, and combines the two into a new UV to sample from the TileAtlas.
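The lookup nkast describes could be sketched in HLSL roughly like this. This is my reconstruction from the description, not the actual source; all names (MapSize, AtlasGridSize, the samplers) are made up for illustration.

```hlsl
// TileMap: one texel per tile; R,G select a cell in TileAtlas (POINT sampled).
// TileAtlas: a uniform grid of tile images.
float2 MapSize;        // tiles across / down in the map, e.g. (2048, 2048)
float2 AtlasGridSize;  // tiles across / down in the atlas, e.g. (8, 4)

sampler2D TileMapSampler : register(s0);    // POINT filtering
sampler2D TileAtlasSampler : register(s1);

float4 PS(float2 uv : TEXCOORD0) : COLOR0
{
    // Which tile are we in, and where inside that tile (0..1)?
    float4 cell = tex2D(TileMapSampler, uv);       // R,G in 0..1 = index / 255
    float2 tileIndex = floor(cell.rg * 255 + 0.5); // back to an integer index
    float2 inTile = frac(uv * MapSize);            // position within the tile

    // Remap into the atlas: (tile index + position in tile) / grid size.
    float2 atlasUV = (tileIndex + inTile) / AtlasGridSize;
    float4 color = tex2D(TileAtlasSampler, atlasUV);
    color.a *= cell.a;                             // A value as transparency
    return color;
}
```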


There’s nothing fancy like that

Ah so modest… That is brilliant.

Hmm, the only drawback I see here though is that the source sprite sheet has to be a uniform grid, right?
There isn’t a way to use all 4 values to define a source rectangle in the sheet, not enough data, right?
Or is that possible?
I think render targets support higher bit depths, but I’m not sure about textures or how you could set that data.

Is it possible to bypass the first index-map texture altogether with an array of shorts?

That’s correct, but that’s the definition of a tilemap. Bigger tiles can be composed by combining smaller ones.

You can try different things if you want. For example, use a Texture3D if you want more tiles and utilize the B value, or attach more AtlasMap textures; a shader can have up to 4 textures (Reach profile), and modern cards I think can have up to 16 textures.

The current implementation already allows you to have huge maps with a 16k texture size on modern hardware, but sure, you can modify the shader to use an HDR texture. You could also pass additional info in another color texture.
A byte value and a range of 0–255 is enough imo; you are limited by the size of the atlas in how many tiles you can have.

The constant buffer? It’s kind of limited in size. You could use it together with the B value to add tile animation, for example. It really depends on what your specs are.