Converting a 2D platformer to 3D

Hi,

I’m currently working on a 2D platformer game. I’d quite like to add dynamic lights that properly interact with the map and cast shadows across the different map layers. I’ve had ideas for how to ‘fudge’ this in 2D (like using shaders to check pixels in other layers, but that would require a lot of texture swapping…), but none seem like a good solution. I would therefore like to convert my game logic from 2D to 3D, so that I can just implement lights as point lights in 3D space.

My question is therefore this: how do I go about changing my 2D game to 3D? It’s obviously not as easy as just changing every Vector2 to a Vector3, so what should the steps be for integrating 3D logic into my 2D game?

Thanks

I would almost always go with deferred lighting. It has several advantages:

  • If you have many small lights, it will perform much better.
  • You don’t need to build lighting support into all your shaders, since lighting happens as a separate, independent step at the end.
  • You barely have to change anything in your current setup.

You only need to output depth in addition to color (into the alpha channel, perhaps?), and maybe some additional things to make lighting more interesting if you feel like it (normals, specular, …).

The trickier subject with dynamic lighting is generally shadows.

It should be quite doable with deferred lighting. Every layer goes into a separate render target. When you draw a light into a render target, then for every drawn pixel you check one pixel in each of the shadow-casting layers. I don’t know how many layers you have, but if it’s too many, keep in mind that most lights probably don’t have that much range, so you can probably limit the shadow casting to just a few layers per light.
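
For illustration, the per-light part might look something like this in MonoGame (the Light type, whitePixel texture and shader parameter names are all invented for the sketch; the actual shadow test would happen in the light’s pixel shader):

```csharp
RenderTarget2D[] layerTargets;  // each map layer rendered to its own target earlier in the frame
Texture2D whitePixel;           // 1x1 white texture used to draw the light quad

void DrawLight(SpriteBatch spriteBatch, Effect lightEffect, Light light)
{
    // Only pass the few layers close enough to cast shadows on this light.
    for (int i = 0; i < light.ShadowLayerCount; i++)
        lightEffect.Parameters["ShadowLayer" + i].SetValue(layerTargets[light.FirstLayer + i]);

    // Draw a quad just covering the light's range; additive so overlapping lights add up.
    spriteBatch.Begin(blendState: BlendState.Additive, effect: lightEffect);
    spriteBatch.Draw(whitePixel, light.Bounds, light.Color);
    spriteBatch.End();
}
```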

Thanks for this reply. I’m glad you said this, because converting my project to 3D seemed like a massive pain, and although I wasn’t quite able to figure out how I could properly implement lights, I was sure there must be a smart way to do it in 2D.

May I ask what exactly deferred lighting is? I’ve heard the term thrown around, and after some searching I have a decent idea, but I can’t quite figure out what it would look like in the context of a MonoGame project.

Having each layer as a render target was my initial thought, so I might just crudely implement it and see what its performance is like.

Thanks for the suggestions

Just look up “deferred rendering”; there should be plenty of info out there.

The short version is:

  • You don’t include any lighting during normal rendering. You just output color and depth directly to a render target (and maybe normals, specular, …).

  • For every light you then draw a quad/circle/sphere into a lightmap render target: as small as possible to save performance, but large enough to cover the light’s range of influence. Here you sample the depth/normals from the previous rendering, do all the lighting calculations, and take shadow maps into account.

  • In the end you blend your unlit render target with the lightmap render target to get the final output.

EDIT: If everything in one layer is at the same depth, you obviously don’t need to output per-pixel depth.
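
To make those three steps concrete, here is a rough MonoGame-flavoured sketch. The effects, their parameter names, and the helpers (DrawWorldSprites, the lights list, whitePixel, the Light type) are all placeholders for things you’d write yourself:

```csharp
RenderTarget2D sceneTarget;     // unlit color (depth could go in the alpha channel)
RenderTarget2D lightMapTarget;  // how much light each pixel receives
Effect lightEffect;             // samples depth/normals and does the lighting maths
Effect combineEffect;           // blends the unlit scene with the lightmap
Texture2D whitePixel;           // 1x1 white texture for drawing light quads
List<Light> lights;             // Light is a made-up type holding Bounds/Color

void DrawFrame(GraphicsDevice device, SpriteBatch spriteBatch)
{
    // 1) Render the scene with no lighting at all.
    device.SetRenderTarget(sceneTarget);
    device.Clear(Color.Transparent);
    spriteBatch.Begin();
    DrawWorldSprites(spriteBatch);              // your existing 2D drawing code
    spriteBatch.End();

    // 2) Accumulate every light into the lightmap, additively so lights stack.
    device.SetRenderTarget(lightMapTarget);
    device.Clear(Color.Black);                  // black = receives no light
    lightEffect.Parameters["SceneTexture"].SetValue(sceneTarget);
    spriteBatch.Begin(blendState: BlendState.Additive, effect: lightEffect);
    foreach (var light in lights)
        spriteBatch.Draw(whitePixel, light.Bounds, light.Color); // quad covering the light's range
    spriteBatch.End();

    // 3) Combine: draw the unlit scene through an effect that reads the lightmap.
    device.SetRenderTarget(null);
    combineEffect.Parameters["LightMap"].SetValue(lightMapTarget);
    spriteBatch.Begin(effect: combineEffect);
    spriteBatch.Draw(sceneTarget, Vector2.Zero, Color.White);
    spriteBatch.End();
}
```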

This looks interesting. Would the method be altered at all by the fact that I’m achieving light by using a palette-swap shader? Because of this, creating a ‘lightmap render target’ isn’t going to work, since I don’t want to achieve lighting by blending, but rather by swapping discrete coloured pixels for other coloured pixels.

At the end of the day, for every pixel, you want a value/vector that tells you how much light the pixel receives. That’s what the lightmap render target is for. How you use this value is up to you. The standard way would be to multiply it with the unlit base color, but you can use it for palette swapping too.
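
In MonoGame terms, only the final pass changes for the palette-swap version: draw the unlit scene through your existing palette-swap effect and give it the lightmap as an extra texture. A minimal sketch (the effect and parameter name here are invented):

```csharp
// Final pass, sketched: the palette-swap shader reads the lightmap to decide
// which palette entry each pixel should be swapped to.
device.SetRenderTarget(null);
paletteSwapEffect.Parameters["LightMap"].SetValue(lightMapTarget);
spriteBatch.Begin(effect: paletteSwapEffect);
spriteBatch.Draw(sceneTarget, Vector2.Zero, Color.White);
spriteBatch.End();
```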

Sorry for resurrecting an old post, but I’ve been wondering how to actually implement a depth buffer. I have a render target, and I want to set each pixel to a value indicating that pixel’s depth. How can I do this? Using a pixel shader makes sense, but I can’t work out where I would use this shader, or what the shader would look like. Help would be appreciated.
Thanks

If you have a free channel in your render target (the alpha channel, maybe), you could put the depth there. Otherwise you need a second render target. You can use MRT (multiple render targets) to render to both the color and the depth target simultaneously. If you don’t use MRT, you need to draw your objects twice.
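
A minimal MRT sketch, assuming a pixel shader that writes color to its first output and depth to its second (the effect and its parameter name are placeholders):

```csharp
// Create both targets once, matching your scene resolution.
// Note: MRT needs the HiDef graphics profile.
var colorTarget = new RenderTarget2D(device, width, height);
var depthTarget = new RenderTarget2D(device, width, height, false, SurfaceFormat.Single, DepthFormat.None);

// Bind both at the same time: COLOR0 goes to colorTarget, COLOR1 to depthTarget.
device.SetRenderTargets(colorTarget, depthTarget);
device.Clear(Color.Transparent);

depthEffect.Parameters["SpriteDepth"].SetValue(layerDepth); // invented parameter name
spriteBatch.Begin(effect: depthEffect);
spriteBatch.Draw(sprite, position, Color.White);
spriteBatch.End();

device.SetRenderTarget(null); // back to the backbuffer
```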


This may be a stupid question, but I would like to use the alpha channel for depth, as I don’t use any transparency effects. However, I of course want to use sprites that need to be transparent around the outside. Is there a way I can use the alpha channel whilst still maintaining full transparency in sprites?

Edit: If this isn’t possible, do you recommend any tutorials on using MRT? I can’t seem to get it to work.

For MRT, here’s a shader that sends the depth to the second render target:
https://github.com/PixieCatSupreme/AnodyneSharp/blob/master/AnodyneSharp/AnodyneSharp.Shared/Content/effects/render_depth.fx

Used like this in our SpriteRenderer:
https://github.com/PixieCatSupreme/AnodyneSharp/blob/master/AnodyneSharp/AnodyneSharp.Shared/Drawing/SpriteDrawer.cs#L80


If you are working with 8 bits per channel, there are 256 possible alpha values. You could pick one specific alpha value to represent transparency; that still leaves you with 255 depth values.

Alternatively you could pick one specific color, out of the 256 x 256 x 256 possible colors, to represent transparency.

You won’t be able to use alpha blending this way, but you can just clip/discard pixels that meet your transparency condition.
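
If you go the reserved-alpha-value route, the remapping is simple; something like this (PackDepth is just an illustrative helper, not an existing API):

```csharp
// Map depth in [0, 1] to alpha in [1/255, 1], so alpha 0 stays reserved for
// "fully transparent" and the shader can discard exactly that value.
static float PackDepth(float depth)
{
    return MathHelper.Lerp(1f / 255f, 1f, MathHelper.Clamp(depth, 0f, 1f));
}
```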


This is great, thanks