I understand normal maps, and why they exist, and I was planning to use them…
But would it be possible to use heightmaps, now that computers are SO modern?
The idea would be for pixels to cast shadows beyond the pixel itself, and for each pixel to derive its own normal from neighboring pixel heights… hopefully improving the effect.
I don’t want to use 3d models IN my game, for some reason.
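Deriving a normal from neighboring heights is basically a finite-difference gradient. A minimal sketch of that idea (the function name and the `strength` knob are my own assumptions, not from any particular engine):

```python
import math

def normal_from_heightmap(height, x, y, strength=2.0):
    """Derive a unit normal at (x, y) from a 2D heightmap via central differences."""
    h, w = len(height), len(height[0])
    # Sample the four neighbors, clamping at the map edges.
    left  = height[y][max(x - 1, 0)]
    right = height[y][min(x + 1, w - 1)]
    up    = height[max(y - 1, 0)][x]
    down  = height[min(y + 1, h - 1)][x]
    # The gradient points uphill; the normal tilts against it.
    dx = (right - left) * strength
    dy = (down - up) * strength
    length = math.sqrt(dx * dx + dy * dy + 1.0)
    return (-dx / length, -dy / length, 1.0 / length)
```

On a flat region this yields (0, 0, 1), i.e. pointing straight out of the screen, which is the same encoding a normal map stores as RGB.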
If you’re already calculating normals, why not just use the normal map? Are you generating heightmaps at runtime?
(But yes, you should be able to do this. That’s what a normal map is… it’s just a representation of normals using RGB as the XYZ values. The normal map is pre-generated, but if you wanted to do it on the go there’s no reason you couldn’t.)
It’s because normal maps are only used to shade each pixel, based on its normal… They don’t cast shadows onto other pixels, the way a heightmap might be able to.
Oh I see, yea I missed that detail. Presumably you should be able to do this, because you’ll know whether any given pixel is obscured from the light source.
That could be interesting
Yea, the basic idea sounds nice, but I don’t know if it’s viable or practical…
It is a lot easier to generate heightmaps though, I feel… even by hand, roughly.
Honestly, it’s worth a test. I suspect that if you make some tradeoffs (i.e., don’t sample every pixel in the sprite you want to light) it could be reasonably fast. Assuming a rectangular sprite, you could generate a line between the light source and each corner of the sprite to produce an illumination value, then trace the line over the heightmap to see if there’s any obstructing geometry. I suspect walking the line would end up being similar to the Bresenham line algorithm, generating heightmap indexes along the way?
I’m completely spitballing here but honestly, that doesn’t sound too bad. The heightmap is static data with a fast lookup and tracing the line drastically limits the number of points you need to sample.
For the terrain itself, as long as your light source isn’t moving every frame, you could probably prebake the shadows on that.
I’m just spitballing here but it seems feasible. Like I said, it could be interesting
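The line-walk idea could look something like this in a quick sketch (a Bresenham march from the pixel toward the light, comparing the ray height against the heightmap; the function name and light representation are my own assumptions):

```python
def in_shadow(height, px, py, lx, ly, light_h):
    """Return True if pixel (px, py) is occluded from a light at (lx, ly, light_h).

    Both points must lie inside the heightmap grid for this sketch.
    """
    x, y = px, py
    dx, dy = abs(lx - px), abs(ly - py)
    sx = 1 if px < lx else -1
    sy = 1 if py < ly else -1
    err = dx - dy
    steps = max(dx, dy)
    start_h = height[py][px]
    i = 0
    while (x, y) != (lx, ly):
        # Standard Bresenham step toward the light.
        e2 = 2 * err
        if e2 > -dy:
            err -= dy
            x += sx
        if e2 < dx:
            err += dx
            y += sy
        i += 1
        if (x, y) == (lx, ly):
            break
        # Height of the light ray at this fraction of the way to the light.
        t = i / steps
        ray_h = start_h + (light_h - start_h) * t
        if height[y][x] > ray_h:
            return True  # obstructing geometry rises above the ray
    return False
```

Since the heightmap is a plain array lookup and the walk only visits cells along one line, the cost per sample is O(distance to the light), which matches the “drastically limits the points you sample” intuition above.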
Google parallax occlusion mapping; that’s what you want. It has existed for ages and is easy to implement. However, you will still need normals to know the amount of reflected light. There is also another variant using a prebaked averaged light vector that can even fake GI, but creating the required assets is slightly more difficult, whereas parallax occlusion works with just a heightmap and a normal map.
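For anyone curious, the core of parallax occlusion is just a ray march through the heightmap along the view direction, usually done in a fragment shader. Here is a rough CPU-side sketch of the idea (all names and the `depth_scale` parameter are illustrative assumptions, not from a specific implementation):

```python
def parallax_offset(height_at, u, v, view_dir, num_steps=32, depth_scale=0.05):
    """Step along the view ray in texture space until it dips below the surface.

    height_at(u, v) returns a height in [0, 1]; view_dir = (x, y, z) with z < 0
    looking into the surface. Returns the offset (u, v) to sample instead.
    """
    step_depth = 1.0 / num_steps
    # UV shift per step, projected from the view ray onto the surface plane.
    du = -view_dir[0] / view_dir[2] * depth_scale / num_steps
    dv = -view_dir[1] / view_dir[2] * depth_scale / num_steps
    ray_depth = 0.0
    while ray_depth < 1.0:
        surface_depth = 1.0 - height_at(u, v)  # treat the heightmap as depth
        if ray_depth >= surface_depth:
            break  # the ray has entered the surface; stop here
        u += du
        v += dv
        ray_depth += step_depth
    return u, v
```

A fully raised surface produces no offset (the ray hits immediately), while lower regions shift the sampled texel away from the viewer, which is what creates the depth illusion.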
I wonder if AI will be able to generate all these assets for me, if I just wait a few years…
“Sir at this point in the development cycle, we send everybody home, and wait for the technology to mature. It’s the wiser investment, rather than doing the process manually today… Either way in 4 years, we’ll have the results we need. No reason to pay anyone for that.”