How would you handle passing from one reflection cube to another?

This may be a dumb question but I have no clue about that.
Let’s say you have an octree with 8 cubes, each one having/being a cubemap/reflection probe used for reflections.
If you approach the center, how would you choose which one to use?
-Just abruptly switch to the one the camera’s or object’s center is in?
-Or, naïvely, blend the 8 cubemaps/probes by a distance factor, which seems like overkill to me?
-Or another technique?
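For the “abrupt” option, a minimal sketch of nearest-probe selection (the probe positions here are made-up illustration data, not anything from an actual engine):

```python
# Minimal sketch: choose the reflection probe whose center is nearest
# to the shaded point. Probe positions are made-up illustration data.
def nearest_probe(point, probe_centers):
    """Return the index of the closest probe center (by squared distance)."""
    def dist2(a, b):
        return sum((pa - pb) ** 2 for pa, pb in zip(a, b))
    return min(range(len(probe_centers)),
               key=lambda i: dist2(point, probe_centers[i]))

# Eight probes at the corners of a unit octree cell.
probes = [(x, y, z) for x in (0, 1) for y in (0, 1) for z in (0, 1)]
print(nearest_probe((0.1, 0.2, 0.9), probes))  # 1, the corner (0, 0, 1)
```

Switching like this pops visibly as the point crosses a cell boundary, which is exactly why the blending alternatives below come up.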

Here are some sources that helped me bring IBLs into my renderer:

Are you using deferred rendering? If yes then you could render all IBLs to your light buffer.
Each light should have its own falloff (sphere, box) and you should use alpha blending with them.
Just remember that alpha-blended lights will replace all lighting info already in your light buffer, so you should render the IBLs first and then the rest of the lights.
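The falloff driving that alpha could look like the sketch below: full strength inside an inner radius, fading to zero at the probe’s outer boundary. This is a hypothetical helper, not code from the thread; the radii and the linear fade are my assumptions:

```python
# Sketch (hypothetical helper): alpha for blending a sphere-falloff IBL
# into a deferred light buffer. 1.0 inside the inner radius, fading to
# 0.0 at the outer radius, so a probe rendered earlier gets smoothly
# overwritten near its boundary.
def sphere_falloff_alpha(dist, inner_radius, outer_radius):
    if dist <= inner_radius:
        return 1.0
    if dist >= outer_radius:
        return 0.0
    # Linear fade between the inner and outer radius.
    return 1.0 - (dist - inner_radius) / (outer_radius - inner_radius)

print(sphere_falloff_alpha(5.0, 4.0, 8.0))  # 0.75
```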

What does the term probe mean?

Well that’s a thing that captures (“probes”) a cube map of a scene that surrounds it.
I think Unity made the term “probe” so popular.

Oh, I see, after reading a little. Since this is precomputed, I would guess each cube should have a range, and as objects are passed to the scene to be drawn they fall within some preset region of the octree’s influence. Like, I guess they would say probes 1 and 4 are in this area, probes 1, 6, and 8 are in that area, etc. When drawing an object, I suppose the distance to the active probes affects the blending, probably proportionally.
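That proportional-blending guess could be written as inverse-distance weights normalized to sum to one. A sketch under those assumptions (this isn’t taken from the linked articles):

```python
# Sketch: blend weights for the active probes, proportional to inverse
# distance and normalized so they sum to 1. eps avoids division by zero
# when the point sits exactly on a probe center.
def probe_blend_weights(point, probe_centers, eps=1e-6):
    dists = [sum((a - b) ** 2 for a, b in zip(point, c)) ** 0.5
             for c in probe_centers]
    inv = [1.0 / (d + eps) for d in dists]
    total = sum(inv)
    return [w / total for w in inv]

# Equidistant between two probes -> equal weights.
weights = probe_blend_weights((0.5, 0.0, 0.0),
                              [(0.0, 0.0, 0.0), (1.0, 0.0, 0.0)])
print(weights)  # approximately [0.5, 0.5]
```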

That’s just a guess; I didn’t fully read both articles, I just glanced over them.

I’m not sure cubemapping like this is the best way to do it unless you’re a big company with lots of artists; building the light maps like this would take a lot of time.

My thanks to @Sizaar, I’ll have some interesting reading for quite a time :slight_smile:

@willmotil: I have probes implemented in my renderer which can use a predefined cubemap texture (lowest quality), or use the environment around them to render objects in realtime (medium or high quality if I add a “gamer” quality). As of now, it only uses opaque objects, but alpha maps etc. are not hard to add. The biggest challenge would be doing realtime shadows too, and rendering them to the cube, plus particles etc.

I just need a way to handle reflections on my spaceships and have them displayed at a decent quality for eye candy, with some degree of realtime updating, which will be needed later with ice asteroid fields.

Shadow maps seem easier with cubes, since the cube doesn’t have to rotate around lights, but they seem like they would be expensive with a lot of lights. I was thinking about trying to build a 360-degree projection matrix… that would simplify things. I was trying with a single shadow map, but I don’t have a clue how to fit an orthographic one to a frustum. The perspective version I was attacking almost works, but I would have to build a 360-degree projection matrix to really make it work right, ack.
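Worth noting: a single linear projection matrix can’t actually cover 360 degrees, which is why cube shadow maps render six 90-degree-FOV perspective views from the light, one per axis-aligned face. A sketch of the face selection done at lookup time:

```python
# Sketch: at lookup time, a cube shadow map picks the face by the
# largest component of the light-to-fragment direction; each face was
# rendered with its own 90-degree-FOV perspective view from the light.
def face_for_direction(d):
    """Return which cube face ('+x', '-y', ...) a direction falls on."""
    axis = max(range(3), key=lambda i: abs(d[i]))
    sign = "+" if d[axis] >= 0 else "-"
    return sign + "xyz"[axis]

print(face_for_direction((0.2, -0.9, 0.3)))  # '-y'
```

The per-face view matrices themselves follow each API’s cubemap orientation convention, which varies, so I’ve left them out here.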

Reflections off moving surfaces like ships, I dunno how they do that cheaply, if they do it cheaply at all.
I suppose if you have an environmental cubemap of the scene around your ship, then the surface normals on the ship itself should equate to UV coordinates on the map itself. However, that environment map would need to rotate with the ship to snap off shots of the scene, so it would be using the ship’s oriented forward/back/up/down/left/right to take them.
But the non-rotated surface normals on the ship would dictate the UV coordinates into the oriented map where it would pick up the reflected colors of the scene, and that wouldn’t be a contiguous mapping; different parts of the ship could point to the same UV coordinates. Kind of expensive with more than one ship, I think.
Though I’ve never tried that, it seems like it should work.

I’ve used the cube correction before. It was mostly okay; it works better when the blended area is either on the mostly rough side, or has high-frequency normals if it’s glossy.

For a space scene you could probably get away with using imposters for most things outside of a certain distance. If your ships have left/right symmetry, then you could cram 16 ships’ worth of imposters (for 5x5 angles of pitch x yaw) with rendered model-space normals (or 32 ships if you can live without normals) into a 2048x2048 texture.
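A quick back-of-envelope check of that atlas budget. The post doesn’t give a tile size, so the 64x64 tiles below are my assumption, chosen because they make the counts fit:

```python
# Back-of-envelope check of the imposter atlas claim, assuming 64x64
# tiles (the post doesn't state a tile size; this is a guess).
atlas_tiles = (2048 // 64) * (2048 // 64)   # 32 * 32 = 1024 tiles
angles = 5 * 5                              # 5x5 pitch x yaw views

with_normals = 16 * angles * 2   # a color tile + a normal tile per view
without_normals = 32 * angles    # color only

# Both variants need 800 tiles, which fits in the 1024 available.
print(with_normals, without_normals, atlas_tiles)  # 800 800 1024
```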

You’d probably need a different distance threshold for capital ships.

Cubes generally don’t need to be particularly high-res. I use 128px for mine in PBS and that’s not a problem, I wouldn’t even consider going higher than 256px.

IBL is all fudgery anyways. Unfortunately, it’s mandatory fudge.

The reflection of the view vector about the world-space normal gives the cubemap lookup direction (for specular; for diffuse it’s just the world-space normal at the lowest mip). There’s no need for the cubemap to rotate with the ship, only translate with it.
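That lookup direction is just the standard reflection formula, sketched here in plain Python (in a shader you’d use the built-in `reflect` intrinsic):

```python
# Sketch of the lookup described above: reflect the view vector about
# the world-space normal and use the result as the specular cubemap
# fetch direction.
def reflect(v, n):
    """Reflect vector v about unit normal n: r = v - 2*(v . n)*n."""
    d = sum(a * b for a, b in zip(v, n))
    return tuple(a - 2.0 * d * b for a, b in zip(v, n))

# A view ray hitting a floor (normal +y) at 45 degrees bounces upward.
print(reflect((1.0, -1.0, 0.0), (0.0, 1.0, 0.0)))  # (1.0, 1.0, 0.0)
```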

The only reason to rotate the environment renderer would be if you were using something awkward like dual-paraboloid maps, in order to try to conceal their obtrusive artifacts.