Hi, I wanted to share some insights on the current state of the art in VR, along with some personal experiences.
So far, we have:
- Oculus and HTC Vive proprietary drivers for Windows
- Google Cardboard/Daydream proprietary drivers for Android
- OpenVR, which is an attempt at an open standard, but it is heavily biased towards HTC Vive/Steam
- Microsoft Mixed Reality SDK: initially designed for HoloLens, it is now open for any vendor to write implementations, though, obviously, it is exclusive to Windows and biased towards the Windows Store.
So, writing truly cross-platform VR engines is still… a nightmare.
My personal experience: I wrote a very simple game/demonstrator with OculusSharp + SharpDX.
The lesson I learned is that VR is a total paradigm shift in terms of engine design. Let me explain:
Most engines around are used to having logic updates at a fixed time step, and then letting the rendering play “catch up” with the logic. So you can have a game running the logic at 60fps and then let the rendering adapt to the scene load, from 60fps down to 15fps if necessary.
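A minimal sketch of that classic loop in C# (the names `UpdateLogic`, `Render` and so on are mine, not from any particular engine):

```csharp
using System.Diagnostics;

class FixedStepLoop
{
    const double LogicStep = 1.0 / 60.0;  // logic always ticks at 60fps

    static void Main()
    {
        var clock = Stopwatch.StartNew();
        double previous = clock.Elapsed.TotalSeconds;
        double accumulator = 0.0;
        bool running = true;

        while (running)
        {
            double now = clock.Elapsed.TotalSeconds;
            accumulator += now - previous;
            previous = now;

            // Logic "catches up" in fixed 1/60s steps...
            while (accumulator >= LogicStep)
            {
                UpdateLogic(LogicStep);
                accumulator -= LogicStep;
            }

            // ...while rendering runs as often as the scene load allows,
            // anywhere from 60fps down to 15fps.
            Render();
        }
    }

    // Stubs standing in for the real engine calls.
    static void UpdateLogic(double dt) { }
    static void Render() { }
}
```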
Now, VR demands rendering at a fixed timestep, which can go from 60fps, to 90fps, to 120fps in future headsets. The way some engines are solving this is by locking the logic update timestep to the one used for rendering.
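Reusing the hypothetical names from the sketch above, that approach boils down to one logic tick per rendered frame, with the timestep dictated by the headset:

```csharp
// dt now comes from the headset (1/60, 1/90, 1/120...), not from the engine.
double headsetStep = 1.0 / 90.0;

while (running)
{
    UpdateLogic(headsetStep);  // logic speed is tied to the display's refresh rate
    Render();                  // and must finish inside the same 1/90s budget
}
```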
This presents all sorts of issues: compatibility between headsets, physics updates that expect a fixed timestep, or rendering actually running faster than the logic.
My solution for the game/demonstrator was to create a custom “visual scene definition” in which each visual node, instead of having only the classic “position, rotation, scale” values, also has delta values. That way, I can update my logic at a fixed 30fps, and the rendering can run as fast as it needs, because it can interpolate the positions of all visual nodes. The result is an easy-to-update, fixed-timestep logic step and a faster-than-logic render step with smooth interpolated movement.
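Roughly, the idea looks like this (a simplified sketch; `VisualNode`, `alpha` and friends are illustrative names, not my actual engine code):

```csharp
using System.Numerics;
using System.Collections.Generic;

class VisualNode
{
    public Vector3 Position;       // state at the last 30fps logic tick
    public Vector3 PositionDelta;  // movement the current tick will apply
    // (rotation and scale get the same delta treatment; rotation would
    //  interpolate with Quaternion.Slerp instead of a plain lerp)

    // alpha in [0, 1]: how far the renderer is into the current logic tick.
    public Vector3 InterpolatedPosition(float alpha)
        => Position + PositionDelta * alpha;
}

class RenderLoopSketch
{
    const double LogicStep = 1.0 / 30.0;  // logic fixed at 30fps

    // Called as often as the renderer wants, independently of the logic rate.
    // 'accumulator' is the time elapsed since the last logic tick.
    static void Render(List<VisualNode> scene, double accumulator)
    {
        float alpha = (float)(accumulator / LogicStep);  // fraction of the tick elapsed
        foreach (var node in scene)
            DrawAt(node.InterpolatedPosition(alpha));    // smooth even above 60fps
    }

    static void DrawAt(Vector3 position) { }  // stub for the actual draw call
}
```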
Sorry for the blob of text, just wanted to share my experiences.