So... VR/AR?

VR is fast approaching and may or may not make a decent dent in 2016/2017. Time will tell. I’m into first-person games, so VR is right up my alley.

As for XNA/MonoGame and my own games, I’ve barely scratched the surface with 3D and probably won’t really do anything with it until next year. However, in the meantime one of you boyz can start on the code to make switching over a lot easier when the time comes… :wink:

There is nothing special needed from MonoGame to render to Oculus or other VR goggles. You just render the scene twice and apply a shader to distort the image. This was covered fairly well a few years ago:

https://visualstudiomagazine.com/articles/2013/10/30/virtual-reality--in-dotnet-part-1.aspx
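At its simplest it amounts to something like the following (a minimal sketch, not a full sample: `_leftEye`/`_rightEye` are RenderTarget2Ds created elsewhere, `_distortionEffect` is a hypothetical lens-warp Effect, and `DrawScene`, `cameraPos`, `cameraTarget` and `projection` stand in for your own scene code):

```csharp
// Sketch of a side-by-side stereo pass inside a MonoGame Game class
// (the fields mentioned above are assumed to exist).
protected override void Draw(GameTime gameTime)
{
    // One view matrix per eye, offset horizontally by roughly the interpupillary distance.
    const float eyeSeparation = 0.065f;
    Matrix leftView  = Matrix.CreateLookAt(cameraPos - Vector3.Right * (eyeSeparation / 2), cameraTarget, Vector3.Up);
    Matrix rightView = Matrix.CreateLookAt(cameraPos + Vector3.Right * (eyeSeparation / 2), cameraTarget, Vector3.Up);

    // Render the scene twice, once into each eye's render target.
    GraphicsDevice.SetRenderTarget(_leftEye);
    GraphicsDevice.Clear(Color.Black);
    DrawScene(leftView, projection);

    GraphicsDevice.SetRenderTarget(_rightEye);
    GraphicsDevice.Clear(Color.Black);
    DrawScene(rightView, projection);

    // Composite both eyes side by side on the backbuffer through the distortion shader.
    GraphicsDevice.SetRenderTarget(null);
    GraphicsDevice.Clear(Color.Black);
    int halfWidth = GraphicsDevice.Viewport.Width / 2;
    int height = GraphicsDevice.Viewport.Height;

    _spriteBatch.Begin(SpriteSortMode.Immediate, BlendState.Opaque, null, null, null, _distortionEffect);
    _spriteBatch.Draw(_leftEye,  new Rectangle(0, 0, halfWidth, height), Color.White);
    _spriteBatch.Draw(_rightEye, new Rectangle(halfWidth, 0, halfWidth, height), Color.White);
    _spriteBatch.End();

    base.Draw(gameTime);
}
```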

There’s a lot of setup code described in that article that we could perhaps make a template for. But with the Oculus being priced out of range of normal people (over $1100 here in Australia), the number of people who can test or use it will be limited.

I could see us doing that after “Add project creation tool” by hach-que (MonoGame/MonoGame PR #4295 on GitHub) is merged.

Went ahead and added an issue for this.

The Visual Studio Magazine article is quite old by now, and many things regarding how to render VR have changed since then. It’s true that in the first iterations of the SDKs, everything boiled down to rendering distorted frames for each eye side by side.

But since Oculus SDK 6.0+ there’s been a turn of events: major graphics card vendors have agreed to step in, e.g. with the nVidia GameWorks VR SDK, so now the graphics driver is the one responsible for some operations, like the lens distortion.

I haven’t gone too deep into researching the new APIs, but apparently, for VR, there’s the concept of “layers”, which are special render targets with purpose-built VR features.

I would suggest taking a look at this project: http://oculuswrap.codeplex.com/ , which happens to be up to date with the latest Oculus SDK 8.0.

I happen to own a DK2 kit, so I would be able to test nightly builds and give feedback.

Willingness to test nightly builds is fine, but it needs someone with a Rift to develop the support in the first place. I don’t think any of the core devs have a Rift to do this, so it would have to be a community-driven feature at this stage.

I have a consumer version of the Rift, and I’ve got it working with MonoGame by PInvoking into a C++ dll I wrote using the Oculus SDK.

It is true what gitviper says: earlier versions of the Rift acted like a second monitor attached to your PC, so you didn’t need to do anything special to render to the Rift display.
In newer versions you use an SDK function to create swapchain rendertargets for each eye (or one for both eyes).
You render your scene into these rendertargets using proper view and projection matrices. When you’re done rendering, you submit the swapchains to the Rift. The Oculus runtime then applies lens distortion, chromatic correction, time warping etc.

I found that the simplest solution is to just render to MonoGame rendertargets normally and then copy those into the swapchain rendertargets created by the SDK.
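To give an idea of what that looks like from the C# side, here is a rough per-frame sketch. The NativeMethods entry points stand in for my C++ dll and their names and signatures are just illustrative, as are GetNativeTexturePointer, d3dContextPointer and the eye render targets:

```csharp
using System;
using System.Runtime.InteropServices;

// Illustrative P/Invoke surface for the native helper dll (names are made up for this sketch).
internal static class NativeMethods
{
    [DllImport("OculusRiftNative.dll")]
    public static extern void CopyEyeTexture(int eye, IntPtr d3dContext, IntPtr sourceTexture);

    [DllImport("OculusRiftNative.dll")]
    public static extern void SubmitFrame();
}

// Per frame: render each eye into an ordinary MonoGame RenderTarget2D, hand the
// native pointers to the dll so it can copy them into the SDK swapchain textures,
// then submit the frame to the Oculus runtime (which applies distortion, timewarp, etc.).
void DrawVrFrame()
{
    GraphicsDevice.SetRenderTarget(_leftEye);
    DrawScene(leftView, leftProjection);

    GraphicsDevice.SetRenderTarget(_rightEye);
    DrawScene(rightView, rightProjection);

    GraphicsDevice.SetRenderTarget(null);

    // GetNativeTexturePointer is whatever mechanism you use to reach
    // Texture._texture.NativePointer (see below).
    NativeMethods.CopyEyeTexture(0, d3dContextPointer, GetNativeTexturePointer(_leftEye));
    NativeMethods.CopyEyeTexture(1, d3dContextPointer, GetNativeTexturePointer(_rightEye));
    NativeMethods.SubmitFrame();
}
```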
I needed to access the following things from MonoGame/SharpDX to make this work:

GraphicsDevice._d3dDevice.NativePointer
GraphicsDevice._d3dContext.NativePointer
Texture._texture.NativePointer

The device is needed so the SDK can create the swapchains.
The context and the texture are needed to perform the copy operation inside the C++ dll.
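Those members are internal to MonoGame’s DirectX backend, so unless you build from modified source you need reflection to reach them. A rough sketch, using the member names listed above (they may differ between MonoGame versions):

```csharp
using System;
using System.Reflection;
using Microsoft.Xna.Framework.Graphics;

static class NativePointers
{
    // Digs out an internal field (e.g. "_d3dDevice") and then reads the public
    // SharpDX "NativePointer" property from the resulting object, all via reflection.
    static IntPtr GetNativePointer(object owner, string fieldName)
    {
        // Walk the type hierarchy in case the field is declared on a base class.
        for (Type type = owner.GetType(); type != null; type = type.BaseType)
        {
            FieldInfo field = type.GetField(fieldName, BindingFlags.Instance | BindingFlags.NonPublic);
            if (field == null)
                continue;

            object sharpDxObject = field.GetValue(owner);
            PropertyInfo nativePointer = sharpDxObject.GetType().GetProperty("NativePointer");
            return (IntPtr)nativePointer.GetValue(sharpDxObject, null);
        }
        throw new MissingFieldException(owner.GetType().Name, fieldName);
    }

    public static IntPtr Device(GraphicsDevice device)  { return GetNativePointer(device, "_d3dDevice"); }
    public static IntPtr Context(GraphicsDevice device) { return GetNativePointer(device, "_d3dContext"); }
    public static IntPtr Texture(Texture texture)       { return GetNativePointer(texture, "_texture"); }
}
```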

Here is code for the OculusRift C++ dll plus a MonoGame sample project that uses it:

There is also a binary version of the sample, so if you have an Oculus Rift you could give it a try. So far I’ve only tested it on my own PC.

Hi, I wanted to share some insights regarding the state of the art in VR and some personal experiences.

So far, we have:

  • Oculus and HTC Vive proprietary drivers for Windows
  • Google Cardboard/Daydream proprietary drivers for Android
  • OpenVR, which is an attempt at an open standard, but it is heavily biased towards HTC Vive/Steam
  • Microsoft Mixed Reality SDK: initially designed for HoloLens, now open for any vendor to write implementations; obviously it is exclusive to Windows and biased towards the Windows Store.

So, writing true cross platform VR engines is still… a nightmare.

My personal experience: I wrote a very simple game/demonstrator with OculusSharp + SharpDX.

The lesson I learned is that it’s a total paradigm shift in terms of engine design. Let me explain:

Most engines around are used to running logic updates at a fixed time step, and then letting the rendering play “catch up” with the logic. So you can have a game running the logic at 60 fps and then let the rendering adapt to the scene load, from 60 fps down to 15 fps if necessary.

Now, VR demands rendering at a fixed timestep, which can go from 60 fps to 90 fps, and to 120 fps in future headsets. The way some engines are solving this is by fixing the logic update timestep to that of the image rendering.

This presents all sorts of issues with compatibility between headsets, physics updates running at fixed timesteps, or rendering actually running faster than the logic.

My solution for the game/demonstrator was to create a custom “visual scene definition” in which, for each visual node, instead of having the classic “position, rotation, scale” values, I also have delta values. That way, I can update my logic at a fixed 30 fps, and the rendering can run as fast as it needs, because it can interpolate the positions of all visual nodes. The result is an easy-to-update, fixed-time logic step, and a faster-than-logic render step with smooth interpolated movement.
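In case it helps to visualise it, per visual node the interpolation boils down to keeping the previous and current logic state and blending between them in the render step (a reduced sketch; the naming is just mine, not from any engine):

```csharp
using Microsoft.Xna.Framework;

// Visual node keeping the last two logic states so the renderer can blend between them.
class VisualNode
{
    public Vector3 PreviousPosition;
    public Vector3 CurrentPosition;

    // Called from the fixed-rate logic update (e.g. 30 fps).
    public void LogicUpdate(Vector3 newPosition)
    {
        PreviousPosition = CurrentPosition;
        CurrentPosition = newPosition;
    }

    // Called from the render loop, as often as the headset demands (60/90/120 fps).
    // alpha = time elapsed since the last logic tick, divided by the logic tick length.
    public Vector3 InterpolatedPosition(float alpha)
    {
        return Vector3.Lerp(PreviousPosition, CurrentPosition, MathHelper.Clamp(alpha, 0f, 1f));
    }
}
```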

Sorry for the blob of text, just wanted to share my experiences :smiley:

This sort of decoupling is done in some normal games too, but honestly it’s rare that the CPU main logic thread is the bottleneck - the types of games that are one-sidedly bad for the CPU are often not the ones you play from first person. Plus you are then running 3 frames behind in terms of game logic, and VR especially benefits from responsiveness.

In general though, multi-resolution frame rates and async ops for different threads are super interesting.

What about support for ARKit? Any chance we can use it in the future?