The challenges of creating a VR application

For a lot of people, VR seems quite simple: add the Oculus Rift plugin to your application and you’re done!

VR is much more complex than that. As we saw in the article “What is VR?”, the goal is for the user to feel subconsciously present in a virtual world, and this feeling of presence is very fragile.

Our brain has evolved over millions of years to perceive natural reality. Creating a virtual reality that our brain can accept subconsciously for the whole experience is the biggest challenge of VR.

VR as a New Medium

We should say a word of warning here before continuing: adapting existing applications to VR is difficult if they weren’t designed for this from the outset. VR is like radio or TV at their beginnings: radio was only used to broadcast opera, and TV was only used to broadcast theater plays. Slowly, people started to create content specifically tailored for these new media. Camera movement, zoom, and cuts created a new grammar for film, for instance.

The same will happen with VR! At first, there will be a lot of adaptations of existing applications that don’t take full advantage of presence, and they might even damage the field: adding VR will only marginally improve immersion thanks to the new display, while awkward controls and interactions unsuited to VR could make the experience poorer than it originally was.


Latency

The first enemy of VR is latency. If you move your head in the real world and the resulting image takes one second to appear, your brain will not accept that this image is related to the head movement. Worse, you will probably get sick. John Carmack reports that “something magical happens when your lag is less than 20 milliseconds: the world looks stable!”

Some researchers even advise a 4ms end-to-end latency from the moment you act to the moment the resulting image is displayed. To give you an idea of what this means, when your 3D application runs at 60 frames per second, there are about 16ms between frames. Add to that the latency of your input device, which can range from a few milliseconds to more than 100ms with the Kinect, and the latency of the display, which also ranges from a few milliseconds to more than 50ms for some consumer HMDs.
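The figures above can be put together in a quick back-of-the-envelope calculation. This is only a sketch with illustrative device numbers taken from the ranges mentioned above (the 90 fps figure is our own example of a fast setup):

```python
def motion_to_photon(input_ms, fps, display_ms, frames=1):
    """Rough end-to-end latency in ms: input + rendering + display."""
    frame_ms = 1000.0 / fps  # rendering time for one frame at this rate
    return input_ms + frames * frame_ms + display_ms

# Well-tuned setup: 2 ms tracker, one frame at 90 fps, 5 ms display panel.
fast = motion_to_photon(input_ms=2, fps=90, display_ms=5)     # ~18 ms

# Worst case from the text: Kinect-class input, 60 fps, slow HMD panel.
slow = motion_to_photon(input_ms=100, fps=60, display_ms=50)  # ~167 ms
```

The first setup squeezes under Carmack’s 20ms bar; the second is an order of magnitude above it, and no amount of clever rendering can hide a 100ms input device.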

And if you want to run your application in stereoscopy, keep in mind that the 3D engine needs to compute the left and right images for each frame. As an application developer, you can’t do much about the input and display latency, but you have to make sure that your application runs fast!
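Rendering in stereoscopy means positioning two virtual cameras every frame, offset from the head position by the interpupillary distance (IPD). A minimal sketch (the 64mm default IPD is our assumption, a commonly used average):

```python
def eye_positions(head, right_axis, ipd=0.064):
    """Offset the head position along the head's right axis to get each eye.

    head, right_axis: 3-component sequences (meters); right_axis must be
    unit length. Returns (left_eye, right_eye).
    """
    half = ipd / 2.0
    left = tuple(h - half * r for h, r in zip(head, right_axis))
    right = tuple(h + half * r for h, r in zip(head, right_axis))
    return left, right
```

Each of the two eye cameras then renders a full image, which is why stereoscopy roughly doubles the per-frame rendering work.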

For more information about latency, we recommend these great articles by Michael Abrash and John Carmack: “Latency, the sine qua non of AR and VR” and “Latency mitigation strategies.”


Interactions

Interacting with a 3D/VR world is more complex than it seems. There are even scientific conferences dedicated solely to this topic: 3D User Interfaces (3DUI) and Spatial User Interfaces (SUI). The first problem is that you don’t have access to a regular keyboard or mouse. The second problem is that interacting in 3D is a hard ergonomic problem!

There are multiple ways of interacting in VR:

– Navigation in an environment,

– Selecting and manipulating an object,

– Menus and graphical user interfaces (GUIs),

– Entering numbers and text,

– …

All those tasks can be accomplished in many different ways, depending on the application, the hardware, and even the user! Think of navigation: it can be done with a simple joystick, by pointing at a destination in space with a hand-held device, by saying the destination out loud, by walking in place, by gestures, by looking at the destination, by picking it on a map…

Knowing which interaction technique to use in which context requires strong expertise.
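To make one such technique concrete: selection by pointing with a hand-held device usually reduces to casting a ray from the device and testing it against each object’s bounding sphere. A minimal sketch of that test (function and argument names are ours):

```python
def ray_hits_sphere(origin, direction, center, radius):
    """True if a pointing ray hits a sphere bounding a selectable object.

    direction must be normalized; all vectors are 3-component sequences.
    """
    # Vector from the ray origin to the sphere center.
    oc = [c - o for c, o in zip(center, origin)]
    # Distance along the ray to the point closest to the center.
    t = sum(a * b for a, b in zip(oc, direction))
    if t < 0:
        return False  # the object is behind the user
    closest = [o + t * d for o, d in zip(origin, direction)]
    dist_sq = sum((c - p) ** 2 for c, p in zip(center, closest))
    return dist_sq <= radius * radius
```

Even this simple technique hides ergonomic choices: ray length, what to do when several objects are hit, how to give feedback on what is currently pointed at, and so on.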


Avatars

When you are using a head-mounted display (HMD), you are completely immersed in the virtual world and can no longer see your own body. It is therefore very important to display a virtual representation of yourself and of others, called avatars. They can be realistic, look like yourself, or be completely different.

If your VR system offers full-body tracking and you want an avatar with exactly the same dimensions as you, this can be a simple task. But if your VR system has only a few trackers and you want an avatar that is taller or shorter than you, it can be difficult to extrapolate the positions of the untracked limbs and to adapt your body posture to a different virtual body.
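A crude illustration of the retargeting problem: even the simplest approach, uniformly scaling tracked positions by the ratio of avatar height to user height, moves every joint, and it still says nothing about the untracked limbs (real systems use inverse kinematics per limb; this is only a sketch):

```python
def retarget(tracked_pos, user_height, avatar_height):
    """Uniformly scale a tracked joint position onto a differently sized avatar."""
    scale = avatar_height / user_height
    return tuple(c * scale for c in tracked_pos)

# A 1.70 m user's head tracked at height 1.60 m, driving a 2.00 m avatar:
head = retarget((0.0, 1.60, 0.0), 1.70, 2.00)  # y becomes ~1.88 m
```

Note that the hands are scaled too, so the avatar’s reach no longer matches the user’s real reach, which is exactly the kind of mismatch that can disturb interactions.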


Collaboration

Once you are in a virtual world, you very quickly want to share the experience rather than be alone. VR is particularly suited for collaborative work with people physically in the same space or in completely different parts of the world! When creating a collaborative application, you need to make sure that each VR application is connected to a server, that all data are synchronized securely between the applications, and that each user can see the avatars of the others.

You also need to pay particular attention to the interactions: for example, if one user is manipulating an object, you might want to prevent other users from manipulating the same object.
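One common way to do this is an ownership token per shared object: the first user to grab an object becomes its owner, and other users are refused until it is released. A sketch of the idea (class and method names are ours):

```python
class SharedObject:
    """Grant exclusive manipulation rights to one user at a time."""

    def __init__(self):
        self.owner = None  # user currently allowed to manipulate the object

    def try_grab(self, user):
        """Return True if `user` now owns the object, False if someone else does."""
        if self.owner is None or self.owner == user:
            self.owner = user
            return True
        return False

    def release(self, user):
        """Give up ownership; only the current owner can release."""
        if self.owner == user:
            self.owner = None
```

In a real networked application, the server would arbitrate the grabs, so that two clients grabbing the same object at the same instant cannot both succeed.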

High-end VR systems

There are specific issues when dealing with high-end VR systems such as CAVEs™, domes, workbenches, etc.:

– computing the correct perspective depending on the position of the user,

– displaying different types of stereoscopy (active, passive, autostereoscopic, side-by-side etc.),

– managing multi-screens and/or multiple graphics cards,

– managing multiple computers and the synchronization of the application (framelock), of the display of new images (swaplock), and of the display of stereoscopic images (genlock),

– managing the warping and blending of several projectors projecting on a non-planar surface,

– haptic force feedback,

– and many other issues.

High-end VR systems require a lot of specific experience to be managed correctly.
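To make the first point of the list concrete: with a fixed physical screen, the virtual camera frustum must stay anchored to the screen while the user’s head moves, which produces an asymmetric (off-axis) frustum. A simplified sketch for a screen lying in a plane of constant z, with the viewer looking toward –z (the coordinate conventions and names are ours):

```python
def off_axis_frustum(eye, left, right, bottom, top, screen_z, near):
    """Near-plane frustum extents for a screen fixed in the plane z = screen_z.

    eye is the tracked head position; left/right/bottom/top are the physical
    screen edges, all in the same units, with the viewer at z > screen_z.
    """
    ex, ey, ez = eye
    dist = ez - screen_z   # eye-to-screen distance
    scale = near / dist    # project the screen edges onto the near plane
    return ((left - ex) * scale, (right - ex) * scale,
            (bottom - ey) * scale, (top - ey) * scale)
```

The four extents then feed a standard projection call such as glFrustum(left, right, bottom, top, near, far): when the head is centered the frustum is symmetric, and it becomes asymmetric as soon as the head moves off-axis.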

Running on different VR hardware

When your application is finished, you might want to run it on different hardware: you might have created your application for the Oculus Rift and want to try it with another HMD, or maybe a stereoscopic wall or a CAVE.

You then have to modify your application to take into account the new trackers, the new screens (by managing the virtual cameras and viewports), and the cluster synchronization. You will also probably need to modify the interactions, because the new system doesn’t necessarily have perfectly equivalent hardware or an equivalent tracking volume.

Deploying your application on different VR systems can prove to be a very difficult and time-consuming task.
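One common way to contain this porting cost is to hide the devices behind logical actions, bound per system in a configuration layer, so that the application code never names a concrete device (libraries such as VRPN abstract trackers and buttons in this spirit). A toy sketch of the idea (class and names are ours):

```python
class ActionMap:
    """Map logical actions to device callbacks, so the app never names devices."""

    def __init__(self):
        self._bindings = {}

    def bind(self, action, read_fn):
        """Bind a logical action name to a zero-argument device-read function."""
        self._bindings[action] = read_fn

    def read(self, action):
        """Poll the device currently bound to this action."""
        return self._bindings[action]()

# Rift-style binding: "select" comes from a controller trigger (stubbed here).
rift = ActionMap()
rift.bind("select", lambda: True)

# CAVE-style binding: same logical action, read from a wand button instead.
cave = ActionMap()
cave.bind("select", lambda: False)
```

Porting then means writing a new set of bindings rather than touching the application logic, although interaction techniques themselves may still need redesign.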

A Coherent World, Not Necessarily a Realistic One

We have seen that perceptive presence requires you to fool your senses in the most realistic way. Cognitive presence — fooling the mind, not the senses — results from a sense that your actions have effects on the virtual environment, and that these events are credible. This means that you must believe in the “rules” of the simulation. For this, you must make sure that your world is coherent, not necessarily realistic. If a player can grab a particular glass, for example, but can’t grab another one, it will break presence because the rules are not consistent. Once cognitive presence is broken, it’s very difficult to “fix” it. The player is constantly reminded that the simulation is not real, and it will take some time to accept it again as reality.

If you’re targeting a visually realistic environment, it is more likely to generate breaks in presence, because your brain will expect many things that we are not yet able to achieve technically: perfect physics, sound, force feedback so that your hand doesn’t penetrate objects, objects breaking into pieces, smell, etc. A non-realistic environment lowers the expectation that everything should be perfect, resulting in a more consistent feeling of presence.

If you manage to achieve cognitive presence and fool the mind of your user, the events of the simulation will affect their body. If an attractive character looks a shy person in the eyes, their heart rate might increase, they might blush, etc. People with a fear of public speaking will react with anxiety when speaking to a virtual audience.

This is why one of the most immersive applications was “Verdun 1916-Time Machine.” It fools many senses at once: vision, smell, touch… But the most important point is that, by design of the experience, the interactions are extremely simple: you can only rotate your head, because you play a wounded soldier.

Given that extreme limitation, it’s much easier to keep the player from experiencing a break in presence. You can’t move your hands, so they cannot penetrate objects; you aren’t forced to navigate with an unnatural joystick. It has been reported several times that some people smiled at the virtual soldier who came to save the player in the simulation!


Conclusion

We have seen that creating a good VR application takes more than a stereoscopic camera that rotates with a single tracker. When creating a VR application, you have to make sure that:

– latency is minimal,

– interactions are appropriate for the application, the hardware, and the user,

– avatars behave correctly,

– collaboration is efficient,

– and above all that presence is maintained for the whole experience!

If needed, we provide professional services to help you develop your VR applications!