MiddleVR User Guide

MiddleVR

1: What is MiddleVR?

MiddleVR is a virtual reality (VR) middleware.

It has two main features:

The Wikipedia definition of middleware is: “Middleware is computer software that connects software components or people and their applications.”

Figure 1: MiddleVR

At its core, MiddleVR is a library handling all aspects of a VR application: input devices, stereoscopy, clustering, interactions. It offers C++ and C# APIs (application programming interfaces), and a graphical user interface to configure a VR system.

MiddleVR is generic: it does not depend on any particular 3D engine and it is designed to be used with many different 3D applications.

Each application that wants to take advantage of MiddleVR must be adapted: the host application must be VR-aware to take full advantage of MiddleVR's capabilities. We have designed MiddleVR to be as easy to integrate as possible.

2: Installing MiddleVR

2.1: Requirements

2.1.1: Operating system

MiddleVR requires Windows Vista, 7 or 8 (32 or 64 bits) with the latest Service Packs.

You also need:

2.1.2: Devices

MiddleVR has native support for the following hardware:

If your device is not yet supported, contact us to evaluate integration options.

2.1.3: Unity

MiddleVR is compatible with Unity 4.2 and above, including Unity 5.

In order to use the OpenGL quad-buffer (active stereoscopy) feature, the “Force OpenGL window” mode, or any HMD, you need:

Note: MiddleVR 1.6 does not support Unity 3.5, 4.0 and 4.1 anymore.

2.1.4: Requirements for using a 3D monitor or 3D projector

MiddleVR supports OpenGL quad-buffer (active stereoscopy) output. Your graphics card must support this 3D mode. This mode is only supported by professional graphics cards such as an NVidia Quadro or an ATI FireGL Pro. See section Stereoscopy.

In order to use the OpenGL quad-buffer (active stereoscopy) feature, the “Force OpenGL window” mode, or any HMD, you need:

2.1.5: Requirements for using a 3D TV

MiddleVR is compatible with any 3D TV supporting side-by-side 3D input.

Contact us for more information.

2.1.6: Requirements for using a Head-Mounted Display (HMD)

MiddleVR is compatible with any dual-input HMD and any HMD that supports OpenGL quad-buffer stereo or side-by-side stereo.

In order to use any HMD, you need:

Contact us for more information.

2.1.7: Stereoscopy - S3D

2.1.7.1: Passive stereoscopy

MiddleVR is compatible with any passive stereoscopy system.

2.1.7.2: Active stereoscopy

For active stereoscopy (Quad-Buffer) in Unity you need:

Compatible cards:

Incompatible cards:

If your card is not in the list, contact us. (source: http://academic.cleardefinition.com/2011/08/17/nvidia-gpus-and-product-series-cross-reference/)

Contact us for more information.

2.1.8: Other hardware

MiddleVR has been successfully tested with a Matrox DualHead 2GO.

2.2: Installing

Run the MiddleVR installer. The following window will appear:

Check the license agreement and press “INSTALL”.

When installation is done, you can choose to read the “ReadMe” file, directly “Run” MiddleVR Config or close the installer by pressing the “Finish” button:

If a previous installation of MiddleVR is present, you will be asked whether to remove the old MiddleVR license, preferences and logs. Be sure to keep a copy of your license file if you choose to press “Yes”:

Note: You must restart Unity after installing MiddleVR, so that it takes the new PATH into account.

Note: You must restart MiddleVR Config and Unity after manually modifying the PATH.

2.3: License

2.3.1: Trial

When you first start MiddleVR, you will see the following screen:

If you don’t have a valid license, you can run the Free version of MiddleVR Config by pressing the “Use ‘Free’ edition” button.

You can get a valid 30-day trial license by pressing the “Start 30 days ‘Pro’ trial”, “Start 30 days ‘Academic’ trial” or “Start 30 days ‘HMD’ trial” button.

2.3.2: Installing a license automatically with an activation key

After having received an “activation key” from us, you can automatically download and install a permanent or temporary license by pressing the “Install file license automatically…” button.

Note: Before trying to get a valid license, make sure that you have received an “activation key”.

After pressing the “Install file license automatically…” button, you will see this window:

Automatic license activation.

Enter your activation key and the license should be automatically obtained.

Do not proceed this way for floating/network licenses; see the dedicated section.

2.3.3: Getting a license file manually

If you get an error, you can still try to activate your key manually by clicking on the “Get file license manually…” button:

Manual license activation.

You can then get a valid license file either through the web, or via an automatic e-mail. Choose the option that best suits your situation.

The license file will then need to be opened via the “Open license file…” button as explained in the “Loading the license file” section, except for floating/network licenses.

2.3.3.1: Getting a valid license file via the Web

When you press the “License file via Web” button, you will get the following window:

Getting a license via Web.

The “HostID” is a string that uniquely identifies your computer. This string will be used to generate a valid license for this particular computer.

Access the license activation website by pressing the first button or open your favorite web browser to the following address: http://license.middlevr.com/.

You should then enter your activation key:

Copy the HostID to the clipboard and paste it into the website.

Once you have entered a valid activation key, you will get access to the following form:

Simply copy the “Ethernet” and “Hard Disk” contents into the relevant text inputs and press the “Activate” button. Once the license has been generated, you can download the license file and store it anywhere.

Note: You can download the license file again from the license website simply by entering the same activation key. You don’t need to enter the HostID again.

Next, read the “Loading the license file” section.

2.3.3.2: License via e-mail

After pressing the “License via e-mail” button you should get the following form:

After entering your activation key, your form should look like this:

The content of the text-box is automatically filled with the content of the e-mail that you should send to get your license file.

After pressing the “Send by e-mail” button, if you have correctly configured your default e-mail application, MiddleVR should create an e-mail that you just have to send:

Soon after, you should receive an e-mail containing your license. Store the attached license file anywhere.

You can also send an e-mail manually, for example from a webmail client. Simply copy the content of the text-box by pressing the “Only copy to clipboard” button and paste the text into the body of the e-mail. The subject of the e-mail does not matter. The e-mail recipient should be: .

2.3.3.3: Loading the license file

Once you have stored your license file, you can load it via the License > Open license file... menu.

Locate the license file that you’ve just downloaded. You should get the following message:

At this point you can remove or back up the downloaded license file:

MiddleVR will automatically copy this file to the %appdata%/MiddleVR folder, typically: C:\Users\<current user>\AppData\Roaming\MiddleVR\.

Note: If you log in as a different user, MiddleVR will not find the license. You can go to the License > Open license file... menu and load the license file again, or manually copy it to the %appdata%/MiddleVR folder corresponding to the current user.

Note: Do not proceed this way for floating/network licenses; see the dedicated section.

2.3.4: Floating / network licenses

2.3.4.1: License server

When dealing with a floating/network license, don’t load the license file into the configuration editor. The following steps describe how to easily install a license server on Windows using the LM-X End-user Tools.

  1. Download and run lmx-enduser-tools\win32\x86.msi.

  2. During the installation process, make sure that “Install LM-X server” is checked.

    Browse to find our liblmxvendor.dll in the MiddleVR installation path /bin/licensing.

    Keep “Install LM-X license server as a service” checked.

After the installation, open the configuration file lmx-serv.cfg from the LM-X tool installation folder (for example “C:\Program Files (x86)\X-Formation\LM-X End-user Tools 4.5.4\lmx-serv.cfg”).

Set the value LICENSE_FILE to the path leading to your floating/network license file.

You can change the TCP_LISTEN_PORT if needed.
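For illustration, the relevant lines of lmx-serv.cfg might look like the following sketch (the values are hypothetical; the comments inside the configuration file document the authoritative syntax):

# Hypothetical lmx-serv.cfg excerpt
LICENSE_FILE = C:\Licenses\MiddleVR-floating.lic
TCP_LISTEN_PORT = 6200
REMOTE_ACCESS_PASSWORD = mypassword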

Make sure to open this port (TCP and UDP) in your firewall, both on the license server machine and the client machines.

If there are any problems running the license server, the log file will indicate the cause of the problem. You can find the log file at the path indicated by the LOG_FILE value in the configuration file. If you still cannot find it, check that the path is correct.

You can also manage the license server through a web GUI:

Open a web browser at the license server machine’s IP address and license server port, for example: http://localhost:6200/.

The administrator password is the one defined in the license server configuration file at REMOTE_ACCESS_PASSWORD.

2.3.4.2: Configuring clients

On the client machines, you can either:

2.3.5: Storing your license in a different folder

You can store your license in a different folder and set the LMX_LICENSE_PATH environment variable to the full path of the license (for example, C:\Licenses\MiddleVR.lic).

3: Tutorials

3.1: Tutorial - Running MiddleVR demo application “Shadow”

In this tutorial, you will:

3.1.1: Requirements

3.1.2: Download and run the “Shadow” demo

The “Shadow” demo can be downloaded from our website: http://www.middlevr.com/demos.

Unzip the package and run the Shadow.exe application.

If MiddleVR is successfully installed, you will be able to navigate in the scene:

If you can’t navigate, please see the Troubleshooting installation section of our Knowledge base or contact our support team.

You can run this application in any VR system. See this tutorial.

If a Wand is correctly configured, you can grab the objects and move them around, interact with the YouTube video and the webpage in the TV on the right, and navigate to the kitchen or bathroom.

3.2: Tutorial - Using MiddleVR in Unity

In this tutorial you will learn:

3.2.1: Requirements

3.2.2: Add MiddleVR to your Unity project

3.2.2.1: Import the MiddleVR package

(If you are upgrading an old Unity project, make sure to read this article: Upgrade MiddleVR Unity Package.)

MiddleVR is split into two parts:

To import the MiddleVR package, open the Assets menu, then Import Package and Custom Package:

You will find the MiddleVR UnityPackage in the data folder of your MiddleVR installation, typically C:\Program Files (x86)\MiddleVR\data:

Open the MiddleVR.unitypackage file.

This will open a new Unity window:

Simply click “Import”.

The package will then be imported, and two folders will be added: MiddleVR and Plugins:

3.2.2.2: Add the VR manager to your scene

Importing the package is not sufficient. You also need to add an important component to your project that will manage all the VR aspects: the VR manager.

Open the MiddleVR folder in the Project tab.

Drag and drop the VRManager prefab to the Hierarchy tab of your project:

After pressing the Unity play button, you can now navigate in the scene with your mouse:

Note: This won’t work if you have ‘Maximize on Play’ activated. See this article in the knowledge base.

3.2.2.4: Export your application

In Unity, open the menu File > Build Settings...

Select x86 or x86_64 depending on your needs.

Make sure that the Platform is PC, Mac & Linux Standalone, then set the Target Platform to Windows.

Press Build and choose a location for your application.

3.2.2.5: Run your application

There are two ways to run your application:

3.2.2.5.1: MiddleVR configurator

The MiddleVR configuration tool allows you to create the configuration for any VR system.

It also allows you to manage and run all your VR applications in the Simulations tab.

MiddleVR configurator - Simulations tab - Quick links.
MiddleVR configurator - Simulations tab - General simulations.

You can add your application by clicking the + button, or by dragging and dropping the executable.

There are a lot of predefined configurations that you can use; make sure to explore them all.

The default configuration that allows you to navigate with the mouse can be found in Misc/Default.vrx.

Select your application in the list on the left, then select any configuration on the right.

Pressing the Run button will run your application with the selected configuration by executing the Current command line.

3.3: Tutorial - Using the Oculus Rift

In this tutorial you will learn:

3.3.1: Requirements

3.3.2: Add MiddleVR to your Unity project

Start by adding the MiddleVR package to your Unity project as described in a previous tutorial.

Note: You should not import the official Oculus Rift Unity package.

3.3.3: Change the configuration file

The next step is to specify a different configuration file for the VR Manager. The default configuration simply allows you to navigate in the scene with the mouse.

MiddleVR comes with a lot of predefined configurations for multiple VR systems (Oculus Rift, Microsoft Kinect, Leap Motion, zSpace, immersive cubes, 3DTVs, etc.).

You can also create your own VR system configuration.

Here we will simply use the predefined configuration for the Oculus Rift.

Click on the VRManager in your Unity project to display its information in the Unity inspector:

Notice that the configuration file is currently set to

C:/Program Files (x86)/MiddleVR/data/Config/Misc/Default.vrx

All predefined configurations are in the folder:

C:/Program Files (x86)/MiddleVR/data/Config/

Now click the Pick configuration file button:

Go into the HMD folder and select the configuration matching your hardware: HMD-Oculus-Rift-CV1.vrx if you have an Oculus Rift CV1, HMD-Oculus-Rift-DK2.vrx if you have an Oculus Rift DK2, etc.

Press play: you should be able to look around using your Oculus Rift!

If you want to run your application after you have built/exported it, make sure to configure your Windows desktop as described in the section “Oculus Rift DK2”.

3.4: Tutorial - Create a basic configuration

In this tutorial you will learn:

3.4.1: Requirements

3.4.2: Creating a mouse-simulated 3D tracker

We will start by creating a fake 3D tracker. This 3D tracker will be simulated by a mouse with three buttons. Later you will be able to replace this fake 3D tracker with an actual 3D tracker.

We will then specify that this 3D simulated tracker, representing a position and orientation in space, will move a 3D camera.

To create this mouse-simulated tracker, go into the Devices window, press the ‘+’ button to add a device, and select the “Tracker Simulator - Mouse” device in the 3D Trackers section.

Moving the simulated tracker

As specified in the Help section of the Mouse Tracker Simulator, the virtual tracker is moved by pressing the middle mouse button.

If you move the mouse forward or backward, you move the tracker forward or backward. You will see the Y value of the tracker increase or decrease.

If you move the mouse left or right you will rotate the tracker left or right. You will see the Yaw value change accordingly.

You can reset its values by pressing both the left and right buttons at the same time.

3.4.3: Moving the camera with the 3D tracker

Then go to the 3D nodes window. There you can see that a predefined user-description has been created:

To specify that you want to animate the HeadNode with the fake tracker, simply click on the HeadNode in the hierarchy to display its properties. In the Tracker property select the MouseTracker.

This simply assigns the MouseTracker to this 3D node. Now the HeadNode will follow the 3D tracker!

Try moving the HeadNode by pressing the middle button of your mouse, or Ctrl on your keyboard, and moving the mouse.

You can translate up, down, left and right by adding the Alt key.

Note: You can reset the mouse tracker values by pressing both left and right mouse buttons at the same time.

Save the configuration file and make sure you remember the full path.

3.4.4: Testing in Unity

Start by importing the MiddleVR Unity package in your Unity project as seen in a previous tutorial.

Then specify the full path to the configuration file you’ve created:

Press Play!

You will notice that the hierarchy that was described in your configuration is automatically re-created in Unity:

You should be able to move the camera around by pressing the middle button of your mouse.

Note: If the viewport you have defined in MiddleVR and the viewport in the Unity editor don’t have the same aspect ratio, the view will appear distorted. As soon as you run your application with the standalone player the view will have the right aspect ratio.

Note: The TrackerSimulatorMouse might get stuck if you’re using the “Maximize on Play” option from Unity’s game viewport.

3.4.5: Have fun!

Now you can go back into the configuration tool and modify the hierarchy, add cameras, or change the viewport layout. Save this description and simply press play again in Unity. MiddleVR will automatically reconfigure your application to match your configuration.

3.5: Tutorial - Using and extending MiddleVR VR menu

In this section you will learn:

3.5.1: Requirements

3.5.2: Introduction

MiddleVR offers an immersive menu that you can customize to include your own menu items. The default menu allows you to change the navigation scheme, the manipulation scheme and various other options. See section VR menu.

The menu can contain many different types of entries, such as web views and menu widgets.

See section VR widgets.

By default you activate the menu by pressing button 3 of your Wand. This can be changed on the VRMenu GameObject:

You interact with the menu by pressing button 0 of your Wand.

You can deactivate the menu by disabling the option “Use default menu” in the VRManager options:

3.5.3: Extending the menu

3.5.3.1: Add a command

The easiest way to start extending the menu is to add the MiddleVR/Scripts/Samples/GUI/VRCustomizeDefaultMenu component to any Unity GameObject. It adds a simple command to the menu: when the menu item is clicked, it will display “My menu item has been clicked” in Unity console.

Here is how it works:

The first thing to do is to create a method that will be called when the item is clicked:


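// Handler executed when the menu item is clicked.
// vrCommand handlers take a vrValue and return a vrValue.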
vrValue MyItemCommandHandler(vrValue iValue)
{
    print("My menu item has been clicked");
    return null;
}

Then we must get a reference to the existing menu:


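// Find the VRMenu object in the scene and wait until its menu exists.
// Note: in practice this lookup should run across frames (e.g. in a
// coroutine), otherwise a same-frame loop like this could spin forever.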
VRMenu MiddleVRMenu = null;
while (MiddleVRMenu == null || MiddleVRMenu.menu == null)
{
    MiddleVRMenu = FindObjectOfType(typeof(VRMenu)) as VRMenu;
}

We will now create a vrCommand which holds a reference to the MyItemCommandHandler method we have defined above:


m_MyItemCommand = new vrCommand("MyCustomVRMenuItem", MyItemCommandHandler);

This will then be passed to the constructor of a widget called vrWidgetButton. This widget will simply call the method held by the vrCommand when it is clicked. The third argument is the label of the button:


vrWidgetButton button = new vrWidgetButton("", MiddleVRMenu.menu, "My Menu Item", m_MyItemCommand);
// By default the widget is added at the end of the menu
// We can position it at the top by using SetChildIndex:
MiddleVRMenu.menu.SetChildIndex(button, 0);

Now when you activate the menu you will see your item at the top of your menu:

3.5.4: Going further

You can also move or remove existing items or create sub-menus, create a menu from scratch, or create a graphical user interface in HTML5.

See sections “VR menu”, “Custom VR menu” and tutorial “Creating a graphical user interface in HTML5”.

3.6: Tutorial - Creating a graphical user interface in HTML5

In this section you will learn:

3.6.1: Requirements

3.6.2: Introduction

Using the HTML5 / CSS3 / JavaScript standards, you can create great graphical user interfaces. You can use any template that you can find on the internet, or that any web designer can create. Your application can now use GUIs with buttons, sliders, tabs, fancy animations, etc.

Here are a few great examples of what you can achieve:

Example of immersive HTML5 GUI.

Example of HTML5 GUI.

You can buy the source of this GUI here: http://www.cssflow.com/ui-kits/clarity-ios7.

Examples of immersive HTML5 GUIs: Cloudy UI and Sticky Butterscotch.

This website contains a lot of great samples:

http://www.multyshades.com/2012/03/45-best-ui-web-elements-with-source-files/.

The Zebra-UI, an open-source HTML5 toolkit, is also a great way to achieve even more complex GUIs using HTML5:

Zebra-UI.

3.6.3: Creating a simple GUI

3.6.3.1: HTML GUI

We will start with a very simple GUI:

Example of HTML5 GUI.

This is a sample that you can find in C:\Program Files (x86)\MiddleVR\data\GUI\HTMLBasicSample\index.html:


    <html>
    <head>

        <script>
            function OnClick()
            {
                MiddleVR.Call("MyCommand"); 
            }

            function AddResult(text)
            {
                document.getElementById('result').innerHTML += text + '<br>';
            }
        </script>

    </head>

    <body style="background-color: white;">

        <button onclick="OnClick()">Click Me!</button>
        
        <p>Result:</p>
        
        <div id="result"></div>
    </body>
    </html>

You can see:

3.6.3.2: C# code

The JavaScript in your HTML webpage needs to transmit information to your C# code in Unity, so that your application can react to events triggered when the user interacts with the webpage.

Here is a C# sample reacting to a JavaScript call. You can find it in the MiddleVR Unity package here: MiddleVR/Scripts/Samples/GUI/VRGUIHTMLBasicSample:


using UnityEngine;

public class VRGUIHTMLBasicSample : MonoBehaviour
{
    private vrCommand m_MyCommand;

    private vrValue CommandHandler(vrValue iValue)
    {
        print("HTML Button was clicked");

        // Uncomment the following line if you want this function
        // to call a JavaScript function:
        //CallJavascript();
        
        return null;
    }

    protected void Start()
    {
        m_MyCommand = new vrCommand("MyCommand", CommandHandler);
    }

    protected void CallJavascript()
    {
        vrWebView webView = GetComponent<VRWebView>().webView;
        webView.ExecuteJavascript("AddResult('Button was clicked!')");
    }
}

3.6.4: Adding the HTML files to your project

There are two things to do to embed the HTML files in your project:

3.6.4.1: Copy the HTML files

Copy the “C:\Program Files (x86)\MiddleVR\data\GUI\HTMLBasicSample\” folder, which contains the HTML file, into your project’s folder “Assets/.WebAssets”. You should now have an “Assets/.WebAssets/HTMLBasicSample” folder.

Note: To create a “.WebAssets” folder from the Windows File Explorer, you will have to type “.WebAssets.”. The additional dot at the end is necessary, and will be removed by the File Explorer.

3.6.4.2: Add a VRWebView

A VRWebView is a component that will actually display the webpage in your virtual world. See section VRWebView for more information.

The simplest way to add a VRWebView is to drag the prefab called VRGUIHTMLBasicSample3D from “MiddleVR/Scripts/Samples/GUI”.

Notice the VRWebView script attached. The URL is currently set to the original HTML file. Modify it to point to “.WebAssets/HTMLBasicSample/index.html”.

Also notice the VRGUIHTMLBasicSample script. That’s the C# script described above that will communicate to and from the webpage’s JavaScript.

Press Play:

The webpage is displayed and when you click on the “Click Me!” button with the wand, the Unity console displays:

"HTML Button was clicked"

If you uncomment the CallJavascript call in the VRGUIHTMLBasicSample C# script, it will also modify the “Result:” in the webpage.

3.6.5: Communication from JavaScript to C#

Communicating from JavaScript to C# is very easy: simply use the MiddleVR.Call function. For example:

MiddleVR.Call("MyCommand");

Will call the vrCommand registered by:

m_MyCommand = new vrCommand("MyCommand", CommandHandler);

The vrCommand holds a reference to the CommandHandler method, passed as the second argument of its constructor.

Note: You can pass arbitrary arguments from JavaScript to C#. See section MiddleVR.Call.

3.6.6: Communication from C# to JavaScript

Communicating from C# to JavaScript is easy: simply use the webView.ExecuteJavascript function after retrieving the web view:

vrWebView webView = GetComponent<VRWebView>().webView;
webView.ExecuteJavascript("AddResult('Button was clicked!')");

Note: You can pass arbitrary arguments from C# to JavaScript. See section vrWebView.ExecuteJavascript.

3.6.7: Going further

There is another HTML sample using JQuery for tabs, sliders and buttons in “C:\Program Files (x86)\MiddleVR\data\GUI\HTMLJQuerySample\”.

The corresponding C# can be found in: “MiddleVR/Scripts/Samples/GUI/VRGUIHTMLJQuerySample”.

Then make sure to read the following section: Graphical User Interfaces.

3.7: Tutorial - What next?

Now you’re ready to learn more about MiddleVR.

At this point, typical questions are:

4: Basic concepts

4.1: Configuration and workflow

A typical workflow to use MiddleVR is first to create a description of your VR system.

MiddleVR will then use this description to configure the 3D application to match it.

This description is called a Configuration and is stored by MiddleVR as an XML file with the .vrx extension.

MiddleVR will also provide access to the data of all the devices that you specified (3D tracker data, button states, joystick axes) thanks to its application programming interface (API).

The description includes:

This description is stored as a VRX (VR XML) configuration file.

4.2: Examples of predefined configurations

MiddleVR ships with multiple configuration examples located in the “data/Config” folder:

Here is the complete list of predefined configurations:

4.3: Portability - Create once, presence everywhere!

One goal of MiddleVR is to help you deploy your application on many different VR systems. Wikipedia defines portability as “the software codebase feature to be able to reuse the existing code instead of creating new code when moving software from an environment to another.”

MiddleVR also has the ability to bring VR capabilities to your 3D application, increasing the number of tools that you can use on your VR system:

If your VR system is modified, you only have to modify its description in the configuration file, and all the applications using MiddleVR will be automatically reconfigured to adapt to your changes.

4.4: Drivers and devices

Devices are managed by drivers. Each driver can create several devices. For example, the driver responsible for handling basic devices uses Microsoft DirectInput to create devices such as a keyboard, a mouse or joysticks.

Below are the different devices currently supported by MiddleVR.

4.5: Trackers

The goal of a tracking device is to give information to the computer about the position and orientation of a tracked object/human in space.

A VR system typically needs to know where the hand or the head of the user is. A tracking system can also report the position of arbitrary objects, such as a wand or any object whose position is useful for the application.

The trackers hold the position and/or orientation information of a device in space. This information is typically stored as a transformation matrix. The data can also be accessed more simply by asking for a position (a vector of three floats), and an orientation (a quaternion).
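As a rough sketch of how this data can be read through the C# API in Unity (the tracker name below is hypothetical; use a name declared in your own configuration):

using UnityEngine;

public class TrackerReader : MonoBehaviour
{
    private vrTracker m_Tracker = null;

    protected void Update()
    {
        // Wait until MiddleVR is initialized and the tracker is found.
        if (m_Tracker == null)
        {
            if (MiddleVR.VRDeviceMgr == null) return;
            m_Tracker = MiddleVR.VRDeviceMgr.GetTracker("MyTracker");
            if (m_Tracker == null) return;
        }

        // Position as a vector of three floats, orientation as a quaternion.
        vrVec3 position = m_Tracker.GetPosition();
        vrQuat orientation = m_Tracker.GetOrientation();

        print("Tracker position: " + position.x() + ", "
            + position.y() + ", " + position.z());
    }
}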

4.5.1: Types of tracking devices

Over time, tracking devices have evolved and gained in precision and usability. Popular tracking techniques include magnetic tracking, optical tracking and inertial tracking.

4.5.1.1: Optical trackers

The current trend is to use optical tracking (A.R.T, Vicon, Natural Point, Motion Analysis, IO Tracker, etc.) by putting inexpensive markers on your body and watching them through special video cameras. This technique has the advantage of being wireless and is becoming cheaper and cheaper.

4.5.1.2: Inertial trackers

Inertial trackers are also quite popular nowadays, since you can find them in any mobile phone and a lot of HMDs. An inertial sensor is made up of one or all of the following:

After fusing the information reported by those three sensors, an inertial tracker is able to give accurate and fast information about the current orientation, or even the position in some cases, of an object.

Note that it is not currently possible to obtain correct information about the position of an object using only an inertial sensor.

4.5.1.3: Magnetic trackers

The most common trackers used to be magnetic trackers (Polhemus, Ascension), but they require cables (except the new Polhemus Patriot Wireless) and can lose precision as the magnetic field is perturbed by metal.

4.5.2: Tracking data

A tracking device can report:

When comparing trackers, there are several parameters to take into account:

4.5.3: Coordinate systems

There is no accepted norm for how the data from a tracking device is reported.

For example, moving along the real-world Up/Down axis can show up in the data as a change on the Y or the Z axis, positive or negative, depending on the device.

4.5.3.1: MiddleVR native drivers

All the devices that are integrated natively in MiddleVR report the following:

MiddleVR coordinate system

4.5.3.2: Adjusting the coordinate system

Some devices must be configured so that their axes match the definition above. For example: A.R.T DTrack or VRPN trackers.

The representation of 3D information in space is not standardized. Sometimes moving a tracker from the user to the screen can be seen as an increase of value on the Z axis, sometimes as a decrease of value on the Y axis. This is dependent on the way the device reports its data and how the driver interprets this data.

Drivers that require a specific coordinate system usually come with a “TrackerCoordinateSystem” property that needs to be set up properly. For VRPN trackers, “TrackerCoordinateSystem” is replaced directly by the Right/Front/Up properties.

Example of the “TrackerCoordinateSystem” property displaying the entries Right/Front/Up to set up the coordinate system for the A.R.T DTrack driver.

One easy way to configure the axes of a device is first to add it without modifying the existing Right/Front/Up definition. Then go to the 3D nodes tab and assign the device to any node, for example the “HandNode”.

Now try to move the 3D tracker to the right, to the front (going away from the user towards the screen), and upwards. If the 3D node moves correctly, you’re lucky, your calibration is done.

If this is not the case, you will have to write down some information:

You will end up with something like:

Now remove the existing driver and recreate it by specifying the coordinate system with “TrackerCoordinateSystem” property.

4.5.4: Configuring a tracking system

Please read the section “Configuring a tracking system”.

4.6: Axis

An Axis device typically stores data and events about the axes of a joystick or a mouse, but it can represent any kind of analog information, such as a slider. This information is stored as an array of floats.

4.7: Buttons

Buttons store button data. This information is stored as an array of booleans.

4.8: Joystick

The Joystick device is used to store information about joysticks, gamepads, or similar devices. A joystick internally stores its information as both Axis and Buttons types.
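For illustration, a joystick’s state could be polled like this through the C# API (a rough sketch; the device name is hypothetical, and the code is assumed to run once MiddleVR is initialized, e.g. in a MonoBehaviour’s Update):

// Hypothetical device name; use the one shown in the Devices tab.
vrJoystick joystick = MiddleVR.VRDeviceMgr.GetJoystick("Joystick0");
if (joystick != null)
{
    float x = joystick.GetAxisValue(0);          // first analog axis (float)
    bool pressed = joystick.IsButtonPressed(0);  // first button (boolean)
}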

4.9: Keyboard and mouse

MiddleVR can of course handle basic devices such as a keyboard or a mouse.
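As a minimal sketch (assuming MiddleVR is initialized, e.g. inside a MonoBehaviour’s Update), the keyboard can be polled like any other device:

vrKeyboard keyboard = MiddleVR.VRDeviceMgr.GetKeyboard();
if (keyboard != null && keyboard.IsKeyPressed(MiddleVR.VRK_SPACE))
{
    print("Space bar is pressed.");
}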

4.10: Wand

In most immersive cubes you interact with the simulation with a standard device called a Wand, or a Flystick.

This device is held in the hand and tracked in space. It commonly has several buttons and a two-axis joystick.

A.R.T. Flystick 2.
A.R.T. Flystick 3.

A Wand can be decomposed into three parts:

MiddleVR includes standard interactions based on the Wand: navigation, grabbing of objects.

You first have to configure the three parts of the Wand before you can use those interactions.
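Once those three parts are configured, the Wand state can be read through the device manager; a minimal sketch (assuming MiddleVR is initialized, e.g. inside a MonoBehaviour’s Update):

bool mainButton = MiddleVR.VRDeviceMgr.IsWandButtonPressed(0); // button 0
float horizontal = MiddleVR.VRDeviceMgr.GetWandHorizontalAxisValue();
float vertical = MiddleVR.VRDeviceMgr.GetWandVerticalAxisValue();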

4.11: Does MiddleVR support output devices such as force feedback or haptic devices?

MiddleVR supports Haption’s haptic devices. Contact us for more information.

4.12: 3D Nodes

MiddleVR internally uses its own scenegraph to describe the user and the VR system:

This representation is made up of cameras, screens and simple 3D nodes.

4.12.1: Coordinate system

MiddleVR uses a right-handed coordinate system, with X pointing to the right, Y pointing away from the user, towards the screen, and Z pointing up.

The coordinate system of MiddleVR.

4.12.2: 3D Node

Most of the time 3D nodes represent real world objects, like a screen, or a body part like a user’s head, hand, or eyes (i.e. cameras).

In the 3D nodes view, you can configure what your VR system looks like “in the real world”: which body parts of your user are tracked, where the screens are located, and how users see the virtual world.

A basic 3D node is represented as a blue square.

Representation of a basic 3D node.

4.12.3: VR hierarchy

The VR hierarchy is the whole scenegraph describing your VR system and users. Its root, which we call the “VRSystemCenterNode”, represents the physical center of your VR system; the hierarchy includes this node and all its children.

A scenegraph with VRSystemCenterNode as root.

4.12.4: Cameras

Cameras are like real world cameras. They capture a view of the virtual world. They can also represent a user’s eye position and orientation. You can either use regular cameras or stereoscopic cameras, which will render two slightly different views so as to recreate the 3D perception.

A camera (pointing here to the right).

4.12.5: Screen

A Screen is the physical representation of a display surface. A display surface can be for example a projection screen or a computer monitor. A screen node is useful to specify the position, orientation and size of the display surface.

A screen.

A Screen doesn’t hold any information about resolution or refresh rate. This information is handled by the Display.

A Screen is used by a camera to determine its viewing frustum in order to compute the correct perspective based on the user position with respect to this Screen.

A typical computer monitor is a combination of a Screen and a Display: the display surface is the monitor itself. This is not the case with a projector.

A projector is a good example of why the two concepts are separated. The projector in itself is responsible for the refresh rate and resolution of the image (a Display), whereas the projection surface can be very different depending on where you place the projector. The physical position and size of the display surface are then stored as a Screen.

You can find the concept of Screen in other software under names such as projection referential, projection surface, display surface, etc.

4.13: Viewports

A viewport defines the layout of cameras on a display: it is a 2D area of your display where the rendering of a particular camera is shown.

For example here you can see three different viewports that were each assigned a camera:

Note: Two blue rectangles are displayed behind the viewports because the configuration was authored on a computer with two connected screens.

This will result in the following layout in your 3D application:

4.14: Display

A display is only the electronic part of a viewing system: a combination of your graphics card and the pixels in your projector or monitor. A display knows about the refresh rate and the resolution of your monitor or projector. The actual physical display surface needed to compute the correct perspective is defined by the screen.

4.15: Stereoscopy

To create stereoscopy in MiddleVR you will most of the time use a stereoscopic camera, then associate this camera with a viewport. The stereoscopic options are chosen in the viewport parameters.

See also Understanding stereoscopy.

There are several ways to display stereoscopic images in MiddleVR.

4.15.1: Active stereoscopy

Active stereoscopy means displaying the left and right images alternately on the screen. You then need glasses that hide the left eye while the right picture is visible, and hide the right eye while the left picture is displayed. Those glasses use electronic shutters to achieve this, which is why they are called active glasses and why this mode is called active stereoscopy.

The active stereo mode, also known as the OpenGL quad-buffer mode, requires a specific graphics card, typically a professional card such as an NVidia Quadro or an ATI FireGL Pro.

You also have to activate this mode in your graphics drivers. Refer to your driver’s manual to find out how.

4.15.2: Passive stereoscopy

Passive stereoscopy is often achieved by displaying the left and right images side by side, or at least displaying the two images at the same time. You then wear “passive” glasses, which have no electronics: they achieve the separation between the two pictures by means of optical filters, often simply polarized filters.

4.16: Understanding head-tracking and perspective

When in front of a screen or projected image, it is often said that this display surface is like a window to the virtual world. Ideally, this display will work exactly the same as a window in the real world.

When the user moves closer to the window, he will see more of the scene. If he moves further from the window, he should see less of the scene.

If he goes to the left, he should, counter-intuitively, see more of the right side of the scene. Conversely, if he goes to the right, he should see more of the left side of the scene.

4.16.1: Symmetric cameras

With a simple 3D screen, or when watching a 3D movie in a theater, the game/movie assumes that the user is always perfectly in front of the center of the screen. It does not take into account the potential movements of the viewer. If you move your head, the game/movie doesn’t know about it and nothing will change. You will just get a wrong perspective.

In this case, the virtual camera is said to be “symmetric”, because you are as far from the left border of the screen as from the right border. The same goes for the top and bottom of the screen.

4.16.2: Asymmetric cameras

Now, if the computer knows exactly where your eyes are in front of the screen, it can modify the parameters of the virtual camera: the goal is that the computed image looks like the picture you would see through a virtual window of the exact same size as the screen, being at the same relative position in real and virtual world.

In this case, the virtual camera is said to be “asymmetric”: your eyes are not in front of the center of the screen anymore, they can be anywhere in space, and the perspective will be computed correctly.

Asymmetric cameras are required to achieve virtually all stereoscopic images in HMDs, 3D screens / Walls and CAVEs.
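To make the geometry concrete, here is a minimal sketch of how the bounds of an off-axis frustum can be derived (our own illustration, not MiddleVR’s internal code: MiddleVR computes this for you from the Camera and Screen nodes):

// glFrustum-style bounds of an off-axis frustum, for a screen of size
// (w, h) lying in the z = 0 plane of its local frame, with the eye at
// (ex, ey, ez), ez > 0. By similar triangles, each screen edge is
// projected from the eye onto the near clipping plane.
static void OffAxisFrustum(float w, float h,
                           float ex, float ey, float ez, float near,
                           out float left, out float right,
                           out float bottom, out float top)
{
    float scale = near / ez;
    left   = (-w / 2f - ex) * scale;
    right  = ( w / 2f - ex) * scale;
    bottom = (-h / 2f - ey) * scale;
    top    = ( h / 2f - ey) * scale;
}

When the eye is centered (ex = ey = 0), left = -right and bottom = -top, which is exactly the symmetric case described above.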

4.16.3: Video

This is perfectly illustrated in this famous video by Johnny Lee.

4.17: Configuring a head-mounted display (HMD)

Most VR head-mounted displays (HMDs for short) are made up of two parts:

The Oculus Rift DK1 and DK2 have integrated trackers, but some HMDs, like the NVisor SX 60, SX 111 or Sony HMZ-T1/T2 series don’t have one. It is quite easy to add one depending on the requirements of the final application.

Note that most of the time the tracking system only tracks the head rotation, rarely the head position, and even more rarely the eye direction.

4.17.1: Understanding the display system

The display system can be monoscopic or stereoscopic, but we will focus on stereoscopic ones, even if stereoscopy might not be important in every case.

Make sure to read the article: “Understanding stereoscopy”.

The idea is quite simple: place two screens in front of your eyes! The reality is of course a bit more complex, involving a lot of design choices between different types of screens and the optical lenses.

The difficulty on the user side is to know the size and location of the virtual screens (the screens as seen through the lenses) with respect to the eyes. This will determine the configuration of the cameras / screens and viewports in MiddleVR.

4.17.2: Understanding the tracking system

The tracking system of an HMD can be any of the existing tracking systems.

Make sure to read the article: “Understanding tracking devices”.

4.17.3: Configuring HMDs

4.17.3.1: Predefined configurations

The easiest way of using an HMD in MiddleVR is to simply load a predefined configuration. There are currently predefined configurations for:

4.17.3.2: Configuring the display system

As mentioned above, the display system can be entirely configured in MiddleVR using Cameras, Screens and Viewports.

The display system will determine the field of view and the perspective, but not how the head (or the eyes) moves: that is done by the tracking system.

To properly configure the display system, you need to have all the information about the “virtual screens” that display the images in the HMD. This information should be provided by the manufacturer. It will help you determine how to configure MiddleVR cameras, screens and viewports.

Often the configuration is the same as for a stereoscopic screen / wall. This assumes that the HMD is designed so that the two screens are seen as one stereoscopic screen in the distance.

With more advanced HMDs, the two screens are clearly positioned differently. This is particularly obvious with the NVIS SX 111, where the two screens are rotated outwards.

With the Oculus Rift, the two screens are offset outwards, and distortion must be compensated.

See also “How to show a viewport on a specific display”.

4.17.3.3: Configuring the tracking system

Please read “Configuring a tracking system”.

4.18: Configuring a stereoscopic screen / wall

When using a 3D TV or a 3D projector, you have to make sure the configuration is perfect so that the stereoscopy and perspective perfectly match the setup.

Make sure to read the article: “Understanding head-tracking and perspective”.

4.18.1: Understanding what parameters to configure

The only factors affecting the perspective are:

Stereoscopy is affected by the same factors plus the inter-eye distance.

Finally, a tracker device can be used to provide head-tracking.

4.18.2: Configuring without head-tracking

In this configuration, the user is assumed to be at a fixed position from the screen.

There are two ways to configure this kind of VR system in MiddleVR:

4.18.3: Using only a stereoscopic camera

In MiddleVR you can simply set up a stereoscopic camera.

The two important parameters here are:

4.18.4: Using a stereoscopic camera and a screen

The easiest way to configure such a system is to set up a stereoscopic camera linked with a screen (see also Screen parameters).

There are several predefined configurations for such systems that you should use as a basis for configuring your own VR system:

4.18.5: Configuring with head-tracking

Make sure to read the article: “Understanding head-tracking and perspective”.

You should first start by configuring your system with a stereoscopic camera and a screen, as described above.

Finally you should configure your head-tracking device and assign it to a node manipulating the stereoscopic camera.

4.18.6: Configuring viewports

See article: “How to show a viewport on a specific display”.

4.19: Configuring an immersive cube / CAVE (tm)

A CAVE is “simply” made up of multiple stereoscopic walls, so make sure to read the corresponding section.

The main difficulties are:

There are mainly two predefined configurations for a CAVE:

4.19.1: Configuring the trackers

Make sure that the zero of the trackers (generally represented as the origin of MiddleVR coordinate system) corresponds to the position of the screens.

In real life, if the zero of the tracker is set to be at the center of the floor screen, make sure the center of the floor screen is configured to be at the origin of MiddleVR’s coordinate system.

Make sure to read the article “Configuring a tracking system”.

4.19.2: Configuring the cameras

Make sure to have one stereo camera per wall.

4.19.3: Configuring the cluster

Please read the section “Configuring the cluster computers”.

4.20: VRPN

VRPN (Virtual Reality Peripheral Network) http://www.cs.unc.edu/Research/vrpn/ is a very popular, community-based, and standard software library to access a lot of VR devices.

It is used by a lot of commercial and free VR applications. It is cross-platform and runs on many different operating systems including Windows, Linux and MacOS. VRPN has been released to the public domain by Russell M. Taylor II from the University of North Carolina at Chapel Hill, and the VR community has contributed a lot to improve the project.

The list of supported devices can be found on the VRPN home page http://www.cs.unc.edu/Research/vrpn/.

MiddleVR uses VRPN for some trackers because of its robustness and resilience, and because a lot of VR systems around the world already have a VRPN server configured for their VR devices.

Note: Where possible, it is preferable to use native drivers rather than the VRPN client. This will give you more control and less latency.

4.20.1: Understanding VRPN

VRPN “converts” data from most devices to mostly three types: Tracker, Analog (Axis in MiddleVR terminology) and Button.

The Tracker type holds a position and an orientation.

The Analog type is used for any type of axis: joystick axis, mouse axis, etc.

The Button type is used for any type of binary button: joystick button, mouse button, etc.

For example, a mouse has a 2-channel Analog and a 3-channel Button.

A Wand, a typical VR device, has a Tracker, Analog data for the joystick, and Buttons.

VRPN requires that you configure a server with the devices that you want to use. The server will then stream the data coming out of your devices. Finally, your program can easily connect to the server to get that data in a standardized way.
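For illustration, a device declaration in the server’s vrpn.cfg file looks like this (the example uses VRPN’s built-in dummy tracker; real devices have their own entries, documented in the vrpn.cfg file shipped with VRPN):

# A dummy 2-sensor tracker named "Tracker0", reporting at 2 Hz.
# Clients, such as MiddleVR's VRPN driver, reach it as Tracker0@<server-host>.
vrpn_Tracker_NULL Tracker0 2 2.0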

5: Configuring a VR system in MiddleVR

5.1: MiddleVR configuration interface

The MiddleVR configuration tool is divided into five elements:

5.1.1: Devices

The devices window allows you to configure which devices you want to use for the current configuration:

5.1.2: 3D Nodes

The 3D nodes window allows you to create a description of the real world elements that will influence the 3D rendering and interaction of your virtual world. 3D Nodes were explained above in the documentation.

For example, this is where you will specify that a physical screen is two meters wide, or that the head and the left and right hands of a user are tracked, and by which tracker, etc.

In this window, you can also choose to hide or display the cameras in the 3D view by opening the “View” menu and (un)checking “Display cameras”.

In the 3D view, you can move around by holding the right mouse button and moving the mouse. To reset the view to its original position, click the “Reset View” button in the “View” menu. You can also center the view on a specific 3D node by selecting it in the tree view.

5.1.3: Viewports

The viewports window allows you to specify where to display the rendering of your cameras. You can specify for example that a camera will only display its rendering in the top-left corner of the screen, and another camera could display its rendering only in the bottom-right corner of the screen. Or you could say that the camera rendering the left eye is displayed on your primary screen, and that the camera rendering the right eye is displayed on your secondary screen.

The viewports are displayed as red rectangles. The available displays are displayed in blue.

5.1.4: Cluster

The cluster window manages the configuration of the cluster nodes and specific cluster parameters:

5.1.5: Simulations

The simulations window allows you to manage the applications that you want to run with MiddleVR. It allows you to choose an application and then select the configuration file it should be running with.

5.2: Configuring devices

5.2.1: Adding a device

To add a device to your VR system description, simply go to the Devices tab, and click the ‘+’ button to display the Add Device window:

You can check all the supported devices in the section: Devices.

5.2.1.1: Keyboard, mouse, joysticks

Keyboard, mouse and all available joysticks are automatically added by the DirectInput driver when MiddleVR is initialized:

5.2.1.2: A.R.T. DTrack

The DTrack driver lets MiddleVR receive positions and orientations of A.R.T. Flysticks, body and finger targets, and also values of Flystick joysticks and Flystick button states.

The settings “Right”, “Front” and “Up” let you provide the coordinate system that your A.R.T system is calibrated with (see “Adjusting the coordinate system”).

In order to work, the DTrack driver must listen on a local port (default 5000) and create a set of MiddleVR trackers (defaults to 1).

The driver will try to listen on the local port immediately after being added. It is important that you check that it did not display the following error message: “[X] DTrack driver: Init error. Check that the local port ‘5000’ is available.”. As the message suggests, the port (here, 5000) is already in use by another application, so you should use another port number.

5.2.1.3: GameTrak Trackers

The GameTrak trackers are an old mechanical device that reports two absolute positions but no orientations. This device is mainly used by hobbyists and is difficult to find.

5.2.1.4: InterSense

The InterSense driver provides the position and rotation of tracking stations. This information is determined by the accelerometers and gyros of those tracking stations and is corrected by fusing the output of the inertial sensors with range measurements obtained from ultrasonic components.

A “Device” is an InterSense Processor Unit like the IS-900.

A “Tracker” is an InterSense tracked device (aka InterSense station): it gives positions and orientations from the coordinate system that is provided through the parameters named “Right”, “Front” and “Up” (see “Adjusting the coordinate system”).

Some InterSense Stations are wands (tracker, joystick and buttons) or styluses (buttons). InterSense Stations are plugged into InterSense Processor Units.

For each InterSense Station, MiddleVR provides:

Axis and buttons are updated only when a wand or stylus is connected to an InterSense Processor Unit.

Note: the InterSense Driver will always assume that there are as many connected InterSense stations as the maximum number of addressable stations.

Please have a look at the InterSense documentation for hardware configuration. It is suggested that you put the “isports.ini” file (if you need to use it) in a directory specified by the “ISPORTS_INI_DIR” environment variable. This folder should not reside in the MiddleVR system folders, in order to keep a clean installation.

The content of the “isports.ini” file varies according to your hardware setup:

If you encounter difficulties using an RS-232 connection, we recommend that you check the version of the firmware and upgrade it before trying again.

5.2.1.5: Kinect trackers (Microsoft SDK)

MiddleVR will create 3D trackers based on the skeleton provided by the Microsoft SDK. MiddleVR 1.6 supports both Kinect 1 and Kinect 2.

Note: Starting with MiddleVR 1.2, the Kinect orientations are also applied. To disable the rotations check the 3D nodes options.

You can find a ready-to-use configuration file that will automatically assign all the trackers to the corresponding 3D nodes. This configuration file is located at data/Config/Kinect1.vrx or data/Config/Kinect2.vrx, depending on your Kinect version.

5.2.1.6: Leap Motion

The Leap Motion is an optical tracking system. This driver will report one hand (one palm and five fingers).

5.2.1.7: Leap Motion (SDK2)

The Leap Motion SDK2 is an optical tracking system. This driver will report data for up to four hands in 6 DoF: elbows, wrists, palms and fingers (metacarpal, proximal, intermediate and distal bones).

This driver provides read-only properties: the maximum number of tracked hands, the current number of visible hands, and whether each visible hand is a left hand (if a hand is not visible, it is considered not to be a left hand).

The best way to create a configuration for the Leap Motion (SDK2) is to start from the Leap Motion (SDK2) predefined configuration: LeapMotion-SDK2.vrx (you will get a configuration for two hands).

The property “HMDModeEnabled” should be used when the Leap Motion is attached to the front of a HMD, with its wire cable on the right. In this case, the Leap Motion will use an internal tracking mode that is optimized for HMD, and the coordinate frame will be as follows:

When a Leap Motion is in front of an Oculus Rift DK2, you should use the file HMD-Oculus-Rift-DK2-LeapMotion-SDK2.vrx: a 3D node for the Leap Motion will be parented to a 3D node for the Oculus Rift DK2 so movements of your hands will be relative to the HMD. In addition, this configuration file will activate the internal tracking mode of Leap Motion for HMDs.

5.2.1.8: Motion Analysis (beta)

The Motion Analysis driver requires you to run Cortex on the local machine.

5.2.1.9: NaturalPoint OptiTrack Trackers (NaturalPoint NatNet SDK)

OptiTrack trackers are infrared trackers that will report orientation and position. The system is intended for body motion capture with high precision and low latency.

The configuration requires an IP address of the computer that is running a NaturalPoint tracking software (e.g. Motive).

NaturalPoint OptiTrack Trackers options:

- Number of trackers: The maximum number of trackers to read values from.
- Local IP Address: Your IP address. It might be “127.0.0.1” if you run the Motive software on this computer. Otherwise, be sure to use an IP address on the same network as the server running Motive.
- Server IP Address: IP address of the computer running the Motive software.
- Connection type: Select whether unicast or multicast transmission should be used. If the server delivers data to one computer only, both options should lead to the same performance. Note that this setting must match what is selected in the server to enable communication.
- Right/Front/Up: Let you provide the coordinate system that your OptiTrack system is calibrated with (see “Adjusting the coordinate system”).
- Command port (Advanced): Port used to send/receive commands to/from the server. By default, the server uses port 1510 in UDP to send and receive. Your firewall must allow the communication.
- Data port (Advanced): Port used to receive data from the server. By default, the server uses port 1511 in UDP to send data, and this computer (i.e. the local machine) also uses port 1511 in UDP to receive data from the server. Your firewall must allow the communication.

If you encounter problems connecting to the server or receiving data, please check the following points:

  1. the server and your machine use the same connection type (i.e. unicast/multicast). Prefer unicast to ease the configuration.
  2. command and data port are correct.
  3. communication is not blocked by a firewall.
  4. the setting ‘Broadcast Frame Data’ in the server is checked.
  5. the local interface is not set to “local loopback” but to the IP of the machine if Motive and MiddleVR-Config (or your simulation) do not run on the same computer.

5.2.1.10 5.2.1.10: NaturalPoint TrackIR Tracker

The NaturalPoint TrackIR tracker is an infrared tracker that will report orientation and position. It is intended mainly for PC gaming. It requires that you have the latest version of the TrackIR software running.

There is no configuration required for this device in MiddleVR.

This requires that the latest version of the TrackIR software is installed and running: http://www.naturalpoint.com/trackir/06-support/support-download-software-and-manuals.html. Make sure to update the list of supported games: MiddleVR is not supported unless you do so, even with the latest software version installed.

5.2.1.11 5.2.1.11: Oculus Rift DK1

The best way to create a configuration for the Rift is to start from the Oculus Rift DK1 predefined configurations (HMD-Oculus-Rift-DK1.vrx and HMD-Oculus-Rift-DK1-Razer-Hydra.vrx) that contain all the Rift-specific ready-to-use parameters.

Note: The FOV, aspect ratio and IPD are modified by the driver for an optimal integration with the latest Oculus SDK.

The Oculus Rift system can be divided into two parts: the tracking system and the display system.

The tracking system is an inertial tracker that only reports orientation. Basic usage doesn’t require any configuration, but if you want to activate the Magnetometer Drift Correction you first need to calibrate the magnetometer with the Oculus Configuration Utility tool. If a calibration is saved with the “Enable Mag Yaw Correction” option checked, then the drift correction will be activated in your simulation.

If you wish to start a configuration from scratch, you need to add the OculusRiftDK2 tracker in the Device panel. Note that this will only add the tracker and will not configure the cameras and viewports.

To handle the Rift display system, the “OculusRiftWarping” option of the side-by-side viewport needs to be enabled. This option activates the lens deformation and chromatic aberration correction. It also forces the anti-aliasing level to 2 to compensate for the current low resolution and offer a good user experience.

Note: All applications made with the earlier Oculus Rift DK1 driver (“OculusRift”, used in MiddleVR versions prior to 1.6.0) should be updated.

Note: The Oculus SDK does not allow anti-aliasing to be applied directly by the application. Thus, if you set the anti-aliasing level above 1 in your MiddleVR configuration file, nothing will be rendered in the HMD.

Note: Again, we strongly encourage you to use the predefined configurations and modify them to suit your needs.

5.2.1.12 5.2.1.12: Oculus Rift DK2

The best way to create a configuration for the Rift is to start from the Oculus Rift DK2 predefined configurations (HMD-Oculus-Rift-DK2.vrx, HMD-Oculus-Rift-DK2-Razer-Hydra.vrx and HMD-Oculus-Rift-DK2-LeapMotion-SDK.vrx) that contain all the Rift-specific ready-to-use parameters.

Note: The FOV, aspect ratio and IPD are modified by the driver for an optimal integration with the latest Oculus SDK.

The Oculus Rift system can be divided into two parts: the tracking system and the display system.

The tracking system is composed of a gyroscope, an accelerometer and a magnetometer, used only for the orientation of the head. For the positional tracking, the Oculus Rift DK2 is fitted with infrared LEDs tracked by an infrared camera.

Figure 2: Coordinate system of the Oculus Rift DK2 with MiddleVR (the X axis points out of this image). The origin is centered on the tracking camera. In this image, a user is seated in front of a screen.

The center of positions is the center of the tracking camera and does not depend on the camera’s orientation.

Note that adding the Oculus Rift DK2 tracker in the Device panel will only add the tracker and will not configure the cameras and viewports.

To handle the Rift display system, simply select the stereo camera from which the Oculus Rift DK2 will get its images: in the “TargetCamera” property of the “Oculus Rift Driver” tracker, select the camera you want to use.

Please be aware that the Oculus Rift DK2 is currently still supported by Oculus through runtime 1.8, but Oculus has made it clear that this support will not last forever. If the Oculus Rift DK2 is no longer supported, please use the OculusRiftDK2 tracker in the Device panel.

Note: The Oculus SDK does not allow anti-aliasing to be applied directly by the application. Thus, if you set the anti-aliasing level above 1 in your MiddleVR configuration file, nothing will be rendered in the HMD.

Note: Again, we strongly encourage you to use the predefined configurations and modify them to suit your needs.

5.2.1.13 5.2.1.13: Oculus Rift CV1

The best way to create a configuration for the Rift is to start from the Oculus Rift predefined configurations (HMD-Oculus-Rift-CV1.vrx and HMD-Oculus-Rift-CV1-Razer-Hydra.vrx) that contain all the Rift-specific ready-to-use parameters.

Note: The FOV, aspect ratio and IPD are modified by the driver for an optimal integration with the latest Oculus SDK.

The Oculus Rift system can be divided into two parts: the tracking system and the display system.

The tracking system is composed of a gyroscope, an accelerometer and a magnetometer, used only for the orientation of the head. For the positional tracking, the Oculus Rift is fitted with infrared LEDs tracked by an infrared camera.

Figure 3: Coordinate system of the Oculus Rift with MiddleVR (the X axis points out of this image). The origin is centered on the tracking camera. In this image, a user is seated in front of a screen.

The center of positions is the center of the tracking camera and does not depend on the camera’s orientation.

Note that adding the Oculus Rift tracker in the Device panel will only add the tracker and will not configure the cameras and viewports.

To handle the Rift display system, simply select the stereo camera from which the Oculus Rift will get its images: in the “TargetCamera” property of the “Oculus Rift” driver, select the camera you want to use.

Regarding the support of the Oculus Touch: under the “OculusRift SDK 1.4 Driver” you will find the left and right hand trackers, named “OculusRift0.LeftHandTracker” and “OculusRift0.RightHandTracker”. In the HMD-Oculus-Rift-CV1.vrx predefined configuration, those trackers are already bound to the “HandNode” and the “LeftHandNode”, as well as to the two Wands (one for each hand).

Note: The Oculus SDK does not allow anti-aliasing to be applied directly by the application. Thus, if you set the anti-aliasing level above 1 in your MiddleVR configuration file, nothing will be rendered in the HMD.

Note: If you do not have access to a pair of Oculus Touch controllers, the OculusRift driver has native support for the Xbox controller, so the inputs will be received as if you had Oculus Touch controllers. In this case you won’t have any hand tracking.

Note: Again, we strongly encourage you to use the predefined configurations and modify them to suit your needs.

5.2.1.14 5.2.1.14: HTC Vive

The best way to create a configuration for the HTC Vive is to start from the HTC Vive predefined configuration (HMD-HTC-Vive.vrx), which contains all the Vive-specific ready-to-use parameters.

Note: The FOV, aspect ratio and IPD are modified by the driver for an optimal integration with the latest OpenVR SDK.

The HTC Vive can be divided into two parts: the tracking system and the display system.

The tracking system is composed of a gyroscope, an accelerometer and a magnetometer, used only for the orientation of the head. For the positional tracking, the HTC Vive is fitted with photosensors that detect the positions of the Lighthouse base stations relative to the headset.

The center of positions and orientations is specified by the user when configuring the play area.

Note that adding the HTC Vive tracker in the Device panel will only add the trackers and will not configure the cameras and viewports.

To handle the HTC Vive display system, simply select the stereo camera from which the HTC Vive will get its images: in the “TargetCamera” property of the “OpenVR Driver”, select the camera you want to use.

HTC Vive options
Option Description
ControllerTrackersNb The maximum number of controllers to read values from.
TrackingReferenceNb The maximum number of Lighthouses to read values from.
TargetCamera The camera stereo from which the HTC Vive will get its images.
PlayAreaWidth The width in meters of the play area specified by the user in SteamVR. (read-only)
PlayAreaLength The length in meters of the play area specified by the user in SteamVR. (read-only)

Note: Again, we strongly encourage you to use the predefined configurations and modify them to suit your needs.

It is possible to make the controllers vibrate through the use of vrCommand.

Here is a short example:

// The parameters for the "vrDriverOpenVRSDK.TriggerHapticPulse" vrCommand are:
// - ControllerId: int
//   It is the controller we want to make vibrate. The first controller is
//   the controller 0. If ControllerId is -1 then all the
//   connected controllers will receive the haptic pulse.
// - Axis: uint
//   It is the axis we want to make vibrate on the controller. Controllers
//   usually have only one axis but they can have more. The first
//   axis is the axis 0.
// - VibrationTime: uint
//   It is the time in microseconds the pulse will last. It can last
//   up to 3 milliseconds.

// Note that after this call the application may not trigger another haptic
// pulse on this controller and axis combination for 5 ms.
var value = vrValue.CreateList();

value.AddListItem(new vrValue(-1));
value.AddListItem(new vrValue(0));
value.AddListItem(new vrValue(3000));

MiddleVR.VRKernel.ExecuteCommand("vrDriverOpenVRSDK.TriggerHapticPulse", value);

An extensive example is given in the sample “TriggerHapticPulseOnVRAction”.
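
As a complement, here is a minimal sketch of how this call could be wired to a wand button from a Unity script. The class name is ours, and it assumes the usual MiddleVR Unity API (MiddleVR.VRDeviceMgr, IsWandButtonToggled); refer to the sample above for the supported approach.

using UnityEngine;
using MiddleVR_Unity3D;

// Minimal sketch (the class name is ours): triggers a haptic pulse on all
// connected controllers each time the wand's primary button is pressed.
public class HapticPulseOnWandButton : MonoBehaviour
{
    private void Update()
    {
        vrDeviceManager deviceMgr = MiddleVR.VRDeviceMgr;

        // IsWandButtonToggled(0) is true only on the frame where the
        // primary button goes down, so we respect the 5 ms constraint.
        if (deviceMgr != null && deviceMgr.IsWandButtonToggled(0))
        {
            vrValue args = vrValue.CreateList();
            args.AddListItem(new vrValue(-1));   // ControllerId: -1 = all controllers
            args.AddListItem(new vrValue(0));    // Axis 0
            args.AddListItem(new vrValue(3000)); // Pulse duration: 3000 us = 3 ms

            MiddleVR.VRKernel.ExecuteCommand("vrDriverOpenVRSDK.TriggerHapticPulse", args);
        }
    }
}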

5.2.1.15 5.2.1.15: Razer Hydra trackers

The Razer Hydra has two magnetic trackers, as well as a joystick and several buttons on each tracker.

5.2.1.16 5.2.1.16: SpaceMouse

A SpaceMouse is a hardware device that lets a user translate and rotate objects in 3D. As the name suggests, it also provides buttons to trigger actions.

There is no configuration required for this device.

The SpaceMouse tracker is a tracker that can be translated and rotated in 3D by a SpaceMouse device. The configuration fields are described below.

SpaceMouse tracker options
Option Description

Tracker translation speed

A factor applied to the translation that comes from the SpaceMouse. This factor depends on time, hence it is expressed as linear-movement units per second. The SpaceMouse sends arbitrary quantities, so it is impossible to talk about meters per second (m/s), for example. However, the SpaceMouse control panel can also be tweaked to increase or decrease the translations that the SpaceMouse sends to software, including MiddleVR; see its settings.

Tracker rotation speed

A factor applied to the rotation that comes from the SpaceMouse. This factor is similar to the translation speed but is expressed as an angular velocity, in degrees per second. The SpaceMouse control panel can also be tweaked to increase or decrease the angle values it sends to software, including MiddleVR; see its settings.

Tracker movements in local space

When checked, tracker movements are expressed in the local space of the tracker.

Assuming a 3D space with X to the right, Y pointing forward and Z to the top, let’s now consider that this option is checked and that you rotated the tracker around the X axis by 45 degrees. If you then move the tracker forward, it will move along the Y axis that was rotated by 45 degrees. Said differently, the tracker will “climb”.

If we now consider that we are working in global space, the Y axis remains flat (i.e. aligned with the global Y axis of the world). Said differently, you are moving the rotated tracker along the flat Y axis while looking towards the top. In addition, rotations are relative to the world axes but around the object’s pivot.

5.2.1.17 5.2.1.17: SpacePoint Fusion Tracker

The PNI SpacePoint Fusion tracker is an inertial tracker that reports an absolute orientation in space, but no position.

There is no configuration for this device.

5.2.1.18 5.2.1.18: Trivisio Colibri

The Trivisio Colibri is an inertial sensor. It only reports orientation. It doesn’t require any configuration.

5.2.1.19 5.2.1.19: Vicon Trackers (Vicon DataStream SDK)

Vicon trackers are infrared trackers that will report orientation and position. The system is intended for body motion capture with high precision and low latency.

The configuration requires an IP address of the computer that is running a Vicon tracking software (e.g. Vicon Tracker).

Vicon Trackers options
Option Description
Number of trackers The maximum number of trackers to read values from.
Remote Address IP address of the computer running the Vicon software to read data from.
Remote Port Port to be used on the computer running the Vicon software.
Use multicast connection Select whether unicast or multicast transmission should be used. If the server delivers data to one computer only, both options should lead to the same performance. Note that another connected client must have enabled multicast before this type of connection can be used.
Right/Front/Up Let you provide the coordinate system that your Vicon system is calibrated with (see “Adjusting the coordinate system”).

MiddleVR trackers can only use sets of labeled Vicon markers.

How do you get sets of labeled markers? MiddleVR trackers map directly to what Vicon calls Segments. A Segment is, for example, a tracked forearm, and is always made up of markers. Sets of labeled markers are thus automatically obtained from Segments, or from Objects (the other name used by Vicon in its Tracker software to designate Segments).

The left column contains a list of objects, which are themselves named containers of markers.
An example with one tracker.

5.2.1.20 5.2.1.20: Vuzix Tracker

The Vuzix tracker is an inertial tracker. It only reports orientation. It doesn’t require any configuration.

Note that adding the Vuzix tracker only adds the tracker and will not configure the cameras and viewports.

5.2.1.21 5.2.1.21: zSpace

A zSpace is a hardware device providing a 24" stereo display, passive stereo glasses and a stylus. The glasses and the stylus are tracked by infrared cameras mounted on the screen in order to track their positions and rotations. The stylus can also vibrate and turn on its LED. It provides buttons and tapping events on its tip.

Several trackers are provided. They all work in the tracker space that is explained in the schema below for side and front views.

The available settings are presented below.

zSpace options
Option Description
Stylus LED color The color of the stylus LED, in RGB. Each color component can be set to 0 or 1 only. Any value in the open interval (-1, 1) will be reinterpreted as 0; above 1 or below -1, the value will be reinterpreted as 1. Note that the LED will not light up if the color is black.
Stylus LED turned on As the name suggests, check it to turn on the stylus LED. However, you must set a LED color that is not black to see a result on the stylus.
Stylus default duration vibration Defines the duration in seconds of the vibration of the stylus. This value is used by the setting “Fire vibration”.
Stylus default duration between vibrations Defines the duration in seconds between vibrations of the stylus. This value is used by the setting “Fire vibration”.
Default number of vibrations Defines the number of vibrations of the stylus. If 0 is given, the stylus will not vibrate. With -1, the stylus will not stop vibrating until the user explicitly asks for it. This value is used by the setting “Fire vibration”.
Intensity of the vibrations From 0.0 to 1.0. Currently only intensities at 0.1 intervals are supported by the zSpace SDK (i.e. 0.1, 0.2, …, 0.9, 1.0). Intensity values not specified at a valid interval will be rounded down internally to the nearest valid interval.
Fire vibration Fires a vibration of the stylus. The values used are the “default” values defined above: the vibration duration, the duration between two vibrations and the number of vibrations. Please note that this checkbox does not tell whether the stylus is vibrating; it only indicates that the user fired the vibration system. However, unchecking it will stop the vibration immediately (if the stylus is vibrating).

It is possible to make the stylus vibrate (with non-default parameters) or to change the color of its LED (once turned on) through the use of vrCommand. An extensive example is given in the sample “VRZSpaceSample”; it also shows how to track the visibility of the head or of the zSpace stylus. The general calling pattern is sketched below.
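
The command names and argument lists are defined by the zSpace driver and demonstrated in the “VRZSpaceSample” sample; the command name and parameters below are hypothetical placeholders that only illustrate the vrCommand calling pattern:

// WARNING: "vrDriverZSpace.StartVibration" and its parameter list are
// hypothetical placeholders illustrating the vrCommand pattern only.
// See the "VRZSpaceSample" sample for the actual command names.
var args = vrValue.CreateList();

args.AddListItem(new vrValue(0.2f)); // e.g. vibration duration, in seconds
args.AddListItem(new vrValue(0.1f)); // e.g. duration between two vibrations
args.AddListItem(new vrValue(3));    // e.g. number of vibrations

MiddleVR.VRKernel.ExecuteCommand("vrDriverZSpace.StartVibration", args);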

5.2.1.22 5.2.1.22: Tracker Simulator - Gamepad

MiddleVR is able to simulate a 3D tracker with an official Microsoft XBOX 360 gamepad. This is useful if you don’t have a real 3D tracker available, or simply if you want to navigate with a gamepad.

For the moment only an official XBOX 360 gamepad will work.

5.2.1.23 5.2.1.23: Tracker Simulator - Mouse

MiddleVR can simulate a 3D tracker with a three button mouse.

5.2.1.24 5.2.1.24: Tracker Simulator - Keyboard

MiddleVR can simulate a 3D tracker with key presses.

5.2.1.25 5.2.1.25: VRPN Tracker

VRPN is an open-source project that handles lots of different VR devices. VRPN requires that you configure a server with the devices that you want to use. Read more about VRPN.

MiddleVR can handle VRPN trackers, axes and buttons. This section describes the configuration of trackers. For axes and buttons, see below.

Once the VRPN server is up and running, you must specify in MiddleVR the address (and optionally the port) of this server, the number of trackers that you want to use, and optionally modify the way the axes are applied.

VRPN Tracker options
Option Description
Address Address of the VRPN server, plus the name of a particular device on this server. Examples: Tracker0@localhost, Tracker1@192.168.1.99, Kinect@LabPC.Moulinsart.fr. You can also specify the port of the server: Tracker0@localhost:3884, Tracker1@192.168.1.99:3886. The default VRPN port is 3883.
Index A device, such as a Polhemus 3D tracker, can send data of multiple 3D trackers through one device. For example Tracker0@localhost can represent 10 different trackers, also named channels. Index specifies the starting index of the device.
Number of trackers After specifying the starting index, you can also specify the number of trackers (channels) to use. For example, an index of 0 and a number of trackers of 3 will result in the usage of channels 0, 1 and 2. A starting index of 4 and 2 trackers will result in the usage of channels 4 and 5.
Name Name prefix for each tracker.
Right/Front/Up Axis coordinate system (see “Adjusting the coordinate system”).
Scale A factor applied to each tracker value. This value must be positive and non-zero. It can be useful to adapt VRPN values to the dimensions of the current virtual world.
Wait for data Should the driver wait until new data arrives? This can be useful if your tracker sends updates at the same refresh rate as your display: for each frame you know you’ll have a new update, and you will not miss one.

Once the VRPN trackers are added, you will immediately be able to see if the data are streamed correctly:

If the VRPN server is not reachable, you will probably see the following error in the log window:

For more information about troubleshooting VRPN, see Troubleshooting VRPN on the knowledge base.
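
Once the tracker streams correctly, you can also read its data directly from a Unity script. The following is a minimal sketch: the device name “VRPNTracker0” is a placeholder for the name you chose in the configuration, and it assumes the standard MiddleVR C# API (GetTracker, GetPosition).

using UnityEngine;
using MiddleVR_Unity3D;

public class PrintVRPNTrackerPosition : MonoBehaviour
{
    private void Update()
    {
        if (MiddleVR.VRDeviceMgr == null) return;

        // "VRPNTracker0" is a placeholder: use the name given to the
        // tracker in the MiddleVR configuration tool.
        vrTracker tracker = MiddleVR.VRDeviceMgr.GetTracker("VRPNTracker0");

        if (tracker != null)
        {
            // Positions are expressed in MiddleVR's coordinate system, in meters.
            vrVec3 position = tracker.GetPosition();
            Debug.Log("Tracker position: " + position.x() + ", "
                      + position.y() + ", " + position.z());
        }
    }
}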

5.2.1.26 5.2.1.26: VRPN Axis

A VRPN Axis device can represent joystick axes, sliders or other analog information.

VRPN Axis options
Option Description
Address Address of the VRPN server, plus the name of a particular device on this server. Examples: Joystick@localhost, Mouse0@192.168.1.99, Sliders1@LabPC.Moulinsart.fr. You can also specify the port of the server: Joystick@localhost:3884, Mouse0@192.168.1.99:3886. The default VRPN port is 3883.
Number of axis Number of axes on this device.
Name MiddleVR device name.

5.2.1.27 5.2.1.27: VRPN Buttons

VRPN Buttons represent a button with a value of true or false.

VRPN Buttons options
Option Description
Address Address of the VRPN server, plus the name of a particular device on this server. Examples: Joystick@localhost, Mouse0@192.168.1.99, Sliders1@LabPC.Moulinsart.fr. You can also specify the port of the server: Joystick@localhost:3884, Mouse0@192.168.1.99:3886. The default VRPN port is 3883.
Number of buttons Number of buttons on this device.
Name MiddleVR device name.

5.2.2 5.2.2: Configuring the Wand

As mentioned previously, the Wand is composed of three parts: a tracker, joystick axes and buttons.

You have to manually add and configure the three devices that will make up the Wand. In the screenshot above, we have added a VRPN Tracker, a VRPN Axis device for the joystick axes, and a VRPN Buttons device for the buttons.

You can also use a simple joystick for the axis and buttons. You can also use mouse buttons.

You then have to specify, in the Wand section, which axis and buttons devices you want to use, and the ordering of the axes and buttons.

You also have to assign the tracker to the HandNode.

Finally you have to configure the usage of the Wand in Unity.

Wand options
Option Description
Device for wand navigation (axis) The device that will be used to get the wand joystick axis values
Horizontal axis index Index of the horizontal axis of the joystick
Horizontal axis scale Scale factor that should be applied to the value of the horizontal axis
Horizontal axis value Display value of the computed horizontal axis after scaling
Vertical axis index Index of the vertical axis of the joystick
Vertical axis scale Scale factor that should be applied to the value of the vertical axis
Vertical axis value Display value of the computed vertical axis after scaling
Device for wand interaction (buttons) The device that will be used to get the wand buttons states
Button 0 index Index of the primary button
Button 1 index Index of the secondary button

Sometimes one wand is not enough. In these cases, you will need to add and configure additional wands in your configuration. This way, you will be able to retrieve the wand you want from the device manager and get its tracker, axis and button data, as sketched below.
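
For example, here is a minimal sketch of reading the first wand’s axes and buttons from a Unity script, assuming the standard MiddleVR device manager API (GetWandHorizontalAxisValue, GetWandVerticalAxisValue, IsWandButtonPressed):

using UnityEngine;
using MiddleVR_Unity3D;

public class WandStateLogger : MonoBehaviour
{
    private void Update()
    {
        vrDeviceManager deviceMgr = MiddleVR.VRDeviceMgr;
        if (deviceMgr == null) return;

        // Axis values of the first wand, after the configured scaling.
        float horizontal = deviceMgr.GetWandHorizontalAxisValue();
        float vertical   = deviceMgr.GetWandVerticalAxisValue();

        // Button 0 is the primary button configured above.
        if (deviceMgr.IsWandButtonPressed(0))
        {
            Debug.Log("Primary button down. Axes: " + horizontal + ", " + vertical);
        }
    }
}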

5.3 5.3: Configuring 3D Nodes

5.3.1 5.3.1: 3D Node

Most of the time 3D nodes represent real world objects, like a screen, or a body part, like a user’s head, hand, or eyes (i.e. cameras).

In the 3D nodes view, you can configure what your VR system looks like “in the real world”: which body parts of your user you’re tracking, where the screens are located, and how the user sees the virtual world.

3D nodes are stored as a tree structure. Each node can have many children but only one parent. This hierarchy can be used to represent a user’s body and the natural relationship between its body parts. If I move my body, all my body parts are moving with me. If I move my arm, my hand will also move, etc.

Each node has a position and orientation in space with respect to its parent, represented by their X/Y/Z and Yaw/Pitch/Roll properties in Local and World space.

The world position and orientation of a node are computed by combining the transformations (position and orientation) of all its parents. The result is represented by the PositionWorld and OrientationWorld properties. The OrientationWorld read-only property is represented as a quaternion, whereas the editable orientations are represented as Euler angles. Internally, MiddleVR uses quaternions.

3D Nodes can also be assigned a tracker. For more information see the Tracker property below.

There are several types of 3D nodes.
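
From a script, you can retrieve any of these nodes by name and read its local or world transformation. A minimal sketch, assuming the standard MiddleVR C# API (MiddleVR.VRDisplayMgr.GetNode, GetPositionWorld) and a node named “HeadNode”:

using UnityEngine;
using MiddleVR_Unity3D;

public class PrintHeadPosition : MonoBehaviour
{
    private void Update()
    {
        if (MiddleVR.VRDisplayMgr == null) return;

        // "HeadNode" is the node name used in the default configurations.
        vrNode3D head = MiddleVR.VRDisplayMgr.GetNode("HeadNode");

        if (head != null)
        {
            // World position: the combination of all the parents' transforms,
            // in MiddleVR's coordinate system, in meters.
            vrVec3 position = head.GetPositionWorld();
            Debug.Log("Head: " + position.x() + ", " + position.y() + ", " + position.z());
        }
    }
}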

5.3.1.1 5.3.1.1: Creating nodes

You can create 3D nodes by clicking on the ‘+’ button. You will be presented with a window to choose which 3D node you want to create:

Choose the type of node you want to add and press Add.

Note: The new node will automatically be added as a child of the currently selected node.

5.3.1.2 5.3.1.2: Properties

A 3D node has several properties that you can modify in the configuration tool.

3D Nodes properties
Option Description
Name The name of the 3D node. The name can be used to find a particular node.
Tag A tag is a word representing semantic information. For example a node can have a “Hand” tag. Another node can also have the same “Hand” tag. You can then find all the nodes that have this particular tag, or behave in a certain way when you find a node with this tag. For example some objects would only react if they are touched by a 3D node that has the “Hand” tag.
Parent The parent is another 3D node. The parent cannot be a child of the node.
Tracker When a 3D node has a tracker assigned to it, the position and orientation data of the tracker will automatically be applied to the local position and orientation of the node.
X,Y,Z Local The local position of the node with respect to its parent coordinate system. In meters. Disabled if the node has a Tracker.
PositionWorld The world position, taking into account all the parents’ cumulated transforms. In meters.
Yaw,Pitch,Roll Local The local orientation of the node with respect to its parent coordinate system. In degrees. Disabled if the node has a Tracker.
OrientationWorld The world orientation, taking into account all the parents’ cumulated transforms, represented as a quaternion.

A 3D node also has some advanced properties.

Note: The UseTrackerX/Y/Z/Yaw/Pitch/Roll options are only available when the Tracker of the node is not Undefined.

3D Nodes advanced properties
Option Description
IsFiltered Filters the data coming from the tracker using the “One Euro Filter”. Enabled only if the node has a tracker. More info: http://www.lifl.fr/~casiez/1euro/.
CutOffFrequency Appears only if the filter is active. The “One Euro Filter” is mainly a low-pass filter, which means it reduces the movement jitter of frequencies higher than the cutoff frequency. The default value is 1Hz. The closer you go to 0 Hz, the smoother the movements are, and the higher the lag is.
Reactivity Appears only if the filter is active. Depending on the “CutOffFrequency” value, the “One Euro Filter” can add some lag to the movement. The “Reactivity” parameter is a scalar value that reduces this lag. You can find more details about this parameter in the filter calibration procedure below. Start from 0, where no reactivity is added, and slowly increase the value to get the best results without bringing back too much jitter.
X,Y,Z World The world position of the node. In meters. Disabled if the node has a Tracker.
Yaw,Pitch,Roll World The world orientation of the node. In degrees. Disabled if the node has a Tracker.
UseTrackerX Applies the X information from the tracker to the node if checked.
UseTrackerY Applies the Y information from the tracker to the node if checked.
UseTrackerZ Applies the Z information from the tracker to the node if checked.
UseTrackerYaw Applies the Yaw information from the tracker to the node if checked.
UseTrackerPitch Applies the Pitch information from the tracker to the node if checked.
UseTrackerRoll Applies the Roll information from the tracker to the node if checked. (Did you guess?)

5.3.1.3 5.3.1.3: Calibration

You can calibrate a 3D node to have a neutral position and/or orientation. What does it mean?

It means that:

Note: Only a node without a tracker can have its transformation calibrated. If your node has a tracker, you can either use “Calibrate Parent”, or create a child node that you can calibrate.

MiddleVR offers different ways of calibrating a node, notably “Reset Transform” and “Calibrate Parent”.

“Reset Transform” resets the local transformation of the selected node.

The “Calibrate Parent” action is particularly useful when the parent represents the origin of a tracker (tracker’s base), for example the Razer Hydra’s base, a Kinect or a camera.

If you want the nodes to be correctly positioned in world space, you have to place the camera node correctly. This can be a tedious task to do manually: you have to measure the distance from the center of world space as well as the correct orientation.

If you set a tracked object at the center of the physical world, with a neutral orientation in the real world, you will notice in MiddleVR that the corresponding node is probably not at a neutral place/orientation.

You can simply select this node in MiddleVR and choose “Calibrate Parent”. This will automatically set the transformation of the parent so that the selected node gets a neutral transformation. You will also notice that the parent now has a position/orientation corresponding to the physical position/orientation of the tracker’s base in world space.

5.3.1.4 5.3.1.4: Filter Calibration

Here is the simple two-step procedure proposed by the authors of the “One Euro Filter” to set the two filter parameters to minimize jitter and lag when tracking human motion: first, set ‘Reactivity’ to 0 and, while moving the tracked device slowly, adjust ‘CutOffFrequency’ until jitter disappears while keeping an acceptable lag; then, while moving the device quickly in different directions, increase ‘Reactivity’ until the lag is minimized.

Note that parameters ‘CutOffFrequency’ and ‘Reactivity’ have clear conceptual relationships: if high speed lag is a problem, increase ‘Reactivity’; if slow speed jitter is a problem, decrease ‘CutOffFrequency’.
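
To make the roles of the two parameters concrete, here is a compact, self-contained C# sketch of the filter’s principle for a single scalar value. This is an illustration based on the published description of the One Euro Filter, not MiddleVR’s internal implementation; “CutOffFrequency” plays the role of the minimum cutoff and “Reactivity” the role of the beta parameter.

using System;

// Illustration of the One Euro Filter principle (http://www.lifl.fr/~casiez/1euro/).
// Not MiddleVR's internal code.
public class OneEuroFilter1D
{
    private readonly double m_MinCutoff; // "CutOffFrequency", in Hz
    private readonly double m_Beta;      // "Reactivity"
    private double m_PrevX, m_PrevDx;
    private bool m_HasPrev;

    public OneEuroFilter1D(double minCutoff, double beta)
    {
        m_MinCutoff = minCutoff;
        m_Beta = beta;
    }

    // Standard exponential smoothing with a frequency-derived factor.
    private static double Smooth(double x, double prev, double cutoff, double dt)
    {
        double alpha = 1.0 / (1.0 + 1.0 / (2.0 * Math.PI * cutoff * dt));
        return alpha * x + (1.0 - alpha) * prev;
    }

    public double Filter(double x, double dt)
    {
        if (!m_HasPrev) { m_HasPrev = true; m_PrevX = x; m_PrevDx = 0.0; return x; }

        // Estimate the speed and smooth it with a fixed 1 Hz cutoff.
        double dx = (x - m_PrevX) / dt;
        m_PrevDx = Smooth(dx, m_PrevDx, 1.0, dt);

        // The faster the movement, the higher the cutoff: less lag at high
        // speed ("Reactivity"), less jitter at low speed ("CutOffFrequency").
        double cutoff = m_MinCutoff + m_Beta * Math.Abs(m_PrevDx);
        m_PrevX = Smooth(x, m_PrevX, cutoff, dt);
        return m_PrevX;
    }
}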

5.3.2 5.3.2: Camera

A camera is a 3D node, so it inherits all the 3D node properties: tracker, local position, local orientation…

Regular cameras in 3D engines are said to be symmetrical: to compute a correct perspective, they assume that the viewer is always exactly in front of the center of the screen:

The particularity of a VR camera is that it can be assigned a screen.

A screen is exactly like a window into the virtual world. Once the camera is associated with a screen, its view frustum (pyramid of vision) is always totally constrained by this screen. This means that the view depends on the position of the camera but also on the position of the screen. If the user is not exactly facing the center of the screen, the view frustum will not be symmetrical: the camera is said to be asymmetrical.

In the following pictures, the screen is represented as a gray rectangle. Notice how the camera frustum always matches the screen:

If the camera or the screen moves, the camera frustum will always match the screen.

This is exactly like looking through a window: when the camera is close to the screen, it’s like standing close to a window: you see a lot of the outside world.

When you move away from the window, your field of view gets narrower. When you move left or right, you see a different part of the world.

Note: When a camera is assigned a screen, its orientation is always constrained to face the normal of the screen.

A screen can be assigned to multiple cameras.
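
To make the window metaphor above concrete, here is a simplified sketch (not MiddleVR’s internal code) of how the asymmetric frustum follows from the eye position. It assumes the eye position is expressed in the screen’s local frame: origin at the screen center, X to the right, Y up, Z pointing out of the screen towards the viewer.

// Simplified sketch (not MiddleVR's internal code) of an off-axis frustum.
public static class OffAxisFrustum
{
    // Returns the left/right/bottom/top extents of the near plane, for an
    // eye at (ex, ey, ez) in the screen's local frame described above.
    public static void Compute(
        float screenWidth, float screenHeight,
        float ex, float ey, float ez, float near,
        out float left, out float right, out float bottom, out float top)
    {
        float hw = screenWidth * 0.5f;
        float hh = screenHeight * 0.5f;

        // Similar triangles: the screen edges, seen from the eye, are
        // projected onto the near plane. If the eye is off-center
        // (ex or ey != 0), left/right and bottom/top become asymmetric.
        left   = (-hw - ex) * near / ez;
        right  = ( hw - ex) * near / ez;
        bottom = (-hh - ey) * near / ez;
        top    = ( hh - ey) * near / ez;
    }
}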

5.3.2.1 5.3.2.1: Properties

Camera properties
Option Description
VerticalFOV Vertical field of view, in degrees.
Near The near clipping plane distance.
Far The far clipping plane distance.
Screen Assign a screen if you want asymmetrical cameras.
UseViewportAspectRatio Does this camera use its own AspectRatio or should it use the aspect ratio of its viewport? Disabled if the node has a Screen.
AspectRatio The aspect ratio of the camera. Disabled if the node has a Screen or if UseViewportAspectRatio is true.

5.3.3 5.3.3: Stereoscopic camera

To get a stereoscopic (S3D) rendering, you need to have two views of the virtual world, like your two eyes do for the real world.

Stereoscopic cameras automatically create two cameras, a left and a right camera, as children. Those cameras act as if there were a screen placed at the distance specified by the “ScreenDistance” property. The size of this implicit screen is determined by the screen distance, the field of view of the camera, and its aspect ratio (see the sketch below).

Stereoscopic cameras inherit all of a camera’s properties, and add the ScreenDistance and the InterEyeDistance.
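
As an illustration of how the implicit screen size follows from those properties, here is a small self-contained sketch (ours, not MiddleVR code):

using System;

public static class ImplicitScreen
{
    // Width and height of the implicit screen of a stereoscopic camera,
    // derived from the screen distance, vertical FOV and aspect ratio.
    public static void ComputeSize(float screenDistance, float verticalFovDegrees,
                                   float aspectRatio, out float width, out float height)
    {
        // Half of the vertical field of view opens above the view axis,
        // half below, at the given screen distance.
        double halfFov = verticalFovDegrees * Math.PI / 180.0 / 2.0;
        height = (float)(2.0 * screenDistance * Math.Tan(halfFov));
        width = height * aspectRatio;
    }
}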

5.3.3.1 5.3.3.1: Properties

Stereoscopic camera properties
Property Description
ScreenDistance The zero-parallax distance. Disabled if the camera has a Screen.
InterEyeDistance The distance between the left and right cameras.

The two cameras are represented as follows:

5.3.4 5.3.4: Screen

The two main attributes of a screen are its position in space and its size.

Screens are assigned to cameras.

A screen is a 3D node, so it inherits all of a 3D node’s properties. It simply adds a Width and a Height.

This also means that a screen can be assigned a tracker!

A screen can be assigned to multiple cameras.

5.3.4.1 5.3.4.1: Properties

Screen properties
Property Description
Width Width of the screen.
Height Height of the screen.

5.4 5.4: Configuring viewports

5.4.1 5.4.1: Viewport

A viewport is a rectangular area on your desktop in which your camera will display its rendering. A viewport must be assigned a camera.

If you want to display a stereoscopic picture on a particular viewport, you must assign a Stereoscopic Camera.

This will enable the stereoscopic options, such as StereoMode or StereoInvertEyes.

There are two stereo modes: OpenGL quad-buffer (active stereoscopy) and side-by-side (passive stereoscopy).

You can also manually create a passive stereo viewport by creating two distinct viewports and assigning them the left and right cameras.

5.4.1.1 5.4.1.1: Properties

Viewport properties
Property Description
Name Name of the viewport
Left The left pixel coordinate of the viewport. Can be negative, to display on a secondary monitor for example.
Top The top pixel coordinate of the viewport. Can be negative, to display on a secondary monitor for example.
Width Width of the viewport in pixels.
Height Height of the viewport in pixels.
Camera The camera assigned to this viewport. Required.
StereoMode The stereoscopic mode of the viewport. Will be enabled if the Camera is a Stereoscopic camera.
CompressSideBySide In case your display scales the side-by-side image horizontally, use this option to compress it.
StereoInvertEyes Reverse the left-right eye rendering.
OculusRiftWarping Used by the Oculus Rift configurations to provide proper warping. Make sure to use a predefined configuration: the correct warping is also dependent on the proper viewport size, aspect ratio, cameras field of view etc.

5.4.2 5.4.2: Window

MiddleVR will create a single window that contains all your viewports. You can control the behavior with the following properties:

Window properties
Property Description
Fullscreen Will the window go into fullscreen mode?
AlwaysOnTop Will the window remain above all other windows?
WindowBorders If the window is not fullscreen, keep the borders if set to true, remove them if set to false.
ShowMouseCursor Hide the mouse cursor if set to false.
VSync Will the window wait for vertical synchronization? This will be forced to true when an active stereoscopy viewport is detected.
GraphicsRenderer Force use of a graphics renderer.
Anti-Aliasing Sets the anti-aliasing level. Currently only available in Forward Rendering.
ChangeWorldScale (Advanced) Enables the World Scale option. When turned off, the world scale value below is ignored.
WorldScale (Advanced) Scales the positions of the VR nodes so that the virtual world appears bigger or smaller. The result is that the physical user is indirectly scaled compared to the virtual scene. For example, setting the value to 2 will make the physical user twice as tall; said differently, the size of the virtual world will be divided by 2.

If your viewports span multiple displays, you shouldn’t use the Fullscreen mode, since it is only able to use your primary display. If you want viewports on several displays that look like fullscreen, disable Fullscreen and disable Borders.

5.4.3 5.4.3: Homography

In some cases, a homography transformation of the displayed image is needed to straighten the final visual result. This technique is particularly helpful for calibrating projector displays.

This feature is handled by the “Advanced” section of the viewport parameters:

Homography parameters
Property Description
UseHomography Will this viewport use homography transformation?
HomographyTopLeftCornerOffsetX Horizontal position offset of the top left corner destination point. This parameter is in pixels. Zero leaves the corner at its original position.
HomographyTopLeftCornerOffsetY Vertical position offset of the top left corner destination point. This parameter is in pixels. Zero leaves the corner at its original position.
HomographyTopRightCornerOffsetX Horizontal position offset of the top right corner destination point. This parameter is in pixels. Zero leaves the corner at its original position.
HomographyTopRightCornerOffsetY Vertical position offset of the top right corner destination point. This parameter is in pixels. Zero leaves the corner at its original position.
HomographyBottomRightCornerOffsetX Horizontal position offset of the bottom right corner destination point. This parameter is in pixels. Zero leaves the corner at its original position.
HomographyBottomRightCornerOffsetY Vertical position offset of the bottom right corner destination point. This parameter is in pixels. Zero leaves the corner at its original position.
HomographyBottomLeftCornerOffsetX Horizontal position offset of the bottom left corner destination point. This parameter is in pixels. Zero leaves the corner at its original position.
HomographyBottomLeftCornerOffsetY Vertical position offset of the bottom left corner destination point. This parameter is in pixels. Zero leaves the corner at its original position.

By default, the image corner coordinates are set to their standard positions. Changing these corner coordinates will stretch the viewport to fit the geometry described by the four points. Here is an example showing the same view without homography and with homography, using a (100, 100) pixel offset for the top left corner and a (-100, 100) pixel offset for the top right corner.

Without homography:

With homography:

You can correct a basic keystone, but you can go even further in the correction:

We offer a visual configuration tool for the homography. You can see it in action here: http://www.youtube.com/watch?v=_eZ5LKQqtjU.

You can download it on the Download page of our website.

5.4.4 5.4.4: Debug information

MiddleVR has a few options to further investigate issues:

Debug properties
Option Description
LogLevel The level of logs that will be printed in log files when the application runs.
LogInSimulationFolder Writes the logs in the .exe folder (in a new folder: MiddleVRLogs/). Note: when used in a cluster system, if the cluster clients use a shared network folder, all clients will write their logs over the network. Depending on the log level, this can slow the application down significantly.
EnableCrashHandler For more serious crashes, this option will enable even more information to be logged. Warning: this option can make Unity more sensitive. If an exception is raised in Unity, the Crash Handler will catch it and quit immediately.

The log level can be from 0 to 7:

5.5 5.5: Running simulations

The “Simulations” window allows you to manage the simulations that you want to run, the different configurations that can be used with them, and the quick links.

The goal is to simplify the management of your applications and VR systems. You can simply choose to run an application with different configurations, run different applications on the same VR system, or create an association of a simulation and a specific configuration, saving you time.

The current command line that will be executed is displayed in “Current command line”.

Pressing “Run” or using the “Ctrl+R” shortcut will run the current command line.

If the configuration is a cluster configuration, it will send the command to all the cluster daemons, including the master.

Note: This means that you need to have the cluster daemon running on the server as well.

The Simulations tab is split into two views: the Quick Links view and the Simulations view.

5.5.1 5.5.1: The Quick Links view

The main list manages the quick links. A quick link is the combination of a simulation, a configuration and a custom argument. By clicking the “+” or “-” button you can add or remove quick links from the list. By clicking the “+” button, the “Add Quick Link” window will appear.

You can sort the list as you wish by dragging and dropping a quick link where you want in the list.

In this window you can create quick links. To create a quick link in this view, you only need to:

By right-clicking on a quick link, a menu lets you choose between removing the item from the list or renaming it.

Note: You can also edit the name of the selected quick link by pressing “F2”.

Double clicking on a quick link will load its configuration and execute the linked simulation using the custom arguments it contains.

5.5.2 5.5.2: The Simulations view

The left list manages the simulations, the right list manages the configurations. In the simulations list, next to each simulation, you can see its path on your computer. Each configuration is within a category (HMD, Cube, Cluster, etc.). You can add or remove simulations and configurations by pressing the “+” or “-” button. You can remove custom configuration categories by pressing “-”. Removing a simulation or a configuration will only remove it from the list, not from the computer. Removing a configuration category will remove all its configurations from the list. You can sort the simulations list as you wish by dragging and dropping a simulation where you want in the list.

Pressing the “New category” button will create a new custom category.

Double-clicking on an application in the simulations list will open a file explorer in the folder containing it.

By right-clicking on a simulation, a menu lets you choose between removing the item from the list or showing it in the explorer.

Double-clicking on a configuration in the configurations list will load it.

Right-clicking on a configuration category opens a menu that lets you choose between removing it or renaming it. Right-clicking on a configuration opens a menu that lets you choose between removing the item from the list, renaming it, duplicating it or showing it in the explorer. Renaming a configuration will also rename its file. Duplicating a configuration will duplicate its file on the disk.

Note: You can also edit the name of the selected configuration or the selected category by pressing “F2”.

Note: Configurations and categories marked with a “*” are read-only, which means that you cannot rename them. You cannot remove a read-only category.

You can add a custom argument to the command line in “Custom arguments”.

By pressing the “Add to quick links” button, a new quick link will be created in the “Quick Links” view. This quick link will have a name composed from the simulation and the configuration (e.g. “Shadow_Forward - HMD-Oculus-Rift-DK2”); you can rename it as you like afterwards in the “Quick Links” view. It will be linked to the selected simulation and the selected configuration, and may contain a custom argument to be added to the command line. This custom argument is the one in “Custom arguments”.

6 6: MiddleVR for Unity

6.1 6.1: Introduction

6.1.1 6.1.1: Integration

The core of MiddleVR has no knowledge of a particular 3D engine.

This means that for each 3D engine, a small interface has to be created. This interface will make the bridge between MiddleVR on one side, and the 3D engine on the other side. It will configure 3D nodes, viewports, and give access to devices.

Typically, this bridge is based both on the MiddleVR API and the host 3D engine API. It will gather information from MiddleVR and use that to configure the 3D engine.

For example, the interface will load a particular configuration file, ask MiddleVR for the number of nodes and their properties, and create those nodes as 3D nodes in the 3D engine. In Unity, this translates MiddleVR 3D nodes into GameObjects.

The interface will also read information about viewports, cameras, and everything needed to create a VR experience.

Each frame, MiddleVR will then update all the nodes and cameras it has created inside Unity with the values from the devices and the computation of the cameras’ projection matrices.

Note: By default MiddleVR will disable all your cameras and only work with the cameras you’ve defined in MiddleVR.

Note: Before exporting your application to a standalone player, make sure to read the “Exporting to a standalone player” section below.

6.1.2 6.1.2: Unity coordinate system

As said previously, MiddleVR uses a right-handed coordinate system, where X points to the right, Y points away from the user towards the screen, and Z points up:

Unity’s coordinate system is left-handed, with X pointing to the right, Y pointing up, and Z pointing away from the user, towards the screen:

When updating Unity’s nodes and cameras, MiddleVR will automatically convert the 3D information from one coordinate system to the other.

But when you read the information of a MiddleVR node or of a 3D tracker directly from a Unity script, it will be in MiddleVR’s coordinate system. You then have to convert this 3D information into Unity’s coordinate system. MiddleVR provides methods to do exactly that; see section “Input devices”. A sketch of the underlying conversion is shown below.
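
The following is a minimal sketch of the conversion, shown for illustration only: MiddleVR ships its own helper methods for this (see the “Input devices” section), which you should prefer.

using UnityEngine;

// Illustration only: MiddleVR provides its own conversion helpers.
public static class MVRConvert
{
    // MiddleVR: right-handed, X right, Y forward, Z up.
    // Unity:    left-handed,  X right, Y up,      Z forward.
    public static Vector3 ToUnity(float x, float y, float z)
    {
        // Swap the Y (forward) and Z (up) axes.
        return new Vector3(x, z, y);
    }

    public static Quaternion ToUnity(float qx, float qy, float qz, float qw)
    {
        // Swapping two axes is a reflection, so the vector part is swapped
        // and the rotation direction is inverted.
        return new Quaternion(-qx, -qz, -qy, qw);
    }
}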

6.2 6.2: Adding MiddleVR to your Unity application

6.2.1 6.2.1: Import the MiddleVR package

(If you are upgrading an old Unity project, make sure to read this article: Upgrade MiddleVR Unity Package.)

MiddleVR is split in two parts:

To import the MiddleVR package, open the Asset menu, then Import package and Custom package:

You will find the MiddleVR UnityPackage in the data folder of your MiddleVR installation, typically C:\Program Files (x86)\MiddleVR\data:

Open the MiddleVR.unitypackage file.

This will open a new Unity window:

Simply click “Import”.

The package will then be imported and is now available in your project.

6.2.2 6.2.2: Add the VR manager to your project

Importing the package is not sufficient: you also need to add the component that will manage all the VR aspects of your project, the VR manager.

Open the MiddleVR folder in the Project tab.

Drag and drop the VRManager prefab to the Hierarchy tab of your project:

6.3 6.3: The VR manager

6.3.1 6.3.1: Introduction

The VRManager is simply a Unity GameObject with several scripts attached to it:

Those scripts handle all the management of 3D nodes, cameras, viewports, devices and clustering.

The VRManager will initialize MiddleVR with the specified options, especially the configuration file. It will create the 3D hierarchy of nodes that you’ve specified in the configuration tool and the cameras with their respective viewports.

It will then automatically update MiddleVR, and then reflect all the updates of 3D nodes and cameras to Unity.

All the devices data will also be updated so you have the latest information about your input devices.

Note: The VR Manager will not have any effect on your application before you press play. More precisely, no object of the VR hierarchy (3D nodes, cameras, screens) will be created unless you run your application.

MiddleVR automatically disables any existing camera in your scene for performance reasons: the more cameras render their view, the slower your application might be. You can change this behavior with the “Disable Existing Cameras” option.

6.3.2 6.3.2: VR Manager properties

VR Manager properties
Property Description
Config File Specifies the path to the configuration file that should be used. The path can be absolute or relative. In the editor, the path will be relative to the project folder. In the player, the path will be relative to the .exe file.
VR System Center Node Specify the GameObject that will be used as VRSystemCenterNode for the VR hierarchy.
Template Camera Camera to duplicate instead of creating new cameras for each VR camera. If you set the Template Camera option to an existing camera in your scene, this camera will be duplicated for each VR camera instead of creating a new one. This is useful if you want to have on all VR cameras scripts (like image effects [SSAO, Blur…] ), parameters (clear color), or any other component like Flare Layers, GUILayer etc.
Show Wand Show the Wand geometry. Pressing Shift-W will toggle wand on/off.
Use VR Menu Activate/disable availability of VR menus.
Navigation Choose the default navigation method you want to use. Pressing Shift-N will switch between the three navigation modes.
Manipulation Choose the manipulation method you want to use.
Virtual Hand Mapping Choose the way the VRWand will move. “Direct” will use the direct tracking data. “Gogo” will upscale the hand movements as in the Gogo interaction technique.
Show Screen Proximity Warnings Show proximity visual warning when the head or another watched node is too close to a screen.
Fly Allow a free navigation (Fly enabled) or keep the current height (Fly disabled).
Navigation Collisions Enable/disable navigation collisions.
Manipulation Return Objects Enable/disable automatic return of a manipulated object to its original position when it is released.
Show FPS Display the number of frames per second. Pressing Shift-D (like “D”ebug) will toggle display on/off.
Disable Existing Cameras Will parse the scene to find existing cameras that don’t belong to the MiddleVR hierarchy and disable them. This is mainly done for performance reasons.
Grab Existing Nodes Will parse the scene to find existing nodes that match a node name in the MiddleVR hierarchy. The existing node will then be inserted as part of the MiddleVR hierarchy.
Debug Nodes Will display the nodes of the MiddleVR hierarchy as transparent blue cubes. This allows for easy debugging of their position and orientation.
Debug Screens Will display the screens of the MiddleVR hierarchy as transparent blue rectangles. This allows for easy debugging of their position, orientation and size.
Logs To Unity Console Will redirect MiddleVR logs to the Unity console. Works only in Editor mode. This setting is useful when you are working with a high log level but want to temporarily turn off log redirection to the Unity console in order to get better performance in the Unity Editor.
Quit On ESC When in a standalone player, will exit the application if the Escape key is pressed.
Don’t Change Window Geometry MiddleVR will not try to change the player’s window size, position or resolution.
Simple Cluster Enable the Simple Cluster option. See section “Clustering” for more information.
Simple Cluster Particles Enable/disable synchronization on a cluster for particles.
Force Quality Force a specified Player Quality. See below for more information.
Force Quality Index Index of the Player Quality to force. See below for more information.

6.3.3 6.3.3: Force Player Quality

Usually, when you start a Unity application/player, a startup window pops up asking for the resolution you want to use and the player quality that should be set. MiddleVR deactivates this window because it automatically handles the resolution. When exporting a Unity application, you can select what the default quality should be:

In this example, the default player quality will be “Good”, because the second column (Windows build) is checked green.

There is currently a bug in Unity where the quality currently selected in the Unity editor (here in blue) will set the quality of the next Unity player that is run. In cluster mode, this bug can also appear and select a random quality.

The “Force Quality” option of the VR Manager allows you to force the quality specified by “Force Quality Index” to be applied. The index starts at 0 for the first quality (here “Fastest”).

6.4 6.4: Running your application in the Unity editor

As soon as you have configured the VR Manager, you are ready to run your application.

If the “Disable Existing Cameras” option is set, the VR Manager will start by disabling all the existing cameras of your application.

It will then create the whole VR hierarchy by creating a Game Object for each 3D node, including Screens. A regular Unity camera will be created for each MiddleVR camera found in the VR hierarchy.

Note: It is important to understand that the VR hierarchy consists of objects created only when you press play. As soon as you press stop, those objects are deleted. The VR hierarchy is recreated each time you press play after having stopped the application. Pausing the application will not destroy the VR hierarchy.

If you change the configuration file, the VR hierarchy will be recreated as specified in this updated configuration file. This will ensure that your application is always up to date with your VR system.

Note: As the viewport geometry cannot be programmatically changed while in the Unity Editor, the geometry and aspect ratio of the viewports created by MiddleVR will look different than what they will be in the player.

Note: Active stereoscopy will not be displayed while in the Unity editor. Only one eye will be displayed, in monoscopy.

6.5 6.5: Exporting to a standalone player

You can choose to export either in 32-bit (x86) or 64-bit (x86_64):

Note: Starting with MiddleVR 1.2, you shouldn’t have to modify the following options manually; they should be modified automatically when you drag and drop the VRManager into your project. It might still be interesting to understand what is happening behind the scenes.

Before exporting your application as a standalone player, you also have to make sure Unity is correctly configured so it does not get in the way of MiddleVR.

Go in the player settings (Edit > Project Settings > Player) and make sure the following parameters match the screenshot:

The “Default is Full Screen” option will make sure that Unity does not override MiddleVR’s window configuration.

The “Display Resolution Dialog” option will also make sure that Unity does not override MiddleVR’s window configuration. This is especially important in cluster mode: you probably don’t want to close the resolution dialog on each cluster node each time you run your application.

The “Player Log” might slow down the cluster if a slave is trying to write its log through the network.

6.5.1 6.5.1: Additional parameters for active stereo (OpenGL Quad-Buffer)

If you’re running in active stereo (OpenGL Quad-Buffer), you should also have the vertical synchronization (VSync) de-activated in the menu Edit > Project Settings > Quality.

When importing the MiddleVR.unitypackage, VSync is automatically disabled for all Qualities (Fast, Simple, Good, Beautiful, Fantastic…).

MiddleVR will internally handle the VSync.

Note: Make sure that the Quality line selected (above in blue) is the one you want to use with MiddleVR. It seems Unity will use this selection as the default quality setting for the player.

6.6 6.6: Running your application as a standalone application

The preferred way to run your VR application is to use the Simulations window of the configuration editor:

You can also simply run it by double-clicking the generated .exe file.

Note: With applications generated by Unity 4, MiddleVR can’t automatically remove the border of the window anymore. You need to add the -popupwindow argument on the command line. This is automatically done when launching your application from the configuration editor.

Note: If you want to copy your application on another computer, MiddleVR will first have to be installed on the system because MiddleVR is not embedded in the data folder of the application.

Note: The first time you run your application in OpenGL quad-buffer (active stereo) mode, MiddleVR will copy an important file next to the application: d3d9.dll, which allows MiddleVR to get important information from your application. After the file is copied, MiddleVR will exit the application. Simply run it again and the application will run in stereo.

You can override the configuration that was specified in Unity by adding the command line option: --config c:\my_folder\my_config.vrx.
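
For example, a manual launch from the command line might look like this (the executable name is a hypothetical placeholder):

MyApplication.exe -popupwindow --config "c:\my_folder\my_config.vrx"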

6.7 6.7: How to attach your nodes in the VR hierarchy?

Once you have configured the user’s hand to move with a tracker, you might want a 3D object that you’ve created to move with it.

For example you might have added a tennis racket to your project and want it to be attached to the user’s hand.

There are two options for that:

6.7.1 6.7.1: Grab existing nodes

The easiest option is to give your object the same name as a node in the VR hierarchy and check the “Grab Existing Nodes” option of the VR manager.

If this option is activated, when the VR manager initializes, it will parse the scene for nodes that have the same name as the nodes in the VR hierarchy, and insert those instead of creating an empty GameObject.

For example if in the VR hierarchy there is a node named HandNode, and you named the parent of your 3D racket model also HandNode, the VR manager will simply “grab” this node and insert it in the VR hierarchy. The VR manager will then use this node as a parent for the rest of the sub-hierarchy.

The children of your 3D model will all be moved with its parent.

6.7.2 6.7.2: Attach to node

The second option is simply to attach the “VR Attach to Node” script to the node that you want to insert in the VR hierarchy.

You can find this script in the “Scripts/Interactions” folder of the MiddleVR package.

Parameter Description
VRParent Node Name of the parent vrNode3D. For example “HandNode”, “HeadNode”.
Keep Local Position/Rotation/Scale In Unity, by default, when parenting an object to another, the local transform of the object is modified so that it keeps its world transform. This means the object does not move when its parent changes. If you want to use the local transform to specify an offset from the VRParent Node to the object, enable “Keep Local Position/Rotation/Scale”.

6.8 6.8: Using a template camera

A template camera can be configured as an option of the VR Manager.

If you set the Template Camera option to an existing camera in your scene, this camera will be duplicated for each VR camera instead of creating a new one. This is useful if you want all VR cameras to share scripts (such as image effects like SSAO or Blur), parameters (such as the clear color), or any other component (Flare Layer, GUILayer, etc.).

Note: Some post-processing effects might not work when using active stereoscopy / clustering. See “VR-compliant post-processing effects in Unity”.

6.9 6.9: Wand interactions

When you have correctly configured your Wand in MiddleVR, you can use standard interactions like:

Here’s the 3D representation of the wand:

The first thing you have to do is enable the Show Wand option of the VRManager:

The wand is visible and active by default.

The VRWand node has several scripts attached to it that handle and configure interactions.

Interactions configured by the VRWand.
Interactions configured by the VRWand.

You can safely deactivate the scripts that you don’t need.

As you can see in the parameters of the VRAttach To Node script, the Wand is attached to the HandNode by default. You can change the node name to HeadNode, for example, if you want to navigate in the direction you are looking.

Note that although MiddleVR supports multiple wands, the 3D representation and the interactions only use the first wand of the configuration. The other wands’ signals are accessible through the device manager, and you can use them to write your own interaction scripts.

6.9.1 6.9.1: Navigation

The navigation is handled by one of the Navigation Interaction scripts attached to the VRWand. The VRManager parameter “Navigation Method” allows you to choose the navigation mode for your simulation.

Some of these navigation methods have a “Fly” property to lock/unlock vertical translations. This parameter can be directly checked/unchecked on the interaction script before building, but you can also press Shift-F to toggle the fly mode when the simulation is running.

When the simulation is running, you can also switch to another navigation method by pressing Shift-N.

Some navigation techniques require a navigation button. By default, this button is wand button 1. In the MiddleVR configuration, you can therefore map your chosen device button to Wand button 1 to make it the navigation button.

Standard Navigation Methods
Navigation Description
Joystick Use the Wand’s joystick to navigate in the direction pointed by the Wand.
Elastic Stretch a virtual elastic in the direction you want to move. The more you stretch, the faster you go.
Grab World Grab the air with the Wand and drag the world towards you.

If you want to create your own navigation, simply select “None” in Navigation Method of the VRManager.
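
If you do, here is a minimal sketch of a custom joystick-style navigation. It only uses API calls documented later in section “Programming interactions” (GetNode, GetWandVerticalAxisValue, the MVRTools conversions and GetDeltaTime); the node names “VRSystemCenterNode” and “HandNode” are the defaults mentioned in this guide, and MyCustomNavigation is a hypothetical script name. Treat it as a starting point, not a drop-in replacement:

using UnityEngine;
using MiddleVR_Unity3D;

public class MyCustomNavigation : MonoBehaviour
{
    public float Speed = 1.0f; // translation speed, in meters per second

    void Update()
    {
        if (MiddleVR.VRDeviceMgr == null || MiddleVR.VRDisplayMgr == null)
            return;

        // Forward/backward value of the wand joystick, in [-1, 1]
        float forward = MiddleVR.VRDeviceMgr.GetWandVerticalAxisValue();

        vrNode3D center = MiddleVR.VRDisplayMgr.GetNode("VRSystemCenterNode");
        vrNode3D hand   = MiddleVR.VRDisplayMgr.GetNode("HandNode");
        if (center == null || hand == null)
            return;

        // Direction pointed by the hand, converted to Unity coordinates
        Vector3 dir = MVRTools.ToUnity(hand.GetOrientationVirtualWorld()) * Vector3.forward;
        Vector3 pos = MVRTools.ToUnity(center.GetPositionVirtualWorld());

        // Use MiddleVR's cluster-synchronized delta time (see "Time / Delta time")
        pos += dir * forward * Speed * (float)MiddleVR.VRKernel.GetDeltaTime();

        center.SetPositionVirtualWorld(MVRTools.FromUnity(pos));
    }
}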

6.9.1.1 6.9.1.1: Joystick Navigation Method

You can navigate using the axis of the Wand joystick. The navigation will move the selected node (Navigation Node) in the direction that the Reference Node is pointing.

By default, the navigation moves the VRSystemCenterNode in the direction pointed by the HandNode. It will also rotate left or right if you move the horizontal axis of the Wand.

If you want to sidestep, point your hand to the right or to the left and press forward.

By default you will be able to fly in the scene. If you want to stay at the current height, uncheck the Fly option.

Joystick Navigation options
Option Description
Direction Reference Node Move in the direction pointed by this node. By default it will move in the direction of “HandNode”.
Turn Around Node The rotation will occur around the given node. By default it will rotate around the “HeadNode”.
Translation Speed Translation speed, in meters per second.
Rotation Speed Rotation speed, in degrees per second.
Fly Allow a free navigation (Fly) or keep the current height (Fly disabled).

6.9.1.2 6.9.1.2: Elastic Navigation Method

You can navigate by pressing the navigation button of the Wand and stretching the elastic gizmo that appears, which binds the Wand to its stretch start position. The navigation moves the VRSystemCenterNode in the direction the elastic gizmo is pointing.

By default, the navigation is moving the CenterNode in the direction pointed by the elastic gizmo. The more the elastic is stretched, the faster the navigation is. The same concept applies to rotation: the rotation difference between now and the moment you pressed the navigation button will make the VRSystemCenterNode rotate the same way.

If you want to sidestep, press the navigation button and stretch the elastic to the right or to the left.

By default you will be able to fly in the scene. If you want to stay at the current height, uncheck the Fly option.

Elastic Navigation options
Option Description
Reference Node The node that defines the start and end positions of the elastic. By default it will use the node “HandNode”.
Translation Speed Translation speed, in meters per second.
Rotation Speed Rotation speed, in degrees per second.
Distance Threshold The minimum elastic stretch distance to start moving.
Angle Threshold The minimum rotation angle between the start and end orientations to make a rotation movement.
Use Translation X Is translation along the X axis allowed?
Use Translation Y Is translation along the Y axis allowed?
Use Translation Z Is translation along the Z axis allowed?
Use Rotation Yaw Is horizontal rotation allowed?
Fly Allow a free navigation (Fly) or keep the current height (Fly disabled).

6.9.1.3 6.9.1.3: Grab World Navigation Method

You can grab a position in the air with the navigation button and drag yourself in the virtual space. The navigation will move the VRSystemCenterNode to make the Reference Node stay static as if your hand was grabbing a handle fixed to the world.

By default, the navigation moves the CenterNode relative to the grabbing HandNode. It will also rotate left or right around the Reference Node if you rotate it.

If you want to sidestep, grab the world and drag yourself to the right or to the left.

Grab World Navigation options
Option Description
Reference Node Move relatively to this node when grabbing. By default it will be “HandNode”.

6.9.1.4 6.9.1.4: Navigation Collisions

You can activate the simple VRNavigationCollision script to add collisions to the navigation interaction. This script is attached to the “Wand” node. When enabled, VRNavigationCollision searches for an activated navigation interaction and makes sure that the specified collision node does not penetrate the scene colliders but slides over them instead.

VRNavigationCollision script options
Option Description
Collision Node Name The name of the 3D node that will not be allowed to penetrate scene colliders. By default we use the “HeadNode” to do the collisions.
Collision Distance The minimum distance allowed between the collision node position and the collider surface.

6.9.1.5 6.9.1.5: Screens Proximity Warning

You can activate the parameter “Show Screen Proximity Warnings” in the VRManager to make screens visually warn you when you get too close to them. This activates the VRInteractionScreenProximityWarning script attached to the “Wand” node. When enabled, VRInteractionScreenProximityWarning watches whether your head (by default) is too close to a screen. If so, it makes a warning appear in place of the virtual screen, reminding you of the real screen’s presence just before you physically hit it.

VRInteractionScreenProximityWarning script options
Option Description
Nodes To Watch A list of the 3D nodes’ names (declared in your configuration file) that will be used to check the distance from the screens. By default only the “HeadNode” is used, to prevent the user’s head from hitting a screen.
Warning Distance When the distance between the watched node and the screen is lower than “Warning Distance”, the visual warning appears.

6.9.2 6.9.2: Manipulation

The manipulation is handled by one of the Manipulation Interaction scripts attached to the VRWand. The VRManager parameter “Manipulation Method” allows you to choose the manipulation mode for your simulation.

Standard Manipulation Methods
Option Description
Ray Use the Wand’s ray to manipulate the object as if it were skewered on a spit.
Homer The object moves like your hand from its position, but the translations and rotations are scaled up.

If you want to create your own manipulation, simply select “None” in Manipulation Method of the VRManager.

6.9.2.1 6.9.2.1: Ray Manipulation Method

The object you grabbed with the Hand Node is attached to the wand ray, so you can manipulate it as if it were skewered on a spit.

By default, the Hand Node is set to “HandNode”.

Ray Manipulation options
Option Description
Hand Node This node is used as a reference for the object’s movements. By default it will move in the direction of “HandNode”.

6.9.2.2 6.9.2.2: Homer Manipulation Method

The object you grabbed starts moving like the Hand Node, but with larger translations and rotations.

By default, the Hand Node is set to “HandNode”.

Homer Manipulation options
Option Description
Hand Node This node is used as a reference for the object’s movements. By default it will move in the direction of “HandNode”.
Translation Scale The factor used to upscale the Hand Node translations.
Rotation Scale The factor used to upscale the Hand Node rotations.

6.9.3 6.9.3: Virtual Hand Mapping

The mapping between the Hand tracking input and the hand node can be handled by one of the Virtual Hand Mapping scripts attached to the VRWand. The VRManager parameter “Virtual Hand Mapping” allows you to choose the way the Hand moves in your simulation.

The “Direct” mapping doesn’t change the Hand behavior.

The “Gogo” mapping amplifies your hand movements when the hand is far from your head, helping you (for example) reach distant items.

Virtual Hand Gogo options
Option Description
Hand Node The actual Hand Node. By default it is set to “HandNode”.
Head Node The Head Node. Used to get the distance between the head and the hand. By default it is set to “HeadNode”.
Gogo Start Distance In meters. When the distance between the Hand Node and the Head Node is under this value, the movements are at scale 1. When it is over this value, the further you reach, the higher the translation scale.
Real Distance Max Maximum distance, in meters, that the user can physically put between his hand and his head. Used together with Virtual Distance Max.
Virtual Distance Max Maximum distance, in meters, that the user wants to be able to reach between the Hand Node and the Head Node. When the user reaches his maximum physical hand extension (Real Distance Max), the Hand Node is at Virtual Distance Max meters from the Head Node.

6.9.4 6.9.4: Interaction

The Wand can be used to select any 3D object in your scene. You can then notify this object that a button of the wand has been pressed, or grab the object to move it around.

The VRWand Interaction script handles the selection. It has several options:

VRWand Interaction options
Option Description
Ray Length Length of the ray. Any object further than this distance will not be selected.
Highlight Highlight the wand when an object is selected?
Highlight Color Highlight color of the wand when an object is selected.
Grab Color Highlight color of the wand when an object is grabbed.
Repeat Action When this option is enabled, the VRAction message is sent every frame while the button is pressed. When disabled, the VRAction message is sent only when the button is toggled.

6.9.4.1 6.9.4.1: Action

When the object is selected, it can be notified that a button is pressed on the wand. This is useful if you want to perform an action when a particular node is selected.

If you want a node to be able to receive notifications, you have to attach the VRActor script to it. This script enables notifications on the object. You can find the VRActor script in the MiddleVR/Scripts/Interactions folder:

VR Actor options
Option Description
Grabable Can this object be grabbed?
SyncDirection The Unity GameObject movements will be synced with its corresponding MiddleVR internal 3D node (called a vrNode3D; more information about vrNode3D in section “Programming interactions”). This parameter specifies which node sets its position and orientation on the other:

  • NoSynchronization: This is the default value. The VRActor stays a standard GameObject; no corresponding vrNode3D is created.
  • MiddleVRToUnity: The vrNode3D moves the GameObject just after the MiddleVR kernel update. Only the vrNode3D should be moved from script.
  • UnityToMiddleVR: The GameObject moves the vrNode3D just before the MiddleVR kernel update. Only the GameObject should be moved from script.
  • BothDirections: Both previous actions are applied.

When the Wand intersects an object that has a VRActor attached to it, the wand changes its color to the highlight color defined in the VRWandInteraction script options.

When an object is selected and the main button of the Wand is pressed, the Wand will send the VRAction message to it. To react to this message, the only thing you have to do is create a method called VRAction on any script attached to the object. This method will be called every time the VRAction message is sent.
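
Here is a minimal sketch of such a handler (MyVRActionHandler is a hypothetical script name, and we assume here that the message is sent without arguments; the VRActionSample script referenced below is the reference version):

using UnityEngine;

public class MyVRActionHandler : MonoBehaviour
{
    // Called by the Wand when this object is selected and
    // the main wand button is pressed.
    void VRAction()
    {
        print("VRAction received on: " + name);
    }
}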

You can find such a sample script in: MiddleVR/Scripts/Samples/VRActionSample.

The Unity GameObject with a VRActor script will be synced with a MiddleVR vrNode3D. This way, the Unity and MiddleVR representations of this node will always have the same position and orientation in space. It is possible to configure which one will move the other with the SyncDirection parameter. Please refer to the table above for a more precise understanding of the possible values and effects of SyncDirection.

6.9.4.2 6.9.4.2: Grabbing

If you want an object to be grabable, simply add a VRActor script to it and enable the Grabable option. When the object is selected and the main button is pressed, the object will be grabbed by the Wand, and released when the button is released.
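
For illustration only, here is a minimal sketch of enabling this option from code rather than from the Inspector; the assumption that VRActor exposes the option as a public Grabable member is ours, so check VRActor.cs for the exact name:

using UnityEngine;

public class MakeGrabable : MonoBehaviour
{
    void Start()
    {
        VRActor actor = gameObject.AddComponent<VRActor>();
        actor.Grabable = true; // assumption: inspector option exposed as a public field
    }
}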

6.9.5 6.9.5: VR First Person Controller

Unity’s First Person Controller is a handy way to get smooth navigation. You can control the First Person Controller using the Wand axes and buttons.

First you have to import the MiddleVR_FPS package. You can find it in your MiddleVR/data installation folder.

This will import a new script in your project: VRFPSInputController:

Drag this script on your First Person Controller.

You can specify which node to use for direction. By default, it’s the HandNode.

Note: If you want the VR hierarchy to follow the First Person Controller, you must set the VR System Center Node parameter of the VR Manager to a node that belongs to the First Person Controller hierarchy.

Beware: The First Person Controller object’s center is not on the ground, it is located 0.5m above. This means that if you set the VR System Center Node to the First Person Controller directly, the VR hierarchy will be 0.5m too high. One solution is to create a child node of the First Person Controller that is simply offset by -0.5 on the Y axis and set the VR System Center Node to this offset object.

VRFPSInputController options
Option Description
Reference Node Node that will be used to determine the forward direction.

6.9.6 6.9.6: Improving the portability of your application

Being able to use your application on your VR system is a good thing, but it would be even better if you could use it on all the other VR systems!

Your application is said to be portable when it can run on different VR systems without modification.

Here are a few guidelines.

6.9.6.1 6.9.6.1: Rely on 3D nodes instead of trackers

Read more about accessing 3D nodes in the section “Programming interactions”.

If you don’t want to look for a 3D node by its name, you can also assign tags to it. You can find the tag parameters just under the name of the 3D node in the configuration editor. You can then iterate over all nodes to find one with a particular tag. We will add methods to better handle this later.

6.10 6.10: Programming interactions

6.10.1 6.10.1: Introduction

In this section we will cover the basics of programming interactions from within a Unity C# script.

MiddleVR handles all aspects of your VR simulation:

For example, how do you react when the user presses a button on a joystick, or a certain key on the keyboard?

Check Appendix 2 - Class Hierarchy for an overview of the relationship between classes.

For a complete class reference, check the MiddleVR class reference:

6.10.2 6.10.2: Creating an interaction script

First create a script and attach it to an active object: go into the Assets menu, then Create > C Sharp script.

Drag and drop the script to an active object.

Double-click on the script to edit it, and add: using MiddleVR_Unity3D;.

using UnityEngine;
using MiddleVR_Unity3D;

public class Example : MonoBehaviour
{
    void Start()
    {
    }

    void Update()
    {
    }
}

6.10.3 6.10.3: Input devices

MiddleVR has an object that manages all the devices: the device manager.

You can query the device manager for the keyboard and mouse states:

if (MiddleVR.VRDeviceMgr != null)
{
    // Testing mouse button
    if (MiddleVR.VRDeviceMgr.IsMouseButtonPressed(0))
    {
        MVRTools.Log("Mouse Button pressed!");
        MVRTools.Log("VRMouseX: " + MiddleVR.VRDeviceMgr.GetMouseAxisValue(0));
    }
    // Testing keyboard key
    if (MiddleVR.VRDeviceMgr.IsKeyPressed(MiddleVR.VRK_SPACE))
    {
        MVRTools.Log("Space!");
    }
}

Note: The Unity package of MiddleVR contains this sample script: MiddleVR/Scripts/Samples/VRAPISample.

The device manager holds references to all declared devices. If you want access to a tracker’s data or the state of a joystick, you first have to ask the device manager for a reference to the corresponding object:

vrTracker tracker = null;
vrJoystick    joy = null;
vrAxis       axis = null;
vrButtons buttons = null;

// Getting a reference to different device types
if (MiddleVR.VRDeviceMgr != null)
{
    tracker = MiddleVR.VRDeviceMgr.GetTracker("VRPNTracker0.Tracker0");
    joy     = MiddleVR.VRDeviceMgr.GetJoystickByIndex(0);
    axis    = MiddleVR.VRDeviceMgr.GetAxis("VRPNAxis0.Axis");
    buttons = MiddleVR.VRDeviceMgr.GetButtons("VRPNButtons0.Buttons");
}

// Getting tracker data
if (tracker != null)
{
    MVRTools.Log("TrackerX: " + tracker.GetPosition().x());
}

// Testing joystick button
if (joy != null && joy.IsButtonPressed(0))
{
    MVRTools.Log("Joystick!");
}

// Testing axis value
if (axis != null && axis.GetValue(0) > 0)
{
    MVRTools.Log("Axis Value: " + axis.GetValue(0));
}

// Testing button state
if (buttons != null)
{
    if (buttons.IsToggled(0))
    {
        MVRTools.Log("Button 0 pressed!");
    }
    if (buttons.IsToggled(0, false))
    {
        MVRTools.Log("Button 0 released!");
    }
} 

Note: The Unity package of MiddleVR contains this sample script: MiddleVR/Scripts/Samples/VRAPISample.

6.10.4 6.10.4: Accessing wand data

It is very easy to access the axis values and buttons states of the Wand through the device manager:

if (MiddleVR.VRDeviceMgr != null)
{
    // Getting wand horizontal axis
    float x = MiddleVR.VRDeviceMgr.GetWandHorizontalAxisValue();
    // Getting wand vertical axis
    float y = MiddleVR.VRDeviceMgr.GetWandVerticalAxisValue();

    // Getting state of primary wand button
    bool wandButtonPressed0 = MiddleVR.VRDeviceMgr.IsWandButtonPressed(0);

    // Getting toggled state of primary wand button
    // bool wandButtonToggled0 = MiddleVR.VRDeviceMgr.IsWandButtonToggled(0);

    if (wandButtonPressed0)
    {
        // If primary button is pressed, display wand horizontal axis value
        MVRTools.Log("WandButton 0 pressed! HAxis value: " + x + ", VAxis value: " + y + ".");
    }
}   

Note: The Unity package of MiddleVR contains this sample script: MiddleVR/Scripts/Samples/VRAPISample.

You can also access the Wand data from a JavaScript:

Note: In order for the JavaScript to correctly compile, you have to move the MiddleVR folder into the “Standard Assets”, “Pro Standard Assets” or “Plugins” so that the VRManager script is compiled before the JavaScript. Otherwise the JavaScript will complain that VRManagerScript is an unknown type. For more information see Unity Script Compilation (Advanced).

function Update() {
    var VRMgrObject : GameObject = GameObject.Find("VRManager");
    var VRMgr : VRManagerScript;

    if (VRMgrObject != null) {
        VRMgr = VRMgrObject.GetComponent(VRManagerScript);
    } else {
        print("Couldn't find VRManager object.");
    }

    if (VRMgr != null) {
        var x = VRMgr.WandAxisHorizontal;
        var y = VRMgr.WandAxisVertical;

        var wandButtonPressed0 = VRMgr.WandButton0;

        if (wandButtonPressed0) {
            VRMgr.Log("WandButton 0 pressed! HAxis value: " + x + ", VAxis value: " + y + ".");
        }
    } else {
        print("Couldn't access VRManagerScript: " + VRMgrObject);
    }
}

Note: Since MiddleVR 1.4.2 f1, it is possible to add multiple wands in a configuration. To access their inputs, you simply need to retrieve the wand you wish by calling:

vrWand myWand = MiddleVR.VRDeviceMgr.GetWand("MyWandName");

6.10.5 6.10.5: The display manager

The display manager (MiddleVR.VRDisplayMgr) is responsible for 3D nodes, cameras, viewports and display management:


// 3D nodes
vrNode3D      node  = null;
vrCamera     camera = null;
vrCameraStereo scam = null;
vrScreen     screen = null;
vrViewport       vp = null;

if (MiddleVR.VRDisplayMgr != null)
{
    node   = MiddleVR.VRDisplayMgr.GetNode("HeadNode");
    if (node != null) { MVRTools.Log("Found HeadNode"); }

    camera = MiddleVR.VRDisplayMgr.GetCamera("Camera0");
    if (camera != null) { MVRTools.Log("Found Camera0"); }

    scam   = MiddleVR.VRDisplayMgr.GetCameraStereo("CameraStereo0");
    if (scam != null) { MVRTools.Log("Found CameraStereo0"); }

    screen = MiddleVR.VRDisplayMgr.GetScreen("Screen0");
    if (screen != null) { MVRTools.Log("Found Screen0"); }

    vp     = MiddleVR.VRDisplayMgr.GetViewport("Viewport0");
    if (vp != null) { MVRTools.Log("Found Viewport0"); }
}

Note: The Unity package of MiddleVR contains this sample script: MiddleVR/Scripts/Samples/VRInteractionTest.

6.10.6 6.10.6: Converting data from MiddleVR to Unity

As explained in the Unity coordinate system section, MiddleVR and Unity use different coordinate systems. When you get a 3D coordinate (3D vector, quaternion, matrix) from a MiddleVR tracker or 3D node, you have to convert it before using it in Unity:

vrNode3D node = MiddleVR.VRDisplayMgr.GetNode("HeadNode");

transform.position = MVRTools.ToUnity(node.GetPositionVirtualWorld());
transform.rotation = MVRTools.ToUnity(node.GetOrientationVirtualWorld());

6.10.7 6.10.7: Converting data from Unity3D to MiddleVR

Conversely, when you get a 3D coordinate (3D vector, quaternion, matrix) from a Unity GameObject and want to set it on a MiddleVR node or tracker, you first need to convert it from the Unity coordinate system to MiddleVR’s:

vrNode3D node = MiddleVR.VRDisplayMgr.GetNode("HeadNode");

node.SetPositionVirtualWorld(MVRTools.FromUnity(transform.position));
node.SetOrientationVirtualWorld(MVRTools.FromUnity(transform.rotation));

6.10.8 6.10.8: Debugging with MonoDevelop

You can use MonoDevelop to build applications using MiddleVR.

You just need to go to the MonoDevelop menu Tools > Preferences > Unity > Debugger and disable “Build project in MonoDevelop”:

6.10.9 6.10.9: Troubleshooting

What to do if things don’t work as expected? Check the online Knowledge Base.

6.11 6.11: Some useful MiddleVR sample scripts

6.11.1 6.11.1: Shortcut to invert eyes at runtime

By dropping the sample script MiddleVR/Scripts/Samples/VRShortcutInvertEyes.cs in your scene, you will be able to switch the left and right eyes when pressing Shift-I.

6.11.2 6.11.2: Shortcut to reload level at runtime

By dropping the sample script MiddleVR/Scripts/Samples/VRShortcutReload.cs in your scene, you will be able to reload the current level when pressing Shift-R and load the level 0 when pressing Control-Shift-R.

6.12 6.12: Upgrade the MiddleVR Unity Package

6.12.1 6.12.1: Clean an old Unity project before upgrading it

If you already have a Unity project using an old version of MiddleVR, you may want to upgrade it. In MiddleVR 1.6, some files of the UnityPackage have moved. The upgrade should be handled automatically. In case it fails, you should remove the old files before importing the new MiddleVR package. Here are the files to remove:

7 7: Cluster

7.1 7.1: Introduction

Clustering allows you to use multiple computers to drive a VR system that cannot be driven by a single computer. The main issue then becomes: how do you synchronize all those computers? MiddleVR is able to synchronize multiple instances of itself running on different computers.

One particular computer, the cluster server, will synchronize its information with the other computers, the cluster clients. There are multiple levels of synchronization: framelock, swaplock and genlock. See section “Cluster Concepts” for more details.

Important note: The usage of clusters is complex, and MiddleVR will help you as much as possible in this task. But MiddleVR cannot do everything for you: you will have to understand the mechanisms of cluster and adapt your application accordingly.

Currently MiddleVR will automatically synchronize the state of all its devices. You can also choose which of your own data to synchronize, see below.

7.2 7.2: Concepts

7.2.1 7.2.1: Cluster nodes

A cluster is a collection of computers that are used together to drive one VR system. Typically, one computer drives one or more projector/screen. Each computer is called a “Cluster Node”. There is a primary computer that is called the “Cluster Server” and acts as the master for the other computers, which are called “Cluster Clients”.

The server will gather all the information needed to synchronize all the cluster nodes and send it to them.

7.2.2 7.2.2: Synchronization

In a cluster, the main issue is to correctly synchronize all the nodes, otherwise there will be discrepancies at the junction of projectors.

There are several layers of synchronization.

7.2.2.1 7.2.2.1: Framelock

Framelock makes sure that the simulated world is the same on all cluster nodes. It makes sure that everything is at the same place and the same state on all computers, so the distributed simulation is coherent. Perfect framelock is hard to achieve unless you are writing the 3D engine yourself, and even then, it’s not always possible to synchronize everything perfectly.

MiddleVR does its best to automatically synchronize scenes by using heuristics to synchronize only the necessary states, thus saving network bandwidth and CPU resources.

Note: Framelock is also sometimes another name for Genlock.

7.2.2.2 7.2.2.2: Swaplock

Swaplock makes sure that all computers swap their double buffers at the same time, meaning that each new picture is displayed at the same time on every computer.

Otherwise you can have one computer displaying a previous frame while another computer displays the current frame.

This swaplock can be done in software (SoftSwapLock), but it is more precise when backed by a hardware swaplock. The hardware swaplock is generally handled by the graphics cards through an external synchronization card, such as a GSync card with NVidia Quadros. The same option is also available with ATI cards. This option is called NVSwapLock.

7.2.2.3 7.2.2.3: Genlock

Genlock is a hardware option that also requires a synchronization card, such as a GSync card with NVidia Quadros (a similar option exists for ATI cards). When using stereoscopy, Genlock makes sure that all the computers display the same eye (left or right) at the same time. If you’re not using it, one computer might display the right eye while another displays the left eye. Headache guaranteed!

7.3 7.3: Configuring

7.3.1 7.3.1: Summary

First you have to prepare all the cluster nodes:

On the server:

- Create a network shared folder, named for example “MiddleVRDemo”
- Copy the Shadow demo and the configuration files into MiddleVRDemo

On the server and each other computer:

Note: Unity applications can’t be run directly from a Windows shared folder; you will have to mount the shared folder as a network drive. See section “Creating a shared folder”.

7.3.2 7.3.2: Troubleshooting

The knowledge base has valuable articles: http://www.middlevr.com/kb/troubleshooting-the-cluster-setup.

If you have any issue, don’t hesitate to contact support.

7.3.2.1 7.3.2.1: VRDaemon

Once the MiddleVR package is installed, you must launch the VRDaemon on all the nodes, including the master. It is located at C:\Program Files (x86)\MiddleVR\bin\VRDaemon.exe.

The result is a console (DOS) window that should always stay open:

If you need your cluster node computers to run the VRDaemon automatically at Windows startup, you can follow the steps in our knowledge base: http://support.middlevr.com/hc/en-us/articles/204367495

7.3.2.2 7.3.2.2: Creating a shared folder

Locate the folder of your Unity application and share it so that it is visible to all the computers on the network. You can achieve this by right-clicking on the folder and selecting “Share with”:

Once this is done, you have to mount this network path by right-clicking on your computer in the file explorer and selecting “Map network drive”:

Make sure to mount the folder with the same drive letter on all computers, including the server:

7.3.2.3 7.3.2.3: Local copy

Local copy consists of copying your application to every computer’s hard drive, mainly to speed up the loading process. The improvements are generally substantial. Several solutions like Microsoft SyncToy, SpiderOak, Dropbox or other synchronization services can do the trick.

7.3.3 7.3.3: Configuring the cluster

In the Cluster tab, you need to create one server and as many clients (+Client) as needed.

This is the window to configure cluster options and cluster nodes.

Cluster options
Property Description
NVidiaSwapLock Hardware swaplock. See section “Concepts - Cluster Synchronization”.
Disable VSync On Server If VSync is enabled in the Viewports configuration, disable VSync only on the server. This is useful if the master uses a different refresh rate than the rest of the cluster: often the master only has a mono display at 60 Hz while the other nodes have 120 Hz displays. In this case, disabling VSync on the master gives better performance.
Force DirectX > OpenGL conversion When not using active stereo, you can still force the display of the DirectX rendering in an OpenGL window. This is particularly useful when using a cluster: if your master is not in active stereo but the rest of the cluster nodes are, you should activate this option. This is also required if your cluster is not using active stereo.
Multi-GPU When enabled, MiddleVR will find the GPUs that are connected to a display. This way, DirectX and OpenGL (if the conversion above is activated) will render only on the GPU to which the display is connected. This setting is very useful for computers with several GPUs because it enables the use of all of them instead of only one by default.

Here’s the configuration for the cluster server:

Cluster server options
Property Description
Address Specify the hostname or IP address of the cluster server. It should be reachable by all cluster clients. Note: If you specify “localhost” or “127.0.0.1”, the clients will not be able to find the server, unless they all run on the same machine.
Viewports Specify the viewports used by the server.
CPU Affinity Specify the CPU cores to be used. For example, with 4 cores, activating only the first two is done by setting this value to 0,1. Note that MiddleVR relies on the number of cores/CPUs as reported by Windows. This value does not always reflect the real number of physical CPUs, because of technologies such as Intel Hyper-Threading and because a CPU can be made up of several cores. The activity of each CPU can be seen in the Windows task monitor. It is suggested that you try this feature only with multiple physical CPUs (not simply multi-core CPUs), because Windows already distributes threads very well across the cores of a single CPU (you would otherwise just get bad performance). For example, try using the 4 cores of a 2nd CPU, but not the 4 cores of the 1st, and measure the performance difference.

Here’s the configuration for all cluster clients:

Cluster client options
Property Description
Address Specify the hostname or IP address of the cluster client.
ClusterID You can specify a specific cluster identification name for readability or better debugging.
Viewports Specify the viewports used by the client.
CPU Affinity Specify the CPU affinity settings used by the client.

7.3.3.1 7.3.3.1: Multi-GPU Vs. Nvidia Mosaic

According to Nvidia, Mosaic is a technology that lets the system view multiple displays as a single unified desktop environment without software customization or performance degradation. However, our in-house tests showed that only one GPU was used with Mosaic, whereas our Multi-GPU option made significant use of every GPU. With deferred rendering and two graphics cards, Multi-GPU even performed twice as fast as Mosaic.

Nevertheless, the Multi-GPU option comes at a price: it forces the use of clustering, and you will have to manage its difficulties (see the cluster section).

7.3.4 7.3.4: Starting a cluster application from the Simulations window

The easiest way to run your cluster application is to use the Simulations window.

If you are using a network drive, make sure to add the application from this network drive. MiddleVR will tell all nodes to use the exact same command line, so if you’re adding your application from a local folder that does not exist on the cluster nodes, the VRDaemon will not be able to start it.

Simply choose your application from the network drive, choose the cluster configuration, and hit Run. Make sure the VRDaemon is running on all machines, including the master.

7.3.5 7.3.5: Stopping a cluster application

The easiest way to stop an application is by pressing the Escape key on the server’s keyboard.

If the application is frozen, you can also use the “Kill All Cluster Nodes” option in the simulations window:

Pressing this button will send a message to all VRDaemons to kill the last applications that they started.

7.3.6 7.3.6: Manually starting a cluster application

You can also manually start the application without the graphical tool. On the server, you can execute the application by double-clicking on it or create a .bat file that contains the right command line.

MiddleVR will then tell the VRDaemons of all the configured cluster clients to run the same application with the exact same path and command line arguments.

Make sure to use all the required command line arguments. You can find them in the Simulations tab, after clicking on the requested application and configuration file.

7.4 7.4: Synchronization

There are a few things to understand when you want to create a cluster application. The root issue is that MiddleVR will run one Unity instance (player) on each computer that makes up the VR system. This requires multiple levels of synchronization, which we defined in section “Cluster - Concepts” of the user guide.

On each cluster node, the Unity player will be running. This means that all your scripts will still be running on all cluster nodes.

The theory is the following: if your application is deterministic (“whose resulting behavior is entirely determined by its initial state and inputs, and which is not random”), its state at the end of each frame will be the same on all machines.

This means that the initial state and inputs for each frame must be the same on all machines.

Inputs include:

For the parts that have a random component, you will need to synchronize the state manually. For example in Unity, rigid bodies (physics objects) don’t behave the same on all computers. At the end of the frame, a cube might be at different positions on different cluster nodes. This is why MiddleVR has the VRManagerPostFrame script that will synchronize the state of those objects.

7.4.1 7.4.1: Sequence diagram

Here’s how the synchronization works:

7.4.2 7.4.2: Simple cluster

You can first try to use the SimpleCluster option of the VRManager.

This will try to automatically determine which objects need their position/orientation to be synchronized by the VRManagerPostFrame script. For the moment this is all the objects that react to physics, i.e. objects that have a Rigid Body component and are not Kinematic, and all First Person Controllers.

MiddleVR will automatically add the VRClusterObject script to those objects to synchronize their position/orientation. You can also manually add the MiddleVR/Scripts/Cluster/VRClusterObject.cs script to any node that you want to synchronize.

Note: The Simple Cluster option will only work if all your objects are already created and not dynamically added later in the application. For dynamically created objects, make sure to manually add the VRClusterObject script (on all cluster nodes).

7.4.3 7.4.3: Inputs

If you have a script that gets inputs from the keyboard, the mouse or a joystick, this script will probably only work on the master, because no physical key or button is pressed on the clients’ keyboard, mouse or joystick!

This is why you have to get input events (keyboard / mouse / wand / trackers) from MiddleVR. MiddleVR will synchronize the state of all the devices it handles from the master to all clients.

So instead of Input.GetKeys(...), you need to use MiddleVR.VRDeviceMgr.IsKeyPressed and the other methods that manage the different devices.
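
As a minimal sketch, compare the two approaches in a single script (only MiddleVR calls already shown in this guide are used; ClusterSafeInput is a hypothetical script name):

using UnityEngine;
using MiddleVR_Unity3D;

public class ClusterSafeInput : MonoBehaviour
{
    void Update()
    {
        // Unity's Input only reflects the physical devices of the local
        // machine, so on cluster clients nothing is ever pressed:
        // if (Input.GetKey(KeyCode.Space)) { ... }

        // MiddleVR's device state is synchronized from the master
        // to all clients:
        if (MiddleVR.VRDeviceMgr != null &&
            MiddleVR.VRDeviceMgr.IsKeyPressed(MiddleVR.VRK_SPACE))
        {
            MVRTools.Log("Space pressed, consistent on every node.");
        }
    }
}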

See section “Input devices” in the User Guide.

Note: Make sure that your scripts always execute *after* the VRManagerScript.

7.4.4 7.4.4: Random

During start-up, MiddleVR automatically gets Unity’s seed for pseudo-random values and synchronizes it across the cluster.

You can safely use Unity’s Random functions after the first call to VRManager.Update on all cluster nodes. Since the seed is synchronized on all cluster clients, the Random functions will output the same sequences on all of them.

7.4.5 7.4.5: Physics

Unity’s physics is not deterministic across machines, so you have to synchronize the position and orientation of physics nodes. If you’re using the SimpleCluster option of the VRManager, MiddleVR will automatically add the VRClusterObject script to objects that react to physics (i.e. that have a Rigid Body component and are not kinematic).

Note: The Simple Cluster option will only work if all your objects are already created and not dynamically added later in the application. You might then need to manually add the VRClusterObject script.

7.4.6 7.4.6: Time / Delta time

If one of your scripts uses Unity time (Time.time) for computations, be aware that this time might not be the same on all cluster nodes. Instead, use MiddleVR.VRKernel.GetTime(), which gives the elapsed time since the server started and is correctly synchronized across the cluster.

The delta time (the time since the last frame) is used to get time-dependent, rather than frame-dependent, translation and rotation speeds. In your first 3D application you might move an object a fixed distance each frame; if the framerate changes, the object’s speed changes.

The issue is that the delta time may differ between the computers of the cluster: not every computer takes the same time to render a frame. So if any script of your application uses Unity’s delta time (Time.deltaTime), it should instead use MiddleVR’s delta time, which is synchronized across the cluster: MiddleVR.VRKernel.GetDeltaTime().
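
For example, inside a script’s Update() (the cast assumes GetDeltaTime() returns seconds as a double; check the class reference):

// Instead of Time.deltaTime, which may differ between cluster nodes:
float dt = (float)MiddleVR.VRKernel.GetDeltaTime();

// Move this GameObject at 1 meter per second, identically on all nodes:
transform.Translate(0.0f, 0.0f, 1.0f * dt);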

7.4.7 7.4.7: Particles

Starting from MiddleVR 1.6, particles are correctly synchronized on the cluster: the random seed of the particles is automatically synchronized from the cluster server to the cluster clients. This behavior can be disabled via the VRManager option SimpleClusterParticles.

7.4.8 7.4.8: Skyboxes

Skyboxes don’t work correctly with asymmetric cameras (stereo cameras, or cameras used for head tracking in front of a VR wall), so you should replace them with a big sphere or cube geometry.

7.4.9 7.4.9: Shaders

Some shaders, like the water shader, don’t get along with asymmetric cameras. They need to be adapted.

7.4.10 7.4.10: Random objects

If, as a result of your scripts, objects have random positions/orientations, you can add the VRClusterObject script to them; it will synchronize their position/orientation at the end of each frame.

7.4.11 7.4.11: Sharing custom data

MiddleVR provides the VRSharedValue<T> type to share any serializable C# object across the cluster.

Creating a VRSharedValue<T> requires a unique sharing name and an initial value:

using MiddleVR_Unity3D;

var mySharedBool = new VRSharedValue<bool>("MySharedBool", false);

Setting and getting its value is done through the value property:

// On the server
mySharedBool.value = true;

// On all nodes
if (mySharedBool.value)
{
    // Do something
}

Note: Changing the value is an asynchronous action: the real change will happen on all cluster nodes upon the next Update() of VRManagerScript or VRManagerPostFrame, even on the server node! If you are not sure, refer to the script execution order.

Note: Changing a VRSharedValue only works on the server node.
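
Since only the server may change the value, a common pattern is to guard writes so they only happen on the server node. Here is a minimal sketch (SharedFlag is a hypothetical script name, and the IsServer() call on the cluster manager is our assumption; check the class reference for the exact name):

using UnityEngine;
using MiddleVR_Unity3D;

public class SharedFlag : MonoBehaviour
{
    private VRSharedValue<bool> m_Flag;

    void Start()
    {
        m_Flag = new VRSharedValue<bool>("MySharedFlag", false);
    }

    void Update()
    {
        // Only the server is allowed to change the value.
        // Assumption: the cluster manager exposes IsServer().
        if (MiddleVR.VRClusterMgr != null && MiddleVR.VRClusterMgr.IsServer())
        {
            if (MiddleVR.VRDeviceMgr.IsWandButtonPressed(0))
            {
                m_Flag.value = true;
            }
        }

        // All nodes read the synchronized value.
        if (m_Flag.value)
        {
            // React identically on every cluster node.
        }
    }
}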

7.4.11.1 7.4.11.1: Sharing Unity types

As of Unity 4.x, Unity struct types such as vectors are not directly serializable. MiddleVR provides several wrapper classes and implicit conversion operators to enable seamless serialization of those types. You only have to create a VRSharedValue of one of these wrapper types, for example SerializableVector3:

using MiddleVR_Unity3D;

// Creation
var mySharedVector = new VRSharedValue<SerializableVector3>("MySharedVector", new Vector3(0.0f, 0.0f, 0.0f));

// Setting
mySharedVector.value = new Vector3(0.0f, 0.0f, 0.0f);

// Getting
Vector3 vec = mySharedVector.value;

7.4.12 7.4.12: Sharing events

See the Commands on a cluster section.

7.5 7.5: Testing your cluster application on a single computer

You can run your cluster application on a single computer for easier testing. You can simulate a cluster by running two Unity instances on the same machine: run the VRDaemon locally and use the same IP address (or localhost) for the server and the client. See VirtualCluster.vrx in the MiddleVR installation folder: C:\Program Files (x86)\MiddleVR\data\Config\Cluster\VirtualCluster.vrx.

7.6 7.6: Converting existing applications

7.6.1 7.6.1: Converting ShadowDemo

You can download the shadow demo from here: https://unity3d.com/showcase/live-demos#shadows.

7.6.2 7.6.2: Converting Car tutorial

You can download the car tutorial from here: https://www.assetstore.unity3d.com/en/#!/content/10.

We will have the car move with the configured Wand device. In the car’s control script, replace:

throttle = Input.GetAxis("Vertical");
steer = Input.GetAxis("Horizontal");

With:

var vrmgr : GameObject;
vrmgr = GameObject.Find("VRManager");
var script : VRManagerScript;
script = null;

if (vrmgr != null) {
    script = vrmgr.GetComponent("VRManagerScript");
}

if (script != null) {
    // Wand data
    throttle = script.WandAxisVertical;
    steer = script.WandAxisHorizontal;
    // Left
    if (script.IsKeyPressed(0xCB)) { steer = -1; }
    // Right
    if (script.IsKeyPressed(0xCD)) { steer = 1; }
    // Up
    if (script.IsKeyPressed(0xC8)) { throttle = 1; }
    // Down
    if (script.IsKeyPressed(0xD0)) { throttle = -1; }
} else {
    throttle = Input.GetAxis("Vertical");
    steer = Input.GetAxis("Horizontal");
}

When you get back to Unity, it should complain that: “The name ‘VRManagerScript’ does not denote a valid type (‘not found’).”

This means that the MiddleVR scripts are compiled after this JavaScript. You simply have to move the MiddleVR folder into the “Pro Standard Assets” folder. If you can’t move it, close your script editor (Visual Studio or MonoDevelop) first. For more information, see: http://docs.unity3d.com/412/Documentation/ScriptReference/index.Script_compilation_28Advanced29.html.

7.6.3 7.6.3: Converting AngryBot

You can download AngryBot from here: https://www.assetstore.unity3d.com/en/#!/content/12175.

This is a quick conversion that does not yet involve a first-person perspective:

7.7 7.7: Optimization

7.7.1 7.7.1: Objects sync

Try to synchronize the minimum number of objects with VRClusterObject. The more objects you synchronize manually, and the more physics objects you use (which are synchronized automatically), the slower the network will be.

7.7.2 7.7.2: Master display

If the master is not part of the actual display system, it’s better to display a small viewport on it and disable its VSync, so it runs faster and doesn’t slow down the rest of the cluster.

Also, if the master has a different refresh rate than the rest of the cluster, use the “Disable VSync On Server” option.

7.7.3 7.7.3: Logs

Disable Unity’s output log (in the Player Settings, “Use Player Log”), otherwise Unity will write its logs through the network and slow the application down.

7.7.4 7.7.4: CPU Intensive tasks

It’s better to run all CPU-intensive tasks before VRManagerScript::Update executes: MiddleVR uses a thread at the end of the frame to handle the frame synchronization, and Unity can start working before the frame sync is actually over, so you get some parallelization.

Just be aware that MiddleVR will update the devices state a bit later, when the VRManagerScript executes.

7.8 7.8: Limitations

It is possible that some parts of the simulation can’t be synchronized by MiddleVR. This includes:

Internally MiddleVR uses a video synchronization mechanism that could be applied to all videos. Contact support for more information.

8 8: Advanced Programming

8.1 8.1: Commands and values

8.1.1 8.1.1: Introduction

Commands are objects that represent a named callback. They can be used as simple handlers, for example in the case of MiddleVR’s GUI Widgets, but they are also used to transmit events and data across different languages (for example in the case of HTML user interfaces) or cluster nodes.

8.1.2 8.1.2: Values

Commands use the vrValue type to pass data around. Instances of vrValue are simple data containers that can hold the following types:

vrValues in C# are created like this:

// Creating vrValues using the new operator
// (each line is shown independently; reuse of 'val' is for illustration only)
vrValue val = new vrValue(true);                   // boolean
vrValue val = new vrValue(0);                      // number
vrValue val = new vrValue("MiddleVR");             // string
vrValue val = new vrValue(new vrVec2(0.0, 0.1));   // vec2

//... or implicit conversions
vrValue val = true;                 // boolean
vrValue val = 0;                    // number
vrValue val = "MiddleVR";           // string
vrValue val = new vrVec2(0.0, 0.1); // vec2

Lists and maps are created with special static methods:

vrValue list = vrValue.CreateList(); // list
list.AddListItem(1);
list.AddListItem(2);

vrValue map = vrValue.CreateMap();  // map
map["MiddleVR"] = true;
map["X Axis"] = new vrVec3(1.0, 0.0, 0.0);
map["The answer to life, the universe, and everything"] = 42;

Testing types and getting values is done with methods vrValue.Is* and vrValue.Get*:

// Getting a boolean value
if (val.IsBool())
{
    print(val.GetBool());
}

// Getting a number value
if (val.IsNumber())
{
    // Internally, all numbers are stored as doubles
    print(val.GetInt());
    print(val.GetFloat());
    print(val.GetDouble());
}

// Iterating over a list
if (val.IsList())
{
    // GetList() returns an IEnumerable<vrValue>
    foreach (vrValue item in val.GetList())
    {
       ...
    }
}

// Iterating over a map
if (val.IsMap())
{
    // GetMap() returns an IEnumerable<KeyValuePair<string,vrValue>>
    foreach (KeyValuePair<string, vrValue> item in val.GetMap())
    {
       ...
    }
}

8.1.3 8.1.3: Commands

Creating a command from C# in Unity is done by instantiating a vrCommand with a unique name and a delegate as arguments. The delegate must be of the form vrValue handler(vrValue), meaning you have to return a vrValue, even if it is null.

Here is a short example:

private vrValue MyCommandHandler(vrValue iValue)
{
    return null;
}

vrCommand myCommand = new vrCommand("MyCommandName", MyCommandHandler);

// Call the command directly
myCommand.Do( "MyValue" );

// Or Call the command by name
MiddleVR.VRKernel.ExecuteCommand( "MyCommandName", "MyValue" );

// Any vrValue can be passed to a command:
vrValue list = vrValue.CreateList();
list.AddListItem( 1 );
myCommand.Do( list );

Commands cannot be called recursively.

8.1.4 8.1.4: Commands on a cluster

When running on a cluster, all commands are synchronized by default.

Commands synchronized on the cluster behave differently than when running on a single computer in a few ways:

If you don’t want a command to be synchronized on the cluster and always behave like it’s running on a single computer, you have to pass the VRCommandFlags_DontSynchronizeCluster flag when creating your command:

vrCommand myCommand = new vrCommand(
    "MyCommandName",
    MyCommandHandler,
    (uint)VRCommandFlags.VRCommandFlags_DontSynchronizeCluster);

The VRClusterCommandSample script is a simple example of command usage on a cluster.

9 9: Graphical User Interfaces

9.1 9.1: Introduction

Since version 1.6, MiddleVR includes graphical user interface (GUI) capabilities based on web standards.

This allows you to:

9.2 9.2: Web views

9.2.1 9.2.1: Introduction

Web views offer a way to display web pages directly into an immersive 3D experience.

Make sure to start with those tutorials: Creating a graphical user interface in HTML5

9.2.2 9.2.2: Creating a web view

Creating a web view is as simple as using a Unity prefab, located in the Scripts\Samples\GUI directory in MiddleVR’s Unity package:

Web view prefabs: VRWebSample2D and VRWebSample3D.

Alternatively, simply add the VRWebView script to any GameObject. The script will change the GameObject’s material and texture.

Note: the mesh doesn’t have to be a plane, it can be of any shape.

VRWebView properties
Property Description
Width Width (in pixels) of the web page texture.
Height Height (in pixels) of the web page texture.
URL URL of the web page. http://, https:// and file:// protocols are supported. The web view script assumes URLs without any protocol specified are file URLs. Absolute paths and paths relative to your Assets folder are supported for files.
Zoom Web page zoom. The default value of 1.0 means a normal element size.

9.2.3 9.2.3: Web resources in Unity projects

9.2.3.1 9.2.3.1: Storing web resources in a Unity project

MiddleVR web views can reference web pages located in the Assets folder by using a relative path. For example, WebContents/webpage.html will point at the webpage.html from the WebContents folder in the Unity project Assets.

However, storing web resources directly into the Assets folder may cause errors because Unity recognizes all files with the .js file extension as Unity scripts.

There are two ways to work around this issue:

9.2.3.2 9.2.3.2: Building a Unity standalone player with web resources

When building a player, MiddleVR automatically copies the contents (except .meta files) of the following folders, if they exist, to the data folder of the player:

You should put any web-related file (HTML, CSS, JavaScript, images and web fonts) into one of these two folders. If you want to put web resources into another folder, you will have to copy them yourself.

Web pages can then be used in the VRWebView script with relative paths to the Assets folder: WebAssets/MyFolder/index.html or .WebAssets/MyHiddenFolder/index.html.

WebAssets folder
WebAssets folder

Note: To create a .WebAssets folder from the Windows File Explorer, you will have to type “.WebAssets.”. The additional dot at the end is necessary, and will be removed by the File Explorer.

Note: The Assets/MiddleVR/WebAssets and Assets/MiddleVR/.WebAssets folders are also automatically copied, but they are reserved for MiddleVR files and must not be used for your own files.

9.2.4 9.2.4: Web view capabilities and limitations

As of MiddleVR 1.6, web views use Chromium as the underlying web engine. Web pages will work as in a standard Google Chrome browser, but with the following limitations:

Note: The web engine used by MiddleVR might change in future versions. We strongly suggest using standard HTML5, CSS or JavaScript code and not using browser-specific extensions when you design custom pages.

Note: To get the best performance out of web views we recommend using Unity Pro. Web views are much slower in the free version of Unity due to the restrictions on rendering plugins. Setting a large size on a web view will have a significant impact on performance when using the free version of Unity.

9.2.5 9.2.5: Web Views on a Cluster

Synchronizing the internal state of a web rendering engine across a cluster is a complex problem that is beyond the scope of MiddleVR. What MiddleVR provides, though, is image synchronization: web views are only rendered on the server node and the resulting image is distributed on the network.

Image synchronization uses TCP port 9996. Refer to the clustering documentation for more information regarding clusters.

By default, synchronized images are compressed with the JPEG algorithm to save network bandwidth.

9.2.6 9.2.6: Calling JavaScript code from C#

Web views have an ExecuteJavascript method that can be used to execute arbitrary JavaScript code in the page:

VRWebView webViewScript = GetComponent<VRWebView>();
if( webViewScript.webView.IsReady() )
{
    webViewScript.webView.ExecuteJavascript("MyJavaScriptFunction();");
}
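
Passing data to the page works the same way, by formatting the JavaScript call as a string (MyJavaScriptFunction and score are hypothetical):

// Passing data to the page is plain string formatting.
int score = 42;
if (webViewScript.webView.IsReady())
{
    webViewScript.webView.ExecuteJavascript("MyJavaScriptFunction(" + score + ");");
}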

See also tutorial “Creating a HTML GUI” for another example of calling the JavaScript of a webpage from Unity’s C#.

9.3 9.3: VR menus

Whether you don’t know HTML or simply want to create basic user interfaces, MiddleVR provides widget classes to design simple hierarchical menus.

You can:

9.3.1 9.3.1: MiddleVR default VR menu

Make sure to first read tutorial “MiddleVR VR Menu”.

MiddleVR offers an immersive menu that you can customize to include your own menu items. The default menu allows you to change the navigation scheme, the manipulation scheme and various other options.

By default you activate the menu by pressing button 3 of your Wand. This can be changed on the VRMenu GameObject:

You interact with the menu by pressing button 0 of your Wand.

You can deactivate the menu by disabling the option “Use default menu” in the VRManager options:

9.3.1.1 9.3.1.1: Widget types

Here is the list of available widget types that you can use in a VR menu:

Web menu Widgets
List of available widgets
Class Description
vrWidgetMenu Menu
vrWidgetSeparator Menu separator
vrWidgetButton Simple button
vrWidgetToggleButton Checkbox (Two-state button)
vrWidgetRadioButton Radio button
vrWidgetSlider Slider
vrWidgetList Single-selection list
vrWidgetColorPicker Color Picker

The class reference has information about all the methods of these widgets.

The script MiddleVR/Scripts/Samples/GUI/VRGUIMenuSample shows an example of using all those widgets.

9.3.1.2 9.3.1.2: Extending the default VR menu

9.3.1.2.1 9.3.1.2.1: Introduction

First make sure to read tutorial “MiddleVR VR Menu”.

The MiddleVR package provides a default menu that is activated with a wand button. You can customize this menu by retrieving its menu widget:

// Find the default VRMenu in the scene and add a button to its root menu.
VRMenu MiddleVRMenu = FindObjectOfType(typeof(VRMenu)) as VRMenu;
new vrWidgetButton("VRMenu.MyMenuItem", MiddleVRMenu.menu, "My Menu Item", m_MyItemCommand);

The script MiddleVR/Scripts/Samples/GUI/VRCustomizeDefaultMenu.cs shows how to:

private vrCommand m_MyItemCommand;
vrValue MyItemCommandHandler(vrValue iValue)
{
    print("My menu item has been clicked");
    return null;
}

private void AddButton(VRMenu iVRMenu)
{
    // Add a button at the start of the menu
    m_MyItemCommand = new vrCommand(
        "VRMenu.MyCustomButtonCommand", MyItemCommandHandler);

    vrWidgetButton button = new vrWidgetButton(
        "VRMenu.MyCustomButton", iVRMenu.menu, "My Menu Item", m_MyItemCommand);
    iVRMenu.menu.SetChildIndex(button, 0);

    // Add a separator below it
    vrWidgetSeparator separator = new vrWidgetSeparator(
        "VRMenu.MyCustomSeparator", iVRMenu.menu);
    iVRMenu.menu.SetChildIndex(separator, 1);
}

private void RemoveItem(VRMenu iVRMenu)
{
    // Remove "Reset" submenu
    for (uint i = 0; i < iVRMenu.menu.GetChildrenNb(); ++i)
    {
        vrWidget widget = iVRMenu.menu.GetChild(i);
        if( widget.GetLabel().Contains("Reset"))
        {
            iVRMenu.menu.RemoveChild(widget);
            break;
        }
    }
}

private void MoveItems(VRMenu iVRMenu)
{
    // Move every menu item under a sub menu
    vrWidgetMenu subMenu = new vrWidgetMenu(
        "VRMenu.MyNewSubMenu", null, "MiddleVR Menu");

    while (iVRMenu.menu.GetChildrenNb() > 0)
    {
        vrWidget widget = iVRMenu.menu.GetChild(0);
        widget.SetParent(subMenu);
    }

    subMenu.SetParent(iVRMenu.menu);
}

9.3.1.2.2 9.3.1.2.2: Using other widgets

Web menu Widgets

The script MiddleVR/Scripts/Samples/GUI/VRGUIMenuSample shows an example of using all those widgets:

private vrGUIRendererWeb m_GUIRendererWeb;
private vrWidgetMenu m_Menu;
private vrWidgetButton m_Button1;
private vrWidgetToggleButton m_Checkbox;
private vrWidgetMenu m_Submenu;
private vrWidgetRadioButton m_Radio1;
private vrWidgetRadioButton m_Radio2;
private vrWidgetRadioButton m_Radio3;
private vrWidgetColorPicker m_Picker;
private vrWidgetSlider m_Slider;
private vrWidgetList m_List;

private vrCommand m_ButtonCommand;
private vrCommand m_CheckboxCommand;
private vrCommand m_RadioCommand;
private vrCommand m_ColorPickerCommand;
private vrCommand m_SliderCommand;
private vrCommand m_ListCommand;

private vrValue ButtonHandler(vrValue iValue)
{
    m_Checkbox.SetChecked(! m_Checkbox.IsChecked());
    print("ButtonHandler() called");
    return null;
}

private vrValue CheckboxHandler(vrValue iValue)
{
    print("Checkbox value: " + iValue.GetBool().ToString());
    return null;
}

private vrValue RadioHandler(vrValue iValue)
{
    print("Radio value: " + iValue.GetString());
    return null;
}

private vrValue ColorPickerHandler(vrValue iValue)
{
    vrVec4 color = iValue.GetVec4();
    print("Selected color: " + color.x().ToString() + " " +
        color.y().ToString() + " " + color.z().ToString());
    return null;
}

private vrValue SliderHandler(vrValue iValue)
{
    print("Slider value: " + iValue.GetFloat().ToString());
    return null;
}

private vrValue ListHandler(vrValue iValue)
{
    print("List Selected Index: " + iValue.GetInt());
    return null;
}

// Use this for initialization
protected void Start()
{
    // Create commands

    m_ButtonCommand = new vrCommand("GUIMenuSample.ButtonCommand", ButtonHandler);
    m_CheckboxCommand = new vrCommand("GUIMenuSample.CheckboxCommand", CheckboxHandler);
    m_RadioCommand = new vrCommand("GUIMenuSample.RadioCommand", RadioHandler);
    m_ColorPickerCommand = new vrCommand("GUIMenuSample.ColorPickerCommand", ColorPickerHandler);
    m_SliderCommand = new vrCommand("GUIMenuSample.SliderCommand", SliderHandler);
    m_ListCommand = new vrCommand("GUIMenuSample.ListCommand", ListHandler);

    // Create GUI

    m_GUIRendererWeb = null;

    VRWebView webViewScript = GetComponent<VRWebView>();

    if (webViewScript == null)
    {
        MVRTools.Log(1, "[X] VRGUIMenuSample does not have a WebView.");
        return;
    }

    m_GUIRendererWeb = new vrGUIRendererWeb("", webViewScript.webView);

    m_Menu = new vrWidgetMenu("GUIMenuSample.MainMenu", m_GUIRendererWeb);

    m_Button1 = new vrWidgetButton(
        "GUIMenuSample.Button1", m_Menu, "Button", m_ButtonCommand);

    new vrWidgetSeparator("GUIMenuSample.Separator1", m_Menu);

    m_Checkbox = new vrWidgetToggleButton(
        "GUIMenuSample.Checkbox", m_Menu, "Toggle Button", m_CheckboxCommand, true);

    m_Submenu = new vrWidgetMenu("GUIMenuSample.SubMenu", m_Menu, "Sub Menu");
    m_Submenu.SetVisible(true);

    m_Radio1 = new vrWidgetRadioButton("GUIMenuSample.Radio1", m_Submenu, "Huey", m_RadioCommand, "Huey");
    m_Radio2 = new vrWidgetRadioButton("GUIMenuSample.Radio2", m_Submenu, "Dewey", m_RadioCommand, "Dewey");
    m_Radio3 = new vrWidgetRadioButton("GUIMenuSample.Radio3", m_Submenu, "Louie", m_RadioCommand, "Louie");

    m_Picker = new vrWidgetColorPicker(
        "GUIMenuSample.ColorPicker", m_Menu, "Color Picker", m_ColorPickerCommand, new vrVec4(0, 0, 0, 0));

    m_Slider = new vrWidgetSlider(
        "GUIMenuSample.Slider", m_Menu, "Slider", m_SliderCommand, 50.0f, 0.0f, 100.0f, 1.0f);

    vrValue listContents = vrValue.CreateList();
    listContents.AddListItem("Item 1");
    listContents.AddListItem("Item 2");

    m_List = new vrWidgetList(
        "GUIMenuSample.List", m_Menu, "List", m_ListCommand, listContents, 0);
}

9.3.2 9.3.2: Creating a custom VR menu

You can also create your own menu from scratch. The easiest way to get started is to modify the VRGUIMenuSample3D sample prefab.

Here are the steps to create a web menu from scratch:


protected void Start()
{
    VRWebView webViewScript = GetComponent<VRWebView>();

    if(webViewScript == null)
    {
        MVRTools.Log(1, "[X] Custom VR menu does not have a WebView.");
        return;
    }

    m_GUIRendererWeb = new vrGUIRendererWeb("MyMenuRenderer", webViewScript.webView);

    // Now we can create widgets
    m_Menu = new vrWidgetMenu("MyMenu", m_GUIRendererWeb);

    ...
}

All widgets are constructed using a MiddleVR object name, a parent widget, and a label. Most widgets will also take a reference to a command and additional initialization parameters.

For example, a button widget can be created that way:


// Create a button
vrWidgetButton button = new vrWidgetButton(
    "MyMenu.MyButton", parentWidget, "My Button Label", command);

// Alternatively
vrWidgetButton button = new vrWidgetButton("MyMenu.MyButton");
button.SetParent(parentWidget);
button.SetLabel("My Button Label");
button.AddCommand(command);

Note: The MiddleVR Class Reference provides additional details regarding constructor arguments and widget methods.

Please refer to the VRGUIMenuSample2D and VRGUIMenuSample3D prefabs to see a complete example.

9.4 9.4: HTML graphical user interface (GUI)

Web views can be used for more than simple web pages. MiddleVR allows you to create complex UIs based on open web standards by making communication possible between C# scripts and the JavaScript code in your web pages.

Make sure to read tutorial “Creating a HTML GUI” for a good introduction.

9.4.1 9.4.1: Communication between web pages and C# code

JavaScript code:


// The second argument can be any JavaScript value.
// It will be made available as a vrValue in C#
MiddleVR.Call('ButtonCommand', 42);

C# code:


private vrCommand m_ButtonCommand;

private vrValue ButtonHandler(vrValue iValue)
{
    // Do something with the value
    print( iValue.GetInt() );
    return null;
}

protected void Start()
{
    m_ButtonCommand = new vrCommand("ButtonCommand", ButtonHandler);
}

Note: The second argument of MiddleVR.Call can be any valid JavaScript value!

Note: For more information about the vrValue, read section “vrValue”.

9.4.2 9.4.2: Example

The VRGUIHTMLBasicSample2D and VRGUIHTMLBasicSample3D prefabs show basic interaction between C# and JavaScript code. These prefabs must be used with the data/GUI/HTMLBasicSample/index.html file in your MiddleVR installation folder.

10 10: Haptics

10.1 10.1: Haption haptics

This section presents how to use Haption’s haptic devices within MiddleVR.

10.1.1 10.1.1: Haptic features

MiddleVR provides access to the main concepts of the Haption SDK, and it also embraces the concepts shared by the main physics engines for video games, such as Bullet and PhysX. Hence, we chose to use the names that most people who have used physics engines will recognize.

We are going to present the Haption SDK concepts and point out the differences with common physics engines. It is important to note that the internal name of the Haption SDK is IPSI (Interactive Physics Simulation Interface). Since MiddleVR is built upon IPSI, messages containing “IPSI” can be displayed by a simulation.

10.1.2 10.1.2: Main physics notions for haptics with IPSI

10.1.2.1 10.1.2.1: Gravity?

A very important difference between the IPSI engine and other physics engines is that it does not rely on the concept of “gravity”: every object simply floats “in the air”. There is one very important exception, however: gravity is applied to an object that is being manipulated by a Haption device.

Let’s rephrase that: a virtual physics object that is not manipulated by a Haption device just floats in the air, but a manipulated virtual object does receive the gravity force. As a result, the haptic device will be pulled down (if gravity is oriented toward the ground) because of the gravity applied to the manipulated virtual physics object.

Common physics engines define a global gravity vector and a mass value per physics object. We have decided to use the same solution.

Gravity defaults to the value (0,0,-9.81), which corresponds to the average gravity vector on Earth expressed in the MiddleVR base (right-handed, +X to the right, +Y to the front, +Z to the top), in m/s².

Mass is a float value given per physics object. IPSI obtains weight thanks to the usual formula:

weight = mass × gravity
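
For example, a manipulated rigid body with a mass of 2 kg receives a weight of 2 × 9.81 ≈ 19.6 N, directed along −Z with the default gravity vector.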

10.1.2.2 10.1.2.2: Rigid bodies are physics objects

IPSI manipulates rigid bodies. As the name suggests, a rigid body is a physics object that is not deformable. Each rigid body is given a position, an orientation, a mass (that will be rendered through a manipulation device), a physics geometry that approximates its shape, and damping factors for linear and angular displacements.

10.1.2.3 10.1.2.3: Types of actions on physics bodies

IPSI supports two types of objects: static and movable ones.

As the name suggests, a static object can never be moved: its position and orientation are set once and for all at creation time.

A movable object can be manipulated by two means:

  1. Applying forces and torques on it.
  2. Coupling it with a manipulation device (i.e. a haptic device).

In the first case, a moving object applies a force on another object during a collision.

The second case addresses the problem of bringing a haptic device into the physics simulation: the virtual object that is connected to a manipulation device will be directly moved by it, and any force applied on the virtual object will be rendered to the user through the manipulation device.

10.1.2.4 10.1.2.4: Managing collisions between physics bodies

IPSI lets the user activate or deactivate every collision in the simulation or per pair of rigid bodies (for example, it is possible to limit collisions to only two specific rigid bodies in a complex physics simulation).

Three goals for this feature:

10.1.2.5 10.1.2.5: Adding constraints between physics bodies

Constraints are a way to remove degrees of freedom from rigid bodies or to limit their movements relative to another rigid body.

IPSI supports several constraints that we expose in MiddleVR. However, we sometimes decided to rename them in favor of the more widespread vocabulary found in physics engines:

  1. Ball-and-socket: also called a spherical joint, 3 degrees-of-freedom. This constraint allows rotations to vary freely around a position.
    Wikipedia: http://en.wikipedia.org/wiki/Ball_joint.
  2. Cylindrical: 2 degrees-of-freedom. This constraint combines the properties of a hinge constraint and a prismatic constraint: free rotation around an axis along which the body can also slide.
    Wikipedia: http://en.wikipedia.org/wiki/Cylindrical_joint.
  3. Fixed: also called a weld joint, it rigidly links two bodies.
  4. Helical: also called a screw joint, 1 degree-of-freedom. This constraint provides translation along an axis by utilizing the threads of a threaded rod.
    Wikipedia: http://en.wikipedia.org/wiki/Screw_joint.
  5. Hinge: also called a revolute joint, 1 degree-of-freedom. This constraint allows rotation around an axis.
    Wikipedia: http://en.wikipedia.org/wiki/Revolute_joint.
  6. Planar: 3 degrees-of-freedom. This constraint allows free rotation around 2 axes while keeping the constrained body parallel to a planar surface.
  7. Prismatic: also called a slider joint, 1 degree-of-freedom. This constraint keeps orientations identical but allows sliding along an axis.
    Wikipedia: http://en.wikipedia.org/wiki/Prismatic_joint.
  8. Universal-joint: 2 degrees-of-freedom. This constraint allows rotation around 2 axes, so that two rods connected this way behave as one ‘bent’ rod.
    Wikipedia: http://en.wikipedia.org/wiki/Universal_joint.

10.1.3 10.1.3: Adding Haption devices to the system

The following steps are not strictly required by MiddleVR, but they are mandatory for any use of Haption devices. So if the IPSI server is not configured yet, let’s do it now.

Run the “DEVICE_CONFIGURATOR.exe” program (it may be located in the “C:\Program Files\HAPTION\IPSI\Server\V2.20\bin” folder depending on the version of the IPSI server you use).

Figure 4: Configuring Haption haptic devices globally.

The available settings for a haptic device to be added are:

Property Description
Kind Specify the kind of device.
Name Specify an arbitrary name for the device.
Address Specify the network address of the device.
Position and Orientation Specify the base position and orientation of the haptic device. You should generally keep these values at zero and instead manipulate the “Observation Node” in the “Add Device” window of MiddleVR-Config.

Once you have provided the settings, click on “Add” to add the device.

Note: if you encounter a crash – such as an exception – when you try to add a device, please check that the folder “%programdata%\HAPTION\IPSI” is writable.

10.1.4 10.1.4: Configuring Haption devices

In the Devices part of MiddleVR-Config, click on the “+” button. In the “Add Device” window that pops up, select the “Haption” item.

Figure 5: Adding a Haption haptic device.

The Haption driver manages several settings:

Property Description
Server Address Specify the address of the computer where the IPSI server is running.
Mode Specify the connection mode, which defines the kind of devices to be used (see details below).
Time Step Specify the simulation execution time step, in seconds (default value: 0.003).
Resolution Specify the tessellation parameter, in meters (default value is 0.003). This parameter gives the simulation precision.
Observation Node Specify a MiddleVR 3D-Node that will define the base of the haptic devices. This setting aims at always placing the haptic devices in front of the user’s body to ease their gestures. The list is populated with the 3D-nodes found in the current configuration.
Enable Collisions (*) Specify whether the simulation should start with collisions enabled.

(*) Disabling collisions at start-up and enabling them later can lead to a faster start-up of the simulation.

The available connection modes are:

Mode Description
Desktop Includes all SpaceMouses, Virtuose 15/25 and Virtuose Desktop.
Powerwall Includes Virtuose 6D - 35/45 and Flysticks from motion capture.
Immersive Includes Virtuose devices 35/45, Virtuose Inca and all compatible motion tracking systems.

Each recognized haptic device will be listed under the “Haption.Driver” line and prefixed by “HaptionX.”, where X stands for the number of the device. In addition, note that at least one device is always presented, so “Haption0.” is always available.

Figure 6: A SpaceMouse added as Haption device.

The presented lines have these meanings for the corresponding device:

Property Description
.PowerOn Tells whether the device is turned on. Note that it will remain turned off when a SpaceMouse is used.
.UserDetected Tells whether the user is detected (e.g. the user is holding the Virtuose wrench; it can also be seen as the opposite of the “dead-man status”).
.EmergencyStop Tells whether the device was stopped.
.VirtualTracker Provides a MiddleVR Tracker object which is a sort of virtual 3D cursor that can be moved and oriented in the virtual environment. The cursor can be translated indefinitely.
.SystemTracker Similar to “Virtual Tracker” but relies only on the physical positions/orientations of the haptic device. No information is provided for a SpaceMouse (because such a device does not “move”).
.Buttons Provides the pressed status of the buttons.
.Wrench.Force (*) Provides the force applied by the user through the wrench.
.Wrench.Torque (*) Provides the torque applied by the user through the wrench.

(*) Notes for wrench force or torque:

1. no force or torque is computed from a SpaceMouse.

2. this value is refreshed at the frequency of devices in MiddleVR, which is based on the update frequency of Unity. Consequently, this frequency will never be high enough for haptic rendering. Please limit the use of this value to debugging or to displaying visual feedback for the user.

10.1.4.1 10.1.4.1: What to do if haptic devices are not displayed?

Please verify the following points:

  1. Check that cables are correctly connected.

  2. Check that you added the Haption haptic device to the system (except for a SpaceMouse, which does not need to be added).

  3. Remove the Haption driver in MiddleVR-Config by clicking on the “-” button, and then re-add it with “+” so the list of devices will be refreshed.

  4. Verify the mode you use, because devices that do not match it are hidden.

  5. Check the logs that MiddleVR prints. For example, with only a SpaceMouse connected, we get:
    [ ] The Haption driver will use '1' manipulation devices.
    [ ] + Manipulation device 'SpaceNavigator[0]'.
    [<] The Haption driver connection is ended.
     
    With the Virtuose Simulator, we obtain a second Haption device:
    [ ] The Haption driver will use '2' manipulation devices.
    [ ] + Manipulation device 'SpaceNavigator[0]'.
    [ ] + Manipulation device 'Virtuose 6D Desktop'.
    [<] The Haption driver connection is ended.

Figure 7: The Virtuose Simulator provides a second haptic device.

It is suggested to close MiddleVR-Config when playing a simulation with a Haption haptic device in Unity, because the IPSI server cannot handle two instances of itself. To avoid crashes, MiddleVR-Config takes exclusive control of IPSI when its window gets focus. Hence, if you click on MiddleVR-Config while a demo is running, the demo will lose access to IPSI.

10.1.5 10.1.5: Properties of physics elements

10.1.5.1 10.1.5.1: Rigid body

In Unity, a rigid body is simply a GameObject equipped with the “VRPhysicsBody” script.

Please note that the physics system shipped with Unity is based on Nvidia PhysX, whilst the MiddleVR implementation of physics relies on Haption physics. You should therefore avoid mixing the two physics systems: do not add a Unity rigid body and a MiddleVR rigid body (VRPhysicsBody) to the same Unity GameObject, and do not expect Unity Colliders to react with the MiddleVR physics system.

IPSI works only with meshes, so you must provide one for each rigid body via a Unity MeshFilter component.

The rigid body inspector provides the following settings:

Property Description
Static (*) Tells that this rigid body is static.
Mass Sets the mass in kg. Note that this setting is ignored for static objects.
Margin (**) Sets a factor to enlarge or shrink the physics geometry.
Rotation damping A factor to damp rotations.
Translation damping A factor to damp translations.
Merge Child Geometries Finds geometries in this object and in its children and merges them to build a single one. Inactive GameObjects are ignored.

(*) When marked as static, an object will never be moved: it remains static for the whole duration of the simulation. Internally, IPSI uses this property to place static objects in specific data structures and speed up computations. Such an object participates only in collisions.

(**) The mesh used for physics is built from the mesh given by the Unity MeshFilter.

Position and orientation are given by the Transform part of the inspector, and values are expressed in the Unity base (left-handed, +X to the right, +Y to the top, +Z to the front). Scale is applied at mesh loading. Please note that coordinates are given in meters, so avoid creating very big objects if it is not necessary.

10.1.5.2 10.1.5.2: Associating with a manipulation device

A “Body manipulator IPSI” lets the user set an association with a manipulation device.

The “Manipulation Device Id” matches the number of a present manipulation device. The first one is numbered 0, the second 1, and so on according to the configuration authored in MiddleVR-Config.

Activating/deactivating this component automatically starts/stops the manipulation of the physics body.

Figure 8: Setting a manipulation device.
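
As an illustration, this toggling can be done from a script. A minimal sketch, assuming the “Body manipulator IPSI” component is implemented by a script class named VRBodyManipulatorIpsi (the actual class name may differ; check the scripts under Assets/MiddleVR/Scripts):

using UnityEngine;

public class ToggleManipulation : MonoBehaviour
{
    private MonoBehaviour m_BodyManipulator;

    protected void Start()
    {
        // Look up the manipulator component by its (assumed) class name.
        m_BodyManipulator = GetComponent("VRBodyManipulatorIpsi") as MonoBehaviour;
    }

    protected void Update()
    {
        // Enabling the component starts the manipulation; disabling it stops it.
        if (m_BodyManipulator != null && Input.GetKeyDown(KeyCode.M))
        {
            m_BodyManipulator.enabled = !m_BodyManipulator.enabled;
        }
    }
}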

The entry “Attach Point Type” lets you select how to attach a manipulation device to this object:

When selecting ARBITRARY_POINT, an offset must be given to locate the arbitrary point relative to the object frame: “Offset Translation” and “Offset Rotation” move the arbitrary point, whilst a Unity gizmo shows its location when the object is selected. If the gizmo is not visible, it may be worth increasing its size with the “Gizmo Sphere Radius” parameter.

10.1.5.3 10.1.5.3: Constraints for rigid bodies

10.1.5.3.1 10.1.5.3.1: Important notes for the usage of constraints

Like rigid bodies, constraints cannot be added or removed dynamically. In addition, they cannot be deactivated, with one exception: the “fixed” constraint.

It is not possible to define constraints with limitless values. For example, you cannot define a sliding constraint that allows unlimited translation: it must be limited to a range.

Constraints link two objects together but when the “connected body” is null, the constraint is linked with the world.

Values that define a constraint, such as positions or axes, are expressed in the frame of the first rigid body (i.e. the body that owns the constraint). For example, a hinge Axis of (0,0,1) refers to the local Z axis of the owning body, whatever that body’s orientation in the world.

You might need to disable collisions between constrained bodies to allow their interpenetration.

10.1.5.3.2 10.1.5.3.2: Ball-and-socket constraint

Create a ball-and-socket constraint between the “GameObject” this component belongs to and another rigid body.

Figure 9: Setting a ball-and-socket constraint.

The available settings are:

Property Description
Connected body The peer object to create the constraint with.
Anchor Position of the ball (in this object frame).
Gizmo sphere radius The radius of the gizmo sphere. Raise this value until you see a green sphere at the anchor position.

Gizmos will be displayed only if the “GameObject” this component belongs to is selected in Unity and this component is not collapsed.

10.1.5.3.3 10.1.5.3.3: Cylindrical constraint

Create a cylindrical constraint between the “GameObject” this component belongs to and another rigid body.

Figure 10: Setting a cylindrical constraint.

The available settings are:

Property Description
Connected body The peer object to create the constraint with.
Anchor A point crossed by the axis of the cylinder (in this object frame).
Axis The axis of the cylinder (in this object frame).
Angular limits Minimum and maximum values of the rotation (in degrees).
Angular zero position The angle value at start (in degrees).
Linear limits Minimum and maximum values of the linear displacement.
Linear zero position The position value at start.
Gizmo sphere radius The radius of the gizmo spheres. Raise this value until you see a green sphere at the anchor position and roughly at linear limits.
Gizmo line length The length of the line drawn along the axis. Raise this value until you see a line.

Gizmos will be displayed only if the “GameObject” this component belongs to is selected in Unity and this component is not collapsed.

10.1.5.3.4 10.1.5.3.4: Fixed constraint

Create a fixed constraint between the “GameObject” this component belongs to and another rigid body.

Figure 11: Setting a fixed constraint.

Only one parameter is required:

Property Description
Connected body The peer object to create the constraint with.

Gizmos will be displayed only if the “GameObject” this component belongs to is selected in Unity and this component is not collapsed.

10.1.5.3.5 10.1.5.3.5: Helical constraint

Create a helical constraint between the “GameObject” this component belongs to and another rigid body.

Figure 12: Setting a helical constraint.

The available settings are:

Property Description
Connected body The peer object to create the constraint with.
Anchor A point crossed by the axis of the screw (in this object frame).
Axis The axis of the screw (in this object frame).
Limits Minimum and maximum values of the rotation (in degrees).
Zero position The angle value at start (in degrees).
Gizmo sphere radius The radius of the gizmo sphere. Raise this value until you see a green sphere at the anchor position.
Gizmo line length The length of the line drawn along the axis. Raise this value until you see a line.

Gizmos will be displayed only if the “GameObject” this component belongs to is selected in Unity and this component is not collapsed.

10.1.5.3.6 10.1.5.3.6: Hinge constraint

Create a hinge constraint between the “GameObject” this component belongs to and another rigid body.

Figure 13: Setting a hinge constraint.

The available settings are:

Property Description
Connected body The peer object to create the constraint with.
Anchor A point crossed by the axis of the hinge (in this object frame).
Axis The axis of the hinge (in this object frame).
Limits Minimum and maximum values of the rotation (in degrees).
Zero position The angle value at start (in degrees).
Gizmo sphere radius The radius of the gizmo sphere at the anchor position. Raise this value until you see a sphere.
Gizmo line length The length of the line drawn along the axis. Raise this value until you see a line.

Gizmos will be displayed only if the “GameObject” this component belongs to is selected in Unity and this component is not collapsed.

10.1.5.3.7 10.1.5.3.7: Planar constraint

Create a planar constraint between the “GameObject” this component belongs to and another rigid body.

Figure 14: Setting a planar constraint.

The available settings are:

Property Description
Connected body The peer object to create the constraint with.
Axis 0 and Axis 1 Two axes (in this object frame) that define a plane.
Gizmo line length The length of the lines drawn along the two axes. Raise this value until you see the lines.

Gizmos will be displayed only if the “GameObject” this component belongs to is selected in Unity and this component is not collapsed.

10.1.5.3.8 10.1.5.3.8: Prismatic constraint

Create a prismatic constraint between the “GameObject” this component belongs to and another rigid body.

Figure 15: Setting a prismatic constraint.

The available settings are:

Property Description
Connected body The peer object to create the constraint with.
Axis The axis to slide along (in this object frame).
Limits Minimum and maximum values of the translation.
Zero position The translation value at start.
Gizmo sphere radius The radius of the gizmo spheres: at the center of this object and roughly at the translation limits. Raise this value until you see spheres.
Gizmo line length The length of the line drawn along the axis. Raise this value until you see a line.

Gizmos will be displayed only if the “GameObject” this component belongs to is selected in Unity and this component is not collapsed.

10.1.5.3.9 10.1.5.3.9: Universal-Joint constraint

Create a universal-joint constraint between the “GameObject” this component belongs to and another rigid body.

Figure 16: Setting a universal-joint constraint.

The available settings are:

Property Description
Connected body The peer object to create the constraint with.
Axis 0 A first axis (in this object frame) for this U-joint constraint.
Axis 1 A second axis (in this object frame) for this U-joint constraint.
Gizmo sphere radius The radius of the gizmo sphere at the anchor position. Raise this value until you see a green sphere.
Gizmo line length The length of the lines drawn along the two axes. Raise this value until you see the lines.

Gizmos will be displayed only if the “GameObject” this component belongs to is selected in Unity and this component is not collapsed.

10.1.5.4 10.1.5.4: Managing and visualizing collisions

10.1.5.4.1 10.1.5.4.1: Managing collisions

It may be useful to disable collisions for pairs of objects: to enable some movements with constraints or to ease object manipulations in crowded zones.

We provide the “VRPhysicsDisableCollisions” component.

Figure 17: Disabling collisions between two rigid bodies.

Only one parameter is required:

Property Description
Connected body (*) The peer object to disable collision with.

(*) If the “connected body” is null, this body will disable every collision with every rigid body.

10.1.5.4.2 10.1.5.4.2: Visualizing collisions

We provide the “VRPhysicsShowContacts” component, which can display collisions between two rigid bodies. It needs to be part of a “GameObject”: create a “GameObject” and add this component to it.

Figure 18: Show-contacts inspector.
Figure 18: Show-contacts inspector.

The available settings are:

Property Description
Object at contact (*) A GameObject that will be instantiated at each contact point to show it.
Max contacts nb (**) Sets the maximum number of contact points to be displayed.
Translation and Rotation (***) Apply a translation and rotation to the instantiated GameObjects.
Ray debug (****) Draws a green line that originates from the contact point position and follows its normal.

(*) It is suggested to use a prefab here. For example, we created a prefab called MVR_PhysicsContactPoint that simply displays a red cylinder. In order to keep good performance, you should use objects with low-poly resolution and basic shading.

(**) Note that the value is limited internally to 512 in IPSI.

(***) These settings can be very helpful because contact points provide only two values: a position and a normal. We try to compute a rotation between the Y axis (i.e. ‘Vector3.up’) and the normal, so you should build your contact-point mesh along the Y axis or use the rotation setting.

(****) As the name suggests, this setting is mainly intended for debugging. Internally, it uses Unity gizmos so it will be displayed only in the Unity Scene view and will very likely induce a performance penalty.
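
To make note (***) concrete, the rotation between the Y axis and a contact normal can be computed with standard Unity calls. A minimal sketch (PlaceMarker, contactPoint and contactNormal are hypothetical names used for illustration):

using UnityEngine;

public class ContactMarkerPlacer : MonoBehaviour
{
    // Place and orient a marker so that its local Y axis follows the contact normal.
    public void PlaceMarker(GameObject marker, Vector3 contactPoint, Vector3 contactNormal)
    {
        marker.transform.position = contactPoint;
        // Quaternion.FromToRotation computes the rotation mapping Vector3.up onto the normal.
        marker.transform.rotation = Quaternion.FromToRotation(Vector3.up, contactNormal);
    }
}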

Figure 19: Contacts between two cubes instantiate many GameObjects (they will then remain in a pool for activation/deactivation according to contacts).

Figure 20: Several red cylinders were created at contact points.

Note that contacts are expressed with two points, one for each involved object (because the normals are different, and IPSI also sends contacts that are about to happen because of surface proximity). We arbitrarily chose to display the values from the first body only.

Finally, since contact detection implies a performance penalty, it is possible to deactivate all contact detection by using the VRDeactivateAllPhysicsContacts script. It has to be added to only one “GameObject”.

10.1.6 10.1.6: Sample scripts

The folder “Assets/MiddleVR/Scripts/Samples/Physics” contains samples that can be used to write your own scripts.

10.1.6.1 10.1.6.1: “Change Attached Physics Body IPSI”

Shows how to change the physics body that is manipulated by a manipulation device. To use this sample, press the keyboard keys H (i.e. “haptics”) then C (i.e. “change”) to iterate through all the rigid bodies. Each one will become the manipulated body in turn, and a message will be output to the logs.

Note that static or frozen rigid bodies cannot be manipulated.

This sample also illustrates how to use several vrCommands:

Figure 21: The sample to change the manipulated body.

Only one parameter is required:

Property Description
Manipulation Device Id Id of the manipulation device that will alternately manipulate rigid bodies.

10.1.6.2 10.1.6.2: “Apply force/torque sample”

Shows how to apply a force or a torque to the physics body this component is a member of. To use this sample, press the keyboard keys H (i.e. “haptics”) then F (i.e. “force”) or T (i.e. “torque”). In addition, pressing a SHIFT key will apply the opposite force or torque.

Figure 22: The sample to apply force or torque.

The available settings are:

Property Description
Force The force to be applied.
Torque The torque to be applied.

10.1.6.3 10.1.6.3: “Device buttons status sample”

Shows how to track button states: a message is printed when a button of a Haption device is pressed or released.

10.1.7 10.1.7: Tips and tricks

10.1.7.1 10.1.7.1: Error messages

The following message is printed in logs:

[X] PhysicsBody: No PhysicsManager found.

It indicates that the physics manager was not loaded. This probably happens because you did not load the MiddleVR Haption driver. Check that you provided the right configuration file and that it contains a Haption haptic device.

10.1.7.2 10.1.7.2: Haptics do not work anymore after a click on MiddleVR-Config

MiddleVR-Config takes exclusive control of IPSI when its window gets focus. Hence, if you click on MiddleVR-Config while a demo is running, the demo will lose access to IPSI.

10.1.7.3 10.1.7.3: MiddleVR-Config window takes time to get focus

A connection to IPSI is attempted when the window gets focus, which may be slow. When focus is lost, a disconnection is performed.

10.1.7.4 10.1.7.4: The simulation takes a long time to start

Big geometries (such as big cubes) take more time to load than small ones. You can try to shrink their size (use the scale factor of the FBX importer).

10.1.7.5 10.1.7.5: Huge memory usage

Do you respect the scale of real objects in your simulation? Coordinates in Unity are expressed in meters, so you must ensure that your objects are not too big compared to reality.

Another solution is to raise the value of “Resolution” (in the Haption driver) to get a coarser tessellation, because the physics world is discretized by IPSI.

Note that huge memory consumption might be the cause of a slow-starting simulation.

10.1.7.6 10.1.7.6: Poor performance

Try to disable collision detection between objects whose collisions are not worth tracking.

10.1.7.7 10.1.7.7: The manipulated rigid body seems to stick with collided objects

The collisions of the manipulated body can indeed lead to a large amount of CPU computation, which freezes the simulation for a short period of time.

This case can appear when the manipulated body is a big cube and you try to make one of its faces collide with another cube: this is due to their large colliding surface. Prefer using the cube corners to collide with objects. More generally, use smaller colliding surfaces when possible.

11 11: Advanced topic - Configuring a tracking system

11.1 11.1: Introduction

Make sure to read the article “Understanding tracking devices”.

11.2 11.2: VR system origin

The difficult part of understanding the configuration of a tracking system is understanding how the data from the tracker are related to the real world.

Position

The first thing to decide is where, in the real world, you want to set the origin of your VR system. This is an arbitrary point in space that has (0,0,0) as coordinates.

This could be a point on the floor of your CAVE, the center of the 3DTV, a point on your desk, or any point in space.

“Default” origin

If you are using only one tracking system, it is often easier to just use the default origin of the trackers. For example, for the Kinect, the origin is the position of the sensor itself: if a user is standing exactly on the Kinect, their reported position will be zero. For a Razer Hydra, the origin is the base: if the Hydra trackers could come exactly inside the base, their position would be exactly (0,0,0).

For any given tracking system, there is a position in space where the tracker will report (0,0,0).

You can decide to keep this point in space as the origin of your VR system, or decide to set the origin somewhere else because it is more convenient.

Neutral orientation

You also have to decide what the “neutral orientation” will be, that is, an arbitrary rotation in space where Yaw = Pitch = Roll = 0.

It is often easier to think about your “neutral orientation” in terms of the Front, Right and Up vectors (represented in MiddleVR respectively as +Y, +X, +Z).

The Up vector is generally easy to set: it is the opposite of gravity.

But the Front and Right vectors can be arbitrarily set. There is usually a natural orientation and it shouldn’t be hard to decide on one.

Just make sure to be consistent during the whole configuration process.

“Default” orientation

As for the position, there is a default orientation inherent to your tracking system. You can decide to keep this neutral orientation as the neutral orientation of your VR system, or decide to change it because it is more convenient.

11.3 11.3: Moving the origin of a tracker

There are two reasons why you might want to modify the origin of a tracker:

There are two ways to modify the origin of a tracker:

Using MiddleVR 3D nodes to move the origin of a tracker

MiddleVR offers a quick and easy way to move the origin of a tracker.

You simply have to create a 3D node that will represent the origin of your tracker, and place it with respect to the origin of your VR system.

Then all objects that are tracked by this particular tracker should be represented as 3D nodes that are children of this origin node.

For example if you open the “Kinect” predefined configuration, you will see the following hierarchy:

If you move the Kinect0.RootNode, all the other nodes will be offset too.

Calibrating the tracker origin

You can either manually move the tracker origin (Kinect0.RootNode in the example above), or use one of the calibration features of MiddleVR: “Calibrate Parent”.

Suppose you are using a Razer Hydra. You could create the following hierarchy:

Now let’s say that your Hydra base is not placed at the origin of your VR system. For example, you decided that the origin of your VR system is the middle of a table, because this is where all the interactions will happen, and you put the Hydra base further away so that it does not disturb the interactions. This means that if you move the hand tracker to the origin, the data reported by MiddleVR will not be (0,0,0), but the actual distance from the hand to the base.

The first option is to manually measure the distance between the Hydra base and the actual origin of the VR system, and enter it as the coordinates of the HydraBase 3D node.

The other option is to position one of the Hydra trackers, for example the one that you assigned to the HandNode, at the origin of the VR system. You can then simply select “Calibrate Parent” in the “Calibration” options on top of the 3D node properties, and press “Calibrate”. This will automatically move the HydraBase 3D node so that the HandNode position is (0,0,0). The effect is that your HandNode is now positioned at the origin, and the position reported by MiddleVR for the HandNode is also (0,0,0).
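
For example, if the hand tracker, physically placed at the VR system origin, reports a position of (0.5, 0.2, 0.0), then “Calibrate Parent” will offset the HydraBase node by (-0.5, -0.2, 0.0) (assuming no rotation is involved), so that the HandNode reports (0,0,0) again.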

Offset to the tracked object

Set Neutral Transformation / Position / Orientation

12 12: Advanced topic - Understanding stereoscopy

From Wikipedia:

Stereoscopy (also called stereoscopics or 3D imaging) is a technique for
creating the illusion of depth in an image by means of stereopsis for
binocular vision. [...]

Most stereoscopic methods present two offset images separately to the left 
and right eye of the viewer. These two-dimensional images are then combined
in the brain to give the perception of 3D depth.

There are three things to consider:

12.1 12.1: How are stereoscopic images generated?

The simplest way to generate a pair of stereoscopic images is to simply create two cameras and offset them by the distance between your two eyes.

Unfortunately, things are not so simple. This would only work if you had a screen in front of each eye and only looked at infinity: this way, the axes of your eyes and the axes of the cameras would always be parallel.
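
To make this concrete, here is a minimal Unity sketch of the naive parallel-camera setup described above (for illustration only; this is not how MiddleVR configures its cameras internally):

using UnityEngine;

public class NaiveStereoRig : MonoBehaviour
{
    // Average human inter-ocular distance, in meters.
    public float interOcularDistance = 0.065f;

    public Camera leftEye;
    public Camera rightEye;

    protected void Start()
    {
        // Offset each camera by half the inter-ocular distance along the local X axis.
        // The view axes stay parallel, which is only correct when looking at infinity.
        leftEye.transform.localPosition = new Vector3(-interOcularDistance / 2.0f, 0.0f, 0.0f);
        rightEye.transform.localPosition = new Vector3(+interOcularDistance / 2.0f, 0.0f, 0.0f);
    }
}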

12.1.1 12.1.1: 3D screens

Currently, one of the most common ways of displaying a 3D image is to use a single 3D screen and wear glasses to separate the images.

If you have read “Understanding head-tracking and perspective”, you already know that the 3D screen is like a window on the virtual world. This means that what your eyes see is constrained by that window. The parts of the virtual world that you can see depend on the position of your eyes with respect to this window.

There are two ways to set up stereoscopy for a 3D screen in MiddleVR:

With a stereoscopic camera, you can configure the field of view and the screen distance. This assumes that the eyes of the viewer are always exactly facing the middle of the screen and that they are located at the convergence distance. The size of the screen is determined by the field of view of the camera and the “screen distance” parameter of the stereoscopic camera.
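
This relationship follows from simple trigonometry:

screen width = 2 × screen distance × tan(horizontal field of view / 2)

For example, a horizontal field of view of 60° with a screen distance of 2 m corresponds to a screen width of 2 × 2 × tan(30°) ≈ 2.31 m.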

If you set up a screen for your stereoscopic camera, you can specify more precisely the exact size and position of the screen with respect to the viewer.

You can then add a tracker on the stereoscopic camera so that when the viewer moves, the perspective is always correct.

12.1.2 12.1.2: 3D projectors

A 3D projector can be considered as a 3D screen, with the size of the screen being the size of the projected image.

12.1.3 12.1.3: HMDs

In the simplest cases, HMD display systems are considered as 3D screens (Sony HMZ-T1, T2).

In other cases, the two screens are offset: they can be offset horizontally (NVIS SX-60), or even rotated symmetrically (NVIS SX-111).

12.2 12.2: How are stereoscopic images transmitted?

Here are the most common ways of transmitting a stereoscopic image to a stereoscopic system. These different mechanisms only relate to the way the images are transmitted from the graphics card to the display system, and are not necessarily linked to the way they will be displayed. See “Debate” below.

From Wikipedia:

There are multiple ways to provide these separate images:

    - Use dual video inputs, thereby providing a completely separate video
    signal to each eye
    - Time-based multiplexing. Techniques such as frame sequential
    combine two separate video signals into one signal by alternating
    the left and right images in successive frames.
    - Side by side or top/bottom multiplexing. This method allocates half
    of the image to the left eye and the other half of the image to the right eye.

The advantage of dual video inputs is that it provides the maximum resolution
for each image and the maximum frame rate for each eye. The disadvantage of dual
video inputs is that it requires separate video outputs and cables from the
device generating the content.

Time-based multiplexing preserves the full resolution per each image, but
reduces the frame rate by half. For example, if the signal is presented at 60 Hz,
each eye is receiving just 30 Hz updates. This may become an issue with accurately
presenting fast-moving images.

Side-by-side and top/bottom multiplexing provide full-rate updates to each eye,
but reduce the resolution presented to each eye.

12.3 12.3: Debate

12.3.1 12.3.1: The “natural” way

Most of the time, an active stereo system will use a frame sequential transmission because it is simple to use the alternate frame signal directly with the active glasses.

In the same way, a dual projector passive stereo system will naturally use a dual-input transmission because you can simply plug the left input in the left projector and do the same for the right projector.

Most 3D TVs support side-by-side and/or top-bottom input directly, which makes it the easiest way to set up stereoscopy on this kind of display.

12.3.2 12.3.2: The “twisted” way

Note however that a frame sequential transmission can be split to be used as a dual input. A dual-input transmission can also be combined into a frame sequential signal.

Basically all transmission signals can be converted to one another depending on the requirements of the VR system.

12.4 12.4: How are stereoscopic images displayed?

Once the stereoscopic images have been generated and transmitted, the display must make sure that the left image is only seen by the left eye, and the same for the right eye.

12.4.1 12.4.1: HMD

As seen in “Configuring a HMD” and “How are stereoscopic images generated”, most HMDs are made up of two screens. By design, each eye only sees one screen.

12.4.2 12.4.2: 3D Screen

On a 3D screen, both images are displayed on the same display. This means that there must be a way for each eye to see only the corresponding image.

The two most common mechanisms to achieve this involve 3D glasses. These two mechanisms are commonly called “Active stereoscopy” and “Passive stereoscopy”. The names come from the fact that in one case the 3D glasses are “active”, and in the other case they are passive.

This denomination is also often used for the way the stereoscopic images are transmitted, even though nowadays an active stereo signal can be used to feed a passive stereo system and vice-versa.

12.4.2.1 12.4.2.1: Active stereoscopy

Active stereo refers to a mechanism in which the left and right images are displayed sequentially on the 3D screen. This means that frame 0 is for the left eye, frame 1 for the right eye, and so on.

This also means that when frame 0 displays the left eye, the 3D glasses must hide this image from the right eye. This is accomplished by using LCD shutter glasses that can turn from transparent to opaque very quickly. While the left LCD from the glasses is transparent, the right one is opaque, and vice versa.

Those glasses are called active because they have active elements (the LCD shutters) and require batteries.

12.4.2.2 12.4.2.2: Passive stereoscopy

Most of the time, passive stereoscopy involves polarized glasses and displays.

From Wikipedia:

A polarized 3D system uses polarization glasses to create the illusion of
three-dimensional images by restricting the light that reaches each eye,
an example of stereoscopy.

To present stereoscopic images and films, two images are projected superimposed
onto the same screen or display through different polarizing filters. The viewer
wears low-cost eyeglasses which contain a pair of different polarizing filters.
As each filter passes only that light which is similarly polarized and blocks the
light polarized in the opposite direction, each eye sees a different image. This
is used to produce a three-dimensional effect by projecting the same scene into
both eyes, but depicted from slightly different perspectives. Several people can
view the stereoscopic images at the same time.

The glasses don’t have any active element, only polarizing filters, which is why the mechanism is called “passive stereoscopy”.

12.4.3 12.4.3: 3D Projectors

The majority of 3D projectors can only do active stereoscopy. The most common way to achieve passive stereoscopy with projectors is to use two non-3D projectors, one for each eye and each correctly polarized.

12.4.4 12.4.4: How to configure MiddleVR

The way the images are displayed is completely determined by the hardware in place and is a design choice when creating the VR system.

MiddleVR can only be configured to specify how the images are generated and transmitted.

13 13: Advanced topic - How to configure a VRPN server

Make sure to read: What is VRPN?

13.1 13.1: Configuring a VRPN server

13.1.1 13.1.1: The general case

Locate the ‘vrpn_server.exe’ file. It is typically found in:

C:/Program files (x86)/MiddleVR/bin/vrpn

In the same folder you should have the ‘vrpn.cfg’ file. This is the configuration file that needs to be edited in order to specify which devices should be accessed and how.

Edit this file. You will notice a big file where all lines start with a ‘#’:

#vrpn_Tracker_Intersense Tracker0 AUTO IS900Time
#vrpn_Tracker_Dyna  Tracker0  1 /dev/ttyS0 19200
#vrpn_Tracker_Flock Tracker0  2 COM1 38400 1 N -x

As VRPN supports a lot of different devices, you have to find your device in the list. Once you have identified it, uncomment the corresponding line by removing the ‘#’.

The device name, typically Tracker0 (the second token on the line), can be changed to better match the semantics of your tracker. This is also required if you use several trackers with the same VRPN server. You could for example choose “HeadTracker” or “HandTracker”.

Remember this name exactly (note that it is case sensitive): it will be used by the client when connecting to the server to identify the tracker (see configuring a VRPN tracker).
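
For example, to expose an Intersense tracker under the name HeadTracker, the uncommented line could look like this (the trailing options depend on your hardware):

vrpn_Tracker_Intersense HeadTracker AUTO IS900Time

A VRPN client would then refer to this device as HeadTracker@<server address>.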

There is a lot of documentation inside the file that will help you identify the device and its options.

13.1.2 13.1.2: Specific devices

Here are specific articles about configuring popular devices:

13.1.3 13.1.3: Running the VRPN server

Once you have correctly configured vrpn.cfg, you just have to run the VRPN server by double-clicking on the vrpn_server.exe file.

This will start a DOS box, which might be empty or display information depending on the configured devices:

13.1.4 13.1.4: Troubleshooting VRPN

See the following article: Troubleshooting VRPN.

14 14: Advanced topic - How to show a viewport on a specific display

Generally speaking, every time you plug a cable into your graphics card, the graphics driver creates a new “Display”. Each display is shown on a different part of the global Windows desktop.

You can check that in your NVidia drivers. For example, there are two displays on this machine:

We can see that display 1 (Apple Laptop Display) is the Main display.

Note: The main/primary display is always positioned at the origin of the desktop. This means its position is always (0,0).

Now the second display (Rift DK = Oculus Rift HMD display) is displayed here to the left of display 1. You can drag the icons in the view to change the position of any display.

Any display that is not a main display will have a position relative to the main display. For example here we can see that display 2 has an x position of -1280. The width of this display being exactly 1280, it means that the right side of display 2 touches display 1.

You can drag the display so that it is to the right of display 1:

You can notice that now its x position is 1280, which is also the width of display 1.

14.1 14.1: Displays in MiddleVR

You will find the exact same diagram in the MiddleVR configuration tool, in the Viewports tab:

You can see on the left side the “System Displays” information. You can click on each display to see its information (geometry, refresh rate etc).

14.2 14.2: Viewports in MiddleVR

Now if you want to show a viewport on the first display (Apple Laptop Display), you need to make sure that the coordinates and size of the viewport cover that display:

Notice that the Top and Left values of the viewport are 0 because “Display 1” is the main display, and its size is 1280×800, the same size as the display.

If you want the viewport to be displayed on the second screen, you need to change the position and size to cover the second display:

The Left value of the viewport is now 1280, because that’s where the second display is on the desktop.

15 15: FAQ / Troubleshooting

The troubleshooting section has been moved to our online knowledge base: http://www.middlevr.com/kb.

16 16: Known limitations and bugs

16.1 16.1: MiddleVR for Unity

17 17: Revision history

17.1 17.1: Upgrading to MiddleVR 1.6

From MiddleVR 1.4

The applications that you built using MiddleVR 1.4 will work without any modification with MiddleVR 1.6.

If you want to benefit from the new features (new interactions etc), you will need to upgrade the MiddleVR Unity package.

From 1.0 or 1.2

You will need to upgrade the MiddleVR Unity package for each of your applications and re-export them.

17.2 17.2: Version 1.6.2 changelog

17.3 17.3: Version 1.6.1 changelog

17.4 17.4: Version 1.6 changelog

This version requires new licenses! If your maintenance contract is valid, you can receive the updated license for free.

17.5 17.5: Upgrading to MiddleVR 1.4 from 1.0 or 1.2

17.6 17.6: Version 1.4 changelog

17.7 17.7: Upgrading from 1.0 to 1.2

17.8 17.8: Version 1.2.2 changelog

17.9 17.9: Version 1.2.1 changelog

17.10 17.10: Version 1.2 changelog

This new version contains many improvements in ergonomics. It also includes full cluster support.

17.11 17.11: Version 1.0

28 March 2012: Version 1.0

18 18: Devices constants

18.1 18.1: Keyboard keys

Usage example: keyboard.IsKeyPressed( MiddleVR.VRK_SPACE );
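
For example, a Unity script can poll a key every frame. A minimal sketch, assuming the keyboard device is retrieved with GetKeyboard from the MiddleVR device manager, as done in the MiddleVR sample scripts (check the class reference for the exact call):

using UnityEngine;

public class SpaceKeyWatcher : MonoBehaviour
{
    protected void Update()
    {
        // Retrieve the keyboard device (GetKeyboard is assumed here).
        vrKeyboard keyboard = MiddleVR.VRDeviceMgr.GetKeyboard();

        if (keyboard != null && keyboard.IsKeyPressed(MiddleVR.VRK_SPACE))
        {
            print("Space key is pressed");
        }
    }
}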

VRK_A, VRK_B,... VRK_Z
VRK_0, VRK_1,... VRK_9.
VRK_F1, VRK_F2,..., VRK_F15
VRK_ESCAPE
VRK_MINUS /* - on main keyboard */
VRK_EQUALS
VRK_BACK /* backspace */
VRK_TAB
VRK_LBRACKET
VRK_RBRACKET
VRK_RETURN /* Enter on main keyboard */
VRK_LCONTROL
VRK_SEMICOLON
VRK_APOSTROPHE
VRK_GRAVE /* accent grave */
VRK_LSHIFT
VRK_BACKSLASH
VRK_COMMA
VRK_PERIOD /* . on main keyboard */
VRK_SLASH /* / on main keyboard */
VRK_RSHIFT
VRK_MULTIPLY /* * on numeric keypad */
VRK_LMENU /* left Alt */
VRK_ALTLEFT /* left Alt */
VRK_SPACE
VRK_CAPITAL
VRK_NUMLOCK
VRK_SCROLL /* Scroll Lock */
VRK_NUMPAD0, VRK_NUMPAD1,..., VRK_NUMPAD9
VRK_SUBTRACT /* - on numeric keypad */
VRK_ADD /* + on numeric keypad */
VRK_DECIMAL /* . on numeric keypad */
VRK_OEM_102 /* <> or | on RT 102-key keyboard (Non-U.S.) */
VRK_KANA /* (Japanese keyboard) */
VRK_ABNT_C1 /* /? on Brazilian keyboard */
VRK_CONVERT /* (Japanese keyboard) */
VRK_NOCONVERT /* (Japanese keyboard) */
VRK_YEN /* (Japanese keyboard) */
VRK_ABNT_C2 /* Numpad . on Brazilian keyboard */
VRK_NUMPADEQUALS /* = on numeric keypad (NEC PC98) */
VRK_PREVTRACK /* Previous Track (VRK_CIRCUMFLEX on Japanese keyboard) */
VRK_AT /* (NEC PC98) */
VRK_COLON /* (NEC PC98) */
VRK_UNDERLINE /* (NEC PC98) */
VRK_KANJI /* (Japanese keyboard) */
VRK_STOP /* (NEC PC98) */
VRK_AX /* (Japan AX) */
VRK_UNLABELED /* (J3100) */
VRK_NEXTTRACK /* Next Track */
VRK_NUMPADENTER /* Enter on numeric keypad */
VRK_RCONTROL
VRK_MUTE /* Mute */
VRK_CALCULATOR /* Calculator */
VRK_PLAYPAUSE /* Play / Pause */
VRK_MEDIASTOP /* Media Stop */
VRK_VOLUMEDOWN /* Volume - */
VRK_VOLUMEUP /* Volume + */
VRK_WEBHOME /* Web home */
VRK_NUMPADCOMMA /* , on numeric keypad (NEC PC98) */
VRK_DIVIDE /* / on numeric keypad */
VRK_SYSRQ
VRK_RMENU /* right Alt */
VRK_ALTRIGHT /* right Alt */
VRK_PAUSE /* Pause */
VRK_HOME /* Home on arrow keypad */
VRK_UP /* UpArrow on arrow keypad */
VRK_PRIOR /* PgUp on arrow keypad */
VRK_LEFT /* LeftArrow on arrow keypad */
VRK_RIGHT /* RightArrow on arrow keypad */
VRK_END /* End on arrow keypad */
VRK_DOWN /* DownArrow on arrow keypad */
VRK_NEXT /* PgDn on arrow keypad */
VRK_INSERT /* Insert on arrow keypad */
VRK_DELETE /* Delete on arrow keypad */
VRK_LWIN /* Left Windows key */
VRK_RWIN /* Right Windows key */
VRK_APPS /* AppMenu key */
VRK_POWER /* System Power */
VRK_SLEEP /* System Sleep */
VRK_WAKE /* System Wake */
VRK_WEBSEARCH /* Web Search */
VRK_WEBFAVORITES /* Web Favorites */
VRK_WEBREFRESH /* Web Refresh */
VRK_WEBSTOP /* Web Stop */
VRK_WEBFORWARD /* Web Forward */
VRK_WEBBACK /* Web Back */
VRK_MYCOMPUTER /* My Computer */
VRK_MAIL /* Mail */
VRK_MEDIASELECT /* Media Select */

19 19: Class hierarchy