1: What is MiddleVR?
MiddleVR is a virtual reality (VR) middleware.
It has two main features:
- It simplifies the creation and deployment of VR applications,
- It adapts to many different VR hardware setups and to many different 3D applications.
The Wikipedia definition of middleware is: “Middleware is computer software that connects software components or people and their applications.”
At its core, MiddleVR is a library handling all aspects of a VR application: input devices, stereoscopy, clustering, interactions. It offers C++ and C# APIs (application programming interfaces), and a graphical user interface to configure a VR system.
MiddleVR is generic: it does not depend on any particular 3D engine and it is designed to be used with many different 3D applications.
Each application that wants to take advantage of MiddleVR must be adapted: the host application must be VR-aware to take full advantage of MiddleVR's capabilities. We have designed MiddleVR to be as easy to integrate as possible.
2: Installing MiddleVR
2.1: Requirements
2.1.1: Operating system
MiddleVR requires Windows Vista, 7 or 8, in 32 or 64 bits, with the latest Service Packs.
You also need:
- Microsoft Visual Studio 2012 redistributable package (x86): http://www.microsoft.com/en-us/download/details.aspx?id=30679
- DirectX: http://www.microsoft.com/download/en/details.aspx?id=35
2.1.2: Devices
MiddleVR has native support for the following hardware:
- 3D Connexion SpaceMouse line of products
- A.R.T. DTrack
- GameTrak
- Haption haptic devices. Requires a specific license; contact us for more information.
- InterSense
- Leap Motion. Requires the official Leap Motion driver: https://www.leapmotion.com/setup
- Leap Motion SDK2. Requires the official Leap Motion SDK2 driver: https://developer.leapmotion.com/
- Microsoft Kinect 1. Works with both the Kinect for Xbox 360 and the Kinect for Windows. Requires the official Microsoft Kinect SDK 1.8: http://www.microsoft.com/en-us/kinectforwindows/develop/
- Microsoft Kinect 2. Works with the Kinect 2 for Windows. Requires the official Microsoft Kinect SDK 2: http://www.microsoft.com/en-us/kinectforwindows/develop/
- Motion Analysis trackers. Requires Cortex to be installed on the same machine.
- NaturalPoint OptiTrack
- NaturalPoint TrackIR. Requires the latest version of the TrackIR software to be installed and running: http://www.naturalpoint.com/trackir/06-support/support-download-software-and-manuals.html. Make sure to update the list of supported games: even with the latest software version installed, MiddleVR is not supported unless you do so.
- Oculus Rift DK1
- Oculus Rift DK2
- Oculus Rift CV1. You need to install the latest Oculus Runtime for Windows: https://www.oculus.com/en-us/setup/
- HTC Vive. You need to install Steam and SteamVR: http://store.steampowered.com/about/
- PNI SpacePoint Fusion orientation tracker
- Razer Hydra (both the 3D trackers and joystick axes/buttons). Requires the official Razer Hydra driver: http://www.razersupport.com/index.php?_m=downloads&_a=viewdownload&downloaditemid=624
- VRPN (trackers, analogs and buttons). VRPN is an open-source device server that streams data from many different kinds of hardware; see the full list on the VRPN website: http://www.cs.unc.edu/Research/vrpn/. Typical drivers are Vizard PPT, Polhemus, Ascension Flock of Birds, etc.
- Vuzix trackers
- zSpace
- Keyboard, mouse and joystick support is built in through DirectInput.
If your device is not yet supported, contact us to evaluate integration options.
2.1.3: Unity
MiddleVR is compatible with Unity 4.2 and above, including Unity 5.
In order to use the OpenGL quad-buffer (active stereoscopy) feature, the “Force OpenGL window” mode, or any HMD, you need:
- Unity Pro for Unity 4.x
- No restrictions for Unity 5.x
Note: MiddleVR 1.6 does not support Unity 3.5, 4.0 and 4.1 anymore.
2.1.4: Requirements for using a 3D monitor or 3D projector
MiddleVR supports OpenGL quad-buffer (active stereoscopy) output. Your graphics card must support this 3D mode, which is only available on professional graphics cards such as an NVidia Quadro or an ATI FireGL Pro. See section Stereoscopy.
In order to use the OpenGL quad-buffer (active stereoscopy) feature, the “Force OpenGL window” mode, or any HMD, you need:
- Unity Pro for Unity 4.x
- No restrictions for Unity 5.x
2.1.5: Requirements for using a 3D TV
MiddleVR is compatible with any 3D TV supporting side-by-side 3D input.
Contact us for more information.
2.1.6: Requirements for using a Head-Mounted Display (HMD)
MiddleVR is compatible with any dual-input HMD and any HMD that supports OpenGL quad-buffer stereo or side-by-side stereo.
In order to use any HMD, you need:
- Unity Pro for Unity 4.x
- No restrictions for Unity 5.x
Contact us for more information.
2.1.7: Stereoscopy - S3D
2.1.7.1: Passive stereoscopy
MiddleVR is compatible with any passive stereoscopy system.
2.1.7.2: Active stereoscopy
For active stereoscopy (Quad-Buffer) in Unity you need:
- An NVidia Quadro card with a GPU at least from generation G80
- Recent NVidia drivers ( >= 265 )
- A recent AMD FireGL Pro card.
Compatible cards:
- Quadro FX 2700M
- Quadro FX 3700, FX 3800, FX 4600, FX 4700, FX 4800, FX 5600
- Quadro 2000, 4000, 5000, 6000
- Quadro K420, K620, K2000, K2200, K4000, K4200, K5000, K5200, K6000
- Recent AMD FireGL Pro cards: contact us for more information.
Incompatible cards:
- Quadro FX 1400, 3400, 3450, 4000, 4400, 4500 – GPUs referred to as NV4xGL
- Quadro NVS 285 (NV44)
- Quadro FX 350, 560, 1500, 3500, 4500, 5500 – G7xGL
If your card is not in the list, contact us. (source: http://academic.cleardefinition.com/2011/08/17/nvidia-gpus-and-product-series-cross-reference/)
Contact us for more information.
2.1.8: Other hardware
MiddleVR has been successfully tested with a Matrox DualHead 2GO.
2.2: Installing
Run the MiddleVR installer. The following window will appear:
Check the license agreement and press “INSTALL”.
When installation is done, you can choose to read the “ReadMe” file, directly “Run” MiddleVR Config or close the installer by pressing the “Finish” button:
If a previous installation of MiddleVR is present, you will be asked whether to remove the old MiddleVR license, preferences and logs. Be sure to keep a copy of your license file if you choose to press “Yes”:
Note: You must restart Unity after installing MiddleVR, so it takes the new PATH into account.
Note: You must restart MiddleVR Config and Unity after manually modifying the PATH.
2.3: License
2.3.1: Trial
When you first start MiddleVR, you will see the following screen:
If you don’t have a valid license, you can run the Free version of MiddleVR Config by pressing the “Use ‘Free’ edition” button.
You can get a valid trial license for 30 days by pressing the “Start 30 days ‘Pro’ trial”, “Start 30 days ‘Academic’ trial” or “Start 30 days ‘HMD’ trial” button.
2.3.2: Installing a license automatically with an activation key
After receiving an “activation key” from us, you can automatically download and install a permanent or temporary license by pressing the “Install file license automatically…” button.
Note: Before trying to get a valid license, make sure that you have received an “activation key”.
After pressing the “Activate license automatically…” button, you will see this window:

Enter your activation key and the license should be automatically obtained. Note: If you choose to replace the existing licenses with the new one, the older licenses will be renamed but will still exist.
Do not proceed this way for floating/network licenses; see the dedicated section.
2.3.3: Getting a license file manually
If you get an error, you still have the option of activating your key manually by clicking the “Get file license manually…” button in the “License” menu:

You can then get a valid license file either through the web, or via an automatic e-mail. Choose the option that best suits your situation.
The license file will then need to be opened via the “Open license file…” button as explained in the “Loading the license file” section, except for network licenses.
2.3.3.1: Getting a valid license file via the Web
When you press the “License file via Web” button, you will get the following window:

The “HostID” is a string that uniquely identifies your computer. This string will be used to generate a valid license for this particular computer.
Access the license activation website by pressing the first button or open your favorite web browser to the following address: http://license.middlevr.com/.
You should then enter your activation key:
Copy the HostID to the clipboard and add it to the website.
Once you have entered a valid activation key, you will get access to the following form:
Simply copy the “Ethernet” and “Hard Disk” contents in the relevant text inputs and press the “Activate” button. Once the license has been generated, you can download the license file and store it anywhere.
Note: You can download the license file again from the license website simply by entering the same activation key. You don’t need to enter the HostID again.
Next, read the “Loading the license file” section.
2.3.3.2: License via e-mail
After pressing the “License via e-mail” button you should get the following form:
After entering your activation key, your form should look like this:
The content of the text-box is automatically filled with the content of the e-mail that you should send to get your license file.
After pressing the “Send by e-mail” button, if you have correctly configured your default e-mail application, MiddleVR should create an e-mail that you just have to send:
You should soon after receive an e-mail containing your license. Store the attached license anywhere.
You can also manually send an e-mail, from a webmail for example. Simply copy the content of the text-box by pressing the “Only copy to clipboard” button and paste the text in the body of the e-mail. The subject of the e-mail does not matter. The e-mail recipient should be: .
2.3.3.3: Loading the license file
Once you have stored your license file, you can load it via the License > Open license file... menu.
Locate the license file that you’ve just downloaded. You should get the following message:
At this point you can remove or back up the downloaded license file:
MiddleVR will automatically copy this file to the %appdata%/MiddleVR folder, typically: C:\Users\<current user>\AppData\Roaming\MiddleVR\.
Note: If you log in as a different user, MiddleVR will not find the license. You can go to the License > Open license file... menu and load the license file again, or manually copy it to the %appdata%/MiddleVR folder corresponding to the current user.
Note: Do not proceed this way for floating/network licenses; see the dedicated section.
2.3.4: Network licenses
2.3.4.1: License server
When dealing with a floating/network license, do not load the license file into the configuration editor. The following steps describe how to install a license server using the LM-X End-user Tools (Windows only).
Run lmx-enduser-tools_4.8.10_win64_x64.msi from the bin64/licensing folder of the MiddleVR installation path. During the installation process, make sure that “Install LM-X server” is checked.
Browse to find our liblmxvendor.dll in the MiddleVR installation path, under bin64/licensing (64-bit version). Keep “Install LM-X license server as a service” checked.
After the installation, open the configuration file lmx-serv.cfg from the LM-X tool installation folder (for example “C:\Program Files\X-Formation\LM-X End-user Tools 4.8.10 x64\lmx-serv.cfg”).
Set the value LICENSE_FILE to the path leading to your floating/network license file.
You can change the TCP_LISTEN_PORT if needed.
Make sure to open this port (TCP and UDP) in your firewall, both on the license server machine and the client machines.
If there are any problems running the license server, the log file will indicate the cause. The log file is located at the path set in the LOG_FILE configuration value; if you cannot find it, check that this path is correct.
You can also manage the license server through a web GUI: open a web browser at the license server machine's address and port, for example: http://localhost:6200/.
The administrator password is the one defined in the license server configuration file at REMOTE_ACCESS_PASSWORD.
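Putting the values above together, a minimal lmx-serv.cfg might look like the following sketch; the paths, port and password are placeholder examples, not values shipped with MiddleVR:

```
# Hypothetical lmx-serv.cfg fragment; all values are example placeholders.
LICENSE_FILE = C:\Licenses\middlevr-network.lic
TCP_LISTEN_PORT = 6200
LOG_FILE = C:\Licenses\lmx-serv.log
REMOTE_ACCESS_PASSWORD = mypassword
```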
2.3.4.2: Configuring clients
On the client machines, you can either:
- use the license configuration tool lmxconfigtool.exe from the MiddleVR folder in the Start menu. In the Client Application License Path tab, specify the network license server in the form 6200@192.168.1.1. You can also use a network name instead of an IP address;
- or set the LMX_LICENSE_PATH environment variable to the network license server, in the same form.
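For example, assuming a license server listening on port 6200 of the machine 192.168.1.1 (the placeholder values used above), the variable would be set to:

```
LMX_LICENSE_PATH=6200@192.168.1.1
```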
2.3.4.3: USB Dongle licenses
You can also use a license that is linked to a USB Dongle.
Here are the instructions:
- Install the LM-X End-user Tools provided with the MiddleVR installation: MiddleVR/bin64/licensing/lmx-enduser-tools_4.8.10_win64_x64.msi.
- You might also need to install this package provided with the MiddleVR installation: MiddleVR/bin64/HASPUserSetup_7.51.exe.
- Make sure to load the license file provided by MiddleVR on all computers using the hardware dongle.
2.3.5: Storing your license in a different folder
You can store your license in a different folder and set the LMX_LICENSE_PATH environment variable to the full path of the license.
3: Tutorials
3.1: Tutorial - Running MiddleVR demo application “Shadow”
In this tutorial, you will:
- Test MiddleVR with a fun demo application
3.1.1: Requirements
- MiddleVR 1.6
- A mouse with three buttons
3.1.2: Download and run the “Shadow” demo
The “Shadow” demo can be downloaded from our website: http://www.middlevr.com/demos.
Unzip the package and run the Shadow.exe application.
If MiddleVR is successfully installed, you will be able to navigate in the scene:
- Press the middle mouse button and move the mouse to go forward/backward and to turn left/right.
- Pressing Alt will allow you to go up or down and sidestep.
If you can’t navigate, please see the Troubleshooting installation section of our Knowledge base or contact our support team.
You can run this application in any VR system. See this tutorial.
If a Wand is correctly configured, you can grab the objects and move them around, interact with the YouTube video and the webpage in the TV on the right, and navigate to the kitchen or bathroom.
3.2: Tutorial - Using MiddleVR in Unity
In this tutorial you will learn:
- How to add MiddleVR to your Unity project
- How to export your application
- How to run your application through the MiddleVR configuration tool.
3.2.1: Requirements
- MiddleVR 1.6
- Unity 4.2 or above
- A mouse with three buttons
3.2.2: Add MiddleVR to your Unity project
3.2.2.1: Import the MiddleVR package
(If you are upgrading an old Unity project, make sure to read this article: Upgrade MiddleVR Unity Package.)
MiddleVR is split in two parts:
- The generic part, which can be used with different 3D engines. It is typically installed on your system in C:\Program Files (x86)\MiddleVR\bin and contains all the MiddleVR DLLs and the device drivers.
- The 3D-engine-specific part, which links the 3D engine to the generic part. It has to be added to your Unity project and contains all the scripts and plugins that connect your project to the generic part of MiddleVR to drive the cameras and 3D nodes.
To import the MiddleVR package, open the Asset menu, then Import package and Custom package:
You will find the MiddleVR UnityPackage in the data folder of your MiddleVR installation, typically C:\Program Files (x86)\MiddleVR\data:
Open the MiddleVR.unitypackage file.
This will open a new Unity window:
Simply click “Import”.
The package will then be imported, and two folders are added: MiddleVR and Plugins:
3.2.2.2: Add the VR manager to your scene
Importing the package is not sufficient. You also need to add an important component to your project that will manage all the VR aspects: the VR manager.
Open the MiddleVR folder in the Project tab.
Drag and drop the VRManager prefab to the Hierarchy tab of your project:
3.2.2.3: Navigate in the scene
After pressing the Unity play button, you can now navigate in the scene with your mouse:
- Press the middle mouse button and move the mouse to go forward/backward and to turn left/right.
- Pressing Alt will allow you to go up or down and sidestep.
Note: This won’t work if you have ‘Maximize on Play’ activated. See this article in the knowledge base.
3.2.2.4: Export your application
In Unity, open the menu File > Build Settings...

Make sure that the Platform is PC, Mac & Linux Standalone, then set the Target Platform to Windows, with x86 or x86_64 depending on your needs.
Press Build and choose a location for your application.
3.2.2.5: Run your application
There are two ways to run your application:
- Manually execute the .exe file that was created: this will use the VR configuration that was specified in the VRManager. The default configuration only allows you to navigate with the mouse.
- Run the .exe file through the MiddleVR configurator: this allows you to select the VR system you want to use at runtime. You can change the VR system without changing the application.
3.2.2.5.1: MiddleVR configurator
The MiddleVR configuration tool allows you to create the configuration for any VR system.
It also allows you to manage and run all your VR applications in the Simulations tab.


You can add your application by clicking the + button, or by dragging and dropping the .exe file.
There are a lot of predefined configurations that you can use; make sure to explore them all.
The default configuration that allows you to navigate with the mouse can be found in Misc/Default.vrx.
Select your application in the list on the left, then select any configuration on the right.
Pressing the Run button will run your application with the selected configuration by executing the Current command line.
3.3: Tutorial - Interacting with the native Unity GUI with a Wand
In this tutorial you will learn:
- How to interact with the native GUI from Unity using a Wand.
3.3.1: Requirements
- MiddleVR 1.7
- Unity 5.3 or above
- A Wand
3.3.2: Modify an already created GUI
To do so, select the Canvas you want your Wand to interact with and make sure that its render mode is set to “World Space”.
Then, add the “VRCanvas” script to the Canvas.
Note: Please make sure that a Unity “EventSystem” object is present in the scene.
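If you prefer doing this from code rather than from the Inspector, the same setup can be sketched in C#. This assumes the VRCanvas component from the MiddleVR Unity package; the class name is our own choice:

```csharp
using UnityEngine;

// Hedged sketch: makes an existing GUI Canvas Wand-interactive at startup.
public class MakeCanvasWandInteractive : MonoBehaviour
{
    protected void Start()
    {
        // The Wand can only interact with world-space canvases.
        GetComponent<Canvas>().renderMode = RenderMode.WorldSpace;

        // Add the MiddleVR script that routes Wand events to this canvas.
        // Remember that a Unity "EventSystem" must be present in the scene.
        gameObject.AddComponent<VRCanvas>();
    }
}
```

Attach this component to the Canvas GameObject itself, since it looks up the Canvas on the same object.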
3.4: Tutorial - Using the Oculus Rift
In this tutorial you will learn:
- How to create an application for the Oculus Rift in Unity with MiddleVR,
- How to select a predefined configuration for the Oculus Rift,
- How to export your application,
- How to prepare your Windows desktop for the Oculus Rift,
- How to run your application.
3.4.1: Requirements
- MiddleVR 1.6
- Unity 4.2 or above
- An Oculus Rift DK2 or above
3.4.2: Add MiddleVR to your Unity project
Start by adding MiddleVR to your Unity project as described in a previous tutorial.
Note: You should not import the official Oculus Rift Unity package.
3.4.3: Change the configuration file
The next step is to specify a different configuration file to the VR Manager. The default configuration simply allows you to navigate in the scene with the mouse.
MiddleVR comes with a lot of predefined configurations for multiple VR systems (Oculus Rift, Microsoft Kinect, Leap Motion, zSpace, immersive cubes, 3DTVs, etc.).
You can also create your own VR system configuration.
Here we will simply use the predefined configuration for the Oculus Rift.
Click on the VRManager in your Unity project to display its information in the Unity inspector:
Notice that the configuration file is currently set to
C:/Program Files (x86)/MiddleVR/data/Config/Misc/Default.vrx
All predefined configurations are in the folder:
C:/Program Files (x86)/MiddleVR/data/Config/
Now click the Pick configuration file button:
Go into the HMD folder and select the configuration matching your hardware: HMD-Oculus-Rift-CV1.vrx if you have an Oculus Rift CV1, HMD-Oculus-Rift-DK2.vrx if you have an Oculus Rift DK2, etc.
Press play: you should be able to look around using your Oculus Rift DK2!
If you want to play your application after you have built/exported it, make sure to configure your Windows desktop as described in the section “Oculus Rift DK2”.
3.5: Tutorial - Create a basic configuration
In this tutorial you will learn:
- How to create a simple MiddleVR configuration
- How to simulate a 3D tracker with the mouse
- How to move a camera with a tracker
- How to use the MiddleVR package in Unity
3.5.1: Requirements
- MiddleVR 1.6
- Unity 4.2 or above
- A mouse with three buttons
3.5.2: Creating a mouse-simulated 3D tracker
We will start by creating a fake 3D tracker. This 3D tracker will be simulated by a mouse with three buttons. Later you will be able to replace this fake 3D tracker with an actual 3D tracker.
We will then specify that this 3D simulated tracker, representing a position and orientation in space, will move a 3D camera.
To create this mouse-simulated tracker, go into the Devices window, press the ‘+’ button to add a device, and select the “Tracker Simulator - Mouse” device in the 3D Trackers section.
Moving the simulated tracker
As specified in the Help section of the Mouse Tracker Simulator, the virtual tracker is moved by pressing the middle mouse button.
If you go forward or backward, you’re moving the tracker forward or backward. You will see the Y value of the tracker increase or decrease.
If you move the mouse left or right you will rotate the tracker left or right. You will see the Yaw value change accordingly.
You can reset its values by pressing both the left and right button at the same time.
3.5.3: Moving the camera with the 3D tracker
Then go to the 3D nodes window. There you can see that a predefined user-description has been created:
To specify that you want to animate the HeadNode with the fake tracker, simply click on the HeadNode in the hierarchy to display its properties. In the Tracker property select the MouseTracker.
This simply assigns the MouseTracker to this 3D node. Now the HeadNode will follow the 3D tracker!
Try moving the HeadNode by pressing the middle mouse button, or Ctrl on your keyboard, and moving the mouse.
You can translate up, down, left and right by adding the Alt key.
Note: You can reset the mouse tracker values by pressing both left and right mouse buttons at the same time.
Save the configuration file and make sure you remember the full path.
3.5.4: Testing in Unity
Start by importing the MiddleVR Unity package in your Unity project as seen in a previous tutorial.
Then specify the full path to the configuration file you’ve created:
Press Play!
You will notice that the hierarchy that was described in your configuration is automatically re-created in Unity:
You should be able to move the camera around by pressing the middle button of your mouse.
Note: If the viewport you have defined in MiddleVR and the viewport in the Unity editor don’t have the same aspect ratio, the view will appear distorted. As soon as you run your application with the standalone player the view will have the right aspect ratio.
Note: The TrackerSimulatorMouse might get stuck if you’re using the “Maximize on Play” option from Unity’s game viewport.
3.5.5: Have fun!
Now you can go back in the configuration tool and modify the hierarchy, add cameras, change the viewports layout. Save this description and simply press play again in Unity. MiddleVR will automatically reconfigure your application to match your configuration.
3.6: Tutorial - Using and extending MiddleVR VR menu
In this section you will learn:
- How to use MiddleVR immersive menu
- How to add your own menu items
3.6.1: Requirements
- MiddleVR 1.6
- Unity 4.2 or above
- A Wand
3.6.2: Introduction
MiddleVR offers an immersive menu that you can customize to include your own menu items. The default menu allows you to change the navigation scheme, the manipulation scheme and various other options. See section VR menu.
The menu can contain many different types of entries; see section VR widgets.
By default you activate the menu by pressing button 3 of your Wand. This can be changed on the VRMenu GameObject:
You interact with the menu by pressing button 0 of your Wand.
You can deactivate the menu by disabling the option “Use default menu” in the VRManager options:
3.6.3: Extending the menu
3.6.3.1: Add a command
The easiest way to start extending the menu is to add the MiddleVR/Scripts/Samples/GUI/VRCustomizeDefaultMenu component to any Unity GameObject. It adds a simple command to the menu: when the menu item is clicked, it will display “My menu item has been clicked” in Unity console.
Here is how it works:
The first thing to do is to create a method that will be called when the item is clicked:
[VRCommand]
void MyItemCommandHandler()
{
    print("My menu item has been clicked");
}
Then we must register the vrCommands defined in the MonoBehaviour script:
MVRTools.RegisterCommands(this);
After that we must get a reference to the existing menu:
VRMenu MiddleVRMenu = null;

while (MiddleVRMenu == null || MiddleVRMenu.menu == null)
{
    // Wait for VRMenu to be created
    yield return null;
    MiddleVRMenu = FindObjectOfType(typeof(VRMenu)) as VRMenu;
}
The menu will then be passed to the constructor of a widget called vrWidgetButton. This widget will simply call the method held by the vrCommand when it is clicked. The third argument is the label of the button:
vrWidgetButton button = new vrWidgetButton("", MiddleVRMenu.menu, "My Menu Item", MVRTools.GetCommand("MyItemCommandHandler"));
// By default the widget is added at the end of the menu
// We can position it at the top by using SetChildIndex:
MiddleVRMenu.menu.SetChildIndex(button, 0);
Then we register the created button so that the garbage collector does not collect it too soon. The object will be disposed when the GameObject is destroyed:
MVRTools.RegisterObject(this, button);
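The fragments above can be combined into a single MonoBehaviour. This is a sketch assuming the MiddleVR Unity package types used in this section (VRMenu, vrWidgetButton, MVRTools and the VRCommand attribute) are available in the project; the class name is our own choice:

```csharp
using System.Collections;
using UnityEngine;

// Hedged sketch combining the steps of this section into one script.
public class MyCustomMenuItem : MonoBehaviour
{
    [VRCommand]
    private void MyItemCommandHandler()
    {
        print("My menu item has been clicked");
    }

    // Start can be a coroutine, which lets us wait for the VRMenu.
    protected IEnumerator Start()
    {
        // Register the [VRCommand] methods of this script.
        MVRTools.RegisterCommands(this);

        VRMenu MiddleVRMenu = null;

        while (MiddleVRMenu == null || MiddleVRMenu.menu == null)
        {
            // Wait for VRMenu to be created.
            yield return null;
            MiddleVRMenu = FindObjectOfType(typeof(VRMenu)) as VRMenu;
        }

        vrWidgetButton button = new vrWidgetButton(
            "", MiddleVRMenu.menu, "My Menu Item",
            MVRTools.GetCommand("MyItemCommandHandler"));

        // Place the new item first, then keep a reference so it is not
        // garbage-collected before the GameObject is destroyed.
        MiddleVRMenu.menu.SetChildIndex(button, 0);
        MVRTools.RegisterObject(this, button);
    }
}
```

Attach this component to any GameObject in the scene, as with the VRCustomizeDefaultMenu sample.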
Now when you activate the menu you will see your item at the top of your menu:
3.6.4: Going further
You can also move or remove existing items or create sub-menus, create a menu from scratch, or create a graphical user interface in HTML5.
See sections “VR menu”, “Custom VR menu” and tutorial “Creating a graphical user interface in HTML5”.
3.7: Tutorial - Creating a graphical user interface in HTML5
In this section you will learn:
- How to create a graphical user interface using HTML5 / CSS3 / JavaScript
- How to communicate from the GUI in JavaScript to Unity / C#
- How to communicate from Unity / C# to the GUI’s JavaScript.
3.7.1: Requirements
- MiddleVR 1.6
- Unity 4.2 or above
- A Wand
3.7.2: Introduction
Using the HTML5 / CSS3 / JavaScript standards, you can create great graphical user interfaces. You can use any template that you find on the internet, or that any web designer can create. Your application can now use GUIs with buttons, sliders, tabs, fancy animations, etc.
Here are a few great examples of what you can achieve:


You can buy the source of this GUI here: http://www.cssflow.com/ui-kits/clarity-ios7.

This website contains a lot of great samples:
http://www.multyshades.com/2012/03/45-best-ui-web-elements-with-source-files/.
The Zebra-UI, an open-source HTML5 toolkit, is also a great way to achieve even more complex GUIs using HTML5:

3.7.3: Creating a simple GUI
3.7.3.1: HTML GUI
We will start with a very simple GUI:

This is a sample that you can find in C:\Program Files (x86)\MiddleVR\data\GUI\HTMLBasicSample\index.html:
<html>
  <head>
    <script>
      function OnClick()
      {
        MiddleVR.Call("MyVRCommand");
      }

      function AddResult(text)
      {
        document.getElementById('result').innerHTML += text + '<br>';
      }
    </script>
  </head>
  <body style="background-color: white;">
    <button onclick="OnClick()">Click Me!</button>
    <p>Result:</p>
    <div id="result"></div>
  </body>
</html>
You can see:
- two JavaScript functions:
  - OnClick: calls a MiddleVR command; this communicates from JavaScript to MiddleVR / Unity / C#,
  - AddResult: modifies the result HTML element below; this is the function that we want to call from Unity / C#,
- one button that calls the OnClick function,
- one result element that is modified by the AddResult function.
3.7.3.2: C# code
The JavaScript in your HTML webpage needs to transmit information to your C# code in Unity, so that your application can react to events triggered when the user interacts with the webpage.
Here is a C# sample reacting to a JavaScript call. You can find it in the MiddleVR Unity package here: MiddleVR/Scripts/Samples/GUI/VRGUIHTMLBasicSample:
public class VRGUIHTMLBasicSample : MonoBehaviour
{
    [VRCommand]
    public void MyVRCommand()
    {
        print("HTML Button was clicked");
        CallJavascript();
    }

    protected void Start()
    {
        MVRTools.RegisterCommands(this);
    }

    protected void CallJavascript()
    {
        vrWebView webView = GetComponent<VRWebView>().webView;
        webView.ExecuteJavascript("AddResult('Button was clicked!')");
    }
}
3.7.4: Adding the HTML files to your project
There are two things to do to embed the HTML files in your project:
- copy the HTML files and associated JavaScript / CSS files into your project (section “Web assets” has more information on this process),
- add a VRWebView to your project to display the webpage.
3.7.4.1: Copy the HTML files
Copy the “C:\Program Files (x86)\MiddleVR\data\GUI\HTMLBasicSample\” folder, which contains the HTML file, into your project’s folder “Assets/.WebAssets”. You should now have an “Assets/.WebAssets/HTMLBasicSample” folder.
Note: To create a “.WebAssets” folder from the Windows File Explorer, you will have to type “.WebAssets.”. The additional dot at the end is necessary, and will be removed by the File Explorer.
3.7.4.2: Add a VRWebView
A VRWebView is a component that will actually display the webpage in your virtual world. See section VRWebView for more information.
The simplest way to add a VRWebView is to drag the prefab called VRGUIHTMLBasicSample3D from “MiddleVR/Scripts/Samples/GUI”.
Notice the VRWebView script attached. The URL is currently set to the original HTML file. Modify it to point to “.WebAssets/HTMLBasicSample/index.html”.
Also notice the VRGUIHTMLBasicSample script. That’s the C# script described above that will communicate to and from the webpage’s JavaScript.
Press Play:
The webpage is displayed and when you click on the “Click Me!” button with the wand, the Unity console displays:
"HTML Button was clicked"
If you uncomment the CallJavascript call in the VRGUIHTMLBasicSample C# script, it will also modify the “Result:” element in the webpage.
3.7.5: Communication from JavaScript to C#
Communicating from JavaScript to C# is very easy: simply use the MiddleVR.Call function. For example:
MiddleVR.Call("MyVRCommand");
will call the C# method marked with the [VRCommand] attribute and registered as a vrCommand by:
MVRTools.RegisterCommands(this);
Note: You can pass arbitrary arguments from JavaScript to C#. See section MiddleVR.Call.
3.7.6: Communication from C# to JavaScript
Communicating from C# to JavaScript is easy: simply use the webView.ExecuteJavascript function after retrieving the web view:
vrWebView webView = GetComponent<VRWebView>().webView;
webView.ExecuteJavascript("AddResult('Button was clicked!')");
Note: You can pass arbitrary arguments from C# to JavaScript. See section vrWebView.ExecuteJavascript.
3.7.7: Going further
There is another HTML sample using jQuery for tabs, sliders and buttons in “C:\Program Files (x86)\MiddleVR\data\GUI\HTMLJQuerySample\”.
The corresponding C# can be found in: “MiddleVR/Scripts/Samples/GUI/VRGUIHTMLJQuerySample”.
Then make sure to read the following section: Graphical User Interfaces.
3.8 3.8: Tutorial - Creating a multi-user application
In this section you will learn:
- How to create a simple multi-user application
- How to synchronize the position and orientation of a 3D object on all network clients
- How to execute a simple remote function call on all network clients
3.8.1 3.8.1: Requirements
- A basic understanding of MiddleVR (having completed tutorials 1 to 4)
- MiddleVR 1.7
- Unity 5.3 or above
- A Wand
- If possible two (or more) computers
3.8.2 3.8.2: Introduction
MiddleVR’s multi-user networking feature allows you to easily create powerful multi-user applications that work on the hardware supported by MiddleVR, including clusters. Its API is based on Unity’s Networking.
You can find a sample of this tutorial on the MiddleVR website: VRShadow MultiUser Tutorial. Feel free to modify it to suit your needs!
3.8.3 3.8.3: Run the sample
First, make sure ports 7777 and 7778 are open if you have a firewall.
On the first PC, run the sample from MiddleVR Configuration with the following custom argument: --mvr-start-host (please check the previous tutorials on how to run a VR application from MiddleVR Configuration).
Note: You can also omit this custom argument and start the network host from the MiddleVR menu by selecting the first option.

On the second PC, run the sample with the following custom argument: --mvr-client-connect=ADDRESS_OF_FIRST_PC

You should now be able to see the other person’s head and hand, and manipulate the white cube. If you have microphones and speakers you should also be able to communicate via the voice chat.
3.8.4 3.8.4: Create your own networked application
3.8.4.1 3.8.4.1: Step 1. Networking Setup
For this section of the tutorial, you can simply use the ShadowScene_Base scene in the sample Unity project. Each step has its own scene in the tutorial.
Make sure to import the MiddleVR package and add the VRManager to your application.
Then add the VRDefaultNetworkManager prefab to your scene (located in the MiddleVR/Scripts/Networking folder).

This network manager object comes pre-configured with:
- A default player with scripts handling interactions and avatar spawning
- The default head and hand prefabs registered
- Additional items for MiddleVR’s immersive menu
- A voice chat (if you have a microphone)
Build your application, then run it once as a host and once as a client like in the section above. You should be able to see other users move around.

3.8.5 3.8.5: Step 2. Adding a networked object
First we will simply synchronize the position and orientation of a simple GameObject:
Add a Cube to the scene.
Add a “NetworkTransform” component to the Cube to synchronize its position and orientation. Note: Networked objects are only enabled once the application is connected to a server.

Also add a MiddleVR “Actor” component so that you can manipulate the object with the Wand.
Run your application in a networked setup. When you move the cube the other connected users will see the cube move.

3.8.5.1 3.8.5.1: Step 3. Changing the color of an object with a remote function call (RPC)
3.8.5.1.1 3.8.5.1.1: What is an RPC
An RPC (remote procedure/function call) allows a networked application to call a function on another computer.
In the case of UNet (Unity’s networking module since Unity 5) and MiddleVR Networking, a client can only call a function on the server, not another client directly. To communicate between clients, you must call a function on the server (in Unity terms, this is called a “Command”), which in turn will call a function on any other client (which is called a “ClientRpc”).
The RPC is linked to a particular instance of an object. This brings another question: How does the network know an instance is the same on different computers?
This is solved through the “Network Identity” component. When added to an object, it gives the object a unique Id number that is the same on all the computers. For example when we created the cube, the “Network Identity” component was automatically added on the object when we added the “Network Transform” component. Thus the object now has a unique “identity card” so we can match the object on all instances of the application, whatever computer they run on.
3.8.5.1.2 3.8.5.1.2: RPC Example
We will change the cube object so that it turns red when interacting with it instead of moving.
First, uncheck the “Grabable” property of the “VR Actor” component of the cube.
Then, we will have to create two scripts.
To understand these scripts, it is important to note that a network client MUST have authority on an object to call a server command on it. By default, a network client only has authority on its local player. This is why, to keep this example simple, we will put the networking code in a script attached to the player object.
The first script is TurnRedAction.cs and must be added to the cube object. The TurnRedAction class handles the interaction between our wand and the object.
using System;
using UnityEngine;

public class TurnRedAction : MonoBehaviour
{
    private void VRAction()
    {
        // List all TurnRed scripts
        var allTurnRedScripts = FindObjectsOfType<TurnRed>();

        // Find the TurnRed script that is on our local player object
        // (upon which we have network authority)
        var myTurnRedScript = Array.Find(allTurnRedScripts, p => p.isLocalPlayer);

        if (myTurnRedScript != null)
        {
            // Call a server command from our player object
            myTurnRedScript.CmdTurnRed(gameObject);
        }
    }
}

The second script is TurnRed.cs and must be added to the player prefab (in our case, the VRDefaultNetworkPlayer prefab located in the MiddleVR/Scripts/Networking folder).
The TurnRed class handles the network logic to turn the cube red on all clients.
using UnityEngine;
using UnityEngine.Networking;

public class TurnRed : NetworkBehaviour
{
    // This is executed asynchronously on the player instance of the server
    // and can be called only if this script is attached to an object with
    // local authority
    [Command]
    public void CmdTurnRed(GameObject go)
    {
        RpcTurnRed(go);
    }

    // This is executed asynchronously on the player instance of ALL clients
    // and can only be called from the server.
    [ClientRpc]
    private void RpcTurnRed(GameObject go)
    {
        go.GetComponent<Renderer>().material.color = Color.red;
    }
}

Run your application. You should see the cube turn red on all clients when you interact with it.

3.8.5.2 3.8.5.2: Step 4. Changing the default avatar
In this section you will learn how to change the default avatar.
Create two prefabs:
- A Cube named “NetworkHandCube” with a position of (0,0,0) and a scale of (0.1,0.1,0.1). Remove its collider. Add the script “VRNetworkLocalObject” (found in the MiddleVR/Scripts/Networking folder) to it.
- A Sphere named “NetworkHeadSphere” with a position of (0,0,0) and a scale of (0.2,0.2,0.2). Remove its collider. Add the script “VRNetworkLocalObject” to it.


Note: It is important that the position of these prefabs stays at (0,0,0) to avoid any offset when running the application.
Open the Player prefab and fill the Head Prefab and Hand Prefab properties with the new prefabs we created.

Open the NetworkManager object in your scene. In the “Registered Spawnable Prefabs” list, add the new prefabs.

Run your application. You should see the new avatars instead of the default ones.

3.9 3.9: Tutorial - What next?
Now you’re ready to learn more about MiddleVR.
At this point, typical questions are:
- How can I navigate in my scene? - Section “Wand interactions”.
- How can I grab objects? - Section “Wand interactions”.
- How can I interact with objects with the Wand? - Section “Wand interactions”.
- I want my object to react when I click on it with the Wand Ray - Section “VRActor”.
- How can I have my own objects move with VR trackers? - Section “How to attach your nodes in the VR hierarchy”.
- How can I choose the parent of the VR hierarchy? - Section “VR Manager options”.
- How do I program interactions? - Section “Programming interactions”.
- Things don’t work as I want, how can I troubleshoot the issues? - Section “Troubleshooting”.
- What should I know about MiddleVR before building a standalone player? - Section “Exporting to a standalone player”.
4 4: Basic concepts
4.1 4.1: Configuration and workflow
A typical workflow to use MiddleVR is first to create a description of your VR system.
MiddleVR will then use this description to configure the 3D application to match this description.
This description is called a Configuration and is stored by MiddleVR as an XML file with the .vrx extension.
MiddleVR will also provide access to the data of all the devices that you specified (3D trackers data, button states, joystick axis) thanks to its application programming interface (API).
The description includes:
- the devices that are used by your VR system
- a description of how those devices interact with the real world: for example, specifying that tracker A is tracking the head of the user whereas tracker B is tracking the user’s hand
- a description of where the physical screens are positioned in the real world
- a description of where to display the rendering of which camera.
This description is stored as a VRX (VR XML) configuration file.
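As a rough illustration of what such a file contains, here is a hypothetical VRX sketch. The element and attribute names below are illustrative only; open any of the predefined configurations shipped in “data/Config” to see the actual schema used by your MiddleVR version:

```xml
<!-- Illustrative sketch only: element and attribute names are hypothetical.
     See the files in data/Config for the real VRX schema. -->
<MiddleVR>
  <Devices>
    <Driver Type="DirectInput" />
    <Tracker Name="HeadTracker" />
  </Devices>
  <Nodes3D>
    <Node3D Name="HeadNode" Tracker="HeadTracker" />
    <CameraStereo Name="CameraStereo0" Parent="HeadNode" />
    <Screen Name="Screen0" Width="3.0" Height="2.25" />
  </Nodes3D>
  <Viewports>
    <Viewport Camera="CameraStereo0" Left="0" Top="0" Width="1920" Height="1080" />
  </Viewports>
</MiddleVR>
```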
4.2 4.2: Examples of predefined configurations
MiddleVR ships with multiple configuration examples located in the “data/Config” folder.
Here is the complete list of predefined configurations:
- Cluster:
- VirtualCluster: Perfect to test your cluster application on one computer
- VirtualClusterStereo: Same as VirtualCluster but with active stereoscopy
- Cube: Predefined configurations for immersive cubes, CAVEs(tm), etc.
- Cube-5-Sides: A complete configuration for a 5-sided immersive cube and 5 computers in cluster
- Cube-5-Sides-Flatten: A debug configuration where all viewports are displayed on one computer
- Cube-5-Sides-Flatten-VirtualCluster: A debug configuration where all viewports are displayed on one computer, but each viewport is a separate cluster player
- HMD: Predefined configurations for head-mounted-displays (HMDs)
- HMD-NVis-SX-60: NVisor SX 60
- HMD-NVis-SX-111: NVisor SX 111
- HMD-HTC-Vive: HTC Vive
- HMD-Oculus-Rift-DK1: Oculus Rift DK1
- HMD-Oculus-Rift-DK1-Razer-Hydra: Oculus Rift DK1 + Razer Hydra for hand tracking and navigation
- HMD-Oculus-Rift-DK2: Oculus Rift DK2
- HMD-Oculus-Rift-DK2-Razer-Hydra: Oculus Rift DK2 + Razer Hydra for hand tracking and navigation
- HMD-Oculus-Rift-LeapMotion-SDK2: Oculus Rift DK2 + LeapMotion (SDK2) for hand and fingers tracking
- HMD-Oculus-Rift-CV1: Oculus Rift CV1
- HMD-Oculus-Rift-CV1-Razer-Hydra: Oculus Rift CV1 + Razer Hydra for hand tracking and navigation
- HMD-Sensics-zSight-60: Sensics zSight 60
- HMD-Sony-HMZ-T1: Sony HMZ-T1 in compressed side-by-side stereoscopy
- HMD-Sony-HMZ-T1-Full720p: Sony HMZ-T1 with full 720p resolution for each eye
- HMD-Vuzix-VR920-Mono: Vuzix VR-920 in monoscopy
- HMD-Vuzix-Wrap-1200-VR: Vuzix Wrap 1200VR
- Misc: Configurations that don’t fit in other categories
- AdvancedCalibration: Contains a more complex hierarchy for more finetuned calibration of eye and hand offsets
- Default: The default configuration. A simple viewport and mouse navigation
- HoloStage: A holostage has one wall in the front and one floor
- Kinect1: Configuration for full body tracking with the Kinect V1
- Kinect2: Configuration for full body tracking with the Kinect V2
- LeapMotion: Configuration for one hand tracking with Leap Motion SDK1
- LeapMotion-SDK2: Configuration for up to four hands tracking with Leap Motion SDK2
- RazerHydraHeadHand: One Razer Hydra controller tracks the head, the other one tracks one hand
- Wall: A simple stereoscopic wall
- zSpace: The zSpace
- Stereo:
- SimpleStereoActive: A single active stereo viewport
- SimpleStereoPassive: A single passive stereo viewport
- TV3D:
- TV3D-32inch-82cm: An 82cm / 32-inch 3D TV
- TV3D-46inch-117cm: A 117cm / 46-inch 3D TV
4.3 4.3: Portability - Create once, presence everywhere!
One goal of MiddleVR is to help you deploy your application on many different VR systems. Wikipedia defines portability as “the software codebase feature to be able to reuse the existing code instead of creating new code when moving software from an environment to another.”
MiddleVR also has the ability to bring VR capabilities to your 3D application, increasing the number of tools that you can use on your VR system:
If your VR system is modified, you only have to modify its description in the configuration file, and all the applications using MiddleVR will automatically be reconfigured to adapt to your changes.
4.4 4.4: Drivers and devices
Devices are managed by drivers. Each driver can create several devices. For example, the driver responsible for handling basic devices uses Microsoft DirectInput to create devices such as a keyboard, a mouse or joysticks.
Below are the different devices currently supported by MiddleVR.
4.5 4.5: Trackers
The goal of a tracking device is to give information to the computer about the position and orientation of a tracked object/human in space.
A VR system typically needs to know where the hand or the head of the user is. A tracking system can also report the position of arbitrary objects, such as a wand or any object whose position is useful for the application.
The trackers hold the position and/or orientation information of a device in space. This information is typically stored as a transformation matrix. The data can also be accessed more simply by asking for a position (a vector of three floats), and an orientation (a quaternion).
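In a Unity script, this data can be read through the C# API. A minimal sketch, assuming a tracker named “VRPNTracker0.Tracker0” exists in the loaded configuration (method names follow the MiddleVR C# API; check the API reference of your version):

```csharp
// Sketch only: assumes a tracker called "VRPNTracker0.Tracker0"
// is defined in the loaded MiddleVR configuration.
using UnityEngine;

public class PrintTrackerData : MonoBehaviour
{
    protected void Update()
    {
        vrTracker tracker = MiddleVR.VRDeviceMgr.GetTracker("VRPNTracker0.Tracker0");
        if (tracker != null)
        {
            vrVec3 position    = tracker.GetPosition();    // three floats
            vrQuat orientation = tracker.GetOrientation(); // quaternion
            Debug.Log("X: " + position.x() + " Y: " + position.y() + " Z: " + position.z());
        }
    }
}
```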
4.5.1 4.5.1: Types of tracking devices
Over time, tracking devices have evolved and gained in precision and usability. Popular tracking techniques include magnetic tracking, optical tracking and inertial tracking.
4.5.1.1 4.5.1.1: Optical trackers
The current trend is to use optical tracking (A.R.T, Vicon, Natural Point, Motion Analysis, IO Tracker, etc.) by putting inexpensive markers on your body and watching them through special video cameras. This technique has the advantage of being wireless and is becoming cheaper and cheaper.
4.5.1.2 4.5.1.2: Inertial trackers
Inertial trackers are also quite popular nowadays, since you can find them in any mobile phone and in a lot of HMDs. An inertial sensor is made up of one or more of the following:
- accelerometer,
- gyroscope,
- magnetometer.
By fusing the information reported by those three sensors, an inertial tracker is able to give accurate and fast information about the current orientation, or even the position in some cases, of an object.
Note that it is not currently possible to obtain correct information about the position of an object using only an inertial sensor.
4.5.1.3 4.5.1.3: Magnetic trackers
The most common trackers used to be magnetic trackers (Polhemus, Ascension), but they require cables (except the new Polhemus Patriot Wireless) and can lose precision as the magnetic field is perturbed by metal.
4.5.2 4.5.2: Tracking data
A tracking device can report:
- a position,
- an orientation,
- accelerations (in translations or rotations)
When comparing trackers, there are several parameters to take into account:
- refresh rate,
- latency,
- precision,
- general usability.
4.5.3 4.5.3: Coordinate systems
There is no accepted norm for how the data from a tracking device is reported.
For example, moving along the real-world up/down axis can be reported as a change on either the Y or the Z axis, with a positive or a negative sign.
4.5.3.1 4.5.3.1: MiddleVR native drivers
All the devices that are integrated natively in MiddleVR report the following:
- Positive X axis: moving to the right,
- Positive Y axis: moving to the front,
- Positive Z axis: moving up.

4.5.3.2 4.5.3.2: Adjusting the coordinate system
Some devices must be configured so that their axes match the definition above, for example A.R.T DTrack or VRPN trackers.
The representation of 3D information in space is not standardized. Sometimes moving a tracker from the user to the screen can be seen as an increase of value on the Z axis, sometimes as a decrease of value on the Y axis. This is dependent on the way the device reports its data and how the driver interprets this data.
The drivers that require a specific coordinate system usually come with a “TrackerCoordinateSystem” property that needs to be set up properly. For VRPN trackers, “TrackerCoordinateSystem” is replaced directly with Right/Front/Up properties.

One easy way to configure the axes of a device is to first add it without modifying the existing Right/Front/Up definition. Then go to the 3D nodes tab and assign the device to any node, for example the “HandNode”.
Now try to move the 3D tracker to the right, to the front (going away from the user towards the screen), and upwards. If the 3D node moves correctly, you’re lucky, your calibration is done.
If this is not the case, you will have to write down some information:
- Start by moving the 3D tracker to the right. By looking at the 3D node’s X, Y and Z values, find out which one of these axes changes by the largest amount. Most of the time, when you’re moving to the right, it’s the X axis that moves the most. Also note whether, when moving to the right, the axis is increasing or decreasing. For example, if the X axis is increasing, write down: Right: +X. If it’s decreasing, write: Right: -X.
- Repeat this step for front (away from the user, towards the screen) and up.
You will end up with something like:
- Right: +X
- Front: +Z
- Up: -Y
Now remove the existing driver and recreate it, specifying the coordinate system with the “TrackerCoordinateSystem” property.
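For reference, the mapping you wrote down is only a relabeling and sign flip of axes, which MiddleVR applies for you once “TrackerCoordinateSystem” is set. As a sketch (this helper is hypothetical, not part of the MiddleVR API), the example above (Right: +X, Front: +Z, Up: -Y) corresponds to:

```csharp
using UnityEngine;

public static class AxisRemapExample
{
    // Converts a raw device position reported as Right:+X, Front:+Z, Up:-Y
    // into MiddleVR's convention (X = right, Y = front, Z = up).
    // Hypothetical helper for illustration; MiddleVR does this internally.
    public static Vector3 DeviceToMiddleVR(Vector3 raw)
    {
        float right = raw.x;   // Right: +X
        float front = raw.z;   // Front: +Z
        float up    = -raw.y;  // Up: -Y
        return new Vector3(right, front, up);
    }
}
```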
4.5.4 4.5.4: Configuring a tracking system
Please read the section “Configuring a tracking system”.
4.6 4.6: Axis
An Axis device typically stores data and events about the axes of a joystick or a mouse, but it can represent any kind of analog information, such as a slider. This information is stored as an array of floats.
4.7 4.7: Buttons
A Buttons device stores button states. This information is stored as an array of booleans.
4.8 4.8: Joystick
The Joystick device is used to store information about joysticks, gamepads, or similar devices. A joystick internally stores its information as both Axis and Buttons types.
4.9 4.9: Keyboard and mouse
MiddleVR can of course handle basic devices such as a keyboard or a mouse.
4.10 4.10: Wand
In most immersive cubes you interact with the simulation with a standard device called a Wand, or a Flystick.
This device is held in the hand and tracked in space. It commonly has several buttons and a two-axis joystick.


A Wand can be decomposed into three parts:
- its 3D position and orientation,
- the value of the joystick axis,
- the state of the buttons.
MiddleVR includes standard interactions based on the Wand: navigation, grabbing of objects.
You first have to configure the three parts of the Wand before you can use those interactions.
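Once configured, the three parts of the Wand can be queried from a Unity script. A minimal sketch using the device manager (method names follow the MiddleVR C# API; check the API reference of your version):

```csharp
// Sketch only: reads the Wand joystick axes and its first button.
using UnityEngine;

public class PrintWandState : MonoBehaviour
{
    protected void Update()
    {
        // Two-axis joystick values
        float horizontal = MiddleVR.VRDeviceMgr.GetWandHorizontalAxisValue();
        float vertical   = MiddleVR.VRDeviceMgr.GetWandVerticalAxisValue();

        // State of the first Wand button
        if (MiddleVR.VRDeviceMgr.IsWandButtonPressed(0))
        {
            Debug.Log("Wand button 0 pressed, joystick: " + horizontal + ", " + vertical);
        }
    }
}
```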
4.11 4.11: Does MiddleVR support output devices such as force feedback or haptic devices?
MiddleVR supports Haption’s haptic devices. Contact us for more information.
4.12 4.12: 3D Nodes
MiddleVR internally uses its own scenegraph to describe the user and the VR system:
This representation is made up of cameras, screens and simple 3D nodes.
4.12.1 4.12.1: Coordinate system
MiddleVR uses a right-handed coordinate system, with X pointing to the right, Y pointing away from the user, towards the screen, and Z pointing up.

4.12.2 4.12.2: 3D Node
Most of the time 3D nodes represent real world objects, like a screen, or a body part like a user’s head, hand, or eyes (i.e. cameras).
In the 3D nodes view, you can configure what your VR system looks like “in the real world”: which body parts of your user are tracked, where the screens are located, and how the users see the virtual world.
A basic 3D node is represented as a blue square.

4.12.3 4.12.3: VR hierarchy
The VR hierarchy is the whole scenegraph describing your VR system and users: the VRSystemCenterNode and all its children. The root of the VR hierarchy represents the physical center of your VR system; we call it the “VRSystemCenterNode”.

VRSystemCenterNode as root.
4.12.4 4.12.4: Cameras
Cameras are like real world cameras. They capture a view of the virtual world. They can also represent a user’s eye position and orientation. You can either use regular cameras or stereoscopic cameras, which will render two slightly different views so as to recreate the 3D perception.

camera (pointing here to the right).
4.12.5 4.12.5: Screen
A Screen is the physical representation of a display surface. A display surface can be for example a projection screen or a computer monitor. A screen node is useful to specify the position, orientation and size of the display surface.

screen.
A Screen doesn’t hold any information about resolution or refresh rate. This information is handled by the Display.
A Screen is used by a camera to determine its viewing frustum in order to compute the correct perspective based on the user position with respect to this Screen.
A typical computer monitor is a combination of a Screen and a Display: the display surface is the monitor, which is not the case with a projector.
A projector is a good example of why the two concepts are separated. The projector in itself is responsible for the refresh rate and resolution of the image, a Display, whereas the projection surface can be very different depending on where you place the projector. The physical position and size of the display surface is then stored as a Screen.
You can find the concept of Screen in other software, named as projection referential, projection surface, display surface, etc.
4.13 4.13: Viewports
A viewport is simply a 2D area on your display where you display the rendering of a particular camera; together, viewports define the layout of cameras on a display.
For example here you can see three different viewports that were each assigned a camera:
Note: Two blue rectangles are displayed behind the viewports because the configuration was authored on a computer with two connected screens.
This will result in the following layout in your 3D application:
4.14 4.14: Display
A display is only the electronic part of a viewing system: a combination of your graphics card and the pixels in your projector or monitor. A display knows about the refresh rate and the resolution of your monitor or projector. The actual physical display surface needed to compute the correct perspective is defined by the screen.
4.15 4.15: Stereoscopy
To create stereoscopy in MiddleVR you will most of the time use a stereoscopic camera, then associate this camera with a viewport. The stereoscopic options are chosen in the viewport parameters.
See also Understanding stereoscopy.
There are several ways to display stereoscopic images in MiddleVR.
4.15.1 4.15.1: Active stereoscopy
Active stereoscopy means displaying the left and right images alternately on the screen. You then need glasses that hide the left eye when the right picture is visible, and vice versa: when the left picture is displayed, they hide the right eye. Those glasses have electronic shutters to achieve this, which is why they are called active glasses, which in turn gives the name to active stereoscopy.
The active stereo mode, also known as the OpenGL quad-buffer mode, requires a specific graphics card, typically a professional card such as an NVIDIA Quadro or an ATI FireGL.
You also have to activate this mode in your graphics drivers. Refer to your driver’s manual to find out how.
4.15.2 4.15.2: Passive stereoscopy
Passive stereoscopy is often achieved by displaying the left and right images side by side, or at least by displaying the two images at the same time. You then have to wear “passive” glasses, which don’t contain any electronics: they separate the two pictures by means of optical filters, often simply polarized filters.
4.16 4.16: Understanding head-tracking and perspective
When in front of a screen or projected image, it is often said that this display surface is like a window to the virtual world. Ideally, this display will work exactly the same as a window in the real world.
When the user moves closer to the window, he will see more of the scene. If he moves further from the window, he should see less of the scene.
If he goes to the left, he should, counter-intuitively, see more of the right side of the scene. Conversely, if he goes to the right, he should see more of the left side of the scene.
4.16.1 4.16.1: Symmetric cameras
With a simple 3D screen, or when watching a 3D movie in a theater, the game/movie assumes that the user is always perfectly in front of the center of the screen. It does not take into account the potential movements of the viewer: if you move your head, the game/movie doesn’t know about it and nothing will change. You will just get a wrong perspective.
In this case, the virtual camera is said to be “symmetric”, because you are assumed to be exactly as far from the left border of the screen as from the right border. The same goes for the top and bottom of the screen.
4.16.2 4.16.2: Asymmetric cameras
Now, if the computer knows exactly where your eyes are in front of the screen, it can modify the parameters of the virtual camera: the goal is that the computed image looks like the picture you would see through a virtual window of the exact same size as the screen, being at the same relative position in real and virtual world.
In this case, the virtual camera is said to be “asymmetric”: your eyes are not in front of the center of the screen anymore, they can be anywhere in space, and the perspective will be computed correctly.
Asymmetric cameras are required to achieve virtually all stereoscopic images in HMDs, 3D screens / Walls and CAVEs.
4.16.3 4.16.3: Video
This is perfectly illustrated in this famous video by Johnny Lee.
4.17 4.17: Configuring a head-mounted display (HMD)
Most VR head-mounted displays (HMDs for short) are made up of two parts: a display system and a tracking system.
The Oculus Rift DK1 and DK2 have integrated trackers, but some HMDs, like the NVisor SX 60, SX 111 or the Sony HMZ-T1/T2 series, don’t have one. It is quite easy to add one, depending on the requirements of the final application.
Note that most of the time the tracking system only tracks the head rotation, rarely the head position, and even more rarely the eye direction.
4.17.1 4.17.1: Understanding the display system
The display system can be monoscopic or stereoscopic, but we will focus on stereoscopic ones, even if stereoscopy might not be important in every case.
Make sure to read the article: “Understanding stereoscopy”.
The idea is quite simple: place two screens in front of your eyes! The reality is of course a bit more complex, involving a lot of design choices between different types of screens and the optical lenses.
The difficulty on the user side is to know the size and location of the virtual screens (the screens as seen through the lenses) with respect to the eyes. This will determine the configuration of the cameras / screens and viewports in MiddleVR.
4.17.2 4.17.2: Understanding the tracking system
The tracking system of an HMD can be any of the existing tracking systems.
Make sure to read the article: “Understanding tracking devices”.
4.17.3 4.17.3: Configuring HMDs
4.17.3.1 4.17.3.1: Predefined configurations
The easiest way of using an HMD in MiddleVR is to simply load a predefined configuration. There are currently predefined configurations for:
- Oculus Rift (DK1, DK2, CV1),
- HTC Vive,
- NVIS-SX60,
- NVIS-SX111,
- Sony-HMZ-T1,
- Vuzix VR 920,
- Vuzix Wrap 1200VR
4.17.3.2 4.17.3.2: Configuring the display system
As mentioned above, the display system can be entirely configured in MiddleVR using Cameras, Screens and Viewports.
The display system determines the field of view and the perspective, but not how the head (or the eyes) moves. This is handled by the tracking system.
To properly configure the display system, you need all the information about the “virtual screens” that display the images in the HMD. This information should be provided by the manufacturer. It will help you determine how to configure MiddleVR cameras, screens and viewports.
Often the configuration is the same as for stereoscopic screen / wall. This is assuming that the HMD is designed so that the two screens are seen as one stereoscopic screen in the distance.
With more advanced HMDs, the two screens are clearly positioned differently. This is particularly obvious with the NVIS SX 111, where the two screens are rotated outwards.
With the Oculus Rift, the two screens are offset outwards, and distortion must be compensated.
See also “How to show a viewport on a specific display”.
4.17.3.3 4.17.3.3: Configuring the tracking system
Please read “Configuring a tracking system”.
4.18 4.18: Configuring a stereoscopic screen / wall
When using a 3D TV or a 3D projector, you have to make sure the configuration is perfect so that the stereoscopy and perspective perfectly match the setup.
Make sure to read the article: “Understanding head-tracking and perspective”.
4.18.1 4.18.1: Understanding what parameters to configure
The only factors affecting the perspective are:
- the screen size: for a 3D TV it is simply the size of the display panel; for a 3D projector, it is the size of the projected rectangular surface,
- the screen position with respect to the eyes of the user.
Stereoscopy is affected by the same factors plus the inter-eye distance.
Finally, a tracker device can be used to provide head-tracking.
4.18.2 4.18.2: Configuring without head-tracking
In this configuration, the user is assumed to be at a fixed position from the screen.
There are two ways to configure this kind of VR system in MiddleVR:
- using only a stereoscopic camera,
- using a stereoscopic camera and a screen.
4.18.3 4.18.3: Using only a stereoscopic camera
In MiddleVR you can simply set up a stereoscopic camera.
The two important parameters here are:
- Screen Distance: set this parameter to the distance between your user and the screen.
- Vertical FOV: set this parameter to the apparent vertical field of view of your screen from the user position.
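The vertical FOV follows directly from the screen height and the viewing distance: fov = 2 · atan(height / (2 · distance)). A small sketch with hypothetical numbers (a 1 m tall screen viewed from 2 m):

```csharp
using UnityEngine;

public static class FovExample
{
    // Apparent vertical field of view of a screen, in degrees.
    public static float VerticalFovDegrees(float screenHeight, float viewerDistance)
    {
        return 2.0f * Mathf.Atan(screenHeight / (2.0f * viewerDistance)) * Mathf.Rad2Deg;
    }
}

// VerticalFovDegrees(1.0f, 2.0f) is approximately 28 degrees.
```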
4.18.4 4.18.4: Using a stereoscopic camera and a screen
The easiest way to configure such a system is to set up a stereoscopic camera linked with a screen (see also Screen parameters).
There are several predefined configurations for such systems that you should use as a basis for configuring your own VR system:
- Wall.vrx
- HoloStage.vrx
- TV3D-32inch-82cm.vrx
- TV3D-46inch-117cm.vrx
4.18.5 4.18.5: Configuring with head-tracking
Make sure to read the article: “Understanding head-tracking and perspective”.
You should first start by configuring your system with a stereoscopic camera and a screen, as described above.
Finally you should configure your head-tracking device and assign it to a node manipulating the stereoscopic camera.
4.18.6 4.18.6: Configuring viewports
See article: “How to show a viewport on a specific display”.
4.19 4.19: Configuring an immersive cube / CAVE (tm)
A CAVE is “simply” made up of multiple stereoscopic walls, so make sure to read the corresponding section.
The main difficulties are:
- properly configuring the position / size of the Screens so everything is exactly matching the real world projection surfaces,
- properly configuring the trackers,
- properly configuring the cluster computers and the network.
There are mainly two predefined configurations for a CAVE:
- Cube-5-Sides: if your CAVE is using a cluster,
- Cube-5-Sides-Flatten: if your CAVE is using a single computer with multiple outputs
4.19.1 4.19.1: Configuring the trackers
Make sure that the zero of the trackers (generally represented as the origin of MiddleVR coordinate system) corresponds to the position of the screens.
For example, if in real life the zero of the tracker is set to be at the center of the floor screen, make sure the center of the floor screen is configured to be at the origin of MiddleVR’s coordinate system.
Make sure to read the article “Configuring a tracking system”.
4.19.2 4.19.2: Configuring the cameras
Make sure to have one stereo camera per wall.
4.19.3 4.19.3: Configuring the cluster
Please read the section “Configuring the cluster computers”.
4.20 4.20: VRPN
VRPN (Virtual Reality Peripheral Network) http://www.cs.unc.edu/Research/vrpn/ is a very popular, community-maintained, de facto standard software library for accessing many VR devices.
It is used by many commercial and free VR applications. It is cross-platform and runs on many different operating systems including Windows, Linux and MacOS. VRPN was released to the public domain by Russell M. Taylor II from the University of North Carolina at Chapel Hill, and the VR community has contributed a lot to improve the project.
The list of supported devices can be found on the VRPN home page http://www.cs.unc.edu/Research/vrpn/.
MiddleVR uses VRPN for some trackers because of its robustness and resilience, and because a lot of VR systems around the world already have a VRPN server configured for their VR devices.
Note: Where possible it is preferable to use native drivers rather than the VRPN client: this gives you more control and lower latency.
4.20.1 4.20.1: Understanding VRPN
VRPN “converts” data from most devices into three main types: Tracker, Analog (Axis in MiddleVR terminology) and Button.
The Tracker type holds a position and an orientation.
The Analog type is used for any type of axis: joystick axis, mouse axis, etc.
The Button type is used for any type of binary button: joystick button, mouse button, etc.
For example a mouse has a two-channel Analog and a three-channel Button.
A Wand, a typical VR device, has a Tracker, Analog data for the joystick, and Buttons.
VRPN requires that you configure a server with the devices that you want to use. The server will then stream data coming from your devices. Finally your program can easily connect to the server to get this data in a standardized way.
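As an illustration, a minimal vrpn.cfg on the server side declares the devices to stream. The device entries below are only examples (a dummy test tracker and the local mouse); adapt them to your actual hardware:

```
# Example vrpn.cfg excerpt -- the device names (Tracker0, Mouse0) are
# labels that clients will use to connect, e.g. Tracker0@localhost.

# A dummy tracker with 2 sensors reporting at 60 Hz, useful for testing:
vrpn_Tracker_NULL Tracker0 2 60.0

# The local mouse, exposed as a 2-channel Analog and a 3-channel Button device:
vrpn_Mouse Mouse0
```

A MiddleVR VRPN Tracker device pointed at Tracker0@localhost would then receive the streamed positions and orientations.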
5 5: Configuring a VR system in MiddleVR
5.1 5.1: MiddleVR configuration interface
The MiddleVR configuration tool is divided into five elements:
- The device window,
- The 3D nodes window,
- The viewports window,
- The cluster window,
- The simulations window.
5.1.1 5.1.1: Devices
The devices window allows you to configure which devices you want to use for the current configuration:
5.1.2 5.1.2: 3D Nodes
The 3D nodes window allows you to create a description of the real world elements that will influence the 3D rendering and interaction of your virtual world. 3D Nodes were explained above in the documentation.
For example this is where you will specify that a physical screen is two meters wide, that the user’s head and left and right hands are tracked, and by which tracker, etc.
In this window, you can also choose to hide or display the cameras in the 3D view by opening the “View” menu and (un)checking “Display cameras”.
In the 3D View you can move by holding the right mouse button and moving the mouse. To reset the view to its original position, click the “Reset View” button in the “View” menu. You can also center the view on a specific 3D node by selecting it in the TreeView.
5.1.3 5.1.3: Viewports
The viewports window allows you to specify where to display the rendering of your cameras. You can specify for example that a camera will only display its rendering in the top-left corner of the screen, and another camera could display its rendering only in the bottom-right corner of the screen. Or you could say that the camera rendering the left eye is displayed on your primary screen, and that the camera rendering the right eye is displayed on your secondary screen.
The viewports are displayed as red rectangles. The available displays are displayed in blue.
5.1.4 5.1.4: Cluster
The cluster window manages the configuration of the cluster nodes and specific cluster parameters:
5.1.5 5.1.5: Simulations
The simulations window allows you to manage the applications that you want to run with MiddleVR. It allows you to choose an application and then select the configuration file it should be running with.
5.2 5.2: Configuring devices
5.2.1 5.2.1: Adding a device
To add a device to your VR system description, simply go to the Devices tab, and click the ‘+’ button to display the Add Device window:
You can check all the supported devices in the section: Devices.
5.2.1.1 5.2.1.1: Keyboard, mouse, joysticks
Keyboard, mouse and all available joysticks are automatically added by the DirectInput driver when MiddleVR is initialized:
5.2.1.2 5.2.1.2: A.R.T. DTrack
The DTrack driver lets MiddleVR receive positions and orientations of A.R.T. Flysticks, body and finger targets, and also values of Flystick joysticks and Flystick button states.
The settings “Right”, “Front” and “Up” let you provide the coordinate system that your A.R.T system is calibrated with (see “Adjusting the coordinate system”).
In order to work, the DTrack driver must listen on a local port (default: 5000) and create a set of MiddleVR trackers (default: 1).
The driver will try to listen on the local port immediately after being added. It is important that you check it did not display the following error message: “[X] DTrack driver: Init error. Check that the local port ‘5000’ is available.”. As the message suggests, the port (here, 5000) is already in use by another application, so you should use another port number.
5.2.1.3 5.2.1.3: GameTrak Trackers
The GameTrak is an old mechanical device that reports two absolute positions but no orientation. It is mainly used by hobbyists and is difficult to find.
5.2.1.4 5.2.1.4: InterSense
The InterSense driver provides the position and rotation of tracking stations. This information is determined by the accelerometers and gyroscopes of the tracking stations and is corrected by fusing the output of the inertial sensors with range measurements obtained from ultrasonic components.
A “Device” is an InterSense Processor Unit like the IS-900.
A “Tracker” is an InterSense tracked device (aka InterSense station): it gives positions and orientations from the coordinate system that is provided through the parameters named “Right”, “Front” and “Up” (see “Adjusting the coordinate system”).
Some InterSense Stations are wands (tracker, joystick and buttons) or styluses (buttons). InterSense Stations plug into InterSense Processor Units.
For each InterSense Station, MiddleVR provides:
- a virtual tracker with positions and orientations.
- two axes for joystick input.
- button states.
Axes and buttons are updated only when a wand or stylus is connected to an InterSense Processor Unit.
Note: the InterSense Driver will always assume that there are as many connected InterSense stations as the maximum number of addressable stations.
Please have a look at the InterSense documentation for hardware configuration. It is suggested that you put the “isports.ini” file (if you need to use it) in a directory specified by the “ISPORTS_INI_DIR” environment variable. This folder should not reside in the MiddleVR system folders, in order to keep the installation clean.
The content of the “isports.ini” file varies according to your hardware setup:
Example with a RS-232 connection:
Port1 = COM1:115200
The baud rate is optional and defaults to the maximum performance: 115200. You might lower the value if you use a long serial cable.
Example with a remote computer (via TCP):
Port1 = ip:port
The IP “port” will probably be the default value: 5005.
If you encounter difficulties using an RS-232 connection, we recommend that you check the firmware version and upgrade it before trying again.
5.2.1.5 5.2.1.5: Kinect trackers (Microsoft SDK)
MiddleVR will create 3D trackers based on the skeleton furnished by the Microsoft SDK. MiddleVR 1.6 supports both Kinect 1 and Kinect 2.
Note: Starting with MiddleVR 1.2, the Kinect orientations are also applied. To disable the rotations check the 3D nodes options.
You can find a ready-to-use configuration file that will automatically assign all the trackers to the corresponding 3D nodes. This configuration file is located in data/Config/Kinect1.vrx or data/Config/Kinect2.vrx, depending on your Kinect version.
5.2.1.6 5.2.1.6: Leap Motion
The Leap Motion is an optical tracking system. This driver will report one hand (one palm and five fingers).
5.2.1.7 5.2.1.7: Leap Motion (SDK2)
The Leap Motion SDK2 is an optical tracking system. This driver reports data for up to four hands in 6 DoF: elbows, wrists, palms and fingers (metacarpal, proximal, intermediate and distal bones).
This driver provides read-only properties: the maximum number of tracked hands, the current number of visible hands, and whether each visible hand is a left hand (if a hand is not visible, it is reported as not being a left hand).
The best way to create a configuration for the Leap Motion (SDK2) is to start from the predefined configuration LeapMotion-SDK2.vrx (you will get a configuration for two hands).
The property “HMDModeEnabled” should be used when the Leap Motion is attached to the front of an HMD, with its cable on the right. In this case, the Leap Motion will use an internal tracking mode that is optimized for HMDs, and the coordinate frame will be as follows:
- +X pointing to the right of the HMD,
- +Y pointing to the front of the HMD,
- +Z pointing toward the top of the HMD.
When a Leap Motion is in front of an Oculus Rift DK2, you should use the file HMD-Oculus-Rift-DK2-LeapMotion-SDK2.vrx: a 3D node for the Leap Motion will be parented to a 3D node for the Oculus Rift DK2 so movements of your hands will be relative to the HMD. In addition, this configuration file will activate the internal tracking mode of Leap Motion for HMDs.
5.2.1.8 5.2.1.8: Motion Analysis (beta)
The Motion Analysis driver requires you to run Cortex on the local machine.
5.2.1.9 5.2.1.9: NaturalPoint OptiTrack Trackers (NaturalPoint NatNet SDK)
OptiTrack trackers are infrared trackers that will report orientation and position. The system is intended for body motion capture with high precision and low latency.
The configuration requires an IP address of the computer that is running a NaturalPoint tracking software (e.g. Motive).
| Option | Description |
|---|---|
| Number of trackers | The maximum number of trackers to read values from. |
| Local IP Address | Your IP address. It might be “127.0.0.1” if you run the Motive software on this computer. Otherwise, be sure to use an IP address on the same network as the server running Motive. |
| Server IP Address | IP address of the computer running the software Motive. |
| Connection type | Select whether unicast or multicast transmission should be used. If the server delivers data to one computer only, both options should lead to the same performance. Note that this setting must match what is selected in the server to enable communication. |
| Right/Front/Up | Let you provide the coordinate system that your OptiTrack system is calibrated with (see “Adjusting the coordinate system”). |
| Command port (Advanced) | Port to send/receive commands to/from the server. By default, the server uses the port 1510 in UDP to send and receive. Your firewall must allow the communication. |
| Data port (Advanced) | Port to receive data from the server. By default, the server uses the port 1511 in UDP to send data. Meanwhile, this computer (i.e. the local machine) also uses the port 1511 in UDP to receive data from the server. Your firewall must allow the communication. |
If you encounter problems connecting to a server or receiving data, please check the following points:
- the server and your machine use the same connection type (i.e. unicast/multicast). Prefer unicast to ease the configuration.
- the command and data ports are correct.
- communication is not blocked by a firewall.
- the setting ‘Broadcast Frame Data’ in the server is checked.
- the local interface is not set to “local loopback” but to the IP of the machine if Motive and MiddleVR-Config (or your simulation) do not run on the same computer.
5.2.1.10 5.2.1.10: NaturalPoint TrackIR Tracker
The NaturalPoint TrackIR tracker is an infrared tracker that will report orientation and position. It is intended mainly for PC gaming. It requires that you have the latest version of the TrackIR software running.
There is no configuration required for this device in MiddleVR.
This requires that the latest version of the TrackIR software is installed and running: http://www.naturalpoint.com/trackir/06-support/support-download-software-and-manuals.html Make sure to update the list of supported games: MiddleVR will not be recognized unless you do, even with the latest software version installed.
5.2.1.11 5.2.1.11: Oculus Rift DK1
The best way to create a configuration for the Rift is to start from the Oculus Rift DK1 predefined configurations (HMD-Oculus-Rift-DK1.vrx and HMD-Oculus-Rift-DK1-Razer-Hydra.vrx) that contain all the Rift-specific ready-to-use parameters.
Note: The FOV, aspect ratio and IPD are modified by the driver for an optimum integration of the latest Oculus SDK.
The Oculus Rift system can be divided into two parts: the tracking system and the display system.
The tracking system is an inertial tracker that only reports orientation. A basic usage doesn’t require any configuration, but if you want to activate the Magnetometer Drift Correction you need to first calibrate the magnetometer with the Oculus Configuration Utility tool. If a calibration is saved with the “Enable Mag Yaw Correction” option checked, then the drift correction will be activated in your simulation.
If you wish to start a configuration from scratch you need to add the OculusRiftDK2 Tracker in the Device panel. Note that this tracker will only add the tracker and will not configure the cameras and viewport.
To handle the Rift display system, the “OculusRiftWarping” option of the side-by-side viewport needs to be enabled. This option activates the lens deformation and chromatic aberration correction. It also forces the anti-aliasing level to 2 to compensate for the current low resolution and offer a good user experience.
Note: All applications made with the earlier version of the Oculus Rift DK1 driver (“OculusRift”, in MiddleVR prior to 1.6.0) should be updated.
Note: The Oculus SDK does not allow the application to apply anti-aliasing directly from the application. Thus, if in your MiddleVR configuration file you set the anti-aliasing level above 1 you will not have anything rendered in the HMD.
Note: Again we really encourage you to use the predefined configurations and modify them to suit your needs.
5.2.1.12 5.2.1.12: Oculus Rift DK2
The best way to create a configuration for the Rift is to start from the Oculus Rift DK2 predefined configurations (HMD-Oculus-Rift-DK2.vrx, HMD-Oculus-Rift-DK2-Razer-Hydra.vrx and HMD-Oculus-Rift-DK2-LeapMotion-SDK.vrx) that contain all the Rift-specific ready-to-use parameters.
Note: The FOV, aspect ratio and IPD are modified by the driver for an optimum integration of the latest Oculus SDK.
The Oculus Rift system can be divided into two parts: the tracking system and the display system.
The tracking system is composed of a gyroscope, an accelerometer and a magnetometer, which provide only the orientation of the head. For positional tracking, the Oculus Rift DK2 is fitted with infrared LEDs tracked by an infrared camera.
The center of positions is the center of the tracking camera and does not depend on the camera orientation.
Note that adding the Oculus Rift DK2 tracker in the Device panel will only add the tracker and will not configure the cameras and viewport.
To handle the Rift display system, simply select the stereo camera from which the Oculus Rift DK2 will get its images. To do so, in the “TargetCamera” property of the “Oculus Rift Driver” tracker, select the camera you want to use.
Please be aware that support for the Oculus Rift DK2 is currently still maintained by Oculus in runtime 1.8, but they have made it clear that this support will not last forever. If the Oculus Rift DK2 is no longer supported, please use the OculusRiftDK2 Tracker in the Device panel.
Note: The Oculus SDK does not allow the application to apply anti-aliasing directly from the application. Thus, if in your MiddleVR configuration file you set the anti-aliasing level above 1 you will not have anything rendered in the HMD.
Note: Again we really encourage you to use the predefined configurations and modify them to suit your needs.
5.2.1.13 5.2.1.13: Oculus Rift CV1
The best way to create a configuration for the Rift is to start from the Oculus Rift predefined configurations (HMD-Oculus-Rift-CV1.vrx and HMD-Oculus-Rift-CV1-Razer-Hydra.vrx) that contain all the Rift-specific ready-to-use parameters.
Note: The FOV, aspect ratio and IPD are modified by the driver for an optimum integration of the latest Oculus SDK.
The Oculus Rift system can be divided into two parts: the tracking system and the display system.
The tracking system is composed of a gyroscope, an accelerometer and a magnetometer, which provide only the orientation of the head. For positional tracking, the Oculus Rift is fitted with infrared LEDs tracked by an infrared camera.
The center of positions is the center of the tracking camera and does not depend on the camera orientation.
Note that adding the Oculus Rift tracker in the Device panel will only add the tracker and will not configure the cameras and viewport.
To handle the Rift display system, simply select the stereo camera from which the Oculus Rift will get its images. To do so, in the “TargetCamera” property of the “Oculus Rift” driver, select the camera you want to use.
Regarding the support of the Oculus Touch: under the “OculusRift SDK 1.4 Driver” you will find the left and right hand trackers under the names “OculusRift0.LeftHandTracker” and “OculusRift0.RightHandTracker”. In the HMD-Oculus-Rift-CV1.vrx predefined configuration those trackers are already bound to the “HandNode” and the “LeftHandNode”, as well as to the two Wands (one for each hand).
Note: The Oculus SDK does not allow the application to apply anti-aliasing directly from the application. Thus, if in your MiddleVR configuration file you set the anti-aliasing level above 1 you will not have anything rendered in the HMD.
Note: In case you do not have access to a pair of Oculus Touch controllers, the OculusRift driver has native support for the XBox controller, so the inputs will be received as if you had an Oculus Touch. In this case you won’t have any hand tracking.
Note: Again we really encourage you to use the predefined configurations and modify them to suit your needs.
5.2.1.14 5.2.1.14: HTC Vive
The best way to create a configuration for the HTC Vive is to start from the HTC Vive predefined configuration (HMD-HTC-Vive.vrx), which contains all the Vive-specific ready-to-use parameters.
Note: The FOV, aspect ratio and IPD are modified by the driver for an optimum integration of the latest OpenVR SDK.
The HTC Vive can be divided into two parts: the tracking system and the display system.
The tracking system is composed of a gyroscope, an accelerometer and a magnetometer, which provide only the orientation of the head. For positional tracking, the HTC Vive is fitted with photo-sensors that detect the positions of the Lighthouse base stations relative to the headset.
The center of positions and orientations is specified by the users when they configure their virtual space.
Note that adding the HTC Vive tracker in the Device panel will only add the trackers and will not configure the cameras and viewport.
To handle the HTC Vive display system, simply select the stereo camera from which the HTC Vive will get its images. To do so, in the “TargetCamera” property of the “OpenVR Driver”, select the camera you want to use.
| Option | Description |
|---|---|
| ControllerTrackersNb | The maximum number of controllers to read values from. |
| TrackingReferenceNb | The maximum number of Lighthouses to read values from. |
| TargetCamera | The camera stereo from which the HTC Vive will get its images. |
| PlayAreaWidth | The width in meters of the play area specified by the user in SteamVR. (read-only) |
| PlayAreaLength | The length in meters of the play area specified by the user in SteamVR. (read-only) |
Note: Again we really encourage you to use the predefined configurations and modify them to suit your needs.
It is possible to make the controllers vibrate through the use of vrCommand.
Here is a short example:
// The parameters for the "vrDriverOpenVRSDK.TriggerHapticPulse" vrCommand are:
// - ControllerId: int
// It is the controller we want to make vibrate. The first controller is
// the controller 0. If ControllerId is -1 then all the
// connected controllers will receive the haptic pulse.
// - Axis: uint
// It is the axis we want to make vibrate on the controller. Controllers
// usually have only one axis but they can have more. The first
// axis is the axis 0.
// - VibrationTime: uint
// It is the time in microseconds the pulse will last. It can last
// up to 3 milliseconds.
// Note that after this call the application may not trigger another haptic
// pulse on this controller and axis combination for 5 ms.
var value = vrValue.CreateList();
value.AddListItem(new vrValue(-1));
value.AddListItem(new vrValue(0));
value.AddListItem(new vrValue(3000));
MiddleVR.VRKernel.ExecuteCommand("vrDriverOpenVRSDK.TriggerHapticPulse", value);
An extensive example is given in the sample “TriggerHapticPulseOnVRAction”.
5.2.1.15 5.2.1.15: Razer Hydra trackers
The Razer Hydra has two magnetic trackers, as well as a joystick and several buttons on each tracker.
5.2.1.16 5.2.1.16: SpaceMouse
A SpaceMouse is a hardware device that lets a user translate and rotate objects in 3D. As the name suggests, it also provides buttons to trigger actions.
There is no configuration required for this device.
The SpaceMouse tracker is a tracker that can be translated and rotated in 3D by a SpaceMouse device. The configuration fields are described below.
| Option | Description |
|---|---|
| Tracker translation speed | A factor to be applied to the translation that comes from the SpaceMouse. This factor depends on time, hence it is expressed as a linear-movement unit per second. The SpaceMouse sends arbitrary quantities, so it is impossible to talk about meters per second (m/s) for example. The SpaceMouse control panel can also be tweaked to increase or decrease the translations that the SpaceMouse sends to software, including MiddleVR; see its settings. |
| Tracker rotation speed | A factor to be applied to the rotation that comes from the SpaceMouse. This factor is similar to the translation speed but is expressed as an angular velocity, in degrees per second. The SpaceMouse control panel can also be tweaked to increase or decrease the angle values it sends to software, including MiddleVR; see its settings. |
| Tracker movements in local space | When checked, tracker movements are expressed in the local space of the tracker. Assuming a 3D space with X to the right, Y pointing forward and Z to the top, consider that the option is checked and that you rotated the tracker around the X axis by 45 degrees. If you then move the tracker forward, it will go along the Y axis that was rotated by 45 degrees; said differently, the tracker will “climb”. If instead we are working in global space, the Y axis remains flat (i.e. aligned with the global Y axis of the world): you are moving the rotated tracker along the flat Y axis. In addition, rotations are relative to the world axes but around the object pivot. |
5.2.1.17 5.2.1.17: SpacePoint Fusion Tracker
The PNI SpacePoint Fusion tracker is an inertial tracker that reports an absolute orientation in space, but no position.
There is no configuration for this device.
5.2.1.18 5.2.1.18: Trivisio Colibri
The Trivisio Colibri is an inertial sensor. It only reports orientation. It doesn’t require any configuration.
5.2.1.19 5.2.1.19: Vicon Trackers (Vicon DataStream SDK)
Vicon trackers are infrared trackers that will report orientation and position. The system is intended for body motion capture with high precision and low latency.
The configuration requires an IP address of the computer that is running a Vicon tracking software (e.g. Vicon Tracker).
| Option | Description |
|---|---|
| Number of trackers | The maximum number of trackers to read values from. |
| Remote Address | IP address of a computer running a Vicon’s software to read data from. |
| Remote Port | Port to be used on the computer with a running Vicon’s software. |
| Use multicast connection | Select whether unicast or multicast transmission should be used. If the server delivers data to one computer only, both options should lead to the same performance. Note that another connected client must have turned multicast on before this type of connection can be used. |
| Right/Front/Up | Let you provide the coordinate system that your Vicon system is calibrated with (see “Adjusting the coordinate system”). |
MiddleVR Trackers can only use sets of labeled Vicon markers.
How do you get sets of labeled markers? MiddleVR trackers directly map to what Vicon calls Segments. A Segment is for example a tracked forearm, and is always made up of markers. Sets of labeled markers are thus automatically obtained from Segments, also called Objects (another name used by Vicon in its Tracker software to designate Segments).
5.2.1.20 5.2.1.20: Vuzix Tracker
The Vuzix tracker is an inertial tracker. It only reports orientation. It doesn’t require any configuration.
Note that adding the Vuzix tracker only adds the tracker and will not configure the cameras and viewport.
5.2.1.21 5.2.1.21: zSpace
A zSpace is a hardware device providing a 24" stereo display, passive stereo glasses and a stylus. The glasses and stylus are tracked by infrared cameras mounted on the screen in order to determine their positions and rotations. The stylus can also vibrate and turn on its LED. It provides buttons and tap events on its tip.
Several trackers are provided. They all work in the tracker space that is explained in the schema below for side and front views.
The available settings are presented below.
| Option | Description |
|---|---|
| Stylus LED color | The color of the stylus LED, in RGB. Each color component can be set to 0 or 1 only. Any value in the open interval ]-1,1[ will be reinterpreted as 0; above 1 or below -1, the value will be reinterpreted as 1. Note that the LED will not light if the color is black. |
| Stylus LED turned on | As the name suggests, check it to turn on the stylus LED. However you must set an LED color that is not black to see a result on the stylus. |
| Stylus default duration vibration | Defines the duration in seconds of the vibration of the stylus. This value is used by the setting “Fire vibration”. |
| Stylus default duration between vibrations | Defines the duration in seconds between vibrations of the stylus. This value is used by the setting “Fire vibration”. |
| Default number of vibrations | Defines the number of vibrations for the stylus. If 0 is given, the stylus will not vibrate. With -1, the stylus will not stop vibrating until the user explicitly asks for it. This value is used by the setting “Fire vibration”. |
| Intensity of the vibrations | From 0.0 to 1.0. Currently only intensities at 0.1 intervals are supported by the zSpace SDK (i.e. 0.1, 0.2, …, 0.9, 1.0). Intensity values not specified at a valid interval will be rounded down internally to the nearest valid interval. |
| Fire vibration | Fires a vibration of the stylus. The values used are the “default” values defined above: the vibration duration, the duration between two vibrations and the number of vibrations. Please note that this checkbox does not tell whether the stylus is vibrating; it only indicates that the user fired the vibration system. However, unchecking it will stop the vibration immediately (if the stylus is vibrating). |
It is possible to make the stylus vibrate (with non-default parameters) or to change color of its LED (once turned on) through the use of vrCommand. An extensive example is given in the sample “VRZSpaceSample”; it also indicates how to track visibility of head or zSpace stylus.
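The call follows the same vrCommand pattern as the HTC Vive haptic pulse example above: build a vrValue list and pass it to ExecuteCommand. The command name and parameter below are placeholders, not the real zSpace API; the sample “VRZSpaceSample” contains the exact command names and their parameters.

```csharp
// Sketch of the vrCommand call pattern only. The command name
// "vrDriverZSpace.SomeCommand" and its parameter are HYPOTHETICAL;
// see the sample "VRZSpaceSample" for the real names.
var value = vrValue.CreateList();
value.AddListItem(new vrValue(1)); // example parameter
MiddleVR.VRKernel.ExecuteCommand("vrDriverZSpace.SomeCommand", value);
```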
5.2.1.22 5.2.1.22: Tracker Simulator - Gamepad
MiddleVR is able to simulate a 3D tracker with an official Microsoft XBOX 360 gamepad. This is useful if you don’t have a real 3D tracker available, or simply if you want to navigate with a gamepad.
For the moment only an official XBOX 360 gamepad will work.
5.2.1.23 5.2.1.23: Tracker Simulator - Mouse
MiddleVR can simulate a 3D tracker with a three button mouse.
- Press middle mouse button to move forward/backward and rotate (yaw).
- Pressing the Alt key will translate left/right, up/down.
- Pressing both left and right mouse button will reset the tracker.
5.2.1.24 5.2.1.24: Tracker Simulator - Keyboard
MiddleVR can simulate a 3D tracker with key presses.
- Press Ctrl + X/Y/Z to translate.
- Press Ctrl + Alt + Y/P/R to rotate.
- Adding the Shift key will reverse translations and rotations.
5.2.1.25 5.2.1.25: VRPN Tracker
VRPN is an open-source project that handles lots of different VR devices. VRPN requires that you configure a server with the devices that you want to use. Read more about VRPN.
MiddleVR can handle VRPN trackers, axis and buttons. This section describes the configuration of trackers. For axis and buttons, see below.
Once the VRPN server is up and running, you must specify in MiddleVR the address and optionally the port of this server, the number of trackers that you want to use, and optionally modify the way axes are applied.
| Option | Description |
|---|---|
| Address | Address of the VRPN server, plus the name of a particular device on this server. Examples: Tracker0@localhost, Tracker1@192.168.1.99, Kinect@LabPC.Moulinsart.fr. You can also specify the port of the server: Tracker0@localhost:3884, Tracker1@192.168.1.99:3886. The default VRPN port is 3883. |
| Index | A device, such as a Polhemus 3D tracker, can send data of multiple 3D trackers through one device. For example Tracker0@localhost can represent 10 different trackers, also named channels. Index specifies the starting index of the device. |
| Number of trackers | After specifying the starting index, you can also specify the number of trackers (channels) to use. For example, an index of 0 and a Number of trackers of 3 will result in the usage of channels 0, 1 and 2. A starting index of 4 and 2 trackers will result in the usage of channels 4 and 5. |
| Name | Name prefix for each tracker. |
| Right/Front/Up | Axis coordinate system (see “Adjusting the coordinate system”). |
| Scale | A factor to be applied to each tracker value. This value must be positive and non-zero. It can be useful to adapt VRPN values to the dimensions of the current virtual world. |
| Wait for data | Should the driver wait until new data arrives? This can be useful if your tracker sends updates at the same refresh rate as your display: for each frame you know you’ll have a new update, and will not miss one. |
Once the VRPN trackers are added, you will immediately be able to see if the data are streamed correctly:
If the VRPN server is not reachable, you will probably see the following error in the log window:
For more information about troubleshooting VRPN, see Troubleshooting VRPN on the knowledge base.
5.2.1.26 5.2.1.26: VRPN Axis
VRPN Axis can represent joystick axes, sliders or other analog information.
| Option | Description |
|---|---|
| Address | Address of the VRPN server, plus the name of a particular device on this server. Examples: Joystick@localhost, Mouse0@192.168.1.99, Sliders1@LabPC.Moulinsart.fr. You can also specify the port of the server: Joystick@localhost:3884, Mouse0@192.168.1.99:3886. The default VRPN port is 3883. |
| Number of axis | Number of axes on this device. |
| Name | MiddleVR device name. |
5.2.1.27 5.2.1.27: VRPN Buttons
VRPN Buttons represent buttons with a value of true or false.
| Option | Description |
|---|---|
| Address | Address of the VRPN server, plus the name of a particular device on this server. Examples: Joystick@localhost, Mouse0@192.168.1.99, Sliders1@LabPC.Moulinsart.fr. You can also specify the port of the server: Joystick@localhost:3884, Mouse0@192.168.1.99:3886. The default VRPN port is 3883. |
| Number of buttons | Number of buttons on this device. |
| Name | MiddleVR device name. |
5.2.2 5.2.2: Configuring the Wand
As mentioned previously, the Wand is composed of three parts:
- a 3D tracker,
- a two-axis joystick,
- a set of buttons.
You have to manually add and configure the three devices that will make up the wand. In the screenshot above, we have added a VRPN Tracker, a VRPN Axis for the joystick axes, and VRPN Buttons for the buttons.
You can also use a simple joystick for the axes and buttons, or even mouse buttons.
You then have to specify, in the Wand section, which axis and button devices you want to use, and the ordering of the axes and buttons.
You also have to assign the tracker to the HandNode.
Finally you have to configure the usage of the Wand in Unity.
| Option | Description |
|---|---|
| Device for wand navigation (axis) | The device that will be used to get the wand joystick axis values |
| Horizontal axis index | Index of the horizontal axis of the joystick |
| Horizontal axis scale | Scale factor that should be applied to the value of the horizontal axis |
| Horizontal axis value | Display value of the computed horizontal axis after scaling |
| Vertical axis index | Index of the vertical axis of the joystick |
| Vertical axis scale | Scale factor that should be applied to the value of the vertical axis |
| Vertical axis value | Display value of the computed vertical axis after scaling |
| Device for wand interaction (buttons) | The device that will be used to get the wand buttons states |
| Button 0 index | Index of the primary button |
| Button1 index | Index of the secondary button |
| … | … |
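The axis index and scale options above amount to picking one channel of the device's axis array and applying a scale factor. An illustrative sketch (the function name is ours, not a MiddleVR API; we also assume values are clamped to the usual [-1, 1] joystick range):

```python
def wand_axis_value(raw_axes, index, scale=1.0):
    """Pick one channel from a device's axis array, apply the scale
    factor, and clamp to the conventional [-1, 1] joystick range."""
    value = raw_axes[index] * scale
    return max(-1.0, min(1.0, value))

# Horizontal axis on channel 0, vertical on channel 1 with inverted sign:
axes = [0.25, -0.5, 0.0]
horizontal = wand_axis_value(axes, index=0, scale=1.0)
vertical = wand_axis_value(axes, index=1, scale=-1.0)
```

A negative scale is a convenient way to invert an axis whose hardware direction does not match the expected navigation direction.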
Sometimes a single wand is not enough. In that case, you will need to add and configure additional wands in your configuration. You will then be able to retrieve the wand you want from the device manager and get its tracker, axis and button data.
5.3 5.3: Configuring 3D Nodes
5.3.1 5.3.1: 3D Node
Most of the time 3D nodes represent real world objects, like a screen, or a body part, like a user’s head, hand, or eyes (i.e. cameras).
In the 3D nodes view, you can configure what your VR system looks like “in the real world”: which body parts of your user are tracked, where the screens are located, and how the user sees the virtual world.
3D nodes are stored as a tree structure. Each node can have many children but only one parent. This hierarchy can be used to represent a user’s body and the natural relationship between its body parts. If I move my body, all my body parts are moving with me. If I move my arm, my hand will also move, etc.
Each node has a position and orientation in space with respect to its parent, represented by their X/Y/Z and Yaw/Pitch/Roll properties in Local and World space.
The world position and orientation of a node are computed by combining the transformation (position and orientation) of all its parents. This result is represented as the PositionWorld and OrientationWorld properties. The OrientationWorld read-only property is represented as a Quaternion, whereas the editable orientations are represented as Euler angles. Internally MiddleVR uses quaternions.
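The world-transform computation described above can be sketched as folding the local transforms from the root down to a node. This is illustrative Python, not MiddleVR code; quaternions are written as (w, x, y, z):

```python
def quat_mul(a, b):
    # Hamilton product of two quaternions (w, x, y, z).
    aw, ax, ay, az = a
    bw, bx, by, bz = b
    return (aw*bw - ax*bx - ay*by - az*bz,
            aw*bx + ax*bw + ay*bz - az*by,
            aw*by - ax*bz + ay*bw + az*bx,
            aw*bz + ax*by - ay*bx + az*bw)

def quat_rotate(q, v):
    # Rotate vector v by unit quaternion q: q * (0, v) * conj(q).
    w, x, y, z = q
    rw, rx, ry, rz = quat_mul(quat_mul(q, (0.0, *v)), (w, -x, -y, -z))
    return (rx, ry, rz)

def world_transform(chain):
    """Fold a root-to-node chain of (local_position, local_orientation)
    pairs into a world position and a world orientation."""
    pos, ori = (0.0, 0.0, 0.0), (1.0, 0.0, 0.0, 0.0)  # identity
    for local_pos, local_ori in chain:
        offset = quat_rotate(ori, local_pos)
        pos = tuple(p + o for p, o in zip(pos, offset))
        ori = quat_mul(ori, local_ori)
    return pos, ori

# A parent yawed 90 degrees about Z, carrying a child 1 m along local X:
import math
c, s = math.cos(math.pi / 4), math.sin(math.pi / 4)
pos, ori = world_transform([((0, 0, 0), (c, 0, 0, s)),
                            ((1, 0, 0), (1, 0, 0, 0))])
```

This illustrates why moving a parent moves all its children: each child's world transform is recomputed through the chain.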
3D Nodes can also be assigned a tracker. For more information see the Tracker property below.
There are several types of 3D nodes.
- Regular 3D node: this object only represents a reference frame in space. This reference frame can be interpreted how you want. It could be a body part, or a tracking system’s base that you use as a reference.
- Camera: a view of the world.
- Stereoscopic camera: a stereoscopic (S3D) view of the world.
- Screens: a physical display surface.
5.3.1.1 5.3.1.1: Creating nodes
You can create 3D nodes by clicking on the ‘+’ button. You will be presented with a window to choose which 3D node you want to create:
Choose the type of node you want to add and press Add.
Note: The new node will automatically be added as a child of the currently selected node.
5.3.1.2 5.3.1.2: Properties
A 3D node has several properties that you can modify in the configuration tool.
| Option | Description |
|---|---|
| Name | The name of the 3D node. The name can be used to find a particular node. |
| Tag | A tag is a word representing a semantic information. For example a node can have a “Hand” tag. Another node can also have the same “Hand” tag. You can then find all the nodes that have this particular tag, or behave in a certain way when you find a node with this tag. For example some objects would only react if they are touched by a 3D node that has the “Hand” tag. |
| Parent | The parent is another 3D node. The parent cannot be a child of the node. |
| Tracker | When a 3D node has a tracker assigned to it, the position and orientation data of the tracker will automatically be applied to the local position and orientation of the node. |
| X,Y,Z Local | The local position of the node with respect to its parent coordinate system. In meters. Disabled if the node has a Tracker. |
| PositionWorld | The world position, taking into account all the parents’ cumulated transforms. In meters. |
| Yaw,Pitch,Roll Local | The local orientation of the node with respect to its parent coordinate system. In degrees. Disabled if the node has a Tracker. |
| OrientationWorld | The world orientation, taking into account all the parents’ cumulated transforms, represented as a quaternion. |
A 3D node also has some advanced properties.
Note: The UseTrackerX/Y/Z/Yaw/Pitch/Roll options are only available when the Tracker of the node is not Undefined.
| Option | Description |
|---|---|
| IsFiltered | Filters the data applied from the tracker by using the “One Euro Filter”. Enabled only if the node has a tracker. More info: http://www.lifl.fr/~casiez/1euro/. |
| CutOffFrequency | Appears only if the filter is active. The “One Euro Filter” is mainly a low-pass filter, which means it reduces the movement jitter of frequencies higher than the cutoff frequency. The default value is 1Hz. The closer you go to 0 Hz, the smoother the movements are, and the higher the lag is. |
| Reactivity | Appears only if the filter is active. Depending on the “Cut Off Frequency” value, the “One Euro Filter” can add some lag to the movement. The “Reactivity” parameter is a scalar value to reduce this lag. You can find more details about this parameter in the filter calibration procedure below. Start from ‘0’ where no reactivity is added and make this value slowly grow to have the best results without bringing back too much jitter. |
| X,Y,Z World | The world position of the node. In meters. Disabled if the node has a Tracker. |
| Yaw,Pitch,Roll World | The world orientation of the node. In degrees. Disabled if the node has a Tracker. |
| UseTrackerX | Applies the X information from the tracker to the node if checked. |
| UseTrackerY | Applies the Y information from the tracker to the node if checked. |
| UseTrackerZ | Applies the Z information from the tracker to the node if checked. |
| UseTrackerYaw | Applies the Yaw information from the tracker to the node if checked. |
| UseTrackerPitch | Applies the Pitch information from the tracker to the node if checked. |
| UseTrackerRoll | Applies the Roll information from the tracker to the node if checked. (Did you guess?) |
5.3.1.3 5.3.1.3: Calibration
You can calibrate a 3D node to have a neutral position and/or orientation. What does it mean?
It means that:
- its world position is (0,0,0),
- the node orientation is neutral: it is facing the Y axis, with its right looking at the X axis, and up looking at the Z axis.
Note: Only a node without a tracker can have its transformation calibrated. If your node has a tracker, you can either use “Calibrate Parent”, or create a child node that you can calibrate.
MiddleVR offers different ways of calibrating a node:
- Set Neutral Transformation: This action will set a local transformation such that the world transformation is neutral.
- Set Neutral Position: This action will set a local position such that the world position is (0,0,0), without impacting the orientation.
- Set Neutral Orientation: This action will set a local orientation such that the world orientation is neutral, without impacting the position.
- Calibrate Parent: This action is different from the three others. It will modify the node’s parent’s transformation so that the current node’s transformation is neutral. See below for more information.
The Reset Transform action resets the local transformation of the selected node.
The “Calibrate Parent” action is particularly useful when the parent represents the origin of a tracker (tracker’s base), for example the Razer Hydra’s base, a Kinect or a camera.
If you want the nodes to be correctly positioned in world space, you have to place the camera node correctly. This can be a tedious task to do manually: you have to measure the distance from the center of world space as well as the correct orientation.
If you set a tracked object at the center of the physical world, with a neutral orientation in the real world, you will notice in MiddleVR that the corresponding node is probably not at a neutral place/orientation.
You can simply select this node in MiddleVR and choose “Calibrate Parent”. This will automatically set the transformation of the parent so the selected node gets a neutral transformation. You will also notice that the parent now has a position/orientation corresponding to the physical position/orientation of the tracker’s base in world space.
5.3.1.4 5.3.1.4: Filter Calibration
Here is a simple two-step procedure proposed by the authors of the “One Euro Filter” to set the two filter parameters to minimize jitter and lag when tracking human motion:
- First ‘Reactivity’ is set to 0 and ‘CutOffFrequency’ to a reasonable middle-ground value such as 1 Hz. Then the body part is held steady or moved at a very low speed while ‘CutOffFrequency’ is adjusted to remove jitter and preserve an acceptable lag during these slow movements.
- Next, the body part is moved quickly in different directions while ‘Reactivity’ is increased with a focus on minimizing lag.
Note that parameters ‘CutOffFrequency’ and ‘Reactivity’ have clear conceptual relationships: if high speed lag is a problem, increase ‘Reactivity’; if slow speed jitter is a problem, decrease ‘CutOffFrequency’.
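The calibration procedure above tunes the two parameters of the One Euro Filter. Here is a minimal, illustrative Python version of the filter (after Casiez et al.); we assume, based on the descriptions above, that MiddleVR's 'Reactivity' plays the role of the paper's beta parameter, which scales the cutoff with speed:

```python
import math

class OneEuroFilter:
    """Minimal One Euro Filter: a low-pass filter whose cutoff
    frequency grows with the estimated speed of the signal."""

    def __init__(self, cutoff=1.0, reactivity=0.0, d_cutoff=1.0):
        self.cutoff, self.reactivity, self.d_cutoff = cutoff, reactivity, d_cutoff
        self.t_prev = self.x_prev = self.dx_prev = None

    @staticmethod
    def _alpha(cutoff, dt):
        # Smoothing factor of an exponential filter at this cutoff.
        tau = 1.0 / (2.0 * math.pi * cutoff)
        return 1.0 / (1.0 + tau / dt)

    def __call__(self, t, x):
        if self.t_prev is None:
            self.t_prev, self.x_prev, self.dx_prev = t, x, 0.0
            return x
        dt = t - self.t_prev
        # Low-pass-filtered derivative of the signal.
        dx = (x - self.x_prev) / dt
        a_d = self._alpha(self.d_cutoff, dt)
        dx_hat = a_d * dx + (1.0 - a_d) * self.dx_prev
        # Higher speed (or higher reactivity) -> higher cutoff -> less lag.
        cutoff = self.cutoff + self.reactivity * abs(dx_hat)
        a = self._alpha(cutoff, dt)
        x_hat = a * x + (1.0 - a) * self.x_prev
        self.t_prev, self.x_prev, self.dx_prev = t, x_hat, dx_hat
        return x_hat
```

With reactivity at 0 the filter behaves as a plain low-pass at 'cutoff' Hz, which is exactly the starting point of step one of the calibration procedure.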
5.3.2 5.3.2: Camera
A camera is a 3D node, so it inherits all the 3D node properties: tracker, local position, local orientation…
Regular cameras in 3D engines are said to be symmetrical: to compute a correct perspective, they assume that the viewer is always exactly in front of the center of the screen:
The particularity of a VR camera is that it can be assigned a screen.
A screen is exactly like a window to the virtual world. Once the camera is associated with a screen, its view frustum (pyramid of vision) is always totally constrained by this screen. This means that the view is dependent on the position of the camera but also on the position of the screen. At this point, if the user is not exactly facing the center of the screen, the view frustum will not be symmetrical: the camera is said to be asymmetrical.
In the following pictures, the screen is represented as a gray rectangle. Notice how the camera frustum always matches the screen:
If the camera or the screen moves, the camera frustum will always match the screen.
This is exactly as if you’re looking through a window: if the camera is close to the screen, it’s like when you stand close to a window: you see a lot of the outside world.
When you go away from the window, your field of view gets narrower. When you move left or right, you see a different part of the world.
Note: When a camera is assigned a screen, its orientation is always constrained to face the normal of the screen.
A screen can be assigned to multiple cameras.
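The geometry behind asymmetrical frusta can be sketched as follows. This is an illustration of the underlying math, not MiddleVR code; it assumes the screen is perpendicular to the camera's view axis, which matches the note above that a camera assigned a screen is constrained to face the screen's normal:

```python
def offaxis_frustum(eye, screen_center, width, height, near):
    """Near-plane frustum extents (left, right, bottom, top) for a
    camera constrained by a screen. Both eye and screen_center are in
    the same frame: x right, y up, z along the view direction. The
    frustum is symmetric only when the eye faces the screen center."""
    ex, ey, ez = eye
    cx, cy, cz = screen_center
    dist = cz - ez                 # eye-to-screen-plane distance
    scale = near / dist            # project the screen edges onto the near plane
    left = (cx - width / 2.0 - ex) * scale
    right = (cx + width / 2.0 - ex) * scale
    bottom = (cy - height / 2.0 - ey) * scale
    top = (cy + height / 2.0 - ey) * scale
    return left, right, bottom, top

# Eye centered in front of a 4 m x 3 m screen, 2 m away: symmetric frustum.
centered = offaxis_frustum((0, 0, 0), (0, 0, 2), 4.0, 3.0, 0.1)
# Eye moved 1 m to the right: the frustum becomes asymmetric.
shifted = offaxis_frustum((1, 0, 0), (0, 0, 2), 4.0, 3.0, 0.1)
```

Note how in the shifted case the left and right extents differ: the "window" stays put while the viewer moves, which is exactly the window metaphor described above.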
5.3.2.1 5.3.2.1: Properties
| Option | Description |
|---|---|
| VerticalFOV | Vertical field of view, in degrees. |
| Near | The near clipping plane distance. |
| Far | The far clipping plane distance. |
| Screen | Assign a screen if you want asymmetrical cameras. |
| UseViewportAspectRatio | Does this camera use its own AspectRatio or should it use the aspect ratio of its viewport? Disabled if the node has a Screen. |
| AspectRatio | The aspect ratio of the camera. Disabled if the node has a Screen or if UseViewportAspectRatio is true. |
5.3.3 5.3.3: Stereoscopic camera
To get a stereoscopic (S3D) rendering, you need to have two views of the virtual world, like your two eyes do for the real world.
A stereoscopic camera automatically creates two cameras, a left and a right camera, as children. These cameras act as if there were a screen placed at the distance specified by the “Screen Distance” property. The size of this screen is determined by the screen distance, the field of view of the camera, and its aspect ratio.
Stereoscopic cameras inherit all camera’s properties, and they add the Screen Distance, and the Inter-Eye Distance.
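The size of that implicit screen follows directly from the distance, vertical field of view and aspect ratio. A sketch of the arithmetic (illustrative only):

```python
import math

def implicit_screen_size(screen_distance, vertical_fov_deg, aspect_ratio):
    """Size of the implicit zero-parallax screen of a stereoscopic
    camera: height from the vertical FOV and distance, width from the
    aspect ratio."""
    half_fov = math.radians(vertical_fov_deg) / 2.0
    height = 2.0 * screen_distance * math.tan(half_fov)
    return height * aspect_ratio, height  # (width, height)

# A 90-degree vertical FOV at 1 m gives a 2 m tall implicit screen.
width, height = implicit_screen_size(1.0, 90.0, 16.0 / 9.0)
```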
5.3.3.1 5.3.3.1: Properties
| Property | Description |
|---|---|
| ScreenDistance | The zero-parallax distance. Disabled if the camera has a Screen. |
| InterEyeDistance | The distance between the left and right cameras. |
The two cameras are represented as follows:
5.3.4 5.3.4: Screen
The two main attributes of a screen are its position in space and its size.
Screens are assigned to cameras.
A screen is a 3D node, so it inherits all its properties. It simply adds a Width and a Height.
This also means that a screen can be assigned a tracker!
A screen can be assigned to multiple cameras.
5.3.4.1 5.3.4.1: Properties
| Property | Description |
|---|---|
| Width | Width of the screen. |
| Height | Height of the screen. |
5.4 5.4: Configuring viewports
5.4.1 5.4.1: Viewport
A viewport is a rectangular area on your desktop in which your camera will display its rendering. A viewport must be assigned a camera.
If you want to display a stereoscopic picture on a particular viewport, you must assign a Stereoscopic Camera.
This will enable the stereoscopic options, such as StereoMode or StereoInvertEyes.
There are two stereo modes:
- OpenGL Quad-Buffer: This is the mode for active stereoscopy.
- Side-by-side: This is the mode for passive stereoscopy.
You can also manually create a passive stereo viewport by creating two distinct viewports and assigning them the left and right cameras.
5.4.1.1 5.4.1.1: Properties
| Property | Description |
|---|---|
| Name | Name of the viewport |
| Left | The left pixel coordinate of the viewport. Can be negative, to display on a secondary monitor for example. |
| Top | The top pixel coordinate of the viewport. Can be negative, to display on a secondary monitor for example. |
| Width | Width of the viewport in pixels. |
| Height | Height of the viewport in pixels. |
| Camera | The camera assigned to this viewport. Required. |
| StereoMode | The stereoscopic mode of the viewport. Will be enabled if the Camera is a Stereoscopic camera. |
| CompressSideBySide | In case your display scales the side-by-side image horizontally, use this option to compress it. |
| StereoInvertEyes | Reverse the left-right eye rendering. |
| OculusRiftWarping | Used by the Oculus Rift configurations to provide proper warping. Make sure to use a predefined configuration: the correct warping is also dependent on the proper viewport size, aspect ratio, cameras field of view etc. |
5.4.2 5.4.2: Window
MiddleVR will create a single window that contains all your viewports. You can control the behavior with the following properties:
| Property | Description |
|---|---|
| Fullscreen | Will the window go into fullscreen mode? |
| AlwaysOnTop | Will the window remain above all other windows? |
| WindowBorders | If the window is not fullscreen, keep the borders if set to true, remove them if set to false. |
| ShowMouseCursor | Hide the mouse cursor if set to false. |
| VSync | Will the window wait for vertical synchronization? This will be forced to true when an active stereoscopy viewport is detected. |
| GraphicsRenderer | Force use of a graphics renderer. |
| Anti-Aliasing | Sets the anti-aliasing level. Currently only available in Forward Rendering. |
| ChangeWorldScale (Advanced) | Enables the World Scale option. When turned off, the WorldScale value below is ignored. |
| WorldScale (Advanced) | Scales the positions of the VR nodes so that the virtual world appears bigger or smaller. The result is that the physical user is indirectly scaled compared to the virtual scene. For example, setting the value to 2 will make the physical user twice as tall; said differently, the size of the virtual world will be divided by 2. |
If your viewports span across multiple displays, you shouldn’t use the Fullscreen mode, since it’s only able to use your primary display. If you want to have viewports on several displays that look like fullscreen, disable Fullscreen and disable Borders.
5.4.3 5.4.3: Homography
In some cases, a homography transformation is needed on the displayed image to straighten the final visual result. This technique is particularly helpful to calibrate projector displays.
This feature is handled by the “Advanced” section of the viewport parameters:
| Property | Description |
|---|---|
| UseHomography | Will this viewport use homography transformation? |
| HomographyTopLeftCornerOffsetX | Horizontal position offset of the top left corner destination point, in pixels. Zero keeps the corner at its original position. |
| HomographyTopLeftCornerOffsetY | Vertical position offset of the top left corner destination point, in pixels. Zero keeps the corner at its original position. |
| HomographyTopRightCornerOffsetX | Horizontal position offset of the top right corner destination point, in pixels. Zero keeps the corner at its original position. |
| HomographyTopRightCornerOffsetY | Vertical position offset of the top right corner destination point, in pixels. Zero keeps the corner at its original position. |
| HomographyBottomRightCornerOffsetX | Horizontal position offset of the bottom right corner destination point, in pixels. Zero keeps the corner at its original position. |
| HomographyBottomRightCornerOffsetY | Vertical position offset of the bottom right corner destination point, in pixels. Zero keeps the corner at its original position. |
| HomographyBottomLeftCornerOffsetX | Horizontal position offset of the bottom left corner destination point, in pixels. Zero keeps the corner at its original position. |
| HomographyBottomLeftCornerOffsetY | Vertical position offset of the bottom left corner destination point, in pixels. Zero keeps the corner at its original position. |
By default, the image corner coordinates are set to their standard positions. Changing these coordinates will stretch the viewport to fit the quadrilateral described by the four points. Here is an example showing the same view without homography and with a homography using a (100, 100) pixel offset for the top left corner and a (-100, 100) pixel offset for the top right corner.
Without homography:
With homography:
You can correct a basic keystone, but you can go even further in the correction:
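Under the hood, mapping four corner offsets to a warp is a classic four-point homography. As an illustration of the math (this is not MiddleVR code), here is a pure-Python direct linear transform, applied to a hypothetical 1920x1080 viewport with the corner offsets from the example above:

```python
def solve(A, b):
    """Gaussian elimination with partial pivoting (pure Python)."""
    n = len(A)
    M = [row[:] + [b[i]] for i, row in enumerate(A)]
    for col in range(n):
        pivot = max(range(col, n), key=lambda r: abs(M[r][col]))
        M[col], M[pivot] = M[pivot], M[col]
        for r in range(col + 1, n):
            f = M[r][col] / M[col][col]
            for c in range(col, n + 1):
                M[r][c] -= f * M[col][c]
    x = [0.0] * n
    for r in range(n - 1, -1, -1):
        x[r] = (M[r][n] - sum(M[r][c] * x[c] for c in range(r + 1, n))) / M[r][r]
    return x

def homography_from_corners(src, dst):
    """3x3 homography mapping four src corners to four dst corners
    (direct linear transform with the bottom-right entry fixed to 1)."""
    A, b = [], []
    for (x, y), (u, v) in zip(src, dst):
        A.append([x, y, 1, 0, 0, 0, -u * x, -u * y]); b.append(u)
        A.append([0, 0, 0, x, y, 1, -v * x, -v * y]); b.append(v)
    h = solve(A, b) + [1.0]
    return [h[0:3], h[3:6], h[6:9]]

def apply_homography(H, x, y):
    w = H[2][0] * x + H[2][1] * y + H[2][2]
    return ((H[0][0] * x + H[0][1] * y + H[0][2]) / w,
            (H[1][0] * x + H[1][1] * y + H[1][2]) / w)

# Hypothetical 1920x1080 viewport; top-left offset (100, 100), top-right (-100, 100).
src = [(0, 0), (1920, 0), (1920, 1080), (0, 1080)]
dst = [(100, 100), (1820, 100), (1920, 1080), (0, 1080)]
H = homography_from_corners(src, dst)
```

Every pixel of the rendered image is then remapped through H, which is why four corner offsets are enough to describe the whole warp.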
We offer a visual configuration tool for the homography. You can see it in action here: http://www.youtube.com/watch?v=_eZ5LKQqtjU.
You can download it on the Download page of our website.
5.4.4 5.4.4: Debug information
MiddleVR has a few options to further investigate issues:
| Option | Description |
|---|---|
| LogLevel | The level of logs that will be printed in log files when the application runs. |
| LogInSimulationFolder | Write the logs in the .exe folder (in a new folder: MiddleVRLogs/). Note: when used in a cluster system where the cluster clients use a shared network folder, all clients will write their logs over the network. Depending on the log level, this can slow the application down significantly. |
| EnableCrashHandler | For more serious crashes, this option will enable even more information to be logged. Warning: this option can make Unity more sensitive to exceptions: if an exception is raised in Unity, the crash handler will catch it and quit immediately. |
The log level can be from 0 to 7:
- LogLevel 0: will only display errors,
- LogLevel 1: will only display errors and warnings,
- LogLevel 2: will only display errors, warnings and information,
- LogLevel 3 or more: debug information.
5.5 5.5: Running simulations
The “Simulations” window allows you to manage the simulations that you want to run, and the different configurations that can be used with them and the quick links.
The goal is to simplify the management of your applications and VR systems. You can run one application with different configurations, run different applications on the same VR system, or save time by associating a simulation with a specific configuration.
The current command line that will be executed is displayed in “Current command line”.
Pressing “Run” or using the “Ctrl+R” shortcut will run the current command line.
If the configuration is a cluster configuration, it will send the command to all the cluster daemons, including the master.
Note: This means that you need to have the cluster daemon running on the server as well.
The Simulations tab is split into two views:
- The Quick Links view,
- The Simulations view.
5.5.1 5.5.1: The Quick Links view
The main list manages the quick links. A quick link is the composition of a simulation and a configuration plus a custom argument. By clicking the “+” or “-” you can add or remove quick links from the list. By clicking the “+” button the “Add Quick Link” window will appear.
You can sort the list as you wish by dragging and dropping the quick link where you want in the list.
In this window you can create quick links. To create a quick link in this view, you only need to:
- Select a simulation,
- Select a configuration,
- [optional] Enter a custom argument for the command line,
- Press “Create”.
Right-clicking on a quick link opens a menu that lets you remove the item from the list or rename it.
Note: You can also edit the name of the selected quick link by pressing “F2”.
Double clicking on a quick link will load its configuration and execute the linked simulation using the custom arguments it contains.
5.5.2 5.5.2: The Simulations view
The left list manages the simulations, the right list manages the configurations. In the simulations list, next to each simulation, you can see its path on your computer. Each configuration belongs to a category (HMD, Cube, Cluster, etc.). You can add or remove simulations and configurations by pressing the “+” or “-” button. You can remove custom configuration categories by pressing “-”. Removing a simulation or a configuration only removes it from the list, not from the computer. Removing a configuration category removes all its configurations from the list. You can sort the simulations list as you wish by dragging and dropping a simulation where you want in the list.
Pressing the “New category” button will create a new custom category.
Double-clicking on an application in the simulations list will open a file explorer in the folder containing it.
Right-clicking on a simulation opens a menu that lets you remove the item from the list or show it in the explorer.
Double-clicking on a configuration in the configurations list will load it.
Right-clicking on a configuration category opens a menu that lets you remove or rename it. Right-clicking on a configuration opens a menu that lets you remove the item from the list, rename it, duplicate it or show it in the explorer. Renaming a configuration will also rename its file. Duplicating a configuration will duplicate its file on the disk.
- A gray configuration is a configuration whose file does not exist on the computer.
- A bold configuration is a loaded configuration.
Note: You can also edit the name of the selected configuration or the selected category by pressing “F2”.
Note: Configurations and categories marked with a “*” are read-only, which means that you cannot rename them. You cannot remove a read-only category.
You can add a custom argument to the command line in “Custom arguments”.
By pressing the “Add to quick links” button, a new quick link will be created in the “Quick Links” view. Its name is composed from the simulation and the configuration (e.g. “Shadow_Forward - HMD-Oculus-Rift-DK2”); you can rename it as you like afterwards in the “Quick Links” view. The quick link will be tied to the selected simulation and the selected configuration, and may contain a custom argument, taken from “Custom arguments”, to be appended to the command line.
6 6: MiddleVR for Unity
6.1 6.1: Introduction
6.1.1 6.1.1: Integration
The core of MiddleVR has no knowledge of a particular 3D engine.
This means that for each 3D engine, a small interface has to be created. This interface will make the bridge between MiddleVR on one side, and the 3D engine on the other side. It will configure 3D nodes, viewports, and give access to devices.
Typically, this bridge is based both on the MiddleVR API and the host 3D engine API. It will gather information from MiddleVR and use that to configure the 3D engine.
For example, the interface will load a particular configuration file, ask MiddleVR for the number of nodes, their properties, and create those nodes as 3D nodes from the 3D engine. In Unity, this will translate MiddleVR 3D Nodes as GameObjects.
The interface will also read information about viewports, cameras, and everything needed to create a VR experience.
Each frame, MiddleVR will then update all the nodes and cameras it has created inside Unity with the values from the devices and the computed camera projection matrices.
Note: By default MiddleVR will disable all your cameras and only work with the cameras you’ve defined in MiddleVR.
Note: Before exporting your application to a standalone player, make sure to read the “Exporting to a standalone player” section below.
6.1.2 6.1.2: Unity coordinate system
As said previously, MiddleVR uses a right-handed coordinate system, where X is pointing to the right, Y is pointing away from the user towards the screen and Z pointing up:
Unity’s coordinate system is left-handed, with X pointing to the right, Y pointing up, and Z pointing away from the user, towards the screen:
When updating Unity’s nodes and cameras, MiddleVR will automatically convert the 3D information from one coordinate system to the other.
But when you read the information of a MiddleVR node or of a 3D tracker directly from MiddleVR from a Unity script, it will be in MiddleVR’s coordinate system. You then have to convert this 3D information into Unity’s coordinate system. MiddleVR provides methods to do exactly that. See section “Input devices”.
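For positions, the conversion between the two frames is a swap of the Y and Z components, as the axis definitions above show. A sketch of the idea (illustrative only; orientations additionally require a handedness conversion, and the MiddleVR helper methods mentioned above handle both):

```python
def middlevr_to_unity(p):
    """Convert a position from MiddleVR's right-handed, Z-up frame
    (X right, Y forward, Z up) to Unity's left-handed, Y-up frame
    (X right, Y up, Z forward): swap the Y and Z components."""
    x, y, z = p
    return (x, z, y)

def unity_to_middlevr(p):
    x, y, z = p
    return (x, z, y)  # the Y/Z swap is its own inverse

# A point 2 m forward and 3 m up in MiddleVR is 3 m up and 2 m forward in Unity.
unity_point = middlevr_to_unity((1.0, 2.0, 3.0))
```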
6.2 6.2: Adding MiddleVR to your Unity application
6.2.1 6.2.1: Import the MiddleVR package
(If you are upgrading an old Unity project, make sure to read this article: Upgrade MiddleVR Unity Package.)
MiddleVR is split in two parts:
- The generic MiddleVR core, which can be used with different 3D engines. This part is typically installed on your system in C:\Program Files (x86)\MiddleVR\bin. It contains all the DLLs of MiddleVR and the drivers for the devices.
- The 3D engine specific part, which links the 3D engine to the generic core. This part has to be added to your Unity project. It contains all the scripts and plugins that link your project to the generic part of MiddleVR to drive the cameras and 3D nodes.
To import the MiddleVR package, open the Asset menu, then Import package and Custom package:
You will find the MiddleVR UnityPackage in the data folder of your MiddleVR installation, typically C:\Program Files (x86)\MiddleVR\data:
Open the MiddleVR.unitypackage file.
This will open a new Unity window:
Simply click “Import”.
The package will then be imported and is now available in your project.
6.2.2 6.2.2: Add the VR manager to your project
Importing the package is not sufficient. You also need to add to your project an important component that will manage all the VR aspects: the VR manager.
Open the MiddleVR folder in the Project tab.
Drag and drop the VRManager prefab to the Hierarchy tab of your project:
6.3 6.3: The VR manager
6.3.1 6.3.1: Introduction
The VRManager is simply a Unity GameObject with several scripts attached to it:
Those scripts handle all the management of 3D nodes, cameras, viewports, devices and clustering.
The VRManager will initialize MiddleVR with the specified options, especially the configuration file. It will create the 3D hierarchy of nodes that you’ve specified in the configuration tool and the cameras with their respective viewports.
It will then automatically update MiddleVR, and then reflect all the updates of 3D nodes and cameras to Unity.
All the devices data will also be updated so you have the latest information about your input devices.
Note: The VR Manager will not have any effect on your application before you press play. More precisely, no object of the VR hierarchy (3D nodes, cameras, screens) will be created unless you run your application.
MiddleVR automatically disables any existing camera in your scene for performance reasons: the more cameras that render a view, the slower your application may be. You can change this behavior with the “Disable Existing Cameras” option.
6.3.2 6.3.2: VR Manager properties
| Property | Description |
|---|---|
| Config File | Specifies the path to the configuration file that should be used. The path can be absolute or relative. In the editor, the path will be relative to the project folder. In the player, the path will be relative to the .exe file. |
| VR System Center Node | Specify the GameObject that will be used as VRSystemCenterNode for the VR hierarchy. |
| Template Camera | Camera to duplicate instead of creating new cameras for each VR camera. If you set the Template Camera option to an existing camera in your scene, this camera will be duplicated for each VR camera instead of creating a new one. This is useful if you want all VR cameras to share scripts (such as image effects: SSAO, Blur…), parameters (clear color), or any other component such as Flare Layers, GUILayers etc. |
| Show Wand | Show the Wand geometry. Pressing Shift-W will toggle wand on/off. |
| Use VR Menu | Activate/disable availability of VR menus. |
| Navigation | Choose the default navigation method you want to use. Pressing Shift-N will switch between the three navigation modes. |
| Manipulation | Choose the manipulation method you want to use. |
| Virtual Hand Mapping | Choose the way the VRWand will move. “Direct” will use the direct tracking data. “Gogo” will upscale the hand movements as in the Gogo interaction technique. |
| Show Screen Proximity Warnings | Show proximity visual warning when the head or another watched node is too close to a screen. |
| Fly | Allow a free navigation (Fly enabled) or keep the current height (Fly disabled). |
| Navigation Collisions | Enable/disable navigation collisions. |
| Manipulation Return Objects | Enable/disable automatic return of a manipulated object to its original position when it is released. |
| Show FPS | Display the number of frames per second. Pressing Shift-D (like “D”ebug) will toggle display on/off. |
| Disable Existing Cameras | Will parse the scene to find existing cameras that don’t belong to the MiddleVR hierarchy and disable them. This is mainly done for performance reasons. |
| Grab Existing Nodes | Will parse the scene to find existing nodes that match a node name in the MiddleVR hierarchy. The existing node will then be inserted as part of the MiddleVR hierarchy. |
| Debug Nodes | Will display the nodes of the MiddleVR hierarchy as transparent blue cubes. This allows for easy debugging of their position and orientation. |
| Debug Screens | Will display the screens of the MiddleVR hierarchy as transparent blue rectangles. This allows for easy debugging of their position, orientation and size. |
| Logs To Unity Console | Redirects MiddleVR logs to the Unity console. Works only in Editor mode. This setting is useful when you are working with a high log level but want to temporarily turn off log redirection to the Unity console in order to get better performance in the Unity Editor. |
| Quit On ESC | When in a standalone player, will exit the application if the Escape key is pressed. |
| Don’t Change Window Geometry | MiddleVR will not try to change the player’s window size, position or resolution. |
| Simple Cluster | Enable the Simple Cluster option. See section “Clustering” for more information. |
| Simple Cluster Particles | Enable/disable synchronization on a cluster for particles. |
| Force Quality | Force a specified Player Quality. See below for more information. |
| Force Quality Index | Index of the Player Quality to force. See below for more information. |
6.3.3 6.3.3: Force Player Quality
Usually when you start a Unity application/player, a startup window pops up asking which resolution and player quality to use. MiddleVR deactivates this window because it automatically handles the resolution. When exporting a Unity application, you can select the default quality:
In this example, the default player quality will be “Good”, because the second column (Windows build) is checked green.
There is currently a bug in Unity where the quality selected in the Unity editor (here in blue) sets the quality of the next Unity player that is run. In cluster mode, this bug can also appear and select a random quality.
The “Force Quality” option of the VR Manager forces the quality specified by “Force Quality Index”. The index starts at 0 for the first quality (here Fastest).
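For reference, forcing a quality level from code comes down to a call to Unity’s QualitySettings API. The sketch below is illustrative only and is not the VR Manager’s actual implementation; the class name is hypothetical:

```csharp
using UnityEngine;

// Illustrative sketch only, not MiddleVR's actual implementation:
// forcing a player quality by index, as the "Force Quality" option does.
// Index 0 is the first quality in the list (here "Fastest").
public class ForceQualityExample : MonoBehaviour
{
    public int forceQualityIndex = 0;

    void Start()
    {
        // The second argument also applies expensive changes (e.g. anti-aliasing).
        QualitySettings.SetQualityLevel(forceQualityIndex, true);
    }
}
```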
6.4 6.4: Running your application in the Unity editor
As soon as you have configured the VR Manager, you are ready to run your application.
If the “Disable Existing Cameras” option is set, the VR Manager will start by disabling all the existing cameras of your application.
It will then create the whole VR hierarchy by creating a Game Object for each 3D node, including Screens. A regular Unity camera will be created for each MiddleVR camera found in the VR hierarchy.
Note: It is important to understand that the VR hierarchy consists of objects created only when you press play. As soon as you press stop, those objects are deleted. The VR hierarchy is recreated each time you press play after having stopped the application. Pausing the application will not destroy the VR hierarchy.
If you change the configuration file, the VR hierarchy will be recreated as specified in this updated configuration file. This will ensure that your application is always up to date with your VR system.
Note: As the viewport geometry cannot be programmatically changed while in the Unity Editor, the geometry and aspect ratio of the viewports created by MiddleVR will look different than what they will be in the player.
Note: Active stereoscopy will not be displayed while in the Unity editor. Only one eye will be displayed, in monoscopy.
6.5 6.5: Exporting to a standalone player
You can choose to export either in 32-bit (x86) or 64-bit (x86_64):
Note: starting with MiddleVR 1.2, you shouldn’t have to modify the following options manually; they should be modified automatically when you drag and drop the VRManager into your project. It might still be interesting to understand what is happening behind the scenes.
Before exporting your application as a standalone player, you also have to make sure Unity is correctly configured to not get in the way of MiddleVR.
Go to the player settings (Edit > Project Settings > Player) and make sure the following parameters match the screenshot:
The “Default is Full Screen” option will make sure that Unity does not override MiddleVR’s window configuration.
The “Display Resolution Dialog” option will also make sure that Unity does not override MiddleVR’s window configuration. This is especially important in cluster mode: you probably don’t want to close the resolution dialog on each cluster node each time you run your application.
The “Player Log” might slow down the cluster if a slave is trying to write its log through the network.
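These settings can also be applied from an editor script. The sketch below uses the Unity 4-era UnityEditor.PlayerSettings API (property names may differ in other Unity versions); the class and menu names are illustrative:

```csharp
using UnityEditor;

// Editor-only sketch (Unity 4-era API; property names may differ in
// your Unity version): applies the player settings described above.
public static class MiddleVRPlayerSettingsHelper
{
    [MenuItem("MiddleVR/Apply Recommended Player Settings")]
    public static void Apply()
    {
        // Let MiddleVR control the window configuration.
        PlayerSettings.defaultIsFullScreen = false;

        // Never show the resolution dialog (important in cluster mode).
        PlayerSettings.displayResolutionDialog = ResolutionDialogSetting.Disabled;

        // Avoid slowing down cluster slaves writing logs over the network.
        PlayerSettings.usePlayerLog = false;
    }
}
```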
6.5.1 6.5.1: Additional parameters for active stereo (OpenGL Quad-Buffer)
If you’re running in active stereo (OpenGL Quad-Buffer), you should also have the vertical synchronization (VSync) de-activated in the menu Edit > Project Settings > Quality.
When importing the MiddleVR.unitypackage, VSync is automatically disabled for all Qualities (Fast, Simple, Good, Beautiful, Fantastic…).
MiddleVR will internally handle the VSync.
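Disabling VSync for every quality level can be sketched as follows. This mirrors what importing the MiddleVR package does automatically; the class name is illustrative:

```csharp
using UnityEngine;

// Sketch: disable VSync for all quality levels, mirroring what importing
// MiddleVR.unitypackage does automatically. MiddleVR handles VSync itself.
public static class DisableVSyncForAllQualities
{
    public static void Run()
    {
        int previous = QualitySettings.GetQualityLevel();

        // QualitySettings.vSyncCount applies to the current level only,
        // so walk through every level and disable VSync on each.
        for (int i = 0; i < QualitySettings.names.Length; ++i)
        {
            QualitySettings.SetQualityLevel(i, false);
            QualitySettings.vSyncCount = 0; // 0 = no sync with vertical blank
        }

        // Restore the quality level that was selected before.
        QualitySettings.SetQualityLevel(previous, false);
    }
}
```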
Note: Make sure that the Quality line selected (above in blue) is the one you want to use with MiddleVR. It seems Unity will use this selection as the default quality setting for the player.
6.6 6.6: Running your application as a standalone application
The preferred way to run your VR application is to use the Simulations window of the configuration editor:
You can also simply run it by double-clicking the generated exe file.
Note: With applications generated by Unity 4, MiddleVR can’t automatically remove the border of the window anymore. You need to add the -popupwindow argument on the command line. This is automatically done when launching your application from the configuration editor.
Note: If you want to copy your application on another computer, MiddleVR will first have to be installed on the system because MiddleVR is not embedded in the data folder of the application.
Note: The first time you run your application, if you’re running in OpenGL quad-buffer (active stereo) mode, MiddleVR will copy an important file next to the application: the d3d9.dll which allows MiddleVR to get important information from your application. After the file is copied, MiddleVR will exit the application. Simply run it again and the application will run in stereo.
You can override the configuration that was specified in Unity by adding the command line option: --config c:\my_folder\my_config.vrx.
6.7 6.7: How to attach your nodes in the VR hierarchy?
Once you have configured the hand of your user to move with a tracker, you might want a 3D object that you’ve created to move with the user’s hand.
For example you might have added a tennis racket to your project and want it to be attached to the user’s hand.
There are two options for that:
- the “Grab Existing Nodes” property of the VR manager,
- and the “VR attach to Node” script.
6.7.1 6.7.1: Grab existing nodes
The easiest option is to give your object the same name as a node in the VR hierarchy and check the “Grab Existing Nodes” option of the VR manager.
If this option is activated, when the VR manager initializes, it will parse the scene for nodes that have the same name as the nodes in the VR hierarchy, and insert those instead of creating an empty GameObject.
For example if in the VR hierarchy there is a node named HandNode, and you named the parent of your 3D racket model also HandNode, the VR manager will simply “grab” this node and insert it in the VR hierarchy. The VR manager will then use this node as a parent for the rest of the sub-hierarchy.
The children of your 3D model will all be moved with its parent.
6.7.2 6.7.2: Attach to node
The second option is simply to attach the “VR Attach to Node” script to the node that you want to insert in the VR hierarchy.
You can find this script in the “Scripts/Interactions” folder of the MiddleVR package.
| Parameter | Description |
|---|---|
| VRParent Node | Name of the parent vrNode3D. For example “HandNode”, “HeadNode”. |
| Keep Local Position/Rotation/Scale | In Unity, by default, when parenting an object to another, the local transform of the object is modified so that it keeps its world transform. This means the object does not move when its parent changes. If you want to use the local transform to specify an offset from the VRParent Node to the object, enable “Keep Local Position/Rotation/Scale”. |
6.8 6.8: Using a template camera
A template camera can be configured as an option of the VR Manager.
If you set the Template Camera option to be an existing camera in your scene, this camera will be duplicated for each VR camera instead of creating a new one. This is useful if you want to have on all VR cameras scripts (like image effects [SSAO, Blur…]), parameters (clear color), or any other component like Flare Layers, GUILayer etc.
Note: Some post-processing effects might not work when using active stereoscopy / clustering. See “VR-compliant post-processing effects in Unity”.
6.9 6.9: Wand interactions
When you have correctly configured your Wand in MiddleVR, you can use standard interactions like:
- Navigation: navigate the VR hierarchy in the virtual world using the Wand orientation and joystick axis,
- VR First Person Controller: Control Unity First Person Controller with Wand axis/buttons.
- Action: trigger actions when a 3D node is selected by the wand and a button is pressed,
- Grabbing: grab 3D nodes to move them in the scene.
Here’s the 3D representation of the wand:
The first thing you have to do is enable the Show Wand option of the VRManager:
The wand is visible and active by default.
The VRWand node has several scripts attached to it that handle and configure interactions.

You can safely deactivate the scripts that you don’t need.
As you can see in the parameter of the script VRAttach To Node, by default the Wand is attached to the HandNode. You can change the name of the node to HeadNode for example if you wanted to navigate in the direction where you look.
Note that although MiddleVR supports multiple wands, the 3D representation and the interactions only use the first wand of the configuration. The other wands’ signals are accessible through the device manager, and you can use them to write your own interaction scripts.
6.9.1 6.9.1: Navigation
The navigation is handled by one of the Navigation Interaction scripts attached to the VRWand. The VRManager parameter “Navigation Method” allows you to choose the navigation mode for your simulation.
Some of these navigation methods have a “Fly” property to lock/unlock vertical translations. This parameter can be directly checked/unchecked on the interaction script before building, but you can also press Shift-F to toggle the fly mode when the simulation is running.
When the simulation is running, it is also possible to switch to another navigation method by pressing Shift-N.
Some navigation techniques require a navigation button. By default, this is wand button 1. In the MiddleVR configuration, map your chosen device button to Wand button 1 to make it the navigation button.
| Navigation | Description |
|---|---|
| Joystick | Use the Wand’s joystick to navigate in the direction pointed by the Wand. |
| Elastic | Stretch a virtual elastic in the direction you want to move. The more you stretch, the faster you go. |
| Grab World | Grab the air with the Wand and drag the world towards you. |
If you want to create your own navigation, simply select “None” in Navigation Method of the VRManager.
6.9.1.1 6.9.1.1: Joystick Navigation Method
You can navigate using the axis of the Wand joystick. The navigation will move the selected node (Navigation Node) in the direction that the Reference Node is pointing.
By default, the navigation is moving the VRSystemCenterNode in the direction pointed by the HandNode. It will also rotate left or right if you move the horizontal axis of the Wand.
If you want to sidestep, point your hand to the right or to the left and press forward.
By default you will be able to fly in the scene. If you want to stick to the current height, uncheck the Fly option.
| Option | Description |
|---|---|
| Direction Reference Node | Move in the direction pointed by this node. By default it will move in the direction of “HandNode”. |
| Turn Around Node | The rotation will occur around the given node. By default it will rotate around the “HeadNode”. |
| Translation Speed | Translation speed, in meters per second. |
| Rotation Speed | Rotation speed, in degrees per second. |
| Fly | Allow a free navigation (Fly) or keep the current height (Fly disabled). |
6.9.1.2 6.9.1.2: Elastic Navigation Method
You can navigate by pressing the navigation button of the Wand and stretching the elastic gizmo that appears, which binds the Wand to its stretch start position. The navigation will move the VRSystemCenterNode in the direction that the elastic gizmo is pointing.
By default, the navigation is moving the CenterNode in the direction pointed by the elastic gizmo. The more the elastic is stretched, the faster the navigation is. The same concept applies to rotation: the rotation difference between now and the moment you pressed the navigation button will make the VRSystemCenterNode rotate the same way.
If you want to sidestep, press the navigation button and stretch the elastic to the right or to the left.
By default you will be able to fly in the scene. If you want to stick to the current height, uncheck the Fly option.
| Option | Description |
|---|---|
| Reference Node | The node that defines the start and end positions of the elastic. By default it will use the node “HandNode”. |
| Translation Speed | Translation speed, in meters per second. |
| Rotation Speed | Rotation speed, in degrees per second. |
| Distance Threshold | The minimum elastic stretch distance to start moving. |
| Angle Threshold | The minimum rotation angle between the start and end orientations to make a rotation movement. |
| Use Translation X | Is translation along the X axis allowed? |
| Use Translation Y | Is translation along the Y axis allowed? |
| Use Translation Z | Is translation along the Z axis allowed? |
| Use Rotation Yaw | Is horizontal rotation allowed? |
| Fly | Allow a free navigation (Fly) or keep the current height (Fly disabled). |
6.9.1.3 6.9.1.3: Grab World Navigation Method
You can grab a position in the air with the navigation button and drag yourself in the virtual space. The navigation will move the VRSystemCenterNode to make the Reference Node stay static as if your hand was grabbing a handle fixed to the world.
By default, the navigation moves the CenterNode relative to the grabbing HandNode. It will also rotate left or right around the Reference Node if you rotate your hand.
If you want to sidestep, grab the world and drag yourself to the right or to the left.
| Option | Description |
|---|---|
| Reference Node | Move relatively to this node when grabbing. By default it will be “HandNode”. |
6.9.1.4 6.9.1.4: Navigation Collision
You can activate the simple VRNavigationCollision script to add collisions to the navigation interaction. This script is attached to the “Wand” node. When enabled, VRNavigationCollision searches for an activated navigation interaction and makes sure that the specified collision node does not penetrate the scene colliders, sliding along them instead.
| Option | Description |
|---|---|
| Collision Node Name | The name of the 3D node that will not be allowed to penetrate scene colliders. By default we use the “HeadNode” to do the collisions. |
| Collision Distance | The minimum distance allowed between the collision node position and the collider surface. |
6.9.1.5 6.9.1.5: Screens Proximity Warning
You can activate the parameter “Show Screen Proximity Warnings” in the VRManager to make screens visually warn you when you get too close to them. This will activate the VRInteractionScreenProximityWarning script attached to the “Wand” node. When enabled, VRInteractionScreenProximityWarning watches whether your head (by default) is too close to a screen. If it is, it makes a warning appear in place of the virtual screen. This way, you are reminded of the real screen’s presence just before you physically hit it.
| Option | Description |
|---|---|
| Nodes To Watch | A list of the 3D nodes’ names (declared in your configuration file) that will be used to check the distance from the screens. By default we only use the “HeadNode”, to prevent the user’s head from hitting a screen. |
| Warning Distance | When the distance between the watched node and the screen is lower than “Warning Distance”, the visual warning appears. |
6.9.2 6.9.2: Manipulation
The manipulation is handled by one of the Manipulation Interaction scripts attached to the VRWand. The VRManager parameter “Manipulation Method” allows you to choose the manipulation mode for your simulation.
| Option | Description |
|---|---|
| Ray | Use the Wand’s ray to manipulate the object as if it were on a spit. |
| Homer | The object moves like your hand from its position, but the translations and rotations are upscaled. |
If you want to create your own manipulation, simply select “None” in Manipulation Method of the VRManager.
6.9.2.1 6.9.2.1: Ray Manipulation Method
The object you grabbed with the Hand Node is attached to the wand ray, so you can manipulate it as if it were on a spit.
By default, the Hand Node is set to “HandNode”.
| Option | Description |
|---|---|
| Hand Node | This node is used as a reference for the object’s movements. By default it will move in the direction of “HandNode”. |
6.9.2.2 6.9.2.2: Homer Manipulation Method
The object you grabbed starts moving like the Hand Node, but with larger translations and rotations.
By default, the Hand Node is set to “HandNode”.
| Option | Description |
|---|---|
| Hand Node | This node is used as a reference for the object’s movements. By default it will move in the direction of “HandNode”. |
| Translation Scale | The factor used to upscale the Hand Node translations. |
| Rotation Scale | The factor used to upscale the Hand Node rotations. |
6.9.3 6.9.3: Virtual Hand Mapping
The mapping between the Hand tracking input and the hand node can be handled by one of the Virtual Hand Mapping scripts attached to the VRWand. The VRManager parameter “Virtual Hand Mapping” allows you to choose the way the Hand moves in your simulation.
The “Direct” mapping doesn’t change the Hand behavior.
The “Gogo” mapping will increase your hand movements when it is far from your Head to help you (for example) reach far items.
| Option | Description |
|---|---|
| Hand Node | The actual Hand Node. By default it is set to “HandNode”. |
| Head Node | The Head Node. Used to get the distance between the head and the hand. By default it is set to “HeadNode”. |
| Gogo Start Distance | In meters. When the distance between the Hand Node and the Head Node is under this value, movements are mapped at scale 1. Over this value, the further you reach, the higher the translation scale. |
| Real Distance Max | Maximum distance in meters that the user can put between his hand and his head. This value is used with Virtual Distance Max. When the user reaches his maximum physical hand movement, the Hand Node is at Virtual Distance Max meters from the Head Node. |
| Virtual Distance Max | Maximum distance in meters that the user wants to be able to reach between the Hand Node and the Head Node. This value is used with Real Distance Max. When the user reaches his maximum physical hand movement, the Hand Node is at Virtual Distance Max meters from the Head Node. |
6.9.4 6.9.4: Interaction
The Wand can be used to select any 3D object in your scene. You can then notify this object that a button of the wand has been pressed. You can also grab this object and move it to a different place.
The VRWand Interaction script handles the selection. It has several options:
| Option | Description |
|---|---|
| Ray Length | Length of the ray. Any object further than this distance will not be selected. |
| Highlight | Highlight the wand when an object is selected? |
| Highlight Color | Highlight color of the wand when an object is selected. |
| Grab Color | Highlight color of the wand when an object is grabbed. |
| Repeat Action | When this option is enabled, will send the VRAction message every frame. If the option is disabled, will send the VRAction message only when the button is toggled. |
6.9.4.1 6.9.4.1: Action
When the object is selected, it can be notified that a button is pressed on the wand. This is useful if you want to perform an action when a particular node is selected.
If you want a node to be notified, you have to attach the VRActor script to it. This script enables notifications on the object. You can find the VRActor script in the MiddleVR/Scripts/Interactions folder:
| Parameter | Description |
|---|---|
| Grabable | Can this object be grabbed? |
| SyncDirection | The Unity GameObject movements will be synced with its corresponding MiddleVR internal 3D node (called a vrNode3D; more information about vrNode3D in section: Programming interactions). This parameter specifies which node sets its position and orientation to the other. |
When the Wand intersects an object that has a VRActor attached to it, it will change its color to the highlight color defined in the VRWandInteraction script options.
When an object is selected and the main button of the Wand is pressed, the Wand will send the VRAction message to it. To react to this message, the only thing you have to do is create a method called VRAction on any script attached to the object. This method will be called every time the VRAction message is sent.
You can find such a sample script in: MiddleVR/Scripts/Samples/VRActionSample.
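A minimal handler in the spirit of VRActionSample (the exact shipped script may differ; the class name below is illustrative), assuming the Wand delivers VRAction through Unity’s message mechanism:

```csharp
using UnityEngine;

// Minimal sketch of a VRAction handler. Attach it, together with a
// VRActor script, to the object that should react to the wand button.
public class MyVRActionHandler : MonoBehaviour
{
    // Called each time the Wand sends the VRAction message to this
    // GameObject (object selected + main wand button pressed).
    private void VRAction()
    {
        Debug.Log("VRAction received on: " + name);
    }
}
```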
The Unity GameObject with a VRActor script will be synced with a MiddleVR vrNode3D. This way, the Unity and MiddleVR representations of this node will always have the same position and orientation in space. It is possible to configure which one will move the other with the SyncDirection parameter. Please refer to the table above for a more precise understanding of the possible values and effects of SyncDirection.
6.9.4.2 6.9.4.2: Grabbing
If you want an object to be grabable, simply add a VRActor script to it and enable the Grabable option. When the object is selected and the main button is pressed, the object will be grabbed by the Wand, and released when the button is released.
6.9.5 6.9.5: VR First Person Controller
Unity’s First Person Controller is a handy way to get nice navigation. You can control it with the Wand axes and buttons.
First you have to import the MiddleVR_FPS package. You can find it in your MiddleVR/data installation folder.
This will import a new script in your project: VRFPSInputController:
Drag this script on your First Person Controller.
You can specify which node to use for direction. By default, it’s the HandNode.
Note: If you want the VR hierarchy to follow the First Person Controller, you must set the VR System Center Node parameter of the VR Manager to a node that belongs to the First Person Controller hierarchy.
Beware: The First Person Controller object’s center is not on the ground, it is located 0.5m above. This means that if you set the VR System Center Node to the First Person Controller directly, the VR hierarchy will be 0.5m too high. One solution is to create a child node of the First Person Controller that is simply offset by -0.5 on the Y axis and set the VR System Center Node to this offset object.
| Option | Description |
|---|---|
| Reference Node | Node that will be used to determine the forward direction. |
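The -0.5m offset workaround for the First Person Controller’s center can be sketched as follows (the node name is illustrative):

```csharp
using UnityEngine;

// Sketch: creates a child of the First Person Controller, offset by
// -0.5 m on Y, to be used as the VR System Center Node. Attach this
// script to the First Person Controller itself.
public class CreateVRCenterNodeOffset : MonoBehaviour
{
    void Awake()
    {
        GameObject offsetNode = new GameObject("VRSystemCenterNodeOffset");

        // Parent under the First Person Controller and move it down
        // to ground level, compensating the controller's 0.5 m center.
        offsetNode.transform.parent = transform;
        offsetNode.transform.localPosition = new Vector3(0.0f, -0.5f, 0.0f);
        offsetNode.transform.localRotation = Quaternion.identity;
    }
}
```

You can then set the VR System Center Node parameter of the VR Manager to this offset object.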
6.9.6 6.9.6: Improving the portability of your application
Being able to use your application on your VR system is a good thing, but it would be even better if you could use it on all the other VR systems!
Your application is said to be portable when it can run on many different VR systems without modification.
Here are a few guidelines.
6.9.6.1 6.9.6.1: Rely on 3D nodes instead of trackers
- Trackers don’t necessarily have the same name or orientation/position offset from one platform to the other.
- 3D nodes semantically represent the actual user instead of devices. They can also be offset from the tracker. If both configurations are using the HeadNode to represent the physical position/orientation of the Cyclops for example, you can use this node instead.
- You could even want to prototype on a computer without trackers.
Read more about accessing 3D nodes in the “Programming interactions” section.
If you don’t want to look for a 3D node by its name, you can also assign tags to nodes. You can find the tag parameters just under the name of the 3D node in the configuration editor. You can then parse all nodes to find one with a particular tag. We will add methods to better handle this later.
6.10 6.10: Programming interactions
6.10.1 6.10.1: Introduction
In this section we will cover the basics of programming interactions from within a Unity C# script.
MiddleVR handles all aspects of your VR simulation:
- access to the state and events of all devices,
- cameras, viewports, stereoscopy.
For example, how to react when the user presses a button on a joystick? Or a certain key on a keyboard?
Check Appendix 2 - Class Hierarchy for an overview of the relationship between classes.
For a complete class reference, check the MiddleVR class reference:
6.10.2 6.10.2: Creating an interaction script
First create a script and attach it to an active object: go into the Assets menu, then Create > C Sharp script.
Drag and drop the script to an active object.
Double-click on the script to edit it, and add: using MiddleVR_Unity3D;.
using UnityEngine;
using MiddleVR_Unity3D;
public class Example : MonoBehaviour
{
void Start()
{
}
void Update()
{
}
}
6.10.3 6.10.3: Input devices
MiddleVR has an object that manages all the devices: the device manager.
You can query the device manager for the keyboard and mouse states:
if (MiddleVR.VRDeviceMgr != null)
{
// Testing mouse button
if (MiddleVR.VRDeviceMgr.IsMouseButtonPressed(0))
{
MVRTools.Log("Mouse Button pressed!");
MVRTools.Log("VRMouseX: " + MiddleVR.VRDeviceMgr.GetMouseAxisValue(0));
}
// Testing keyboard key
if (MiddleVR.VRDeviceMgr.IsKeyPressed(MiddleVR.VRK_SPACE))
{
MVRTools.Log("Space!");
}
}
Note: The Unity package of MiddleVR contains this sample script: MiddleVR/Scripts/Samples/VRAPISample.
The device manager holds references to all declared devices. If you want access to a tracker’s data, or the state of a joystick, you first have to ask the device manager for a reference to the corresponding object:
vrTracker tracker = null;
vrJoystick joy = null;
vrAxis axis = null;
vrButtons buttons = null;
// Getting a reference to different device types
if (MiddleVR.VRDeviceMgr != null)
{
tracker = MiddleVR.VRDeviceMgr.GetTracker("VRPNTracker0.Tracker0");
joy = MiddleVR.VRDeviceMgr.GetJoystickByIndex(0);
axis = MiddleVR.VRDeviceMgr.GetAxis("VRPNAxis0.Axis");
buttons = MiddleVR.VRDeviceMgr.GetButtons("VRPNButtons0.Buttons");
}
// Getting tracker data
if (tracker != null)
{
MVRTools.Log("TrackerX: " + tracker.GetPosition().x());
}
// Testing joystick button
if (joy != null && joy.IsButtonPressed(0))
{
MVRTools.Log("Joystick!");
}
// Testing axis value
if (axis != null && axis.GetValue(0) > 0)
{
MVRTools.Log("Axis Value: " + axis.GetValue(0));
}
// Testing button state
if (buttons != null)
{
if (buttons.IsToggled(0))
{
MVRTools.Log("Button 0 pressed!");
}
if (buttons.IsToggled(0, false))
{
MVRTools.Log("Button 0 released!");
}
}
Note: The Unity package of MiddleVR contains this sample script: MiddleVR/Scripts/Samples/VRAPISample.
6.10.4 6.10.4: Accessing wand data
It is very easy to access the axis values and buttons states of the Wand through the device manager:
if (MiddleVR.VRDeviceMgr != null)
{
// Getting wand horizontal axis
float x = MiddleVR.VRDeviceMgr.GetWandHorizontalAxisValue();
// Getting wand vertical axis
float y = MiddleVR.VRDeviceMgr.GetWandVerticalAxisValue();
// Getting state of primary wand button
bool wandButtonPressed0 = MiddleVR.VRDeviceMgr.IsWandButtonPressed(0);
// Getting toggled state of primary wand button
// bool wandButtonToggled0 = MiddleVR.VRDeviceMgr.IsWandButtonToggled(0);
if (wandButtonPressed0)
{
// If primary button is pressed, display wand horizontal axis value
MVRTools.Log("WandButton 0 pressed! HAxis value: " + x + ", VAxis value: " + y + ".");
}
}
Note: The Unity package of MiddleVR contains this sample script: MiddleVR/Scripts/Samples/VRAPISample.
You can also access the Wand data from a JavaScript:
Note: In order for the JavaScript to correctly compile, you have to move the MiddleVR folder into the “Standard Assets”, “Pro Standard Assets” or “Plugins” so that the VRManager script is compiled before the JavaScript. Otherwise the JavaScript will complain that VRManagerScript is an unknown type. For more information see Unity Script Compilation (Advanced).
function Update() {
var VRMgrObject : GameObject = GameObject.Find("VRManager");
var VRMgr : VRManagerScript;
if (VRMgrObject != null) {
VRMgr = VRMgrObject.GetComponent(VRManagerScript);
} else {
print("Couldn't find VRManager object.");
}
if (VRMgr != null) {
var x = VRMgr.WandAxisHorizontal;
var y = VRMgr.WandAxisVertical;
var wandButtonPressed0 = VRMgr.WandButton0;
if (wandButtonPressed0) {
VRMgr.Log("WandButton 0 pressed! HAxis value: " + x + ", VAxis value: " + y + ".");
}
} else {
print("Couldn't access VRManagerScript: " + VRMgrObject);
}
}
Note: Since MiddleVR 1.4.2 f1, it is possible to add multiple wands in a configuration. To access their inputs, you simply need to retrieve the wand you wish by calling:
vrWand myWand = MiddleVR.VRDeviceMgr.GetWand("MyWandName");
6.10.5 6.10.5: The display manager
The display manager (MiddleVR.VRDisplayMgr) is responsible for 3D nodes, cameras, viewports and display management:
// 3D nodes
vrNode3D node = null;
vrCamera camera = null;
vrCameraStereo scam = null;
vrScreen screen = null;
vrViewport vp = null;
if (MiddleVR.VRDisplayMgr != null)
{
node = MiddleVR.VRDisplayMgr.GetNode("HeadNode");
if (node != null) { MVRTools.Log("Found HeadNode"); }
camera = MiddleVR.VRDisplayMgr.GetCamera("Camera0");
if (camera != null) { MVRTools.Log("Found Camera0"); }
scam = MiddleVR.VRDisplayMgr.GetCameraStereo("CameraStereo0");
if (scam != null) { MVRTools.Log("Found CameraStereo0"); }
screen = MiddleVR.VRDisplayMgr.GetScreen("Screen0");
if (screen != null) { MVRTools.Log("Found Screen0"); }
vp = MiddleVR.VRDisplayMgr.GetViewport("Viewport0");
if (vp != null) { MVRTools.Log("Found Viewport0"); }
}
Note: The Unity package of MiddleVR contains this sample script: MiddleVR/Scripts/Samples/VRInteractionTest.
6.10.6 6.10.6: Converting data from MiddleVR to Unity
As explained in the Unity coordinate system section, MiddleVR and Unity use different coordinate systems. When you get a 3D coordinate (3D vector, quaternion, matrix) from a MiddleVR tracker or 3D node, you have to convert it to Unity’s coordinate system before using it:
vrNode3D node = MiddleVR.VRDisplayMgr.GetNode("HeadNode");
transform.position = MVRTools.ToUnity(node.GetPositionVirtualWorld());
transform.rotation = MVRTools.ToUnity(node.GetOrientationVirtualWorld());
6.10.7 6.10.7: Converting data from Unity3D to MiddleVR
Conversely, when you get a 3D coordinate (3D vector, quaternion, matrix) from a Unity GameObject and want to set it on a MiddleVR node or tracker, you first need to convert it from the Unity coordinate system to the MiddleVR one:
vrNode3D node = MiddleVR.VRDisplayMgr.GetNode("HeadNode");
node.SetPositionVirtualWorld(MVRTools.FromUnity(transform.position));
node.SetOrientationVirtualWorld(MVRTools.FromUnity(transform.rotation));
6.10.8 6.10.8: Debugging with MonoDevelop
You can use MonoDevelop to build applications using MiddleVR.
You just need to go to the MonoDevelop menu Tools > Preferences > Unity > Debugger and disable “Build project in MonoDevelop”:
6.10.9 6.10.9: Troubleshooting
What to do if things don’t work as expected? Check the online Knowledge Base.
6.11 6.11: Some useful MiddleVR sample scripts
6.11.1 6.11.1: Shortcut to invert eyes at runtime
By dropping the sample script MiddleVR/Scripts/Samples/VRShortcutInvertEyes.cs in your scene, you will be able to switch the left and right eyes when pressing Shift-I.
6.11.2 6.11.2: Shortcut to reload level at runtime
By dropping the sample script MiddleVR/Scripts/Samples/VRShortcutReload.cs in your scene, you will be able to reload the current level when pressing Shift-R and load the level 0 when pressing Control-Shift-R.
6.12 6.12: Upgrade the MiddleVR Unity Package
6.12.1 6.12.1: Clean an old Unity project before upgrading it
If you already have a Unity project using an old version of MiddleVR, you may want to upgrade it. In MiddleVR 1.6, some files of the UnityPackage moved; this migration should happen automatically. If it fails, clean the old files before importing the new MiddleVR package. Here are the files to remove:
- Assets/MiddleVR (full folder)
- Assets/Editor/VRCustomEditor.cs
- Assets/Plugins/MiddleVR_UnityRendering.dll
- Assets/Plugins/MiddleVR_UnityRendering_x64.dll
- Assets/Resources/OVRLensCorrectionMat.mat
7 7: Networking with MiddleVR for Unity
7.1 7.1: Introduction
MiddleVR Networking allows the creation of multi-user VR applications. You can create your own cyberspace applications! The users can be in the same room or in another country.
In Unity, MiddleVR Networking extends the Unity Networking API (also known as UNet) to work on systems supported by MiddleVR, including clusters.
In this section we will focus on a general overview of MiddleVR Networking, its differences from UNet, and its limitations. For specific details users of MiddleVR Networking in Unity should refer to the following links:
Note: There is a tutorial on multi-user applications using MiddleVR.
Note: This is a preliminary and incomplete documentation about networking in MiddleVR. Any part of the current implementation may change until its first official release! Don’t hesitate to contact our support if you have any question!
7.2 7.2: MiddleVR Networking Installation
MiddleVR Networking is based on Unity’s UNet source code and needs to be installed in the Unity Editor folder.
When importing the MiddleVR package in Unity or when opening a MiddleVR Unity project for the first time on a computer, you will be prompted to install MiddleVR Networking. Installing MiddleVR Networking replaces Unity’s UnityEngine.Networking module.

Note: Installing a new version of Unity will prompt you to install MiddleVR Networking again.
If the version of MiddleVR Networking on your system is different from the version your project is using, you will be prompted to replace it.

When using MiddleVR, this module provides support for the following features:
- Cluster support
- Voice communication
When not using MiddleVR, this module behaves like the original UnityEngine.Networking (it is based on the same source code), except for the following differences:
- The maximum and default refresh rates of NetworkTransform are higher.
- The default value of “Local Player Authority” in NetworkIdentity is set to true.
Note: These modifications only affect new objects.
To uninstall MiddleVR Networking, go to your Unity installation folder (for example C:\Program Files\Unity\Editor), from there go into Data\UnityExtensions\Unity\Networking and execute the MiddleVR Networking uninstaller. This will remove the MiddleVR Networking module and restore Unity's original one.
7.3 7.3: Network ports
By default, MiddleVR Networking communicates through port 7777. You can change this value in the NetworkManager component. The voice chat feature always uses the next port, 7778 by default. Remember to allow traffic on those ports in your firewall.
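On Windows, the default ports can be opened from an elevated command prompt, for example with netsh. This is a sketch: the rule names are arbitrary, the port range assumes the defaults above, and since the required protocol is not specified here, opening both TCP and UDP is the cautious choice.

```shell
REM Allow MiddleVR Networking's default ports (7777 network, 7778 voice).
REM Rule names are arbitrary; adjust ports if you changed the defaults.
netsh advfirewall firewall add rule name="MiddleVR Networking TCP" dir=in action=allow protocol=TCP localport=7777-7778
netsh advfirewall firewall add rule name="MiddleVR Networking UDP" dir=in action=allow protocol=UDP localport=7777-7778
```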
7.4 7.4: Limitations
At this time, when using MiddleVR features with MiddleVR Networking, there is no support for:
- Unity Matchmaking
- Unity Relay Servers
- Server migration
Those limitations only apply if you have a MiddleVR’s VRManager in your scene. If you don’t, you can use all the features of UNet even if MiddleVR Networking is installed.
7.5 7.5: General Overview - Concepts
This section is intended as a quick reminder to make a network application run properly. For additional information, see the Unity Networking documentation.
7.5.1 7.5.1: Servers, Clients and Host
Applications follow a client-server model. This means that there is a single server and all clients are connected to it. A host is a special case where a single application acts both as a server and a client at the same time.
The same application can run in any of these modes; the mode only depends on how you start it. Usually, an application launched in a virtual reality environment will either be a host or a simple client, whereas dedicated servers run in batch mode without graphic output.
7.5.2 7.5.2: Network objects
- There MUST be a NetworkManager instance (or any derived class) on a GameObject in your starting scene.
- ALL network prefabs that you wish to instantiate dynamically must be referenced in the NetworkManager instance.
- ALL network prefabs must have a NetworkIdentity component on them.
7.5.3 7.5.3: Authority
A network client MUST have authority on an object to call a server command on it (or to send any message whatsoever, including updating its position with NetworkTransform).
A network client only has authority on the local player by default. All other objects must be spawned with client authority, or have their authority assigned.
Furthermore, ALL network prefabs that may need to have authority must have the “Local Player Authority” option checked (this option is checked by default when using the MiddleVR version of NetworkIdentity).
7.5.4 7.5.4: RPC
An RPC (remote procedure/function call) allows a network application to call a function on another computer.
In the case of UNet (Unity’s networking module since Unity 5) and MiddleVR Networking, a client can only call a function on the server, not another client directly. To communicate between clients, you must call a function on the server (in Unity terms, this is called a “Command”), which in turn will call a function on any other client (which is called a “ClientRpc”).
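This round trip can be sketched with the standard UNet attributes. The class and member names below are illustrative, not part of MiddleVR; UNet requires Command methods to be prefixed with Cmd and ClientRpc methods with Rpc.

```csharp
using UnityEngine;
using UnityEngine.Networking;

// Illustrative sketch: requires a NetworkIdentity on the GameObject
// and client authority on the calling client.
public class ChatRelay : NetworkBehaviour
{
    // Called from a client, executed on the server.
    [Command]
    public void CmdSendMessage(string message)
    {
        // The server relays the message to every client.
        RpcReceiveMessage(message);
    }

    // Called from the server, executed on all clients.
    [ClientRpc]
    private void RpcReceiveMessage(string message)
    {
        Debug.Log("Received: " + message);
    }
}
```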
The RPC is linked to a particular instance of an object. This brings another question: How does the network know an instance is the same on different computers?
This is solved through the “Network Identity” component. When added to an object, it gives the object a unique Id number that is the same on all the computers. For example when we created the cube, the “Network Identity” component was automatically added on the object when we added the “Network Transform” component. Thus the object now has a unique “identity card” so we can match the object on all instances of the application, whatever computer they run on.
7.6 7.6: Creating and connecting to a collaborative server
Creating and connecting to a network server can be done in several ways.
7.6.1 7.6.1: Creating a server and connecting from the command line
Using MiddleVR’s default NetworkManager prefab, you can do the following actions from your application’s command line:
- Create a network host with --mvr-start-host[=PORT],
- Create a dedicated server with --mvr-start-server[=PORT],
- Connect to a server with --mvr-client-connect[=ADDRESS[:PORT]].
When starting a network client with MiddleVR Config, use the “Custom arguments” field for this purpose.
You can find the script VRNetworkCommandLineParser in the MiddleVR/Scripts/Networking folder in Unity.
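For example, the arguments above could be used as follows. MyVRApp.exe and the server address are placeholders for your own application and network.

```shell
REM Start a network host listening on port 7777 on the first machine:
MyVRApp.exe --mvr-start-host=7777

REM Connect a client from another machine (address is an example):
MyVRApp.exe --mvr-client-connect=192.168.1.10:7777
```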
7.6.2 7.6.2: Creating a server and connecting from the menu
Using MiddleVR’s default NetworkManager prefab, you can do the following actions from the default MiddleVR menu:
- Create a network host (using the NetworkManager default settings),
- Connect to a server (using the NetworkManager default settings),
- Disconnect from a server
You can find the script VRNetworkAddOptionsToMenu in the MiddleVR/Scripts/Networking folder in Unity. This script is mainly intended for test purposes.
7.6.3 7.6.3: Creating a server and connecting from a C# script
You can always do these operations yourself from a Unity script. In this example we assume that the script is attached to the same GameObject as the NetworkManager.
var networkManager = GetComponent<NetworkManager>();
// Start a network host (server + client in a single player)
networkManager.networkPort = 1234;
networkManager.StartHost();
// Start a network dedicated server
networkManager.networkPort = 1234;
networkManager.StartServer();
// Connect to a server
networkManager.networkAddress = "XXX.XXX.XXX.XXX";
networkManager.networkPort = 1234;
networkManager.StartClient();
Note: To ensure that networking works correctly with MiddleVR, don’t set your script execution order before the VRManagerScript component!
7.7 7.7: Networked Cluster
Note: Currently MiddleVR clusters cannot run a network server.
On a MiddleVR cluster, only the cluster server can act as a UNet client or Host. All cluster clients behave as a UNet client, with some exceptions:
- Cluster clients receive a copy of all UNet messages from the cluster server.
- Cluster clients are unable to send UNet messages at all.
This behaviour is usually enough to handle UNet Commands, ClientRpc and SyncVars, however there are cases where a NetworkBehaviour must be synchronized across the cluster immediately. For this purpose you can override these two methods:
override bool OnClusterSerialize(NetworkWriter writer)
{
return true; // if there is anything to send
}
override void OnClusterDeserialize(NetworkReader reader)
{
}
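For instance, a NetworkBehaviour could share a float across the cluster every frame by implementing both methods. The class and field names below are illustrative; the NetworkWriter/NetworkReader calls are the standard UNet ones.

```csharp
using UnityEngine.Networking;

// Illustrative sketch of the cluster serialization hooks.
public class ClusterScore : NetworkBehaviour
{
    private float m_Score;

    // Runs on the cluster server each frame.
    public override bool OnClusterSerialize(NetworkWriter writer)
    {
        writer.Write(m_Score);
        return true; // true: there is data to send this frame
    }

    // Runs on every cluster client with the data written above.
    public override void OnClusterDeserialize(NetworkReader reader)
    {
        m_Score = reader.ReadSingle();
    }
}
```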
7.8 7.8: MiddleVR Networking Features
7.8.1 7.8.1: Voice Communication
MiddleVR’s sample network manager integrates voice communication. You can enable or disable it through the VRVoiceChatManager object, which is a child of the VRDefaultNetworkManager prefab.
If you want to integrate voice chat with your custom NetworkManager, please use the “VRVoiceChatManager” prefab. It can be found in the MiddleVR/Scripts/Networking/VRVoiceChat folder.
7.9 7.9: Networking scripts and prefabs
MiddleVR provides several scripts and prefabs for Networking purposes. These scripts are located in the MiddleVR/Scripts/Networking folder.
7.9.1 7.9.1: Local object scripts
| Script name | Description |
|---|---|
| VRNetworkLocalObject | Must be present on objects spawned by VRNetworkSpawnLocalObjects. |
7.9.2 7.9.2: Utility scripts
| Script name | Description |
|---|---|
| VRNetworkCommandLineParser | Parses command-line arguments related to networking (to automatically start a host or a client). Handled arguments are: --mvr-start-server[=PORT], --mvr-start-host[=PORT], --mvr-client-connect[=ADDRESS[:PORT]]. |
| VRNetworkAddOptionsToMenu | Adds networking options to the MiddleVR immersive menu. |
7.9.3 7.9.3: Example scripts
| Script name | Description |
|---|---|
| VRSampleNetworkManager | An example of overridable methods of NetworkManager. |
| VRSampleNetworkPlayerScript | An example of overridable methods of NetworkBehaviour on a Player object. |
7.9.4 7.9.4: Default prefabs
| Prefab name | Description |
|---|---|
| VRDefaultNetworkManager | A default network manager prefab. |
| VRDefaultNetworkPlayer | A default network player prefab. |
| VRDefaultNetworkHand | A default networked hand prefab. |
| VRDefaultNetworkHead | A default networked head prefab. |
8 8: Cluster
8.1 8.1: Introduction
Clustering allows you to use multiple computers to drive a VR system that cannot be driven by a single computer. The main issue then becomes synchronizing all those computers. MiddleVR is able to synchronize multiple instances of itself running on different computers.
One particular computer, the cluster server, will synchronize its information with the other computers, the cluster clients. There are multiple levels of synchronization: framelock, swaplock and genlock. See section “Cluster Concepts” for more details.
Important note: The usage of clusters is complex, and MiddleVR will help you as much as possible in this task. But MiddleVR cannot do everything for you: you will have to understand the mechanisms of cluster and adapt your application accordingly.
Currently MiddleVR will automatically synchronize the state of all its devices. You can also choose which of your own data to synchronize, see below.
8.2 8.2: Concepts
8.2.1 8.2.1: Cluster nodes
A cluster is a collection of computers that are used together to drive one VR system. Typically, one computer drives one or more projector/screen. Each computer is called a “Cluster Node”. There is a primary computer that is called the “Cluster Server” and acts as the master for the other computers, which are called “Cluster Clients”.
The server will gather all the information needed to synchronize all the cluster nodes and send it to them.
8.2.2 8.2.2: Synchronization
In a cluster, the main issue is to correctly synchronize all the nodes, otherwise there will be discrepancies at the junction of projectors.
There are several layers of synchronization.
8.2.2.1 8.2.2.1: Framelock
Framelock makes sure that the simulated world is the same on all cluster nodes. It makes sure that everything is at the same place and the same state on all computers, so the distributed simulation is coherent. Perfect framelock is hard to achieve unless you are writing the 3D engine yourself, and even then, it’s not always possible to synchronize everything perfectly.
MiddleVR does its best to automatically synchronize scenes by using heuristics to synchronize only the necessary states and thus saving network bandwidth and CPU resources.
Note: Framelock is also sometimes another name for Genlock.
8.2.2.2 8.2.2.2: Swaplock
Swaplock makes sure that all computers swap their double buffers at the same time, meaning that each new picture is displayed at the same time on every computer.
Otherwise you can have one computer displaying a previous frame while another computer displays the current frame.
This swaplock can be done in software (SoftSwapLock), but is more precise when backed by a hardware swaplock. Hardware swaplock is generally handled by the graphics cards through an external synchronization card, such as a GSync card with NVidia Quadro GPUs. The same option is also possible with ATI cards. This option is called NVSwapLock.
8.2.2.3 8.2.2.3: Genlock
Genlock is a hardware option that also requires a synchronization card, such as a GSync card with NVidia Quadro GPUs (ATI cards offer an equivalent). When using stereoscopy, genlock makes sure that all the computers display the same eye (left or right) at the same time. Without it, one computer might display the right eye while another displays the left eye. Headache guaranteed!
8.3 8.3: Configuring
8.3.1 8.3.1: Summary
First you have to prepare all the cluster nodes:
Make sure that the master has the correct license. Clients don’t need to have any license installed, the master handles everything.
Note the IP addresses of each node.
Check the Windows firewall options. It is recommended to add exceptions for the VRDaemon and all your cluster applications in your firewalls. MiddleVR communicates on TCP ports 9996, 9997, 9998 and 9999. VRPN communicates on port 3883. Other devices (Vicon, ART, Optitrack) have specific network ports.
Install the MiddleVR package on all machines. Alternatively you can share the MiddleVR/bin folder from the master and add this shared folder to the PATH environment variable on all clients. See “Creating a shared folder”.
Run the VRDaemon on all machines, including the cluster server (see “Run the VRDaemon”).
Make sure your application is accessible from all the computers of the cluster, by either doing a local copy of the application (see section “Local Copy”) or creating a shared folder accessible from all cluster nodes (see section “Creating a shared folder”).
On the server: - Create a network shared folder named for example “MiddleVRDemo” - Copy the Shadow demo and the configuration files in MiddleVRDemo
On the server and each other computer:
- Map a new network drive pointing to “\\ServerMachine\MiddleVRDemo”, for example on the ‘Z’ letter (see Creating a shared folder)
- Run MiddleVRDaemon.exe
Note: Unity applications can’t be run from a Windows shared folder directly; you will have to mount the shared folder as a network drive. See section “Creating a shared folder”.
Back on the server:
- Open MiddleVRConfig.exe
- Configure the cluster information
- Go to the simulation panel
- Add the Shadow demo from “Z:”
- Add the configuration file from “Z:”
- Save, run and check if everything is ok (all nodes run, images at the right places, stereo working and wand navigation/grab are ok)
- Open
- When all the players are started correctly, you can start configuring the tracking system:
- Configure the tracking system
- Check that the positions are received in MiddleVR
- Set these trackers to the right 3D nodes (HeadNode, HandNode, etc. )
- If the zero of the tracking system is not the center of the floor screen, offset the tracker base or the screen nodes so that the tracker world positions match the ones you measure from the center of the screen
- Save, run and check that the previous steps are still ok and that your tracked head and hand behave like you want
8.3.2 8.3.2: Troubleshooting
The knowledge base has valuable articles: http://www.middlevr.com/kb/troubleshooting-the-cluster-setup.
If you have any issue, don’t hesitate to contact support.
8.3.2.1 8.3.2.1: VRDaemon
Once the MiddleVR package is installed, you must launch the VRDaemon on all the nodes, including the master. It is located at C:\Program Files (x86)\MiddleVR\bin\VRDaemon.exe.
The result is a console window that should always stay open:
If you need your cluster node computers to automatically run the VRDaemon at Windows startup, you can follow the steps in our knowledge base: http://support.middlevr.com/hc/en-us/articles/204367495
8.3.2.2 8.3.2.2: Creating a shared folder
Locate the folder of your Unity application and share it so that it is visible to all the computers on the network. You can achieve this by right-clicking on your folder and selecting “Share with”:
Once this is done, you have to mount this network path by right-clicking on your computer in the file explorer and selecting “Map a network drive”:
Make sure to mount the folder with the same drive letter on all computers, including the server:
8.3.2.3 8.3.2.3: Local copy
Local copy consists of copying your application to every computer’s hard drive. The main goal is to speed up the loading process, and the improvement is generally substantial. Several tools like Microsoft SyncToy, SpiderOak, Dropbox or other synchronization services can do the trick.
8.3.3 8.3.3: Configuring the cluster
In the Cluster tab, you need to create one server and as many clients (+Client) as needed.
This is the window to configure cluster options and cluster nodes.
| Property | Description |
|---|---|
| NVidiaSwapLock | Hardware swaplock. See section “Concepts - Cluster Synchronization”. |
| Disable VSync On Server | If VSync is enabled in the Viewports configuration, disable VSync only on the server. This is useful if the master uses a different refresh rate than the rest of the cluster. Often the master only has a mono display at 60 Hz while the other nodes have 120 Hz displays. In this case, disabling VSync on the master gives better performance. |
| Force DirectX > OpenGL conversion | When not using active stereo, you can still force the display of the DirectX rendering in an OpenGL window. This is particularly useful when using a cluster: if your master is not in active stereo but the rest of the cluster nodes are, you should activate this option. This is also required if your cluster is not using active stereo. |
| Multi-GPU | When enabled, MiddleVR will find the GPUs that are connected to a display. This way, DirectX and OpenGL (if the conversion above is activated) will render only on the GPU to which the display is connected. This setting is especially useful for computers with several GPUs because it enables the use of all of them instead of only one by default. |
Here’s the configuration for the cluster server:
| Property | Description |
|---|---|
| Address | Specify the hostname or IP address of the cluster server. It should be reachable by all cluster clients. Note: If you specify “localhost” or “127.0.0.1”, the clients will not be able to find the server, unless they all run on the same machine. |
| Viewports | Specify the viewports used by the server. |
| CPU Affinity | Specify the CPU cores to be used. For example with 4 cores, activating only the first two is done by setting this value to 0,1. Note that MiddleVR relies on the number of cores/CPUs as reported by Windows. This value does not always reflect the real number of physical CPUs because of technologies such as Intel Hyper-Threading, and because a CPU can be made up of several cores. The activity of each CPU can be seen in the Windows task monitor. It is suggested that you try this feature only with multiple physical CPUs (not simply multi-core CPUs), because Windows already distributes threads very well across the cores of a single CPU (you would just get worse performance). For example, try to use the 4 cores of a 2nd CPU but not the 4 cores of the 1st, and measure the performance difference. |
Here’s the configuration for all cluster clients:
| Property | Description |
|---|---|
| Address | Specify the hostname or IP address of the cluster client. |
| ClusterID | You can specify a specific cluster identification name for readability or better debugging. |
| Viewports | Specify the viewports used by the client. |
| CPU Affinity | Specify the CPU affinity settings used by the client. |
8.3.3.1 8.3.3.1: Multi-GPU Vs. Nvidia Mosaic
According to Nvidia, Mosaic is a technology that lets the system view multiple displays as a single unified desktop environment without software customization or performance degradation. However, our in-house tests showed that only one GPU was used with Mosaic, whereas our Multi-GPU option made significant use of every GPU. With deferred rendering, we even measured up to 2x better performance with Multi-GPU compared to Mosaic on two graphics cards.
Nevertheless, the Multi-GPU option comes at a price: it forces the use of clustering, so you will have to manage its difficulties (see the cluster section).
8.3.4 8.3.4: Starting a cluster application from the Simulations window
The easiest way to run your cluster application is to use the Simulations window.
If you are using a network drive, make sure to add the application from this network drive. MiddleVR will tell all nodes to use the exact same command line, so if you’re adding your application from a local folder that does not exist on the cluster nodes, the VRDaemon will not be able to start it.
Simply choose your application from the network drive along with your cluster configuration, and hit Run. Make sure the VRDaemon is running on all machines, including the master.
8.3.5 8.3.5: Stopping a cluster application
The easiest way to stop an application is by pressing the Escape key on the server’s keyboard.
If the application is frozen, you can also use the “Kill All Cluster Nodes” option in the simulations window:
Pressing this button will send a message to all VRDaemons to kill the last applications that they started.
8.3.6 8.3.6: Manually starting a cluster application
You can also manually start the application without the graphical tool. On the server, you can execute the application by double-clicking on it or create a .bat file that contains the right command line.
MiddleVR will then tell the VRDaemons of all the configured cluster clients to run the same application with the exact same path and command line arguments.
Make sure to use all the required command line arguments. You can find them in the Simulations tab, after clicking on the requested application and configuration file.
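A launch script on the server might look like the sketch below. The path, configuration file and --config argument are purely illustrative; copy the exact command line displayed in the Simulations tab instead of guessing it.

```shell
REM Hypothetical launch.bat on the cluster server.
REM Replace the path and arguments with the exact command line
REM shown in the Simulations tab for your application.
Z:\MiddleVRDemo\ShadowDemo.exe --config "Z:\MiddleVRDemo\Demo.vrx"
```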
8.4 8.4: Synchronization
There are a few things to understand when you want to create a cluster application. The root issue is that MiddleVR will run one Unity instance (player) on each computer that makes up the VR system. This requires multiple levels of synchronization, which we defined in section “Cluster - Concepts” of the user guide.
On each cluster node, the Unity player will be running. This means that all your scripts will still be running on all cluster nodes.
The theory is the following: if your application is deterministic (“whose resulting behavior is entirely determined by its initial state and inputs, and which is not random”), its state at the end of each frame will be the same on all machines.
This means that the initial state and inputs for each frame must be the same on all machines.
Inputs include:
- user input: keyboard, mouse, joystick, trackers, GUI inputs,
- network input: you might need to send network data to all cluster clients,
- time input: the elapsed time and time to render each frame (delta time) might be different on each cluster node.
For the parts that have a random component, you will need to synchronize the state manually. For example in Unity, rigid bodies (physics objects) don’t behave the same on all computers. At the end of the frame, a cube might be at different positions on different cluster nodes. This is why MiddleVR has the VRManagerPostFrame script that will synchronize the state of those objects.
8.4.1 8.4.1: Sequence diagram
Here’s how the synchronization works:
8.4.2 8.4.2: Simple cluster
You can first try to use the SimpleCluster option of the VRManager.
This will try to automatically determine which objects need their position/orientation to be synchronized by the VRManagerPostFrame script. For the moment these are all the objects that react to physics, i.e. objects that have a Rigid Body component and are not kinematic, and all First Person Controllers.
MiddleVR will automatically add the VRClusterObject script, which synchronizes their position/orientation. You can also manually add the MiddleVR/Scripts/Cluster/VRClusterObject.cs script to the nodes that you want to synchronize.
Note: The Simple Cluster option will only work if all your objects are already created and not dynamically added later on in the application. For dynamically created objects, make sure to manually add the VRClusterObject script (on all cluster nodes).
8.4.3 8.4.3: Inputs
If you have a script that gets inputs from the keyboard, the mouse or a joystick, this script will probably only work with the master because no physical key/button from the keyboard/mouse/joystick on the clients is pressed!
This is why you have to get input events (keyboard / mouse / wand / trackers) from MiddleVR. MiddleVR will synchronize the state of all the devices it handles from the master to all clients.
So instead of Input.GetKey(...), you need to use MiddleVR.VRDeviceMgr.IsKeyPressed(...) and the other methods that manage the different devices.
See section “Input devices” in the User Guide.
Note: Make sure that your scripts always execute *after* the VRManagerScript.
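The difference can be sketched as follows. The MiddleVR key constant used here is an assumption; check the exact constant names in the MiddleVR API reference.

```csharp
// Cluster-unsafe: reads the physical keyboard, so it is only
// true on the master, never on the cluster clients:
//   if (Input.GetKey(KeyCode.Space)) { ... }

// Cluster-safe sketch: MiddleVR synchronizes device state from
// the master to all clients. VRK_SPACE is an assumed constant name.
var deviceMgr = MiddleVR.VRDeviceMgr;
if (deviceMgr != null && deviceMgr.IsKeyPressed(MiddleVR.VRK_SPACE))
{
    // Reacts identically on every cluster node.
}
```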
8.4.4 8.4.4: Random
At start-up, MiddleVR automatically retrieves Unity’s seed for pseudo-random values and synchronizes it across the cluster.
You can safely use Unity’s Random functions after the first call to VRManager.Update on all cluster nodes. Since the seed is synchronized on all cluster clients, the Random functions will output the same sequences on all of them.
8.4.5 8.4.5: Physics
Unity’s physics can produce different results on different computers, so you have to synchronize the position and orientation of those nodes. If you’re using the SimpleCluster option of the VRManager, MiddleVR will automatically add the VRClusterObject script to objects that react to physics (i.e. that have a Rigid Body component and are not kinematic).
Note: The Simple Cluster option will only work if all your objects are already created and not dynamically added later on in the application. You might then need to manually add the VRClusterObject script.
8.4.6 8.4.6: Time / Delta time
If one of your script uses Unity time (Time.time) for some computation, be aware that this time might not be the same on all cluster nodes. Instead you can use MiddleVR.VRKernel.GetTime() which gives the elapsed time since the server started and is correctly synchronized on the cluster.
The delta time (time since the last frame) is used to get a time-dependent translation/rotation speed. In a naive first 3D application you might move an object a fixed distance each frame, but then if the framerate changes, your object’s speed changes too.
The issue is that the delta time may be different on each computer of the cluster: not every computer takes the same time to render a frame. So any script of your application that uses Unity’s delta time (Time.deltaTime) should instead use MiddleVR’s delta time, which is synchronized across the cluster: MiddleVR.VRKernel.GetDeltaTime().
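A minimal sketch of the substitution, inside any Update() method (the rotation speed is just an example value):

```csharp
// Cluster-safe rotation: use MiddleVR's synchronized delta time
// instead of Unity's Time.deltaTime, which may differ per node.
float dt = (float)MiddleVR.VRKernel.GetDeltaTime();
transform.Rotate(0.0f, 45.0f * dt, 0.0f); // 45 degrees per second on every node
```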
8.4.7 8.4.7: Particles
Starting from MiddleVR 1.6, particles are correctly synchronized on the cluster: their random seed is automatically synchronized from the cluster server to the cluster clients. This behavior can be turned off through the VRManager option SimpleClusterParticles.
8.4.8 8.4.8: Skyboxes
Skyboxes don’t work correctly with asymmetric cameras (stereo cameras or cameras used for head tracking in front of a VR wall), so you should replace them with a big sphere or cube geometry.
8.4.9 8.4.9: Shaders
Some shaders, like the water shader, don’t get along with asymmetric cameras. They need to be adapted.
8.4.10 8.4.10: Random objects
If, as a result of your scripts, objects have random positions/orientations, you can add the VRClusterObject script to them; it will synchronize their position/orientation at the end of each frame.
8.4.11 8.4.11: Sharing custom data
MiddleVR provides the VRSharedValue<T> type to share any serializable C# object across the cluster.
Creating a VRSharedValue<T> requires a unique sharing name and an initial value:
using MiddleVR_Unity3D;
var mySharedBool = new VRSharedValue<bool>("MySharedBool", false);
Setting and getting its value is done through the value property:
// On the server
mySharedBool.value = true;
// On all nodes
if (mySharedBool.value)
{
// Do something
}
Note: Changing the value is an asynchronous action: the real change will happen on all cluster nodes upon the next Update() of VRManagerScript or VRManagerPostFrame, even on the server node! If you are not sure, refer to the script execution order.
Note: Changing a VRSharedValue
8.4.11.1 8.4.11.1: Sharing Unity types
As of Unity 4.x, Unity struct types such as vectors are not directly serializable. MiddleVR provides several wrapper classes and implicit conversion operators to enable seamless serialization of those types. You only have to create a VRSharedValue of one of the following types:
- SerializableVector2
- SerializableVector3
- SerializableVector4
- SerializableQuaternion
- SerializableColor
- SerializableRect
using MiddleVR_Unity3D;
// Creation
var mySharedVector = new VRSharedValue<SerializableVector3>("MySharedVector", new Vector3(0.0f, 0.0f, 0.0f));
// Setting
mySharedVector.value = new Vector3(0.0f, 0.0f, 0.0f);
// Getting
Vector3 vec = mySharedVector.value;
8.4.12 8.4.12: Sharing events
See the Commands on a cluster section.
8.5 8.5: Testing your cluster application on a single computer
You can run your cluster application on a single computer for easier testing: simulate a cluster by running the MiddleVRDaemon on your computer and launching two Unity instances that use the same IP address (or localhost). See VirtualCluster.vrx in the installation folder of MiddleVR: C:\Program Files (x86)\MiddleVR\data\Config\Cluster\VirtualCluster.vrx.
8.6 8.6: Converting existing applications
8.6.1 8.6.1: Converting ShadowDemo
You can download the shadow demo from here: https://unity3d.com/showcase/live-demos#shadows.
- Simply add the MiddleVR package and the VRManager, and leave the SimpleCluster option of the VRManager checked.
- You can add the VRActor script to objects that you want to pick. Make sure to edit the VirtualCluster.vrx configuration file to configure your Wand.
- In the Player Settings, disable the “Display Resolution Dialog” and the “Use Player Log” options.
- In the Quality settings, make sure that VSyncCount is set to “Don’t Sync”.
- In the Build Settings, make sure that the current scene is built (“Add Current”).
Testing:
- Run the VRDaemon on your computer.
- Run the resulting .exe through the Simulations tab of the GUI.
- Enjoy.
8.6.2 8.6.2: Converting Car tutorial
You can download the car tutorial from here: https://www.assetstore.unity3d.com/en/#!/content/10.
We will have the car move with the configured Wand device.
- Open CompleteScene.
- Import MVR package and insert VRManager.
- Specify the Car GameObject as the VR System Center Node in the VRManager options.
- Disable “Show Wand”.
- Look at the “VRWand” object, child of the VRManager. Disable the VRWand Navigation, otherwise you will navigate around the car with the wand.
- Edit the Car.js script: in the GetInput function, replace:
throttle = Input.GetAxis("Vertical");
steer = Input.GetAxis("Horizontal");
With:
var vrmgr : GameObject;
vrmgr = GameObject.Find("VRManager");
var script : VRManagerScript;
script = null;
if (vrmgr != null) {
script = vrmgr.GetComponent("VRManagerScript");
}
if (script != null) {
// Wand data
throttle = script.WandAxisVertical;
steer = script.WandAxisHorizontal;
// Left
if (script.IsKeyPressed(0xCB)) { steer = -1; }
// Right
if (script.IsKeyPressed(0xCD)) { steer = 1; }
// Up
if (script.IsKeyPressed(0xC8)) { throttle = 1; }
// Down
if (script.IsKeyPressed(0xD0)) { throttle = -1; }
} else {
throttle = Input.GetAxis("Vertical");
steer = Input.GetAxis("Horizontal");
}
When you get back to Unity, it should complain that: “The name ‘VRManagerScript’ does not denote a valid type (‘not found’).”
This means that the MiddleVR scripts are compiled after this JavaScript. You simply have to move the MiddleVR folder into the “Pro Standard Assets” folder. If you can’t, close your script editor (Visual Studio or MonoDevelop). See for more info: http://docs.unity3d.com/412/Documentation/ScriptReference/index.Script_compilation_28Advanced29.html.
- Get rid of the Skybox or replace it with a giant sphere or cube.
- Modify the “Birds” prefab (Prefabs/VFX/Birds):
  - In the particle emitter, set all components of Rnd Velocity to 0.
  - Make sure Rnd Angular Velocity is 0 and Rnd Rotation is not checked.
  - In the particle animator, set all Rnd Force components to 0.
- The water shader will have problems; this has not been addressed yet.
- In the Player Settings, disable the “Display Resolution Dialog” and the “Use Player Log” options.
- In the Quality settings, make sure that VSyncCount is set to “Don’t Sync”.
- In the Build Settings, make sure that the current scene is built (“Add Current”).
8.6.3 8.6.3: Converting AngryBot
You can download AngryBot from here: https://www.assetstore.unity3d.com/en/#!/content/12175.
This is a quick conversion that does not yet involve a first-person perspective:
- Open AngryBots.
- Import MVR package and insert VRManager.
- In the Player Settings, disable the “Display Resolution Dialog” and the “Use Player Log” options.
- In the Quality settings, make sure that VSyncCount is set to “Don’t Sync”.
- In the Build Settings, make sure that the current scene is built (“Add Current”).
8.7 8.7: Optimization
8.7.1 8.7.1: Objects sync
Try to synchronize the minimum number of objects with VRClusterObject. The more objects you synchronize manually, and the more physics objects you use (which are synchronized automatically), the slower the network will be.
8.7.2 8.7.2: Master display
If the master is not part of the actual display system, it’s better to display a small viewport on it and to disable its VSync, so that it runs faster and doesn’t slow down the rest of the cluster.
Also if the master has a different refresh rate than the rest of the cluster, use the Disable VSync on Master option.
8.7.3 8.7.3: Logs
Disable Unity’s output logs (in the Player Settings, Use Player Log) otherwise Unity will write its logs through the network and slow the application down.
8.7.4 8.7.4: CPU Intensive tasks
It’s better to run all CPU-intensive tasks before VRManagerScript::Update executes: MiddleVR handles the frame synchronization in a thread at the end of the frame, and Unity can start working on the next frame before this synchronization is actually over, so you get some parallelization.
Just be aware that MiddleVR will update the devices state a bit later, when the VRManagerScript executes.
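A sketch of this pattern follows. The ordering itself is configured in Unity’s Script Execution Order settings, not in code; the class name is hypothetical:

```csharp
using UnityEngine;

// Configure this script to run BEFORE VRManagerScript in
// Edit > Project Settings > Script Execution Order.
public class HeavyComputation : MonoBehaviour
{
    private void Update()
    {
        // CPU-intensive work placed here overlaps with MiddleVR's
        // end-of-frame synchronization thread of the previous frame.
        // Note: device states read here will only be refreshed later,
        // when VRManagerScript.Update() executes.
    }
}
```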
8.8 8.8: Limitations
It is possible that some parts of the simulation can’t be synchronized by MiddleVR. This includes:
- Tree foliage,
- Videos.
Internally MiddleVR uses a video synchronization mechanism that could be applied to all videos. Contact support for more information.
9 9: Advanced Programming
9.1 9.1: Commands and values
9.1.1 9.1.1: Introduction
Commands are objects that represent a named callback. They can be used as simple handlers, for example in the case of MiddleVR’s GUI Widgets, but they are also used to transmit events and data across different languages (for example in the case of HTML user interfaces) or cluster nodes.
9.1.2 9.1.2: Values
Commands use the vrValue type to pass data around. Instances of vrValue are simple data containers that can hold the following types:
- Numbers (stored as double internally)
- Booleans
- Strings
- Vectors (vrVec2, vrVec3, vrVec4)
- Matrices (vrMatrix)
- Quaternions (vrQuat)
- Lists of vrValue
- Maps of string to vrValue
- Data buffers (byte[] in C#)
- References to vrObject
vrValues in C# are created like this:
// Creating vrValues using the new operator...
vrValue boolVal = new vrValue(true); // boolean
vrValue numberVal = new vrValue(0); // number
vrValue stringVal = new vrValue("MiddleVR"); // string
vrValue vecVal = new vrValue(new vrVec2(0.0, 0.1)); // vec2
// ... or implicit conversions
vrValue boolVal2 = true; // boolean
vrValue numberVal2 = 0; // number
vrValue stringVal2 = "MiddleVR"; // string
vrValue vecVal2 = new vrVec2(0.0, 0.1); // vec2
Lists and maps are created with special static methods:
vrValue list = vrValue.CreateList(); // list
list.AddListItem(1);
list.AddListItem(2);
vrValue map = vrValue.CreateMap(); // map
map["MiddleVR"] = true;
map["X Axis"] = new vrVec3(1.0, 0.0, 0.0);
map["The answer to life, the universe, and everything"] = 42;
Testing types and getting values is done with methods vrValue.Is* and vrValue.Get*:
// Getting a boolean value
if (val.IsBool())
{
print(val.GetBool());
}
// Getting a number value
if (val.IsNumber())
{
// Internally, all numbers are stored as doubles
print(val.GetInt());
print(val.GetFloat());
print(val.GetDouble());
}
// Iterating over a list
if (val.IsList())
{
// GetList() returns an IEnumerable<vrValue>
foreach (vrValue item in val.GetList())
{
...
}
}
// Iterating over a map
if (val.IsMap())
{
// GetMap() returns an IEnumerable<KeyValuePair<string,vrValue>>
foreach (KeyValuePair<string, vrValue> item in val.GetMap())
{
...
}
}
9.1.3 9.1.3: Commands
Creating a command from C# in Unity is done by adding the [VRCommand] attribute to a MonoBehaviour method with a unique name.
Here is a short example:
[VRCommand]
private void MyCommandHandler()
{
}
// Register all [VRCommand] methods of this script (typically in Start())
MVRTools.RegisterCommands(this);
// Call the command by name
MiddleVR.VRKernel.ExecuteCommand( "MyCommandHandler" );
// Any vrValue can be passed to a command:
vrValue list = vrValue.CreateList();
list.AddListItem( 1 );
MiddleVR.VRKernel.ExecuteCommand( "MyCommandHandler", list );
Commands cannot be called recursively.
Note: Alternatively, you can instantiate a vrCommand with a unique name and delegate as an argument. The delegate must be of the form vrValue handler(vrValue), meaning you have to return a vrValue, even if it is null.
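The alternative mentioned in this note can be sketched as follows, assuming the vrCommand is kept as a member so it stays alive; the command name is illustrative:

```csharp
private vrCommand m_MyCommand;

private vrValue MyCommandHandler(vrValue iValue)
{
    // Do something with iValue...
    // A vrValue must be returned, even if it is null.
    return null;
}

private void Start()
{
    m_MyCommand = new vrCommand("MyUniqueCommandName", MyCommandHandler);
}
```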
9.1.4 9.1.4: Commands on a cluster
When running on a cluster, all commands are synchronized by default.
Commands synchronized on the cluster behave differently than when running on a single computer in a few ways:
- Calling ExecuteCommand will always return an undefined vrValue.
- On the server node, calling ExecuteCommand schedules a call to its handler on all cluster nodes.
- On a client node, calling ExecuteCommand does nothing.
- Command handlers are not executed immediately but the next time the cluster synchronizes (either during the VRManagerPostFrame script execution or the following frame during the VRManagerScript script execution, whichever happens first). Command handlers are still executed in the same order they were called.
If you don’t want a command to be synchronized on the cluster and always behave like it’s running on a single computer, you have to pass the VRCommandFlags_DontSynchronizeCluster flag when declaring your command:
[VRCommand((uint)VRCommandFlags.VRCommandFlag_DontSynchronizeCluster)]
private void MyCommandHandler()
{
}
The VRClusterCommandSample script is a simple example of command usage on a cluster.
10 10: Graphical User Interfaces
10.1 10.1: Introduction
Since version 1.6, MiddleVR includes graphical user interface (GUI) capabilities based on web standards.
This allows you to:
- Display any internet webpage,
- Use MiddleVR’s VR menu,
- Create your own VRMenu,
- Create any type of GUI.
10.2 10.2: Web views
10.2.1 10.2.1: Introduction
Web views offer a way to display web pages directly into an immersive 3D experience.
Make sure to start with this tutorial: Creating a graphical user interface in HTML5.
10.2.2 10.2.2: Creating a web view
Creating a web view is as simple as using a Unity prefab, located in the Scripts\Samples\GUI directory in MiddleVR’s Unity package:
- VRWebSample3D is a web view on a 3D plane in world space. You can interact with it using the wand. For debugging purposes it’s also possible to use the mouse in editor mode.
- VRWebSample2D is a web view on a 2D plane in screen space. It is only usable with the mouse.



Alternatively, simply add the VRWebView script to any GameObject. The script will change the GameObject’s material and texture.
Note: the mesh doesn’t have to be a plane, it can be of any shape.

| Property | Description |
|---|---|
| Width | Width (in pixels) of the web page texture. |
| Height | Height (in pixels) of the web page texture. |
| URL | URL of the web page. http://, https:// and file:// protocols are supported. The web view script assumes URLs without any protocol specified are file URLs. Absolute paths and paths relative to your Assets folder are supported for files. |
| Zoom | Web page zoom. The default value of 1.0 means a normal element size. |
10.2.3 10.2.3: Web resources in Unity projects
10.2.3.1 10.2.3.1: Storing web resources in a Unity project
MiddleVR web views can reference web pages located in the Assets folder by using a relative path. For example, WebContents/webpage.html will point at the webpage.html from the WebContents folder in the Unity project Assets.
However, storing web resources directly into the Assets folder may cause errors because Unity recognizes all files with the .js file extension as Unity scripts.
There are two ways to work around this issue:
- Change the file extension of all JavaScript files so that the extension is not .js (for example: .javascript).
- Put these files in a folder whose name begins with a dot (.), such as the .WebAssets folder (see below), so that Unity ignores them completely.
10.2.3.2 10.2.3.2: Building a Unity standalone player with web resources
When building a player, MiddleVR automatically copies the contents (except .meta files) of the following folders, if they exist, to the data folder of the player:
- Assets/WebAssets
- Assets/.WebAssets
You should put any web-related file (HTML, CSS, JavaScript, images and web fonts) into one of these two folders. If you want to put web resources into another folder, you will have to copy them yourself.
Web pages can then be used in the VRWebView script with relative paths to the Assets folder: WebAssets/MyFolder/index.html or .WebAssets/MyHiddenFolder/index.html.

Note: To create a .WebAssets folder from the Windows File Explorer, you will have to type “.WebAssets.”. The additional dot at the end is necessary, and will be removed by the File Explorer.
Note: The Assets/MiddleVR/WebAssets and Assets/MiddleVR/.WebAssets folders are also copied automatically, but they are reserved for MiddleVR files and must not be used for your files.
10.2.4 10.2.4: Web view capabilities and limitations
As of MiddleVR 1.6, web views use Chromium as the underlying web engine. Web pages will work as in a standard Google Chrome browser, but with the following limitations:
- Pop-ups are not supported.
- File downloading is not supported.
- Drag and drop is not yet supported.
- WebRTC is not yet supported.
- No keyboard input yet. As a workaround you can use JavaScript code to fill forms.
- Installing the Flash plugin for Chrome from the Adobe web site may cause instabilities, especially in 64-bit mode.
Note: The web engine used by MiddleVR might change in future versions. We strongly suggest using standard HTML5, CSS or JavaScript code and not using browser-specific extensions when you design custom pages.
Note: To get the best performance out of web views we recommend using Unity Pro. Web views are much slower in the free version of Unity due to the restrictions on rendering plugins. Setting a large size on a web view will have a significant impact on performance when using the free version of Unity.
10.2.5 10.2.5: Web Views on a Cluster
Synchronizing the internal state of a web rendering engine across a cluster is a complex problem that is beyond the scope of MiddleVR. What MiddleVR provides, though, is image synchronization: web views are only rendered on the server node and the resulting image is distributed on the network.
Image synchronization uses TCP port 9996. Refer to the clustering documentation for more information regarding clusters.
By default, synchronized images are compressed with the JPEG algorithm to save network bandwidth.
10.2.6 10.2.6: Calling JavaScript code from C#
Web views have an ExecuteJavascript method that can be used to execute arbitrary JavaScript code:
VRWebView webViewScript = GetComponent<VRWebView>();
if( webViewScript.webView.IsReady() )
{
webViewScript.webView.ExecuteJavascript("MyJavaScriptFunction();");
}
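When passing dynamic data into the page, the call can be formatted as a string; for example (a sketch reusing the same ExecuteJavascript method, where UpdateScore is a hypothetical function defined in the page’s JavaScript):

```csharp
VRWebView webViewScript = GetComponent<VRWebView>();
if (webViewScript.webView.IsReady())
{
    // Format a C# value into the JavaScript call.
    int score = 42;
    webViewScript.webView.ExecuteJavascript("UpdateScore(" + score + ");");
}
```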
See also tutorial “Creating a HTML GUI” for another example of calling the JavaScript of a webpage from Unity’s C#.
10.3 10.3: VR menus
Whether you don’t know HTML or simply want to create basic user interfaces, MiddleVR provides widget classes to design simple hierarchical menus.
You can:
- use and extend MiddleVR’s VR menu
- or create a new VR menu from scratch.
10.3.1 10.3.1: MiddleVR default VR menu
Make sure to first read tutorial “MiddleVR VR Menu”.
MiddleVR offers an immersive menu that you can customize to include your own menu items. The default menu allows you to change the navigation scheme, the manipulation scheme and various other options.
By default, you activate the menu by pressing button 3 of your Wand. This can be changed on the VRMenu GameObject.
You interact with the menu by pressing button 0 of your Wand.
You can deactivate the menu by disabling the option “Use default menu” in the VRManager options.
10.3.1.1 10.3.1.1: Widget types
Here is the list of available widget types that you can use in a VR menu:

| Class | Description |
|---|---|
| vrWidgetMenu | Menu |
| vrWidgetSeparator | Menu separator |
| vrWidgetButton | Simple button |
| vrWidgetToggleButton | Checkbox (Two-state button) |
| vrWidgetRadioButton | Radio button |
| vrWidgetSlider | Slider |
| vrWidgetList | Single-selection list |
| vrWidgetColorPicker | Color Picker |
The class reference has information about all the methods of these widgets.
The script MiddleVR/Scripts/Samples/GUI/VRGUIMenuSample shows an example of using all those widgets.
10.3.1.2 10.3.1.2: Extending the default VR menu
10.3.1.2.1 10.3.1.2.1: Introduction
First make sure to read tutorial “MiddleVR VR Menu”.
The MiddleVR package provides a default menu that is activated with a wand button. You can customize this menu by retrieving its menu widget:
VRMenu MiddleVRMenu = FindObjectOfType(typeof(VRMenu)) as VRMenu;
new vrWidgetButton("VRMenu.MyMenuItem", MiddleVRMenu.menu, "My Menu Item", m_MyItemCommand);
The script MiddleVR/Scripts/Samples/GUI/VRCustomizeDefaultMenu.cs shows how to:
- Add a simple button,
- Remove an item,
- Move items in a sub-menu.
[VRCommand]
private void MyItemCommandHandler()
{
print("My menu item has been clicked");
}
private void AddButton(VRMenu iVRMenu)
{
// Add a button at the start of the menu
var button = new vrWidgetButton("VRMenu.MyCustomButton", iVRMenu.menu, "My Menu Item", MVRTools.GetCommand("MyItemCommandHandler"));
iVRMenu.menu.SetChildIndex(button, 0);
MVRTools.RegisterObject(this, button);
// Add a separator below it
var separator = new vrWidgetSeparator("VRMenu.MyCustomSeparator", iVRMenu.menu);
iVRMenu.menu.SetChildIndex(separator, 1);
MVRTools.RegisterObject(this, separator);
}
private void RemoveItem(VRMenu iVRMenu)
{
// Remove "Reset" submenu
for (uint i = 0; i < iVRMenu.menu.GetChildrenNb(); ++i)
{
var widget = iVRMenu.menu.GetChild(i);
if( widget.GetLabel().Contains("Reset"))
{
iVRMenu.menu.RemoveChild(widget);
break;
}
}
}
private void MoveItems(VRMenu iVRMenu)
{
// Move every menu item under a sub menu
var subMenu = new vrWidgetMenu("VRMenu.MyNewSubMenu", null, "MiddleVR Menu");
while (iVRMenu.menu.GetChildrenNb() > 0)
{
var widget = iVRMenu.menu.GetChild(0);
widget.SetParent(subMenu);
}
subMenu.SetParent(iVRMenu.menu);
}
10.3.1.2.2 10.3.1.2.2: Using other widgets

The script MiddleVR/Scripts/Samples/GUI/VRGUIMenuSample shows an example of using all those widgets:
private vrWidgetToggleButton m_Checkbox;
[VRCommand]
private void ButtonHandler()
{
m_Checkbox.SetChecked( ! m_Checkbox.IsChecked() );
print("ButtonHandler() called");
}
[VRCommand]
private void CheckboxHandler(bool iValue)
{
print("Checkbox value : " + iValue);
}
[VRCommand]
private void RadioHandler(string iValue)
{
print("Radio value : " + iValue);
}
[VRCommand]
private void ColorPickerHandler(vrVec4 iValue)
{
print("Selected color : " + iValue.x() + " " + iValue.y() + " " + iValue.z());
}
[VRCommand]
private void SliderHandler(float iValue)
{
print("Slider value : " + iValue);
}
[VRCommand]
private void ListHandler(int iValue)
{
print( "List Selected Index : " + iValue );
}
private void Start()
{
// Automatically register all methods with the [VRCommand] attribute
MVRTools.RegisterCommands(this);
// Create GUI
VRWebView webViewScript = GetComponent<VRWebView>();
if (webViewScript == null)
{
MVRTools.Log(0, "[X] VRGUIMenuSample does not have a WebView.");
enabled = false;
return;
}
var GUIRendererWeb = new vrGUIRendererWeb("", webViewScript.webView);
// Register the object so the garbage collector does not collect it after this method.
// The object will be disposed when the GameObject is destroyed.
MVRTools.RegisterObject(this, GUIRendererWeb);
var menu = new vrWidgetMenu("GUIMenuSample.MainMenu", GUIRendererWeb);
MVRTools.RegisterObject(this, menu);
var button1 = new vrWidgetButton("GUIMenuSample.Button1", menu, "Button", MVRTools.GetCommand("ButtonHandler"));
MVRTools.RegisterObject(this, button1);
var separator = new vrWidgetSeparator("GUIMenuSample.Separator1", menu);
MVRTools.RegisterObject(this, separator);
m_Checkbox = new vrWidgetToggleButton("GUIMenuSample.Checkbox", menu, "Toggle Button", MVRTools.GetCommand("CheckboxHandler"), true);
MVRTools.RegisterObject(this, m_Checkbox);
var submenu = new vrWidgetMenu("GUIMenuSample.SubMenu", menu, "Sub Menu");
submenu.SetVisible(true);
MVRTools.RegisterObject(this, submenu);
var radio1 = new vrWidgetRadioButton("GUIMenuSample.Radio1", submenu, "Huey", MVRTools.GetCommand("RadioHandler"), "Huey");
MVRTools.RegisterObject(this, radio1);
var radio2 = new vrWidgetRadioButton("GUIMenuSample.Radio2", submenu, "Dewey", MVRTools.GetCommand("RadioHandler"), "Dewey");
MVRTools.RegisterObject(this, radio2);
var radio3 = new vrWidgetRadioButton("GUIMenuSample.Radio3", submenu, "Louie", MVRTools.GetCommand("RadioHandler"), "Louie");
MVRTools.RegisterObject(this, radio3);
var picker = new vrWidgetColorPicker("GUIMenuSample.ColorPicker", menu, "Color Picker", MVRTools.GetCommand("ColorPickerHandler"), new vrVec4(0, 0, 0, 0));
MVRTools.RegisterObject(this, picker);
var slider = new vrWidgetSlider("GUIMenuSample.Slider", menu, "Slider", MVRTools.GetCommand("SliderHandler"), 50.0f, 0.0f, 100.0f, 1.0f);
MVRTools.RegisterObject(this, slider);
vrValue listContents = vrValue.CreateList();
listContents.AddListItem( "Item 1" );
listContents.AddListItem( "Item 2" );
var list = new vrWidgetList("GUIMenuSample.List", menu, "List", MVRTools.GetCommand("ListHandler"), listContents, 0);
MVRTools.RegisterObject(this, list);
}
10.3.2 10.3.2: Creating a custom VR menu
You can also create your own menu from scratch. The easiest way to get started is to modify the VRGUIMenuSample3D sample prefab.
Here are the steps to create a web menu from scratch:
- Attach the VRWebView script to a GameObject.
- Make the VRWebView script’s URL parameter point to MiddleVR/WebAssets/VRMenu/index.html. This is a blank web page that contains all the scripts and CSS styles needed by the web GUI. It is automatically copied into your project when importing the MiddleVR package.
- Create a new script and attach it to the same GameObject. Widgets will be created from this script.
- Create a vrGUIRendererWeb object:
protected void Start()
{
VRWebView webViewScript = GetComponent<VRWebView>();
if(webViewScript == null)
{
MVRTools.Log(1, "[X] Custom VR menu does not have a WebView.");
return;
}
m_GUIRendererWeb = new vrGUIRendererWeb("MyMenuRenderer", webViewScript.webView);
// Now we can create widgets
m_Menu = new vrWidgetMenu("MyMenu", m_GUIRendererWeb);
...
All widgets are constructed using a MiddleVR object name, a parent widget, and a label. Most widgets will also take a reference to a command and additional initialization parameters.
For example, a button widget can be created that way:
// Create a button
vrWidgetButton button = new vrWidgetButton(
"MyMenu.MyButton", parentWidget, "My Button Label", command);
// Alternatively
vrWidgetButton button = new vrWidgetButton("MyMenu.MyButton");
button.SetParent(parentWidget);
button.SetLabel("My Button Label");
button.AddCommand(command);
Note: The MiddleVR Class Reference provides additional details regarding constructor arguments and widget methods.
Please refer to the VRGUIMenuSample2D and VRGUIMenuSample3D prefabs to see a complete example.
10.4 10.4: HTML graphical user interface (GUI)
Web views can be used for more than simple web pages. MiddleVR allows you to create complex UIs based on open web standards by enabling communication between C# scripts and the JavaScript code in your web pages.
Make sure to read tutorial “Creating a HTML GUI” for a good introduction.
10.4.1 10.4.1: Communication between web pages and C# code
- Calling JavaScript from C# is done through the ExecuteJavascript method (see section above).
- Calling C# code from JavaScript is done through commands, using the MiddleVR.Call JavaScript method:
JavaScript code:
// The second argument can be any JavaScript value.
// It will be made available as a vrValue in C#
MiddleVR.Call('ButtonCommand', 42);
C# code:
[VRCommand]
private void ButtonHandler(int iValue)
{
// Do something with the value
print( iValue );
}
protected void Start()
{
MVRTools.RegisterCommands(this);
m_ButtonCommand = new vrCommand("ButtonCommand", MVRTools.GetCommand("ButtonHandler"));
}
Note: The second argument of MiddleVR.Call can be any valid JavaScript value!
Note: For more information about vrValue, read the “Values” section above.
10.4.2 10.4.2: Example
The VRGUIHTMLBasicSample2D and VRGUIHTMLBasicSample3D prefabs show basic interaction between C# and JavaScript code. These prefabs must be used with the data/GUI/HTMLBasicSample/index.html file in your MiddleVR installation folder.
11 11: Haptics
11.1 11.1: Haption haptics
This section presents how to use Haption’s haptic devices within MiddleVR.
11.1.1 11.1.1: Haptic features
MiddleVR provides access to the main concepts of the Haption SDK, and it also embraces the concepts shared by the main physics engines for video games, such as Bullet and PhysX. Hence, we chose to use the names that most people will recognize if they have used physics engines.
We are going to present the Haption SDK concepts and also point out differences with some usual physics engines. It is important to note that the internal name of the Haption SDK is IPSI (Interactive Physics Simulation Interface). Since MiddleVR is built upon IPSI, messages that contain “IPSI” can be displayed by a simulation.
11.1.2 11.1.2: Main physics notions for haptics with IPSI
11.1.2.1 11.1.2.1: Gravity?
A very important difference between the IPSI engine and other physics engines is that it does not rely on the concept of “gravity”: every object just floats “in the air”. However, there is one very important exception: gravity is applied to an object that is being manipulated by a Haption device.
In other words, a virtual physics object that is not manipulated by a Haption device just floats in the air, but a manipulated virtual object receives the gravity force. So the haptic device will go down if gravity is oriented toward the ground, because of the gravity applied to the manipulated virtual physics object.
Common physics engines define a global gravity vector and a mass value per physics object. We have decided to use this solution.
Gravity defaults to the value (0, 0, -9.81), which corresponds to the average gravity vector on Earth expressed in the MiddleVR basis (right-handed, +X to the right, +Y to the front, +Z to the top), in m.s⁻².
Mass is a float value given per physics object. IPSI obtains weight thanks to the usual formula:
weight = mass × gravity
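As a quick numeric illustration of this formula (plain C#, independent of any MiddleVR API):

```csharp
// weight = mass * gravity
// MiddleVR's default gravity vector is (0, 0, -9.81) m.s^-2.
double mass = 2.0; // kg, set per physics object
double gx = 0.0, gy = 0.0, gz = -9.81;

// The weight force applied to a manipulated object, component by component:
double wx = mass * gx; // 0.0 N
double wy = mass * gy; // 0.0 N
double wz = mass * gz; // -19.62 N
```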
11.1.2.2 11.1.2.2: Rigid bodies are physics objects
IPSI manipulates rigid bodies. As the name suggests, a rigid body is a physics object that is not deformable. Each rigid body is given a position, an orientation, a mass (that will be rendered through a manipulation device), a physics geometry that approximates its shape, and damping factors for linear and angular displacements.
11.1.2.3 11.1.2.3: Types of actions on physics bodies
IPSI supports two types of objects: static and movable ones.
As the name suggests, a static object can never be moved: its position and orientation are set once and for all at its creation.
A movable object can be manipulated by two means:
- Applying forces and torques on it.
- Coupling, associating with a manipulation device (i.e. a haptic device).
The first case covers a moving object that applies a force to another one during a collision.
The second case addresses the problem of bringing a haptic device into the physics simulation. The virtual object that is connected to a manipulation device will be directly moved by it, and any force applied to the virtual object will be rendered to the user through the manipulation device.
11.1.2.4 11.1.2.4: Managing collisions between physics bodies
IPSI lets the user activate or deactivate every collision in the simulation or per pair of rigid bodies (for example, it is possible to limit collisions to only two specific rigid bodies in a complex physics simulation).
Three goals for this feature:
- Reducing CPU computation cost.
- Easing manipulation when a physics object is moved in a crowded zone of physics bodies.
- Allowing some movements at the cost of interpenetration for a better understanding of object placements or how some constraints behave.
11.1.2.5 11.1.2.5: Adding constraints between physics bodies
Constraints are a way to remove degrees of freedom from rigid bodies or to limit their movements relative to another rigid body.
IPSI supports several constraints that we expose in MiddleVR. However, we sometimes decided to rename them in favor of the widespread vocabulary found in physics engines:
- Ball-and-socket: also called a spherical joint (3 degrees of freedom), this constraint allows rotations to vary freely around a position. Wikipedia: http://en.wikipedia.org/wiki/Ball_joint.
- Cylindrical: this constraint (2 degrees of freedom) combines the properties of a hinge constraint and a prismatic constraint: free rotation around an axis that can slide. Wikipedia: http://en.wikipedia.org/wiki/Cylindrical_joint.
- Fixed: also called a weld joint, it rigidly links two bodies.
- Helical: also called a screw joint (1 degree of freedom), this constraint provides axis translation by utilizing the threads of a threaded rod. Wikipedia: http://en.wikipedia.org/wiki/Screw_joint.
- Hinge: also called a revolute joint (1 degree of freedom), this constraint allows rotation around an axis. Wikipedia: http://en.wikipedia.org/wiki/Revolute_joint.
- Planar: this constraint (3 degrees of freedom) allows free rotation around 2 axes while keeping the constrained body parallel to a planar surface.
- Prismatic: also called a slider joint (1 degree of freedom), this constraint keeps orientations identical but allows sliding along an axis. Wikipedia: http://en.wikipedia.org/wiki/Prismatic_joint.
- Universal joint: this constraint (2 degrees of freedom) allows rotation around 2 axes, so that two rods connected this way would be seen as one ‘bent’ rod. Wikipedia: http://en.wikipedia.org/wiki/Universal_joint.
11.1.3 11.1.3: Adding Haption devices to the system
The following steps are not strictly required by MiddleVR but are mandatory for any use of Haption devices. So, if the IPSI server has not been configured yet, let’s do it now.
Run the “DEVICE_CONFIGURATOR.exe” program (it may be located in the “C:\Program Files\HAPTION\IPSI\Server\V2.20\bin” folder depending on the version of the IPSI server you use).
The available settings for a haptic device to be added are:
| Property | Description |
|---|---|
| Kind | Specify the kind of device. |
| Name | Specify an arbitrary name for the device. |
| Address | Specify the network address of the device. |
| Position and Orientation | Specify the base position and orientation of the haptic device. You should keep these values at zero and instead manipulate the “Observation Node” in the “Add Device” window of MiddleVR-Config. |
Once you have provided the settings, click on “Add” to add the device.
Note: if you encounter a crash, such as an exception, when you try to add a device, please check that the folder “%programdata%\HAPTION\IPSI” is writable.
11.1.4 11.1.4: Configuring Haption devices
In the Devices part of MiddleVR-Config, click on the “+” button. In the “Add Device” window that pops up, select the “Haption” item.
The Haption driver manages several settings:
| Property | Description |
|---|---|
| Server Address | Specify the address of the computer where the IPSI server is running. |
| Mode | Specify the connection mode, which defines the kind of devices to be used (see details below). |
| Time Step | Specify the simulation time step, in seconds (default value: 0.003). |
| Resolution | Specify the tessellation parameter, in meters (default value: 0.003). This parameter controls the simulation precision. |
| Observation Node | Specify a MiddleVR 3D node that will define the base of the haptic devices. This setting aims at always placing the haptic devices in front of the user's body to ease their gestures. The list is populated with the 3D nodes found in the current configuration. |
| Enable Collisions (*) | Specify whether the simulation should start with collisions enabled. |
(*) Disabling collisions at start-up and enabling them later can lead to a faster start-up of the simulation.
The available connection modes are:
| Mode | Description |
|---|---|
| Desktop | Includes all SpaceMouse devices, Virtuose 15/25 and Virtuose Desktop. |
| Powerwall | Includes Virtuose 6D - 35/45 and Flysticks from motion capture. |
| Immersive | Includes Virtuose devices 35/45, Virtuose Inca and all compatible motion tracking systems. |
Each recognized haptic device will be listed under the “Haption.Driver” line and prefixed with “HaptionX.”, where X is the index of the device. Note that at least one device is always presented, so “Haption0.” is always available.
The presented lines have these meanings for the corresponding device:
| Property | Description |
|---|---|
| .PowerOn | Tells whether the device is turned on. Note that it will remain turned off when a SpaceMouse is used. |
| .UserDetected | Tells whether the user is detected (e.g. the user is holding the Virtuose wrench); it can be seen as the opposite of the “dead-man” status. |
| .EmergencyStop | Tells whether the device was stopped. |
| .VirtualTracker | Provides a MiddleVR Tracker object, a sort of virtual 3D cursor that can be moved and oriented in the virtual environment. The cursor can be translated indefinitely. |
| .SystemTracker | Similar to “Virtual Tracker” but relies only on the physical positions/orientations of the haptic device. No information is provided for a SpaceMouse (because such a device does not “move”). |
| .Buttons | Provides the pressed status of the buttons. |
| .Wrench.Force (*) | Provides the force applied by the user through the wrench. |
| .Wrench.Torque (*) | Provides the torque applied by the user through the wrench. |
(*) Notes for wrench force or torque:
1. No force or torque is computed from a SpaceMouse.
2. These values are refreshed at the device update frequency of MiddleVR, which is based on the update frequency of Unity. Consequently, the refresh rate will never reach values usable for haptic rendering. Please limit the use of these values to debugging or displaying visual feedback for the user.
11.1.4.1 11.1.4.1: What to do if haptic devices are not displayed?
Please verify the following points:
Check that cables are correctly connected.
Check that you added the Haption haptic devices to the system (except for a SpaceMouse, which does not need to be added).
Remove the Haption driver in MiddleVR-Config by clicking on the “-” button, then re-add it with “+” to refresh the list of devices.
Verify the connection mode you use, because devices that do not match it are hidden.
Check the logs that MiddleVR prints. For example, with only a SpaceMouse connected, we get:
[ ] The Haption driver will use '1' manipulation devices.
[ ] + Manipulation device 'SpaceNavigator[0]'.
[<] The Haption driver connection is ended.
With the Virtuose Simulator, we obtain a second Haption device:
[ ] The Haption driver will use '2' manipulation devices.
[ ] + Manipulation device 'SpaceNavigator[0]'.
[ ] + Manipulation device 'Virtuose 6D Desktop'.
[<] The Haption driver connection is ended.
It is suggested to close MiddleVR-Config when playing a simulation with a Haption haptic device in Unity, because the IPSI server cannot handle two simultaneous clients. To avoid crashes, MiddleVR-Config takes exclusive control of IPSI when its window gets focus. Hence if you click on MiddleVR-Config while a demo is running, the demo will lose access to IPSI.
11.1.5 11.1.5: Properties of physics elements
11.1.5.1 11.1.5.1: Rigid body
In Unity, a rigid body is simply a GameObject equipped with the “VRPhysicsBody” script.
Please note that the physics system shipped with Unity is based on Nvidia PhysX, whilst the MiddleVR implementation of physics relies on Haption physics. You should therefore avoid mixing the two physics systems: do not add both a Unity rigid body and a MiddleVR rigid body (VRPhysicsBody) to the same Unity GameObject, and do not expect Unity Colliders to react with the MiddleVR physics system.
IPSI works only with meshes, so you must provide one for each rigid body via a Unity MeshFilter component.
The rigid body inspector provides the following settings:
| Property | Description |
|---|---|
| Static (*) | Marks this rigid body as static. |
| Mass | Sets the mass in kg. Note that this setting is ignored for static objects. |
| Margin (**) | Sets a factor to enlarge or shrink the physics geometry. |
| Rotation damping | A factor to damp rotations. |
| Translation damping | A factor to damp translations. |
| Merge Child Geometries | Finds the geometries in this object and its children and merges them into a single one. Inactive GameObjects will be ignored. |
(*) When marked as static, an object will never be moved: it remains static for the whole duration of the simulation. Internally, IPSI uses this property to place static objects in specific data structures and speed up computations. Such an object participates only in collisions.
(**) The mesh used for physics is built from the mesh given by the Unity MeshFilter.
Position and orientation are given by the Transform part of the inspector, with values expressed in the Unity basis (left-handed, +X to the right, +Y to the top, +Z to the front). Scale is applied at mesh loading. Please note that coordinates are given in meters, so avoid creating very big objects when not necessary.
11.1.5.2 11.1.5.2: Associating with a manipulation device
A “Body manipulator IPSI” lets the user set an association with a manipulation device.
The “Manipulation Device Id” matches the number of a present manipulation device. The first one is numbered 0, the second 1, and so on according to the configuration authored in MiddleVR-Config.
Activating/deactivating this component automatically starts/stops the manipulation of the physics body.
The entry “Attach Point Type” lets you select how to attach a manipulation device to this object:
- GEOMETRIC_CENTER: the geometric center of the object,
- AABB_CENTER: the center of the axis-aligned bounding box of the object, in other words the construction center,
- ARBITRARY_POINT: any point.
When selecting ARBITRARY_POINT, an offset must be given to locate the arbitrary point relative to the object frame: “Offset Translation” and “Offset Rotation” move the arbitrary point, whilst a Unity gizmo shows its location when the object is selected. If the gizmo is not visible, it may be worth increasing its size with the “Gizmo Sphere Radius” parameter.
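As an illustration of the geometry (not MiddleVR API code), the world location of an arbitrary attach point is the object's position plus its local offset rotated into world space. A minimal Python sketch, assuming a yaw-only rotation in Unity's left-handed convention:

```python
import math

def rotate_y(v, degrees):
    """Rotate a 3D vector around the Y axis (left-handed, Unity-style:
    a positive rotation takes +Z towards +X)."""
    a = math.radians(degrees)
    x, y, z = v
    return (x * math.cos(a) + z * math.sin(a),
            y,
            -x * math.sin(a) + z * math.cos(a))

def attach_point_world(obj_position, obj_yaw_degrees, offset_translation):
    """World position of the attach point = object position + local offset
    rotated by the object's orientation."""
    ox, oy, oz = rotate_y(offset_translation, obj_yaw_degrees)
    px, py, pz = obj_position
    return (px + ox, py + oy, pz + oz)

# An object at (1, 0, 0), rotated 90 degrees around Y, with a local offset
# of (0, 0, 2) along its own forward axis:
print(attach_point_world((1.0, 0.0, 0.0), 90.0, (0.0, 0.0, 2.0)))
```

In the real component the “Offset Rotation” is a full orientation, but the principle is the same composition of the object's transform with the local offset.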
11.1.5.3 11.1.5.3: Constraints for rigid bodies
11.1.5.3.1 11.1.5.3.1: Important notes for the usage of constraints
Like rigid bodies, constraints cannot be added or removed dynamically. In addition, they cannot be deactivated, with one exception: the “fixed” constraint.
It is not possible to define constraints with limitless values. For example, you cannot define a sliding constraint that allows unlimited translation: it must be limited to a range.
Constraints link two objects together, but when the “connected body” is null, the constraint is linked with the world.
Values that define a constraint, such as positions or axes, are expressed in the frame of the first rigid body (i.e. the body that owns the constraint).
You might need to disable collisions between constrained bodies to allow their interpenetration.
11.1.5.3.2 11.1.5.3.2: Ball-and-socket constraint
Create a ball-and-socket constraint between the “GameObject” this component belongs to and another rigid body.
The available settings are:
| Property | Description |
|---|---|
| Connected body | The peer object to create the constraint with. |
| Anchor | Position of the ball (in this object frame). |
| Gizmo sphere radius | The radius of the gizmo sphere. Raise this value until you see a green sphere at the anchor position. |
Gizmos will be displayed only if the “GameObject” this component belongs to is selected in Unity and this component is not collapsed.
11.1.5.3.3 11.1.5.3.3: Cylindrical constraint
Create a cylindrical constraint between the “GameObject” this component belongs to and another rigid body.
The available settings are:
| Property | Description |
|---|---|
| Connected body | The peer object to create the constraint with. |
| Anchor | A point crossed by the axis of the cylinder (in this object frame). |
| Axis | The axis of the cylinder (in this object frame). |
| Angular limits | Minimum and maximum values of the rotation (in degrees). |
| Angular zero position | The angle value at start (in degrees). |
| Linear limits | Minimum and maximum values of the linear displacement. |
| Linear zero position | The position value at start. |
| Gizmo sphere radius | The radius of the gizmo spheres. Raise this value until you see a green sphere at the anchor position and roughly at linear limits. |
| Gizmo line length | The length of the line drawn along the axis. Raise this value until you see a line. |
Gizmos will be displayed only if the “GameObject” this component belongs to is selected in Unity and this component is not collapsed.
11.1.5.3.4 11.1.5.3.4: Fixed constraint
Create a fixed constraint between the “GameObject” this component belongs to and another rigid body.
Only one parameter is required:
| Property | Description |
|---|---|
| Connected body | The peer object to create the constraint with. |
Gizmos will be displayed only if the “GameObject” this component belongs to is selected in Unity and this component is not collapsed.
11.1.5.3.5 11.1.5.3.5: Helical constraint
Create a helical constraint between the “GameObject” this component belongs to and another rigid body.
The available settings are:
| Property | Description |
|---|---|
| Connected body | The peer object to create the constraint with. |
| Anchor | A point crossed by the axis of the screw (in this object frame). |
| Axis | The axis of the screw (in this object frame). |
| Limits | Minimum and maximum values of the rotation (in degrees). |
| Zero position | The angle value at start (in degrees). |
| Gizmo sphere radius | The radius of the gizmo sphere. Raise this value until you see a green sphere at the anchor position. |
| Gizmo line length | The length of the line drawn along the axis. Raise this value until you see a line. |
Gizmos will be displayed only if the “GameObject” this component belongs to is selected in Unity and this component is not collapsed.
11.1.5.3.6 11.1.5.3.6: Hinge constraint
Create a hinge constraint between the “GameObject” this component belongs to and another rigid body.
The available settings are:
| Property | Description |
|---|---|
| Connected body | The peer object to create the constraint with. |
| Anchor | A point crossed by the axis of the hinge (in this object frame). |
| Axis | The axis of the hinge (in this object frame). |
| Limits | Minimum and maximum values of the rotation (in degrees). |
| Zero position | The angle value at start (in degrees). |
| Gizmo sphere radius | The radius of the gizmo sphere at the anchor position. Raise this value until you see a sphere. |
| Gizmo line length | The length of the line drawn along the axis. Raise this value until you see a line. |
Gizmos will be displayed only if the “GameObject” this component belongs to is selected in Unity and this component is not collapsed.
11.1.5.3.7 11.1.5.3.7: Planar constraint
Create a planar constraint between the “GameObject” this component belongs to and another rigid body.
The available settings are:
| Property | Description |
|---|---|
| Connected body | The peer object to create the constraint with. |
| Axis 0 and Axis 1 | Two axes (in this object frame) that define a plane. |
| Gizmo line length | The length of the lines drawn along the two axes. Raise this value until you see the lines. |
Gizmos will be displayed only if the “GameObject” this component belongs to is selected in Unity and this component is not collapsed.
11.1.5.3.8 11.1.5.3.8: Prismatic constraint
Create a prismatic constraint between the “GameObject” this component belongs to and another rigid body.
The available settings are:
| Property | Description |
|---|---|
| Connected body | The peer object to create the constraint with. |
| Axis | The axis to slide along (in this object frame). |
| Limits | Minimum and maximum values of the translation. |
| Zero position | The translation value at start. |
| Gizmo sphere radius | The radius of the gizmo spheres: at the center of this object and roughly at the translation limits. Raise this value until you see spheres. |
| Gizmo line length | The length of the line drawn along the axis. Raise this value until you see a line. |
Gizmos will be displayed only if the “GameObject” this component belongs to is selected in Unity and this component is not collapsed.
11.1.5.3.9 11.1.5.3.9: Universal-Joint constraint
Create a universal-joint constraint between the “GameObject” this component belongs to and another rigid body.
The available settings are:
| Property | Description |
|---|---|
| Connected body | The peer object to create the constraint with. |
| Axis 0 | A first axis (in this object frame) for this U-joint constraint. |
| Axis 1 | A second axis (in this object frame) for this U-joint constraint. |
| Gizmo sphere radius | The radius of the gizmo sphere at the anchor position. Raise this value until you see a green sphere. |
| Gizmo line length | The length of the lines drawn along the two axes. Raise this value until you see the lines. |
Gizmos will be displayed only if the “GameObject” this component belongs to is selected in Unity and this component is not collapsed.
11.1.5.4 11.1.5.4: Managing and visualizing collisions
11.1.5.4.1 11.1.5.4.1: Managing collisions
It may be useful to disable collisions for pairs of objects: to enable some movements with constraints or to ease object manipulations in crowded zones.
We provide the “VRPhysicsDisableCollisions” component.
Only one parameter is required:
| Property | Description |
|---|---|
| Connected body (*) | The peer object to disable collision with. |
(*) If the “connected body” is null, this body will disable every collision with every rigid body.
11.1.5.4.2 11.1.5.4.2: Visualizing collisions
We provide the “VRPhysicsShowContacts” component, which can display collisions between two rigid bodies. It needs to be part of a “GameObject”, so create one “GameObject” and add this component to it.
The available settings are:
| Property | Description |
|---|---|
| Object at contact (*) | A GameObject that will be instantiated at each contact point to show it. |
| Max contacts nb (**) | Sets the maximum number of contact points to be displayed. |
| Translation and Rotation (***) | Apply a translation and rotation to the instantiated GameObjects. |
| Ray debug (****) | A green line that originates from the contact point position and follows its normal. |
(*) It is suggested to use a prefab here. For example, we created a prefab called MVR_PhysicsContactPoint that simply displays a red cylinder. To keep good performance, you should use low-poly objects with basic shading.
(**) Note that the value is limited internally to 512 in IPSI.
(***) These settings can be very helpful because contact points provide only two values: a position and a normal. We try to compute a rotation between the Y axis (i.e. 'Vector3.up') and the normal, so you should build your contact-point mesh along the Y axis or use the rotation setting.
(****) As the name suggests, this setting is mainly intended for debugging. Internally, it uses Unity gizmos, so it will be displayed only in the Unity Scene view and will very likely induce a performance penalty.
Note that contacts are expressed with two points, one for each involved object (because the normals differ, and IPSI also sends contacts that are about to happen because of surface proximity). We arbitrarily chose to display values from the first body only.
Finally, since contact detection implies a performance penalty, you can deactivate all detection with the VRDeactivateAllPhysicsContacts script. It has to be added to only one “GameObject”.
11.1.6 11.1.6: Sample scripts
The folder “Assets/MiddleVR/Scripts/Samples/Physics” contains samples that can be used to write your own scripts.
11.1.6.1 11.1.6.1: “Change Attached Physics Body IPSI”
Shows how to change the physics body that is manipulated by a manipulation device. To use this sample, press the keyboard keys H (i.e. “haptics”) and C (i.e. “change”) to iterate through all the rigid bodies. Each one will become manipulated in turn and a message will be output to the logs.
Note that static or frozen rigid bodies cannot be manipulated.
This sample also illustrates how to use several vrCommands:
- “Haption.IPSI.GetManipulationDevicesNb”,
- “Haption.IPSI.GetManipulationDeviceName”,
- “Haption.IPSI.AttachManipulationDeviceToBody”,
- “Haption.IPSI.DetachManipulationDevice”,
- “Haption.IPSI.DetachBodyFromAManipulationDevice”,
- “Haption.IPSI.IsManipulationDeviceAttachedToABody”,
- “Haption.IPSI.IsBodyAttachedToAManipulationDevice”,
- “Haption.IPSI.GetIdOfManipulationDeviceAttachedToBody”,
- “Haption.IPSI.GetIdOfBodyAttachedToManipulationDevice”.
Only one parameter is required:
| Property | Description |
|---|---|
| Manipulation Device Id | Id of the manipulation device that will alternately manipulate rigid bodies. |
11.1.6.2 11.1.6.2: “Apply force/torque sample”
Shows how to apply a force or a torque on the physics body this component is a member of. To use this sample, press the keyboard key H (i.e. “haptics”) and F (i.e. “force”) or T (i.e. “torque”). In addition, pressing a SHIFT key will apply the opposite force or torque.
The available settings are:
| Property | Description |
|---|---|
| Force | The force to be applied. |
| Torque | The torque to be applied. |
11.1.6.3 11.1.6.3: “Device buttons status sample”
Shows how to track button states: a message is printed when a button of a Haption device is pressed or released.
11.1.7 11.1.7: Tips and tricks
11.1.7.1 11.1.7.1: Error messages
The following message is printed in logs:
[X] PhysicsBody: No PhysicsManager found.
It indicates that the physics manager was not loaded, probably because the MiddleVR Haption driver was not loaded. Check that you provided the right configuration file and that it contains a Haption haptic device.
11.1.7.2 11.1.7.2: Haptics do not work anymore after a click on MiddleVR-Config
MiddleVR-Config takes exclusive control of IPSI when its window gets focus. Hence if you click on MiddleVR-Config while a demo is running, the demo will lose access to IPSI.
11.1.7.3 11.1.7.3: MiddleVR-Config window takes time to get focus
A connection to IPSI is attempted when the window gets focus, which may be slow. When focus is lost, the connection is closed.
11.1.7.4 11.1.7.4: The simulation takes a long time to start
Big geometries (such as big cubes) take more time to load than small ones. You can try to shrink their size (use the scale factor of the FBX importer).
11.1.7.5 11.1.7.5: Huge memory usage
Do you respect the scale of real objects in your simulation? Coordinates in Unity are expressed in meters, so you must ensure that your objects are not too big compared to reality.
Another solution is to raise the value of “Resolution” (in the Haption driver) to get a coarser tessellation, because the physics world is discretized by IPSI.
Note that huge memory consumption might also be the cause of a slow-starting simulation.
11.1.7.6 11.1.7.6: Poor performance
Try to disable collision detection between objects whose collisions are not worth tracking.
11.1.7.7 11.1.7.7: The manipulated rigid body seems to stick with collided objects
Collisions of the manipulated body can indeed lead to a high amount of CPU computation, which freezes the simulation for a short period of time.
This case typically appears when the manipulated body is a big cube and you try to make one of its faces collide with another cube: the colliding surface is large. Prefer using the cube corners to collide with objects; more generally, use smaller colliding surfaces if possible.
12 12: Advanced topic - Configuring a tracking system
12.1 12.1: Introduction
Make sure to read the article “Understanding tracking devices”.
12.2 12.2: VR system origin
The difficult part of understanding the configuration of a tracking system is understanding how the data from the tracker are related to the real world.
Position
The first thing to decide is where, in the real world, you want to set the origin of your VR system. This is an arbitrary point in space that has (0,0,0) as coordinates.
This could be a point on the floor of your CAVE, the center of the 3DTV, a point on your desk, or any point in space.
“Default” origin
If you are using only one tracking system, it is often easier to just use the default origin of the trackers. For example, for the Kinect, the origin is the position of the sensor itself: if a user were standing exactly at the Kinect, their reported position would be zero. For a Razer Hydra, the origin is the base: if the Hydra trackers could be placed exactly inside the base, their position would be exactly (0,0,0).
For any given tracking system, there is a position in space where the tracker will report (0,0,0).
You can decide to keep this point in space as the origin of your VR system, or decide to set the origin somewhere else because it is more convenient.
Neutral orientation
You also have to decide what will be the “neutral orientation”, that is, an arbitrary rotation in space where Yaw = Pitch = Roll = 0.
It is often easier to think about your “neutral orientation” in terms of the Front, Right and Up vectors (represented in MiddleVR respectively as +Y, +X, +Z ).
The Up vector is generally easy to set: it is the opposite of the direction of gravity.
But the Front and Right vectors can be arbitrarily set. There is usually a natural orientation and it shouldn’t be hard to decide on one.
Just make sure to be consistent during the whole configuration process.
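Since MiddleVR's Front/Right/Up convention maps to +Y/+X/+Z (right-handed) while Unity uses +X right, +Y up, +Z front (left-handed), positions are related by a simple axis swap. A minimal sketch of this mapping (an assumption for positions only; orientations require more care and are not covered here):

```python
def middlevr_to_unity(p):
    """MiddleVR: +X right, +Y front, +Z up (right-handed).
    Unity:      +X right, +Y up,    +Z front (left-handed).
    Swapping the Front and Up components converts a position between
    the two bases (the swap also flips the handedness, as required)."""
    x, y, z = p
    return (x, z, y)

# A point one meter in front of the MiddleVR origin lies one meter
# along Unity's +Z axis:
print(middlevr_to_unity((0.0, 1.0, 0.0)))
```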
“Default” orientation
As for the position, there is a default orientation inherent to your tracking system. You can decide to keep this neutral orientation as the neutral orientation of your VR system, or decide to change it because it is more convenient.
12.3 12.3: Moving the origin of a tracker
There are two reasons why you might want to modify the origin of a tracker:
- because it is more convenient to have the origin at another place,
- you are using multiple trackers with different inner origins. Moving the origins of each tracker to one common origin will ensure coherent and homogeneous tracking data among all the devices.
There are two ways to modify the origin of a tracker:
- move or reconfigure the tracker itself,
- use MiddleVR 3D nodes
Using MiddleVR 3D nodes to move the origin of a tracker
MiddleVR offers a quick and easy way to move the origin of a tracker.
You simply have to create a 3D node that will represent the origin of your tracker, and place it with respect to the origin of your VR system.
Then all objects that are tracked by this particular tracker should be represented as 3D nodes that are children of this origin node.
For example if you open the “Kinect” predefined configuration, you will see the following hierarchy:
Center Node
Kinect0.RootNode
Kinect0.User0.Head_Node
Kinect0.User0.Right_Hand_Node
etc.
If you move the Kinect0.RootNode, all the child nodes will be offset too.
Calibrating the tracker origin
You can either manually move the tracker origin (Kinect0.RootNode in the example above), or use one of the calibration features of MiddleVR: “Calibrate Parent”.
Suppose you are using a Razer Hydra. You could create the following hierarchy:
Center Node
HydraBase
HandNode
etc.
Now let's say that your Hydra base is not placed at the origin of your VR system. For example, you decided that the origin of your VR system is the middle of a table, where all the interactions will happen, and you placed the Hydra base out of the way so that it does not disturb the interactions. This means that if you move the hand tracker to the origin, the data reported by MiddleVR will not be (0,0,0), but the actual distance from the hand to the base.
The first option is to manually measure the distance between the Hydra base and the actual origin of the VR system and enter that manually as the coordinates of the HydraBase 3D node.
The other option is to position one of the Hydra trackers, for example the one that you assigned to the HandNode, at the origin of the VR system. You can then simply select “Calibrate Parent” in the “Calibration” options on top of the 3D node properties, and press “Calibrate”. This will automatically move the HydraBase 3D node so that your HandNode is positioned at the origin and the position reported by MiddleVR for the HandNode is (0,0,0).
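Ignoring orientation, the idea behind “Calibrate Parent” boils down to simple vector arithmetic: place the parent (base) node at the opposite of the position currently reported for the child. A hedged Python sketch of this (not the MiddleVR implementation, positions only):

```python
def calibrate_parent(reported):
    """Given the position reported by the tracker while the tracked object
    physically sits at the VR-system origin, return where to place the
    parent (base) node so that the child reads (0,0,0)."""
    return tuple(-c for c in reported)

reported = (0.8, 0.0, 0.3)          # hand-to-base distance measured by the Hydra
base = calibrate_parent(reported)   # where to place the HydraBase 3D node
child = tuple(b + r for b, r in zip(base, reported))
print(base, child)                  # the child now sits at the VR-system origin
```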
Offset to the tracked object
Set Neutral Transformation / Position / Orientation
13 13: Advanced topic - Understanding stereoscopy
From Wikipedia:
Stereoscopy (also called stereoscopics or 3D imaging) is a technique for
creating the illusion of depth in an image by means of stereopsis for
binocular vision. [...]
Most stereoscopic methods present two offset images separately to the left
and right eye of the viewer. These two-dimensional images are then combined
in the brain to give the perception of 3D depth.
There are three things to consider:
- How are stereoscopic images generated?
- How are stereoscopic images transmitted?
- How are stereoscopic images displayed?
13.1 13.1: How are stereoscopic images generated?
The simplest way to generate a pair of stereoscopic images is to simply create two cameras and offset them by the same distance as the distance between your two eyes.
Unfortunately, things are not so simple. This would only work if you had a screen in front of each eye and only looked at infinity: this way, the axes of your eyes and the axes of the cameras would always be parallel.
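For completeness, the naive setup described above is just a lateral offset of two parallel cameras by the interocular distance (the 0.065 m value below is a typical assumption, not a MiddleVR default):

```python
# Naive stereo pair: two parallel cameras separated by the interocular
# distance (IPD). As noted above, this is only correct for a screen in
# front of each eye with the gaze at infinity.
IPD = 0.065  # assumed interocular distance, in meters

def eye_positions(head_position):
    """Left and right camera positions, offset by half the IPD each."""
    x, y, z = head_position
    return (x - IPD / 2, y, z), (x + IPD / 2, y, z)

left, right = eye_positions((0.0, 1.7, 0.0))
print(left, right)
```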
13.1.1 13.1.1: 3D screens
Currently, one of the most common ways of displaying 3D is to show both images on a single 3D screen and wear glasses to separate the images.
If you have read “Understanding head-tracking and perspective”, you already know that the 3D screen is like a window on the virtual world. This means that what your eyes see is constrained by that window. The parts of the virtual world that you can see depend on the position of your eyes with respect to this window.
There are two ways to set up stereoscopy for a 3D screen in MiddleVR:
- Using a “Stereoscopic Camera”,
- Using a “Stereoscopic Camera” combined with a “Screen”.
With a stereoscopic camera, you can configure the field of view and the screen distance. This assumes that the eyes of the viewer are always exactly facing the middle of the screen, at a distance equal to the convergence distance. The size of the screen is determined by the field of view of the camera and the “screen distance” parameter of the stereoscopic camera.
If you set up a screen for your stereoscopic camera, you can specify more precisely the exact size and position of the screen with respect to the viewer.
You can then add a tracker on the stereoscopic camera so that when the viewer moves, the perspective is always correct.
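The geometry behind these two setups can be sketched with a little trigonometry (illustrative math only, not MiddleVR's code): the screen size implied by a stereoscopic camera follows width = 2 × distance × tan(fov/2), and once a physical screen and a tracked eye position are known, the viewing frustum becomes asymmetric as soon as the eye is not centered:

```python
import math

def implied_screen_size(h_fov_deg, screen_distance, aspect_ratio):
    """Screen size implied by a stereoscopic camera: the screen spans the
    horizontal field of view at the given distance, so
    width = 2 * distance * tan(fov / 2)."""
    width = 2.0 * screen_distance * math.tan(math.radians(h_fov_deg) / 2.0)
    return width, width / aspect_ratio

def off_axis_extents(eye_x, eye_y, half_width, half_height, distance):
    """Tangent-space frustum extents (left, right, bottom, top) for an eye
    offset from the center of a screen located at `distance`. With the eye
    centered the frustum is symmetric; any offset makes it asymmetric,
    which is why head-tracked perspective needs a Screen description."""
    return ((-half_width - eye_x) / distance,
            (half_width - eye_x) / distance,
            (-half_height - eye_y) / distance,
            (half_height - eye_y) / distance)

w, h = implied_screen_size(60.0, 2.0, 16.0 / 9.0)
print(round(w, 3), round(h, 3))                   # implied screen size (m)
print(off_axis_extents(0.0, 0.0, 1.0, 1.0, 2.0))  # centered eye: symmetric
print(off_axis_extents(0.3, 0.0, 1.0, 1.0, 2.0))  # offset eye: asymmetric
```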
13.1.2 13.1.2: 3D projectors
A 3D projector can be considered as a 3D screen, with the size of the screen being the size of the projected image.
13.1.3 13.1.3: HMDs
In the simplest cases, HMD display systems are considered as 3D screens (Sony HMZ-T1, T2).
In other cases, the two screens are offset: they can be offset horizontally (NVIS SX-60), or even rotated symmetrically (NVIS SX-111).
13.2 13.2: How are stereoscopic images transmitted?
Here are the most common ways of transmitting a stereoscopic image to a stereoscopic system. These different mechanisms only relate to the way the images are transmitted from the graphics cards to the display system, and are not necessarily linked to the way they will be displayed. See “Debate” below.
From Wikipedia:
There are multiple ways to provide these separate images:
- Use dual video inputs, thereby providing a completely separate video
signal to each eye
- Time-based multiplexing. Techniques such as frame sequential
combine two separate video signals into one signal by alternating
the left and right images in successive frames.
- Side by side or top/bottom multiplexing. This method allocates half
of the image to the left eye and the other half of the image to the right eye.
The advantage of dual video inputs is that it provides the maximum resolution
for each image and the maximum frame rate for each eye. The disadvantage of dual
video inputs is that it requires separate video outputs and cables from the
device generating the content.
Time-based multiplexing preserves the full resolution per each image, but
reduces the frame rate by half. For example, if the signal is presented at 60 Hz,
each eye is receiving just 30 Hz updates. This may become an issue with accurately
presenting fast-moving images.
Side-by-side and top/bottom multiplexing provide full-rate updates to each eye,
but reduce the resolution presented to each eye.
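The trade-offs quoted above can be summarized with a little arithmetic; a sketch for a hypothetical 1920x1080 @ 120 Hz signal:

```python
# Per-eye resolution and update rate for each transmission mode described
# above (illustrative arithmetic, not tied to any particular hardware).
def per_eye(transmission, width, height, hz):
    if transmission == "dual":              # two full signals, one per eye
        return width, height, hz
    if transmission == "frame-sequential":  # alternate frames: half the rate
        return width, height, hz // 2
    if transmission == "side-by-side":      # half the horizontal resolution
        return width // 2, height, hz
    if transmission == "top-bottom":        # half the vertical resolution
        return width, height // 2, hz
    raise ValueError(transmission)

for mode in ("dual", "frame-sequential", "side-by-side", "top-bottom"):
    print(mode, per_eye(mode, 1920, 1080, 120))
```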
13.3 13.3: Debate
13.3.1 13.3.1: The “natural” way
Most of the time, an active stereo system will use a frame sequential transmission because it is simple to use the alternate frame signal directly with the active glasses.
In the same way, a dual projector passive stereo system will naturally use a dual-input transmission because you can simply plug the left input in the left projector and do the same for the right projector.
Most 3D TVs support side-by-side and/or top-bottom input directly, which makes it the easiest way to set up stereoscopy on this kind of display.
13.3.2 13.3.2: The “twisted” way
Note however that a frame sequential transmission can be split to be used as a dual-input. A dual-input transmission can also be combined in a frame sequential signal.
Basically all transmission signals can be converted to one another depending on the requirements of the VR system.
13.4 13.4: How are stereoscopic images displayed?
Once the stereoscopic images have been generated and transmitted, the display must make sure that the left image is only seen by the left eye, and the same for the right eye.
13.4.1 13.4.1: HMD
As seen in “Configuring a HMD” and “How are stereoscopic images generated”, most HMDs are made up of two screens. By design, each eye only sees one screen.
13.4.2 13.4.2: 3D Screen
On a 3D screen, both images are displayed on the same display. This means that there must be a way for each eye to see only the corresponding image.
The two most common mechanisms to achieve this involve 3D glasses. They are commonly called “active stereoscopy” and “passive stereoscopy”: in one case the 3D glasses are active, and in the other case they are passive.
This denomination is also often used for the way the stereoscopic images are transmitted, even though nowadays an active stereo signal can be used to feed a passive stereo system and vice-versa.
13.4.2.1 13.4.2.1: Active stereoscopy
Active stereo refers to a mechanism in which the left and right images are displayed sequentially on the 3D screen: frame 0 shows the left-eye image, frame 1 the right-eye image, and so on.
This also means that when frame 0 displays the left eye, the 3D glasses must hide this image from the right eye. This is accomplished by using LCD shutter glasses that can turn from transparent to opaque very quickly. While the left LCD from the glasses is transparent, the right one is opaque, and vice versa.
Those glasses are called active because they have active elements (the LCD shutters) and require batteries.
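The alternation just described follows a simple parity rule: on even frames the left image is shown and the right shutter is opaque, and on odd frames the reverse. A minimal sketch (illustrative only; real shutter timing is handled by the glasses and the sync signal):

```python
# Which eye sees a given frame in frame-sequential (active) stereo,
# and which LCD shutter must be opaque at that moment.
# Illustrative sketch, not MiddleVR code.

def active_stereo_state(frame_index):
    """Return (displayed_eye, opaque_shutter) for a frame index."""
    if frame_index % 2 == 0:
        return ("left", "right")   # left image shown, right eye blocked
    return ("right", "left")       # right image shown, left eye blocked

print(active_stereo_state(0))  # left image, right shutter closed
print(active_stereo_state(1))  # right image, left shutter closed
```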
13.4.2.2 13.4.2.2: Passive stereoscopy
Most of the time, passive stereoscopy involves polarized glasses and displays.
From Wikipedia:
A polarized 3D system uses polarization glasses to create the illusion of
three-dimensional images by restricting the light that reaches each eye,
an example of stereoscopy.
To present stereoscopic images and films, two images are projected superimposed
onto the same screen or display through different polarizing filters. The viewer
wears low-cost eyeglasses which contain a pair of different polarizing filters.
As each filter passes only that light which is similarly polarized and blocks the
light polarized in the opposite direction, each eye sees a different image. This
is used to produce a three-dimensional effect by projecting the same scene into
both eyes, but depicted from slightly different perspectives. Several people can
view the stereoscopic images at the same time.
The glasses don’t have any active element, only polarizing filters, which is why the mechanism is called “passive stereoscopy”.
13.4.3 13.4.3: 3D Projectors
The majority of 3D projectors can only do active stereoscopy. The most common way to achieve passive stereoscopy with projectors is to use two non-3D projectors, one for each eye and each correctly polarized.
13.4.4 13.4.4: How to configure MiddleVR
The way the images are displayed is completely determined by the hardware in place and is a design choice when creating the VR system.
MiddleVR can only be configured to specify how the images are generated and transmitted.
14 14: Advanced topic - How to configure a VRPN server
Make sure to read: What is VRPN?
14.1 14.1: Configuring a VRPN server
14.1.1 14.1.1: The general case
Locate the 'vrpn_server.exe' file. It is typically found in:
C:/Program files (x86)/MiddleVR/bin/vrpn
In the same folder you should find the 'vrpn.cfg' file. This is the configuration file that must be edited to specify which devices should be accessed and how.
Edit this file. You will see a large file in which every line starts with a '#':
#vrpn_Tracker_Intersense Tracker0 AUTO IS900Time
#vrpn_Tracker_Dyna Tracker0 1 /dev/ttyS0 19200
#vrpn_Tracker_Flock Tracker0 2 COM1 38400 1 N -x
As VRPN supports a lot of different devices, you have to identify your device in the list. When you have identified your device, uncomment the corresponding line by removing the ‘#’.
- The first word on the line is the driver type to use,
- the second word is the name of this device, which you will use for the client connection.
This name, typically Tracker0, can be changed to better match the semantics of your tracker. This is also required if you use several trackers with the same VRPN server. You could for example choose “HeadTracker” or “HandTracker”.
Remember this name (and note that it is case sensitive): it will be used by the client when connecting to the server to identify the tracker (see configuring a VRPN tracker).
- the rest of the line represents options for that specific driver.
There is a lot of documentation inside the file that will help you identify the device and its options.
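For example, to expose the InterSense tracker shown above under a more meaningful name, you would uncomment its line and rename the device (this is only an illustration; check the documentation inside vrpn.cfg for your device's actual options):

```
# Before (commented out, as shipped):
#vrpn_Tracker_Intersense Tracker0 AUTO IS900Time

# After (uncommented and renamed):
vrpn_Tracker_Intersense HeadTracker AUTO IS900Time
```

The client would then refer to this device as HeadTracker when connecting to the server (see configuring a VRPN tracker).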
14.1.2 14.1.2: Specific devices
Here are specific articles about configuring popular devices:
14.1.3 14.1.3: Running the VRPN server
Once you have correctly configured vrpn.cfg, you just have to run the VRPN server by double-clicking on the vrpn_server.exe file.
This will open a console window, which might be empty or display information depending on the configured devices:
14.1.4 14.1.4: Troubleshooting VRPN
See the following article: Troubleshooting VRPN.
15 15: Advanced topic - How to show a viewport on a specific display
Generally speaking, every time you plug a cable into your graphics card, the graphics driver creates a new “Display”. Each display is shown in a different part of the global Windows desktop.
You can check that in your NVidia drivers. For example, there are two displays on this machine:
We can see that display 1 (Apple Laptop Display) is the Main display.
Note: The main/primary display is always positioned at the origin of the desktop. This means its position is always (0,0).
Now the second display (Rift DK = Oculus Rift HMD display) is displayed here to the left of display 1. You can drag the icons in the view to change the position of any display.
Any display that is not the main display has a position relative to the main display. For example, here we can see that display 2 has an x position of -1280. Since the width of this display is exactly 1280, the right side of display 2 touches display 1.
You can drag the display so that it is to the right of display 1:
You can notice that now its x position is 1280, which is also the width of display 1.
15.1 15.1: Displays in MiddleVR
You will find the exact same diagram in the MiddleVR configuration tool, in the Viewports tab:
You can see on the left side the “System Displays” information. You can click on each display to see its information (geometry, refresh rate etc).
15.2 15.2: Viewports in MiddleVR
Now if you want to show a viewport on the first display (Apple Laptop Display), you need to make sure that the coordinates and size of the viewport cover that display:
Notice that the Top and Left values of the viewport are 0 because “Display 1” is the main display, and its size is 1280×800, the same size as the display.
If you want the viewport to be displayed on the second screen, you need to change the position and size to cover the second display:
The Left value of the viewport is now 1280, because that’s where the second display is on the desktop.
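The relationship between display positions and viewport coordinates can be summarized with a small calculation. This sketch (illustrative only; the geometry values match the example above) derives the Left/Top values a viewport needs in order to cover a given display:

```python
# Compute the viewport geometry needed to cover a given display.
# Displays are described by their position on the Windows desktop,
# where the main display is always at the origin (0, 0).
# Illustrative sketch; values match the example in the text.

def viewport_for_display(display):
    """A viewport covering a display uses the display's desktop geometry."""
    return {
        "Left":   display["x"],
        "Top":    display["y"],
        "Width":  display["width"],
        "Height": display["height"],
    }

main_display   = {"x": 0,    "y": 0, "width": 1280, "height": 800}
second_display = {"x": 1280, "y": 0, "width": 1280, "height": 800}  # to the right

print(viewport_for_display(main_display))    # Left=0: main display is at the origin
print(viewport_for_display(second_display))  # Left=1280: starts where display 1 ends
```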
16 16: FAQ / Troubleshooting
The troubleshooting section has been moved to our online knowledge base: http://www.middlevr.com/kb.
17 17: Known limitations and bugs
17.1 17.1: MiddleVR for Unity
- A camera can only be assigned to one viewport.
- Skyboxes are not rendered correctly when using stereoscopic rendering. You can replace the Skybox with a giant sphere or cube.
- Active stereo cannot have an anti-aliased rendering with deferred rendering.
- When using Active stereo, post-processing effects might not work. See our online knowledge base for more information: http://www.middlevr.com/kb/vr-compliant-post-processing-effects-in-unity/
- You cannot use Unity’s keyboard/mouse management while using active stereo rendering. You should use MiddleVR’s keyboard and mouse management.
- Some shaders, like the water shader, don’t work correctly in stereoscopy or with asymmetric cameras. Those shaders should be modified to take into account the actual position of the eyes. With asymmetric or stereoscopic cameras, the camera’s frustum is offset and the shaders wrongly assume that the eye is centered.
- Unity’s GUI system is problematic when using multiple cameras, especially when using stereoscopy. We are working on a workaround. Contact us for more information.
18 18: Revision history
18.1 18.1: Upgrading to MiddleVR 1.6
From MiddleVR 1.4
The applications that you built using MiddleVR 1.4 will work without any modification with MiddleVR 1.6.
If you want to benefit from the new features (new interactions etc), you will need to upgrade the MiddleVR Unity package.
From 1.0 or 1.2
You will need to upgrade the MiddleVR Unity package for each of your application and re-export it.
18.2 18.2: Version 1.6.2 changelog
- MiddleVR
- New: Raised the maximum resolution of Free and HMD licenses to 2160x1200 to support the Oculus Rift CV1 and HTC Vive.
- New: Ship 64-bit version of MiddleVRConfig.exe and MiddleVRDaemon.exe, which are now used by default.
- Fix: Anti-aliasing with DX11.
- New: Add a way to change the coordinate frame in several tracking systems.
- New: Compute geometry of physics rigid bodies from their children, if activated.
- New: Let the user select how a Haption haptic device should be attached to a physics rigid body.
- New: Script VRPhysicsDisableAllCollisions can now disable/re-enable all collisions for MiddleVR physics system (i.e. currently for Haption devices) by activating/deactivating this Unity component at runtime.
- New: Added the “–mvr-logfile-path” command line option to output MiddleVR logs to a custom file.
- Fix: Avoid crash of VRDaemon when its server port cannot be listened on.
- Fix: Management of scales for physics constraints.
- Fix: Merging child geometries of physics rigid bodies could lead to crashes with Haption devices. The merging mechanism is now replaced with a system that only keeps geometries aggregated until they are given to an underlying physics engine.
- Devices
- New: Update Oculus SDK to version 1.4, with support for the Oculus Touch.
- New: Add support for OpenVR (used by HTC Vive HMDs) version 1.0.0.
- New: Update Haption SDK to version 2.20.15.
- New: Update zSpace SDK to version 4.0.0.3.
- New: Native support of Vicon tracking systems.
- New: Added support of ART fingertracking.
- New: Update Leap Motion SDK to version 2.3.1.
- New: Remove the beta status of the Leap motion SDK 2.
- New: Add support of the internal optimized mode for HMDs of the Leap Motion SDK2.
- Fix: Enable support of the Colibri driver on 64-bit platforms.
- Fix: Enable support of the Spacepoint Fusion driver on 64-bit platforms.
- Fix: Rename property ‘DefaultNbVibrations’ of the zSpace driver to ‘DefaultVibrationsNb’. Note: previous configuration files that use ‘DefaultNbVibrations’ will have the corresponding value loaded but it will then be saved as ‘DefaultVibrationsNb’.
- Unity
- Fix: Leave play mode when a compilation is automatically launched because the user saved a script they modified. In this case, a popup is displayed and a warning message is printed in the Unity console. This workaround avoids unexpected behaviors and the printing of many null references.
18.3 18.3: Version 1.6.1 changelog
- MiddleVR
- New: DirectX11 support.
- New: Command line parameter to set which DirectX DLL to load as the system's one. Example:
Simu.exe [args...] --mvr-proxydll-original-dll-path="C:\\Windows\\System32\\d3d9.dll"
- Fix: Multi-GPU support with heterogeneous cluster configurations.
- Devices
- New: Support of Haption haptics devices.
- New: Add support for the Oculus VR SDK version 5.0.1.
- Fix: Reduced latency and judder with Oculus DK2.
- New: Update Leap Motion SDK2 to version 2.2.5 (26752).
- New: Add IsTracked property support to trackers of the following drivers:
- Leap SDK 1 & 2.
- Oculus Rift DK2 (tell that the Rift is visible by the camera).
- Kinect Microsoft SDK 1 & 2.
- A.R.T. DTrack.
- TrackIR.
- InterSense.
- OptiTrack NatNet: IsTracked says whether the vrTracker is associated to an OptiTrack rigid body but not whether this rigid body is visible by the infrared cameras.
- Unity
- New: Unity 5 support.
- New: Setting fullscreen mode to exclusive in the PlayerSettings for Unity 4.6 and above.
- Fix: Unity 5 compatibility: Rendering plugins are now located in Plugins/x86 and Plugins/x86_64.
- Fix: Fly parameter in VRManager prefab.
- Fix: MiddleVR_FirstPersonController package correctly disables Wand navigation.
- Fix: Physicalized objects no longer accumulate speed when manipulated.
- Fix: Exception when disabling the MiddleVR menu.
- Fix: Compilation of MiddleVR scripts in MonoDevelop.
- Fix: The HandNode node now correctly keeps its position and orientation from the configuration file.
- Configurator
- Fix: Long delay when starting MiddleVR Config on some computers.
- Fix: Duplicate configurations in the configurations list.
- Immersive GUI / Web View
- Fix: Unity 5 Personal Edition now uses the MiddleVR Rendering plugin for web views.
- Changed: VRMenu web page files are now located in Assets/MiddleVR/WebAssets/VRMenu instead of Assets/MiddleVR/.WebAssets/VRMenu. Assets/MiddleVR/.WebAssets will be left unchanged when importing the new MiddleVR Unity Package, but will not be updated in new versions of MiddleVR.
- Fix: vrWidgetList now correctly handles its initial value.
- New: Added automatic exporting of Assets/WebAssets and Assets/.WebAssets for user web assets.
- Cluster
- New: Added advanced parameter ClientConnectionTimeout.
- New: Added advanced parameter
18.4 18.4: Version 1.6 changelog
This version requires new licenses! If your maintenance contract is valid, you can receive the updated license for free.
- MiddleVR
- New: “HMD” edition: You can now create an application for HMDs and deploy it on any computer without a player license.
- Free and HMD edition: Removed limit on number of cameras and screens.
- Free and HMD edition: Raised limit of viewports from 720p to 1080p.
- Free and HMD edition: Raised limit of number of viewports from 1 to 2.
- Free and HMD edition: Cluster, ForceOpenGL and Active Stereo not allowed.
- Pro licenses can create HMD applications that don’t require a player license.
- New: Multi-GPU support: Useful to get the best performances with multiple graphics cards in one computer in conjunction with clustering. Assign each Unity Player to a particular GPU.
- New: CPU Affinity: Assign each Unity Player to a particular CPU.
- New: Standard interactions:
- Navigation: Joystick, Elastic, Grab world
- Manipulation: Ray, Homer
- Virtual hand: Gogo, Direct
- New: Screen proximity warning interaction to prevent user’s physical collisions with the system’s screens.
- New: “Window always on top” option. Controls whether a window can be hidden by windows in front of it.
- New: LMX_EXTENDEDLOG is automatically set if not manually set. This allows license messages to be automatically logged in %tmp%/MiddleVR_Licence_Log.txt.
- New: The concepts of VRRootNode and CenterNode are merged into VRSystemCenterNode, which represents the origin of the physical world of a VR system.
- New: “World” positions and “VirtualWorld” positions methods refer to the same space as the Unity world (virtual world positions).
- New: “VRSystemWorld” positions methods refer to the VRSystemCenter node space (physical world positions).
- New: Documentation in HTML format.
- Fix: Scale of the world.
- Fix: MiddleVR active stereo crash when using Unity plugin AVPro.
- Devices
- New: Oculus Rift DK2 and Oculus SDK 0.4.4 native support.
- New: A.R.T. DTrack native support.
- New: Leap Motion SDK2 native support. As beta version because SDK 2.x is itself in beta.
- New: Microsoft Kinect2 native support.
- New: NaturalPoint OptiTrack native support (via NatNet).
- Fix: zSpace stereo should now set-up eyes the correct way every time.
- 64-bit: Support for ART Dtrack2, InterSense IS-900, Kinect 1, Kinect 2, Motion Analysis (beta), Oculus Rift DK1, Oculus Rift DK2, OptiTrack NatNet, LeapMotion (SDK1/2), Razer Hydra, SpaceMouse, TrackIR, VRPN, Vuzix, zSpace.
- Configurator
- New: Added Quick Links to Simulation View.
- New: Added User arguments to Simulation View.
- New: Added possibility to drop .exe or .vrx files anywhere in the window.
- New: Added multiples keyboard shortcuts.
- New: Added Configurations sorted by Categories.
- New: Panning in 3D View using right click.
- New: Various improvements in GUI layout.
- Fix: When a device was not available, closing and reopening the “Add devices” window kept adding the device.
- Immersive GUI / Web View
- New: VRWebView:
- Display any web page in your VR experience.
- Quickly create interactive menus using vrWidgets.
- Design your own HTML5 interface that can interact with your C# code.
- New: Automatic synchronization of the rendered image of a web view on the cluster.
- New: VRMenu: Extensible MiddleVR default menu.
- Unity
- Removed: Support for Unity 3.5, 4.0 and 4.1. MiddleVR now works for Unity 4.2 and newer only.
- Removed: VRShare* and VRReceive* scripts from Samples folder. Use the VRSharedValue type or the VRClusterCommand sample script instead.
- New: VRSharedValue type and VRClusterCommand sample script to easily share data and events across the cluster.
- New: When adding MiddleVR package, automatically disable Direct3D 11 (player setting useDirect3D11 = false).
- New: MVRTools.FromUnity(vrVec3/vrQuat).
- New: Experimental support for Unity 5.
- New: GameObjects with a VRActor script now have their corresponding synced vrNode3D.
- Optim: Performance improvement when dealing with many MiddleVR objects.
- New: Support particles synchronization on a cluster.
- New: Renamed namespace MiddleVRTools to MVRTools.
- New: VRManager Custom License option: Ability to checkout a custom player license.
- Optim: General performance improvement in C# wrapper.
18.5 18.5: Upgrading to MiddleVR 1.4 from 1.0 or 1.2
- Install the new MiddleVR version.
- In Unity, for each of your applications, import the new MiddleVR package.
- Re-export application.
18.6 18.6: Version 1.4 changelog
- Improvements
- MiddleVR configuration editor
- New predefined configurations:
- HMD-Oculus-Rift.vrx
- HMD-Oculus-Rift-Razer-Hydra.vrx
- HMD-Vuzix-VR920-Mono.vrx
- HMD-Vuzix-Wrap-1200-VR.vrx
- HMD-Sensics-zSight-60.vrx
- Leap-Motion.vrx
- RazerHydraHeadHand.vrx
- License
- Important note: A new license is needed for this version
- Unity
- Optimized OpenGL Quad-Buffer support
- Optimized Unity nodes updates
- Cluster
- A cluster node can now have multiple viewports
- NvidiaSwapLock fully functional
- Sample scripts to share values over cluster: MiddleVR/Scripts/Samples/: VRShare/Receive*
- LoadLevel is now correctly supported
- If configurator cannot connect to one daemon, it will kill all the successful connections
- Improved KillAll: Kills Cluster Server and kills all previously run applications
- LogInSimulationFolder: Ability to write the logs in the .exe folder (in a new folder: MiddleVRLogs/). Note that when used in a cluster system, if cluster clients use a shared network folder, all clients will be writing the logs over the network. Depending on the log level, this can slow the application down significantly.
- MiddleVR
- New: Vuzix tracker support
- New: NaturalPoint TrackIR tracker support
- New: Leap Motion support
- New: Oculus Rift support
- Free version
- Pro version and Academic version are now the same build. Depending on the license, one or the other version will be activated
- Doc
- Updated documentation.
- Integrated the advanced cluster documentation.
- Troubleshooting section has been moved online: http://www.middlevr.com/kb
- Changes
- MiddleVR configuration tool
- New: Ability to start “Pro Trial” or “Academic Trial”
- New: Configurator now warns if a camera is used on multiple viewports.
- Unity
- Disabling Unity Multi-thread rendering
- New DLL in Plugins\MiddleVR_UnityRendering
- New: Warn when application is not built with Unity Pro and is used with Force OpenGL or OpenGL Quad-Buffer. Unity Pro is required for building an application with these features
- Cluster
- New: VRDaemon prints more information on startup (version, IP address).
- When spawning a cluster client application, the execution folder was not correct. It is now set to the folder of the .exe
- MiddleVR
- Migrated to Visual Studio 2012 compiler
- With drivers older than 265, try to achieve stereo and force OpenGL conversion anyway if driver has experimental support
- Ability to disconnect drivers that only support one connection (Kinect, Oculus Rift)
- Enabled trial license in virtual machines
- Bug fixes
- MiddleVR configuration tool
- Fix: Drivers not available in free version are grayed out
- Fix: Better handling of list selection when removing a cluster node, a viewport, a configuration or a simulation
- Fix: Various GUI improvements
- Fix: Improving taskbar issues
- MiddleVR
- Fixed a memory leak
- Fix: GameTrak script doesn’t crash on exit
- Fix: Oculus randomly failed to connect
- Fix: Oculus + Kinect incompatibility
- Fix: VRPN tracker coordinate system could have wrong axis in special cases
- Fix: The position of a Camera with a Screen slowly drifted
- Fix: Better handling of Colibri’s orientation
- Doc
- Updated doc
- Fixed typos
18.7 18.7: Upgrading from 1.0 to 1.2
- Install the new MiddleVR version.
- In Unity, for each of your applications, import the new MiddleVR package.
18.8 18.8: Version 1.2.2 changelog
- MiddleVR
- Fix stereoscopy on Windows XP.
18.9 18.9: Version 1.2.1 changelog
- MiddleVR
- Fix focus bug which caused the window to appear behind the Windows taskbar.
- Fixed a bug where Side-by-Side rendering could be wrong in multi-viewport configurations and using a template camera.
- Added Kinect rotations.
- Configurator
- Added new View menu to disable display of cameras’ frustums. This allows for better clarity of the 3D view and speed improvements.
- Added options in the Node properties to select which transformation (X/Y/Z, yaw/pitch/roll) of a tracker should be applied to the node.
- Improved Help menu by adding links to documentation, class reference, tutorials, knowledge base and support website.
18.10 18.10: Version 1.2 changelog
This new version contains many improvements in ergonomics. It also includes full cluster support.
- Improvements Warning: All the logs are now stored in %tmp%/MiddleVR!
- MiddleVR configuration editor
- New “Simulations” tab:
- You can now execute your applications directly from the interface and choose which configuration to use.
- It will also manage the copy/removal of the d3d9.dll proxy automatically.
- Ability to drag and drop .exe and .vrx files in the Simulations window.
- Ability to set Wand Axis Horizontal/Vertical scale to normalize wand inputs.
- New 3D nodes calibration methods: Set Neutral Transformation, Set Neutral Position, Set Neutral Orientation, Calibrate Parent.
- Simplified the process of acquiring a license file with an activation key.
- VRPN trackers: Can now change the coordinate system configuration after the device was created.
- Added “Kill all cluster clients” button in Simulations window.
- Don’t maximize window when starting.
- New menu: Open predefined configuration. Predefined configurations include:
- Cube-5-Sides
- HMD-NVIS-SX60
- HMD-NVIS-SX111
- HMD-Sony-HMZ-T1
- Holostage
- Kinect
- SimpleStereoActive
- SimpleStereoPassive
- VirtualCluster
- Wall
- TV3D-32inch-82cm
- TV3D-46inch-117cm
- zSpace-VRPN for the zSpace station (http://www.zspace.com)
- Unity
- When importing the VRManager, automatically apply the VR player/quality settings: DefaultIsFullScreen=false, DisplayResolutionDialog=HiddenByDefault, RunInBackground=true, CaptureSingleScreen=false, VSyncCount=0 for current quality settings.
- New button in VRManager options: “Pick configuration file” opens a dialog window to look for a configuration file.
- VRWandNavigation: New parameters: NavigationSpeed, RotationSpeed.
- VRWandNavigation: New parameter “TurnAroundNode” allows choosing the pivot of rotation.
- VRFPSController: First person controller:
- In addition to the wand, you can now also move with the Unity defined horizontal/vertical/jump inputs and with the left/right/space keys.
- Ability to sidestep with the “Strafe” option.
- VRManager script exposes a public Log function so that javascripts can also use MiddleVR log facilities.
- VRManager script exposes VRWand axis/buttons values so it can be easily accessed from javascripts.
- VRManager script exposes methods IsKeyPressed, IsMouseButtonPressed and GetMouseAxisValue so that javascripts can also get that information.
- New VRManager option: DontChangeWindowGeometry. Let Unity handle the geometry of the window as without MiddleVR.
- New VRManager Option: SimpleCluster: Will automatically add ClusterObject script to objects that need to be synchronized. See documentation, section Clustering.
- VRInteractionTest.cs now demonstrates how to access wand axis values and buttons states.
- Unity4: borderless windows now require the -popupwindow argument on the command line. This is automatically added by the Config Editor in the Simulations window.
- New options: ForceQuality and ForceQualityIndex. Sometimes in a cluster environment, Unity uses a random quality. This forces the given quality index.
- New option: World Scale. Will scale the movements of VR Nodes so scene appears smaller or bigger.
- Cluster
- Faster connections.
- If a client can’t immediately connect to a master, it retries every 3s until 30s.
- New GUI option: DisableVSyncOnMaster. By default, VSync is disabled on master.
- General performance improvements
- ClusterObject script: Automatically add scripts to share position/orientation of a GameObject and optionally its children.
- New Unity VRManager Option: SimpleCluster: Will automatically add ClusterObject script to objects that need to be synchronized. See documentation, section Clustering.
- Improving NVSwapLock support.
- MiddleVR
- Floating licenses. Run the bin/lmx-serv-iminvr.exe to run the server. Make sure to open TCP & UDP ports 6200 on the server machine.
- Support for new device: Trivisio Colibri
- OpenGLQuadBuffer and ForceOpenGLConversion: improved performances.
- Anti-aliasing for active stereo and ForceOpenGLConversion modes (!! Forward rendering only!!): set MIDDLEVR_AA environment variable to 0,2,4,8,… You will have to restart a file explorer so the variable is updated. If you run your application from the GUI with the new Simulations tab, you will also need to restart the GUI every time you change the variable. If you run your application in cluster mode, you will also need to restart all the VRDaemons.
- Added: VRDeviceMgr::IsMouseButtonPressed, IsMouseButtonToggled, IsKeyPressed, IsKeyToggled. For easier mouse and keyboard handling.
- Exposing DeltaTime in the VRManagerScript so can be used in Javascripts.
- Popup error when the configuration file cannot be found.
- Licensing: License can be linked to a USB Hardware key (HASP dongle).
- Doc
- Updated documentation.
- Updated class reference.
- Explain how to access wand data from C# and JavaScript.
- Changes
- MiddleVR configuration tool
- Better handling of config load/save/run.
- You can now set the loglevel and enable the crash handler in the Viewports tab.
- Wand properties are moved into the devices list.
- The displays are now refreshed in real time. If your screen resolution is changed or if a new display is plugged, the displays layout will be updated accordingly.
- Most Recently Used configuration files.
- Display of drivers properties
- Help > View log folders
- Pre-defined configurations are now read-only
- Unity
- Player now complains when VSync has not been deactivated.
- DisableExistingCameras: Disable only camera component, not camera objects so the scripts running on them are still active.
- Framerate (FPS) computed with a better average algorithm over 0.5s.
- Exposing DeltaTime in the VRManagerScript so can be used in Javascripts.
- New options: ForceQuality and ForceQualityIndex. Sometimes in a cluster environment, Unity uses a random quality. This forces the given quality index.
- Cluster
- FirstPersonControllers are now correctly synchronized over the cluster.
- If a client can’t immediately connect to a master, it retries every 3s until 30s.
- When a device doesn’t exist on a cluster client, typically a joystick that is only connected on the master, it is automatically created on the clients.
- Improving NVSwapLock support.
- Share server time. You can access it through vrKernel::GetTime();
- Removed SoftSwapLock option; it is now always ON.
- MiddleVR
- Shipping VRPN 7.30 with Intersense support.
- Logging system description.
- Added: VRDeviceMgr::IsMouseButtonPressed, IsMouseButtonToggled, IsKeyPressed, IsKeyToggled. For easier mouse and keyboard handling.
- Bug fixes
- MiddleVR configuration tool
- Fixed: Crash when adding a new device and clicking on a device category.
- Fixed: You can now remove VRPN Tracker/Axis/Buttons.
- Fixed: Can’t remove virtual tracker category.
- Fixed: Couldn’t load files when a path contained spaces.
- Unity
- Fixed: Aspect ratio of side-by-side stereo could be wrong if using “Use Viewport Aspect Ratio” flag.
- Fixed: Viewports position could be wrong in certain conditions.
- MiddleVR
- Fixed: Joystick could get wrong values after some time.
- Fixed: Rounding errors in Camera’s near/far when setting an aspect ratio.
- Fixed: Stereo invert eyes in side-by-side mode.
- Fixed: Aspect ratio of mono cameras.
- Fixed VSync in OpenGL Quad-Buffer.
- Fixed ForceOpenGL.
- Fixed: Mouse cursor is visible even if option Show Mouse Cursor is disabled in OpenGL Quad-Buffer mode.
- Doc
- Updated doc.
- Fixed typos.
18.11 18.11: Version 1.0
28 March 2012: Version 1.0
19 19: Devices constants
19.1 19.1: Keyboard keys
Usage example: keyboard.IsKeyPressed( MiddleVR.VRK_SPACE );
VRK_A, VRK_B,... VRK_Z
VRK_0, VRK_1,... VRK_9.
VRK_F1, VRK_F2,..., VRK_F15
VRK_ESCAPE
VRK_MINUS /* - on main keyboard */
VRK_EQUALS
VRK_BACK /* backspace */
VRK_TAB
VRK_LBRACKET
VRK_RBRACKET
VRK_RETURN /* Enter on main keyboard */
VRK_LCONTROL
VRK_SEMICOLON
VRK_APOSTROPHE
VRK_GRAVE /* accent grave */
VRK_LSHIFT
VRK_BACKSLASH
VRK_COMMA
VRK_PERIOD /* . on main keyboard */
VRK_SLASH /* / on main keyboard */
VRK_RSHIFT
VRK_MULTIPLY /* * on numeric keypad */
VRK_LMENU /* left Alt */
VRK_ALTLEFT /* left Alt */
VRK_SPACE
VRK_CAPITAL
VRK_NUMLOCK
VRK_SCROLL /* Scroll Lock */
VRK_NUMPAD0, VRK_NUMPAD1,..., VRK_NUMPAD9
VRK_SUBTRACT /* - on numeric keypad */
VRK_ADD /* + on numeric keypad */
VRK_DECIMAL /* . on numeric keypad */
VRK_OEM_102 /* <> or | on RT 102-key keyboard (Non-U.S.) */
VRK_KANA /* (Japanese keyboard) */
VRK_ABNT_C1 /* /? on Brazilian keyboard */
VRK_CONVERT /* (Japanese keyboard) */
VRK_NOCONVERT /* (Japanese keyboard) */
VRK_YEN /* (Japanese keyboard) */
VRK_ABNT_C2 /* Numpad . on Brazilian keyboard */
VRK_NUMPADEQUALS /* = on numeric keypad (NEC PC98) */
VRK_PREVTRACK /* Previous Track (VRK_CIRCUMFLEX on Japanese
keyboard) */
VRK_AT /* (NEC PC98) */
VRK_COLON /* (NEC PC98) */
VRK_UNDERLINE /* (NEC PC98) */
VRK_KANJI /* (Japanese keyboard) */
VRK_STOP /* (NEC PC98) */
VRK_AX /* (Japan AX) */
VRK_UNLABELED /* (J3100) */
VRK_NEXTTRACK /* Next Track */
VRK_NUMPADENTER /* Enter on numeric keypad */
VRK_RCONTROL
VRK_MUTE /* Mute */
VRK_CALCULATOR /* Calculator */
VRK_PLAYPAUSE /* Play / Pause */
VRK_MEDIASTOP /* Media Stop */
VRK_VOLUMEDOWN /* Volume - */
VRK_VOLUMEUP /* Volume + */
VRK_WEBHOME /* Web home */
VRK_NUMPADCOMMA /* , on numeric keypad (NEC PC98) */
VRK_DIVIDE /* / on numeric keypad */
VRK_SYSRQ
VRK_RMENU /* right Alt */
VRK_ALTRIGHT /* right Alt */
VRK_PAUSE /* Pause */
VRK_HOME /* Home on arrow keypad */
VRK_UP /* UpArrow on arrow keypad */
VRK_PRIOR /* PgUp on arrow keypad */
VRK_LEFT /* LeftArrow on arrow keypad */
VRK_RIGHT /* RightArrow on arrow keypad */
VRK_END /* End on arrow keypad */
VRK_DOWN /* DownArrow on arrow keypad */
VRK_NEXT /* PgDn on arrow keypad */
VRK_INSERT /* Insert on arrow keypad */
VRK_DELETE /* Delete on arrow keypad */
VRK_LWIN /* Left Windows key */
VRK_RWIN /* Right Windows key */
VRK_APPS /* AppMenu key */
VRK_POWER /* System Power */
VRK_SLEEP /* System Sleep */
VRK_WAKE /* System Wake */
VRK_WEBSEARCH /* Web Search */
VRK_WEBFAVORITES /* Web Favorites */
VRK_WEBREFRESH /* Web Refresh */
VRK_WEBSTOP /* Web Stop */
VRK_WEBFORWARD /* Web Forward */
VRK_WEBBACK /* Web Back */
VRK_MYCOMPUTER /* My Computer */
VRK_MAIL /* Mail */
VRK_MEDIASELECT /* Media Select */
20 20: Class hierarchy