
Mixing Realities

I’ve been working on a real-time mixed reality setup, with the goal of having an easy way to modify student VR applications so they can be displayed in real time to multiple outputs simultaneously, for recording or live demonstration.  Below, Dan Mapes, a director I work with at Full Sail University, demonstrates some software I integrated to display in mixed reality.  The application shown came out of student internships and is used for visualizing database information relationships in VR.

This setup brings together a few recently developed pieces of technology.  One is the ZED, a stereo 3D depth-sensing camera made by Stereolabs.  It uses the parallax between features recognized in its two images to compute a depth value for every pixel at 1920×1080 @ 30 Hz.  The other is the recently released Vive Tracker, used to properly track the ZED camera’s position and orientation in the real world.
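As a rough illustration of the camera side, grabbing a synchronized color image and depth map might look like this with the ZED SDK’s Python bindings (pyzed).  Enum and method names vary between SDK versions, so treat this as a hedged sketch rather than the code actually running in this setup:

```python
import pyzed.sl as sl

zed = sl.Camera()
init = sl.InitParameters()
init.camera_resolution = sl.RESOLUTION.HD1080   # 1920x1080 @ 30 Hz
init.depth_mode = sl.DEPTH_MODE.PERFORMANCE     # favor frame rate over precision

if zed.open(init) != sl.ERROR_CODE.SUCCESS:
    raise RuntimeError("ZED camera failed to open")

image, depth = sl.Mat(), sl.Mat()
runtime = sl.RuntimeParameters()

while zed.grab(runtime) == sl.ERROR_CODE.SUCCESS:
    zed.retrieve_image(image, sl.VIEW.LEFT)        # RGB frame
    zed.retrieve_measure(depth, sl.MEASURE.DEPTH)  # per-pixel depth map
    # ...hand both buffers to the compositing step...
```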


The real-time 3D graphics are rendered with a perspective projection whose field of view matches that of the ZED camera’s images.  The virtual camera is also positioned and oriented in the 3D space to match the pose reported by the Vive Tracker.
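A minimal numpy sketch of those two pieces, assuming an OpenGL-style convention: a projection matrix built from a vertical field of view, and a tracked pose inverted into a view matrix.  The FOV value below is a placeholder; the real number comes from the ZED’s factory calibration.

```python
import numpy as np

def perspective(fov_y_deg, aspect, near, far):
    """OpenGL-style perspective projection built from a vertical FOV."""
    f = 1.0 / np.tan(np.radians(fov_y_deg) / 2.0)
    return np.array([
        [f / aspect, 0.0,  0.0,                          0.0],
        [0.0,        f,    0.0,                          0.0],
        [0.0,        0.0,  (far + near) / (near - far),  2.0 * far * near / (near - far)],
        [0.0,        0.0, -1.0,                          0.0],
    ])

def view_from_pose(rotation, position):
    """Invert a tracker pose (camera-to-world) into a world-to-camera view matrix."""
    view = np.eye(4)
    view[:3, :3] = rotation.T            # inverse of a rotation is its transpose
    view[:3, 3] = -rotation.T @ position
    return view

# Placeholder FOV; a real implementation reads it from the camera calibration.
proj = perspective(fov_y_deg=60.0, aspect=1920 / 1080, near=0.1, far=100.0)
```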

The ZED camera captures a typical RGB color image, but also provides a depth value per pixel.  That depth is matched to the units of the 3D virtual world.  The scales of the images rendered inside the 3D world and of those captured from the camera are mathematically tracked and kept matched, so that when Dan scales the world around him, he and the virtual objects properly show which is in front and which is behind.
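The occlusion logic itself boils down to a per-pixel depth test between the two sources.  A minimal numpy sketch, assuming both depth buffers are in the same units; the names are illustrative, with `world_scale` standing in for the scale bookkeeping described above:

```python
import numpy as np

def composite(rgb_real, depth_real, rgb_virtual, depth_virtual, world_scale=1.0):
    """Per-pixel depth test between the camera feed and the rendered scene.

    world_scale rescales the camera's metric depth into virtual-world units,
    so occlusion still resolves correctly after the user scales the world.
    Inputs are H x W (x 3) numpy arrays; all names here are illustrative.
    """
    real_in_front = (depth_real * world_scale) < depth_virtual
    return np.where(real_in_front[..., None], rgb_real, rgb_virtual)
```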

Besides using depth to mix realities, Dan is also surrounded by green curtains, which are used to key in the overarching background 3D graphics around him.  This isn’t always necessary when the depths of the virtual 3D objects are in front of the depths detected by the ZED camera.  Sometimes you may want to deliberately show more of the real world from the camera and only show virtual objects sitting on top of or in front of the real objects.
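A hedged OpenCV sketch of the green-curtain keying; the HSV bounds are rough placeholder values that in practice would be tuned to the curtains and lighting on set:

```python
import cv2
import numpy as np

def green_screen_mask(bgr_frame, low=(35, 80, 80), high=(85, 255, 255)):
    """Boolean mask of pixels falling inside the green-curtain color range."""
    hsv = cv2.cvtColor(bgr_frame, cv2.COLOR_BGR2HSV)
    green = cv2.inRange(hsv, np.array(low, np.uint8), np.array(high, np.uint8))
    return green > 0
```

Wherever the mask is true, the composite falls back to the rendered background graphics instead of the camera pixel, regardless of depth.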

Because the virtual camera is tracked in real time, it can actually be moved while the application is running, just as a camera operator might change angles.  Below is a real-time demonstration of a student VR “fear of heights” experience that I also integrated with the mixed reality setup.

The recording of Dan’s VR perspective and the mixed reality view in the first video were both captured in HD, at a combined 3840×1080 resolution.  This was just barely achieved on a modern 4-core/8-thread i7 processor: the video encoding running alongside the application processing sometimes brushed up against 100% CPU utilization during rapid movement or changes on screen.  The issue can be rectified by using a processor capable of more threads, or by using a second GPU for encoding so it does not interfere with the VR and mixed reality being rendered concurrently.  With that said, I plan on stress testing the mixed reality setup by live streaming the video capture with a slightly better computer.
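For reference, offloading the encode to a second NVIDIA GPU could look something like the following, using ffmpeg’s h264_nvenc encoder; the source file, bitrate, and ingest URL are all placeholders:

```python
import subprocess

# Hedged sketch: encode on the GPU via NVENC so the CPU stays free for the
# application. A real pipeline would feed frames from the capture source
# directly rather than from a file.
subprocess.run([
    "ffmpeg",
    "-re", "-i", "capture.mp4",                     # placeholder source
    "-c:v", "h264_nvenc",                           # NVIDIA hardware H.264 encoder
    "-b:v", "12M",                                  # placeholder bitrate for 3840x1080
    "-f", "flv", "rtmp://example.com/live/stream",  # placeholder ingest URL
])
```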