Designing Technology for the Real World Human Condition.

Designing interactive technology for the real-world human condition is incredibly hard. I often work with advanced AR/VR interfaces that assume many human faculties are working properly, but humans in the real world are not always fully equipped to meet that expectation. The above reddit post is my testimonial from helping my 97-year-old grandmother cope with quarantine in a nursing home during the COVID-19 pandemic. I essentially designed a video conferencing solution so she could properly visit, given her very special needs at her age. After creating it, I realized I had learned more about human-computer interaction than I expected: catering to someone like my grandmother, with such extreme limitations, was an extremely good learning experience.

What does it feel like to grab and scale the entire world?

Alice could tell you about Wonderland.

VR locomotion as a whole is a tricky thing: straying outside 1-to-1 tracked motion usually means inducing unexpected motion on the VR user, which often causes what is sometimes called simulation sickness by throwing off the user's vestibular system.

Most artificial or unnatural motion schemes leave the user without a proper expectation of the motion they will undergo, unless they are given time to learn those expectations. This can make many VR experiences either take longer to understand or limit their overall effective length, which is one reason many early VR experiences targeted a short runtime. Though I'd say younger users can wrap their brains around such motion interfaces faster, what could we do to alleviate unexpected motion?

One such thing is enabling motion through grabbing and pulling, which can surprisingly help the user anticipate what they are about to do. While some may get disoriented at first, the brain can often quickly connect to the expectation of grabbing and pulling through space; in my opinion, much more easily than just pressing a button or joystick to artificially move forward or rotate around an environment.

Why? Well, the user can't possibly know at first how fast a button will move them, or how fast a joystick might rotate them; they'll have to wrap their head around it first. Grabbing the world around them instead gives them an anchor to base their relative movement around.
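
To make that concrete, here is a minimal, engine-agnostic sketch of one-handed grab locomotion under my own assumptions: the rig origin is a world-space offset applied to the whole tracked play space, and the controller position is reported in that play space. None of the names come from a particular engine.

```cpp
// Minimal one-handed grab-and-pull locomotion sketch (engine-agnostic).
// Pulling the hand toward the body moves the rig forward, because the
// grabbed world point stays pinned to the hand.
struct Vec3 {
    float x, y, z;
    Vec3 operator+(const Vec3& o) const { return {x + o.x, y + o.y, z + o.z}; }
    Vec3 operator-(const Vec3& o) const { return {x - o.x, y - o.y, z - o.z}; }
};

class GrabLocomotion {
public:
    // Grab pressed: remember the world-space point the hand is holding.
    void OnGrab(const Vec3& controllerInPlaySpace, const Vec3& rigOrigin) {
        anchorWorld_ = rigOrigin + controllerInPlaySpace;
        grabbing_ = true;
    }

    // Every frame while held: offset the rig so the hand stays on the anchor.
    void OnUpdate(const Vec3& controllerInPlaySpace, Vec3& rigOrigin) {
        if (!grabbing_) return;
        Vec3 handWorld = rigOrigin + controllerInPlaySpace;
        rigOrigin = rigOrigin + (anchorWorld_ - handWorld);
    }

    void OnRelease() { grabbing_ = false; }

private:
    Vec3 anchorWorld_{0.0f, 0.0f, 0.0f};
    bool grabbing_ = false;
};
```

A two-handed version takes the same idea further: the change in distance between the two grabbing hands can drive a scale factor, which is where grabbing and scaling the entire world comes from.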

While it may be too much freedom for many applications, it's certainly powerful: being able to quickly maneuver and visualize from a desired perspective and even scale. It's safe to say that 3D spatial understanding is not just complemented in VR, but extended beyond what we are normally capable of experiencing.

Tools for computer-aided design stand to benefit from such free-form locomotion.

Teaching 3D Interactive Mathematics inside VR/AR

In the past I've taught various 3D mathematics topics related to different forms of interactive media and graphics programming, usually communicating them with more traditional methods such as paper and whiteboard drawings. I often found it difficult to help some people visualize the 3D concepts.

Introducing an interactive, collaborative 3D linear algebra teaching tool displayed in mixed reality. It was designed at Full Sail University in the VR/AR Lab to better visualize and solve 3D mathematics inside VR and AR displays. The application teaches mathematics related to VR computer science inside a virtual environment where networked users can collaborate.

Tracking Calibrations

I teach in the AR/VR Lab at Full Sail University in the Simulation and Visualization BS, where we commonly cover bringing tracking systems together so they can properly coexist within the same interactive application. This usually involves some type of calibration routine. Below are two recent projects that incorporate such calibrations to bring other tracking methods into the same tracked space as the VR HMD tracking equipment. Our goal is to equip our students with the skills and techniques necessary to prototype simulation applications, and this kind of calibration is very important for simulation-based applications such as training tools.

Both projects perform this calibration with point-based sampling: various points are sampled from both tracking systems to build the frames necessary to bring one tracking method into the other's frame, or vice versa.
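
Below is a simplified sketch of that construction, under the assumption that both systems can report positions for the same three non-collinear physical points (for example by touching shared markers with each system's tracked device). Real calibrations typically sample more points and average out noise, but the frame-building idea is the same; all names here are illustrative.

```cpp
// Simplified point-sample calibration: express a point measured by tracking
// system A in tracking system B's space, using frames built from three
// corresponding sample points seen by both systems.
#include <cmath>

struct Vec3 { float x, y, z; };

static Vec3 add(Vec3 a, Vec3 b) { return {a.x + b.x, a.y + b.y, a.z + b.z}; }
static Vec3 sub(Vec3 a, Vec3 b) { return {a.x - b.x, a.y - b.y, a.z - b.z}; }
static Vec3 scale(Vec3 v, float s) { return {v.x * s, v.y * s, v.z * s}; }
static float dot(Vec3 a, Vec3 b) { return a.x * b.x + a.y * b.y + a.z * b.z; }
static Vec3 cross(Vec3 a, Vec3 b) {
    return {a.y * b.z - a.z * b.y, a.z * b.x - a.x * b.z, a.x * b.y - a.y * b.x};
}
static Vec3 normalize(Vec3 v) { return scale(v, 1.0f / std::sqrt(dot(v, v))); }

// Orthonormal frame stored as three axes.
struct Frame { Vec3 axis[3]; };

// Build a right-handed frame from three non-collinear sample points.
static Frame frameFromPoints(Vec3 p0, Vec3 p1, Vec3 p2) {
    Vec3 xAxis = normalize(sub(p1, p0));
    Vec3 zAxis = normalize(cross(xAxis, sub(p2, p0)));
    Vec3 yAxis = cross(zAxis, xAxis);
    return {{xAxis, yAxis, zAxis}};
}

// Map a point from system A's space into system B's space:
// 1) express it in the local frame built from A's samples,
// 2) re-express those local coordinates in the frame built from B's samples.
static Vec3 mapAtoB(Vec3 p, Vec3 a0, const Frame& fa, Vec3 b0, const Frame& fb) {
    Vec3 d = sub(p, a0);
    Vec3 local = {dot(fa.axis[0], d), dot(fa.axis[1], d), dot(fa.axis[2], d)};
    Vec3 rotated = add(add(scale(fb.axis[0], local.x),
                           scale(fb.axis[1], local.y)),
                       scale(fb.axis[2], local.z));
    return add(b0, rotated);
}
```

With the A frame built from, say, the Polhemus or Leap Motion samples and the B frame from the matching Vive samples, every subsequent measurement from the first system can be mapped into the Vive's tracked space with mapAtoB.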

I integrated both student projects with software that displays them in third-person mixed reality so they can demonstrate their systems.

In the Dental Simulation below, designed by a master's game design student, a bachelor's student has brought Polhemus magnetic tracking together with HTC Vive tracking.

In this Virtual Pilot Trainer, a bachelor's student uses a Leap Motion as an interactive control panel that can be touched with the hands inside a Boeing cockpit. The Leap Motion hand tracking is then brought together with the HTC Vive tracking using this calibration.

 

Standalone Vive Tracking Unity Plugin

The Lighthouse tracking system used in the HTC Vive is very good; however, it is usually used alongside a VR HMD. Since I have applications that sometimes don't require a VR headset but still want the tracking, I built software support for using the standalone Vive Tracker without requiring an HTC Vive HMD.

This includes a calibration routine to build the Cartesian origin of a new tracking space, enabling tracking with just one tracker and one Lighthouse base station.

I also brought the software over into the Unity engine as a C++ DLL, because other methods for using the tracker cause Unity to initialize a VR back end for rendering functionality. I afterwards packaged this as a plugin utility on the Unity Store to allow development of non-VR-HMD applications that want the tracking functionality. You can find it at this Link. It currently only supports Windows x64, but it can be refactored to support other platforms if desired.
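
For reference, here is a rough sketch of the native side of that idea using the standard OpenVR C++ API: initializing as a non-scene application avoids spinning up the usual HMD-centric rendering path, and generic tracker poses can then be polled directly. This is my own minimal illustration, not the plugin's actual code; the Unity-facing DLL export layer and the origin calibration are omitted, and error handling is minimal.

```cpp
// Poll a Vive Tracker pose through OpenVR without a VR HMD.
// VRApplication_Other initializes the runtime as a utility application
// rather than a scene application that expects a headset.
#include <openvr.h>
#include <cstdio>

int main() {
    vr::EVRInitError err = vr::VRInitError_None;
    vr::IVRSystem* system = vr::VR_Init(&err, vr::VRApplication_Other);
    if (err != vr::VRInitError_None) {
        std::printf("OpenVR init failed: %d\n", (int)err);
        return 1;
    }

    vr::TrackedDevicePose_t poses[vr::k_unMaxTrackedDeviceCount];
    for (int frame = 0; frame < 1000; ++frame) {
        // Ask the runtime for the latest poses of every tracked device.
        // Raw/uncalibrated space is used here because the application builds
        // its own Cartesian origin afterwards.
        system->GetDeviceToAbsoluteTrackingPose(
            vr::TrackingUniverseRawAndUncalibrated, 0.0f,
            poses, vr::k_unMaxTrackedDeviceCount);

        for (vr::TrackedDeviceIndex_t i = 0; i < vr::k_unMaxTrackedDeviceCount; ++i) {
            if (system->GetTrackedDeviceClass(i) != vr::TrackedDeviceClass_GenericTracker)
                continue;
            if (!poses[i].bPoseIsValid)
                continue;
            const vr::HmdMatrix34_t& m = poses[i].mDeviceToAbsoluteTracking;
            // Translation lives in the last column of the 3x4 pose matrix.
            std::printf("tracker %u at (%.3f, %.3f, %.3f)\n",
                        i, m.m[0][3], m.m[1][3], m.m[2][3]);
        }
    }

    vr::VR_Shutdown();
    return 0;
}
```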

Animated Puppetry Controlled with VR.

I recently integrated mixed reality display into another interesting VR application. The puppeteer controls the virtual puppet with the Oculus Rift and is able to see and hear his audience through a web camera visible only inside VR, while communicating with them across a second display that does not show the camera image. The video above demonstrates a rendition of this puppetry with an audience, shown in a real-time mixed reality display instead of the typical second-display view of the puppet the audience would normally see.

Multi-Threaded Modules running A* Navigation Mesh based Path Searches

I spent a great deal of time researching state-space searches and the use of algorithms such as A*. Using this algorithm to find valid paths within simulated or virtual environments has become popular and somewhat of a standard; it has become part of giving artificial intelligence a sense of spatial understanding.

Some of the challenges involve properly informing and processing such state searches efficiently and effectively, depending on how much awareness is needed, at what resolution and detail, and at what frequency.

Like much else in interactive media such as simulations and games, artificial intelligence revolves around the idea of immersion.

Looking at many pathfinding solutions within simulations and games, it's obvious that a lot of applications throttle such complex searches to accommodate the other systems running during software runtime. Such searches only need to be processed on demand, or at frequencies just high enough not to break immersion with the AI being interacted with.

Not all applications require massively detailed spaces or large numbers of AI entities that need individual path planning, but this is something I am interested in building scalable systems for.

Informing the algorithm with better spatial information is important because it can result in fewer steps during the search. Such solutions have evolved over time into the now-standard navigation mesh: geometry that provides a geometric understanding of the spaces being searched.

Architectural systems such as message or event systems can trigger a search. Once a search is running, the A* algorithm can, to its benefit, proceed in incremental iterations, pausing or suspending at any iteration mid-search if the total computational cost is heavy enough to impede other operations. This solves the problem of dispatching an expensive search by splitting it across many real-time interactive frames, but it creates a dependency on the other systems that run frame to frame, which of course limits how large a search can be squeezed in alongside them and still finish quickly enough.
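
To illustrate, here is a minimal sketch of that pause-and-resume structure: a generic A* over navigation-mesh polygon indices that keeps its open set and scores between calls, with the per-call budget expressed in node expansions so the caller decides how much work fits in a frame. The graph layout and names are my own simplification, not any particular engine's navmesh format.

```cpp
// Incremental A*: the search keeps its open set and scores between calls, so a
// large search can be spread across frames by limiting expansions per call.
#include <cmath>
#include <limits>
#include <queue>
#include <vector>

struct NavNode {                                   // one navigation-mesh polygon
    float cx, cy, cz;                              // polygon center, used by the heuristic
    std::vector<std::pair<int, float>> edges;      // neighbor index, traversal cost
};

enum class SearchStatus { Running, Found, Failed };

class IncrementalAStar {
public:
    IncrementalAStar(const std::vector<NavNode>& graph, int start, int goal)
        : graph_(graph), goal_(goal),
          gScore_(graph.size(), std::numeric_limits<float>::infinity()),
          parent_(graph.size(), -1) {
        gScore_[start] = 0.0f;
        open_.push({Heuristic(start), start});
    }

    // Run at most 'maxExpansions' node expansions, then hand control back.
    SearchStatus Step(int maxExpansions) {
        while (maxExpansions-- > 0) {
            if (open_.empty()) return SearchStatus::Failed;
            int node = open_.top().second;
            open_.pop();
            if (node == goal_) return SearchStatus::Found;
            for (const auto& edge : graph_[node].edges) {
                int next = edge.first;
                float g = gScore_[node] + edge.second;
                if (g < gScore_[next]) {           // cheaper route to 'next' found
                    gScore_[next] = g;
                    parent_[next] = node;
                    open_.push({g + Heuristic(next), next});
                }
            }
        }
        return SearchStatus::Running;              // budget spent; resume later
    }

    // Walk the parents back from the goal once Step() reports Found.
    std::vector<int> ExtractPath() const {
        std::vector<int> path;
        for (int n = goal_; n != -1; n = parent_[n]) path.push_back(n);
        return {path.rbegin(), path.rend()};
    }

private:
    float Heuristic(int n) const {                 // straight-line distance to goal
        float dx = graph_[n].cx - graph_[goal_].cx;
        float dy = graph_[n].cy - graph_[goal_].cy;
        float dz = graph_[n].cz - graph_[goal_].cz;
        return std::sqrt(dx * dx + dy * dy + dz * dz);
    }

    const std::vector<NavNode>& graph_;
    int goal_;
    std::vector<float> gScore_;
    std::vector<int> parent_;
    // Min-heap on f = g + h; stale entries are tolerated and simply re-expanded.
    std::priority_queue<std::pair<float, int>,
                        std::vector<std::pair<float, int>>,
                        std::greater<std::pair<float, int>>> open_;
};
```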

Threaded searches are something I explored to contend with the idea above. If a computationally expensive search is needed, being able to rely on a dedicated thread, especially on multi-core systems, is greatly beneficial, and throttling activity between iterations of the search can still be used to let other threads operate. In essence, an expensive A* search is placed on its own thread and the time allocated to it is controlled; if a search doesn't finish within its allocated time, it can be resumed whenever desired. Another benefit is that environmental path re-planning is sometimes necessary mid-search: in that event the dedicated A* thread can be reset immediately instead of waiting for now-invalid results to finish. Dynamic changes to the environment are often the cause of needing to search for a valid path again, as are interactive changes such as human user input that demand an AI response.
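
Here is a sketch of that arrangement, reusing the IncrementalAStar class from the previous sketch: the worker thread steps the search in small batches, a time budget caps each burst, and an atomic cancel flag lets the owner throw away a search whose goal has become invalid and immediately start a new one. The 2 ms burst and 64-expansion batch are arbitrary tuning values of my own choosing.

```cpp
// Run an IncrementalAStar (see previous sketch) on its own thread, with a
// per-burst time budget and the ability to cancel and restart mid-search.
#include <atomic>
#include <chrono>
#include <mutex>
#include <thread>
#include <vector>

class ThreadedSearch {
public:
    explicit ThreadedSearch(const std::vector<NavNode>& graph) : graph_(graph) {}

    ~ThreadedSearch() {
        cancel_ = true;
        if (worker_.joinable()) worker_.join();
    }

    // Abandon whatever is in flight and start over with a new request.
    void Restart(int start, int goal) {
        cancel_ = true;
        if (worker_.joinable()) worker_.join();
        cancel_ = false;
        done_ = false;
        worker_ = std::thread([this, start, goal] { Run(start, goal); });
    }

    bool Done() const { return done_; }

    std::vector<int> TakePath() {
        std::lock_guard<std::mutex> lock(mutex_);
        return std::move(path_);
    }

private:
    void Run(int start, int goal) {
        IncrementalAStar search(graph_, start, goal);
        using clock = std::chrono::steady_clock;
        while (!cancel_) {
            auto burstEnd = clock::now() + std::chrono::milliseconds(2);
            SearchStatus status = SearchStatus::Running;
            // Expand in small batches until this burst's time budget is spent.
            while (status == SearchStatus::Running && clock::now() < burstEnd)
                status = search.Step(64);
            if (status == SearchStatus::Found) {
                std::lock_guard<std::mutex> lock(mutex_);
                path_ = search.ExtractPath();
                done_ = true;
                return;
            }
            if (status == SearchStatus::Failed) { done_ = true; return; }
            // Yield between bursts so other threads get scheduled.
            std::this_thread::yield();
        }
    }

    const std::vector<NavNode>& graph_;
    std::thread worker_;
    std::mutex mutex_;
    std::vector<int> path_;
    std::atomic<bool> cancel_{false};
    std::atomic<bool> done_{false};
};
```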

In contrast to one really complex search, executing many smaller searches on a single thread has roughly the same problem. Dedicating an A* thread to each individual search shows the potential to at least spread CPU resources evenly among searches executing simultaneously. Context switching between many scheduled threads is the primary concern; it could be combated if the complexity of each search were known ahead of time, so that search work could be partitioned evenly across the maximum number of threads the remaining hardware can support. However, it's more likely that any given search's complexity will not be known ahead of time.
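
One way to bound that, sketched below with the same incremental search, is a fixed pool: requests queue up and only a limited number of worker threads (defaulting to the hardware thread count) actually run them, rather than spawning one OS thread per request. Each job here runs its search to completion in whole batches, which is the simple case; combining this with the per-burst budgeting above is left out for brevity.

```cpp
// Bounded pool: search requests queue up, and only hardware_concurrency()
// worker threads run them, instead of one OS thread per search.
#include <condition_variable>
#include <functional>
#include <mutex>
#include <queue>
#include <thread>
#include <vector>

class SearchPool {
public:
    explicit SearchPool(unsigned workers = std::thread::hardware_concurrency()) {
        for (unsigned i = 0; i < workers; ++i)
            threads_.emplace_back([this] { Worker(); });
    }

    ~SearchPool() {
        {
            std::lock_guard<std::mutex> lock(mutex_);
            stopping_ = true;
        }
        wake_.notify_all();
        for (auto& t : threads_) t.join();
    }

    // Queue a complete search; 'onDone' receives the resulting path (empty if none).
    // The caller must keep 'graph' alive until the search completes.
    void Submit(const std::vector<NavNode>& graph, int start, int goal,
                std::function<void(std::vector<int>)> onDone) {
        {
            std::lock_guard<std::mutex> lock(mutex_);
            jobs_.push([&graph, start, goal, onDone] {
                IncrementalAStar search(graph, start, goal);
                SearchStatus status = SearchStatus::Running;
                while (status == SearchStatus::Running) status = search.Step(256);
                onDone(status == SearchStatus::Found ? search.ExtractPath()
                                                     : std::vector<int>{});
            });
        }
        wake_.notify_one();
    }

private:
    void Worker() {
        for (;;) {
            std::function<void()> job;
            {
                std::unique_lock<std::mutex> lock(mutex_);
                wake_.wait(lock, [this] { return stopping_ || !jobs_.empty(); });
                if (stopping_ && jobs_.empty()) return;
                job = std::move(jobs_.front());
                jobs_.pop();
            }
            job();  // run the whole search on this pooled thread
        }
    }

    std::vector<std::thread> threads_;
    std::queue<std::function<void()>> jobs_;
    std::mutex mutex_;
    std::condition_variable wake_;
    bool stopping_ = false;
};
```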

I'll publish more updates on this research and its conclusions…

To be continued