Probably everyone reading this article has heard the buzz about augmented and virtual reality. So you may now be wondering how real these technologies are and how they will impact a range of applications and markets – maybe even yours. Let’s take a look at what is happening and see how this might play out.
The virtual reality camp got reignited about two years ago when Facebook acquired the startup Oculus for $2B. Since then, a host of companies have jumped in with new VR headsets, perhaps the biggest being Samsung with its Gear VR headset.
Virtual reality is an immersive experience: you are isolated from the real world while viewing a computer-generated environment or even video-based content. The technology has been around for decades and is only now getting to the point of delivering a decent experience.
What has enabled this is a convergence of needed technologies like high-resolution screens and inertial measurement capabilities. Many of these headsets use the guts of smartphones, or the smartphones themselves, as the engines for these devices. Being able to track head movement and display a new frame very quickly, with minimal latency, is critical to a good, nausea-free experience. Fast-responding, high-resolution displays, like OLEDs, enhance this capability. Your phone or similar technology can now provide the processing, storage and playback capability for a self-contained, untethered experience.
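To make the head-tracking point concrete, here is a minimal sketch (not from any particular headset) of the core operation an inertial tracker performs: integrating gyroscope readings into a head-orientation quaternion, once per sample, so the renderer always has a fresh pose. The sample rate, angular rate and function name are illustrative assumptions.

```python
import numpy as np

def integrate_gyro(q, omega, dt):
    """Integrate body-frame angular velocity omega (rad/s) into a unit
    quaternion q = [w, x, y, z] over one sample interval dt.
    Uses dq/dt = 0.5 * q (x) (0, omega), written as a matrix product."""
    wx, wy, wz = omega
    O = 0.5 * np.array([
        [0.0, -wx, -wy, -wz],
        [wx,  0.0,  wz, -wy],
        [wy, -wz,  0.0,  wx],
        [wz,  wy, -wx,  0.0],
    ])
    q = q + (O @ q) * dt
    return q / np.linalg.norm(q)  # renormalize to keep it a valid rotation

# Simulate turning the head at 90 deg/s about the vertical (z) axis,
# sampled at 1 kHz for one second.
q = np.array([1.0, 0.0, 0.0, 0.0])  # identity: looking straight ahead
for _ in range(1000):
    q = integrate_gyro(q, (0.0, 0.0, np.radians(90.0)), 0.001)
# q now represents roughly a 90-degree yaw rotation.
```

In a real headset this gyro integration is fused with accelerometer and camera data to cancel drift, and the resulting pose is what lets the display respond to head movement within a few milliseconds.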
But an immersive headset is not much good unless you have immersive content for it. Computer-generated content is fairly easy to develop using conventional game engines, and this has been the developer focus in many professional simulation and consumer gaming applications.
What is new and quite revolutionary is the use of video content. In just the last year or two, dozens of companies have sprung to life offering various forms of 180- or 360-degree panoramic video capture, or even full spherical capture. These typically employ 3 to a dozen cameras, some inexpensive and some expensive, in 2D and 3D rigs. Stitching software then combines these multiple camera images into a single video file format that can be played back in the VR headset.
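At its core, stitching means warping each camera's image into a shared panoramic projection and then blending the overlapping regions so the seams disappear. Here is a toy sketch of the blending step only – linear "feather" blending across an overlap – with made-up image sizes; real stitchers also handle lens warping, exposure matching and seam finding.

```python
import numpy as np

def feather_blend(left, right, overlap):
    """Blend two horizontally adjacent images that share `overlap`
    columns, ramping the left image's weight linearly from 1 to 0
    across the seam. A toy stand-in for a stitcher's blend stage."""
    w = np.linspace(1.0, 0.0, overlap)  # per-column weight for the left image
    seam = left[:, -overlap:] * w + right[:, :overlap] * (1.0 - w)
    return np.hstack([left[:, :-overlap], seam, right[:, overlap:]])

# Two 4x6 grayscale "frames" from adjacent cameras, 2 columns of overlap
a = np.full((4, 6), 100.0)
b = np.full((4, 6), 200.0)
pano = feather_blend(a, b, 2)  # 4x10 panorama strip
```

With more cameras you repeat this around the rig until the strips close into a full 360-degree band (or sphere), which is then encoded as an ordinary video file the headset can decode.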
The results can be compelling. Imagine a ring-side seat at a sporting match, concert or performing arts event. You are in the environment with the ability to look all around and see what is happening in any direction, with the scene tracked to your head movement. Add in some immersive sound and the experience can be jaw-dropping – if the production quality is high enough and the delivery system robust enough.
That’s part of the rub. Inexpensive acquisition played back on mediocre hardware does not create a very good experience. If you want something that is more than a cool novelty, you need to spend some money and effort to do it right.
Where can high-end VR be useful?
- Certainly wealthy individuals who want that ring-side or court-side experience might pay for it.
- Automobile or other vehicle designers can use it too. Being able to sit inside your new car design and look around in very high quality is a big advantage – and cost saver.
- The military has used simulation for decades, often with very expensive and cumbersome headgear. The advent of these new headsets is opening eyes to widening the scope of applications because of the much lower price point.
- Architects and engineers can also use the technology to review designs and think about workflows and ergonomics in new ways. It is also a good tool for all kinds of training requirements.
- Remember CAVEs – those 5- or 6-sided 3D visualization rooms for exploring scientific research? High-quality VR headsets can pretty much do the same thing for a lot less money.
- VR takes telepresence or meeting rooms to a whole new level when a panoramic camera captures the room, allowing remote participants to engage in a more meaningful way.
On the AR side, the idea is to overlay information onto see-through optics, mostly implemented as monocular or binocular eyeglass-style headsets. AR is a bit behind VR, as development of the technology and applications is far more demanding. Google Glass started the AR (monocular display) market a few years ago and is supposed to reemerge later in 2015. But perhaps the biggest buzz is around the Microsoft HoloLens, which combines a binocular wide-field-of-view display with head and positional tracking – much like VR. This gives you a sense of presence and stitches 3D computer graphics more realistically into the real space you are in.
Again, convergence is helping a lot here, with wireless capabilities, voice and gesture control, cameras and vision processing (like that pioneered by the Microsoft Kinect gaming system) adding to the enablers list for AR applications.
One might divide up AR applications as follows:
Soft AR = an image overlay “ghosted” on top of the real-world view. Google Glass, the Vuzix M100 and head-up displays (HUDs) are examples.
Hard AR = injecting a CG image into a scene and mapping it three-dimensionally so that it stays perfectly referenced to the real surrounding view. The image is solid, not ghosted, as the real world is stenciled out, making the blend more like an image injection than an overlay. For 3D displays, focus and accommodation are also dynamically processed in real time (120 Hz+) with minimal latency (<20 ms). Microsoft HoloLens (Nokia Technologies waveguides), Magic Leap (virtual retinal display, University of Washington HIT Lab) and DigiLens (SBG technology) are three examples of Hard AR optical platforms, all in development for at least 10 years.
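That <20 ms figure is a motion-to-photon budget: every stage between the head moving and the photons updating has to fit inside it. The breakdown below is purely illustrative back-of-the-envelope arithmetic, not measured from any real headset, but it shows why the budget is so tight at 120 Hz.

```python
# Hypothetical motion-to-photon latency budget for a Hard-AR pipeline.
# Stage values are illustrative assumptions, not measurements.
budget_ms = {
    "IMU sample + sensor fusion": 2.0,
    "pose prediction + render": 8.0,
    "compositor / late warp": 2.0,
    "display scanout (one 120 Hz frame)": 8.3,
}
total = sum(budget_ms.values())
print(f"motion-to-photon: {total:.1f} ms")
```

Even with these optimistic numbers the total lands right around the 20 ms ceiling, which is why Hard AR systems lean on tricks like pose prediction and late-stage image warping to claw back milliseconds.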
Applications for AR vary from the simple to the complex and include items like:
- Displaying a list to help in picking items or working through a series of tasks. This can mean finding items in a warehouse, locating electronic parts in an assembly facility, or gathering medical information at a clinic.
- Video consulting uses the built-in camera to allow a remote person to see what you see. This can aid in repairs and maintenance of any type of equipment and any place where some expert help can be useful.
- Contextual data display means overlaying information on your real-world view based on image processing that has identified something of interest in the camera’s field of view. If you are a field service technician, for example, the camera can identify that a reading is too high and provide visual feedback to alert you. This can also be used to show how parts are assembled or disassembled.
- Adding information that is positioned in the 3D space you are looking at is the most complex task (Hard AR). A surgeon might want to overlay CT and MRI scans on the patient for better visualization of the procedure, but the registration of these images has to be very good – and maintained as the surgeon moves their head for different views.
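The surgical example above boils down to re-projecting a world-anchored 3D point into display pixels every frame using the latest head pose – if the pose is stale or wrong, the overlay swims off the patient. Here is a minimal sketch of that per-frame projection; the function name and the camera intrinsics are illustrative assumptions, not any real headset's API.

```python
import numpy as np

def project_anchor(point_world, head_pose, fx=500.0, fy=500.0,
                   cx=320.0, cy=240.0):
    """Re-project a world-anchored 3D point into display pixel
    coordinates, given the current head pose as a 4x4 world-to-eye
    matrix. Running this every frame with fresh tracking data is what
    keeps a Hard-AR overlay locked to the scene. The pinhole
    intrinsics (fx, fy, cx, cy) are illustrative values."""
    p = head_pose @ np.append(point_world, 1.0)  # world -> eye frame
    x, y, z = p[:3]
    return np.array([fx * x / z + cx, fy * y / z + cy])

# Identity pose (head has not moved); anchor point 2 m straight ahead
pose = np.eye(4)
px = project_anchor(np.array([0.0, 0.0, 2.0]), pose)
# The anchor lands at the image center, (cx, cy) = (320, 240).
```

As soon as the tracker reports a new head pose, the same call yields new pixel coordinates, and the overlay appears fixed in the room rather than glued to the display.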
There are many more VR and AR applications being considered, developed and rolled out in consumer, commercial and professional markets. Some will fail, some will succeed and some will be refreshed with a new version. The key point here is that you should be thinking about how these technologies can and will change your business or industry so you can get out ahead of the curve.
If you want to learn more about the technology, markets and applications that are driving the AR and VR revolution, come to Display Summit, June 15-16 just before InfoComm in Orlando. This is a thought leadership event focused on the latest advancements in imaging and display technology.
At the event, there will be presentations and demonstrations by three companies:
- DigiLens will showcase waveguide-based optics that can perform advanced functions like wide field of view, eye tracking, light-field focusing and image stenciling, for use in AR headsets, HUDs and more.
- Immersion-VRelia will show VR headsets based on a smartphone platform along with their Alterspace development platform.
- Kverve Optics will showcase what is probably the world’s first collimated head-mounted display module that offers both VR and AR functionality.
To learn more or to register, please go to www.displaysummit.com.