Human binocular vision and acuity, together with the 3D processing performed by the eye and brain, promote situational awareness and understanding in the natural 3D world. The ability to resolve depth within a scene, whether natural or artificial, improves our spatial understanding of that scene and, as a result, reduces the cognitive load of analyzing and collaborating on complex tasks.
A light-field display projects 3D imagery that is visible to the unaided eye (without glasses or head tracking) and allows perspective-correct visualization within the display’s projection volume. Occlusion, specular highlights, gradient shading, and other expected depth cues appear correct from the viewer’s perspective, as in the natural real-world light field.
The traditional light-field display consists of one or more spatial light modulators (SLMs) whose pixels become rays of light angularly distributed through an optical system. The pixel density of the SLM source and the design and structure of the accompanying micro-lens array have a profound effect on the 3D fidelity of the projected light-field image. This presentation will review the light-field display spatial/angular trade for common use cases, considering such parameters as hogel size, projection field of view, and intended viewing distance.
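To make the spatial/angular trade concrete, the sketch below estimates how partitioning an SLM into hogels divides its pixels between spatial samples (hogel count) and angular samples (rays per hogel). This is an illustrative back-of-the-envelope model, not the presenter's method, and all parameter values (SLM resolution, display width, field of view, viewing distance) are hypothetical:

```python
import math

def lightfield_trade(slm_px, hogel_px, fov_deg, viewing_distance_mm, slm_width_mm):
    """Rough spatial/angular trade for an integral-imaging-style display.

    slm_px: SLM pixels across one axis; hogel_px: pixels assigned to each
    hogel along that axis; fov_deg: projection field of view per hogel.
    """
    # Splitting the SLM into hogels trades spatial for angular resolution:
    hogels = slm_px // hogel_px                 # spatial samples across the display
    hogel_pitch_mm = slm_width_mm / hogels      # hogel (micro-lens) pitch
    ang_res_deg = fov_deg / hogel_px            # angular spacing between rays
    # Angle one hogel subtends at the intended viewing distance:
    hogel_sub_deg = math.degrees(math.atan2(hogel_pitch_mm, viewing_distance_mm))
    # A hogel is individually resolvable if it subtends more than ~1 arcminute,
    # a common approximation of human visual acuity:
    hogels_visible = hogel_sub_deg > 1.0 / 60.0
    return {
        "hogels_across": hogels,
        "hogel_pitch_mm": hogel_pitch_mm,
        "angular_res_deg": ang_res_deg,
        "hogel_subtense_deg": hogel_sub_deg,
        "hogels_visible": hogels_visible,
    }

# Hypothetical example: a 4000-px-wide, 200 mm SLM with 40x40-px hogels,
# a 90-degree projection FOV, viewed from 500 mm.
trade = lightfield_trade(4000, 40, 90.0, 500.0, 200.0)
```

The model shows why the trade is hard: shrinking hogels improves spatial resolution but coarsens the angular sampling, while larger hogels become individually visible at close viewing distances.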
Thomas Burnett, Co-founder, FoVI3D
For the past 15 years, Thomas has been developing static and dynamic light-field display solutions. While at Zebra Imaging, Thomas was a key contributor to the development of static light-field topographic maps used by the Department of Defense in Iraq and Afghanistan. He was the computation architect for the DARPA Urban Photonic Sandtable Display (UPSD) program, which produced several large-area light-field display prototypes for human-factors testing and research.
Thomas co-founded FoVI3D to continue research and development of light-field display technology. FoVI3D has architected and demonstrated both high-resolution monochrome and color light-field displays (http://www.fovi3d.com/video), developed a novel display-agnostic 3D API, and simulated a single-pass radiance-image rendering pipeline that greatly reduces the size, weight, and power (SWaP) cost of light-field rendering.