Abstract: Light fields mean different things to different groups, ranging from refocusable pictures to polygons to reflectance fields. The choice of representation affects how light fields are captured or constructed, how easily they can be compressed for transmission and delivery, and decompressed for display. We will focus — pardon the pun — on live-action capture of real-world content, since this represents a major area of interest, as well as one of the most difficult regimes for light fields.
We discuss a unifying way to understand these data sets as a quasi-continuous plenoptic function, as well as considerations for large scale capture and deployment of light field data for consumer use. We propose an alternative framework to the popular array-of-images + polygons representation; then discuss how these competing frameworks may perform in bitrate, encoding and decoding complexity, and fidelity. Finally, we examine the implications for networks and display devices.
We will review the major categories of light fields:
- purely volumetric (surface data, roughly Lambertian / non-dynamic)
- IBR (volume + specularity, non-Lambertian but coarsely quantized)
- “plenoptic function”
- micro light fields
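To make the array-of-images framing concrete, here is a minimal sketch of the common two-plane parameterization of the plenoptic function, where a ray is indexed by its intersections (u, v) and (s, t) with two parallel planes. All names, shapes, and the nearest-neighbor sampling are illustrative assumptions, not any particular system's format:

```python
import numpy as np

# Hypothetical discrete two-plane light field L(u, v, s, t):
# (u, v) indexes the camera position on one plane, (s, t) the pixel
# on the other — the "array of images" view of the quasi-continuous
# plenoptic function, coarsely quantized. Shapes are illustrative.
U, V, S, T = 4, 4, 32, 32          # 4x4 camera grid of 32x32 images
L = np.zeros((U, V, S, T), dtype=np.float32)

def sample(lf, u, v, s, t):
    """Nearest-neighbor sample of the discrete 4D light field,
    with all coordinates normalized to [0, 1]."""
    ui = int(round(u * (lf.shape[0] - 1)))
    vi = int(round(v * (lf.shape[1] - 1)))
    si = int(round(s * (lf.shape[2] - 1)))
    ti = int(round(t * (lf.shape[3] - 1)))
    return lf[ui, vi, si, ti]

# Novel-view synthesis gathers one such ray per output pixel.
value = sample(L, 0.5, 0.5, 0.25, 0.75)
```

Storage cost grows with all four dimensions, which is why the competing representations discussed above trade angular resolution, geometry, and compression differently.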
Speaker: Ryan Damm, Co-Founder, Visby
Bio: Ryan is a camera geek, cinematographer, hacker, and frequent light field speaker. After 20 years of shooting video and building camera systems, he cofounded Visby in 2015 to make real-world capture of ‘holographic’ images a reality. Visby is pioneering new image formats and new capture systems for holographic displays.