January 15th, 2016
After Mill artists shared their predictions on what’s to come for VR in Looking Ahead: 2016 VR Forecast, we wanted to take a look at the emerging technologies that will help us evolve and improve VR in 2016. Gawain Liddiard, VFX Supervisor at The Mill in LA, discusses two key technologies that could revolutionize VR in 2016: cloud-based rendering and light field displays.


Cloud-Based Rendering

Almost all modern tech rides on the steady increase in computing power. From self-driving cars packed with sensors to the Gear VR leveraging the power of modern smartphones, it all comes down to processing speed and volume.

As Moore’s Law has correctly predicted, the power of computers continues to rise exponentially, but in recent years that increase has come from adding more processing cores and bandwidth rather than from individual cores getting faster. We are still seeing a rapid increase in speed, and chipsets continue to shrink, but we need more physical space to gain more capacity. To bridge this gap, I see cloud-based rendering power as one of the key technologies of 2016.

We are already seeing this shift in visual effects. Cloud-based computing power has opened up more flexible ways of accessing the extreme amounts of computational capability needed for modern computer graphics. For example, the groundbreaking film ‘HELP’, for which we rendered 15,000,000 frames, generated 200 terabytes of data. This acute need for vast rendering drove a lot of R&D into cloud-based rendering, allowing The Mill to tackle several large-scale VFX projects simultaneously.

The hot topic of VR and its wider adoption through modern smartphones is still inhibited by the amount of computing hardware you can cram into a pocket-sized device. Cloud rendering alleviates that entirely. If we can open up fast enough connections to online computing resources, every smartphone in everyone's pocket becomes a supercomputer capable of letting us dive into real-time rendered VR games, films and worlds.
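How fast is "fast enough"? A rough back-of-the-envelope sketch, with purely illustrative numbers (the render, encode and comfort figures below are assumptions, not measurements from any specific headset or service), shows how little of the motion-to-photon budget is left for the network round trip:

```python
# Illustrative numbers only; real budgets vary by headset and pipeline.
REFRESH_HZ = 90                # typical VR headset refresh rate in 2016
MOTION_TO_PHOTON_MS = 20.0     # commonly cited comfort threshold

frame_budget_ms = 1000.0 / REFRESH_HZ   # ~11.1 ms to deliver each frame
cloud_render_ms = 5.0                   # assumed server-side render time
encode_decode_ms = 4.0                  # assumed video encode + decode cost

# Whatever is left over caps the acceptable network round-trip time.
network_budget_ms = MOTION_TO_PHOTON_MS - cloud_render_ms - encode_decode_ms

print(f"Per-frame budget:          {frame_budget_ms:.1f} ms")
print(f"Network round-trip budget: {network_budget_ms:.1f} ms")
```

With these assumed figures, only around 11 ms remain for the round trip to the cloud and back, which is why connection speed, not raw server power, is the real bottleneck.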

Google ATAP 'HELP'

Light Field Displays 

Light field cameras are rapidly changing the way we think about photography, and the benefits of light field capture have been demonstrated by companies such as Lytro.

Light field cameras capture not only the colour of the light that hits each pixel on the sensor, but also the direction each ray of light is coming from. Many different styles of light field camera are emerging, including plenoptic lens systems, high-density camera arrays and sparse camera arrays that depend on more computational post-processing.
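To make the idea concrete, here is a minimal numerical sketch of the classic shift-and-add refocusing trick that this extra directional data enables. The array sizes and the random "capture" are hypothetical stand-ins for a real plenoptic recording:

```python
import numpy as np

# A light field as a 4D array of sub-aperture views: L[u, v, y, x],
# where (u, v) indexes the viewpoint on the lens/camera grid and
# (y, x) indexes the pixel. Synthetic data stands in for a real capture.
U = V = 5          # 5x5 grid of viewpoints
H = W = 64         # image resolution per view
rng = np.random.default_rng(0)
light_field = rng.random((U, V, H, W))

def refocus(lf, alpha):
    """Shift-and-add refocusing: shift each sub-aperture view in
    proportion to its offset from the centre view, then average.
    `alpha` selects the virtual focal plane."""
    U, V, H, W = lf.shape
    cu, cv = U // 2, V // 2
    acc = np.zeros((H, W))
    for u in range(U):
        for v in range(V):
            dy = int(round(alpha * (u - cu)))
            dx = int(round(alpha * (v - cv)))
            acc += np.roll(lf[u, v], shift=(dy, dx), axis=(0, 1))
    return acc / (U * V)

# Different alpha values bring different depths into focus, after the fact.
near_focus = refocus(light_field, alpha=2.0)
far_focus = refocus(light_field, alpha=-2.0)
```

Because the directions are recorded, focus becomes a post-production decision rather than something locked in at the moment of capture.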

Lytro Immerge

This technology also promises to solve the issue of filming true virtual reality, with not only the freedom to rotate your view but also to move around within a limited space. Furthermore, the tech is spreading rapidly, and with sensor arrays from companies such as Pelican Imaging, we will see light field cameras appearing in smartphones in future generations.

Whilst capturing light field data is an amazing step forward, the next link in the chain is the light field display. A light field display emits the same data set that a light field camera captures. That is, rather than just pixels that glow with different colours and intensities to make a flat image, a light field display also reproduces the direction of the photons being released. This adds amazing new qualities to the image and is a paradigm shift in the way we view screens.

Currently, screens show us nothing but a flat image (current-generation 3D screens are not really more than one flat image per eye), whereas when you look out of a window into the real world, you can see the depth of the whole scene outside. Two people looking out of the same window see different points of view, and both can choose where to hold their focus. A light field display promises to give the true illusion of that window into a virtual world.

With no extra 3D glasses or head-tracking, a true light field image would allow you to look out in full 3D depth, shift your head from side to side to get the perspective you want, and let your eyes naturally find their own focus depth and convergence. This is because the full data set for all the light coming through that theoretical window is being reproduced, not just a 2D rendering of it.
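One way to picture this is as a panel that stores a small fan of directional samples per pixel instead of a single colour. The toy sketch below (all sizes, names and the random "content" are hypothetical) shows how two viewers at different positions naturally pull different images out of the same screen:

```python
import numpy as np

# A toy light field display buffer: for every screen pixel (y, x) the
# panel stores one sample per outgoing direction bin (ty, tx), rather
# than a single colour. All sizes here are hypothetical.
H, W = 32, 32          # screen resolution
T = 9                  # direction bins per axis (a 9x9 ray fan per pixel)
display_buffer = np.random.default_rng(1).random((H, W, T, T, 3))

def view_from(eye_x, eye_y, eye_z):
    """Return the 2D image a viewer at (eye_x, eye_y, eye_z) would see.
    Each pixel contributes the sample whose direction bin points at the eye."""
    ys, xs = np.mgrid[0:H, 0:W]
    # Direction from each pixel to the eye, quantised into the display's bins.
    tx = np.clip(((eye_x - xs) / eye_z * (T // 2) + T // 2).astype(int), 0, T - 1)
    ty = np.clip(((eye_y - ys) / eye_z * (T // 2) + T // 2).astype(int), 0, T - 1)
    return display_buffer[ys, xs, ty, tx]

# Two eye positions see two genuinely different images from one panel:
# exactly the parallax a conventional flat screen cannot provide.
left_eye = view_from(eye_x=10.0, eye_y=16.0, eye_z=60.0)
right_eye = view_from(eye_x=22.0, eye_y=16.0, eye_z=60.0)
```

The same mechanism is what lets each eye, and each head position, receive its own correct perspective without glasses or tracking.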

How does this seemingly far-fetched technology fit into our practical world? Unfortunately, I don't expect to see light field TVs turning up in people's homes any time soon, yet companies such as Nvidia are doing very real work with near-eye light field displays. These small light field screens can sit very close to the viewer's eyes, but as the eye can find its own focus depth in the image, the viewing experience has the potential to be more comfortable and natural.

This could drastically reduce the size and improve the immersive nature of VR head-mounted displays. And when full-sized light field displays become available, I cannot wait to generate content for them, giving people windows into a new universe of media.

We asked several Mill artists to share their predictions on what’s to come for VR in Looking Ahead: 2016 VR Forecast.