Stanford and Nvidia working on a strain-free VR headset

7 Aug 2015

The first light-field stereoscopic prototype headset from the Stanford Computational Imaging Group

One of virtual reality’s challenges (apart from dodgy Time covers) is creating a comfortable long-term experience for users. Could this light-field display be the solution?

Whether it’s Time magazine’s cover image or its cover story, everyone’s talking about virtual reality – and Oculus VR is by no means the only player in the game.

Nvidia, the computer components manufacturer bucking the downwards trend in the PC market, has teamed up with researchers at Stanford University to develop a more realistic virtual-reality experience using light-field technology.

Using stereoscopic imaging in VR

Today’s crop of VR headsets generally uses a stereoscopic technique to create a virtual 3D world, beaming images at slightly different angles to each eye.

Stereoscopic imaging creates an illusion of depth, but because both images sit at a fixed focal distance, the mismatch between where the eyes converge and where they must focus can cause eye fatigue or strain.

“The way we perceive the natural world is much more complex than stereoscopic,” Gordon Wetzstein, an assistant professor of electrical engineering at Stanford, explained to Forbes. “Our eyes can focus at different distances. Even one eye can see in 3D. It does that by focusing the eye.”

Wetzstein is the leader of the Stanford Computational Imaging Group, an interdisciplinary research group set to demonstrate a new prototype VR headset at the SIGGRAPH 2015 Emerging Technologies conference this week in Los Angeles.

This early-stage device has been assembled using off-the-shelf components and 3D-printed parts. Basic as it is, it should allow the wearer to see depth in a virtual world without strain.

Natural focus in a virtual world

The Stanford Computational Imaging team has augmented the stereoscopic effect by layering two transparent LCD screens within its headset.


A schematic for the light-field display headset. Image via Stanford Computational Imaging

To achieve this, the VR headset displays a combination of 25 images at once, driven by a PC running Nvidia’s Maxwell graphics card architecture and an algorithm built on Nvidia’s CUDA parallel computing platform.

This algorithm computes the images displayed on the headset, doing all the hard work while letting the eyes do what they do naturally: refocus at different distances.
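The article doesn’t detail the algorithm, but stacked transparent LCDs attenuate the backlight multiplicatively, so layered light-field displays are typically driven by a nonnegative factorization that splits the target views across the two panels. The toy sketch below collapses the real ray geometry to a rank-1 outer product and solves it with classic multiplicative NMF updates; the view/pixel counts and variable names are illustrative assumptions, not the researchers’ actual method.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy target "light field": one row per viewing direction (25, per the
# article), one column per pixel along a scanline. A real system would
# render one image per direction.
views, pixels = 25, 64
target = rng.random((views, pixels))

# Two stacked LCDs multiply their transmittances, so each emitted ray is
# (roughly) a product of one pixel value from each layer. This toy models
# that as a rank-1 approximation, target ~ outer(u, v), fitted with
# multiplicative nonnegative-matrix-factorisation updates.
u = np.full(views, 0.5)   # stand-in for one layer's pattern
v = np.full(pixels, 0.5)  # stand-in for the other layer's pattern

def error(u, v):
    return np.linalg.norm(target - np.outer(u, v))

err_before = error(u, v)
for _ in range(200):
    u *= (target @ v) / (u * (v @ v) + 1e-9)
    v *= (target.T @ u) / (v * (u @ u) + 1e-9)

print(f"reconstruction error: {err_before:.3f} -> {error(u, v):.3f}")
```

In the actual headset this optimisation runs per frame on the GPU, which is why a Maxwell-class card and CUDA are part of the pipeline.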

The intended result is a comfortable and immersive virtual experience with significant improvements in resolution and retinal blur quality.

Elaine Burke is the host of For Tech’s Sake, a co-production from Silicon Republic and The HeadStuff Podcast Network. She was previously the editor of Silicon Republic.

editorial@siliconrepublic.com