
Researchers devise a novel method for generating natural focus blur in virtual reality headsets.

As we look around, parts of the world shift into and out of focus. Humans use this visual cue naturally, but it is difficult for technology to mimic, especially in 3D displays. The Split-Lohmann multifocal display, a new technique developed by Carnegie Mellon University researchers, creates natural focal blur in virtual reality headsets.

The eye’s lens changes its focal length to focus on objects at a given depth, resolving them in fine detail while points at other depths become blurry. This ability, known as accommodation, is a crucial cue that 3D displays need to reproduce.
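To see how accommodation produces depth-dependent blur, consider a minimal thin-lens sketch. This is illustrative only; the eye dimensions below are rough textbook assumptions, not values from this work.

```python
# Minimal thin-lens model of accommodation and defocus blur.
# All numbers (eye size, pupil diameter) are illustrative assumptions,
# not values from the Split-Lohmann paper.

EYE_LENGTH_M = 0.017   # assumed lens-to-retina distance (~17 mm)
PUPIL_M = 0.004        # assumed pupil diameter (4 mm)

def focal_length_for(depth_m: float) -> float:
    """Focal length the eye must adopt to focus an object at depth_m.
    Thin-lens equation: 1/f = 1/d_object + 1/d_image."""
    return 1.0 / (1.0 / depth_m + 1.0 / EYE_LENGTH_M)

def blur_diameter(focused_m: float, actual_m: float) -> float:
    """Diameter of the blur circle on the retina for a point at
    actual_m when the eye is accommodated to focused_m."""
    f = focal_length_for(focused_m)
    # Image distance for the out-of-focus point, from the thin-lens equation.
    d_image = 1.0 / (1.0 / f - 1.0 / actual_m)
    # Similar triangles: blur scales with the image-plane mismatch.
    return PUPIL_M * abs(d_image - EYE_LENGTH_M) / d_image

# Eye focused at 0.5 m: a point at 2 m lands noticeably blurred,
# while a point at the focused depth produces no blur at all.
print(f"{blur_diameter(0.5, 2.0) * 1e6:.1f} micrometers of blur")
print(f"{blur_diameter(0.5, 0.5) * 1e6:.1f} micrometers of blur")
```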

According to Aswin Sankaranarayanan, professor of electrical and computer engineering, “Most head-mounted displays lack the ability to render focus cues. Conventional headsets use a 2D screen and a lens to give the impression that the screen is at a fixed distance from the eye. However, the images that are displayed remain flat. We have developed a lens that can simultaneously place display pixels at various distances from the eye.”

The method revisits ideas first developed in the 1970s, when a technique for focus tunability known as the Lohmann lens was proposed. A Lohmann lens consists of two optical elements called cubic phase plates; translating the plates laterally relative to one another changes the focal length. However, this approach required mechanical motion, and the resulting slow operating speeds make it unsuitable for AR/VR displays.
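The core property of a Lohmann lens can be checked symbolically. The sketch below is a 1D toy model with assumed notation, not the researchers’ code: shifting two opposite cubic phase profiles against each other leaves a quadratic (lens) phase whose strength grows linearly with the shift.

```python
# Toy 1D model of a Lohmann-style lens: two opposite cubic phase
# plates, one shifted laterally by delta relative to the other.
# The notation is assumed for illustration; the real plates use a
# specific 2D cubic profile.
import sympy as sp

x, delta, a = sp.symbols("x delta a", real=True)

plate1 = a * x**3             # first cubic phase plate
plate2 = -a * (x - delta)**3  # second plate, opposite sign, shifted by delta

total = sp.expand(plate1 + plate2)
print(total)  # 3*a*delta*x**2 - 3*a*delta**2*x + a*delta**3

# The x**2 coefficient is the lens term: a quadratic phase acts as a
# lens whose optical power is proportional to that coefficient.
print("quadratic (lens) coefficient:", total.coeff(x, 2))  # 3*a*delta
```

Because the focal power is linear in the shift, tuning focus reduces to a small lateral translation, which the original 1970s design performed mechanically.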

The new display splits the Lohmann lens, placing the two cubic phase plates in different parts of the optical system. A phase modulator positioned between them replaces the mechanical translation with an optical one. This arrangement has an added benefit: because the modulator can apply different amounts of translation across the image, different parts of the scene can be placed at different distances from the eye.
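One rough way to picture how this enables per-pixel depth: a linear phase tilt between the plates behaves like a lateral shift, so the modulator can emulate a different shift, and hence a different focal power, for each region of the image. The sketch below uses an assumed calibration constant rather than the paper’s actual mapping, and simply converts a depth map into the per-pixel shift the modulator would need to emulate.

```python
# Sketch: mapping a per-pixel depth map to the per-pixel lateral shift
# a phase modulator would emulate between the two cubic plates.
# KAPPA (diopters of focal power per meter of shift) is an assumed
# placeholder, not a calibrated value from the paper.
import numpy as np

KAPPA = 1000.0  # assumed diopters of power per meter of emulated shift

def shifts_for_depths(depth_map_m: np.ndarray) -> np.ndarray:
    """Per-pixel shift needed to place each pixel at its target depth.
    The required optical power is 1/depth (diopters); power is linear
    in the shift, so shift = power / KAPPA."""
    power = 1.0 / depth_map_m
    return power / KAPPA

# A toy scene: near object (0.3 m) on the left, far wall (3 m) on the right.
depth = np.full((4, 8), 3.0)
depth[:, :4] = 0.3
print(shifts_for_depths(depth))  # larger shifts for nearer pixels
```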

“The advantage of this system is that we do not need to use high-speed focus stacks to create a virtual 3D scene that is pleasing to the eye. And everything works in real time, which is great for near-eye displays,” says Yingsi Qin, the lead author of this work and a Ph.D. student in electrical and computer engineering.

Many fields, from gaming to photography, stand to benefit from this advance in focusing technology, but medicine may benefit the most.

“There are domains beyond entertainment that will benefit from such 3D displays,” says Matthew O’Toole, an assistant professor in the Robotics Institute and a co-author on this work. “The multi-focal capabilities of the display provide a surgeon with realistic depth cues, making it suitable for robotic surgery.”

This August, the team will present the results of their research at the SIGGRAPH 2023 Conference.

More information: Visit the team’s project page.
