Virtual reality and immersive 3D displays are highly evolved: put on a VR headset and you are inside the display; the images seem lifelike. However, a 3D display that requires no glasses or headset has recently hit the market. Stand in front of a flat screen and, as long as you are within the 50-degree cone emanating from it, what you see is in three dimensions. You can move around, share the screen with others, and the image still looks real.
We asked the Looking Glass Factory CEO and co-founder Shawn Frayne to enlighten us about his unique product.
Tech Briefs: Could you describe your products and what is unique about them?
Shawn Frayne: They are holographic 3D immersive displays in three different sizes: 8.9”, 15.6”, and 32”, none of which require the user to wear a headset. Our new Looking Glass 8K is the world’s largest and highest-resolution immersive holographic display. At 32”, it is four times the size of anything else on the market.
Tech Briefs: Could you give me an idea of how you developed the technology?
Frayne: My background is in what is technically referred to as holography. I worked on it in high school as a hobby and then went to MIT and did some holography there. But what I was really hoping for all along was something dynamic and alive, not a laser photograph, which, in my opinion, is in many ways what conventional holography is. So, I assumed that somebody in a lab at MIT would have had something, but they didn’t.
Then Oculus got acquired by Facebook and Magic Leap started to get into the news, raising billions of dollars, and I saw that a lot of folks I followed, who had written research papers I held in high esteem, were moving toward a headset-only future. But I thought you ought to be able to get a group of people around a field of light that would be dynamic and moving, and that you could view from different angles, like you would around a campfire or a radio or a television.
So, my partner, Alex Hornstein, and I started Looking Glass Factory about six years ago, and we named the product line "The Looking Glass". The technology we ended up using, after having developed a number of prototypes over the preceding five years, is our version of a light field display.
Tech Briefs: Could you describe the specifics of your technology?
Frayne: We start with conventional LCD or OLED screens and then apply several optical overlays that enable us to add directionality to the subpixels of the underlying screen. We can then control the subpixels not only for intensity and color but also for direction. We refresh the display about 60 times a second, so that we have an updatable light field that very closely approximates a real-world light field, or plenoptic function. So, we’ve added a third element to our display: directionality.
In the case of the Looking Glass 8K, we’re controlling those three parameters – intensity, color, and directionality – for 100 million subpixels. That means we can fire out 100 million “rays” into a 50° viewing zone. Anyone in that zone gets exposed to various parts of the light field. It’s fully stereoscopic because each of your eyes receives a different view; the light field is dense enough that you don’t see any of the breaks that were characteristic of the prior generation of auto-stereoscopic displays, which offered only a small, discrete number of views.
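The idea Frayne describes – every subpixel assigned a fixed direction, with many discrete views fanned across a 50° cone – can be sketched with a toy lenticular-style mapping. All of the specific numbers and names below (45 views, 9 subpixels per lenticule, the function names) are illustrative assumptions, not Looking Glass’s actual, proprietary calibration:

```python
# Toy model of a lenticular-style light field display. Illustrative only;
# the real Looking Glass calibration is proprietary and more involved.
VIEW_CONE_DEG = 50      # total horizontal viewing cone, per the interview
NUM_VIEWS = 45          # discrete views approximating a continuous field (assumed)
SUBPIXELS_PER_LENS = 9  # subpixels covered by one lenticule (assumed)

def view_index(x_subpixel: int) -> int:
    """Assign a subpixel, by horizontal index, to one of NUM_VIEWS views,
    cycling through the views under each lenticule."""
    return (x_subpixel % SUBPIXELS_PER_LENS) * NUM_VIEWS // SUBPIXELS_PER_LENS

def view_angle_deg(view: int) -> float:
    """Center angle of a view, spread evenly across the viewing cone,
    measured from the screen normal (0° = straight ahead)."""
    step = VIEW_CONE_DEG / NUM_VIEWS
    return -VIEW_CONE_DEG / 2 + (view + 0.5) * step
```

Because neighboring subpixels land in different views, a viewer’s left and right eyes naturally fall into different parts of the fan, which is what makes the result stereoscopic without glasses.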
We designed the Looking Glass so that anything it shows looks real. It comes down to a combination of hardware and software that’s fully under our control. We design all of our driver boards, all of our own optics, everything except for the original source of the pixels and subpixels. Our hardware modulates the light from the underlying pixels and subpixels. We also designed the software that lets us faithfully display this universe of 3D content, whether it’s a holographic map, or a CT scan, or in the near future, a light field video feed. So, by adding directionality to each of the subpixels of our Looking Glass systems, we add a spatial sense to the interface in addition to the temporal effects that have been worked out over the last 100 years in the development of cinema.
We have been able to develop our display because of all of the modern technology available to us. There are super-fast computers that can generate the intensity, color, and direction of 100 million points every 1/60th of a second. There are game engines like Unity and Unreal that we can plug into so folks can make applications for the Looking Glass. And there are really dense LCD and OLED screens that we can modify optically, electrically, and with software.
Although we think this will define the next generation of how people communicate and interact with information, there’s nothing complex about the core thing we’re doing. You can think of our system as a lensing-based, refractive-optics system. We assign a given subpixel to a certain direction in space, then we just update that full set of 100 million points every 1/60th of a second.
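The figures quoted here imply a hefty raw data rate. A quick back-of-envelope check (the one-byte-per-subpixel-value figure is an illustrative assumption, not from the interview):

```python
# Back-of-envelope throughput for the Looking Glass 8K figures quoted above.
subpixels = 100_000_000  # ~100 million individually directed subpixels
refresh_hz = 60          # light field updates per second

updates_per_sec = subpixels * refresh_hz
print(updates_per_sec)   # 6000000000 subpixel updates per second

# Assuming one byte per subpixel value (an illustrative assumption):
gigabytes_per_sec = updates_per_sec * 1 / 1e9
print(gigabytes_per_sec) # 6.0 GB/s of raw subpixel data
```

That scale is why Frayne points to modern GPUs and game engines as enabling technology.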
Take the coffee cup I’m holding in my hand. To me, it feels very different from the way it would look to you on a 2D display. The light bouncing off the ceramic of this cup carries not only intensity and color but also directionality. That directionality impinges on my eyes in such a way that I can deduce that the cup has some 3D characteristics. But even more than that, I can see specular reflections and other details of the real world that you simply cannot reproduce on a conventional display.
What happens in your brain when you’re looking at our display is close to, although not yet identical with, the process it would go through to deduce three-dimensionality in the real world. While we do pay a lot of attention to the power of stereoscopy, that is not the only thing that conveys three-dimensionality to the brain. Other key aspects, like depth of field and specular reflection, go into making an image feel real, in addition to pure left-eye/right-eye stereoscopy.
Tech Briefs: Can this be done with pictures of actual things or just with images one creates on a screen?
Frayne: It can be pictures of real things. You could take a number of photographs of a real thing and then pipe those through our software — the thing could be static or moving. But a lot of what folks are doing with the Looking Glass is based on synthetic, rather than real-world information. Real-world tools that would capture volumetrically or with light fields are not yet fully available. However, there are millions of folks who every day go to work and manipulate a 3D model of some sort.
For example, we have a partnership with one of the leaders in the drug-discovery space: Schrödinger. When you’re a molecular modeler who’s designing a new drug — you want to show somebody: “Oh here’s where this molecule’s going to bind to the rest of the body's chemistry,” and explain it. Having a field of light that represents a million-fold three-dimensional magnification of what you’ve drawn is a great collaboration tool. When you’re standing right next to somebody, they can look over your shoulder: “Yeah, I get what you’re talking about.” But we’re also moving towards people being able to collaborate remotely — you can talk about the same thing without having to be in the same physical space together.
Tech Briefs: Could you describe how the Looking Glass would be implemented by a user?
Frayne: We’ve built our software stack so that once a user has created any type of 3D content, they can preview it on the Looking Glass. We have a viewer, for instance, that enables you to easily drop a model into our application, and it’ll run in the Looking Glass for a quick preview.
We’ve also made plug-ins for popular 3D development engines, like Unity and Unreal, that people already use to develop apps for phones and other devices. They can use those to develop holographic apps for the Looking Glass as well.
The third way folks use the Looking Glass software pipeline is through a series of integrations, using a new toolset we call HoloPlay Core. It lets any existing piece of 3D software someone’s using on a regular 2D laptop or workstation integrate directly with the Looking Glass. With HoloPlay Core, all someone has to do is plug the Looking Glass into the laptop or workstation they’re using and instantly they have a live holographic view of exactly what they’re working on. Since they can keep using software they may have trained on for 20 years, they get an instantaneous holographic view without having to learn anything new.
Tech Briefs: How do you connect to the computer?
Frayne: Over an HDMI or DisplayPort cable; then there’s a separate USB cable that carries some other information. Basically, two cables plug in and that’s it.
Tech Briefs: Sounds great — when can I get one?
Frayne: We have two desktop models, one about the size of a really thick book and the other about the size of a laptop – the 8.9-inch and the 15.6-inch – which are available now. And we just started shipping our biggest Looking Glass, at 32”, which we call the Looking Glass 8K. It is meant for environments where you would want a larger view for all sorts of reasons. It started shipping to some of the initial customers just a couple of weeks ago. Altogether, there are currently thousands of our Looking Glass displays in use.
This article was written by Ed Brown, associate editor of Photonics & Imaging Technology. For more information visit https://lookingglassfactory.com.