Augmented Reality: The Tech Behind AR Glasses
Chances are you’ve seen and used Augmented Reality (AR) apps without realising it: from Pokémon Go to Google Maps overlays, from trying on clothes virtually to measuring objects, smartphones have been using AR for quite some time now. But AR glasses – wearables that overlay digital information onto the real world, directly in your field of vision – always seem to be a few years away from becoming ubiquitous. Why is this? What technology makes AR glasses work, and what needs to improve for them to become mass-market products?
Paradoxically, it’s much easier to build Virtual Reality (VR) headsets than AR glasses, even though much of what an AR user experiences is simply the world around them. A VR headset lives and dies by its screen, which sits close to the user’s eyes and fills their entire field of view: the better the screen, the more immersive the experience (more or less...). AR glasses also convey information visually, but they use a different and more complex set of technologies to overlay digital information onto the real world. Like a VR headset, a pair of AR glasses uses a display panel close to the user’s eyes, but it adds a piece of optical technology called a beam splitter to project what that display shows into the wearer’s view.
Beam splitters themselves are nothing new, and they have been used to project images since at least the 19th century, when an illusion known as Pepper’s Ghost sparked a craze for ghostly and supernatural-themed stage productions. In 2013, Google used a simple beam splitter in their Google Glass wearable AR device (which is still available, albeit not as a retail product). Google Glass was criticised as bulky and intrusive when it launched, and while more modern wearables use similar technology, they leverage the power of optics to create smaller, sleeker devices with see-through displays.
Most recent AR glasses instead use waveguides to deliver the image to the eye. There are several kinds of waveguide, but all exploit the principle of total internal reflection – light bouncing along inside a thin piece of glass at a steep enough angle cannot escape it – to give far greater clarity, a much wider field of view, and a much smaller footprint than a beam splitter. However they work, manufacturing these pieces of optical kit is painstaking: it requires nanometre-scale techniques akin to those used to make semiconductors and silicon chips.
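To put a number on that principle, here’s a minimal Python sketch – the refractive index is an illustrative value, not a spec for any real waveguide – that uses Snell’s law to find the critical angle beyond which light stays trapped inside the glass:

```python
import math

def critical_angle(n_glass: float, n_outside: float = 1.0) -> float:
    """Angle of incidence (degrees, from the normal) beyond which light
    is totally internally reflected at the glass boundary (Snell's law)."""
    if n_glass <= n_outside:
        raise ValueError("TIR needs the glass to be denser than its surroundings")
    return math.degrees(math.asin(n_outside / n_glass))

# A high-index waveguide glass (n ~ 1.8, an illustrative value) in air:
print(f"Critical angle: {critical_angle(1.8):.1f} degrees")
# -> 33.7 degrees. Light hitting the boundary at less than this angle
#    refracts out; beyond it, every bounce is reflected, so the image
#    travels along the thin glass plate towards the eye.
```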
But that’s only part of the story. Waveguides alone don’t make AR devices work: they need something to project, and that’s where display technology comes in. The first wave of AR glasses used LCD panels not unlike those found in everything from TVs to laptops to mobile phones, but these had drawbacks. LCDs are not very efficient: they draw a lot of power yet are not especially bright, because most of the power they receive is converted not to light but to heat, a waste product. That isn’t a huge problem for a TV sitting in the corner of a room, but it becomes one for a device that sits millimetres from a user’s eyes.
One alternative is OLED, which is brighter and draws less power. However, OLED displays are complex and expensive to manufacture because their organic components degrade on contact with oxygen and must be sealed from the atmosphere. MicroLED presents another opportunity: microLED panels are far brighter than traditional displays and draw far less power. However, the technology is currently very costly, and few microLED displays of any kind have reached the market.
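To see why efficiency matters so close to the eye, here’s a toy comparison of how much of a panel’s power budget ends up as light versus heat. The efficiency figures are rough assumptions for illustration, not measured specs:

```python
def light_and_heat(power_w: float, efficiency: float) -> tuple[float, float]:
    """Split a panel's electrical power into emitted light and waste heat."""
    light = power_w * efficiency
    return light, power_w - light

# Illustrative efficiencies only: LCDs lose most of their power in the
# backlight and filters; emissive OLED and microLED panels convert more
# of it directly to light.
for name, eff in [("LCD", 0.05), ("OLED", 0.15), ("microLED", 0.30)]:
    light, heat = light_and_heat(1.0, eff)  # a 1 W power budget
    print(f"{name:9s} {light*1000:4.0f} mW light, {heat*1000:4.0f} mW heat")
```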
As well as a display and optics, AR devices need a processing unit. These are commonly based on mobile phone processors; Qualcomm, for example, has created a dedicated line of chips similar to those used in phones but with the extras needed to make AR features work, including dedicated processors for camera input, a GPU, and a sensing hub for inputs such as gyroscopes, compasses, audio, and proximity sensors.
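As an illustration of the kind of work that sensing hub does, here’s a toy Python sketch of a complementary filter – a classic way to fuse drifting gyroscope readings with noisy accelerometer angles into a stable head-orientation estimate. The sample rate and sensor values are invented for the example:

```python
def complementary_filter(pitch_deg: float, gyro_rate_dps: float,
                         accel_pitch_deg: float, dt: float,
                         alpha: float = 0.98) -> float:
    """Fuse a gyroscope rate with an accelerometer angle estimate.

    The gyroscope integrates smoothly but drifts over time; the
    accelerometer is noisy but drift-free. Blending the two gives the
    stable pitch estimate an AR headset needs to keep overlays anchored.
    """
    gyro_estimate = pitch_deg + gyro_rate_dps * dt  # integrate angular rate
    return alpha * gyro_estimate + (1 - alpha) * accel_pitch_deg

# Toy loop: 100 Hz samples, head tilting down at a steady 10 deg/s.
pitch = 0.0
for step in range(1, 101):
    true_pitch = 10.0 * step * 0.01  # ground truth for the fake sensors
    pitch = complementary_filter(pitch, gyro_rate_dps=10.0,
                                 accel_pitch_deg=true_pitch, dt=0.01)
print(f"Estimated pitch after 1 s: {pitch:.1f} degrees")  # ~10 degrees
```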
AR glasses bundle together a number of rapidly evolving technologies, and one persistent difficulty is powering them. Batteries are a bottleneck for mobile devices generally, because battery technology improves much more slowly than the technologies those batteries power. The components that make AR glasses work are power-hungry, so AR devices need large batteries. The Microsoft HoloLens, for example, has a battery that holds roughly four times the energy of an iPhone’s, with a correspondingly large form factor, yet it provides only two or three hours of runtime. One solution, beyond developing smaller batteries and more efficient components, might be remote rendering at the network edge – offloading much of the processing an AR device does to a nearby edge server or the cloud.
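The arithmetic behind those runtime figures is simple, as this sketch shows – the capacity and power-draw numbers are illustrative, not official specs:

```python
def runtime_hours(battery_wh: float, avg_draw_w: float) -> float:
    """Rough battery life estimate: capacity divided by average draw."""
    return battery_wh / avg_draw_w

# Illustrative figures: a ~16 Wh headset battery driven at ~6 W by
# displays, cameras, tracking, and the processor.
print(f"{runtime_hours(16.0, 6.0):.1f} hours")
# -> 2.7 hours, in line with the two-to-three-hour figure quoted above.
```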
There are a number of hurdles to bringing AR devices to market and making them as ubiquitous as smartphones. But the technology is developing rapidly, and its transformative potential is vast.
AR tech has the potential to change the way we work, play, train, and interact with one another. Want more insights into applications of AR technology for levelling up learning and training? Check out our Fireside Chat on using AR tech to teach brain anatomy – a fiendishly difficult topic to teach and learn – and watch this space for more intel.