By Chandra Devam & Scott Edgar

A sneak peek from my new book, CONVERGENCE: HOW THE WORLD WILL BE PAINTED WITH DATA, being released this month at SXSW.


Welcome to the future of medicine: Augmented Reality. Virtual Reality is already in clinical use and has even been shown in studies to be more effective than opioids for pain relief. AR, however, has the opportunity to revolutionize medicine as we know it. We are as different on the inside as we are on the outside. This seems obvious when you think about it — we are all shaped differently, with different heights and frames and sizes. For example, we all have eyes and mouths, but they are set differently on our faces. None of us has exactly the same organ placement, and this deeply affects medicine. Medical error is the third leading cause of death in the United States, according to a Johns Hopkins study. Using AR, we can have x-ray vision to see inside the human body, allowing medical professionals to see our individual differences and better diagnose, plan, and operate on us.

AR and Medical Education

AR is changing, and will continue to change, how medical students learn. It’s incredibly important that doctors be exposed to all kinds of body types and anatomical makeups during their training. One cadaver, or a few, isn’t as helpful as hundreds or thousands. Before AR, doctors had to wait until they were in the field, seeing patients, to gain a full range of critically needed expertise.

Consider how helpful it would be for students to all be viewing and interacting with multiple Digital Imaging and Communications in Medicine (DICOM) based models during a lecture, thus augmenting the lesson. Practical markup could be done on DICOM-rendered models. A procedure or diagnosis could be practiced multiple times and perfected, without any danger to patients.

AR medical theaters that allow multiple people to interact with the same model, in real time, will greatly increase the capacity for learning. Certain things like code blue situations can even be programmed and practiced. Imagine how helpful this would be for stress mitigation among clinicians in real life operating rooms. Entire medical schools will be virtual, and students will download modules with lectures and a holographic professor. Remote students will use haptic devices to increase presence and interactivity. Training for a new procedure will be similarly seamless: virtual practice and guidance make perfect.

Many more applications are beginning to emerge in psychology, patient education, telemedicine, resource location, surgical planning, and even live surgical procedures. The display of virtual medical data, anchored to the patient, will allow doctors to see patients’ vital statistics without looking away. It also allows doctors to show patients where important aspects of their conditions are on their bodies. Surgeons benefit from virtual surgical plans and anatomy, which will be painted on the patient during surgery. Physicians can virtually dissect the body for closer inspection.
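Anchoring virtual data to a patient is, at its core, a registration problem: finding the rotation and translation that map the scan’s coordinate frame onto landmarks tracked on the patient’s body. The sketch below shows the classic least-squares rigid fit (the Kabsch algorithm) on synthetic landmarks; it is a simplified illustration of the general principle, not any particular vendor’s implementation.

```python
import numpy as np

def rigid_register(model_pts, patient_pts):
    """Estimate rotation R and translation t mapping model landmarks
    onto patient landmarks (Kabsch least-squares rigid fit)."""
    mc = model_pts.mean(axis=0)
    pc = patient_pts.mean(axis=0)
    # Cross-covariance of the centered point sets.
    H = (model_pts - mc).T @ (patient_pts - pc)
    U, _, Vt = np.linalg.svd(H)
    # Sign correction so we get a proper rotation, not a reflection.
    d = np.sign(np.linalg.det(Vt.T @ U.T))
    R = Vt.T @ np.diag([1.0, 1.0, d]) @ U.T
    t = pc - R @ mc
    return R, t

# Three landmarks in the scan's own coordinate frame...
model = np.array([[0., 0., 0.], [1., 0., 0.], [0., 1., 0.]])

# ...and the same landmarks as tracked on the patient: rotated 90
# degrees about the vertical axis and shifted across the room.
theta = np.pi / 2
Rz = np.array([[np.cos(theta), -np.sin(theta), 0.],
               [np.sin(theta),  np.cos(theta), 0.],
               [0., 0., 1.]])
patient = model @ Rz.T + np.array([5., 2., 0.])

R, t = rigid_register(model, patient)
aligned = model @ R.T + t   # the overlay now sits on the patient
```

With the transform in hand, every voxel of the scan can be drawn in patient space, which is what keeps a virtual overlay “stuck” to the body as the clinician moves.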

In the next five years, we will see AR become a key technology for the medical field. Use of AR for things ranging from pain management to vitals monitoring to telehealth will become commonplace.

In ten years, AR will become ubiquitous in operating rooms worldwide. In twenty years, we will look back on how we used to do medicine and wonder how we ever managed without this revolutionary technology. It will become a standard for everything.

Use Cases

One aspect of medical AR is its clinical application. There are a number of companies operating in this space, among them AccuVein, Aira, Amalgamated Vision, Aris MD, Atheer, Brain Power, EchoPixel, Emteq, Maestro Games, and SimX.

AccuVein, based in New York, uses near-infrared light to display veins on the surface of the human body. AccuVein states that 40 percent of IVs miss the vein on the first attempt. This technology allows doctors and nurses to perform injections, draw blood, and set up IVs with precision. Founded in 2006, AccuVein is now in over 125 countries.

Aira is using AR to help those with vision problems. Using deep learning algorithms — a form of artificial intelligence — paired with AR glasses, or even a phone camera, Aira can talk the user through a variety of situations that are difficult for those who are blind or have limited vision. Users of the app can recognize faces or avoid obstacles without seeing them. Based in San Diego, California, Aira has raised $15.3 million in early-stage venture funding to power its development and growth.

Amalgamated Vision is developing next-generation optical systems to power non-obstructive, ultra-near-to-eye virtual retinal displays. Using lasers to “paint” images directly on the retina produces an image quality equal to the resolution of the human eye. There is no screen between the light source and the viewer; the image is perfectly clear even for people who need corrective lenses. The company’s proprietary optics create a practical working field of view in a compact, lightweight device that sits below the sight line, like reading glasses, and does not impair normal vision. This allows healthcare workers to stay present in the moment, switching their attention from patients and co-workers to a full-color, stereoscopic display of the patient’s medical imaging and real-time data, just by looking down into their smart glasses. The technology has the potential to revolutionize the way images are displayed in XR.

Aris MD uses diagnostic images (DICOM) to create 3D visualizations of patient anatomy, displaying them over the patient to give surgeons a view of each patient’s individual internal makeup. This allows surgeons to make fewer mistakes related to anatomical variances (the differences between each person’s individual anatomy) and improves efficiency in the operating room. In addition to visualizing patient anatomy, Aris MD’s automated segmentation technology allows images to be separated into individual organs and parts without the need for a radiologist to manually mark up images, and without the use of manual 3D modeling techniques.

The use of AR for diagnostics and surgical applications is very compelling: transforming images traditionally viewed as photographic slices into 3D, in real time and without pre- or post-processing, facilitates organic viewing and more intuitive understanding. This takes the burden of mentally reconstructing 3D images off the diagnosing physicians and allows them to focus solely on making a diagnosis.
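The core idea — turning a stack of 2D slices into a navigable 3D volume — can be sketched in a few lines. This is a deliberately simplified illustration using a synthetic NumPy array in place of real DICOM files (a production pipeline would load the series with a dedicated DICOM parser, and real segmentation is far more sophisticated than a single threshold):

```python
import numpy as np

# Synthetic stand-in for a CT series: 64 axial slices, 128x128 pixels
# each. A real pipeline would fill this array from DICOM files.
rng = np.random.default_rng(0)
volume = rng.integers(0, 255, size=(64, 128, 128)).astype(np.int16)

# Once the slices are stacked into a volume, any plane can be
# resampled instantly -- no re-scanning of the patient required.
axial    = volume[32, :, :]   # the plane the scanner acquired
coronal  = volume[:, 64, :]   # front-to-back cut through the stack
sagittal = volume[:, :, 64]   # side-to-side cut through the stack

# A crude intensity threshold separates bright structures (e.g. bone
# on CT) from soft tissue -- the simplest possible "segmentation".
bone_mask = volume > 200
```

Everything a viewer then renders — rotation, zoom, virtual dissection — is just resampling this volume from a new angle, which is why the 3D view can be interactive in real time.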

Atheer’s AiR Glasses and enterprise suite let users employ gestures and voice commands to monitor vitals, make annotations, and communicate with remote experts via AR video. These features allow the doctor to focus on the task at hand without needing to spend time directly managing other devices or leaving the room to speak with experts. Founded by Soulaiman Itani in 2011, Atheer has used its $35.3 million in venture funding to develop and grow this cloud-based platform.

Boston-based Brain Power uses its software suite — The Empowered Brain — to help children and adults with autism learn life skills. The Empowered Brain’s aim is to help with social skills, language, and positive behavior reinforcement. It also uses data collection and analytics to customize feedback for each individual patient. Brain Power has been operating since 2013.

“I wanted to do something that would impact people in their daily lives,” said Brain Power founder Ned Sahin. “There was a huge unmet need here. It was staggering when I realized how little progress we’ve made in autism. Parents tell me, ‘I just wish my child could look me in the eye. I wish my child could understand what I’m thinking, what I’m feeling.’ And we’re giving them that.”

EchoPixel uses specialized displays, such as the 3D zSpace display, that allow users to view 3D models of patient anatomy floating outside the screen. Users can inspect patient-specific organs by rotating them and zooming in and out to see different features. The Los Altos Hills, California-based company has been operating since 2012 and has raised $14.3 million in early-stage funding.

Zlatko Devcic, an interventional radiologist at Stanford University Medical Center, said EchoPixel’s system “allows you to view a patient’s arterial anatomy in a 3D image, as if it is right in front of you, which may help interventional radiologists more quickly and thoroughly plan for the equipment and tools they’ll need for a successful outcome.”

Emteq has developed an insert for VR headsets that uses facial movements to detect emotional responses; it can also monitor medical conditions that affect facial expression, such as facial paralysis, depression, bipolar disorder, and Parkinson’s disease. Based in Brighton, UK, and founded by Dr. Charles Nduka and Graeme Cox, Emteq has leveraged grants to develop its hardware and software solutions.

The company has patented and prototyped a glasses-based system for a facial expression sensing platform, which allows emotional responses to be measured in the real world. It enables facially expressive avatars. The UK government is funding a study to create a social interaction training system for autistic teenagers.

Maestro Games uses classical music to simulate the experience of conducting an orchestra, with the environment responding to your movements and the music. This gives a sense of empowerment and calm and has been used to treat PTSD.

SimX uses AR to replace training mannequins used by doctors with virtual patients. Not only is this more versatile, with customizable scenarios for training, but it costs less than one-tenth of the price of a traditional training mannequin. Trainees are able to collaborate in a shared simulation, mimicking real-world situations.

Obstacles & Opportunities

While these technologies are in various stages of development, AR in medicine still faces many obstacles. Because the technology is so new, regulatory bodies and insurance companies have not yet caught up. FDA approvals take time, slowing adoption of groundbreaking technologies that could save lives. Sterilization of equipment is another concern: many head-mounted displays are lined with foam or made of plastic, which makes sterilizing them a complicated process. Insurance companies, for the most part, are not set up to pay for or reimburse the use of these technologies, leaving patients responsible for the associated costs.

There is also a subset of AR in medicine for training. If a picture is worth a thousand words and a video worth a million words, an AR experience is worth a billion words. The ability to see and interact creates a depth of experience that is almost as valuable as the real experience.

The hardware is not something to be overlooked. There are a number of competitors in the space, with new devices being released yearly. While it is certainly too early to choose a winner in terms of medical use, there are a few stand-outs.

Google Glass was the original device in this category. First shown at a Foundation Fighting Blindness event, it has been used to stream live video of surgeries, consult with remote experts, and even record and transcribe electronic health records from patient videos.

Microsoft’s HoloLens has received FDA clearance for use in surgical planning. HoloLens has also been used by Case Western Reserve University and the Cleveland Clinic to develop a mixed reality education system for teaching anatomy. The device itself operates using a combination of voice commands and gestures, allowing the user to interact naturally with the augmented images being displayed.

Form factor and physical properties of HMDs can be especially challenging in medicine. In addition to working in environments that require special precautions, such as sterile conditions or regulatory approvals, health care providers have a strong imperative to interact directly with patients and colleagues. HMDs that obstruct or impair the primary visual field interfere with communication and collaboration, as do products that are bulky or limit movement.

AR will show definitive benefits in cost, safety, and efficiency as adoption rates increase, leading hospitals and insurers to push for purchases of new technology, as well as training for medical practitioners in these new technologies. Surgeons, anesthesiologists, emergency medical technologists, and nurses all benefit from these advancements.

There is no clear winner in the hardware space, and given the newness of the technologies involved, the path to FDA clearance for use in medical practice can be arduous. Technology often moves faster than regulation, and the gap is widening. This can even lead to technology being developed and/or tested in locales that do not have the same level of regulatory oversight. Because such pioneering may happen outside the US and other developed nations, this testing has the potential to shift the latest medical practices to new locations around the world.

There are also concerns surrounding the use of AR in medicine. Healthcare professionals are acclimatized to the instruments and methods with which they have been trained and are familiar. Augmented displays can cause information overload. Professionals can also be reluctant to try new methods over the tried and true “safe” methods with which they are familiar; healthcare is by necessity a risk-averse business.

Doctors who are used to finding information in a given place, for example, may be reluctant to use new methods of looking at critical data. This reluctance could also slow adoption of AR, with younger doctors driving the uptake. Doctors commonly practice their profession in the way they were trained, with the technology at the time of their original training. In this way, changes in medicine are more generational than practitioners would like to admit.

AR/VR Consultant, Columnist, Author of the AR-enabled books “Metaverse, A Guide to VR & AR” (2018) & “Convergence” (2019).