Photo by Nan Palmero/Flickr CC


By Caroline Cunningham/Washingtonian

The patient is lying on the table in front of me, bright fluorescent lights shining down on her blood-stained clothes. I know she’s been in a car accident, but I have no idea of the extent of the damage yet.

I step closer to the table and glance at her vitals on the screen next to her. My scrubs-clad team stands nearby, waiting for my instructions. I wonder where to begin, trying to shut out the sounds of the emergency room around me.

Suddenly, the patient begins to convulse—she’s vomiting. If I don’t make a move soon, she’ll choke and die. Her skin is turning a sickly bluish-gray.

Finally, I remember what I’m supposed to say: “Check for breathing.” One of the doctors jumps forward to follow my command.

Gently, I lift the goggles off my eyes, blinking as I come back to reality. The ER, the patient, and the other doctors have all vanished, and I’m standing in the MedStar Institute for Innovation office, surrounded by the people who built this virtual-reality program from scratch.

With the help of headphones, a microphone, VR goggles, a handset controller, and laser sensors on the wall that followed my every move, I’d been thrust into an “emergency” where the commands I gave and the clinical decisions I made determined whether the patient lived or died…

Read the full story at