HTC recently announced a new set of trackers for its Vive virtual reality headsets, including one that tracks facial expressions and mouth movements. It’s part of a growing trend to make VR more responsive and interactive.

“Having more-realistic experiences is important to fully transition VR from a novelty or niche tool to a widely used consumer technology,” Ellysse Dick, a policy analyst at the Information Technology and Innovation Foundation, a think tank for science and technology policy, said in an email interview. “It will allow for more meaningful interactions and rich experiences that will expand the market in areas such as education, workplace collaboration, and entertainment.”

Watching Me Watching You

HTC says the VIVE Facial Tracker can track up to 38 different facial movements, and when it’s paired with the VIVE Pro Eye, users can enable full-face tracking. The device has a sub-10-millisecond response time and uses dual cameras to capture the movements of the bottom half of your face. It also can track in low-light environments due to infrared illumination. VIVE said the Facial Tracker will be available on March 24 for $129.99.

“A hint of a scowl,” the company writes on its website. “A sneer. A smile. VIVE Facial Tracker captures expressions and gestures with precision through 38 blend shapes across the lips, jaw, teeth, tongue, cheeks, and chin.”

But the Facial Tracker won’t work on all of HTC’s headsets. The company says it will be compatible with the professional-level Vive Pro line, but not the consumer-focused Vive Cosmos.

Other companies already incorporate face tracking into virtual reality headsets. The Magic Leap One and Microsoft’s HoloLens both feature expression tracking.

A Radical Leap

In the future, face tracking could lead to radical changes in virtual reality, experts say. For example, a headset could understand your reaction to interfaces and adjust them accordingly, Jared Ficklin, a pioneering developer of software user interfaces and the chief creative technologist at argodesign, said in an email interview. Or, a future version could monitor what you are watching.

“Imagine looking at an object and making a small gesture to select it rather than trying to steer a cursor around,” he said. “Or using voice combined with gaze and gesture allows a very natural interaction. Put this there. One can also imagine the other direction and the development of smaller, less socially visual cues.”

Clenching your cheeks or pulling the corner of your mouth could be mapped to interactions like select and back, allowing people to compute in public without calling attention to the fact that they’re doing so, Ficklin said. “If you are using a wearable mobile computer, especially in the form of eyeglasses, this socially acceptable or socially hidden computing will become very important.”
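The gesture-to-command idea Ficklin describes can be sketched in a few lines. Note that the gesture names, the 0-to-1 weight convention, and the activation threshold below are illustrative assumptions for this article, not HTC’s actual SDK:

```python
# Hypothetical sketch: turn tracked facial gestures into UI commands.
# A face tracker typically reports, per frame, how strongly each gesture
# registers as a weight between 0.0 and 1.0.

GESTURE_COMMANDS = {
    "cheek_clench": "select",       # clenching your cheeks -> select
    "mouth_corner_pull": "back",    # pulling a mouth corner -> back
}

ACTIVATION_THRESHOLD = 0.7  # how strongly a gesture must register to count


def commands_from_weights(weights: dict) -> list:
    """Return the UI commands whose gesture weight crosses the threshold."""
    return [
        command
        for gesture, command in GESTURE_COMMANDS.items()
        if weights.get(gesture, 0.0) >= ACTIVATION_THRESHOLD
    ]


# One invented frame of tracker output: a firm cheek clench, a faint mouth pull.
frame = {"cheek_clench": 0.85, "mouth_corner_pull": 0.2}
print(commands_from_weights(frame))  # ['select']
```

The threshold is the design point: set it too low and ordinary expressions trigger commands; set it too high and deliberate gestures get missed.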

The Face Unlocks Communications

Reading expressions is key to communication, both virtual and in real life. The human face generates 21 basic facial expressions that could be read by VR, body language expert Patti Wood said in an email interview. Of the 36 muscles used to create facial expressions, only a fraction are used in smiling.

“The precise number of muscles varies depending on how researchers define a smile,” Wood said. “Most experts say six pairs of muscles are directly involved with smiling.”

Facial expression tracking could radically change the VR experience for users, particularly those using avatars during telepresence sessions, where participants chat with other people, Ficklin said.

“It is all about the importance of non-verbal communication,” he said. “Mapping facial expressions onto the user’s avatar means those that are in a meeting with them have more of the non-verbal communication channels we have come to expect from in-person or video chat. They can see how the person is reacting.”
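One way to picture what mapping expressions onto an avatar involves: each frame, the tracker reports a weight per blend shape, and the avatar’s face applies those weights, usually smoothed across frames so expressions don’t jitter. The blend-shape names and the simple exponential smoothing below are illustrative assumptions, not how HTC’s software actually works:

```python
# Hypothetical sketch: smooth per-frame blend-shape weights before
# applying them to an avatar's face. Each weight is between 0.0 and 1.0.

def smooth_weights(previous: dict, current: dict, alpha: float = 0.5) -> dict:
    """Blend the new frame's weights with the previous frame's.

    alpha controls responsiveness: 1.0 applies the new frame directly
    (fast but jittery); lower values favor the previous frame (smoother).
    """
    keys = set(previous) | set(current)
    return {
        k: alpha * current.get(k, 0.0) + (1 - alpha) * previous.get(k, 0.0)
        for k in keys
    }


# Invented example: the jaw starts opening while the smile holds steady.
prev = {"jaw_open": 0.0, "smile_left": 0.4}
new = {"jaw_open": 0.6, "smile_left": 0.4}
smoothed = smooth_weights(prev, new)
print(round(smoothed["jaw_open"], 2))   # 0.3
print(round(smoothed["smile_left"], 2)) # 0.4
```

The smoothed weights would then drive the avatar mesh, so people in the meeting see the half-open jaw a frame or two late but without flicker.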