Researchers at Cornell University have developed a wearable earphone device (an "earable") that bounces sound off the wearer's cheeks and turns the echoes into a recreation of the person's moving face.
The system, called EarIO, transmits facial movements to a smartphone in real time and is compatible with commercially available headsets for hands-free video conferencing.
A speaker on each side of the earphone sends acoustic signals to the corresponding side of the face, and a microphone picks up the echoes.
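The article does not describe EarIO's actual waveform, but acoustic sensing systems of this kind typically estimate an "echo profile" by cross-correlating what the microphone hears with what the speaker emitted, so that peaks in the correlation mark reflections arriving at different delays. A minimal sketch of that idea, where the chirp parameters and sample rate are illustrative assumptions rather than EarIO's design:

```python
import numpy as np
from scipy.signal import chirp, correlate

FS = 48_000  # assumed sample rate in Hz (not specified in the article)

# Assumed transmit waveform: a 10 ms high-frequency chirp. EarIO's real
# signal design may differ; this is only for illustration.
t = np.arange(0, 0.01, 1 / FS)
tx = chirp(t, f0=16_000, t1=t[-1], f1=20_000)

def echo_profile(rx: np.ndarray) -> np.ndarray:
    """Cross-correlate the received audio with the transmitted chirp.
    Peaks correspond to reflections at different delays, so the profile
    shifts as the cheeks and jaw move."""
    return correlate(rx, tx, mode="valid")
```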
As the user talks, smiles, or raises their eyebrows, the echo profiles change, and a deep learning algorithm developed by the team continuously processes the data, translating the shifting echoes into identifiable facial expressions.
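The team's model is not published in this article, but the pipeline it describes (windows of echo profiles in, facial-expression parameters out) can be sketched as a small neural network. Everything below, including the layer sizes, the two-channel input, and the 52-parameter output, is a hypothetical stand-in, not EarIO's actual architecture:

```python
import torch
import torch.nn as nn

class EchoToFaceNet(nn.Module):
    """Hypothetical sketch mapping a window of echo profiles (one channel
    per side of the face) to facial-expression parameters."""
    def __init__(self, n_channels: int = 2, n_params: int = 52):
        super().__init__()
        self.encoder = nn.Sequential(
            nn.Conv1d(n_channels, 32, kernel_size=7, stride=2), nn.ReLU(),
            nn.Conv1d(32, 64, kernel_size=5, stride=2), nn.ReLU(),
            nn.AdaptiveAvgPool1d(1),  # collapse the delay/time axis
        )
        self.head = nn.Linear(64, n_params)

    def forward(self, echo: torch.Tensor) -> torch.Tensor:
        # echo: (batch, channels, samples) — one short window of profiles
        features = self.encoder(echo).squeeze(-1)
        return self.head(features)  # per-frame expression parameters

# Example: a single 512-sample window from the left and right microphones.
model = EchoToFaceNet()
window = torch.randn(1, 2, 512)
expression = model(window)  # shape (1, 52)
```

In a setup like this, running the model frame by frame is what would let the reconstructed face track the wearer in real time.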
By using sound instead of data-intensive images, the earable can communicate with a smartphone over Bluetooth while keeping the user's information private.
In the future, the team intends to improve the device’s ability to filter out nearby noises and other disruptions.
Ke Li, a doctoral student in the field of information science at Cornell University, commented: "Through the power of AI, the algorithm finds complex connections between muscle movement and facial expressions that human eyes cannot identify.
“We can use that to infer complex information that is harder to capture – the whole front of the face.”