Researchers at Pohang University of Science and Technology (POSTECH) have developed a breakthrough wearable technology that can convert silent speech into audible voice by reading subtle neck muscle movements. The study, led by Professor Sung-Min Park and Dr. Sunguk Hong, was published in Cyborg and Bionic Systems, marking a significant step forward in human-machine communication.
From Muscle Movements To Spoken Words
The innovation is built on a simple but powerful idea: speech is not just about sound. When a person speaks – or even attempts to speak silently – tiny movements occur in the muscles and skin around the neck. These movements form a kind of “invisible map” of intended speech.

To capture this, the researchers created a wearable device called a multiaxial strain mapping sensor. The system combines a miniature camera with flexible silicone embedded with reference markers, allowing it to detect even the smallest skin deformations. Designed for daily use, the sensor can be comfortably worn on the neck and automatically recalibrates when repositioned.
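The core of such a marker-tracking approach is turning observed marker displacements into a strain estimate. As a rough illustration only (the function name, the least-squares fit, and the choice of Green-Lagrange strain are assumptions for this sketch, not details from the paper), one could fit a planar deformation gradient to the tracked markers and read off the strain components:

```python
# Hypothetical sketch: recovering a planar strain state from tracked
# marker positions, loosely inspired by the article's description of a
# camera watching reference markers in a silicone patch. The
# formulation below is an illustrative assumption, not the authors'
# published method.

def estimate_strain(ref_pts, cur_pts):
    """Fit a 2x2 deformation gradient F (cur ~ F @ ref) by least
    squares, then return the Green-Lagrange strain tensor
    E = 0.5 * (F^T F - I) as ((exx, exy), (exy, eyy))."""
    # Accumulate the 2x2 moment matrices P P^T and Q P^T.
    sxx = sxy = syy = 0.0
    a11 = a12 = a21 = a22 = 0.0
    for (x, y), (u, v) in zip(ref_pts, cur_pts):
        sxx += x * x; sxy += x * y; syy += y * y
        a11 += u * x; a12 += u * y
        a21 += v * x; a22 += v * y
    det = sxx * syy - sxy * sxy
    # Invert the 2x2 matrix P P^T, then F = (Q P^T) (P P^T)^-1.
    i11, i12 = syy / det, -sxy / det
    i21, i22 = -sxy / det, sxx / det
    f11 = a11 * i11 + a12 * i21
    f12 = a11 * i12 + a12 * i22
    f21 = a21 * i11 + a22 * i21
    f22 = a21 * i12 + a22 * i22
    # Green-Lagrange strain: E = 0.5 * (F^T F - I).
    exx = 0.5 * (f11 * f11 + f21 * f21 - 1.0)
    eyy = 0.5 * (f12 * f12 + f22 * f22 - 1.0)
    exy = 0.5 * (f11 * f12 + f21 * f22)
    return (exx, exy), (exy, eyy)

# A uniform 10% stretch along x should yield exx = 0.105, eyy = 0.
ref = [(1, 0), (0, 1), (1, 1), (2, 1)]
cur = [(1.1, 0), (0, 1), (1.1, 1), (2.2, 1)]
E = estimate_strain(ref, cur)
```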
The collected data is then processed using artificial intelligence, which interprets the strain patterns and reconstructs the intended words or sentences. By pairing this with voice synthesis trained on the user’s vocal profile, the system can generate speech that closely resembles the person’s natural voice – even when no sound is produced.
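To make the decoding step concrete, here is a deliberately minimal sketch of pattern matching: comparing an observed strain-feature vector against stored per-word templates with a nearest-neighbour rule. The actual system uses a trained AI model paired with personalized voice synthesis; the template dictionary, feature values, and distance metric here are illustrative assumptions only.

```python
# Hypothetical nearest-neighbour decoder: maps a strain-feature vector
# to the most similar stored word template. Stands in for the article's
# AI model purely for illustration.

def euclidean(a, b):
    """Euclidean distance between two equal-length feature vectors."""
    return sum((x - y) ** 2 for x, y in zip(a, b)) ** 0.5

def decode_word(strain_features, templates):
    """Return the word whose template is closest to the observed
    strain-feature vector."""
    return min(templates, key=lambda w: euclidean(strain_features, templates[w]))

# Toy templates: averaged strain features per silently mouthed word
# (values invented for the example).
templates = {
    "hello": [0.10, 0.02, -0.01],
    "yes":   [0.03, 0.08, 0.05],
    "no":    [-0.04, 0.01, 0.09],
}
print(decode_word([0.09, 0.03, 0.00], templates))  # prints "hello"
```

In a real system this lookup would be replaced by a learned sequence model, and the decoded text would then be passed to a synthesizer trained on the user's voice profile.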
A Practical Leap Over Existing Systems
Traditional voice restoration methods rely on technologies like electromyography (EMG) or electroencephalography (EEG), which often require bulky equipment and can be uncomfortable for extended use.
The POSTECH team’s approach eliminates these barriers by offering a lightweight, wearable alternative. In testing, the system demonstrated high accuracy in reconstructing speech, even in noisy environments such as industrial settings where conventional microphones struggle.
Real-World Impact And Future Potential
The implications of this technology are far-reaching. It could provide a new communication pathway for patients who have lost their voices due to vocal cord damage or laryngeal surgery, enabling them to “speak” again using their own voice profile.

Beyond healthcare, the system could enable silent communication in environments where speaking aloud is impractical – such as libraries, meetings, or high-noise workplaces. It also opens the door to more natural human-AI interfaces, where intention can be translated into speech without physical vocalization.
Looking Ahead
The researchers aim to refine the technology for broader real-world deployment, improving accuracy and expanding language capabilities. Future iterations may integrate more seamlessly with consumer devices, potentially transforming how people communicate in both personal and professional settings.
As AI continues to merge with wearable technology, innovations like this signal a shift toward more intuitive, unobtrusive forms of interaction – where even unspoken words can finally be heard.