While it is already possible to control computers with hand gestures, doing so usually requires peripheral electronic devices or special on-board hardware. The Typealike system, however, brings such functionality to existing computers; no additional electronics are required.
Currently under development at the University of Waterloo in Canada, Typealike consists of two components: a small, downward-tilted mirror that is placed over the computer’s webcam, and machine learning-based software that runs on that computer.
When the user wants to turn up the speaker volume – just one example of the system’s capabilities – they simply make a thumbs-up gesture with the hand resting next to the keyboard. The software recognizes this gesture and increases the volume accordingly.
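Conceptually, once the software has labeled a gesture, it only needs to look up and apply the matching system action. A minimal sketch of such a dispatch table follows; the gesture names, actions, and volume logic here are invented for illustration and are not taken from the Typealike paper:

```python
# Hypothetical gesture-to-action dispatch. The real Typealike gesture set
# and action bindings are not reproduced here.

def volume_up(state):
    # Raise speaker volume by one step, clamped to 100.
    state["volume"] = min(100, state["volume"] + 10)
    return state

def volume_down(state):
    # Lower speaker volume by one step, clamped to 0.
    state["volume"] = max(0, state["volume"] - 10)
    return state

ACTIONS = {
    "thumbs_up": volume_up,
    "thumbs_down": volume_down,
}

def handle_gesture(label, state):
    """Apply the action bound to a recognized gesture; ignore unknown labels."""
    action = ACTIONS.get(label)
    return action(state) if action else state

state = {"volume": 50}
state = handle_gesture("thumbs_up", state)   # volume rises to 60
state = handle_gesture("wave", state)        # unbound gesture, no change
```

Keeping recognition (the model) separate from the action table means new gestures or new bindings can be added without retraining anything, which is one plausible way a system like this could stay extensible.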
“It all started with a simple idea of new ways to use a webcam,” says Nalin Chhibber, a recent master’s graduate from Waterloo’s Cheriton School of Computer Science. “The webcam is pointed at your face, but most interaction that happens on a computer happens around your hands. So we thought, what could we do if the webcam could pick up hand gestures?”
Additionally, because the mirror redirects the webcam’s view towards their hands, users avoid the privacy issues that would come with Typealike constantly looking at their face.
Because everyone’s hands look and move differently, the researchers recruited 30 volunteers to create a video database of the 36 gestures currently used by the system. A machine learning algorithm was then trained on this database, learning to consistently recognize every gesture despite variations in hand shape and lighting conditions.
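The train-on-labeled-examples idea can be illustrated with a toy classifier. The sketch below uses a nearest-centroid model over tiny invented feature vectors; the paper’s actual model, features, and data are more sophisticated, so treat this purely as an analogy for "many volunteers, one label per gesture":

```python
# Toy nearest-centroid gesture classifier. Features, labels, and data are
# invented; Typealike's real model learns from webcam video frames.

def train(samples):
    """samples: list of (feature_vector, gesture_label) pairs.
    Returns the mean feature vector (centroid) per label."""
    sums, counts = {}, {}
    for vec, label in samples:
        acc = sums.setdefault(label, [0.0] * len(vec))
        for i, v in enumerate(vec):
            acc[i] += v
        counts[label] = counts.get(label, 0) + 1
    return {label: [s / counts[label] for s in acc]
            for label, acc in sums.items()}

def predict(centroids, vec):
    """Return the label whose centroid is closest (squared Euclidean) to vec."""
    def dist2(c):
        return sum((a - b) ** 2 for a, b in zip(c, vec))
    return min(centroids, key=lambda label: dist2(centroids[label]))

# Invented 2-D "hand pose" features from several volunteers per gesture.
data = [
    ([0.9, 0.1], "thumbs_up"), ([0.8, 0.2], "thumbs_up"),
    ([0.1, 0.9], "fist"),      ([0.2, 0.8], "fist"),
]
model = train(data)
print(predict(model, [0.85, 0.15]))  # prints "thumbs_up"
```

Averaging over many volunteers is what lets the centroid (or, in the real system, the learned network) absorb person-to-person variation instead of memorizing one user's hands.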
The system is currently 97% accurate in identifying gestures. The researchers believe that, with further development, Typealike could also be used in virtual reality environments, making handheld controllers unnecessary.
“We always strive to create things that people can easily use,” says Assoc. Professor Daniel Vogel. “People look at something like Typealike, or other new technologies in the field of human-computer interaction, and they say it makes sense. That’s what we want.”
The research is described in a recently published paper in the journal Proceedings of the ACM on Human-Computer Interaction.
Source: University of Waterloo