Creating a barrier-free user experience with the Myo armband

A couple weeks ago we talked about the evolution of the Myo armband’s hardware design and walked you through the series of prototypes that have led up to the production design. In much the same way that the hardware has been built from the ground up, the software that makes the device work has also undergone a series of transformations from an initial proof of concept to a fully realized system.

Needless to say, when it comes to the Myo armband, the software responsible for gesture recognition is key and, as such, it has been the subject of tweaks, adjustments, modifications, and the occasional complete overhaul over the past two years. A central component of this software, known as the classifier, is tasked with identifying which gesture is being performed. In particular, it is this component that has undergone the most dramatic changes, and you’ll soon learn why.

In its infancy, the Myo armband required a training sequence before it could classify gestures. This meant that the user needed to perform examples of each gesture in order to demonstrate to the device what they ‘look’ like from a signal perspective. This training sequence consisted of up to ten repetitions of each gesture, and using these ‘examples’, the Myo armband was trained to recognize the patterns of a particular person’s gestures. However, if a different person put on the armband, the classification would no longer work. In fact, if the original wearer so much as rotated the device, they would need to retrain the Myo armband, because the signals would appear to be coming from different positions on the arm. As you can imagine, this proved to be a long and arduous process for the user.

[Image: one gesture, two signals]
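To make the old training-based approach concrete, here is a minimal sketch in Python of how a per-user classifier of this kind might be built. Everything here is illustrative: the feature (mean absolute value per channel), the nearest-neighbour model, and the function names are our assumptions, not the armband’s actual firmware.

```python
import numpy as np
from sklearn.neighbors import KNeighborsClassifier

def featurize(emg_window):
    """Reduce a window of raw 8-channel EMG samples (shape: samples x 8)
    to one simple activity feature per channel."""
    return np.abs(emg_window).mean(axis=0)

def train_per_user_classifier(examples):
    """examples: list of (emg_window, gesture_label) pairs collected
    during the training sequence -- up to ten per gesture."""
    X = np.array([featurize(window) for window, _ in examples])
    y = np.array([label for _, label in examples])
    clf = KNeighborsClassifier(n_neighbors=3)
    clf.fit(X, y)
    return clf

# The model is fit to one person's signals at one position on the arm,
# so rotating the device or handing it to someone else invalidates the
# learned patterns -- exactly the retraining problem described above.
```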

Through improvements to the sensors and the classifier, the number of repetitions in the training sequence was eventually whittled down to just a single example of each gesture. But the training sequence still needed to be performed every time the Myo armband was placed in a new position or on a new arm. With the goal of effortless interaction in mind, we needed the Myo armband to work out of the box with as little setup as possible. Naturally, this meant doing away with the training sequence altogether. The new paradigm, which we referred to as ‘trainingless’, needed to classify just as well as its predecessor without the person ever having to demonstrate each of the gestures prior to use.

The first order of business for classification is figuring out whether the Myo armband is being worn on a left or right arm, what its orientation is (logo up or logo down), and how much the device is rotated around the user’s arm. These three pieces of information define the Myo armband’s state. With the introduction of trainingless classification, all of this information is extracted from a single intuitive sync gesture, which is performed when the device is placed on a new arm.
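Here is one way that state could be represented, sketched in Python. The names, and the idea of encoding rotation as an electrode-pod offset, are our illustration rather than a documented internal format.

```python
from dataclasses import dataclass
from enum import Enum

class Arm(Enum):
    LEFT = "left"
    RIGHT = "right"

class Orientation(Enum):
    LOGO_UP = "logo_up"
    LOGO_DOWN = "logo_down"

@dataclass
class ArmbandState:
    arm: Arm
    orientation: Orientation
    # How far the band is rotated around the arm, expressed as the
    # number of electrode-pod positions offset from a reference pod.
    rotation_offset: int  # e.g. 0..7 for the eight-pod ring
```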

This crucial gesture consists of waving one’s hand out (bending the hand at the wrist) and then pivoting the forearm at the elbow. While words make for a slightly unwieldy description, the video below clearly shows what the sync gesture looks like. Over the past two years, a process which initially consisted of demonstrating each gesture up to ten times has been distilled into a single motion.

Once the Myo armband’s state has been determined from the sync gesture, the classifier makes adjustments accordingly and then begins processing the signals coming from the electrodes in real time. Due to the nature of these adjustments, it’s critically important that the information extracted during the sync gesture be reliable and accurate.
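To give a feel for what such adjustments might look like: since the electrodes form a ring around the arm, a rotated band simply shifts which pod sits over which muscle, so the incoming channels can be remapped before classification. Continuing the hypothetical `ArmbandState` sketch above, this is one plausible normalization, not a description of the production classifier.

```python
import numpy as np

def normalize_channels(emg_sample, state):
    """Remap one 8-channel EMG sample so it looks as though the band
    were worn at the reference rotation and orientation.

    emg_sample: array of 8 readings, one per electrode pod.
    state:      the ArmbandState recovered from the sync gesture.
    """
    # Undo rotation around the arm: a rotated ring is just a circular
    # shift of the channel order.
    remapped = np.roll(emg_sample, -state.rotation_offset)

    # A flipped band (logo down) reverses the direction in which the
    # pods are numbered, which reverses the channel order.
    if state.orientation is Orientation.LOGO_DOWN:
        remapped = remapped[::-1]

    # Left vs. right arm would be handled along similar lines, by
    # accounting for the mirrored musculature of the opposite forearm.
    return remapped
```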

As explained in our previous blog post on EMG data collection, the structure and physiology of our muscles vary from person to person, which ultimately means that every individual’s EMG signals are different. This, of course, poses a significant challenge when attempting to implement a trainingless classifier that works for every user. However, by drawing on the usage data gathered through our data collection efforts, we are able to build a generalized model of what each gesture’s EMG patterns look like. While it’s being worn, the Myo armband constantly uses this model to determine whether the user is performing a particular gesture. Once the classifier identifies that a gesture is being performed, a gesture event is sent over Bluetooth to whatever device the Myo armband is communicating with. Upon receiving the gesture event, the software application running on that device responds to the command sent from the Myo armband.
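On the receiving side, this amounts to a familiar event-driven pattern: the application subscribes to gesture events and maps each one to a command. Here is a toy Python sketch of that flow; `listen_for_gestures` is a stand-in for the real Bluetooth transport and SDK, and the gesture-to-action mapping is invented for illustration.

```python
def listen_for_gestures():
    """Stub for the Bluetooth layer: in a real application, the SDK
    delivers a gesture event each time the on-board classifier fires."""
    yield from ["wave_out", "fist", "fingers_spread"]  # stubbed events

ACTIONS = {
    "fist": lambda: print("pause"),
    "wave_in": lambda: print("previous slide"),
    "wave_out": lambda: print("next slide"),
    "fingers_spread": lambda: print("play"),
}

for gesture in listen_for_gestures():
    action = ACTIONS.get(gesture)
    if action is not None:
        action()  # respond to the command sent from the armband
```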

The team here at Thalmic Labs is always working to improve the gesture recognition capabilities of the Myo armband. Whenever we develop new methods or want to optimize existing algorithms, we use benchmarking to monitor the effects of our changes. Benchmarking is the process by which we evaluate the performance of various parts of the system. Conceptually, it’s not unlike handing out a classification exam to each of the different classifiers we’re testing. Through benchmarking, we can compare their scores to see the ways in which one approach is better than another, which ensures we’re always heading in the right direction. The benchmark tests also give us an idea of which parts of the system to focus on next. As more and more user data is collected, not only will we be able to create models that generalize better, but our benchmark tests will also become better indicators of real-world performance.
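In code, the ‘classification exam’ analogy translates almost directly. Here is a hedged sketch of such a harness, assuming a simple predict-style classifier interface and a labeled dataset of EMG windows (both of which are our assumptions).

```python
def benchmark(classifiers, exam_set):
    """Score each candidate classifier on the same held-out 'exam'.

    classifiers: mapping of name -> object with a predict(window) method.
    exam_set:    list of (emg_window, true_gesture) pairs drawn from
                 collected user data.
    """
    scores = {}
    for name, clf in classifiers.items():
        correct = sum(
            clf.predict(window) == truth for window, truth in exam_set
        )
        scores[name] = correct / len(exam_set)
    return scores

# Comparing the scores shows whether a change actually helps, e.g.:
#   benchmark({"current": old_clf, "candidate": new_clf}, exam_set)
```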