"Do what I mean, not what I say" is a famously unreasonable demand, but it is exactly what we expect from our computers every day. People and machines speak very different languages, and only when we talk to technology do we realize how much we depend on wordless communication.

For decades, people had to learn the language of technology to use it at all. Prior to the invention of the GUI, even the most basic computing required specialized knowledge.

But the landscape has shifted rapidly, and we futurists are trying to turn this idea on its head. Rather than training people to talk to machines, we make machines that pay attention to more than just what they're told.

One of the most important sources of information we've found is context. It turns out that where we are and what we're doing is about half the game when it comes to communication (maybe more than half). This is why I get so excited about articles like this one from the Harvard Business Review (HBR), which argue for a gesture control environment so simple and intuitive "it becomes something we barely think about."

Designing a technology your user "barely thinks about," like the Myo armband, requires a lot of thinking. The HBR article mulls many of the relevant issues: cultural context (gestures are a kind of language, and they change from place to place) and physical context. But this is just the beginning; context offers a staggering amount of information.

Imagine I'm hosting a party. In the basement, people are previewing a documentary produced by a friend. In the living room, others are talking and listening to music. Because a contextual control system knows I'm at the party (through Facebook, Google Now, or my calendar) -- and that the music and video players are running -- it can assume I'll be using gestures to control my music or the documentary and can safely disregard other controls (like gaming). If I walk into the living room, contextual awareness could give me automatic music control -- ditto the documentary in the basement.

The upshot is that I'm in control of the technology around me while barely thinking about it. If I want to turn the music down, contextually aware gesture control would quiet only the speakers of the system disrupting me.
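To make the party example concrete, here is a minimal sketch of the filtering idea. Everything in it is invented for illustration -- the room names, the running apps, and the function are hypothetical, not a real home-automation API:

```python
# Hypothetical sketch: narrowing the set of gesture controls the system
# listens for, based on where I am and what's running there.

# Which media apps are running in each room during the party (invented data).
ACTIVE_APPS = {
    "living room": {"music"},
    "basement": {"video"},
}

# Every control the system could respond to in principle.
ALL_CONTROLS = {"music", "video", "gaming", "presentation"}

def controls_for(room, at_party=True):
    """Return the gesture controls worth listening for in this room."""
    if not at_party:
        # No party context: the system can't safely disregard anything.
        return ALL_CONTROLS
    # At the party, only the media actually running in this room matters;
    # everything else (like gaming) is safely ignored.
    return ACTIVE_APPS.get(room, set())
```

With this context in place, `controls_for("living room")` yields only music control, and a "volume down" gesture never reaches the gaming console.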

This is just one example; the true power of context is that it's always relevant. Maybe when I come home from work and sit down in front of my television, the technology can assume I probably want to play my favorite game, because this has been my routine all week. If I don't feel like playing, it knows I probably want to watch Netflix (my second choice 99% of the time I sit in front of my television).
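That kind of routine-based default can be sketched very simply: count what I've chosen in this situation before and rank the options. The history data here is made up for illustration:

```python
from collections import Counter

# Hypothetical history: what I chose each evening this week when the
# context was "home after work, sitting in front of the television".
history = Counter({"play favorite game": 5, "watch Netflix": 2})

def suggest(history):
    """Rank past choices for this context, most frequent first."""
    return [action for action, _count in history.most_common()]

suggestions = suggest(history)
# suggestions[0] is the default the system offers;
# suggestions[1] is the fallback if I decline.
```

Real contextual systems would weigh far more signals than a single counter, but the principle is the same: the most likely intent becomes the default, and the next most likely becomes the fallback.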

With a rich understanding of context, the possibilities are endless. Eventually my devices will know I like more control over my music in the morning than in the evening. That I watch television instead of movies when I've worked longer than 11 hours. That I like to hear up-tempo music while biking and news headlines while walking. That while I'm hiking it should stay totally silent until I'm back in civilization (unless there's an emergency -- a context it also understands).

It sounds like science fiction, but this is the direction we're headed. New contextual computing tools enter the market daily, and as the HBR article points out, gestural input is the most intuitive choice for a controller. We can't wait for the arrival of a truly rich contextual computing ecosystem ripe for gesture control.