There's a whole lot of hype and a massive amount of R&D going into voice recognition systems. Being able to talk to your hardware and have it execute your wishes has to be the most intuitive way to compute ever.

There's just one minor problem: voice recognition is sluggish. I love Siri as much as the next guy, but she drives me nuts as I wait while she ponders my words for what seems like several small eternities.

Where voice recognition used to be beset with accuracy issues (I can remember testing voice recognition PC software that produced wonderful haikus bearing little to no relation to what I was saying), the challenge nowadays isn't so much accuracy as the sheer computational power required.

Because of this, virtually all the voice-recognition-capable gadgets made by the likes of Samsung, Apple or LG send a highly compressed recording of your voice commands to servers in data centres, often thousands of kilometres away, where more powerful hardware processes them into usable commands. It's this round trip that makes voice recognition so sluggish and so incredibly annoying to use.
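The difference between the two approaches boils down to simple arithmetic: keeping recognition on the device removes the network leg entirely. A minimal sketch in Python illustrates the idea; the latency constants below are made-up assumptions for illustration, not measured figures from any real device or service:

```python
# A back-of-the-envelope sketch of why the cloud round trip hurts.
# All latency figures are illustrative assumptions, not measurements.

NETWORK_ROUND_TRIP_S = 0.30    # compressed audio to a distant data centre and back
SERVER_PROCESSING_S = 0.20     # recognition on powerful server hardware
ON_DEVICE_PROCESSING_S = 0.15  # recognition on dedicated local silicon

def cloud_latency() -> float:
    """Total wait when commands travel to a remote data centre."""
    return NETWORK_ROUND_TRIP_S + SERVER_PROCESSING_S

def on_device_latency() -> float:
    """Total wait when the device does the recognising itself."""
    return ON_DEVICE_PROCESSING_S

if __name__ == "__main__":
    print(f"Cloud round trip: {cloud_latency():.2f}s")
    print(f"On-device:        {on_device_latency():.2f}s")
```

Even with generous assumptions about network speed, the network leg dominates the wait, and it's the one part of the pipeline that no amount of server horsepower can shrink.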


This may soon change thanks to the boffins at Intel, who've developed a solution that'll interpret your voice commands without sending them out to the cloud and waiting for a response.

Intel have partnered with an unnamed third party to put voice recognition onto Intel processors with enough grunt to translate voice commands. Most importantly, the silicon is also small and energy-efficient enough to fit in a mobile device, so no round trip to the cloud is required, making voice commands a whole lot more responsive.

In a nod to Tony Stark and the Iron Man movies, Intel have showcased a prototype wireless headset called "Jarvis" that incorporates the technology and connects to a smartphone. Jarvis listens for commands and can respond in its own synthetic voice, acting as a very cool personal assistant (unfortunately there's still no word from Intel on a working arc reactor prototype though).

Sheer responsiveness aside, the biggest benefit of Intel's voice recognition tech is that it works even when there's no data connection, making it usable in remote locations.

If the idea of slightly more responsive voice recognition has you going "meh", consider this:
when we talk to other people, conversation just flows. Normal conversation simply doesn't work when people have to wait three seconds for a response before they can reply.

Similarly, most of us find it unintuitive to ask our smartphone for the latest weather forecast and then have to wait for a response. To build the level of confidence in voice recognition that'll see us talking instead of typing, responsiveness is key, and that'll require hardware that can respond to voice commands as quickly as a human would.

Intel's voice-savvy silicon, however, could be a game changer for wearable tech and smartphones, with people talking instead of typing. This could in turn see new wearable form factors such as discreet headsets becoming more important (and less bulky) than keyboards or touchscreens.

That said, manufacturers will need to up their game if this is to translate into any meaningful uptake beyond the US. While there's little doubt that voice recognition has improved massively, the benefits have remained largely confined to those with North American accents.

Us poor Kiwis rarely get to see the tech strutting its stuff because the economics of developing a Kiwi-accent-friendly version just don't stack up. Similar problems face other English-speaking minorities too; the infamous Scottish speech recognition elevator sketch springs to mind. Oh well, here's hoping.