"I'm sorry Dave, I'm afraid I can't do that."
That sentence will send a chill through anybody who has seen the 1968 movie 2001: A Space Odyssey.
HAL 9000, the on-board computer, created a wave of fear around intelligent computers that we could talk to as if they were human.
Last week at CES, the world's largest consumer electronics trade show, that fear seemed long forgotten - voice recognition technology was the dominant feature in many of the products on display.
Voice-enabled technology has several advantages. For example, I'm writing this article through dictation software on my computer - software I discovered last year after breaking my hand and needing a way to type as fast as I could before my injury.
Because my daily work is full of scientific jargon, I needed software that could be taught unusual vocabulary and adapt to regional accents.
That's the great thing about voice recognition technology: it's trainable and can learn from its mistakes, linking speech recognition to complex natural language processing systems to figure out not just what you say, but what you actually mean.
Using voice commands rather than a keyboard empowers people with dyslexia to write without typing, people with physical disabilities to operate their machines by speaking and, interestingly, children too young to spell to dictate a written document.
With new distracted-driving laws coming into force, voice-command software is also growing as a fast, hands-free solution that needs no menu structure to navigate or security code to unlock.
Google says that, at present, 20 per cent of its mobile searches are initiated by voice command; as we become more comfortable with the technology and its capabilities improve, that number is expected to grow.
Competition in the marketplace is pushing the capabilities of the technology; there has been more progress over the past 20 months than in the first 20 years of voice recognition technology.
As Amazon, Apple, Google and IBM push one another to build software that understands speech as well as humans can, Microsoft has just claimed the lowest word error rate, at just 6.3 per cent.
This beat IBM's 6.9 per cent, set only a few months earlier, and - compared with the 43 per cent word error rate recorded in 1995 - shows how far we have come.
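For readers curious what those percentages measure: word error rate is conventionally the minimum number of word substitutions, deletions and insertions needed to turn the recogniser's output into the reference transcript, divided by the number of words in the reference. Here is a minimal sketch of that calculation (the function name and example sentences are illustrative, not from any particular toolkit):

```python
def word_error_rate(reference: str, hypothesis: str) -> float:
    """Word error rate: edit distance between word sequences,
    divided by the number of words in the reference."""
    ref, hyp = reference.split(), hypothesis.split()
    # dp[i][j] = edits needed to turn the first i reference words
    # into the first j hypothesis words
    dp = [[0] * (len(hyp) + 1) for _ in range(len(ref) + 1)]
    for i in range(len(ref) + 1):
        dp[i][0] = i                      # i deletions
    for j in range(len(hyp) + 1):
        dp[0][j] = j                      # j insertions
    for i in range(1, len(ref) + 1):
        for j in range(1, len(hyp) + 1):
            cost = 0 if ref[i - 1] == hyp[j - 1] else 1   # substitution
            dp[i][j] = min(dp[i - 1][j] + 1,              # deletion
                           dp[i][j - 1] + 1,              # insertion
                           dp[i - 1][j - 1] + cost)
    return dp[len(ref)][len(hyp)] / len(ref)

# One wrong word out of six reference words is a rate of about 0.167
print(word_error_rate("the cat sat on the mat",
                      "the cat sat on a mat"))
```

By this measure, a 6.3 per cent rate means roughly one word in sixteen is transcribed wrongly - close to the error rate of human transcribers.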
Our relationship with voice-activated systems is also changing thanks to software designed for human companionship.
Microsoft's Xiaoice is a voice-activated chatbot designed to come up with responses that keep a conversation going for longer, and Jibo is a social robot that can read interactive bedtime stories to children.
Voice technology is not without its challenges, though, as seen last week with Amazon's intelligent personal assistant, Alexa.
Because the software is always on and always listening, an Echo speaker overheard a six-year-old girl ask for a doll's house - and, because her mother had enabled voice purchasing on the device, promptly ordered a $230 doll's house online.
On a more serious note, US police investigating a murder were able to extract recordings from a voice-controlled speaker that had been playing at the suspect's home, raising questions about data privacy and whether a portable speaker could be considered a key witness to a murder.
As more of our devices become speech-powered, commonplace objects such as the remote control will soon be a thing of the past.
The next time you search for yours behind the couch, remember that this might be the last one you ever own!