Deep inside, most IT people know their services will no longer be needed, sooner rather than later. Take developers, for instance.

Writing programs, cutting code, hacking, call it what you like, used to be quite an effort that required lots of time and expertise.

Time is a precious commodity, and experience and knowledge are rare. Over the years, coding has become less of an art form, with more and more smart tools providing helpful coding suggestions from existing libraries to speed things up.

That still requires a developer to look things over and decide whether or not to use the suggested code snippets, and that can be a slow process.

Clearly, the developer is in the way of speed and efficiency. Get rid of developers then?

It's almost on the cards already: researchers from Cambridge University and Microsoft have taken the first step towards an artificial intelligence system that uses machine learning to write its own programs.

"A dream of artificial intelligence is to build systems that can write computer programs," the researchers start the introduction to their paper about the impressive DeepCoder work.

Maybe, but for programmers, it probably sounds like a Terminator-style nightmare and the first steps towards Skynet becoming sentient.

DeepCoder learns from source code what works and what doesn't, and the researchers want to make it faster and even more capable.

Hairs stand up on the back of your neck yet?

In fairness, DeepCoder is still very primitive.

It can only complete very simple programming tasks, about five lines of code long.
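To give a sense of what those simple tasks look like: the researchers' system composes short programs from a small domain-specific language of list operations, guiding a search with predictions learned from code. Here is a minimal, hypothetical Python sketch of that style of search-based program synthesis; the operation names and the tiny DSL below are invented for illustration, and DeepCoder's actual DSL, neural guidance, and search are considerably more sophisticated.

```python
from itertools import product

# A toy DSL of list-to-list operations, loosely in the spirit of
# DeepCoder's domain-specific language (the real DSL is richer).
OPS = {
    "sort": sorted,
    "reverse": lambda xs: xs[::-1],
    "double": lambda xs: [x * 2 for x in xs],
    "drop_neg": lambda xs: [x for x in xs if x >= 0],
    "take2": lambda xs: xs[:2],
}

def run(program, xs):
    """Apply a sequence of named operations to an input list."""
    for name in program:
        xs = OPS[name](xs)
    return xs

def synthesize(examples, max_len=3):
    """Brute-force search for an operation sequence consistent with
    all input-output examples. DeepCoder's key idea is to train a
    neural network to predict which operations are likely to appear,
    pruning exactly this kind of search."""
    for length in range(1, max_len + 1):
        for program in product(OPS, repeat=length):
            if all(run(program, inp) == out for inp, out in examples):
                return program
    return None

# Two input-output examples specify the desired behaviour:
# keep the non-negative numbers, sorted.
examples = [([3, -1, 2], [2, 3]), ([5, 4, -7], [4, 5])]
prog = synthesize(examples)
```

The search returns a short program (here, a composition of two operations) that reproduces every example, which is roughly the scale of task the paper reports: a handful of composed operations, specified only by examples of what the output should be.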

It could definitely have great uses: if it removes the finicky complexity of programming that requires people to understand code that looks like chicken scratchings, a system like DeepCoder could let them quickly create programs that do what they want.

AI won't go away; the promise it holds is far too great, and it applies to many more things than automagically writing computer programs.

Now, it might come as a surprise that AI wasn't conceived by IT geeks.

At the recent Webstock conference in Wellington, anthropologist Dr Genevieve Bell pointed out that Burrhus Frederic Skinner was one of the formative thinkers behind AI.

Yes, that's the behaviourist B F Skinner, the psychologist who discounted people's free will and biology in favour of negative and positive reinforcement, based on observations drawn from animal experiments.

It's easy to see why the principles of Skinner and other behaviourists would appeal to AI developers and engineers: that model is less complex to understand and fits better with binary computers.

That might not be the right way to approach AI though.

Skinner's greatest critic, Noam Chomsky, certainly thinks AI is heading in the wrong direction.

Chomsky's criticism isn't of AI per se, but of how it is currently being built, which he argues is unlikely to help us properly understand intelligence and thinking.

That by itself is a disconcerting thought: complex AI systems could come to be regarded as intelligent in the same way people are, when they're anything but.

Given that AI will be used to extend and enhance - we hope - our cognitive capabilities and even take over some tasks that people do currently, it's vital that we don't create a monster.

The direction in which AI and self-learning systems go as they develop and start thinking for themselves is something we can't afford to get wrong, but there's no guarantee we won't.

Terminators won't be visiting us on eradication missions, as it appears time travel is only possible into the future, but that's no excuse to slack off and not ensure that AI serves humanity instead of the opposite.