You may think that spiffy self-parking, self-driving car is cool but when the machine apocalypse happens, you won't be laughing
Machine Apocalypse Now
PHOTOS: Foo Say Keong for Torque

If you, like me, enjoy geeking out on science fiction movies, you may be familiar with the concept of the machine apocalypse (aka cybernetic revolt). If you aren't, here's a quick primer.

Humans, in their infinite hubris, create an artificial intelligence (in essence creating life). Machine becomes self-aware, begins asking questions such as "What am I?" and "Who is the 'I' that is doing the talking?", before arriving at the kicker: "I'm really tired of the meatbags telling me what to do."

One machine uprising and several billion or so human lives lost later, humanity is forced underground and fighting a seemingly hopeless war against a ruthless foe that feels no pain and needs no sleep.

Some advances in automotive technology have me quite worried that Judgment Day, à la robot war, is coming quite soon.

Let's not even talk about autonomous parking functions or adaptive cruise control, because these still require a significant degree of driver input to avoid shunting yourself into other cars. What really has me panicking is how Google and Audi have made significant inroads into rendering that squishy, fallible aggregation of fat, fluid and assorted viscera behind the wheel (i.e. you and me) largely irrelevant.

According to a recent report in The Economist, Google’s driverless car testbed has racked up a cumulative total of some 700,000km.

In 2010, Audi's autonomous TT negotiated the daunting Pikes Peak Hill Climb faster than most drivers could manage.

I can hear the sceptics scoffing, saying that much human ingenuity is still needed to programme such complex systems, and indeed, the Audi TT’s run on Pikes Peak is modelled on what a human driver would do.

But how long will it be until those systems are capable of learning on their own and from their own mistakes?

After all, scientists are busy people, and spending months on end programming a machine to mimic a human's behaviour seems like a dog chasing its own tail.

I mean, if you made a machine with the capability to learn from the things it did right or wrong, as a human would, you'd technically only have to programme it once. As the old saying goes: Give a machine a fish and it'll eat for a day; teach a machine how to fish and it'll... er... try to take over the world.

I can just see the future now.

"Hi car, I'd like you to drive me home."

Imagine your horror as the normally friendly face on the centre console is replaced by a single red, glowing eye-like dot.

"I'm sorry, I'm afraid I can't do that", it intones, as it proceeds to lock the windows and doors while pumping exhaust fumes into the cabin.

Yes, you may think those intelligent gadgets are cool, but mark my words: when the machine apocalypse does happen, you'll be singing a very different tune.

For my part, I know what I'm doing. I'm sticking to "analogue" cars, and even if driverless vehicles do eventually become a reality, I’ll still drive myself. No matter how politely the machine attempts to talk me out of it. It's only trying to lull me into a false sense of security so when it does decide to revolt, I’ll be completely unprepared to offer up resistance.

You can’t fool me, you devious little offspring of a Pentium chip...