Scared of AI? You really shouldn’t be…

As a young teen I remember watching Colossus: The Forbin Project, not long after I’d seen Kubrick’s 2001. Both films featured computers with ideas of their own about how things should be done. HAL 9000 got its comeuppance when astronaut Dave Bowman yanked its memory modules, payback for killing all four of his crew-mates. More recently, the Terminator series has added its own spin: SkyNet will bring Judgement Day down on our heads as soon as it gains sentience.

Let’s face it, AIs have got a bad reputation, and it just got worse a few days ago when a driverless Uber car hit and fatally injured a woman. This isn’t a real-life version of Stephen King’s Christine, just a very unpleasant accident. Apparently, if you walk in front of a car doing 38 mph from the median (translation: central reservation) when it’s dark, the laws of physics aren’t magically suspended.

It was quite likely that even the best AI, equipped with state-of-the-art sensors, would sooner or later hit an edge case, as this one did, and there will be others. The engineers who develop these systems learn from each incident to improve overall safety, but it’s foolish to expect 100% safety.

Should that accident mean that we abandon autonomous vehicles? Definitely not. The technology has the potential to eliminate just about every driving death caused by human error. That’s hundreds of thousands of lives saved each year across the globe.

AI is not in itself a danger. Unlike Colossus or HAL 9000, current AIs do not have the kind of general intelligence and creativity needed to conquer the world. Their intelligence is just puzzle solving, particularly finding patterns. They are very good at that and should immediately apply to join Mensa.
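
To give a flavour of what “finding patterns” actually means, here’s a deliberately trivial sketch in Python. Everything in it, the labels, the numbers, the nearest-centroid trick, is invented for illustration; real systems use vastly bigger models and data, but the principle is the same: match new inputs against patterns learned from old ones.

```python
# Toy illustration of pattern finding: a nearest-centroid classifier.
# All data and labels here are made up purely for this example.
from statistics import mean

# Hypothetical training examples: (feature_1, feature_2) points per label.
samples = {
    "cat": [(0.90, 0.10), (0.80, 0.20), (0.95, 0.15)],
    "dog": [(0.10, 0.90), (0.20, 0.80), (0.15, 0.85)],
}

# "Learning" here is just averaging: find the centre of each labelled cluster.
centroids = {
    label: (mean(x for x, _ in pts), mean(y for _, y in pts))
    for label, pts in samples.items()
}

def classify(point):
    """Label a new point by whichever centroid it lies closest to."""
    return min(
        centroids,
        key=lambda lbl: (point[0] - centroids[lbl][0]) ** 2
                      + (point[1] - centroids[lbl][1]) ** 2,
    )

print(classify((0.85, 0.12)))  # -> "cat": pattern matching, not world domination
```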

AIs can certainly be used to deliberately cause injury or death. A relatively modest $200,000 gets you a gun-toting sentry bot. Match that, Apple! Let’s hope they’re less buggy than the ED-209 in RoboCop, with its “You have 20 seconds to comply”…

There’s a philosophical question about autonomous cars having to make a moral choice: crash and kill the occupants, or run over pedestrians or hit other vehicles? Well, you could argue Uber has settled that one, but in reality it’s a moot question; the sensors in an autonomous car track every object, moving or stationary, that might be a threat, and the car takes action to avoid a collision in the first place.
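
For what it’s worth, the logic behind that last sentence can be sketched in a few lines. This is a hugely simplified, hypothetical illustration, the object fields, the two-second threshold and the braking rule are my own assumptions, not any manufacturer’s actual code, but it shows why the “trolley problem” rarely arises: the car is designed to brake long before any dilemma exists.

```python
# Hypothetical sketch of collision avoidance: estimate time-to-collision for
# every tracked object and brake well before any "moral dilemma" could arise.
# The class, fields and 2-second threshold are illustrative assumptions only.
from dataclasses import dataclass

@dataclass
class TrackedObject:
    distance_m: float        # metres ahead along the car's path
    closing_speed_ms: float  # relative speed towards the car in m/s (<= 0 means moving away)

def should_brake(objects, ttc_threshold_s=2.0):
    """Return True if any object's estimated time-to-collision is below the threshold."""
    for obj in objects:
        if obj.closing_speed_ms > 0:
            time_to_collision = obj.distance_m / obj.closing_speed_ms
            if time_to_collision < ttc_threshold_s:
                return True
    return False

# A pedestrian 25 m ahead with the car closing at 17 m/s (roughly 38 mph):
print(should_brake([TrackedObject(distance_m=25.0, closing_speed_ms=17.0)]))  # True -> brake
```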

AI does not make moral decisions or judgements. No doubt human ingenuity will find a way to bypass security, but I’d be more worried about autonomous vehicles being used to carry bombs or suicide bombers, or being hacked to kidnap somebody or divert the vehicle over the nearest cliff…
