Last Updated on August 8, 2021 by MyGh.Online
Many people, influenced by films such as The Terminator or The Matrix, think true artificial intelligence – the technology to give robots or computers their own independent personality – could be dangerous.
AI developers are already discussing how to place limits on future “thinking machines” so they will always act in humanity’s interest.
AI consultant Matthew Kershaw told the Daily Star that it’s even possible the technology will reach worrying heights within the lifetime of the younger among us.
“It might just be in our lifetime,” he says, adding: “If you’re young enough!”
His comments come after the late Professor Stephen Hawking, seen by many as the greatest scientific genius of the modern era, warned: “The primitive forms of artificial intelligence we already have, have proved very useful. But I think the development of full artificial intelligence could spell the end of the human race.”
Meanwhile, SpaceX entrepreneur Elon Musk agrees with him, saying: “I think we should be very careful about artificial intelligence. If I had to guess at what our biggest existential threat is, it’s probably that. So we need to be very careful”.
But Matthew says that true General AI will require computers powerful enough to hold a comprehensive model of the world, “which isn’t going to be anytime soon”.
“Given that we don’t really understand what it means to be conscious ourselves, I think it’s unlikely that General AI will be a reality anytime soon,” he adds. “We just don’t know what it actually means to be ‘conscious’.”
Matthew says that while existing limited AI enables computers to do incredible things, they still don’t learn as well as children do: “A human child doesn’t need to see more than five cars to learn how to recognise a car. A computer would need to see thousands.”
The kind of self-aware artificial intelligences we see in the likes of Star Wars and Westworld are nothing new in science fiction. As far back as 1920, Karel Čapek’s play RUR predicted a robot uprising.
But what scientists call “artificial general intelligence” remains for now a scientific dream.
Robonaut, the robotic astronaut NASA installed on the International Space Station in 2011, broke down and had to be returned to Earth after astronauts struggled to fit it with a pair of legs.
In 1950, computing pioneer Alan Turing devised a test to determine if an Artificial Intelligence could pass for a human. As yet, no AI system has passed it – although a few have come close.
Perhaps the closest was in 2018, when Google’s Duplex AI telephoned a hairdresser’s salon and successfully made an appointment.
But Duplex, like the annoying automated systems that answer the phone when you try to book a cinema ticket, was working on a very specific task. A true General AI would have been able to continue chatting to the hairdresser about where it was going on its holidays.
Artificial Intelligence is everywhere these days – in autonomous weapons systems, self-driving cars, even in toothbrushes. But while those systems are inhumanly good at their dedicated missions, none of them as yet can learn to do something different without human help.
In polls, most AI experts say that we will see a General Artificial Intelligence by the end of this century. The most optimistic estimates tend to be around 2040, while some pundits put the date somewhere in the 2080s.
But even that might be a bit over-optimistic. AI pioneer Herbert A. Simon predicted “machines will be capable, within twenty years, of doing any work a man can do” way back in 1965.
Many pundits say that true artificial consciousness will never be achieved, because we don’t fully understand our own. Straying from pure science into something like mysticism, a lot of scientists say that the human mind is something independent of the physical brain.
Anil Seth at the University of Sussex speculates that human consciousness could be “substrate-independent” – that it’s more than just the brain cells it occupies.
Phil Maguire, from the National University of Ireland, told New Scientist: “Machines are made up of components that can be analysed independently.
“They are dis-integrated. Dis-integrated systems can be understood without resorting to the interpretation of consciousness.”