EDITOR'S CHOICE 17.07.20

Over half of UK adults call for more regulation to make AI safer

Does AI need to be made safer with regulation? Image: Tatiana Shepeleva / Shutterstock.com

Fountech.ai recently commissioned a survey into how people really feel about artificial intelligence (AI). Amy Wallington reviews the results.

AI isn’t new. In fact, the term was originally coined back in 1956 at a conference at Dartmouth College in New Hampshire, attended by MIT cognitive scientist Marvin Minsky and others who were said to be optimistic about the future of AI. Minsky believed that the human mind was fundamentally no different to a computer and spent his life working on making machines intelligent. 

However, the idea of AI and inanimate objects becoming intelligent has been traced back even further to ancient Greek times with myths about Talos, described as a giant bronze man that Hephaestus, the Greek god of invention and blacksmithing, built. 

Naturally, humans fear the unknown, but if AI has been around for so long, why are people so fearful of it? 

AI has been advancing rapidly since the beginning of the century and has gained a lot of attention from the media. Headlines like “AI is stealing our jobs” and “Machines will take over the world” have sparked fear among the public. 

But the issue lies much deeper than the fear that films such as The Terminator will come true and machines will overtake humanity. Rather, it seems people don’t trust AI technology and would much rather humans and AI work together if the technology is to exist at all. 

The 1984 sci-fi film, The Terminator, shows AI in a developed form that starts taking over the world. Image: RichartPhotos / Shutterstock.com

Fountech.ai is an organisation that describes itself as “an international think tank, attempting to push away barriers that are preventing AI’s progress”. It commissioned the independent survey of 2,000 UK adults to find out more about the public’s attitudes towards the current state of AI development. The results told a story of trust issues, fears and scepticism around the technology.

Most people surveyed (64%) want to see more regulation introduced so that the technology is safer to use and does not pose threats to society. Perhaps unsurprisingly, those aged over 55 appear more sceptical of AI, with 73% keen to see additional guidelines introduced to improve safety standards. This is in comparison to just over half (53%) of those aged between 18 and 34 who held this view. 

It is not surprising that the older generation are more sceptical, having not been brought up surrounded by technology as many young people are today. However, the survey shows that even the younger generation believe safety is an issue with AI technology. 

When it comes to AI in the home, security is key: safer AI would reassure homeowners that their personalised choices, and the algorithms behind them, are protected. Over the last few years, smart speakers and voice assistants have dominated the home technology market, but there are major concerns about how safe these devices really are. 

Media headlines have revealed that these devices do much more than you ask of them, often listening in to private conversations without being triggered, as well as experiencing false triggers. Amazon officials have admitted to listening in to people’s homes through the devices, and even to hearing recordings of abusive acts. Yet the security concerns the media report are, in fact, caused by human interference: it is the companies themselves that have chosen to listen in on conversations, causing much of the controversy around security. 

Smart speakers and voice assistants have dominated the home technology market in recent years. Image: Daisy Daisy / Shutterstock.com

What’s more, people have cameras in their houses that can be hacked without the homeowner knowing anything about it. So, it’s no wonder people have trust issues with AI technology when it comes to the home. 

In many critical environments where human error can cause catastrophe, AI technology is being used to eliminate such risks. But the survey found that people still do not trust machines not to make mistakes, with 61% of respondents saying that AI should not make any mistakes when undertaking analysis or making decisions. Furthermore, 69% think that a human being should always be monitoring and checking the decisions made by AI – which would at least preserve some of the jobs AI is said to be taking!

When questioned about the chances of AI making a miscalculation, 45% of UK adults believe it is harder to forgive mistakes made by machines than human mistakes, a figure that is more or less consistent across all demographics. 

But who takes accountability when a machine gets it wrong? Seventy-two per cent of people believe that companies that develop AI should be held responsible for any mistakes the technology makes. In another generational split, 81% of those aged over 55 held this view, while only 60% of millennials agreed. 

Machine learning can help eliminate human error in critical areas. But the public aren't convinced. Image: Phonloamai Photo / Shutterstock.com

Do we expect too much from artificial intelligence? Should there be more regulation around the technology? AI can be really useful in many areas of life, especially in critical situations, but are we becoming too reliant on it?

Nikolas Kairinos, founder of Fountech.ai said: “We are increasingly relying on AI solutions to power decision making, whether that is improving the speed and accuracy of medical diagnoses or improving road safety through autonomous vehicles. As a non-living entity, people naturally expect AI to function faultlessly, and the results of this research speak for themselves: huge numbers of people want to see enhanced regulation and greater accountability from AI companies.

“It is reasonable for people to harbour concerns about systems that can operate entirely outside human control. AI, like any other modern technology, must be regulated to manage risks and ensure stringent safety standards. That said, the approach to regulation should be a delicate balancing act.”

Kairinos concluded: “AI must be allowed room to make mistakes and learn from them; it is the only way that this technology will reach new levels of perfection. While lawmakers may need to refine responsibility for AI’s actions as the technology advances, over-regulating AI risks impeding the potential for innovation with AI systems that promise to transform our lives for the better.”