When the creators of a seminal new technology call for its development to be paused, there’s good reason to sit up and listen.

In July 2023 some of the biggest names in artificial intelligence (AI) called for a pause on development so that an international consensus could be reached on how the technology should be ethically developed and used.

While recognising that AI has the potential to deliver significant benefits, the open letter also noted the risks: “…AI can also force human moderators to witness horrible content, disadvantage the disenfranchised, amplify data privacy and safety concerns, steal intellectual property not created for the process, and perpetuate biases that cause harm.”

This raises an existential question about AI – should we trust it?

Where does the data come from?

My starting point for whether to trust and rely on any data is always provenance – where does the data come from? After that, it’s important to understand the quality of that data.

The risk with AI is that it can be confidently wrong. More than that, because of the way many AI systems are trained, a system that lacks a nuanced understanding of your context may well bend the facts to suit its answer.

Google gives us a perfect example, having recently warned its own employees to double-check answers from its AI chatbot, Bard, against a regular Google search to make sure they are accurate. And earlier this year Microsoft Bing’s ChatGPT-powered chatbot refused to give a Twitter user any showtimes for Avatar: The Way of Water, insisting the film hadn’t been released (it had) and that the year was still 2022, before telling the user they were “wrong, confused and rude”.

There are many other, more concerning, examples of bias being built into AI systems because of the source data they draw on. If what goes in isn’t good quality, you can’t expect what comes out to be.

Yet AI isn’t the only example of source data not being quite what’s needed. One of the great scientific breakthroughs of the 21st century has been the decoding of the human genome, information that has since underpinned a huge amount of medical research. However, it was only recently revealed that this decoding was based on just 20 people from the same area of North America, a sample that is by no means representative of the world’s population.

Technology ahead of regulation

What we’re seeing with AI is technology developing ahead of regulation. Some countries had AI regulation on the table as far back as 2017, but many more are now having to run to catch up, given the speed at which AI is being developed and commercialised and the growing concerns about the risks of using it. We’re forced to debate what’s happening as it happens, rather than taking a considered route to a framework for how AI should be developed, used and regulated. Having such a framework in place already would have gone a long way towards helping people trust AI.

The EU has moved quickly to get legislation underway. It published its proposed regulatory framework in April 2021 and this summer began negotiating the details of its AI Act with member states.

The Act proposes different rules for different risk levels: systems considered to pose an unacceptable risk, such as those that could be used to manipulate people’s behaviour, would be banned outright; high-risk systems, including those that handle biometric data or are used in education, training and employment, would face stricter requirements; and low-risk systems would face lighter ones. There will also be specific rules for generative AI, such as ChatGPT, which will have to meet transparency requirements.

In the UK, the prime minister will host a global summit on AI regulation in the autumn of 2023. But despite his stated ambition for the UK to become the “geographical home” of new safety rules, we are already behind other jurisdictions, such as the EU.