US and UK sign advanced AI pact

  • The US and UK will collaborate on the development of safety tests for advanced AI
  • Agreement comes five months after an international AI Safety Summit was held in the UK…
  • And one month after a report commissioned by the US State Department warned AI poses “a catastrophic risk” to security
  • Of greatest concern is artificial general intelligence (AGI) technology and its potential to exceed the capabilities of mankind
  • Self-regulation by AI companies and organisations will end. But will it be soon enough? History suggests probably not

The US and UK have signed a Memorandum of Understanding (MoU) under the terms of which the two partners will work together on the development of safety tests for the most advanced AI large language models (LLMs) of the type developed by OpenAI, Google and many others. The deal, which came into force on 1 April (the day it was formally agreed), calls for the AI safety institutes in the two countries to synchronise their work and collaborate “seamlessly” on research and on the development of suites of evaluations, a framework through which to test advanced models and provide guidance on what constitutes AI safety.

The MoU was signed in Washington DC by the US Secretary of Commerce, Gina Raimondo, and Michelle Donelan, the UK’s Secretary of State for Science, Innovation and Technology. The agreement is evidence that something concrete has been achieved following the discussions that took place last autumn at the AI Safety Summit hosted by the UK and held at Bletchley Park, the home of Britain’s secret, and highly successful, codebreaking and computer-development efforts during World War II – efforts so successful that the work of the Bletchley Park team has been internationally recognised as having shortened the war by at least two years.

At the AI Safety Summit, the UK and the US, together with other countries and blocs including China, the EU, France, Germany and India, signed the Bletchley Declaration, under the terms of which the signatories committed to work together on formulating agreed AI safety standards. However, they were further galvanised into action by the release in March of a disturbing report, commissioned by the US State Department, which warns that AI poses a “catastrophic” risk to US national security and says time is running out for the federal government to avert disaster caused by AI technologies.

The report, from Gladstone AI, was commissioned by the State Department back in October 2022, just a month before generative AI (GenAI) sensation ChatGPT emerged to astonish the world. Its brief was to assess the proliferation and security risks attendant on “weaponised and misaligned AI.” Whilst the report was in preparation, the emergent AI environment mutated into a worrying, runaway, world-embracing phenomenon. The report openly and unequivocally states that, under some circumstances, advanced AI could “pose an extinction-level threat to the human species.”

Following publication of the report, in an interview with CNN, the CEO and co-founder of Gladstone AI, Jeremie Harris, said: “AI is already an economically transformative technology. It could allow us to cure diseases, make scientific discoveries, and overcome challenges we once thought were insurmountable, but it could also bring serious risks, including catastrophic risks, that we need to be aware of.” 

He added: “A growing body of evidence, including empirical research and analysis published in the world’s top AI conferences, suggests that above a certain threshold of capability, AIs could potentially put themselves beyond human control. There is a clear and urgent need to intervene now.”

Hence the rather belated but welcome moves by national governments to attempt to legislate to control and regulate AI, or as the Biden administration puts it, “to seize the promise and manage the risks of artificial intelligence”. And that is what the new concordat between the US and the UK is all about. Another section of the Gladstone AI report cites “private concerns within AI labs that at some point they could ‘lose control’ of the very systems they’re developing”, with “potentially devastating consequences to global security.”


Artificial general intelligence (AGI) seen as having great potential but also carrying the greatest risk

Under the new MoU, the UK and the US will examine various scenarios, including the possible introduction of “emergency” AI regulations and limits on the amount of computing power that could be used to train AI models. Of particular concern is the emergence and proliferation of artificial general intelligence (AGI) technology. As TelecomTV explained in a previous article, AGI (also commonly called ‘deep AI’ or ‘strong AI’) is based on a framework called Theory of Mind AI, which underpins research into training machines to learn human behaviour and, through that, to understand the fundamental aspects of consciousness.

The promise (and the rapidly growing concern) is that AGI will be able to plan, acquire cognitive abilities, make judgments, handle uncertain situations and integrate prior knowledge into its decision-making. Because AGI is such an unknown factor, yet is known to be evolving very quickly, it is possible that, if left untrammelled by proper controls, it could soon achieve a superhuman ability to learn. The pessimists say this could happen as soon as 2028, while the optimists think such a possibility is many years in the future and that there is time to control it. Forewarned is forearmed.

Across the English Channel, the European Union (EU), of which the UK used to be a member, is also moving to regulate AI systems. In March this year, the European Parliament passed the world’s first widely applicable AI law, the Artificial Intelligence Act, to promote the “uptake of human-centric and trustworthy AI, while ensuring a high level of protection for health, safety, fundamental rights, and environmental protection against harmful effects of artificial intelligence systems.”

Meanwhile, under the terms of the newly signed MoU, the UK and US AI Safety Institutes are to perform joint testing exercises on a publicly accessible AI model. Furthermore, there will be exchanges of personnel between the two Institutes.

Commenting on the importance of the agreement, Raimondo stated: “AI is the defining technology of our generation. This partnership is going to accelerate both of our Institutes’ work across the full spectrum of risks, whether to our national security or to our broader society. Our partnership makes clear that we aren’t running away from these concerns – we're running at them. Because of our collaboration, our Institutes will gain a better understanding of AI systems, conduct more robust evaluations, and issue more rigorous guidance. By working together, we are furthering the long-lasting special relationship between the U.S. and UK and laying the groundwork to ensure that we’re keeping AI safe both now and in the future.” 

Let’s hope that is what will happen. Currently, in the UK and the US, most AI companies regulate themselves – if regulate is the right word – and it’s time that higher authorities took over before it’s too late.
