UK prime minister Sunak stakes claim to AI fame

UK prime minister Rishi Sunak delivers his London Tech Week speech.

  • The UK government wants Britain to become the global centre for international AI regulation
  • Prime minister Rishi Sunak discerns a silver lining for our “mid-sized country” as a trusted locus for AI control 
  • He presented the UK’s AI credentials during an opening keynote speech at London Tech Week
  • The first step towards nirvana? A global AI safety summit in Britain this autumn
  • Meanwhile, EU plans for the regulation of AI will become a reality in the second half of next year

UK prime minister Rishi Sunak, seemingly increasingly desperate for some positive news to present to his country and the world, is on a mission to position the UK as the global centre of trusted and informed artificial intelligence (AI) regulatory development and innovative R&D. 

He opened London Tech Week on Monday morning with a typically ambitious pitch: The UK is “an island of innovation,” he stated in his speech. “But at a moment like this, when the tectonic plates of technology are shifting – not just in AI, but in quantum, synthetic biology, semiconductors, and much more – we cannot rest, satisfied with where we stand. We must act, and act quickly, if we want not only to retain our position as one of the world’s tech capitals but to go even further and make this the best country in the world to start, grow and invest in tech businesses. That is my goal. And I feel a sense of urgency and responsibility to make sure that we seize it, because one of my five priorities is to grow our economy, and the more we innovate, the more we grow,” blustered the PM, who then went on to somewhat spoil his pitch by claiming to be a great leader.

He continued: “If our goal is to make this country the best place in the world for tech, AI is surely one of the greatest opportunities before us. The possibilities are extraordinary. But we must – and we will – do it safely. I know people are concerned. The very pioneers of AI are warning us about the ways these technologies could undermine our values and freedoms through to the most extreme risks of all. And that’s why leading on AI also means leading on AI safety. So, we’re building a new partnership between our vibrant academia, brilliant AI companies, and a government that gets it.”

Well, make of that what you will, but Sunak is definitely banging the AI drum hard, and not just at home. 

Last week, he made a quick two-day trip to Washington DC to commune briefly with US President Joe Biden. It was Sunak’s first visit to the US as prime minister and he would have liked to stay for more than just 48 hours or so, but some pressing problems at home necessitated a quick return to the UK, where his political troubles are growing by the day.

In what 10 Downing Street called “a heavily business-focused trip”, Sunak made an effort to burnish Blighty’s credentials as the ideal (and safe) global locus in which, and from which, to define and regulate international AI. He claimed that the UK’s experience and abilities in artificial intelligence are so well known and remarkable that the rest of the world will be happy to kowtow to our “mid-sized country” (which is, in fact, a tad smaller than the US state of Oregon).

After the disaster that has been, and continues to be, Brexit, and all the Boris Johnson/Donald Trump bluster about the quick and easy passage of a US/UK free trade agreement that, entirely predictably, was soon shown to be no more than wishful thinking and ill-founded fantasy, Sunak finds himself in the unenviable position of trying to tie up a series of small trade deals and somehow convince the world that they are greater than the sum of their parts and that Britain still has a leading role to play on the world stage.

The prime minister was also pitching the US president for his support (and perhaps even his attendance) in a mooted “global summit” on AI safety that may be held in the UK later in the year. That planned gathering was announced just one day before Sunak travelled to the US last week and the rationale behind it is that attendees from big AI technology companies and the governments of “like-minded countries” will debate how to regulate AI by quietly closing the stable door long after the ChatGPT stallion has legged it over the horizon and is procreating lustily as it gallops along its merry way.

A Downing Street mouthpiece told assembled journalists that the summit will be for “likeminded countries who share the recognition that AI presents significant opportunities, but realise we need to make sure the right guardrails are in place.” Meanwhile, a noisy chorus of Jeremiahs says that AI, globally, is already out of control, continues to grow at an incredible rate and could, within a matter of a few years, destroy humanity.

Europe will have its first operational AI regulations and controls in place by mid-2024

However, the sackcloth-and-ashes doom-mongers apart, governments in many parts of the world have quite quickly come to realise and accept that regulation of AI is both necessary and overdue. Hence the sudden emphasis on the development of new policies and regulations, which will have to be drawn up with strong input from AI companies, academics, scientists, psychologists and lawyers. Furthermore, the regulation, and concomitant safety, of AI will be intimately intertwined with a much broader and deeper regulation of algorithms: the sequences of logical, coded steps, in software or hardware, that are followed in the same way every time to solve a problem or perform a computation. This will be a particularly difficult process to undertake, given that companies spend a lot of time, effort, resources and cash on developing their proprietary algorithms and will fight to keep them secret.

Over recent years, as AI has risen up the technology agenda, nations and economic/political blocs, such as the European Union, and bodies, such as the IEEE, have written guidelines for AI ethics aimed at providing a framework on which to build a superstructure of control over the technology. The general tenor of these publications is that the companies and organisations that deploy the technology will have to play a major part in the creation of “trustworthy AI”, be accountable for that trustworthiness, and be subject to regulation, policing and sanction if their AI is deemed untrustworthy. 

The European Commission (EC) is developing its own legal framework on AI that will, in due course, be codified for adoption by every member state. The whole AI edifice will be an amalgam of the EU’s co-ordinated plan for AI and its AI regulatory framework that should (or might) ensure that Europeans can trust AI and be (moderately) sure that it is safe. 

Thus, the intent is to:

  • address risks specifically created by AI applications;
  • come up with a list of perceived high-risk applications;
  • set clear requirements for AI systems used in high-risk applications;
  • define specific obligations for AI users and providers of high-risk applications;
  • propose a conformity assessment before an AI system is put into service or placed on the market;
  • propose enforcement after such an AI system is placed on the market; and
  • propose a governance structure at European and national level.

This risk-based approach comprises four levels: unacceptable risk, high risk, limited risk, and minimal or no risk. Seemingly, all AI systems designated as being a clear threat to the safety, livelihoods and rights of people will be banned, although identifying and actually prohibiting them could well be problematic, especially once they are embedded in a global system.
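
To make that tiering concrete, here’s a minimal sketch, in Python, of how the four levels might be modelled in software. The tier names come from the EC’s framework, but the example use cases and the lookup-table approach are purely illustrative assumptions, not anything specified in the draft regulation.

```python
from enum import Enum


class RiskTier(Enum):
    """The four risk levels in the EC's proposed risk-based approach."""
    UNACCEPTABLE = 4  # banned outright: a clear threat to safety, livelihoods or rights
    HIGH = 3          # strict obligations before market entry
    LIMITED = 2       # lighter, transparency-style obligations
    MINIMAL = 1       # little or no additional regulation


# Illustrative mapping only: the real framework defines these categories
# in legal text, not as a lookup table.
EXAMPLE_TIERS = {
    "social scoring of citizens": RiskTier.UNACCEPTABLE,
    "robot-assisted surgery": RiskTier.HIGH,
    "customer-service chatbot": RiskTier.LIMITED,
    "spam filtering": RiskTier.MINIMAL,
}


def tier_for(use_case: str) -> RiskTier:
    """Look up the tier for a known use case, defaulting to minimal risk."""
    return EXAMPLE_TIERS.get(use_case, RiskTier.MINIMAL)
```

Defaulting to minimal risk is purely a convenience for the sketch; under the actual framework the burden runs the other way, with providers having to establish which obligations apply before a system goes anywhere near the market.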

That, however, is neither reason nor excuse for not attempting to stop things before they get out of hand. Thus, the EC has identified high-risk AI technology to include:

  • critical infrastructures (such as transport);
  • educational or vocational training;
  • the safety components of products (such as AI as applied to robot-assisted surgery);
  • employment, the management of workers and access to self-employment;
  • essential private and public services (such as credit scoring);
  • law enforcement that could interfere with a citizen’s fundamental rights;
  • migration, asylum and border control management (including the verification of the authenticity of travel documents); and
  • the administration of justice and democratic processes.

That’s quite a list, and one that’s likely to grow as AI becomes ever more ubiquitous. These and other designated high-risk AI systems will be subject to strict obligations before they can be introduced and put on the market. What will happen if any are made available and used without having gone through the necessary lengthy processes to ensure their safety and trustworthiness is not yet clear, but it is a worrying possibility.
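
To illustrate how such a pre-market gate might behave, the sketch below reuses the RiskTier enum from the earlier snippet; AISystem, its fields and place_on_market are hypothetical stand-ins for the legal process, not anything drawn from the EC’s texts. It bans unacceptable-risk systems outright and refuses to place a high-risk system on the market until an assessment has been passed.

```python
from dataclasses import dataclass


@dataclass
class AISystem:
    """Hypothetical stand-in for a system awaiting market approval."""
    name: str
    tier: RiskTier                           # from the earlier sketch
    conformity_assessment_passed: bool = False


def place_on_market(system: AISystem) -> str:
    """Illustrative pre-market gate for the EC's risk-based approach."""
    if system.tier is RiskTier.UNACCEPTABLE:
        raise PermissionError(f"{system.name}: unacceptable-risk systems are banned")
    if system.tier is RiskTier.HIGH and not system.conformity_assessment_passed:
        raise PermissionError(
            f"{system.name}: a conformity assessment is required before market entry"
        )
    return f"{system.name}: placed on the market"
```

Under this toy model, a credit-scoring system would be stopped at the second check until its assessment had been passed, while a spam filter would sail straight through.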

The current European timetable has it that some form of regulation of AI safety could be in operation by “the second half of 2024”, i.e. a year or so from now, when the first “conformity assessments” will be conducted. We can be sure that, thereafter, this is going to be a permanent cycle of action and assessment until either the Terminator and his cohorts kill us all or we do manage to take complete control of AI technology. Now might be the time to get the best odds on an each-way wager.

- Martyn Warwick, Editor in Chief, TelecomTV