UK’s fractured approach to AI regulation looks risky
Mar 31, 2023
- As the AI temperature rises, the UK adopts a piecemeal approach to regulation
- The UK’s ‘world-leading’ approach to regulating AI involves not having a regulator
- Multiple existing bodies will all pitch in
- The approach raises more questions than it answers, says expert
The fever pitch around artificial intelligence (AI) is rising following the publication this week of an open letter signed by the likes of Elon Musk and Apple co-founder Steve Wozniak that repeated the warning that “AI systems with human-competitive intelligence can pose profound risks to society and humanity” and called on “all AI labs to immediately pause for at least six months the training of AI systems more powerful than GPT-4,” which was released earlier this month. The letter’s signatories are worried about recent developments, as “recent months have seen AI labs locked in an out-of-control race to develop and deploy ever more powerful digital minds that no one – not even their creators – can understand, predict, or reliably control.”
Such warnings are putting pressure on national regulators in different parts of the world – many of which have been working on AI legislation and regulations for years – to articulate plans to keep AI on a tight rein. In the UK, though, the government's response has been, shall we say, a tad confused and confusing, but certainly not short of hype (as usual, phrases such as “world leading” are bandied about without any evidence to back them up).
High-tech fanboy Rishi Sunak, the UK’s latest iteration of a prime minister, has, somewhat confusingly, identified AI as a “technology of tomorrow”, even though, according to the UK’s Department for Science, Innovation and Technology (DSIT), the AI sector already employs about 50,000 people in the UK and contributed £3.7bn (US$4.6bn) to the national economy in 2022 alone.
Those with greater experience and less marketing spin say AI is here now, is potentially dangerous, and is in real need of urgent regulation before things go seriously awry.
That might be a while in coming in the UK, as the latest response to the challenges (and, of course, opportunities) of artificial intelligence is a new 91-page whitepaper, A pro-innovation approach to AI regulation. The paper has been published by DSIT, which claims it lays out a “world-leading” approach to regulating AI and is “part of a new national blueprint for our world-class regulators to drive responsible innovation and maintain public trust in this revolutionary technology.”
The paper opens with a foreword from secretary of state for science, innovation and technology Michelle Donelan, pointing out that the UK government has been pumping funds into AI’s development in recent times, including £110m for an “AI Tech Missions Fund, £900m to establish a new AI Research Resource and to develop an exascale supercomputer capable of running large AI models – backed up by our new £8m AI Global Talent Network and £117m of existing funding to create hundreds of new PhDs for AI researchers.”
But with AI-enabled developments, such as ChatGPT, starting to affect businesses and society more each day, the latest government largesse appears less than appropriate. The AI regulation whitepaper is accompanied by the rather measly sum of £2m to “fund a new sandbox, a trial environment where businesses can test how regulation could be applied to AI products and services, to support innovators bringing new ideas to market without being blocked by rulebook barriers.”
That won’t go too far, and to put it into perspective, it’s about the going rate for four or five half-hour after-dinner speeches by former prime minister Boris Johnson.
The paper also outlines something of a fudged approach to AI regulation, though the UK government sees it differently, of course. “The government will avoid heavy-handed legislation which could stifle innovation and take an adaptable approach to regulating AI. Instead of giving responsibility for AI governance to a new single regulator, the government will empower existing regulators – such as the Health and Safety Executive, Equality and Human Rights Commission and Competition and Markets Authority – to come up with tailored, context-specific approaches that suit the way AI is actually being used in their sectors,” it notes.
What this will mean in practice is that AI, one of the most important technological developments of our time, will not be overseen, regulated and policed by a single, powerful, overarching AI regulator able to take advice in a streamlined way from official bodies with experience in and knowledge of their various spheres. Instead, existing regulators, such as the Competition and Markets Authority, the Equality and Human Rights Commission and the Health and Safety Executive, will adopt their own approaches within the terms of existing legislation pertaining to their particular sectors, and they will not be granted new powers.
And this is only the beginning of the process: DSIT is now running a consultation on the whitepaper and taking feedback from those in the AI sector until 21 June, after which it will analyse that feedback before taking its next steps. Before you know it, we’ll be welcoming in 2024.
The DSIT team should brace itself for some less than complimentary responses. Michael Birtwistle, associate director of AI law and regulation at the Ada Lovelace Institute, an independent research body, believes the proposed strategy will result in potentially dangerous gaps in the government’s approach, and that it is “underpowered relative to the urgency and scale of the challenge.”
He also warned that the proposals in the whitepaper “will lack any statutory footing. This means no new legal obligations on regulators, developers or users of AI systems, with the prospect of only a minimal duty on regulators in future. The UK will also struggle to effectively regulate different uses of AI across sectors without substantial investment in its existing regulators.”
He added: “The UK approach raises more questions than it answers on cutting-edge, general-purpose AI systems like GPT-4 and Bard, and how AI will be applied in contexts like recruitment, education and employment, which are not comprehensively regulated. The government’s timeline of a year or more for implementation will leave risks unaddressed just as AI systems are being integrated at pace into our daily lives, from search engines to office suite software. We’d like to see more urgent action on these gaps.”
And there you have it – a lazy, laissez-faire, light-touch approach that runs contrary to efforts to regulate AI in Europe, which is in the final stages of pinning down its Artificial Intelligence Act, and in many other parts of the world. That won’t help much when the Terminator comes knocking.
- Martyn Warwick, Editor in Chief, TelecomTV