UK’s CMA shares its seven proposed principles for AI

  • The UK’s Competition and Markets Authority (CMA) has published a report on AI foundation models
  • Quite quick off the mark for a government agency, but then the need for action is pressing
  • Foundation models are AI systems that act as a blueprint that companies can adapt to specific uses
  • They can be “fine-tuned” to provide fully customised commercial versions of AI
  • Not clear what will happen if some organisations fail to adhere to the new guidelines

All the fluster and bluster surrounding AI in general – and generative AI (GenAI) in particular – has had an aperient effect on the UK’s Competition and Markets Authority (CMA) and spurred a bureaucratic organisation into taking some quick action.

The British government agency has now completed an initial review of AI foundation models (FMs) and has just published its proposed principles for guiding competitive AI markets and protecting consumers, based on the responsible development and use of such models.

FMs are AI systems with a wide range of known and identified capabilities that can be tweaked and adapted to various specific purposes. FMs have been under development for some time now and, with ChatGPT and its ilk having demonstrated their properties and capabilities to a mass audience, are now recognised as valuable, adaptable blueprints to promote technological innovation and, via that, economic growth.

According to the Ada Lovelace Institute, “a defining characteristic of foundation models is the scale of data and computational resources involved in building them. They require datasets featuring billions of words or hundreds of millions of images scraped from the internet. Foundation models also rely on ‘transfer learning’ – that is, applying learned patterns from one task to another.”
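
To make the institute’s point about transfer learning concrete, here is a minimal illustrative sketch in Python – an assumption-laden example of the general technique, not anything drawn from the CMA report: a model pretrained on a large generic dataset is loaded, its learned layers are frozen, and only a small new “head” is trained for the downstream task (the ResNet backbone and the five-class task are invented for illustration).

```python
# Illustrative transfer-learning sketch (hypothetical example, not from the CMA report):
# reuse the patterns a pretrained model has already learned, train only a new task head.
import torch.nn as nn
import torch.optim as optim
from torchvision import models

# Load a backbone pretrained on a large generic dataset (ImageNet).
model = models.resnet18(weights=models.ResNet18_Weights.DEFAULT)

# Freeze the pretrained layers so their learned patterns "transfer" unchanged.
for param in model.parameters():
    param.requires_grad = False

# Replace the final layer with a new head for an assumed five-class downstream task.
model.fc = nn.Linear(model.fc.in_features, 5)

# Only the new head's parameters are updated during training.
optimizer = optim.Adam(model.fc.parameters(), lr=1e-3)
```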

The institute added that, “Current foundation models’ capabilities include but are not limited to the ability to: translate and summarise text; generate reports from a series of notes; draft emails; respond to queries and questions; and create new text, images, audio or visual content based on text or voice prompts.” An FM “can be accessed by other companies (downstream in the supply chain) that can build AI applications ‘on top’ of a foundation model, using a local copy of a foundation model or an application programming interface (API).” 

A case in point: following the launch of its foundation model GPT-4, OpenAI permitted other companies to build products underpinned by it. They include Microsoft’s Bing Chat and educational apps such as Duolingo. FM providers will also allow downstream companies to create a customised version of an FM via a process known as ‘fine-tuning’.
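
For a rough sense of what building “on top” of an FM via an API looks like in practice, the sketch below uses the OpenAI Python client – the tutoring prompt is invented for illustration, and a fine-tuned variant would simply be referenced by its own model identifier instead of the base model name.

```python
# Illustrative sketch of a downstream app calling a foundation model via an API
# (hypothetical prompt; requires an OPENAI_API_KEY in the environment).
from openai import OpenAI

client = OpenAI()  # reads the API key from the environment

response = client.chat.completions.create(
    model="gpt-4",  # a fine-tuned variant would be referenced by its own model ID here
    messages=[
        {"role": "system", "content": "You are a language-learning tutor."},
        {"role": "user", "content": "Explain the French verb 'aller' in one sentence."},
    ],
)
print(response.choices[0].message.content)
```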

Adherence to the principles is not compulsory, but further regulation is sure to come

The CMA’s report indicates how individuals and organisations could benefit, provided the development and use of such models follows a benign and sensible route-map. Such benefits would include innovative and improved products and services, easier access to information, breakthroughs in science, medicine and healthcare, and (no sniggering at the back there!) lower prices. The authority added that FMs could enable a broad range of smaller and novel enterprises “to compete successfully and challenge existing market leaders.”

The caveat, it pointed out, is that if competition fails to emerge and take off, or if big AI companies flout consumer protection law (which some certainly are doing in parts of the world) or freeze out smaller rivals, the market will be dominated by monopolistically minded behemoths in a hierarchy of market power – to the detriment of almost everyone.

Such worrying possibilities are self-evident, and it is noteworthy that other important issues raised by the emergence of FMs – including copyright and intellectual property, online safety, data protection, security and privacy – are not addressed in the CMA’s initial review.

The focus is solely on competition and consumer protection and, essentially, what the CMA is doing in its new paper is looking backwards in order to look forwards. It does so by drawing on “lessons learned from the evolution of other technology markets and how they might apply to FMs, as they are developed.” Whether that is the right approach to what is a new and unique problem remains to be seen.

The CMA’s proposed guiding principles are:

1) Accountability – FM developers and deployers are accountable for the outputs provided to consumers.
2) Access – the provision of “ready access” to key inputs, without unnecessary restrictions.
3) Diversity – the sustained diversity of business models, including both open and closed.
4) Choice – businesses can decide how to use FMs.
5) Flexibility – the ability to switch between and/or use multiple FMs according to need.
6) Fair dealing – no anti-competitive conduct, including anti-competitive self-preferencing, tying or bundling. (And the best of British luck with that one.)
7) Transparency – consumers and businesses alike have access to information about the limitations and risks of FM-generated content, so that they can make informed choices.

The next step for the CMA will be, over the coming months, “to undertake a significant programme of engagement (and collaboration) with a wide range of stakeholders across the UK and internationally, to develop these principles further, working together to support the positive development of these critical markets in ways that foster effective competition and consumer protection for the benefit of people, businesses and the economy.”

The “wide range of stakeholders” comprises the usual suspects, including the likes of Google, Meta, Microsoft, OpenAI and Nvidia as well as government, regulators and other agencies, together with scientists and academics. The next update paper from the CMA will be published early in 2024.

Time is of the essence. As the immediate and potential societal impact of the sudden explosion of generative AI models became apparent, a group of more than 350 concerned scientists and comms technology and service provider CEOs issued a joint statement saying that governments and infotech companies must recognise out-of-control AI as so palpable a threat that it should be treated as a global priority of the same (or even greater) magnitude as global pandemics and the threat of nuclear war.

In March this year, a 1,000-strong group of high-tech leaders called for a moratorium on AI development until regulations governing its safe use could be devised. This has not happened.

However, demands for governments, agencies and organisations worldwide to collaborate and agree on binding principles to regulate AI are growing in strength, and we must all hope that something will come of them. If not, a rogue state, a rogue company or even a single rogue individual could develop a form, or forms, of AI to the point where it becomes an existential threat to humanity.

- Martyn Warwick, Editor in Chief, TelecomTV
