AI is already outpacing US legislators

  • Senators listen to the story of AI from companies with the biggest interest in promoting it
  • Principles of AI regulation enshrined in the Blumenthal-Hawley Framework – a slow process is likely
  • However, laws will be needed before next year’s presidential election, when deep fakery is likely to be pervasive 
  • Yet another case of trying to close the stable door after the horse has bolted?

This week on Capitol Hill in Washington DC, more than 60 senators congregated in the Kennedy Caucus Room for the “AI Insight Forum”, where they “listened to” the likes of Meta’s Mark Zuckerberg, Elon Musk of X, SpaceX and Tesla, Bill Gates, sometime CEO of Microsoft, Sundar Pichai, CEO of Google, and other industry bigwigs as they told them all about artificial intelligence. Exactly what was said remains unknown because the event was held in closed session, with the general public and media barred from attending. Quite why the senior representatives of the population of the Land of the Free agreed to such strictures remains unexplained. What we do know is that Congress is considering drafting bipartisan AI regulations and legislation but is moving so slowly that the AI industry is already running rings round it. Expectations are mounting that, whatever it eventually does, it will be too little, too late.

The fixed point around which the debate on the regulation of AI is revolving rejoices in the name of the “Blumenthal-Hawley Framework”, which Senators Richard Blumenthal (a Democrat) and Josh Hawley (a Republican) have constructed as the bipartisan “principles” governing AI legislation. Central to these principles is the establishment of a “licensing regime” that is to be applied to those bodies “engaged in high-risk AI development” (of which there is, as yet, no definition), as well as the creation of an independent body comprising experienced individuals who use AI as part of their daily work. This panel would collaborate with other regulatory bodies to ensure that the limits and application of AI would be completely transparent, such as via the “watermarking” of AI-produced content. Simultaneously, AI developers would be legally liable should their work and products “cause consumer or civil rights-related harms”.

Senator Blumenthal is very bullish about the principles and says, “Make no mistake, there will be regulation, the only question is how soon and what. Risk-based rules… that is what we need to do here. We need to act with dispatch, more than just deliberate speed… we need to learn from our experience with social media.” Quite. Over a decade and more now, regulatory and legislative efforts regarding social media have been partisan, partial, dilatory and generally ineffective.

John Hickenlooper, the Democrat who chairs the Senate Commerce Committee’s consumer protection subcommittee, reckons the development of AI and the growing unease over the data scraping used to train large language models (LLMs) would best be remedied by overarching privacy legislation, something consumers and privacy groups have been advocating for years.

Hickenlooper commented, “There are too many open questions about what rights people have to their own data and how it’s used, which is why Congress needs to pass comprehensive data privacy protections. This will empower consumers, creators, and help us grow our modern AI-enabled economy.” A laudable aim, but already too late to be of much immediate practical benefit.

Unsurprisingly, the president of Microsoft, Brad Smith, a lawyer by training, said he supports the Blumenthal-Hawley Framework because “it doesn’t attempt to answer every question by design.” He says the priorities should be safety and security, along with the establishment of a new federal agency to regulate and issue licences for the development and use of high-risk AI models. He added that the licensing, and its enforcement, should apply not only to AI developers but also to the deployers of various AI models.

Users should know who and with what they interact

The senators listened to three hours of exposition (and probably excuses) from the leading lights of AI and apparently were in unanimous agreement that individuals should have the legal right to know if they are interacting with an AI system or AI-generated content. It was noted that Microsoft is collaborating with other AI developers in an effort to convince the sector of the value of an “Authentication of Media via Provenance” (AMP) system that would provide “a unique signature for authentic content by ‘stamping’ the content by the physical device that generated it.”

There is mounting concern that, where AI is concerned, it is getting more and more difficult for people to determine what is real and what is fake, and pressure is growing for regulations and safeguards to identify and negate deep fake content ahead of next year’s presidential election, when attempted interference with the process is expected to be at an all-time high. That’s a very big task with a very short timetable.

Meanwhile, Woodrow Hartzog, professor of law at Boston University, said that attempts to control AI would fail if “half measures” were introduced. He noted that “AI is not a neutral technology” and cautioned against the blandishments of “industry-led approaches”, such as lobbying that promotes transparency, the mitigation of bias and “corporate principles of ethics”.

It is evident though that nothing much will, or can, happen in the near term. For example, the Senate majority leader, the Democrat Chuck Schumer, organiser of the forum, said, “We are beginning to really deal with one of the most significant issues facing the next generation and we got a great start on it today, [but] we have a long way to go.” 

His co-host, the Republican Senator Todd Young, opined that the Senate is “getting to the point where I think committees of jurisdiction will be ready to begin their process of considering legislation.”

However, another Republican senator, Mike Rounds, warned, “Are we ready to go out and write legislation? Absolutely not. We're not there.” 

Meanwhile, AI companies continue along their merry way, making hay while the sun shines and in the knowledge that any AI legislation and regulation that does eventually make it on to the statute books will be so far behind the reality of the industry that it’ll be the equivalent of trying to control the technology of autonomous vehicles by using regulations designed for a Ford Model T.

Interestingly and significantly, Sam Altman, chief executive of OpenAI, the company behind ChatGPT, and either a hero or a villain depending on your point of view, told the senators that legislation is urgently needed and begged them not to fall into the same patterns of partisan blocking manoeuvres that have so delayed and distorted regulation of the internet and social media platforms. He said: “My worst fears are that we – the field, the technology, the industry – cause significant harm to the world.”

Actually, Sam, it’s already happening.

- Martyn Warwick, Editor in Chief, TelecomTV
