- White House urges agencies to take a light-touch approach to regulation
- It's likely to encourage further experimentation and innovation
- But it could have unwelcome and unforeseen consequences
Artificial intelligence as a term has been bandied about for decades, but when it comes to real-world, commercial deployments, it is still a nascent technology.
Which perhaps explains why the US White House published a draft memo at CES this week advocating for a light-touch approach to regulating AI so as to encourage, not stifle, innovation. It follows an Executive Order published in February 2019 that aims to put America at the forefront of AI development.
The memo proposes 10 guiding principles for federal agencies to follow when dealing with AI. They are fluffy enough that, in a country like the US, lawyers and judges could end up having the final word on what is and is not allowed. It will be a boon for them, but could have unforeseen and unwelcome consequences for consumers.
"Federal agencies must avoid regulatory or non-regulatory actions that needlessly hamper AI innovation and growth," the White House said, in the draft memo (PDF). "Agencies must avoid a precautionary approach that holds AI systems to such an impossibly high standard that society cannot enjoy their benefits. Where AI entails risk, agencies should consider the potential benefits and costs of employing AI, when compared to the systems AI has been designed to complement or replace."
The White House said advancements in AI should not come at the expense of "American technology, economic and national security, privacy, civil liberties, and other American values, including the principles of freedom, human rights, the rule of law, and respect for intellectual property."
There is tension, of course, between things like national security, privacy, civil liberties, freedom and human rights. Depending on how it is used, and on the relevant government agency's interpretation of the White House's guiding principles, AI could easily upset that tricky balance.
For example, today, AI is used to 'train' facial recognition technology, which is proving useful as an authentication method. However, facial recognition has implications for privacy too, which is why California last year enacted a three-year ban on law enforcement using it on body-mounted cameras. The law was opposed by police groups, who want to use facial recognition to identify suspects out in public.
This exercise in grappling with the fallout of AI after the event – rather than keeping it on a short leash to begin with – looks set to become the norm, now that the White House has stated that it would prefer not to intervene in the market.
On the plus side, it should continue to foster a culture of experimentation that could take us in some interesting directions. On the downside, without a robust regulatory framework, some questionable or just plain shoddy applications of AI could make it to market. At best, it could make people's lives less, rather than more, convenient. At worst – as we saw in 2018 with the fatal accident caused by an Uber driverless car – it could put people in danger.
What's more, by taking a hands-off approach, the White House can presumably wait for some clever-clogs to invent the next big thing in autonomous weapons systems, and quietly acquire it for its own ends.