The UK will 'do its own thing' on AI regulation – here's what that could mean
LONDON — Britain says it wants to do “its own thing” when it comes to regulating artificial intelligence, signaling that it may diverge from the approaches taken by other leading Western nations.
“It's really important that the UK does its own thing in terms of regulation,” Feryal Clark, the UK's minister for AI and digital government, said in a CNBC interview broadcast on Tuesday.
She added that the government already has “good relationships” with artificial intelligence companies such as OpenAI and Google DeepMind, which voluntarily open their models to the government for safety testing.
Clark added: “It's important that we consider safety from the outset of model development… which is why we will be working with the sector on any safety measures.”
Her comments echoed remarks made by British Prime Minister Keir Starmer on Monday, who said that, after Brexit, Britain “now has the freedom to regulate in a way that we think is best for the UK.”
“There are different models around the world, there's the EU approach and there's the US approach, but we have the ability to choose the model that we think is in our best interests and we intend to do that,” Starmer said, answering questions from reporters after announcing his 50-point plan to make the UK a global leader in AI.
Differences with the United States and the European Union
So far, the UK has not introduced formal laws to regulate artificial intelligence, instead relying on individual regulators to enforce existing rules on companies developing and using AI.
This differs from the EU, which has introduced comprehensive pan-European legislation, the AI Act, that takes a risk-based approach to regulation and aims to harmonize rules across the bloc.
Meanwhile, the United States lacks any AI regulation at the federal level, instead employing a patchwork of regulatory frameworks at the state and local levels.
During Starmer's election campaign last year, Labour pledged in its manifesto to introduce regulation focusing on so-called “frontier” artificial intelligence models – referring to large language models such as OpenAI's GPT.
However, the UK has so far not confirmed the details of the proposed AI safety legislation, instead saying it will consult with industry before proposing formal rules.
“We will work with industry to develop that and bring it forward in line with what we said in our manifesto,” Clark told CNBC.
Chris Mooney, partner and head of commercial at London-based law firm Marriott Harrison, told CNBC that while the EU has pressed ahead with its AI Act, the UK has taken a “wait-and-see” approach to AI regulation.
Mooney told CNBC via email: “While the UK government has stated it has a 'pro-innovation' approach to AI regulation, our experience working with clients is that they find the current position uncertain and therefore unsatisfactory.”
One area where Starmer's government has made a public commitment to reforming AI rules is copyright.
Late last year, the UK launched a consultation reviewing the country's copyright framework to assess possible exceptions to existing rules that would allow AI developers to use the works of artists and media publishers to train their models.
Businesses face uncertainty
Sachin Dev Duggal, CEO of London-based AI startup Builder.ai, told CNBC that while the government's AI action plan “shows ambition,” its implementation without clear rules “borders on recklessness.”
“We’ve missed critical regulatory windows twice — first with cloud computing and second with social media,” Duggal said. “We can't make the same mistake with artificial intelligence, because the risks are exponentially greater.”
“Britain's data is our crown jewel; it should be used to build sovereign AI capabilities and create a British success story, not just to power overseas algorithms that we cannot effectively regulate or control,” he added.
Details of Labour's plans for AI legislation were initially expected to emerge in King Charles III's speech opening the UK Parliament last year.
However, the government has only pledged to establish “appropriate legislation” for the most powerful AI models.
“The UK government needs clarification here,” John Buyers, international head of artificial intelligence at law firm Osborne Clarke, told CNBC, adding that he had heard from sources that a consultation on formal AI safety laws is “pending release.”
“By releasing consultations and plans piecemeal, the UK is missing the opportunity to provide a comprehensive view of where its AI economy is heading,” he said, adding that the failure to disclose details of the new AI safety law would create uncertainty for investors.
Still, some in the UK tech community believe a looser, more flexible approach to AI regulation may be right.
Russ Shaw, founder of advocacy group Tech London Advocates, told CNBC: “It’s clear from recent discussions with the government that there is a lot of effort going on around AI safeguards.”
He added that the UK was well positioned to pursue a “third way” on AI safety and regulation: “sector-specific” rules applied to industries as varied as financial services and healthcare.