AI Roundtable for a focused approach to legislation in the UK
UK Technology Secretary Peter Kyle has assured leading technology firms that the forthcoming artificial intelligence (AI) bill will target only the most sophisticated models and will not become an expansive regulatory measure for the emerging industry.
The AI bill, anticipated later this year, will concentrate on just two measures: transforming existing voluntary agreements between companies and the government into legal obligations, and converting the UK’s newly established AI Safety Institute (AISI) into an independent government entity, according to people familiar with the discussions.
In response to apprehensions that additional regulations might be appended to the bill during the legislative process, Kyle assured executives from Google, Microsoft, and Apple that the bill would not become a catch-all piece of legislation.
Kyle and Chancellor Rachel Reeves convened with executives from several prominent tech companies and investors, including Facebook’s parent company Meta, to discuss how the new government can bolster the tech and AI sectors to enhance UK growth.
While Sir Keir Starmer had been expected to announce an AI bill in the King’s Speech earlier this month, it was not among the 40 specified pieces of legislation. Instead, the King stated that the Labour government would “endeavour to establish the appropriate legislation to impose requirements on those developing the most potent artificial intelligence models”.
High-ranking officials are optimistic that the bill, which will focus exclusively on ChatGPT-style foundation models (large AI models capable of parsing and generating text and multimedia, created by a select few companies), will be ready for its first reading by year’s end.
The AI bill proposed by Sir Keir Starmer marks a shift from the approach of former Prime Minister Rishi Sunak, who was reluctant to legislate too early on the development and deployment of AI models, fearing that stringent regulation might hinder industry growth.
Post-Sunak, what to expect
Sunak’s government launched the AI Safety Institute last year to assess AI models for risks and vulnerabilities. At the UK’s AI Safety Summit in November, leading companies, including OpenAI, Google DeepMind, Anthropic, Amazon, Mistral, Microsoft, and Meta, signed a significant but legally non-binding agreement with governments including the UK, US, and Singapore.
Under this agreement, signatory governments would have the ability to test the companies’ latest and upcoming models for risks and vulnerabilities prior to their release to businesses and consumers. These companies made additional voluntary commitments earlier this year in Seoul, including a pledge “not to develop or deploy a model at all” if severe risks could not be mitigated.
Senior UK government officials believe there is an urgent need to make these voluntary agreements legally binding to ensure that companies already committed to the agreements cannot back out of their obligations if it becomes commercially advantageous to do so.
A consultation on the contents of the bill is expected to begin in the coming weeks and to run for approximately two months. Placing the AISI on an independent footing would strengthen its credibility and reassure companies that the government is not micromanaging its operations.
Starmer’s government is eager for the AISI to play a leading role in establishing global standards for AI development that could be adopted by governments worldwide. Further regulation to safeguard against potential harms associated with AI, including the unauthorized or uncompensated use of intellectual property to train models, will be considered separately from this bill.