As world leaders gather for the UK’s AI Safety Summit in Bletchley Park, critics warn the event lacks key perspectives and is dominated by corporate big shots.
The summit, announced earlier this year by UK Prime Minister Rishi Sunak, aims to bring together 100 political and industry leaders from “like-minded countries” to ensure the benefits of AI are “harnessed for the good of humanity.”
US Vice President Kamala Harris, China’s Vice Minister of Science and Technology Wu Zhaohui, and European Commission President Ursula von der Leyen are just some of the political leaders attending.
Other notable figures include X owner Elon Musk and OpenAI CEO Sam Altman – who have both previously warned about the risks of AI – as well as executives from other AI companies, including Meta, Anthropic, and Google’s UK-based DeepMind.
But the handful of global leaders and tech representatives attending the summit won’t be enough to address the enormous challenges that come with the rise of AI.
And the absence of civil society groups, labour unions, and other affected stakeholders from this exclusive guest list means key voices are being left out – much to the annoyance of the over 100 signatories of an open letter warning of the summit’s imminent failure.
“Your ‘Global Summit on AI Safety’ seeks to tackle the transformational risks and benefits of AI, acknowledging that AI ‘will fundamentally alter the way we live, work, and relate to one another’,” the open letter reads.
“The Summit is a closed-door event, overly focused on speculation about the remote ‘existential risks’ of ‘frontier’ AI systems – systems built by the very same corporations who now seek to shape the rules.”
There are much more pressing concerns, they warn. One is the misrepresentation of minorities and the bias that still plagues many AI systems due to the unrepresentative data they’re trained on. Type “doctor” or “leader” into any AI image generator and you’ll be shown a row of white, male faces.
Another crucial issue is the large-scale job displacement caused by AI. The ethical debate surrounding the rise of generative AI and its impact on the creative industries has angered creatives and led to industrial action around the world – from Hollywood writers to artists.
“The agenda’s focus on future, apocalyptic risks belies the fact that government bodies and institutions in the UK are already deploying AI and automated decision-making in ways that are exposing citizens to error and bias on a massive scale,” said Abby Burke, Open Rights Group Policy Manager for Data Rights and Privacy.
“It’s extremely concerning that the government has excluded those who are experiencing harms and other critical expert and activist voices from its Summit, allowing businesses who create and profit from AI systems to set the UK’s agenda.”
‘Photo opportunity’
Professor Mark Lee at the University of Birmingham, UK, believes the AI safety summit is a stage-managed “photo opportunity” rather than an open discourse about the risks of AI.
“We need an open debate,” he told New Scientist. “We want an interdisciplinary view of AI with people who are informed from a legal perspective, from an ethical perspective, from a technological perspective, rather than really quite powerful people from companies. I mean, they need to be in the room, but it can’t just be them.”
Like the open letter’s signatories, Lee says the summit seems fixated on hypothetical existential risks like “robots with guns” rather than the real and pressing risks of AI systems making biased decisions in medical diagnosis, criminal justice, finance, and job applications.
“A wider variety of voices, with more diverse backgrounds, could point law-makers in a more practical direction when discussing regulation,” he added.
Boomers and Doomers
Another reason the AI safety summit cannot be described as an open debate is the over-representation of tech companies at the event, which critics warn could allow them to shape the discussions and tilt regulations in their favour.
“Self-regulation didn’t work for social media companies, it didn’t work for the finance sector, and it won’t work for AI,” says Carsten Jung, a senior economist at the Institute for Public Policy Research, a progressive think tank that recently published a report on the key policy pillars that should guide the discussions at the summit.
“We need to learn lessons from our past mistakes and create a strong supervisory hub for all things AI, right from the start.”
It’s unclear whether the summit will indeed manage to keep tech companies’ corporate biases out of the roundtable discussions.
Already, the gathering seems designed to allay public fears around AI and to reassure those developing AI products that the UK will not take too heavy-handed an approach to regulating the technology.
But away from the AI safety summit, world leaders are being much sterner about protecting society from the risks of AI. The European Union, for instance, is edging ever closer to introducing a robust AI Act that would protect its citizens from the risks of the technology.
Even the US, home to many of the tech companies developing AI technologies, seems to be taking a more regulatory approach, judging by President Biden’s recent sweeping executive order on AI.