
As the AI arms race heats up, concerns are mounting about the rapid advancement of artificial intelligence (AI) and its potential threat to the future of humanity.

The EU recently entered the final stages of introducing its ‘AI Act’, which could ban AI technologies deemed to present an “unacceptable level of risk” to humanity and introduce new regulations for generative AI tools like OpenAI’s ChatGPT.

Meanwhile, at the UN's first meeting about the threat of AI, UK Foreign Secretary James Cleverly stated that AI “challenges our fundamental assumptions about defence and deterrence”.

Cleverly set out the UK’s AI principles – one being that AI should be “safe and predictable by design, safeguarding property rights, privacy and national security”.

But Kevin Bocek, VP of Ecosystem and Community at Venafi, argues that without identity, these principles will become obsolete. He believes we need an identity-based kill switch that can stop an AI from working, stop it from communicating, and protect it by shutting it down if it has been compromised.

As the UK looks to host the first global AI safety summit later this year, EM360’s Ellis Stewart spoke to Kevin about why identity must be central to conversations about AI safety, and how introducing an AI kill switch could protect society from the AI threat.

Ellis: Why do we need an "AI kill switch"? Is there a risk of a Terminator-like Skynet scenario soon becoming a reality?

Kevin: “An AI kill switch is needed to help mitigate the risk that AI could pose. If AI systems go rogue and start to represent a serious threat to humankind, as some key industry figures have warned could be possible, their identities could be used as a de facto kill switch. 

“As taking away an identity is akin to removing a passport, it becomes extremely difficult for that entity to operate. This kind of kill switch could stop the AI from working, prevent it from communicating with a certain service, and protect it by shutting it down if it has been compromised. It would also need to kill anything else deemed dangerous in the dependency chain that the AI model has generated.”
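To make Kevin’s “dependency chain” point concrete, here is a minimal, hypothetical sketch of how revoking a single AI model’s identity could cascade to every agent or tool that model has spawned. The names (RevocationGraph, register, revoke) are illustrative only and do not correspond to any real Venafi or vendor API.

```python
# Hypothetical sketch: revoking one identity takes down everything
# downstream of it in the dependency chain the model has generated.
from collections import defaultdict


class RevocationGraph:
    """Maps each identity to the identities it has spawned or depends on."""

    def __init__(self):
        self.children = defaultdict(set)  # identity -> derived identities
        self.revoked = set()

    def register(self, parent: str, child: str) -> None:
        self.children[parent].add(child)

    def revoke(self, identity: str) -> set:
        """Revoke an identity and everything downstream of it.
        Returns the full set of identities taken offline."""
        killed, stack = set(), [identity]
        while stack:
            current = stack.pop()
            if current in killed:
                continue
            killed.add(current)
            self.revoked.add(current)
            stack.extend(self.children[current])
        return killed


# Usage: a model that spawned two helper agents, one of which spawned a
# tool. Pulling the model's identity takes the whole chain down with it.
graph = RevocationGraph()
graph.register("model-A", "agent-A1")
graph.register("model-A", "agent-A2")
graph.register("agent-A1", "tool-A1a")
print(graph.revoke("model-A"))
# -> {'model-A', 'agent-A1', 'agent-A2', 'tool-A1a'}
```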

Ellis: What would this kill switch look like, and who would control it?

Kevin: “Whether you think of it as a kill switch, a pause button, a big red button, or an amber lever, being able to manage the identity of these AI models is increasingly important.

“As we use this technology to make life-changing or life-and-death decisions, knowing which ones we’re using for what level of decision-making becomes important – and so, certifying them will become critical. That means having strong identity information, both on what we’re consuming from the cloud and also what we’re using with our own data.

“We give a web server a TLS certificate when we’re convinced it’s fit for purpose. For generative AI, we need to have the same type of identity certification so that we can either allow or disallow systems like this, so we have the power to interrupt the actions of the systems if at any point we decide something’s not right, or we just need to check what’s happening. We need that idea of a kill switch, a big red button, to correct and modify the system as we go along.

“There’s plenty still to work out in terms of who would control it. But assigning each AI a distinct identity would enhance developer accountability and foster greater responsibility, discouraging malicious use. Doing so with machine identity isn’t just something that will help protect businesses in the future – it’s a measurable success today.”
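As a rough illustration of the certificate-style gating Kevin describes, the sketch below checks an AI model’s identity before every action, so that revoking the identity acts as the “big red button”. All the names here (ModelIdentity, IdentityRegistry, gate_request) are hypothetical stand-ins; a real deployment would rest on machine-identity tooling such as TLS certificate issuance and revocation (CRL/OCSP) rather than an in-memory registry.

```python
# Hypothetical sketch: every action an AI model takes is gated on its
# identity still being valid, analogous to a TLS handshake validating
# a server certificate. Revoking the identity halts the model.
from dataclasses import dataclass
from datetime import datetime, timezone


@dataclass
class ModelIdentity:
    model_id: str          # unique identifier for the AI system
    issued_at: datetime
    revoked: bool = False  # flipping this is the "big red button"


class IdentityRegistry:
    """Tracks issued model identities and their revocation status."""

    def __init__(self):
        self._identities: dict[str, ModelIdentity] = {}

    def issue(self, model_id: str) -> ModelIdentity:
        ident = ModelIdentity(model_id, datetime.now(timezone.utc))
        self._identities[model_id] = ident
        return ident

    def revoke(self, model_id: str) -> None:
        """The kill switch: invalidate the identity so the model can
        no longer authenticate to any service."""
        self._identities[model_id].revoked = True

    def is_valid(self, model_id: str) -> bool:
        ident = self._identities.get(model_id)
        return ident is not None and not ident.revoked


def gate_request(registry: IdentityRegistry, model_id: str, action: str) -> str:
    """Every outbound action the model takes must pass this check."""
    if not registry.is_valid(model_id):
        raise PermissionError(f"identity for {model_id} revoked; blocking {action}")
    return f"{model_id} permitted to {action}"


# Usage: issue an identity, let the model act, then pull the switch.
registry = IdentityRegistry()
registry.issue("gen-ai-model-42")
print(gate_request(registry, "gen-ai-model-42", "call external API"))
registry.revoke("gen-ai-model-42")  # the kill switch
# gate_request(...) would now raise PermissionError for this model
```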

Ellis: Such a measure is akin to the kill switch that shuts down nuclear reactors in emergencies. Do you think AI is as much of a threat to humanity as nuclear destruction?

Kevin: “When you consider the wide-ranging applications for AI today and the potential applications for tomorrow, I would say potentially yes. As AI becomes embedded more firmly into business processes, it inevitably also becomes a more compelling target for attackers.

“An emerging threat is malicious actors ‘poisoning’ AI to affect the decisions a model makes, for example. The Center for AI Safety has already drawn up a lengthy list of potential societal risks, although some are more immediately concerning than others.

“That’s part of the reason why global governments are turning their attention to ways in which they can shepherd the development and use of the technology to minimize abuse or accidental misuse. The G7 is talking about it.

“The White House is trying to lay down some rules of the road to protect individual rights and ensure responsible development and deployment. But it is the EU that is leading the way on regulation. Its proposals for a new ‘AI Act’ were recently green-lit by lawmakers, and there are new liability rules in the works to make compensation easier for those suffering AI-related damages.”

Ellis: With the UK’s recently announced AI summit looming, which regulatory measures would you like to see discussed? Why has it taken so long for these discussions to be held?

Kevin: “The UK’s plan to host an AI Safety Summit this autumn is a welcome and forward-thinking step, but it is long overdue. With AI systems already being deployed in high-stakes domains, discussions on regulation and safety measures should have happened much sooner.

“As AI continues to advance at a rapid pace, there’s an urgent need to put guardrails in place to mitigate dangers to individuals and businesses. The summit provides the opportunity to establish a shared vision around regulations that can help to contain the risks of AI while encouraging exploration, curiosity, and trial and error.

“While the UK’s summit has great potential for shaping a shared regulatory approach to AI, it’s possible these discussions have been held back because, like the rest of the world, government officials weren’t aware of how quickly AI would advance and integrate into people’s lives.

“However, now that the summit is scheduled for later this year, we can hope that discussions around regulation and safety measures are at the top of the agenda.”