Sundar Pichai, CEO of Google and Alphabet, is calling for the regulation of artificial intelligence (AI) – and quite rightly so. Currently, no universal regulations are in place, but that’s unsurprising: technology is moving at a pace that regulatory bodies can hardly keep up with. However, with driverless cars and other slightly scary innovations on the horizon, the need for rules is more urgent than ever.
AI is changing, and will continue to change, the lives of the public. Transport, manufacturing, retail, cosmetics – AI will impact pretty much every aspect of our daily lives. In particular, we will be able to enjoy more efficient and, in some instances, more personalised services in our day-to-day activities.
However, some industries are more sensitive than others, with no room for error. Healthcare is a particular minefield: on one hand, AI has tremendous potential in diagnosis, data analysis, virtual nursing assistants, and much more. On the other, it’s a dangerous game to play when lives are at stake and regulations are lacking. As Sundar suggests, such an industry needs rules specifically tailored to the risks and benefits AI may present to it.
Balancing good versus evil
AI is shaking up every industry, making for a very exciting time to be alive. As yet, no one wants to be a party pooper and impede that with regulations. However, as much as AI can do good (and exciting) things, it also has the potential to do a world of damage.
Already, AI researchers, the Pentagon, and many others are racing to counter deepfake technology. Seeing is no longer believing, with deepfakes becoming a harrowing weapon, particularly against famous figures. Without wanting to dish out any ideas, malicious actors could use deepfakes to doctor footage during election campaigns, or engineer fake footage by swapping a target’s identity into existing video.
As with any technology, first and foremost, no one should ever come to harm. Thus, rules are needed to ensure processes are in place to keep people safe. Sundar’s suggestion is that we use the GDPR as a foundation for AI rules and build on the existing framework for different AI applications.
Sundar is not the only one to have expressed concerns recently. The European Commission has also called for a five-year ban on facial recognition, which Sundar is backing. The leaked white paper detailed the Commission’s endeavour to buy some time to prevent abuse of facial recognition technology.
Time is of the essence on both sides. AI researchers and companies want to deliver technologies that will improve our lives, health, and safety; regulatory bodies, meanwhile, need time to pen some rules before those technologies run ahead of them. Given the high-profile expressions of concern, it will be interesting to see what is said at the World Economic Forum, taking place this week.