
Would you trust ChatGPT to give you a medical diagnosis, manage your finances, or plan your personal and professional future? Well, the majority of consumers would.

That’s according to a recent survey by the Capgemini Research Institute, which found that 73 per cent of consumers across the US, Europe, and the Asia-Pacific region would trust content written by generative AI systems like OpenAI’s ChatGPT and Microsoft’s Bing Chat.

Researchers found that consumers are most satisfied with AI for search and gaming, while over half trust AI to assist with financial planning and two-thirds said they would benefit from receiving medical diagnoses and advice from generative AI.

Some 63 per cent of respondents also said they would be willing to seek advice from generative AI for personal relationships or life and career plans, with consumers over the age of 55 most likely to use the tech for this purpose.

Awareness remains low

While the majority of consumers responded positively to recent advancements in generative AI, researchers warned that many were unaware of the risks and dangers associated with the technology. 

According to the survey, almost half of respondents were unconcerned by the risk of generative AI being used to create fake news stories, while only 34 per cent were concerned about cyber criminals using the tech to orchestrate phishing attacks.

Awareness of the ethical risks of generative AI is also concerningly low – just a third of respondents said they were worried about copyright issues surrounding AI, and even fewer were concerned about the prospect of AI systems copying product designs and formulas.

Generative AI systems can be exploited for a range of malicious purposes – from launching malware campaigns in minutes to building sophisticated phishing attacks. 

AI developers like OpenAI have put restrictions in place to block the creation of malicious content on their platforms, but malicious actors have already found ways to bypass these measures.

In February, researchers from Check Point discovered that hackers were using OpenAI’s API in Telegram channels to create malicious content in seconds, including Python scripts for launching malware attacks and convincing phishing messages.

They also discovered several users selling these Telegram bots as a service, a business model that allows cybercriminals to use the unrestricted ChatGPT model for 20 free queries before being charged $5.50 for every 100 queries they make.

Niraj Parihar, CEO of the Insights & Data Global Business Line and member of the Group Executive Committee at Capgemini, said that many consumers are unaware of the dangers of AI systems when they fall into the wrong hands.

“The awareness of generative AI amongst consumers globally is remarkable, and the rate of adoption has been massive, yet the understanding of how this technology works and the associated risks is still very low.” 

Regulating AI

The findings arrive as AI experts and governments around the world push for laws and regulations to protect people from the potential risks AI poses to society.

The European Parliament recently approved the world’s first AI Act, which would ban AI systems that present an “unacceptable level of risk,” such as predictive policing tools or real-time facial recognition software, and introduce new regulatory requirements for generative AI tools such as OpenAI’s ChatGPT.

Other governments around the world are also following suit. UK Prime Minister Rishi Sunak recently touted the UK as the future centre of AI regulation, calling on governments to evaluate AI’s “most significant risks.”

Mr Parihar said the Capgemini Research Institute’s survey demonstrates not only the need for regulation to protect the public against the unforeseen risks of AI, but also the need for businesses to inform users of the risks associated with their products.

“Whilst regulation is critical, business and technology partners also have an important role to play in providing education and enforcing the safeguards that address concerns around the ethics and misuse of generative AI,” he said.

“Generative AI is not ‘intelligent’ in itself; the intelligence stems from the human experts who these tools will assist and support. The key to success, therefore, as with any AI, is the safeguards that humans build around them to guarantee the quality of their output.”

