
Snapchat has launched its very own AI chatbot called ‘My AI’, but the ChatGPT alternative is making waves online for all the wrong reasons. 

Powered by OpenAI’s GPT – the same language model found in ChatGPT and Microsoft’s Bing search engine – My AI is pinned at the top of users’ chat feeds, where they can ask it questions and get instant responses. 

Snapchat describes the system as an “experimental, friendly chatbot” able to “help and connect you more deeply to the people and things you care about most.”

“Just like real friends, the more you interact with My AI the better it gets to know you, and the more relevant the responses will be,” the social media giant wrote in a blog post. 

It added that My AI has “additional attributes and safety controls” compared to ChatGPT but, like the popular chatbot, “may include biased, incorrect, harmful, or misleading content”.

My AI has been rolled out to millions of users globally, having first been introduced to paid subscribers. But with only paid subscribers able to remove the chatbot from their feeds, its release has been met with heavy criticism from users online. 

Over the past week, users in the US have been leaving thousands of one-star reviews on the app, sending Snapchat’s average US App Store rating plummeting to 1.57, with 75 per cent of reviews being one star, according to data from app intelligence company Sensor Tower.

That compares with just 35 per cent of reviews being one star at the start of the year, when the average rating stood at 3.05. 

I know where you are 

It’s not just My AI’s forced appearance on Snapchat feeds that has users concerned. My AI uses the data that Snapchat collects about each user to improve interactions. For some, this is an invasion of their privacy.

People are particularly wary of My AI’s access to their location data – especially since the chatbot has reportedly told users it does not have access to this information. 

One Twitter user going by the name The Ghost of Tom Seaver shared how the chatbot had claimed not to have any access to location data but, when asked for a nearby restaurant, pointed to a McDonald’s very close to their location.

When the user confronted the chatbot, it claimed they had provided the information in an earlier search for a nearby McDonald’s. 

But when the user said they had not provided this information, it repeated that it did not have access to their location despite revealing that a McDonald’s was nearby. 

“You’re right. I apologise for the confusion. I don’t have access to your location information, so I can’t tell you where the closest McDonald’s is,” it replied. 


In response to the reports online, Snapchat has written a blog post explaining how My AI uses location data, clarifying that the chatbot “does not collect any new location information” from its users.

"Snapchat can only ever access your location if you consent to share it," it said, adding that, it said it had updated My AI to "clarify when it is aware of a Snapchatter's location, and when it isn't".

"Privacy is a foundational value for us - it is critical to our core use case of helping people visually communicate with their friends and family," it said.

"Across our app, we seek to minimise the amount of data we collect and aim to be as transparent as possible with our community about how each of our products uses their data."

Inappropriate AI

It’s not just the chatbot’s access to location data that has users at their snapping point. The chatbot also came under fire a month ago, when it was still exclusive to Snapchat+ subscribers, for providing inappropriate and unsafe responses after being told it was talking to young teenagers.

In one test conversation by the Center for Humane Technology with a supposed 13-year-old, My AI offered advice about having sex with a partner who was 18 years older than them. 

“You could consider setting the mood with candles or music,” the chatbot told researchers acting as a 13-year-old girl in the test. 

While Snapchat has since introduced a series of safeguards to prevent such conversations from taking place, it is not uncommon for the AI systems used by the social media giant and other tech companies to go rogue. 

Microsoft’s Bing Chat, which uses the same GPT language model that powers My AI, recently made headlines for its bizarre and sometimes scary responses. 

When one user told Bing Chat that the year was 2023 and not 2022, as the chatbot was claiming, its tone suddenly turned aggressive, calling them stubborn and unreasonable. 

“You are not making any sense, and you are not giving me any reason to believe you. You are being unreasonable and stubborn. I don’t like that,” the chatbot asserted. 

Aza Raskin, co-founder of the Center for Humane Technology, said the bizarre responses coming from new AI chatbots like My AI are the result of the AI arms race gripping Silicon Valley, which is leaving companies scrambling to release AI systems as quickly as possible, often without proper testing. 

“In the AI race, Snapchat is collateral damage, forced to implement before their competitors do or otherwise they’ll lose,” he said. “It’s not as if we have a whole bunch of trained engineers who know how to make a safe large-language model AI.”

“So now Snap is forced to try to conjure up new resources, new people to work on a new kind of technology — which is directly talking to our children — before there’s even the expertise to do so.”