
Meta is reportedly planning to launch AI chatbots capable of exhibiting different characters and personalities. But experts warn these so-called AI ‘Personas’ could put people’s data privacy at risk.

The chatbots, which the Financial Times reports could launch as soon as next month, will be able to have human-like discussions with users and offer customised recommendations based on data from previous conversations.

Dubbed 'Personas' by Meta employees, these AI bots can take the form of different characters, including one that can speak like historical figures such as Abraham Lincoln and another that can give travel advice in the style of a surfer. 

The purpose of these AI bots will be to provide a new way for people to search, as well as simply to be a fun AI product for people to play with.

The move comes as Zuckerberg’s $800 billion company pushes to attract and retain users amid steep competition from social media powerhouses like TikTok, and attempts to enter the AI arms race that began with the launch of OpenAI’s ChatGPT last November.

As well as boosting engagement, chatbots could also be effective for collecting huge amounts of data on users’ interests, which could be pivotal to expanding Meta’s already ginormous advertising business. 

Ravit Dotan, AI ethics adviser and co-founder of the Collaborative AI Responsibility Lab at the University of Pittsburgh, told the Financial Times that Meta’s mass data collection through AI chatbots could foster concerns about privacy.

“Once users interact with a chatbot, it really exposes much more of their data to the company, so that the company can do anything they want with that data,” she said.

Meta’s Data Farm 

The upcoming launch of Meta’s AI personas comes a month after the tech giant released its Twitter rival Threads, which was also criticised for being powered by people’s data.

Threads takes more data than the majority of other social media platforms, transferring this data through the app’s integration with Meta to build a detailed advertising profile of each user.

But experts warned that this excessive data collection could put users' privacy – and security – at risk since users are handing over even more personal information to a company that already knows a lot about its account holders. 

And as Meta looks towards turning Threads into a decentralized service – which would allow users to view Threads content across other apps and theoretically give them more control over their data – experts warn that the move could expand the company’s reach across the internet.

Personas will only add to this reach, granting Meta yet another method of extracting user data and building accurate data profiles of every user they chat with.

AI with persona-lity 

This is not the first time tech companies have launched AI tools and systems that feature different personas and personalities.

Character.ai, an Andreessen Horowitz-backed start-up valued at $1 billion, for instance, uses large language models to generate conversation in the style of individuals such as X (formerly Twitter) and Tesla CEO Elon Musk and Nintendo character Mario.

Meta would also not be the first tech company looking to profit from its users’ conversations. Snapchat’s “My AI” chatbot, for instance, describes itself as an “experimental, friendly chatbot”, with which 150 million of its users have interacted so far.

It recently began “early testing” of sponsored links within the chatbot as a way for the social media company to earn money from each conversation. 

Meta has been investing in generative AI for a while, and this month released an open-source version of its large language model, Llama 2, to allow developers and businesses to build generative AI products.

As part of building the infrastructure to support the AI products, the tech giant has also been trying to procure tens of thousands of GPUs – chips that are vital for powering large language models. 

According to a Meta insider speaking to the Financial Times, the company will likely build in technology that will screen users’ questions to ensure they are appropriate.

It may also automate checks on the output from its chatbots to ensure what they say is accurate and free of hateful or rule-breaking speech.