
If you can read this, assume Bard can too. Google has updated its privacy policy to let you know that it now reserves the right to scrape anything you post online to build its own AI tools. 

The privacy changes amend an existing policy that covered “language models” and Google Translate, describing a host of new ways your content can be used for Google’s AI systems.

“Google uses the information to improve our services and to develop new products, features and technologies that benefit our users and the public,” reads Google’s newly adjusted policy as of July 2023. 

“We use publicly available information to help train Google’s AI models and build products and features like Google Translate, Bard, and Cloud AI capabilities.”

This is a strange clause for a privacy policy. These policies typically describe the ways a business uses the information you provide to it through its own services. 

But Google is actually reserving the right to capture and re-purpose the words from anything posted on the web – as if the entire internet is its own AI sandbox. 

While the update doesn't change the user experience or directly affect Google products yet, the new language suggests the company is doubling down on its AI bid – and that the general public's own words could be a significant ingredient in that development.

The search titan has been investing heavily in the development of AI systems since the launch of OpenAI’s ChatGPT last November and Microsoft’s $11 billion partnership with OpenAI.

While Google’s chatbot Bard originally launched to a mixed response from users, it has rapidly caught up with other generative AI chatbots on the market and continues to improve day by day. 

Google also announced an upcoming AI-based search known as the Search Generative Experience (SGE) to round out its lineup of AI offerings – which include the likes of AI shopping experiences, Google Lens features, and even an AI music generator. 

Google's great harvest 

Google’s privacy changes are just the latest development in the ongoing debate about the way AI systems harvest and handle people's data.

When OpenAI trained its ChatGPT large language model, it never asked for consent to use the stuff you might have posted publicly in various places on the internet. 

Nor has it asked since ChatGPT’s launch; it has continued to harvest data – including the prompts users type into the chatbot.

Experts have warned that generative AI systems’ data harvesting may put people’s data at risk, as these systems can be exploited by threat actors looking to steal sensitive information. 

Google’s parent company Alphabet has itself warned its own employees about the potential security risks of using AI in the workplace, and has even released its own Secure AI Framework in an effort to strengthen cybersecurity against AI threats.

Meanwhile, former Google employee and AI godfather Dr Geoffrey Hinton recently left his position at the tech titan over his concerns about AI’s exploitation of data and its risks to society. 

The neural net pioneer warned that the risks of AI chatbots are “quite scary”, and that the AI models could become more intelligent than humans and be exploited for malicious purposes.

“I’ve come to the conclusion that the kind of intelligence we’re developing is very different from the intelligence we have,” Dr Hinton said in an interview with The New York Times. 

“So it’s as if you had 10,000 people and whenever one person learned something, everybody automatically knew it. And that’s how these chatbots can know so much more than any one person.”

Your voice, taken by AI

It’s not just the risk of AI’s handling of data that has people concerned. Fears about privacy, intellectual property, and the impact of these models on human labour and creativity have plagued the introduction of new AI products.

Just last week, OpenAI was sued by a US law firm for allegedly violating several privacy laws by scraping data from the internet to train its AI chatbot. 

"Despite established protocols for the purchase and use of personal information, [OpenAI] took a different approach: theft," the 160-page lawsuit read. 

“They systematically scraped 300 billion words from the internet – books, articles, websites and posts – including personal information obtained without consent.”


In another suit, filed in January, AI company Stability AI was sued by Getty Images for allegedly using millions of copyright-protected images from its site to train its AI image generator, Stable Diffusion.

Google’s privacy policy changes make it clear that, like these AI developers, it plans to use publicly available data to train all of its AI products. 

Whether it will face a lawsuit of its own will depend on how privacy laws and AI regulation evolve as the technology takes the world by storm.