A House of Lords committee has urged the government to re-think its copyright laws as AI firms continue to scrape creative content to train their language models.
The House of Lords Communications and Digital Committee told ministers on Friday that copyright laws are currently "failing" to protect creatives, warning that "some tech firms are using copyrighted material without permission, reaping vast financial rewards."
The committee called on the government not to "sit on its hands" while large language model (LLM) developers exploit the rights of UK creatives, and urged it to closely assess the risks of open LLMs as well as their benefits.
“The application of the law to LLM processes is complex, but the principles remain clear. The point of copyright is to reward creators for their efforts, prevent others from using works without permission, and incentivise innovation,” the committee said.
“The current legal framework is failing to ensure these outcomes occur and the Government has a duty to act. It cannot sit on its hands for the next decade until sufficient case law has emerged.”
It also questioned the government’s transparency regarding the appointment of experts to advise policymakers, warning there is a perception of conflicts of interest, which, they said, is undermining confidence in the integrity of the government’s work on AI.
“Addressing this will become increasingly important as the government brings more private sector expertise into policymaking,” they said.
“Some conflicts of interest are inevitable, and we commend private sector leaders engaging in public service, which often involves incurring financial loss.”
The AI copyright war
The issue of copyright has dogged the rise of AI, with the legal framework around it increasingly being called into question.
Content creators and creatives have said that their content is being taken illegally to train LLMs as tech companies scrape significant volumes of data to train their chatbots.
So far the most high-profile legal battle surrounding LLMs in the UK is Getty Images’ claim that AI image generator Stability AI infringed its intellectual property rights by using its pictures for training purposes.
But over the pond, The New York Times has also sued OpenAI and Microsoft over the use of its articles to train ChatGPT and Bing’s generative search responses.
Mail, Metro and i publisher DMG Media, Guardian Media Group and the Financial Times have also all made submissions to the committee arguing AI companies have ignored the legal licensing avenues available to them for the use of their content.
DMG Media said it was “actively seeking advice on potential legal action” at the time of its submission, which was published in October.
“The issue that must concern legislators and regulator[s] is this: now that machines have been trained to absorb information gathered by other parties, and organise, present and monetise it as a news publisher would, what effect will this have on the incentive for media organisations to continue to create and publish high quality, reliable news content, and what will be the impact on society and democracy should news organisations be forced to reduce investment or cease to operate?” DMG Media added.
Guardian Media Group called for the government to take action to protect journalism and maintain the integrity of the open web.
“This one-sided bargain risks undermining the willingness of individuals and businesses to invest the time and resources required to maintain a vibrant open web,” it said.
“Given the mission of the Guardian to make our journalism as widely available as possible, this challenge to the future of the open web arguably matters to us more than any other news publisher.”
‘A nuanced approach’ to controlling AI
The committee has recommended that the government re-evaluate the effectiveness of current copyright law, and update legislation if it is found that copyright holders aren’t being sufficiently protected.
To this end, it called for a “nuanced approach” to the issue, warning that “the government has a duty to act. It cannot sit on its hands for the next decade and hope the courts will provide an answer.”
Still, what this approach would look like remains unclear. Some experts are concerned that too much control over AI development could stifle innovation in the UK and ultimately take the country out of the global AI arms race.
“It is important to protect creative intellectual property for any content from creators to brands, but the reality is that anyone not on the AI train is going to be left behind,” said John Kirk, Deputy CEO of Inspired Thinking Group.
“Ensuring there is a well-managed governance model in place supporting content operations can help mitigate risks and resulting hesitancy for the adoption of AI in day-to-day applications.
"Creatives must not be scared to embrace AI, despite any fears, as it can ease the burden of content creation to meet the rapidly rising demand.”
Sjuul van der Leeuw, CEO of Deployteq, part of Inspired Thinking Group, said: “The rapid evolution of AI over the past year has inevitably hit bumps in the road, and it is important to address issues around trust and safety such as AI copyright when they arise.
“Collaboration between government, regulators and industry can ensure marketing creatives are supported when it comes to content creation, but marketers themselves should embrace AI to boost efficiencies in their own operations.
“For example, AI can play a crucial role in creating content for email marketing campaigns as well as analyzing the first-party data collected, saving huge amounts of time and resulting in a higher impact campaign.”