As the UK grapples with how to regulate artificial intelligence, a heated debate is unfolding over AI copyright transparency. At the heart of it is a proposed law that would force AI companies to reveal which copyrighted works were used to train their models—a move strongly backed by artists but slammed by some tech leaders as unworkable. Nick Clegg, former UK deputy prime minister and Meta’s one-time policy chief, recently weighed in with a stark warning.
At a London event promoting his new book, Clegg argued that requiring AI developers to get creators’ consent before training on their content could “basically kill” the AI industry in the UK.
He acknowledged that artists should be able to opt out, but said the idea of asking permission before using content in the first place was simply unrealistic. “Quite a lot of voices say, ‘You can only train on my content if you first ask.’ And I have to say that strikes me as somewhat implausible,” Clegg said, adding that the scale of data required to train large models makes prior consent infeasible. “If you did it in Britain and no one else did it, you would basically kill the AI industry in this country overnight.”
Lawmakers Clash Over Transparency vs. Innovation
His comments come amid rising tensions in Parliament, where a key amendment to the Data (Use and Access) Bill is under fierce debate. Proposed by filmmaker and peer Beeban Kidron, the amendment would require AI developers to disclose which copyrighted works were used in their training data—a measure intended to make existing copyright law enforceable and to deter misuse of intellectual property.
The proposal has won strong backing from across the creative industries. In early May, hundreds of high-profile figures—including Paul McCartney, Dua Lipa, Elton John, and Andrew Lloyd Webber—signed an open letter supporting the amendment. Their message was clear: creators deserve to know when and how their work is being used, especially by powerful tech companies.
Despite this momentum, the House of Commons rejected the amendment last week. Technology Secretary Peter Kyle emphasized the need for balance, stating that “Britain’s economy needs both sectors—AI and creative—to succeed and to prosper.”
But Kidron and other backers of the transparency push argue that without clear accountability, AI companies will continue to scrape and train on protected works without fear of consequence. “If AI companies had to be transparent, they’d be less likely to steal in the first place,” Kidron argued in an op-ed for The Guardian, promising the fight is far from over. The bill is expected to return to the House of Lords in early June, where debate will resume.