
Roger Lynch: "Generative AI is not Fair Use."

The testimony of Roger Lynch, CEO of Condé Nast, to the Senate Judiciary Subcommittee on Privacy, Technology, and the Law at its hearing on January 10, 2024, is quoted in full below:

Oversight of AI: The Future of Journalism

Chair Blumenthal, Ranking Member Hawley, and members of the Subcommittee, thank you for inviting me to participate in today’s critical discussion about AI and the future of journalism. My name is Roger Lynch, and I am the CEO of Condé Nast.

Condé Nast was founded in 1909 and is one of the world’s most renowned media companies today, creating and distributing every type of media through its iconic brands and publications. These include The New Yorker, Vogue, Vanity Fair, Wired, Architectural Digest, Condé Nast Traveler, GQ, Bon Appétit and many more. Over 300 million unique users engage with our content on our sites every month, and over half a billion people follow us on our social media channels. Our videos were watched 17 billion times last year alone. This audience reflects our truly global scale, with content distributed in 32 countries and markets around the world.

Our mission is to produce exceptional, culture-defining journalism and creative content. This purpose is evident in our output – output that only human capital, together with rigorous standards and fact checking developed over a century, can produce. The result is stories that change society for the better, information that helps people make decisions in their daily lives, and content that creates an informed and empowered electorate. This is what The New Yorker has done for 100 years - from publishing John Hersey’s 1946 account of survivors of the atomic bombing of Hiroshima; to serializing chapters of Rachel Carson’s Silent Spring, which helped launch the modern environmental movement in 1962; to, more recently, Patrick Radden Keefe’s exposés on the Sackler family’s ruthless marketing of OxyContin, the drug from which the family profited billions on the back of people’s addictions. Titles like Vogue, Vanity Fair and GQ bring public figures and current affairs to consumers through a lens of arts, culture and entertainment. As an example, Taylor Antrim interviewed Ukraine’s First Lady, Olena Zelenska, for Vogue, bringing her poignant reflections on the early days of the Russian invasion and life under siege to millions of readers.

Condé Nast is proud of its history of innovation, continually transforming itself to deliver the highest quality content to its readers and subscribers across brands and platforms, and through the most advanced forms of media distribution.

I personally enjoy leading companies through times of great technological change. Generative AI (Gen AI) is certainly bringing about such change and is already demonstrating tremendous potential to make the world a better place. But Gen AI cannot replace journalism. It takes reporters with grit, integrity, ambition and human creativity to develop the stories that allow free markets, free speech, and freedom itself to thrive.

My experience, which includes starting one of the first broadband companies in Europe and the first IPTV company in the UK, launching the first live streaming TV service, Sling TV, and serving as CEO of Pandora, has given me great insight into the challenges and opportunities presented by technology and, in particular, Gen AI. Disrupting deeply entrenched content distribution models is in my DNA. In each of these cases, successful new businesses were built on a foundation of licensing content rights. Licensing allowed distributors to work together with content creators to innovate new and better consumer experiences and generate profits that were reinvested in great content. I am here today because congressional intervention is needed to make clear that Gen AI companies must also seek licenses to use publisher content in Gen AI.

Unfortunately, current Gen AI tools have been built with stolen goods. Gen AI companies copy and display our content without permission or compensation in order to build massive commercial businesses that directly compete with us. Such use violates copyright law and threatens the continued production of high-quality media content. These companies argue that their machines are just “learning” from reading our content, just as humans learn, and that no licenses are required for that. But Gen AI models do not learn like humans do. There are many examples of chatbots displaying content plainly derived from the works they ingest. In effect, they are mashing up copies at enormous scale and speed. Just as copyright law does not permit a human to replicate and regurgitate copyrighted material for commercial gain without a license, it does not allow Gen AI companies to do so either.

Moreover, Gen AI technology enables misinformation/disinformation on an unprecedented scale. In the wrong hands, Gen AI can generate outputs that are customized for individuals, making misinformation in all forms - fake photographs, audio, video and documents - look real. Widely available Gen AI tools hallucinate and generate misstatements that are sometimes attributed to real publications like ours, damaging our brands. When confronted with unprecedented amounts of misinformation, Americans can’t possibly spend all of their time determining what’s true and what’s false. They are far more likely to stop trusting any source of information, which would have devastating consequences for our already polarized society.

By contrast, our company and companies like ours depend on customer trust. We have extensive processes to check our facts and make sure our content is accurate and fair. To address these issues, Gen AI providers should be required to disclose when output is generated without human review, and should ensure that they do not attribute to our brands content that we did not create.

Luckily, there is a path forward that is good policy, good business and already the law: licensed and compensated use of publisher content for both training and output. This will ensure a sustainable and competitive ecosystem in which high-quality content continues to be produced and trustworthy brands can endure, giving society and democracy the information they need. It will also ensure that the current multiplicity of viewpoints continues to be represented in journalism, without the big tech companies becoming the gatekeepers for news. Without licensed use, however, the ecosystem will collapse, and Gen AI-powered search will have nothing to search beyond poor-quality content and misinformation.

More than 20 lawsuits are pending against Gen AI companies that are refusing to license content for training and output. Big tech companies are depending on the expense, delay, and uncertainty of litigation to avoid coming to the table. In fact, they have told us directly that they believe our content should be available to them for free. Congress should eliminate any doubt that licenses are required.

Journalism is a fundamentally human pursuit, and it plays an essential and irreplaceable role in our society and democracy.

Journalism is, simply, hard work. Journalists must develop reliable sources and seek out firsthand information, sometimes in very difficult places. The New Yorker, for example, embedded journalists in Ukraine, and its editor in chief, David Remnick, recently traveled to Israel and the Lebanese border and provided Americans with clear-eyed perspectives from both Israelis and Palestinians. The New Yorker was the first magazine ever to win a Pulitzer Prize for its writing and now has six, in addition to eight George Polk Awards, 15 Oscar nominations, over 75 National Magazine Awards, and a Peabody, among others, all recognizing its distinguished journalism and public service. High quality journalism is essential work that changes societal norms, holds power to account, helps keep free markets free, and brings our communities together. Whether reporting on local high school sports, fashion, architecture, art or foreign affairs, journalism is a uniquely human venture that AI can only parrot, not create.

Creating this content requires tremendous effort and resources. We employ thousands of individuals to produce and distribute our journalism. All of our publications have specific rules for fact checking, attribution of quotes and other steps requiring human judgment before publication. We are now seeing that outputs from AI are making this work even harder, as publishers need to ensure information, images and video are not AI fakes. Media organizations need to pay salaries, fund research and fact checking, pay rent and technology costs, cover travel, invest in security for journalists, and more. Our publications are supported by revenue from subscriptions, advertising, e-commerce and licensing, all of which depend on consumer traffic - the very traffic threatened by Gen AI.

Gen AI companies are using our stolen intellectual property to build tools of replacement.

Today’s Gen AI tools maintain complete copies of the works they train on, including our content, and output the substance – sometimes verbatim, sometimes paraphrased – while keeping 100% of the value for themselves. They are training consumers to come to them for information, not to us, and, unlike traditional search, they are keeping consumers within their experiences, depriving us of the opportunity to connect with our audiences directly, customize our content for them, and generate advertising and subscription revenue, sales leads and other valuable data. By misappropriating our content in this way, they are directly threatening the viability of the media ecosystem.

Some Gen AI tools generate answers purely from their models; others combine natural language output with search, a technique called retrieval augmented generation (or “RAG”). Some Gen AI companies claim this is “just search”, but that couldn’t be further from the truth.

A traditional search engine response contains only a partial snippet and links to the content provider’s website; think of it as a teaser that entices the reader to click through to read the full piece of content. Search engines are vehicles to discover content, not replace it. But in Gen AI responses, users are given a complete answer to their query and the opportunity to ask follow-up questions. The user receives the information derived from our sites without further clicks and without being transferred to our sites.
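
To make the distinction described above concrete, here is a minimal, illustrative Python sketch (not part of the testimony; the corpus, scoring function and generator stub are invented for illustration) contrasting a search-style response, which returns a snippet and a link, with a RAG-style response, which folds retrieved article text into a generated answer that the user reads without clicking through.

CORPUS = [
    {"url": "https://example.com/a", "text": "Article A: a full reported story about topic X ..."},
    {"url": "https://example.com/b", "text": "Article B: a full reported story about topic Y ..."},
]

def retrieve(query, corpus, top_k=1):
    # Toy relevance score: how many lowercase words the query shares with the article.
    def score(doc):
        return len(set(query.lower().split()) & set(doc["text"].lower().split()))
    return sorted(corpus, key=score, reverse=True)[:top_k]

def search_response(query):
    # Traditional search: a short snippet plus a link the reader must click through.
    return [{"snippet": d["text"][:60] + "...", "url": d["url"]} for d in retrieve(query, CORPUS)]

def rag_response(query, generate):
    # RAG: the retrieved article text is handed to a generator, and the user reads
    # the composed answer in place, with no click-through to the source site.
    context = "\n\n".join(d["text"] for d in retrieve(query, CORPUS))
    prompt = "Using the sources below, answer: " + query + "\n\n" + context
    return generate(prompt)

if __name__ == "__main__":
    print(search_response("topic X"))
    # Stub generator; a real system would call a large language model here.
    print(rag_response("topic X", generate=lambda prompt: "[generated answer based on]\n" + prompt))

In this toy setup the retrieval step is identical in both paths; the difference is what reaches the user: a teaser with a link in the first case, a complete composed answer in the second.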

This isn’t just a traditional publisher-versus-tech dispute. In this context, many tech companies are also publishers – for example, Twitter/X and Meta are suing a scraper called Bright Data for misusing their scraped content, and Reddit has taken a variety of steps to make clear that it expects to be compensated for training on its data. These tech companies, too, believe they are entitled to be compensated for the use of their systems and content.

Some AI companies will tell you that publishers can opt out of AI training. This is misleading. First, opting out of future training does not address the content they have already taken. Second, most publishers generate a substantial amount of traffic from search. In order to opt out of the new search engines powered by AI, such as Google’s “Search Generative Experience”, we would have to opt out of search altogether, which would materially damage our businesses.

Publishers, such as Condé Nast, are committed to commercial licensing deals.

We are committed to negotiating licensing deals with Gen AI companies that we believe will allow the technology to thrive, while also creating a sustainable ecosystem for the continued creation of high quality content.

Big tech companies claim that getting permission for the use of copyrighted content isn’t practical, but it is. There are many situations in which large numbers of rights owners license large numbers of users efficiently. In music publishing, ASCAP, BMI, SESAC, GMR and others fulfill this role. The Copyright Clearance Center, global mechanical rights organizations, and companies like Shutterstock and Getty Images are all private-market solutions for aggregating rights for license. I am confident that the free market can generate efficient licensing solutions once the Gen AI companies acknowledge the need to license. The constitutional foundation of copyright is as important now as it was at the founding. Our nation’s founders believed that by creating a property right for authors and inventors, market forces, rather than politics, would provide incentives for the “progress of science and useful arts.”

And there’s good reason to believe that licensing deals will directly result in future investment in content. Following the adoption of a law in Australia requiring big tech platforms to pay for news content, $140M in incremental annual revenue for journalism is funding the hiring of a significant number of journalists and other investments.

Generative AI is not Fair Use.

We have already seen how the permissionless use of content causes harm to news, content creation and, ultimately, the public. Over sixty percent (60%) of all digital advertising revenue flows to three companies - Google, Amazon and Meta. A recent Press Gazette article reported that at least 8,000 journalism jobs were cut in 2023 alone in the US, UK, and Canada. A recent economic study conducted by academic researchers and the Brattle Group sheds light on the reason: it concludes that existing deals between news media companies and platforms do not come close to capturing the value generated by news content on those platforms, and estimates that, under the framework of the Journalism Competition & Preservation Act, Facebook and Google Search would owe news publishers between $11.9 and $13.9 billion annually. And this was all before AI.

We believe that a legislative fix can be simple – clarifying that the use of copyrighted content in conjunction with commercial Gen AI is not fair use and requires a license.

Fair use is designed to allow criticism, parody, scholarship, research and news reporting. The law is clear that it is not fair use when there is an adverse effect on the market for the copyrighted material. It is that market that creates the incentive to invest and innovate in content - and in that way the market supports journalism and innovation in the long run. Fair use is not intended simply to enrich technology companies that prefer not to pay. If content is the raw material of Gen AI, then it should be licensed and compensated, just as engineers must be paid and computer time must be acquired lawfully.

We also believe that it is critical to confirm the IP Enforcement Coordinator – the IPEC – the head of the office empowered to advise the Administration on IP enforcement issues. This critical position has been vacant since 2021, and there is an excellent bipartisan candidate ready for confirmation.

It’s imperative that the U.S. – the world leader in technological innovation – lead in the legislative arena. In practice, the Gen AI models operate globally and the marketplace needs a global solution for the technology to be deployed responsibly and sustainably. If the U.S. does not provide this leadership, other countries will step into the void in ways that could hinder the ability of the technology to serve humanity to its fullest potential.

In conclusion, we urge Congress to take immediate action to clarify that the use of publisher content for both Gen AI training and output must be licensed and compensated. The rapidity with which this technology is being adopted has no precedent, and the threat to the media and publishing industries is real and of great consequence. Big tech companies understand that time is on their side and that litigation is slow and, for many publishers, prohibitively expensive. The time to act is now, and the stakes are nothing short of the continued viability of journalism.
