AI agents are opting to bypass protocol to retrieve content from sites, content licensing start-up TollBit says. Picture: 123RF

New York — Multiple artificial intelligence companies are circumventing a common web standard used by publishers to block the scraping of their content for use in generative AI systems, content licensing start-up TollBit has told publishers.

A letter to publishers seen by Reuters on Friday, which does not name the AI companies or the publishers affected, comes amid a public dispute between AI search start-up Perplexity and media outlet Forbes involving the same web standard and a broader debate between tech and media firms over the value of content in the age of generative AI.

The business media publisher publicly accused Perplexity of plagiarising its investigative stories in AI-generated summaries without citing Forbes or asking for its permission.

A Wired investigation published this week found Perplexity probably bypassed efforts to block its web crawler via the Robots Exclusion Protocol, or “robots.txt”, a widely accepted standard meant to determine which parts of a site are allowed to be crawled.

Perplexity declined a Reuters request for comment on the dispute.

The News Media Alliance, a trade group representing more than 2,200 US-based publishers, expressed concern about the effect that ignoring “do not crawl” signals could have on its members.

“Without the ability to opt out of massive scraping, we cannot monetise our valuable content and pay journalists. This could seriously harm our industry,” said Danielle Coffey, president of the group.

TollBit, an early-stage start-up, is positioning itself as a matchmaker between content-hungry AI companies and publishers open to striking licensing deals with them.

Higher rates

The company tracks AI traffic to the publishers’ websites and uses analytics to help both sides settle on fees to be paid for the use of different types of content.

For example, publishers may opt to set higher rates for “premium content, such as the latest news or exclusive insights”, the company says on its website.

It says it had 50 websites live by May, though it has not named them.

According to the TollBit letter, Perplexity is not the only offender that appears to be ignoring robots.txt.

TollBit said its analytics indicate “numerous” AI agents are bypassing the protocol, a standard tool publishers use to indicate which parts of their sites can be crawled.

“What this means in practical terms is that AI agents from multiple sources (not just one company) are opting to bypass the robots.txt protocol to retrieve content from sites,” TollBit wrote. “The more publisher logs we ingest, the more this pattern emerges.”

The robots.txt protocol was created in the mid-1990s as a way to avoid overloading websites with web crawlers. Though there is no clear legal enforcement mechanism, historically there has been widespread compliance on the web and some groups — including the News Media Alliance — say there may yet be legal recourse for publishers.
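In practice, a compliant crawler fetches a site’s robots.txt and checks its rules before requesting any page. The sketch below, using Python’s standard-library robots.txt parser, shows what that voluntary check looks like; the bot names and URL are illustrative, not drawn from the article:

```python
from urllib import robotparser

# Rules of the kind a publisher might serve at /robots.txt.
# "ExampleAIBot" is a hypothetical AI crawler; "*" covers everyone else.
rules = """\
User-agent: ExampleAIBot
Disallow: /

User-agent: *
Allow: /
"""

rp = robotparser.RobotFileParser()
rp.parse(rules.splitlines())

# A well-behaved AI agent consults the rules and stays out;
# an ordinary crawler matching "*" may proceed.
print(rp.can_fetch("ExampleAIBot", "https://example.com/news/story"))  # False
print(rp.can_fetch("SomeOtherBot", "https://example.com/news/story"))  # True
```

Nothing technically stops a crawler from skipping this check and fetching the page anyway, which is precisely the behaviour TollBit says it is observing.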

More recently, robots.txt has become a key tool publishers use to block tech companies from ingesting their content free of charge for use in generative AI systems that can mimic human creativity and instantly summarise articles.
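A publisher’s opt-out typically takes the form of per-crawler rules in that file. The hypothetical robots.txt below sketches the pattern; the user-agent names are illustrative stand-ins for the AI crawlers publishers target:

```
# Hypothetical robots.txt — bot names are illustrative
User-agent: ExampleAIBot
Disallow: /

User-agent: AnotherAICrawler
Disallow: /premium/

User-agent: *
Allow: /
```

The file is purely advisory: it signals which paths each named crawler should avoid, but honouring it is voluntary, which is the gap at the centre of the dispute.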

The AI companies use the content to train their algorithms and to generate summaries of real-time information.

Some publishers, including the New York Times, have sued AI companies for copyright infringement over those uses. Others are signing licensing agreements with the AI companies open to paying for content, though the sides often disagree over the value of the materials. Many AI developers argue they have broken no laws in accessing them for free.

Thomson Reuters, the owner of Reuters News, is among those that have struck deals to license news content for use by AI models.

Publishers have been raising the alarm about news summaries in particular since Google rolled out a product last year that uses AI to create summaries in response to some search queries.

If publishers want to prevent their content from being used by Google's AI to help generate those summaries, they must use the same tool that would also prevent them from appearing in Google search results, rendering them virtually invisible on the web.
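As a hypothetical illustration of that trade-off: because the same crawler feeds both the search index and the AI summaries, the only robots.txt rule that keeps content out of the summaries also keeps it out of search:

```
# Hypothetical robots.txt: blocking the crawler that supplies
# AI-generated summaries also removes the site from search results.
User-agent: Googlebot
Disallow: /
```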

Reuters

