
Why this Indian newsroom AI story scandal should terrify readers

An Indian media outlet failed Journalism 101 by running an AI-generated story on a fake video with a fabricated expert quote.

Written by: Nic Dawes


I think I have seen the end stage of ad-based content monetisation in India, and it is every bit as ridiculous and troubling as you might expect. Strangely enough, even though I worked for three years in a major Indian newsroom, it is because I am now in New York City that I have a front row seat.

On the evening of July 28, with the commute in full swing, the first hints that something awful was happening on Park Avenue started to ripple through the city: videos on X of a massive police contingent moving into position, official warnings of an active shooter, unconfirmed reports that at least one person was dead. “This situation in Midtown looks potentially very bad,” I messaged an editor in the newsroom at The City, the local news organisation that I lead. “Yes,” he replied, “talking to contacts, trying to get something more concrete.”

Over the next half hour, X deteriorated into the usual avalanche of rumor, hate speech, and potentially useful information that was impossible to trust. Local reporters from our newsroom, the New York Times, the New York Post and CNN were trying to piece together what was happening, balancing urgency and accuracy as best they could, but the first outlet to actually go live with the story was Hindustan Times, which posted a quick piece aggregating social media posts. Times Now quickly followed with its own story, aggregating HT’s reporting. No American outlet yet had a story live. With only X to go on, it isn’t surprising that HT’s coverage incorporated some of the most inflammatory and inaccurate speculation, notably the claim that the shooter was “a middle eastern man”. That wasn’t true, but even as more accurate details became available, right wing accounts continued to repeat it, and HT’s story – later amended – lent the idea credence.

Why was a Delhi-based newsroom competing to be first on a New York City story, without reporters on the ground, sources in the NYPD, or special expertise in American mass shootings?

In July this year, a young editor on the Global Desk of the Economic Times, part of The Times Group, fired up trending topics to see what had been delivered overnight and quickly found internet gold: Donald Trump had released an AI-generated video of Barack Obama being arrested and jailed. The point of the video was to hype up allegations by Director of National Intelligence Tulsi Gabbard that Obama conspired to manipulate the 2016 intelligence community assessment of Russian interference in the presidential election, and no doubt to distract his increasingly fractious base from the ballooning Jeffrey Epstein scandal.

Of course, to be fast is half the battle for breaking news traffic. It helps to get all your keywords right, and to have the search authority that accrues to India’s biggest business newspaper, but that is nothing without speed, so our editor must work fast. When the story goes live it is headlined “Trump posts fake video of Obama in jail, declares ‘treason’ and ‘crime of the century’ in AI-driven blitz”. It also has a handy little summary at the top, and a quote from a prominent US disinformation expert, Nina Jankowicz.

“This deepfake is political disinformation at its worst,” the former head of Joe Biden’s Disinformation Governance Board apparently told ET. “It erodes public trust, damages reputations, and poses serious threats to democratic stability.”

Except, of course, that she didn’t. As the real Nina Jankowicz pointed out on Bluesky a day after it was published, she never spoke to ET. The quote was almost certainly a pure hallucination, the kind of plausible-sounding bullshit that large language models – rather than journalists – produce when they need to smoothly fill a gap.

An updated version of the article no longer has this quote, but neither does it carry a correction or an apology. The internet, of course, never forgets.

To get your favorite chatbot to produce this result you might prompt it with something like: “Write a news story about the Obama video released by Donald Trump on Truth Social yesterday. Here are some URLs you can look at for reference. Be sure to keep search engine optimization in mind”. Seconds later, the text would be ready for the content management system, and a few clicks later out in the wild, racking up views.

If Jankowicz, who is the subject of relentless trolling and the author of “How to Be a Woman Online”, did not monitor mentions of her name carefully, perhaps no one would have noticed the other unusual features of the piece: a writing style that no one familiar with Indian journalistic vernacular would recognize as such, and a bare “global desk” byline on a story that had apparently involved some reporting effort. But Jankowicz did notice, and watched the words she had never said travel from ET, to MSN, and then back into LLM outputs. It was, she wrote, a “perfect AI Ouroboros”: ET had published an AI-generated story about an AI-generated video with a fake quote from a real expert, which was soon being authoritatively confirmed by AI tools as real.

ET’s Global Desk ought to have known better. On July 17, a few days before the Trump-Obama story appeared, another viral story was peaking: Andy Byron, the CEO of Astronomer, caught in a tender embrace with his company’s HR chief at a Coldplay concert, had issued a heartfelt apology. ET put out a story with a few tells suggesting that it too had the help of AI. It used the American spelling of “behavior”, for example, and appended a brief FAQ of the kind that ChatGPT often offers to provide. More worryingly, the apology was fake, as Astronomer quickly confirmed. Some versions of the story, still live on the ET site, have a hastily inserted “fact check” deep in the piece, written in a much more characteristically Indian English style. Others are unchanged.

What is going on here? One version is that these errors are exactly what you would expect when the logic of production at Bennett Coleman & Co meets AI enablement: speed, volume, and virality drive revenue, and journalism is more a format than a set of standards and values. Because the stories in question are “global” they have no potential to trigger legal, political, or commercial consequences in India, and they can be pushed out with zero accountability. Indeed, Indian audiences are likely an afterthought. Coverage of this kind at big Indian outlets, including Hindustan Times, where I worked for three years, is not designed to elucidate world events from an Indian perspective – that is the job of the few remaining foreign correspondents at HT, ToI and The Hindu. Instead, its goal is to win the race for search traffic, and to earn the much higher CPMs (cost per mille, the advertising rate per thousand impressions) that programmatic advertising yields in the US and Europe.

Now, those CPMs are cratering, and the arbitrage play of spending on lower-cost Indian aggregation talent and tech to earn dollar-denominated advertising revenue is beginning to break. What remains is profoundly at risk. AI overviews send far fewer clicks to news sites than traditional search results did, and users who haven’t already switched to social video are starting to turn directly to chatbots for news.

Google Zero, as search specialists call it, is a real possibility, and there is only so much volume you can produce to keep piling up the remaining pennies. The Times of India already produces over 1,500 stories per day. Hiring a junior deskie to feed trending topics to an LLM is one way to keep that treadmill spinning faster and cheaper.

Of course, there might well be things that ET could do with AI to truly enhance its journalism: looking for suspicious patterns in stock or commodity prices, mining government data and company reports, building compelling apps more efficiently. But those things would only be relevant to a strategy based on unique, value-added journalism that people will pay for. Like most of its counterparts, ET now paywalls some of its journalism, but the incentives to keep the churn going predate the internet era, and the habit is clearly hard to kick.

In 1994 the first price war between ToI and HT in Delhi kicked off, setting up a decade of skirmishes that dramatically drove down the price consumers paid for a copy. The result was that major dailies began practising internet economics in print. They needed all of the advertising that a booming consumer economy could deliver to pay for the cost of the newsprint they were giving away, and they needed to make it intrusive: gatefold covers, wraps, “creative” treatments of the masthead, logos that broke free and floated into the copy. They pushed plants into third-tier cities, cranked up their circulation numbers, and rode the waves of marketing from cellphone companies, builders, private universities, and e-commerce.

Something similar happened with satellite TV, which was sold so cheaply that endless ad breaks ate deep into viewing time, banners floated over Aamir Khan’s biceps, and a nation of consumers was taught that journalism and entertainment just weren’t worth paying much for, and that fighting past the ads was as inevitable as traffic at Ashram Chowk.

Compounding the problem, newspapers had just about zero data on their subscribers, which made it much harder to win them over to paid digital alternatives. The last mile of print delivery, and the customer relationship with it, was the sole property of independent distributors. I once asked a very smart senior executive at HT if we could pay delivery workers to gather address information on subscribers, and start incentivising digital subscriptions among the most loyal readers. “Never,” he said. “Distributors would throw our copies in the Yamuna.”

In this world, the discovery of global search traffic, and the associated revenue, was a happy accident. Wire copy was cheap, we put it up as a matter of routine, and with the foreign eyeballs came dollar-based revenue that involved no complicated reinvention of the business.

Soon, the tail was wagging the dog, with HT and BCCL going toe-to-toe to produce trending content for global audiences. 

If newsroom bosses think the quality of international coverage doesn’t matter because it’s just a sideline – a commodity play for foreigners, not real Indian customers – they should think again. The temptations of editing by AI-augmented algorithms will not stay in their box; once they have a toehold they will spread, perhaps to the entertainment desk first, then stock market coverage, weather, maybe metro, and so on. For those inclined to tolerate the immense legal and reputational risk, consider that you cannot build the direct, trusted relationships with readers that keep them coming back, and paying, on AI slop. For that, they can go straight to the source.
A journalist, editor and communications professional, Nic Dawes has worked as the editorial and chief content officer of the Hindustan Times in India and the Communications Director for Human Rights Watch. He is currently the Executive Director of The City.