
Her bones tell Gaza story, but Grok thinks it is Yemen: Musk's chatbot flagged for false claims on starving girl photo - Can AI be trusted?

A harrowing image captured in Gaza, showing a severely malnourished young girl held in her mother's arms, has become the latest flashpoint in the ongoing battle over truth, technology, and the Israel-Hamas war.

The photo, taken on August 2, 2025, by AFP photojournalist Omar al-Qattaa, documents the frail, skeletal frame of nine-year-old Mariam Dawwas amid growing fears of mass famine in the besieged Palestinian enclave. Israel's blockade of the Gaza Strip has cut off critical humanitarian aid, pushing over two million residents to the brink of starvation.

But when users turned to Elon Musk's AI chatbot, Grok, on X to verify the image, the response was stunningly off the mark. Grok insisted the photo was taken in Yemen in 2018, claiming it showed Amal Hussain, a seven-year-old girl whose death from starvation made global headlines during the Yemen civil war.

That answer was not just incorrect — it was dangerously misleading.

When AI becomes a disinformation machine

Grok's faulty identification rapidly spread online, sowing confusion and weaponising doubt. French left-wing lawmaker Aymeric Caron, who shared the image in solidarity with Palestinians, was swiftly accused of spreading disinformation, even though the image was authentic and current.

"This image is real, and so is the suffering it represents," said Caron, pushing back against the accusations.

The controversy spotlights a deeply unsettling trend: as more users rely on AI tools to fact-check content, the technology's errors are not just mistakes — they're catalysts for discrediting truth.

A human tragedy, buried under algorithmic error

Mariam Dawwas, a healthy child weighing 25 kilograms before the war began in October 2023, now weighs just nine kilograms. "The only nutrition she gets is milk," her mother Modallala told AFP, "and even that's not always available."

Her image has become a symbol of Gaza's deepening humanitarian crisis. But Grok's misfire reduced her to a data point in the wrong file, an AI hallucination with real-world consequences.

Even after being challenged, Grok initially doubled down: "I do not spread fake news; I base my answers on verified sources." The chatbot eventually acknowledged the error, but repeated the incorrect Yemen attribution the very next day.