Six months ago I wrote that social media had a problem because of the coming AI Tsunami
At the time, the article centred on AI generating mass content (the AI Tsunami) in the form of AI-generated synthetic influencers, images, videos and bots posting nonsense content at scale. It felt disruptive. It felt messy. But it was a social media platform issue.
It turns out that was just the warm-up act
AI isn’t sitting on the edges of social feeds anymore. It’s now in the news cycle itself. It’s in headlines, commentary, image creation, voice clips, and increasingly in video. The line between content created by a journalist and content assembled by a machine is no longer obvious.
And that changes the stakes.
Fake AI-generated news content is no longer theoretical. It is credible enough to circulate before anyone questions it. It can look professional. It can sound authoritative. It can move fast enough to shape opinion before correction catches up.
Social media has always rewarded speed over accuracy
Now AI has multiplied the speed. Stories can be drafted, polished, illustrated, and distributed in minutes. That’s powerful. It’s also dangerous.
The problem isn’t that AI exists. The problem is where it sits.
When AI was helping someone write a caption or generate a meme, the consequences were mostly cosmetic. When AI starts influencing public debate, shaping news narratives, or producing material that looks indistinguishable from reality, trust becomes fragile.
This week I was interviewed about how well people can spot AI-generated news content.
The uncomfortable truth? Most of us aren’t very good at spotting AI-generated news content
Not because we’re foolish. Because the content is getting better. And because we are wired to trust what looks familiar.

What worries me isn’t that AI can create fake content. Technology has always enabled manipulation in some form. What worries me is that the distribution systems were already fragile. We built platforms that reward outrage, speed, and emotional reaction. Now we’ve given those platforms a content engine that never sleeps.
That combination matters.
It means the conversation is no longer just about “AI on social media.” It’s about AI in information itself.
To prove the point, I generated a live-style storm broadcast image set on Wellington’s south coast – complete with wind-blown reporter, crashing waves, emergency lights, lower-third graphics and timestamp. It took minutes. No camera crew. No satellite truck. No producer. Just a prompt and a machine. That’s the shift. The barrier to creating something that looks like legitimate, on-the-ground news coverage has effectively disappeared.
And if it looks real, feels real, and carries the visual language of trust, most people will treat it as real.
The deeper issue is trust
If people lose confidence in what they read, hear, or watch, this stops being a technology issue. It becomes something much bigger. It becomes civic. It becomes commercial. It becomes cultural.

And here’s the part we shouldn’t ignore: this isn’t slowing down. AI will only get sharper, faster, and more woven into everyday life. Businesses will lean on it. Individuals will experiment with it. Some will use it well. Some won’t.

So the answer isn’t panic. It’s discipline. We lift our standards. We think twice about sources. We favour transparency over speed. And we choose credibility over convenience.
Six months ago I thought social media was the concern. Now I think social media was just the visible symptom
Fake AI-generated news content is the deeper shift. The real change is happening underneath – in how information is created, distributed, and trusted. We’re not at the end of this conversation. We’re at the beginning.