“45% of the AI responses studied contained at least one significant issue, with 81% having some form of problem”

I’m a big fan of using GenAI to assist in research, ideation, and even sense-checking – asking it to help me with my own critical and lateral thinking. I use these tools multiple times a day, and I’m constantly encouraging the journalists I work with at Today Digital to use GenAI more, to help them boost both their productivity and the impact of their work.

But it’s *vital* to stay fully aware of GenAI’s limitations when using it for anything where facts matter.

We can remind ourselves as often as we like that LLMs have no true understanding, no real intelligence, and no concept of what a “fact” actually is – but the more we use them, the easier it is to be taken in by their very, very convincing pastiche of true intelligence.

As this Reuters study shows, despite the apparent progress of the last couple of years, there are still fundamental challenges – ones that are unlikely ever to be fully overcome with this form of AI. (Which is also why LLMs weren’t even classified as AI until very recently…)

The good news? With GenAI’s limitations becoming more widely appreciated, this could ultimately be a good thing for news orgs – because why go to an unreliable intermediary when you can go direct to the journalistic source?

Journalistic scepticism and fundamental critical thinking skills are becoming more important than ever.