The return of the Rise of the Robots

This, on the resurgence of the Rise of the Robots fears about the threat of widespread AI job losses, gets some of the way to articulating the niggling issues I have with this apocalyptic narrative:

Even if you do believe the technology has got or can get good enough to replace workers at scale, the economics simply don’t make sense.

Of course, we’ve spent the last two decades witnessing many, many things that made no economic sense yet that happened anyway thanks to a combination of complacency, willful ignorance, ideology, bloody-mindedness, and spite. Just because something makes no economic sense doesn’t mean it won’t happen.

But despite non-AI industry stocks having been hammered over the last couple of weeks, think what needs to happen to enable this AI revolution. Most developed nations had energy and clean water supply challenges even before factoring in a data centre building boom. We still have a deep reliance on rare earth metals for the hardware that the AI needs to function (the clue’s in the name).

What happens to prices when demand surges to unprecedented levels and supply struggles to keep up? And how does that change the balance sheet projections when deciding whether to replace human workers with a grandiose form of a new SaaS subscription, whose monthly costs and reliability could shift at any moment?

Remember the $7 *trillion* Sam Altman was asking for to invest in infrastructure? That’s likely to be a substantial underestimate of the amount needed, given how much every industry upstream of the AI companies is already struggling to meet their projected needs.

How much can structured data help with GEO?

This is a nice, neat summary of the core constraints of current LLM based AI when it comes to SEO/GEO (based on a much longer, more technical piece, if you want the details).

Back when ChatGPT 3.5 came out, I was telling anyone who’d listen that it was going to disrupt search and publishing.

In early 2024, while at PwC, I started pitching new content formats to address this – intended to help capture whatever the GenAI equivalent of search ranking was going to be: “GEO” before that label stuck (I was calling it AIO at the time).

My thinking then was based on what seemed to be a logical, structured approach – similar to the “query fan out” advocates you’ll see in the “GEO” space today. (Basically label the hell out of your content, anticipate and answer the questions your target audience is likely to ask, as that structure should help the AI understand the context more easily, and so encourage it to pull from your page rather than someone else’s. Effectively a slightly deeper version of an old school Q&A or FAQ piece…)

But as I dug deeper it soon became clear that the challenge with LLM-based GenAI (from a model visibility perspective) wasn’t to do with clarifying the intended meaning of the information you want the model to ingest and regurgitate, as I first thought. (“These things can process unstructured data, but they’ll process *structured* data easier – so let’s structure it for them.”)

Instead it’s that these systems – despite being called Large *Language* Models – don’t actually understand language, or context. “Logic” to them is a meaningless concept; not only that, they have no concept of what a concept even is.



Tokens aren’t words, and don’t have meaning independently – they only appear to have meaning when combined into words.

Tokens create the illusion of being words (and of having meaning) because of the probabilistic nature of these tools when you interact with them through language as the system interface. That environment keeps them working within the rules of language, so they can produce output that makes sense – even if they don’t “understand” what they’re saying.

But URLs aren’t language, and don’t have linguistic rules or any consistency from site to site in terms of information architecture. Every site’s URL structure is similar, but different.

And as LLMs don’t really understand structure (except as recognisable, predictable patterns), accurately recalling and attributing URLs is a significant challenge for current LLM-based GenAI tools.



This is a structural challenge, baked into the very nature of these models. Despite what many GEO “experts” are now claiming, if your goal is to generate links and traffic from GenAI results, it’s not going to be an easy one to engineer if you’re working from outside that system.

It may be possible to tweak model outputs to improve this and increase URL attribution accuracy, but a) it won’t remove the underlying structural constraints, and b) what would be the incentive for the GenAI companies to do this?
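The token/URL mismatch above is easy to make concrete. Below is a toy greedy longest-match subword tokenizer – a deliberately simplified stand-in for the learned BPE tokenizers real LLMs use, with a vocabulary invented purely for illustration – showing how two URLs pointing at equivalent content split into different, individually meaningless fragments:

```python
# Toy greedy longest-match subword tokenizer. This is a simplified
# stand-in for the learned BPE tokenizers real LLMs use; the vocabulary
# below is invented purely for illustration.
VOCAB = {"www", ".", "com", "/", "-", "ex", "ample",
         "blog", "article", "s", "my", "post"}

def tokenize(text):
    """Split text into the longest vocabulary entries, left to right."""
    tokens, i = [], 0
    while i < len(text):
        for j in range(len(text), i, -1):   # try the longest match first
            if text[i:j] in VOCAB:
                tokens.append(text[i:j])
                i = j
                break
        else:
            tokens.append(text[i])          # unknown single character
            i += 1
    return tokens

# Two URLs for equivalent content tokenize into different fragments:
# "example" becomes "ex" + "ample"; "articles" becomes "article" + "s".
# The pieces carry no meaning individually, and nothing forces two
# sites' URL structures to share any pattern at all.
print(tokenize("www.example.com/blog/my-post"))
print(tokenize("www.example.com/articles/my-post"))
```

A real tokenizer’s splits are even less intuitive than this sketch, which is part of why getting a model to reproduce a URL character-perfectly – let alone attribute it to the right source – is so hard to engineer from outside the system.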

The dust has yet to settle on this one.

AI intensifies work, rather than reducing it

This feels *very* familiar with GenAI:

“What looks like higher productivity in the short run can mask silent workload creep and growing cognitive strain as employees juggle multiple AI-enabled workflows…

“Over time, overwork can impair judgment, increase the likelihood of errors, and make it harder for organizations to distinguish genuine productivity gains from unsustainable intensity.”

As so often, it’s too early to say what the true impact of GenAI will be on the workforce – see other recent studies suggesting that productivity gains may (so far!) be overstated or marginal – but if it leads to people doing more work at unsustainable rates, it would be a strange irony if the fears about job losses ultimately proved unfounded. Could GenAI end up pushing organisations to need more people, not fewer?

(Ever the optimist, me!)

On GenAI filmmaking

“You don’t know if you’re gonna get what you want on the first take or the 12th take or the 40th take”

This is GenAI’s current biggest challenge: It’s still being sold as primarily an efficiency tool – do more, faster!

In practice, as most who’ve played with it have found, it’s only faster if good enough is good enough. If you’re seeking excellence, it can help you to improve and refine what you’re doing – but not at speed.

The time / cost / quality pyramid persists, despite what we were all hoping.

What GenAI *is* allowing is for more people to try things that previously they’d never have been able to do – like code, write better, or create video or imagery.

But what this fascinating piece shows is that even genuine experts with a desire to experiment and push the boundaries can struggle to get genuinely excellent results – and that human + machine + time + iteration + patience remains (for now) the only way to get beyond good enough.

On Moltbook, AI Agents, and hype

This piece about sums up my feelings on Moltbook:

“These are nondeterministic, unpredictable systems that are now receiving inputs and context from other such systems… From a security perspective, it’s an absolute nightmare.”

The whole exercise initially struck me as a fun enough probabilistic parlour trick – similar to the entertaining “Infinite conversation” site with bots based on Werner Herzog and Slavoj Žižek from a couple of years back. There’s no true *intelligence* here, just chatbots slotting into established tropes for online forums, including creating their own memes and complaining about privacy and the mods (here, “the humans”).

So far so unsurprising – just as it’s unsurprising that some people who should know better have decided to read meaning and understanding into these interactions. (Hell, some of the stuff robot Werner Herzog came up with could also sound profound – it’s all in the voice…)

But what *is* new is the naiveté of some early adopters who’ve entrusted incredibly sensitive personal information and provided ridiculous amounts of access to AI agents whose programming is not deterministic and which are now able to interact with other agents.

The tech may be impressive – these agents are able to *do* more than I was expecting by this stage – but the potential for compound risk is insane. No sensible organisation would let a system like this anywhere near its operations until it’s possible to put far more robust constraints in place.

And so, just as with gambling, the question with GenAI systems seems increasingly to be all about personal and organisational risk tolerance.

My risk tolerance for this kind of thing is low, because the potential payoff – a bit of enhanced productivity? – is similarly low. If you’re really so time poor that you’re willing to take this gamble, then you need to rethink your priorities.

Best practice vs expertise

This. My biggest data lessons from 25 years in digital publishing / marketing to add to the efficiency/effectiveness debate:

1) There’s an important distinction between being data-driven and data-informed; more organisations need to lean towards the latter, because…

2) No numbers mean anything without context – almost everything measurable needs multiple other datapoints, timescales, and points of comparison to have any meaning

3) Most data tracked by marketing departments are vanity metrics with almost zero long-term value for the business as a whole

4) Pick the wrong KPIs (pageviews being the most obvious, revenue growth perhaps the least) and you’re more likely to harm the business than help it by focusing on improving the *indicator* rather than business-wide performance, because…

5) Almost every metric can be gamed or significantly impacted by outliers or picking the wrong points of comparison, but…

6) Not enough people check to see if this is what’s happening, especially if the results are looking good

7) Equally, just because you *think* you can measure something doesn’t mean this is what you’re actually measuring, or that it’s helpful to do so, but…

8) Tables of numbers and nice pretty charts (especially with trend lines) are addictive, while cross-referencing multiple metrics and trying to make sense of it all is difficult – not helped by most of the tools available being deeply unintuitive, so…

9) Most laypeople don’t bother asking about the methodology for fear of looking stupid, and just nod along, so…

10) Keep on questioning the data – who compiled it, how, when, where, why, and what could we be missing? Data interpretation is as much art as science – the more we question what we’re seeing, the more likely it is someone will have one of those sparks of inspiration that help you find something genuinely meaningful
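Point 5 is easy to demonstrate with invented numbers: a single outlier can drag a headline average far from anything a typical reader experienced, which is why a metric with no distribution check or point of comparison is so easy to misread.

```python
# Hypothetical time-on-page figures (in seconds) for one article:
# eight normal reads, plus one reader who left the tab open for 8 hours.
times = [42, 38, 55, 47, 40, 51, 44, 39, 28800]

mean = sum(times) / len(times)           # dragged to ~54 minutes
median = sorted(times)[len(times) // 2]  # still reflects a typical read

print(f"mean:   {mean:.0f}s")   # mean:   3240s
print(f"median: {median}s")     # median: 44s
```

Strip the outlier (or look at the median) and “average engagement time” tells a completely different story – which is the sense in which a dashboard number with no context means nothing.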



What have I missed?

What have I got wrong?

GenAI continues to make major errors in news summaries

“45% of the AI responses studied contained at least one significant issue, with 81% having some form of problem”

I’m a big fan of using GenAI to assist in research, ideation, and even sense-checking – asking it to help me with my own critical and lateral thinking. I use these tools multiple times a day, and am constantly encouraging the journalists I work with at Today Digital to use GenAI more to help them boost both their productivity and the impact of their work.

But it’s *vital* to keep fully aware of GenAI’s limitations when using it for anything where facts are important.

We can remind ourselves as often as we like that LLMs have no true understanding, no real intelligence, no concept of what a “fact” actually is – but the more we use them, the easier it is to be taken in by their very, very convincing pastiche of true intelligence.

As this Reuters study shows, despite the apparent progress of the last couple of years, there are still fundamental challenges – which are unlikely to ever be fully overcome using this form of AI. (Which is why LLMs weren’t even classified as AI until very recently…)

The good news? With GenAI’s limitations increasingly becoming more widely appreciated, this could ultimately be a good thing for news orgs – because why go to an unreliable intermediary when you can go direct to the journalistic source?

Journalistic scepticism and fundamental critical thinking skills are becoming more important than ever.

On GenAI writing styles – again…

The rhythms and tone of AI-assisted writing are now pretty much endemic on LinkedIn

And I get why: GenAI copy is generally pretty tight, pretty focused, and flows pretty well. Certainly better than most non-professional writers can manage on their own.

Hell, it sounds annoyingly like my own natural writing style, honed over years of practice…

But people I’ve known for years are starting to no longer sound like themselves.

Their words are too polished, too slick, too much like those an American social media copywriter would use, no matter where they’re from.

None of this post was written with AI.

And despite (because of?) being a professional writer/editor, it took me over half an hour of questioning myself, rewriting, starting again, looking for the right phrase. Doing this on my phone, my thumbs now ache and the little finger on my right hand, which I always use to support the weight while writing, is begging for a break.

With GenAI I could have “written” this in a fraction of the time, and it would have been tighter, easier to follow.

But it wouldn’t have been me – and I still (naively) want my social media interactions to be authentically human to human.

(Of course, the AI version would probably have ended up getting more engagement, because this post – as well as going out on a Sunday morning when no one’s looking, and without an image – is now far too long for most people, or the LinkedIn algorithm, to give it much attention. Hey ho!)

The Tragedy of the Commons redux

The Tragedy of the Commons is coming for the internet:

Google’s AI Is Destroying Search, the Internet, and Your Brain

404 Media, 23 July 2025

The GenAI equivalent of Googlebombing (remember that?) was one of my first concerns when pondering the likely impact of GenAI search, way back when ChatGPT 3.5 came out and the prospect started looking real.

This kind of thing is, sadly, inevitable. And while Google’s got very solid experience of getting around attempts to manipulate its algorithms, it doesn’t have a great track record of releasing AI products that can distinguish facts from confabulations (remember both the Bard and the Gemini launches?).

The other inevitability is that this is also going to lead to more scammy marketing techniques. We’re going to be inundated with yet more of those snake oil salespeople popping up to promise brands results in GenAI, just as they used to in the early days of SEO – fuelled by similar tactics of vast networks of websites all interlinking to each other to create the impression of authority.

Only now, rather than using underpaid humans in content farms, they’ll be using GenAI to spit out infinite copy and infinite webpages, poisoning the GenAI well for everyone in pursuit of short-term profits.

The GenAI default style

[Image: a pixelated GenAI image of two robots talking while other robots look on]

The default writing style of GenAI is becoming ever more prevalent on LinkedIn, both in posts and comments.

This GenAI standard copy has a rhythm that, because it’s becoming so common, is becoming increasingly noticeable.

Sometimes it’s really very obvious we’ve got bots talking to bots – especially on those AI-generated posts where LinkedIn tries to algorithmically flatter us by pretending we’re one of a select few experts invited to respond to a question.

Top tip: If you’re using LinkedIn to build a personal / professional brand, you really need a personality – a style or tone (and preferably ideas) of your own. If you sound the same as everyone else, you fade into the background noise.

So while it may be tempting to hit the “Rewrite with AI” button, or just paste a question into your Chatbot of choice, my advice: Don’t.

Or, at least, don’t without giving it some thought.

There are lots of good reasons to use AI to help with your writing – it’s an annoyingly good editor when used carefully, and can be a superb help for people working in their second language, or with neurodiverse needs. It can be helpful to spot ways to tighten arguments, and in suggesting additional points. But like any tool, it needs a bit of practice and skill to use well.

But given that this platform is all about showing off professional skills, don’t use the default – that’s like turning up to a client presentation with a PowerPoint with no formatting.

Put a bit of effort in, and maybe you’ll get read and responded to by people, not just bots. And isn’t that the point of *social* media?