Interesting, thought-provoking and convincing about what needs to be done, while being realistic about how likely it is such vast changes to how the world works will come about. Yet also packed with examples of ways in which such changes are already taking place, giving some room for optimism.
A good polemic, in other words – and made even better by continually citing sources and experts from non-traditional backgrounds, neither ostentatiously nor explicitly. It made me realise how few economics and politics books regularly cite women or people from non-Western countries – which may well be part of the reason why our economics and politics are so broken.
The only real criticism: the book is well enough written in terms of individual sentences and paragraphs, but lacks the variety of tone and pacing to really hold the attention, and the author has a tendency both to repeat herself and to extend metaphors well beyond the point where they have impact.
I’m vaguely pondering starting up a newsletter/podcast/etc exploring media/marketing received wisdom and groupthink…
The Super Bowl, Davos, and ChatGPT’s announcement that it’s running ads mean media/marketing LinkedIn will be swamped with lukewarm hot takes this week.
This industry herd mentality is increasingly fascinating to me – the need to comment on the same things everyone else is talking about is rarely “thought leadership”, and is very far from the old advertising mantra “When the world zigs, zag”.
I’ve spent a decade in marketing, and more than double that in publishing. In all that time I’ve encountered few genuinely convincing new ideas – even during major platform shifts. And usually when I have, the evidence for “best practice” has lacked much substance – or blatantly originated in some tech company’s hype (as with the first, second, and third pivots to video, and certainly with the “everything needs to be optimised for Alexa now” fad).
It feels like we’ve now all got so used to running with the latest fad for fear of missing out or – worse! – looking out of touch, we’ve lost all sense of critical thinking, or desire to question industry norms.
But is this something in which enough people would be sufficiently interested to make it worthwhile? And will it cut through the algorithm – another idea we’ve all unthinkingly adopted?
This is a strange book. Originally written to accompany a BBC TV series back in 1981, it has since been extensively revised to reflect the (substantial) changes in understanding of this long period – covering over a thousand years, from Boudicca to the Norman Conquest.
That period alone is enough to raise an eyebrow. What the hell does Wood mean by “the Dark Ages”? And why, if he’s in search of them, does he focus purely on England? Equally, why does he choose to explore them by focusing on a series of individuals?
In part, the thinking seems to be that by centering each chapter on a named individual, you can explore the sources to understand how much we can really know in an era of fragmentary record keeping and near constant conflict. This is a nice enough idea – but it’s been done better elsewhere, especially in the last decade or so, as archaeology and history have merged and a glut of good books have come out on the Vikings and Anglo-Saxons in particular.
Equally, given the use of the term “Dark Ages” – usually contrasted to the Greek/Roman Golden Age and the Renaissance – it’s strange the focus here is largely on politics and power rather than culture and learning and civilisation and society.
Not a bad book, certainly, but its episodic nature betrays its roots in television. It’s let down by the fact that there’s no clear connecting thread, nor a flowing narrative – something seemingly made worse by Wood’s laudable decision to add some new chapters about prominent women in this revised edition, to counter his early-80s patriarchal mindset and work in some more recent scholarship.
Nonetheless, Wood is a good writer, and this is engaging enough – it just feels a bit confused and incomplete.
This. My biggest data lessons from 25 years in digital publishing / marketing to add to the efficiency/effectiveness debate:
1) There’s an important distinction between being data-driven and data-informed; more organisations need to lean towards the latter, because…
2) No numbers mean anything without context – almost everything measurable needs multiple other datapoints, timescales, and points of comparison to have any meaning
3) Most data tracked by marketing departments are vanity metrics with almost zero long-term value for the business as a whole
4) Pick the wrong KPIs (pageviews being the most obvious, revenue growth perhaps the least) and you’re more likely to harm the business than help it, by focusing on improving the *indicator* rather than business-wide performance, because…
5) Almost every metric can be gamed or significantly impacted by outliers or picking the wrong points of comparison, but…
6) Not enough people check to see if this is what’s happening, especially if the results are looking good
7) Equally, just because you *think* you can measure something doesn’t mean this is what you’re actually measuring, or that it’s helpful to do so, but…
8) Tables of numbers and nice pretty charts (especially with trend lines) are addictive, while cross-referencing multiple metrics and trying to make sense of it all is difficult – not helped by most of the tools available being deeply unintuitive, so…
9) Most laypeople don’t bother asking about the methodology for fear of looking stupid, and just nod along, so…
10) Keep on questioning the data – who compiled it, how, when, where, why, and what could we be missing? Data interpretation is as much art as science – the more we question what we’re seeing, the more likely it is someone will have one of those sparks of inspiration that help you find something genuinely meaningful
At times I liked this a lot – a neat companion to Neal Stephenson’s Cryptonomicon as a novel about the birth of the computer age. It could equally work as a companion to Sebastian Mallaby’s non-fiction The Power Law, focused on the venture capitalists and somewhat unstable, potentially sociopathic tech bros who have built the modern tech industry into the morally suspect force that it is.
Effectively a montage rather than a narrative, with surprisingly little-known polymath genius John von Neumann and the various hugely influential ideas he had as its centre of gravity, it’s as wide-ranging as he was. This is the guy who co-created Game Theory (an approach many tech types seem to consciously adopt), helped develop not just the atomic bomb, but also the hydrogen bomb and concept of Mutually Assured Destruction – with its wonderfully appropriate acronym.
But he also came up with some of the initial concepts for artificial intelligence, notably the self-teaching, self-reproducing, self-improving von Neumann machines that he envisioned spreading through the universe long after his (and humankind’s) death.
It’s this that the book is really building to throughout: pretty much all modern AI systems are von Neumann machines – at least, to an extent.
This makes the book extremely timely and thought-provoking, despite being about someone who died 70 years ago.
How will these systems continue to evolve? Given von Neumann himself is, throughout, compared to the machines and systems he developed – his utterly alien way of thinking, his apparent disregard for his fellow humans, his neglect of his family, his apparent patronising contempt for people not as smart as he was – the suggestion that these alien intelligences are something to be wary and probably scared of starts coming through stronger and stronger.
This culminates in the final section: a detailed, blow-by-blow account of DeepMind’s 2016 victory over the world’s leading human Go player with their AlphaGo system, and of its significance.
Yet while it’s an impressive achievement, the book as a whole didn’t quite work for me. The different voices talking about their relationships and experiences with von Neumann, presented as if being interviewed, eventually all started to sound too similar. The opening and closing sections were clearly linked thematically, but the structure as a whole leaves the reader doing much of the work to connect the dots and get to the point the author’s making. A final coda to wrap it all up would, for me at least, have been appreciated.
Goodreads tells me I finished 74 books in 2025, some 35,000 pages. I almost made it to 75, but just ran out of time… Most were nonfiction, but mostly history, philosophy and science, so not exactly classic LinkedIn fodder.
Here are a few I’d definitely recommend for better navigating the world of business / work (in no particular order):
1) Alchemy, by Rory Sutherland – a useful corrective to the idea that logic and reason should drive strategy, and a timely reminder (in this age of GenAI probability-driven “thinking”) that it’s often necessary to go lateral to succeed. But Sutherland’s a marketer at heart – of *course* he’d say that…
2) The Art of Explanation, by Ros Atkins – a guide to more effective communication, borrowing from a couple of decades’ experience in journalism; a book many non-journalists could do with reading, and almost the opposite of Sutherland’s approach.
3) Economics, The User’s Guide, by Ha-Joon Chang – as the debate about AI bubbles and the future of the job market drags on, this is one of the very best overviews of the history and post-financial crisis state of economic thinking I’ve come across; thought-provoking and accessible via short, clear chapters. An excellent read.
4) The Corporation in the 21st Century, by John Kay – a slight cheat as I’ve got a couple of dozen pages to go, but this is an excellent companion to the previous one, providing a potted history of how we’ve got to where we are in the world of business organisations and ecosystems, and how it all seems to be changing. Again.
5) The Power Law, by Sebastian Mallaby – a deep dive into the history, mentality and working methods of the venture capitalists that have done so much to influence the tech industry and global economy over the last few decades. It helpfully shows that Elon Musk (among others) has been problematic for years…
—
Of course, all of these were written before the rise of GenAI and the advent of Trump 2, so who knows how helpful they’ll be in navigating 2026?
If you’re happy with platitudinous banality for your “thought leadership”, GenAI is great!
The trouble is, this isn’t just a GenAI issue.
Many (most?) brands have been spewing out generic nonsense with their content marketing for as long as content marketing has been a thing.
Because what GenAI content is very good at exposing is something that those of us who’ve been working in content marketing for a long time have known since forever: Coming up with genuinely original, compelling insights is *incredibly* hard.
Especially when the raw material most B2B marketers have to work with is the half-remembered received wisdom a distracted senior stakeholder has just tried to recall from their MBA days in response to a question about their business strategy that they’ve probably never even considered before.
And even more especially when these days many of those senior stakeholders are asking their PA to ask ChatGPT to come up with an answer for the question via email rather than speak with anyone.
If you want real insight that’s going to impress real experts, you need to put the work in, and give it some real thought. GenAI can help with this – I have endless conversations with various bots to refine my thinking across dozens of projects. But even that takes time. Often a hell of a lot of time.
Because even in the age of GenAI, it turns out the project management Time / Cost / Quality triangle still applies.
We were having the same arguments 20 years ago about blog content from actual humans.
The problem is not with how the sausage is made but, as Sturgeon’s Law states, that “Ninety percent of everything is crap”.
(Of course, on LinkedIn this quite simple – and surely obvious – statement led to lots of debate about the *ethics* of AI content rather than the quality. That’s a different matter altogether…)
“45% of the AI responses studied contained at least one significant issue, with 81% having some form of problem”
I’m a big fan of using GenAI to assist in research, ideation, and even sense-checking – asking it to help me with my own critical and lateral thinking. I use these tools multiple times a day, and am constantly encouraging the journalists I work with at Today Digital to use GenAI more, to help them boost both their productivity and the impact of their work.
But it’s *vital* to keep fully aware of GenAI’s limitations when using it for anything where facts are important.
We can remind ourselves as often as we like that LLMs have no true understanding, no real intelligence, no concept of what a “fact” actually is – but the more you use them, the easier it is to be taken in by their very, very convincing pastiche of true intelligence.
As this Reuters study shows, despite the apparent progress of the last couple of years, there are still fundamental challenges – ones unlikely to ever be fully overcome using this form of AI. (Which is why LLMs weren’t even classified as AI until very recently…)
The good news? With GenAI’s limitations increasingly becoming more widely appreciated, this could ultimately be a good thing for news orgs – because why go to an unreliable intermediary when you can go direct to the journalistic source?
Journalistic scepticism and fundamental critical thinking skills are becoming more important than ever.
The rhythms and tone of AI-assisted writing are now pretty much endemic on LinkedIn.
And I get why: GenAI copy is generally pretty tight, pretty focused, and flows pretty well. Certainly better than most non-professional writers can manage on their own.
Hell, it sounds annoyingly like my own natural writing style, honed over years of practice…
But people I’ve known for years are starting to no longer sound like themselves.
Their words are too polished, too slick, too much like those an American social media copywriter would use, no matter where they’re from.
None of this post was written with AI.
And despite (because of?) being a professional writer/editor, it took me over half an hour of questioning myself, rewriting, starting again, looking for the right phrase. Doing this on my phone, my thumbs now ache, and the little finger on my right hand, which I always use to support the weight while writing, is begging for a break.
With GenAI I could have “written” this in a fraction of the time, and it would have been tighter, easier to follow.
But it wouldn’t have been me – and I still (naively) want my social media interactions to be authentically human to human.
(Of course, the AI version would probably have ended up getting more engagement, because this post – as well as going out on a Sunday morning when no one’s looking, and without an image – is now far too long for most people, or the LinkedIn algorithm, to give it much attention. Hey ho!)
To help shape my thinking, I write essays and shorter notes examining the ideas and narratives that shape media, marketing, technology and culture.
A core focus: the way context and assumptions can radically change how ideas are interpreted. Much of modern business, marketing, and media thinking is built on other people’s frameworks, models, theories, and received wisdom. This can help clarify complex problems – but as ideas travel between disciplines and organisations, they are often simplified, misapplied, or treated as universal truths. I’m digging into these across the following categories – the first being a catch-all for shorter thoughts: