So it turns out Google doesn’t like “commodity content”, and rewards content that’s original and interesting in search and AI results.

Give it half a second’s thought and this was always the direction Google would take with its AI search.

Google’s whole thing was helping us find the valuable parts of the internet.

But when something – in this case content – can be mass produced, its perceived value goes down.

If mass-produced AI content takes over the web, then more genuinely original content becomes harder to find – and (relative) scarcity or genuine quality tends to create value in a sea of mass-produced “good enough” products.

(This is why a tailored woollen suit costs so much more than one made from synthetic materials and stitched in a sweatshop – the latter may be functional, but it tends to fall apart rapidly, and can also make you look bad if you pretend no one can tell the difference.)

Where Google’s value lies

If Google can help us find that more valuable original, insightful, *human* content, Google continues to have value for us.

This is why their focus on E-E-A-T – Experience, Expertise, Authoritativeness, and Trustworthiness – made sense in the age of search, and it makes even more sense in the age of GenAI, where awareness of the questionable trustworthiness of AI output is increasingly front of mind.

They were never going to take the arrival of GenAI lying down, and they were always going to come back to finding ways to cut through the mass of average material out there to help us find the really good stuff. That’s their whole thing.

What makes a sensible AI strategy?

It’s also notable that while they’ve been making a lot of effort to make Gemini and the rest of their AI suite substantially better over the last couple of years (after a poor start with Bard and early AI search results), Google’s most distinctive AI product – NotebookLM – focused on providing verifiable citations from clear sources, rather than just making stuff up.

Google’s strategic priority for its AI efforts has been clear for years, even if they’ve had some wobbles along the way: focus on utility. Meanwhile, OpenAI’s approach has largely consisted of throwing features around the place to see what sticks, and rapidly ditching what doesn’t.

The launch of ChatGPT (then running on GPT-3.5) may have led Google to scramble to catch up, but they’ve not deviated from their core objective. They’re not moving fast and breaking things, but moving deliberately and adapting their core offering to fit the new environment.

It’s something quite a few other companies could learn from.