The growing social media advertising boycott

The most surprising thing about this growing move away from social media advertising is that it has taken this long for brands to realise that they can’t control the context in which their adverts appear – and that context can change the perception of their messaging.

The real lesson here is not that social media needs stricter controls (an ethical debate), it’s that in the classic Paid/Earned/Owned model, the *only* part brands can fully control is Owned. Many are only now beginning to wake up to the fact that their social accounts are not Owned platforms.

All this should have been obvious for years – every fresh story about an algorithm change destroying business models that were relying on social audiences has been an alarm bell. But perhaps now brands are finally realising that social isn’t as straightforward as they’ve long seemed to believe.

What does this mean for brands?

1) They need more robust, nuanced social strategies. Chucking money at paid posts and adverts doesn’t cut it. It never has.

2) The quality of their genuinely Owned platforms is becoming more important than ever. These are the only places they have complete control over the context and the message.

And it’s also notable that many brands joining the boycott have solid Owned strategies in place…

Google’s May 2020 update makes quality content even more important

“I haven’t witnessed an update as widespread as this one since 2003,” says the author of this piece. Some sites are reporting 90% traffic drops, with even the likes of Spotify and LinkedIn apparently impacted. This is big.

What exactly has changed is still unclear – a few days on results are still fluctuating too much for detailed analysis – but one thing does seem certain: “there are multiple reports of thin content losing positions”.

This has been the trend with Google for a while now, with the firm recommending “focusing on ensuring you’re offering the best content you can. That’s what our algorithms seek to reward.”

What *is* good content in this context? After all, “quality” is quite a subjective concept.

Well, algorithms aren’t people, but Google’s long been aiming to make their code more intelligent, and better able to understand context and likely relevance. Keyword stuffing has been penalised for years, as have dodgy link-building efforts. Instead, Google is aiming for near-human levels of appreciation of nuance.

Helpfully, though, Google has also put out a list of questions to help you understand if the content of your site is likely to be seen as quality in the eyes of the all-powerful algorithm:

  1. Does the content provide original information, reporting, research or analysis?
  2. Does the content provide a substantial, complete or comprehensive description of the topic?
  3. Does the content provide insightful analysis or interesting information that is beyond obvious?
  4. If the content draws on other sources, does it avoid simply copying or rewriting those sources and instead provide substantial additional value and originality?
  5. Does the headline and/or page title provide a descriptive, helpful summary of the content?
  6. Does the headline and/or page title avoid being exaggerating or shocking in nature?

All good questions, and all from Google’s own blog.
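
Purely as an illustration (and very much not a Google tool – the structure, names and 80% threshold below are my own invention), that list could be turned into a simple self-audit checklist for a content team, something like this:

```typescript
// Hypothetical self-audit helper: the questions paraphrase Google's published
// list above, but this structure and scoring are illustrative, not a Google tool.

const QUALITY_QUESTIONS = [
  "Original information, reporting, research or analysis?",
  "Substantial, complete or comprehensive description of the topic?",
  "Insightful analysis or interesting information beyond the obvious?",
  "Substantial added value and originality over any sources used?",
  "Descriptive, helpful headline and page title?",
  "Headline and title free of exaggeration or shock tactics?",
];

interface PageAudit {
  url: string;
  answers: boolean[]; // one answer per question, in order
}

// Share of questions answered "yes" for a page.
function qualityScore(audit: PageAudit): number {
  return audit.answers.filter(Boolean).length / QUALITY_QUESTIONS.length;
}

// Example: flag pages under an arbitrary 80% threshold for editorial rework.
const pages: PageAudit[] = [
  { url: "/guides/example", answers: [true, true, false, true, false, false] },
];
console.log(pages.filter((p) => qualityScore(p) < 0.8).map((p) => p.url));
```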

Review: Who Owns the Future?, by Jaron Lanier

4/5 stars

Impressively prescient, considering it was published five years ago but is about technology – something that’s been moving madly fast during that timeframe. The Facebook / Cambridge Analytica scandal was effectively predicted; many of the debates still going on in business and government today – about things like the gig economy, autonomous vehicles and more – were anticipated and summarised before they’d really started happening. The impression is that Lanier had seen all this coming decades ago – and he probably did.

As such, lots here to spark thought, lots to be impressed with, and it’s hard to disagree with the central thesis that the informational economics of the internet age are fundamentally broken. But at the same time, the only alternative to the current way of doing things – micropayments for data exchange/generation – still seems insanely impractical, even employing a technology like blockchain (something similar to which Lanier kinda proposes here).

So while Lanier ends on an optimistic note, the book left me more pessimistic than ever about our tech-driven future.

What I’ve been working on for the past year

Here it is. The new, multiplatform MSN.

The new MSN - customisable

Engadget has a solid overview piece.

The content proposition is fairly straightforward – a customisable mix of useful tools and the best content from many of the world’s biggest publishing brands across a bunch of key topic areas or verticals, curated by teams of in-market editors.

The aim on a technical level is actually the most interesting part of it – we’ve been developing a cloud-hosted CMS that enables single-publish across all devices and platforms, for both web and apps, running across 55 markets in 27 languages, with a coherent look and feel no matter your screen size or operating system. That’s properly ambitious.
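
To make the single-publish idea a little more concrete, here’s a minimal, entirely hypothetical sketch of what a presentation-neutral article record might look like – store the content once in structured form, tag it with market and language, and let each platform’s front end decide how to render it. None of these field names reflect the actual MSN schema:

```typescript
// Hypothetical, presentation-neutral article record for a single-publish CMS.
// Field names and structure are illustrative only, not the real MSN schema.

type Market = "en-gb" | "en-us" | "fr-fr"; // illustrative – 55 markets / 27 languages in reality

interface ArticleBlock {
  kind: "paragraph" | "image" | "video" | "embed";
  text?: string;    // for paragraphs
  assetId?: string; // for media, resolved per platform/resolution at render time
}

interface Article {
  id: string;
  market: Market;
  language: string;       // ISO 639-1, e.g. "en"
  vertical: string;       // e.g. "news", "sport", "lifestyle"
  headline: string;
  publishedAt: string;    // ISO 8601
  blocks: ArticleBlock[]; // ordered, renderer-agnostic content
}

// Each client (web, iOS, Android, etc.) maps the same record to its own layout.
function renderHeadlinePlainText(article: Article): string {
  return `${article.headline} (${article.market})`;
}
```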

Most of my input has been procedural (improving multimarket and multiplatform publishing processes) and hidden in the back end (I was part of the CMS superuser group that’s been working on back-end UX and workflow). I’ve not had as much involvement in the front-end design, architecture, or overall content strategy as I’d like, but still – a most definite improvement on one of the web’s longest-running major publishers (20 years old this year, and still doing a good 22 billion pageviews every month).

Please keep Twitter pure

The filtered feeds of Facebook (and LinkedIn) are what I dislike most about them; the unfiltered, most-recent-first approach of Twitter is what I love about it. So the possibility that Twitter’s going down the algorithmic-filter route worries me – and not just because of recent concerns voiced over how algorithms can affect net neutrality and news reporting.

I very much hope Twitter at least retains the option of turning on the firehose, though I fully get the need to tame the chaos with some kind of algo or filter to pull in new users. Not everyone can get to grips with lists and Tweetdeck – too confusing for the newcomer.

Now don’t get me wrong: algorithmic filtering has its place. One of my favourite apps is Zite, and I was an early adopter of StumbleUpon (well over a decade ago) – precisely because of their ability to get to know my interests and serve me up interesting content from sources I’d usually not discover by myself. For Facebook to offer up this kind of service, with its vast databases of its users’ Likes, makes perfect sense (though I’d still prefer a raw feed, or category feeds, so I can split off news about the world from news about my actual friends – a new baby or a wedding is not the same as a terrorist attack).

This is why I love Twitter – it is raw, unfiltered. And at 140 characters a pop, it’s (more or less) manageable. Especially if these old stats are still accurate, suggesting the majority of Twitter users only follow around 50 other accounts. If you end up following a few hundred, you’re already a power user, and likely know how to order them via lists. If you end up following a few thousand, then frankly you no longer care if you miss a few things.

Could Twitter be improved with a bit of algo? For sure. Why am I only ever shown three related accounts when I follow a new one? Why isn’t MagicRecs built in?

But the fact is we’ve already got this option on Twitter – it’s called the Discover tab. And I never use it, because it somehow manages to feel even more random than the raw feed. The problem isn’t a lack of algorithms, it’s a lack of intelligent algorithms, intelligently integrated.

Algorithms and the news agenda

Well worth a read on the Ferguson riots, and how different social media sites (notably Twitter vs Facebook) served up news about them:

“Now, we expect documentation, live-feeds, streaming video, real time Tweets… [Ferguson] unfolded in real time on my social media feed which was pretty soon taken over by the topic…

And then I switched to the non-net-neutral Internet to see what was up. I mostly have a similar composition of friends on Facebook as I do on Twitter.

Nada, zip, nada.

This morning, though, my Facebook feed is also very heavily dominated by discussion of Ferguson. Many of those posts seem to have been written last night, but I didn’t see them then. Overnight, “edgerank” – or whatever Facebook’s filtering algorithm is called now – seems to have bubbled them up, probably as people engaged them more.

But I wonder: what if Ferguson had started to bubble, but there was no Twitter to catch on nationally? Would it ever make it through the algorithmic filtering on Facebook? Maybe, but with no transparency to the decisions, I cannot be sure.

Would Ferguson be buried in algorithmic censorship?

Would we even have a chance to see her?

This isn’t about Facebook per se—maybe it will do a good job, maybe not—but the fact that algorithmic filtering, as a layer, controls what you see on the Internet. Net neutrality (or lack thereof) will be yet another layer determining this. This will come on top of existing inequalities in attention, coverage and control.”

It’s a continual worry – how to ensure we see what’s important? Though, of course, the concept is nothing new – the algorithm is just an editor or an editorial policy in a different form. It’s something I’ve written about before as it relates to the EU, focusing on a BBC editorial policy that fails to cover EU affairs in mainstream news most of the time, and then serves up extremes.

This kind of human editorial determination of the appropriate news agenda, based on perceived audience interests, is arguably not so very different from a Facebook algorithm determining what is important based on how it interprets user interests. If anything, there’s a strong argument to be made that Facebook knows its audience better than any editor on any publication or TV show ever has, thanks to the sheer quantity of data it possesses on its userbase.

But then what of *importance* – who determines this? Who overrides the algorithmic or standard editorial policy assumption? Is there a chance that an important story will get buried because a bit of code doesn’t see it as significant? Yes. But the same is true of any number of important news stories that human editors don’t pick up on, or choose to bury on page 23 because they don’t think their readers will be that interested.

As so often, the web may be a bit different, but there’s nothing that *new* here.

Web writing, hate reading, and the decline of quality

Nothing new, but this is worth a read on web writing and hate-reading – that old trick of being as controversial as possible in order to provoke an extreme response, purely because extremes get more attention. In a pageview-driven business model, controversy is seen as good because, on the metrics, it’s the controversial stuff that’s driving engagement.

This infantile attitude of provocation to get attention is increasingly being combined with ream upon ream of cheap content, because the more content you’ve got, the more potential PVs you can attract. We end up with the most depressing (and false) equation of online publishing:

Cheap content + Controversy = Clicks = Cash

It’s an attitude that’s lazy *and* massively short-termist – over the long term, quality can and should trump quantity. But even if it doesn’t, cheap, crappy content is a turn-off for audiences. The more sites come to rely on hastily-produced, poorly-checked copy, or lazy semi-plagiarisms of things that desperate teams of poorly-paid hacks with deadlines and quotas to hit have found elsewhere, the less distinctive those sites become, and the fewer returning visitors they’ll get. As that linked article puts it:

“With a business model based on a ton of cheap content, Web publishers can rely too heavily on acid-reflux-style aggregation, in which young writers destroy the savor of interesting stories and an interesting world by constantly regurgitating the news with added bile.”

There’s also an interesting point made by John Waters in the Irish Times (now behind a paywall), on the impact of comment sections under online articles:

“Because everything written specifically for online consumption is written in the expectation of addressing a hostile community, the writing process demands, as a prerequisite, either a defensive or antagonistic demeanor.”

Having learned my online publishing trade in the realm of message boards, chatrooms and blogs, I’m incredibly aware of the vast levels of bile that exist in comment sections. But it doesn’t have to be this way. With careful community management, it’s perfectly possible to build online communities that are supportive, friendly, and constructive, rather than the supposed default of objectionable and offensive. Check out the likes of b3ta, imgur and Metafilter for some prime examples of sites with vast *positive* communities of commenters. And then contrast those with the comments sections of pretty much any national newspaper site – packed with trolls and maniacs.

It doesn’t have to be this way.

Understanding how people interact with your content: the code

Upworthy have released the code they use to track user engagement, with a nice bit of methodology explaining what they’re tracking and why they care:

“In the age of ever-present social media, our collective attentions have never been spread thinner. According to Facebook, each user has the potential to be served 1,500 stories in their newsfeed each time. For a heavy user, that number could be as much as 15,000. In this climate, how do you get people to pay attention? And, more importantly, how do you know they’re actually engaged?

“Clicks and pageviews, long the industry standards, are drastically ill-equipped for the job. Even the share isn’t a surefire measure that the user has spent any time engaging with the content itself. It’s in everyone’s interest — from publishers to readers to advertisers — to move to a metric that more fully measures attention spent consuming content. In other words, the best way to answer the question is to measure what happens between the click and the share. Enter Attention Minutes.”
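
Upworthy’s released code is the real reference; the snippet below is just a minimal sketch of the underlying idea, assuming a browser context – count a second of attention only while the tab is visible and the reader has interacted recently, then report the total. The endpoint and thresholds are placeholders:

```typescript
// Minimal browser-side sketch of the "attention minutes" idea: count time only
// while the page is visible and the reader has recently interacted with it.
// This illustrates the concept; it is not Upworthy's released code.

const IDLE_LIMIT_MS = 5_000; // stop counting after 5s with no interaction
let lastActivity = Date.now();
let attentionSeconds = 0;

// Any of these events counts as the reader being "present".
for (const evt of ["mousemove", "scroll", "keydown", "touchstart", "click"]) {
  window.addEventListener(evt, () => { lastActivity = Date.now(); }, { passive: true });
}

// Tick once a second; only accumulate when the tab is visible and recently active.
setInterval(() => {
  const visible = document.visibilityState === "visible";
  const active = Date.now() - lastActivity < IDLE_LIMIT_MS;
  if (visible && active) attentionSeconds += 1;
}, 1_000);

// Periodically report attention minutes to a (purely illustrative) analytics endpoint.
setInterval(() => {
  navigator.sendBeacon("/analytics/attention", JSON.stringify({
    url: location.pathname,
    attentionMinutes: attentionSeconds / 60,
  }));
}, 30_000);
```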

News as procrastination in the age of mobile first

“Know your enemy” – the first rule of everything competitive. But we’re mostly doing it wrong – speaking with my MSN hat on, it’s all too easy to fall into the trap of assuming that our main competition is Yahoo, Buzzfeed or the Huffington Post, and to base strategy on what they are or aren’t doing in order to get ahead of them.

But if you’re in publishing, no matter what kind, your competition isn’t other publishers – it’s anything and everything that competes for your audience’s time and attention. And this is only getting more obvious for anyone in the online world now that mobile is one of the key entrypoints for news.

What do we use mobile phones for? Communication, obviously. Information, naturally. But mostly? Procrastination. Have a few minutes to kill waiting for a bus, for someone to turn up for a meeting, for the queue at the checkout to go down, and what are we all doing? Pissing about on our phones. Some read ebooks, some play games, some do work, some watch videos, some learn a language, some catch up on the news and latest gossip, look for lifestyle tips, browse recipes, check holiday destinations – all the other stuff that broad-catchment websites like the one I work on offer up to attract readers.

Even news itself is as much about wasting time as it is about getting information – because, let’s face it, most news doesn’t directly affect most people. Even the most horrific news – terrorist attacks, mass shootings, kidnappings, wars and natural disasters – only directly affect the tiniest fraction of our audiences. They are effectively entertainment to readers – macabre entertainment, perhaps, but entertainment nonetheless. Diversions from their daily lives. Time-wasters.

It’s obvious once you realise it, but it still seems strange to hear the managing editor of the Financial Times name Candy Crush as the paper’s main competitor.

So we as news publishers need to think about how we make *our* product the most attractive time-waster:
– Is it snackable enough?
– Is it engaging enough?
– Will it keep me coming back for another hit like those addictive game apps?
– Do I get any rewards or points or prizes?
– Does it give me things I can share with my friends to show off or entertain them?
– Is it respectable enough that I wouldn’t mind the people behind me on the bus seeing what I’m looking at?
– Is it always fresh?
– Does it have depth to dig deeper if I want to, or does it simply finish and leave me with nothing to do?
– How long will it entertain me for?

These questions are the same for games as they are for media. As everyone carries on catching up with the concept of mobile first, we need to keep reminding ourselves that the questions are the same no matter what kind of mobile product you’re creating.

The power of Google

Another Google algo update and, as ever, original, interesting, useful content is key to SEO success.

The hit eBay’s taken is interesting, though… An 80% drop in Google traffic could be a business-killer for anyone less big. And their content surely *is* original and relevant, what with the products changing all the time?

Possibly another impact from the authorship/Google+ changes the Google guys have introduced? After all, eBay product page writers are hardly likely to be verified Google+ authors. Is this why eBay are starting to invest in creating narrative content around their auctions?

Update: See also the ever-excellent Matthew Ingram on this, who points out the extremely worrying hit the long-running and much-loved Metafilter has taken:

“Reliant on Google not only for the bulk of its traffic but also the bulk of its advertising revenue, Metafilter has had to lay off almost half of its staff.”

The lesson?

Google can kill a site on a whim, and even the experts can’t tell us how or why, because Google’s algorithms are even more secret than the Colonel’s delicious blend of herbs and spices. Any site dependent on search for the bulk of its traffic is playing a very, very dangerous game.

Update 2: More detail on the Metafilter revenue/traffic decline, complete with stats.

The related power of Facebook to stifle updates from sources it has deemed suspect for whatever reason – and even the New York Times’ recently-leaked innovation report’s charts showing the decline of its homepage – makes an obvious cliché all the more true when it comes to web traffic: don’t put all your eggs in one basket.

If more than 25% of your traffic/revenue comes from one source, you’re in danger. More than 50%, you have a potential death sentence. All it takes is one thing to change, and you’re screwed.
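
As a toy illustration of that rule of thumb (the data shape and numbers below are invented, not pulled from any real analytics API), you could sanity-check your own concentration by referrer like this:

```typescript
// Toy check of traffic concentration by source. The thresholds mirror the
// rule of thumb above; the data shape is illustrative, not a real analytics API.

const monthlyVisitsBySource: Record<string, number> = {
  google: 640_000,
  facebook: 120_000,
  direct: 180_000,
  twitter: 60_000,
};

const total = Object.values(monthlyVisitsBySource).reduce((a, b) => a + b, 0);

for (const [source, visits] of Object.entries(monthlyVisitsBySource)) {
  const share = visits / total;
  if (share > 0.5) {
    console.log(`${source}: ${(share * 100).toFixed(0)}% – potential death sentence`);
  } else if (share > 0.25) {
    console.log(`${source}: ${(share * 100).toFixed(0)}% – danger zone`);
  }
}
```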