Well worth a read on the Ferguson riots, and how different social media sites (notably Twitter vs Facebook) served up news about them:

“Now, we expect documentation, live-feeds, streaming video, real time Tweets… [Ferguson] unfolded in real time on my social media feed which was pretty soon taken over by the topic…

And then I switched to the non-net-neutral Internet to see what was up. I mostly have a similar composition of friends on Facebook as I do on Twitter.

Nada, zip, nada.

This morning, though, my Facebook feed is also very heavily dominated by discussion of Ferguson. Many of those posts seem to have been written last night, but I didn’t see them then. Overnight, “edgerank” (or whatever Facebook’s filtering algorithm is called now) seems to have bubbled them up, probably as people engaged them more.

But I wonder: what if Ferguson had started to bubble, but there was no Twitter to catch on nationally? Would it ever make it through the algorithmic filtering on Facebook? Maybe, but with no transparency to the decisions, I cannot be sure.

Would Ferguson be buried in algorithmic censorship?

Would we even have a chance to see her?

This isn’t about Facebook per se—maybe it will do a good job, maybe not—but about the fact that algorithmic filtering, as a layer, controls what you see on the Internet. Net neutrality (or lack thereof) will be yet another layer determining this. This will come on top of existing inequalities in attention, coverage and control.”
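The “bubbling up” the quote describes can be sketched as a toy engagement-weighted ranker with time decay. To be clear, this is an invented illustration, not Facebook’s actual EdgeRank: the weights, the half-life, and all the names here are assumptions made up for the sketch.

```python
from dataclasses import dataclass

@dataclass
class Post:
    topic: str
    likes: int
    comments: int
    shares: int
    age_hours: float

def engagement_score(post: Post, half_life_hours: float = 12.0) -> float:
    """Weight engagement types unevenly, then decay the score with age.
    All weights and the half-life are invented for illustration."""
    raw = post.likes * 1.0 + post.comments * 4.0 + post.shares * 8.0
    decay = 0.5 ** (post.age_hours / half_life_hours)
    return raw * decay

def rank_feed(posts: list[Post]) -> list[Post]:
    """Order a feed by descending score: heavily-engaged posts bubble up."""
    return sorted(posts, key=engagement_score, reverse=True)
```

Under a scheme like this, a breaking story with little early engagement sits below older, well-engaged posts, and only surfaces once enough people interact with it — which is exactly the delay the quote observed between Twitter and Facebook.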

It’s a continual worry – how do we ensure we see what’s important? Though, of course, the concept is nothing new – the algorithm is just an editor, or an editorial policy, in a different form. It’s something I’ve written about before in relation to the EU, focusing on a BBC editorial policy that fails to cover EU affairs in mainstream news most of the time, and then serves up extremes.

This kind of human editorial determination of the news agenda, based on perceived audience interests, is arguably not massively different from a Facebook algorithm determining what is important based on how it interprets user interests. If anything, there’s a strong argument to be made that Facebook knows its audience better than any editor of any publication or TV show ever has, given the sheer quantity of data it holds on its user base.

But then what of *importance* – who determines this? Who overrides the algorithmic or standard editorial policy assumption? Is there a chance that an important story will get buried because a bit of code doesn’t see it as significant? Yes. But the same is true of any number of important news stories that human editors don’t pick up on, or choose to bury on page 23 because they don’t think their readers will be that interested.

As so often, the web may be a bit different, but there’s nothing that *new* here.