Nicholas Carr wrote a blog post last January that is well worth a read, positing that the impact of social media is content collapse, not context collapse. Indeed, when we all started out on social software, the phrase context collapse was on all our lips.

Since 2016, however, Carr sees context restoration: a movement away from public Facebook posts towards private accounts, chat groups, and places where content self-destructs after a while. In its place he sees a different collapse, that of content.

Context collapse remains an important conceptual lens, but what’s becoming clear now is that a very different kind of collapse — content collapse — will be the more consequential legacy of social media. Content collapse, as I define it, is the tendency of social media to blur traditional distinctions among once distinct types of information — distinctions of form, register, sense, and importance. As social media becomes the main conduit for information of all sorts — personal correspondence, news and opinion, entertainment, art, instruction, and on and on — it homogenizes that information as well as our responses to it.

Content collapse arises because all those different types of information reach us in the exact same templated way: the endlessly scrolling timeline on our phone’s screen.
Carr even posits that our general unease with social media stems from this content collapse, and he names four aspects of it:

First, by leveling everything, social media also trivializes everything….

Second, as all information consolidates on social media, we respond to it using the same small set of tools the platforms provide for us. Our responses become homogenized, too….

Third, content collapse puts all types of information into direct competition….

Finally, content collapse consolidates power over information, and conversation, into the hands of the small number of companies that own the platforms and write the algorithms….

My first instinct is that the last aspect causes the most unease. The first and third are ultimately the same thing, I feel. The second trivialises not the content but us: it severely limits people’s response range, leaving no room for nuance or complexity (which makes the unease and lack of power more tangible to users, such that I suspect it significantly amps the outrage feedback loop, as people attempt to break the homogeneity, to be seen, to be heard). It is what removes us as an independent entity, a political actor, a locus of agency, an active node in the network that is society.

So here’s to variety and messiness, the open web, the animated gifs of yesteryear, and refusing the endlessly scrolling algorithmic timelines.

In a conversation in Prague last weekend, I formulated some thoughts on data quality, which I am blogging here so I can find them again later.

In the context of opening up government data, data quality often gets mentioned as a barrier. Data quality, or rather the absence thereof, is put forward as a reason not to publish the data, or as a reason why re-use is not happening. (To the former, Andrew Stott always replies that keeping the data inside government for the past decades has not improved it, so why expect that not publishing it now would change anything?)

To me, data quality is not an intrinsic aspect of the data; it is an external one. Data quality only becomes visible, gets noticed, in the context of usage. The job for which the data is being used determines whether the data is of the right quality for that job.

Also, data quality is not the same as data granularity: coarse data can be of perfectly good quality for a given task, while highly granular data may still be riddled with errors.

Only by making data available for re-use, and by attempting to re-use that data in various settings, do notions of quality and questions about quality get formulated, discussed, and eventually dealt with (such as when OpenStreetMap corrected the location of 18,000 of the 360,000 bus stops in the UK). This may or may not then reflect back on the public task for which the data was originally collected, and hence on the original data collection process.