We’re in a time where whatever is presented to us as discourse on Facebook, Twitter or any of the other platforms out there may or may not come from humans, from bots, or from a person or group with a specific agenda, irrespective of what you say or how you respond. We’ve seen it at the political level, with outside influence on elections; we see it in things like Gamergate, and in critiques of the last Star Wars movie. It creates damage on a societal level, and it damages people individually. To quote Angela Watercutter, the author of the Star Wars article just mentioned,

…it gets harder and harder to have an honest discussion […] when some of the speakers are just there to throw kerosene on a flame war. And when that happens, when it’s impossible to know which sentiments are real and what motivates the people sharing them, discourse crumbles. Every discussion […] could turn into a […] fight — if we let it.

Discourse disintegrates, I think, specifically when there is no meaningful social context in which it takes place, nor social connections between the speakers in that discourse. The effect stems not just from the fact that you can’t, or don’t, really know who you’re conversing with, but more importantly from the fact that anyone on a general platform can bring themselves into the conversation, or worse, force themselves into it. Which is why you should never wade into newspaper comments, even though we all read them at times, because watching discourse crumble from the sidelines has a certain addictive quality. This can happen because participants themselves don’t control the setting of any conversation they are part of, and none of those conversations are limited to a specific (social) context.

Unlike in your living room, over drinks in a pub, or at a party with friends of friends of friends. There you know someone. Or if you don’t, you know them in that setting: you know their behaviour at that event thus far. All have skin in the game as well, because misbehaviour has immediate social consequences. Social connectedness is a necessary context for discourse, stemming either from personal connections or from the setting of the place or event it takes place in. Online discourse often lacks both; discourse crumbles, entropy ensues. Without consequence for those causing the crumbling. Which makes it fascinating when missing social context is retroactively restored, outing the misbehaving parties, such as in the book I once bought by Tinkebell, in which she matches death threats she received against the senders’ very normal Facebook profiles.

Two elements are therefore needed, I find: one concerning who can be part of which discourse, and one concerning control over the context of that discourse. They are points 2 and 6 in my manifesto on networked agency.

  • Our platforms need to mimic human networks much more closely: our networks are never ‘all in one mix’ but a tapestry of overlapping and distinct groups and contexts. Yet centralised platforms put us all in the same space.
  • Our platforms also need to be ‘smaller’ than the groups using them, meaning a group can deploy, alter, maintain and administer a platform for their specific context. Of course you can still be a troll in such a setting, but you can no longer be one without a cost, as your peers can all act themselves, individually and collectively.
  • This is unlike e.g. FB, where by design defending against trollish behaviour takes more effort than being a troll, and never carries a cost for the troll. There must, in short, be a finite social distance between speakers for discourse to be possible. Platforms that dilute that, or allow for infinite social distance, are where discourse crumbles.

This points to federation (a platform within the control of a specific group, interconnected with other groups doing the same) and decentralisation (individuals running a platform for one, and interconnecting those). Doug Belshaw recently wrote, in a post titled ‘Time to ignore and withdraw?’, about how he first saw individuals running their own Mastodon instance as quirky and weird, until he read a blogpost by Laura Kalbag in which she writes about why you should run Mastodon yourself if possible:

    Everything I post is under my control on my server. I can guarantee that my Mastodon instance won’t start profiling me, or posting ads, or inviting Nazis to tea, because I am the boss of my instance. I have access to all my content for all time, and only my web host or Internet Service Provider can block my access (as with any self-hosted site.) And all blocking and filtering rules are under my control—you can block and filter what you want as an individual on another person’s instance, but you have no say in who/what they block and filter for the whole instance.

    Similarly I recently wrote,

    The logical end point of the distributed web and federated services is running your own individual instance. Much as in the way I run my own blog, I want my own Mastodon instance.

I also see a place for federation, where a group of people from a single context run an instance of a platform: a group of neighbours, a sports team, a project team, or some other association, but always a setting where damaging behaviour carries a cost, because social distance is finite and the context is defined, even if temporary or emergent.

Slate saw their traffic from Facebook drop by 87% in a year after changes in how FB prioritises news and personal messages in your timeline. Talking Points Memo reflects on this, and in doing so formulates a few things I find of interest.

    TPM writes:
“Facebook is a highly unreliable company. We’ve seen this pattern repeat itself a number of times over the course of the company’s history: its scale allows it to create whole industries around it depending on its latest plan or product or gambit. But again and again, with little warning it abandons and destroys those businesses. […] Google operates very, very differently. […] Yet TPM gets a mid-low 5-figure check from Google every month for the ads we run on TPM through their advertising services. We get nothing from Facebook. […] Despite being one of the largest and most profitable companies in the world Facebook still has a lot of the personality of a college student run operation, with short attention spans, erratic course corrections and an almost total indifference to the externalities of its behavior.”

This first point, I think, is very much about networks and ecosystems: do you see others as part of your ecosystem, or merely as a temporary leg-up until you can ditch them or dump externalities on them?

    The second point TPM makes is about visitors versus ‘true audience’.
“we are also seeing a shift from a digital media age of scale to one based on audience. As with most things in life, bigger is, all things being equal, better. But the size of a publication has no necessary connection to its profitability or viability.” Going for scale is a path to a monopoly that works for tech (like FB) but not for media, author Josh Marshall says. “…the audience era is vastly better for us than the scale era.”

Audience, or ‘true audience’ as TPM has it, means the people who have a long-term connection to you, who return regularly to read your articles. The ones you’re building a connection with, for whom TPM, or any news site, is an important node in their network. Scaling there isn’t about the numbers, although numbers still help, but about the quality of those numbers and the quality of what flows through the connections between you and your readers. The invisible hand of networks, more than trying to get ever more eyeballs.

Scale thinking would make blogging like I do useless; network thinking makes it valuable, even if there are just 3 readers, myself included. It’s ‘small b’ blogging, as Tom Critchlow wrote a few months ago: “Small b blogging is learning to write and think with the network.” Or as I usually describe it: thinking out loud, and having distributed conversations around it. Big B blogging, Tom writes, in contrast “is written for large audiences. Too much content on the web is designed for scale” and pageviews, with individual bloggers seeming to mimic mass media companies, because that is the only example they encounter.

Back in April I wrote about how my blogging had changed since I reduced my Facebook activity last fall. I needed to create more space again to think and write, and FB was eroding my capacity to do so. Since my break with FB I have written more than I had in a long time, and my average weekly activity was higher than at any point in the past 16 years. In April I wondered whether that would keep up in the second quarter of this year, so here are the numbers for the first half of 2018.

First, the number of postings in the first half of 2018 was 203, an average of 7 to 8 per week. Both as a total and as a weekly average this is more than I have ever blogged since 2002, even on a yearly basis (see the graphs in my previous posting Back to the Blog, the Numbers).

In mid-April I added a stream of micro-postings to this blog, which helps explain part of the large jump in the number of postings in the first graph below. What microblogging also does, however, is get the small bits, references and random thoughts out of my head, leaving more space to write posts with more substance. I’ve written 84 ‘proper’ blog posts in the last 6 months, 50 of them since adding the microblog in mid-April, so it has pushed up all my writing.
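For the curious, tallies like the ones in the graphs below are simple to reproduce. Here is a minimal sketch, assuming a list of publication dates pulled from, say, a WordPress export; the three example dates are placeholders.

```python
from collections import Counter
from datetime import date

# Placeholder input: one publication date per post, e.g. parsed from a
# WordPress export file. Replace with the real list of dates.
post_dates = [date(2018, 1, 3), date(2018, 1, 5), date(2018, 4, 16)]

posts_per_week = Counter(d.isocalendar()[1] for d in post_dates)   # per-week tally
posts_per_month = Counter(d.month for d in post_dates)             # per-month tally

total = len(post_dates)
weeks = 26  # the first half of the year
print(f"{total} posts, {total / weeks:.1f} per week on average")
```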


Blogposts in 2018 per month. July shows up because week 26 ends on July 1st, which had 2 postings.


Blogposts in 2018 per week; the microblog started in week 15.

    Let’s look at how that compares to previous months and years.


    Number of posts per month since 2016. Leaving FB in October 2017 started a strong uptick.

I feel I have found my writing rhythm again. Tracking the number of postings going forward is therefore likely of interest mostly in terms of ‘proper’ postings and the topics covered, and less to see whether I blog at all. My steps away from FB have paid off, and reconfiguring my information strategies for more quality is the next phase.

    Some links I thought worth reading the past few days

Jan Koum, the second founder of WhatsApp, has left Facebook, apparently over differences about encryption and the sharing of WhatsApp data. The other founder, Brian Acton, had already left Facebook last September over similar issues. He donated $50 million to the non-profit Signal Foundation earlier this year, and stated he wanted to work on transparent, open-source development and uncompromising data protection. (Koum, on the other hand, said he was going to spend time on collecting Porsches…) Previously the European Union fined Facebook 110 million euro for lying about matching up WhatsApp data with Facebook profiles when Facebook acquired WhatsApp in 2014. Facebook at the time said it couldn’t match WhatsApp and Facebook accounts automatically, then two years later did precisely that, even though the technology for it already existed in 2014 and Facebook was aware of it. Facebook says the “errors made in its 2014 filings were not intentional”. Another “we’re sorry, honestly” moment for Facebook, in a 15-year-long apology tour that started even before its inception.

I have WhatsApp on my phone but never use it to initiate contact. Some in my network, however, don’t use any alternatives.

The gold standard for messaging apps is Signal, by Open Whisper Systems. Other applications such as WhatsApp, FB Messenger and Skype have actually incorporated Signal’s encryption technology (it’s open, after all), but in untestable ways (they’re not open, after all). Signal is available on your phone and as a desktop app (paired with your phone). It does require you to disclose a phone number, which is a drawback. I prefer using Signal, but its uptake is slow in Western countries.
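To illustrate what end-to-end encryption means in practice (only the two endpoints ever hold the keys, so the service in the middle sees ciphertext only), here is a minimal sketch using the PyNaCl library. This is emphatically not the Signal protocol itself, which adds forward secrecy through its double ratchet; it only shows the basic principle.

```python
from nacl.public import PrivateKey, Box

# Each endpoint generates its own key pair; private keys never leave the device.
alice_key = PrivateKey.generate()
bob_key = PrivateKey.generate()

# Alice encrypts with her private key and Bob's public key.
ciphertext = Box(alice_key, bob_key.public_key).encrypt(b"meet at the usual place?")

# Only Bob, holding his private key, can decrypt; anything in between
# (server, network) only ever handles the ciphertext.
plaintext = Box(bob_key, alice_key.public_key).decrypt(ciphertext)
assert plaintext == b"meet at the usual place?"
```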

    Other possible apps using end-to-end encryption are:
Threema, a Switzerland-based application, which I also use, though not with many contacts. Trust levels in the application are partly based on exchanging keys when meeting face to face, adding a non-tech layer. It also claims not to store metadata (anonymous use is possible, no phone number necessary, no logging of who communicates with whom, contact lists and groups stay locally on your device, etc.). Yet the app itself isn’t open for inspection.

Telegram (originating in Russia, but now banned there for not handing over encryption keys to the Russian authorities, and also banned in Iran, where it has 40 million users, 25% of its global user base). I don’t use Telegram, and don’t know many in my network who do.

Interestingly, the rise in the use of encrypted messaging is very high in countries that rank high on the corruption perception index. It also shows how slowly Signal is growing in other countries.

VPN tools allow you to circumvent the blocking of an app by pretending to be in a different country. However, VPN itself, a standard tool in businesses for giving employees remote access, is banned in various countries (or only allowed from ‘approved’ VPN suppliers, which basically means bans of a messaging app will still be enforced).

    Want to message me? Use Signal. Use Threema if you don’t want to disclose a phone number.

Many tech companies are rushing to arrange compliance with the GDPR, Europe’s new data protection regulation. What has landed in my inbox thus far is not encouraging. Like Facebook, other platforms clearly struggle with, or hope to get away with, partially or completely ignoring the concepts of informed consent, unforced consent and provable consent. One would suspect the latter, as Facebook’s removal of 1.5 billion users from EU jurisdiction is a clear step to reduce potential exposure.

Where consent by the data subject is the basis for data collection: informed consent means consent needs to be explicitly given for each specific use of person-related data, based on an explanation of the reason for collecting the data, and of how precisely it will be used, that is clear to a layperson.
Unforced means consent cannot be tied to the core services of the controlling/processing company when that data isn’t necessary to perform the service. In other words, “if you don’t like it, delete your account” is forced consent. Otherwise the right to revoke one or several of the consents given becomes impossible.
    Additionally, a company needs to be able to show that consent has been given, where consent is claimed as the basis for data collection.
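As a thought experiment, these three requirements translate quite directly into a data structure. The sketch below is illustrative only; the field names are my own assumptions, not taken from the GDPR text or from any real implementation.

```python
from dataclasses import dataclass
from datetime import datetime
from typing import Optional

@dataclass
class ConsentRecord:
    """One record per data subject *per specific purpose* (informed consent)."""
    subject_id: str
    purpose: str                     # e.g. "newsletter", never "everything"
    explanation_shown: str           # the layperson-readable text actually presented
    given_at: datetime               # timestamp, so consent can be demonstrated
    withdrawn_at: Optional[datetime] = None

    def withdraw(self) -> None:
        # Must be possible at any time, without losing access to core services
        # (unforced consent).
        self.withdrawn_at = datetime.utcnow()

    @property
    def is_valid(self) -> bool:
        return self.withdrawn_at is None
```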

    Instead I got this email from Twitter earlier today:

    “We encourage you to read both documents in full, and to contact us as described in our Privacy Policy if you have questions.”

and then

[screenshot omitted]

followed by

“You can also choose to deactivate your Twitter account.”

The first two bits mean that consent is not informed, and that it is not even explicit consent but merely assumed consent. The last bit means it is forced. On top of that, Twitter will not be able to show that consent was given (as it is merely assumed from using their service). That’s not how this is meant to work. Non-compliant, in other words. (IANAL though.)

    Some links I think worth reading today.

It seems, from a preview for journalists, that the GDPR changes Facebook will be making to its privacy controls, and especially to the data controls a user has, are rather unimpressive. I had hoped that with the new option to select ranges of your data for download, you would also be able to delete specific ranges of data. This would be a welcome change, as the current options are deleting every single data item by hand, or deleting everything by deleting your account. Under the GDPR I had expected more control over data on FB.

It also seems they are keeping the design imbalanced, favouring ‘let us do anything’ as the simplest route for users to click through, presenting other options in a very low-key way, and still not making the account deletion option directly accessible in your settings.

They may or may not be deemed by the EU data protection authorities to have done enough towards implementing the GDPR after May 25th, but that’s of little use to anyone now.

So my intention to delete my FB history still means the full deletion of my account, which will be effective at the end of this week, when the 14-day grace period ends.

I disengaged from Facebook (FB) last October, mostly because I wanted to create more space for paying attention and for active, not merely responsive, reflection and writing, and because I realised that the balance between the beneficial and destructive aspects of FB had tilted too far towards the destructive side.

My intention was to keep my FB account, as it serves as a primary channel to some professional contacts and groups; FB Messenger is also the primary channel for some. However, I wanted to get rid of my FB history: all the likes, birthday wishes etc. Deleting material is possible, but the implementation is completely impractical: every element needs to be deleted separately. Every like needs to be unliked, every comment deleted, and every posting on your own wall or someone else’s wall not just deleted but its deletion confirmed as well. There’s no bulk deletion option. I tried a Chrome plugin that promised to go through the activity log and ‘click’ all those separate delete buttons, but it didn’t work. The result is that deleting your data from Facebook means deleting every single thing you ever wrote or clicked, which can easily take 30 to 45 minutes for just a single month’s worth of likes and comments. Now aggregate that over the number of years you actively used FB (about 5 years in my case, after 7 years of passive usage).

The only viable path to deleting your FB data is therefore currently to delete the account entirely. I wonder if that will be different after May, when the GDPR is fully enforced.

Not that deleting your account is easy either; you don’t have full control over it. The link to do so is not available in your settings interface, only through the help pages, and it is presented as submitting a request. After you confirm deletion, you receive an e-mail saying that deletion of your data will commence after 14 days. Logging back in during that period stops the clock. I suspect this will no longer be enough when the GDPR enters into force, but it is what it currently is.

Being away from FB for a longer time, with the account deactivated, had the effect that when I did log back in (to attempt to delete more of my FB history), the FB timeline felt very bland. Much like how watching TV was once not to be missed, and then wasn’t missed at all. This made me realise that saying FB was the primary channel for some contacts I wouldn’t want to throw away might actually be a cop-out, the last stand of FOMO. So FB, by making it hard to delete data while keeping the account, made it easy to decide to delete my account altogether.

Once the data has been deleted (which, according to FB, can take up to 90 days after the 14-day grace period), I might create a new account with which to pursue the benefits of FB while avoiding the destructive side, and with 12 years of Facebook history wiped. Be seeing you!


    FB’s mail confirming they’ll delete my account by the end of April.

Stephanie Booth, a long-time blogging connection, has been writing about reducing her Facebook usage and increasing her blogging. At one point she says,

    As the current “delete Facebook” wave hits, I wonder if there will be any kind of rolling back, at any time, to a less algorithmic way to access information, and people. Algorithms came to help us deal with scale. I’ve long said that the advantage of communication and connection in the digital world is scale. But how much is too much?

I very much still believe there’s no such thing as information overload, and I fully agree with Stephanie that the possible scale of networks and connections is one of the key affordances of our digital world. My RSS-based filtering, as described in 2005, worked better when dealing with more information than with less. Our information strategies need to reflect, and be part of, the underlying complexity of our lives.

Algorithms can help us with that scale, just not the algorithms that FB deploys around us. For algorithms to help, like any tool, they need to be ‘smaller’ than us, as I wrote in my networked agency manifesto. We need to be able to control their settings, tinker with them, deploy them and stop them as we see fit. The current application of algorithms, which usually need lots of data to perform, more or less demands a centralised platform like FB to work. The algorithms that will really help us scale will be the ones we can use for our own particular scaling needs. For that, the creation, maintenance and use of algorithms needs to have a much lower threshold than it has now. That’s why I placed it in my ‘agency map’.
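As a minimal sketch of what an algorithm ‘smaller’ than us could look like: a feed filter of a couple of dozen lines in which every feed, keyword and weight is under the reader’s own control. The URLs and keywords below are placeholders; the only assumed dependency is the feedparser library.

```python
import feedparser  # pip install feedparser

# The reader's own subscriptions and interests, editable and stoppable at will.
FEEDS = {
    "https://example.org/friend/feed.xml": 2.0,   # people I know well weigh more
    "https://example.org/newsy-site/rss": 0.5,    # background noise weighs less
}
KEYWORDS = {"networked agency": 3.0, "federation": 2.0, "gdpr": 1.0}

def score(entry, feed_weight):
    """Score an entry by feed trust plus keyword matches in title and summary."""
    text = (entry.get("title", "") + " " + entry.get("summary", "")).lower()
    return feed_weight + sum(w for kw, w in KEYWORDS.items() if kw in text)

items = []
for url, weight in FEEDS.items():
    for entry in feedparser.parse(url).entries:
        items.append((score(entry, weight), entry.get("title"), entry.get("link")))

# Highest-scoring items first: a fully transparent, user-owned 'timeline'.
for s, title, link in sorted(items, key=lambda t: t[0], reverse=True)[:20]:
    print(f"{s:4.1f}  {title}  {link}")
```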

Going back to a less algorithmic way of dealing with information isn’t an option, nor something to desire, I think. But we do need algorithms that really serve us, that perform to our information needs. We need fewer algorithms that purport to aid us in dealing with the daily river of newsy stuff but really commoditise us at the back-end.

As a next step in rethinking my approach to using Facebook, I have started deleting my Facebook history. FB only lets you delete things by hand: posting by posting, like by like, comment by comment. Which takes about as long as, or longer than, the original time spent posting or liking. So I am using a Chrome plugin to do it for me by pretending to be me, going through all the delete and unlike links. I’m currently deleting 2014 data, to see how well that works. 2014 is the first full year I posted more than just the RSS feed of my blogposts, whereas 2013 and the years before it, back to October 2006, basically only contain my RSS feed, which only holds public material anyway.
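For those wondering what such a plugin effectively does: it simulates the clicking you would otherwise do by hand. The sketch below uses Selenium instead of a Chrome plugin and is purely illustrative; the URL and XPath are hypothetical, Facebook’s activity-log markup changes constantly, and each deletion normally pops up its own confirmation dialog, which this sketch does not handle.

```python
import time

from selenium import webdriver
from selenium.webdriver.common.by import By

# Hypothetical selector: the real activity-log markup is undocumented and
# changes often, so this would need adjusting to the page as it looks today.
DELETE_LINK_XPATH = "//a[contains(text(), 'Delete') or contains(text(), 'Unlike')]"

driver = webdriver.Chrome()
driver.get("https://www.facebook.com/me/allactivity")
input("Press Enter once you are logged in and the activity log is visible...")

while True:
    links = driver.find_elements(By.XPATH, DELETE_LINK_XPATH)
    if not links:
        break
    links[0].click()   # handle one item at a time to avoid stale element references
    time.sleep(2)      # go slowly; each click triggers server-side work
    driver.refresh()
```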

It has never been a secret that Facebook is a data-hungry monster, and I have always acted in that knowledge. There are reasons why FB is valuable as a tool for me, and a range of others why FB is all wrong. Feeling increasingly uncomfortable, I’ve decided it is time to create a path for myself away from it. I am not leaving, at least not completely, for reasons that follow towards the end of this post. I did however deactivate my account last night. This means my data is still available within FB but invisible, and logging in will reactivate it all.

    In short, I am not going cold turkey on FB, but merely semifreddo, half cold.
    I expect that removing myself from FB for now creates the space for me to figure out what to do and not do with FB.

    My FB history
I joined FB in the first week of October 2006, shortly after it became available in September for non-US, non-academic e-mail accounts. Until the winter of 2013 I posted virtually nothing except automatic links to my blogposts. Only from November 2013, nudged by Gerrit Eicker, did I start interacting with FB more, and from early 2014 the number of postings slowly grew. It turned into an addictive habit, the kind you really want to quit but don’t quite seem able to. A tool I use to track my own software and web usage has been brutally direct in showing me how much of a time sink FB became. I removed the FB app from my phone at some point in the last year, mostly to cut away the noise, and the disconnect from the here and now that doing a quick FB check during ‘empty’ moments creates.

    The constructive effects of FB
FB has been both helpful and damaging on a personal level. In the positive sense:

• It allows me to stay in touch with a wide variety of people I otherwise wouldn’t be in touch with: because they are geographically distant, because the context we once shared lies a long time back, because the current overlap in context is small, or any combination thereof.
• It makes keeping a sense of what’s going on in their lives pretty effortless, and even if actual interaction is low, it serves as a low-threshold channel to empathise.
• It allowed me to share things in a much less public place than my blog when personal events were dominant in 2015 and 2016, without the need to reach out directly to anyone. This allowed others to respond as they saw fit.
• In some instances FB is the only way I can easily connect to people and groups I am professionally connected with, such as colleagues in Central Asia, or the Serbian open data community.

    There’s also a destructive side to FB
FB has had a huge impact over time on my regular information strategies. This was of course helped along by the demise of many other tools and platforms, such as Google Reader and Dopplr, and the erosion of what makes the web work, such as Twitter doing away with RSS. For the tools lost behind the event horizon of the black hole that FB and other walled gardens are, I and many in my network used FB as a replacement. For instance, Dopplr was a good way to inform the travelling part of my professional network of upcoming trips, and therefore of potential opportunities to meet should some of our travel coincide. Posting a travel update on FB has replaced it, albeit with a loss of visibility and functional value. The large majority of RSS feeds I followed have dried up. The disappearance of RSS from many platforms, which had allowed me to aggregate information about my network myself, meant I had to go to a place where that aggregation was done for me, and FB is such a place, the one with the most people in it. Again at a significant loss of functional value (influence, filtering, tagging, making on-the-fly cross-sections, etc.).

Over time it became clear to me that the endless FB timeline has de facto replaced my carefully calibrated RSS-based information diet. What were intentional and purposeful acts of keeping in touch, learning and informing myself got replaced by a steady stream of distraction and procrastination. Starting from a question and then seeking out what might be relevant mostly disappeared, replaced by at best responsive, but often passive, consumption. There’s a distracting quality to FB as a gateway to material, even when interacting with the same content: I follow some thinkers/authors in my FB network, but end up engaging with whatever they posted today, rather than diving deeply into their actual work available on their own sites. It’s almost as if I have to remind myself that doing online desk research or a literature review isn’t the same as checking out the FB timelines of the people in that field.

    The nefarious asymmetry of FB
    To large swathes of the global population FB serves as a facsimile of the internet, hiding the potent agency-inducing qualities the internet actually has, and merely presenting the passive and consumer side of the net. As long as you keep scrolling down your timeline you’re not taking action (even if changing your avatar to show sympathy with one plight or another gives you that feeling).
Although I’ve succeeded in preserving a certain variety in my FB network, so that I regularly get presented with viewpoints or articles that clash with my sense of what is common (not: common sense), FB has been busily building my own bubble around me. This is readily apparent whenever I venture into other, darker places, following links on the profiles of friends of friends of friends further and further outside my ‘regular’ network. The suggested articles and ads that then fill my timeline for days afterwards are a shocking view of what others apparently get presented as their day-to-day diet.

At issue here is the enormous asymmetry. It is infinitely easier to automatically feed me dross and aim to manipulate my choices in a myriad of ways than it is for me to purposefully, individually and manually resist that (if that is psychologically possible at all). FB, or actually anyone paying them, can without effort suggest hundreds of articles and ads; an individual will get fed up manually hitting ‘show me less of this’ after fewer than a dozen times. A second layer of asymmetry is that none of the pattern matching or categorising you and I are subject to is in any meaningful way available to us ourselves. This isn’t about the personal data we consciously share (e.g. dates of birth, phone numbers, postings), but about our actual behaviour on the platform: the things I and my connections hit like for, the links we click, the time we spend engaged with those links, the comments we typed in and ultimately decided not to post, the frequency with which we yet again open up the timeline to see if we get a little bit of endorphins from being ‘liked’, our responses to the A/B testing done on us unawares, etc.

On top of all that, similar to the tobacco industry, FB really likes to keep you hooked. Over the years I’ve deleted accounts on dozens if not hundreds of services. Some will say “we’re really sorry to see you go”, or ask you to reconsider, or make you type a confirmation manually (“DELETE”). FB however appeals to your emotions like no other, saying ‘oh, but this or that person will miss you so much!’, forces you to provide a reason for leaving, and even then asks for another confirmation. That’s just for deactivating the account, leaving everything still there. Reactivating merely requires logging in, yet another designed asymmetry. I wonder what they will do when I actually want to delete the account.


Aldo will miss me, Jeroen will miss me, Baden will miss me, all 660 of my FB friends will miss me… please stay! Yes, I’m sure.

    So, what to do?

    • FB has to be removed as a time sink and obstacle to purposeful interaction.
• In terms of information intake it means going through my network list (I downloaded my FB data) and finding other ways to keep track; a sketch of how to turn that list into a checklist follows after this list.
• In terms of sharing, my blog (which is fully public, so very different questions apply concerning what to post there or not) will need to take precedence, likely augmented with some other tools. I can see running my own Diaspora pod (or another distributed FB analogue), and inviting selected groups into specific instances. This replaces the single humongous space used for all group interaction in FB with more group- and community-specific ‘town squares’. Having the right spaces for interaction is an important aspect of community health, and FB is not designed like that.
• In terms of interacting it means looking at my network list and reaching out purposefully more often. Yes, that’s more time-consuming, but more rewarding as well. Since my friend Peter left FB, the frequency of being in touch has risen, I feel, and the quality and awareness of it have definitely increased, although this won’t scale to all other connections.
• Using FB more deliberately is another element. There are people I’m only connected with there, and people I’m only professionally connected with there, so for them I will likely retain my account. For some, Messenger is the only tool we share, and that too requires keeping the FB account. But like going to the pub, visiting FB will need to be a planned and time-boxed thing, no longer the ‘filler’ of small stretches of time. This is the ‘quitting smoking’ part of FB, cutting out all the ‘quick ciggies’ during the day. Any return to FB will be like a non-smoker entering a venue where smoking is allowed.
    • For that more purposeful and limited interaction, my entire FB history is of no importance, so deleting that is a logical step.
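Going through that network list can itself be partly scripted from the downloaded FB data. A minimal sketch, assuming the export was unpacked locally and contains a JSON friends list; the filename and field names are assumptions, since exports have shipped in different formats (HTML and JSON) over the years.

```python
import json
from pathlib import Path

# Assumed location and structure of the friends list inside the export.
EXPORT = Path("facebook-export/friends/friends.json")

data = json.loads(EXPORT.read_text(encoding="utf-8"))
friends = [entry["name"] for entry in data.get("friends", [])]

# A starting point for "finding other ways to keep track": a checklist to
# annotate by hand with blogs/RSS feeds, e-mail addresses, Signal, etc.
with open("keep-in-touch.md", "w", encoding="utf-8") as out:
    for name in sorted(friends):
        out.write(f"- [ ] {name}: \n")

print(f"{len(friends)} connections written to keep-in-touch.md")
```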