Some things I thought worth reading in the past days

  • A good read on how machine learning (ML) currently merely obfuscates human bias by moving it into the training data and coding, arriving at peace of mind through pretend objectivity. By claiming that it’s ‘the algorithm deciding’, ML becomes a kind of digital alchemy. It introduced some fun terms to me, like fauxtomation, and Potemkin AI: Plausible Disavowal – Why pretend that machines can be creative?
  • These new Google patents show how problematic the current smart home efforts are, including their precursors, the Alexa and Echo microphones in your house. They are stripping you of agency, not providing it. These particular patents also nudge you to treat your children much the way surveillance capitalism treats you: as a suspect to be watched, with relationships denuded of the subtle human capability to trust. Agency only comes from being in full control of your tools. Adding someone else’s tools (here not just Google’s, but your health insurer’s, your landlord’s etc.) to your home doesn’t make it smart, but turns it into a self-censorship-promoting escape room. A fractal of the panopticon. We need to start designing more technology based on distributed use, not on a centralised controller: Google’s New Patents Aim to Make Your Home a Data Mine
  • An excellent article by the NYT about Facebook’s slide to the dark side. When the student dorm room excuse “we didn’t realise, we messed up, but we’ll fix it for the future” defence fails, and you weaponise your own data driven machine against its critics. Thus proving your critics right. Weaponising your own platform isn’t surprising but very sobering and telling. Will it be a tipping point in how the public views FB? Delay, Deny and Deflect: How Facebook’s Leaders Fought Through Crisis
  • Some of these takeaways from the article just mentioned we should keep top of mind when interacting with or talking about Facebook: FB knew very early on about being used to influence the US 2016 election and chose not to act. FB feared backlash from specific user groups and opted to unevenly enforce their terms of service/community guidelines. Cambridge Analytica is not an isolated abuse, but a concrete example of the wider issue. FB weaponised their own platform to oppose criticism: How Facebook Wrestled With Scandal: 6 Key Takeaways From The Times’s Investigation
  • There really is no plausible deniability for FB’s execs on their “in-house fake news shop” : Facebook’s Top Brass Say They Knew Nothing About Definers. Don’t Believe Them. So when you need to admit it, you fall back on the ‘we messed up, we’ll do better going forward’ tactic.
  • As Aral Balkan says, that’s the real issue at hand because “Cambridge Analytica and Facebook have the same business model. If Cambridge Analytica can sway elections and referenda with a relatively small subset of Facebook’s data, imagine what Facebook can and does do with the full set.“: We were warned about Cambridge Analytica. Why didn’t we listen?
  • [update] Apparently all the commotion is causing Zuckerberg to think FB is ‘at war‘, with everyone it seems, which is problematic for a company that has as a mission to open up and connect the world, and which is based on a perception of trust. Also a bunker mentality probably doesn’t bode well for FB’s corporate culture and hence future: Facebook At War.

We’re in a time where whatever is presented to us as discourse on Facebook, Twitter or any of the other platforms out there may or may not come from humans, bots, or a person or group with a specific agenda, irrespective of what you say in response. We’ve seen it at the political level, with outside influence on elections; we see it in things like Gamergate, and in critiques of the last Star Wars movie. It creates damage at a societal level, and it damages people individually. To quote Angela Watercutter, the author of the mentioned Star Wars article,

…it gets harder and harder to have an honest discussion […] when some of the speakers are just there to throw kerosene on a flame war. And when that happens, when it’s impossible to know which sentiments are real and what motivates the people sharing them, discourse crumbles. Every discussion […] could turn into a […] fight — if we let it.

Discourse disintegrates, I think, specifically when there’s no meaningful social context in which it takes place, nor social connections between speakers in that discourse. The effect stems not just from not really knowing who you’re conversing with, but, more importantly I think, from anyone on a general platform being able to bring themselves into the conversation, or worse, force themselves into it. Which is why you should never wade into newspaper comments, even though we all read them at times, because watching discourse crumble from the sidelines has a certain addictive quality. This can happen because participants themselves don’t control the setting of any conversation they are part of, and none of those conversations are limited to a specific (social) context.

Unlike in your living room, over drinks in a pub, or at a party with friends of friends of friends. There you know someone. Or if you don’t, you know them in that setting; you know their behaviour at that event thus far. All have skin in the game, as misbehaviour has immediate social consequences. Social connectedness is a necessary context for discourse, stemming either from personal connections or from the setting of the place/event it takes place in. Online discourse often lacks both, discourse crumbles, entropy ensues. Without consequence for those causing the crumbling. Which makes it fascinating when missing social context is retroactively restored, outing the misbehaving parties, such as in the book I once bought by Tinkebell, in which she matches the death threats she received against the senders’ very normal Facebook profiles.

Two elements therefore are needed I find, one in terms of determining who can be part of which discourse, and two in terms of control over the context of that discourse. They are point 2 and point 6 in my manifesto on networked agency.

  • Our platforms need to mimic human networks much more closely: our networks are never ‘all in one mix’ but a tapestry of overlapping and distinct groups and contexts. Yet centralised platforms put us all in the same space.
  • Our platforms also need to be ‘smaller’ than the group using them, meaning a group can deploy, alter, maintain and administer a platform for their specific context. Of course you can still be a troll in such a setting, but you can no longer be one without cost, as your peers can all act, individually and collectively.
  • This is unlike e.g. FB, where by design the cost of defending against trollish behaviour takes more effort than being a troll, and never carries a cost for the troll. There must, in short, be a finite social distance between speakers for discourse to be possible. Platforms that dilute that, or allow for infinite social distance, are where discourse can crumble.

    This points to federation (a platform within the control of a specific group, interconnected with other groups doing the same) and decentralisation (individuals running a platform for one, and interconnecting those). Doug Belshaw recently wrote in a post titled ‘Time to ignore and withdraw?’ about how he first saw individuals running their own Mastodon instance as quirky and weird. Until he read a blog post by Laura Kalbag in which she writes about why you should run Mastodon yourself if possible:

    Everything I post is under my control on my server. I can guarantee that my Mastodon instance won’t start profiling me, or posting ads, or inviting Nazis to tea, because I am the boss of my instance. I have access to all my content for all time, and only my web host or Internet Service Provider can block my access (as with any self-hosted site.) And all blocking and filtering rules are under my control—you can block and filter what you want as an individual on another person’s instance, but you have no say in who/what they block and filter for the whole instance.

    Similarly I recently wrote,

    The logical end point of the distributed web and federated services is running your own individual instance. Much as in the way I run my own blog, I want my own Mastodon instance.

    I also do see a place for federation, where a group of people from a single context run an instance of a platform. A group of neighbours, a sports team, a project team, some other association, but always settings where damaging behaviour carries a cost because social distance is finite and context defined, even if temporary or emergent.

    Slate saw their traffic from Facebook drop by 87% in a year, after changes in how FB prioritises news and personal messages in your timeline. Talking Points Memo reflects on it, and in doing so formulates a few things I find of interest.

    TPM writes:
    “Facebook is a highly unreliable company. We’ve seen this pattern repeat itself a number of times over the course of the company’s history: its scale allows it to create whole industries around it depending on its latest plan or product or gambit. But again and again, with little warning it abandons and destroys those businesses.” … “Google operates very, very differently.” … “Yet TPM gets a mid-low 5-figure check from Google every month for the ads we run on TPM through their advertising services. We get nothing from Facebook.” … “Despite being one of the largest and most profitable companies in the world Facebook still has a lot of the personality of a college student run operation, with short attention spans, erratic course corrections and an almost total indifference to the externalities of its behavior.”

    This first point, I think, is very much about networks and ecosystems: do you see others as part of your ecosystem, or merely as a temporary leg-up until you can ditch them or dump externalities on them?

    The second point TPM makes is about visitors versus ‘true audience’.
    “we are also seeing a shift from a digital media age of scale to one based on audience. As with most things in life, bigger is, all things being equal, better. But the size of a publication has no necessary connection to its profitability or viability.” It’s a path to get to a monopoly that works for tech (like FB) but not for media, the author Josh Marshall says. “…the audience era is vastly better for us than the scale era”

    Audience, or ‘true audience’ as TPM has it, are the people who have a long-time connection to you, who return regularly to read articles. The ones you’re building a connection with, for whom TPM, or any newsy site, is an important node in their network. Scaling there isn’t about the numbers, although numbers still help, but about the quality of those numbers and the quality of what flows through the connections between you and your readers. The invisible hand of networks, more than trying to get ever more eyeballs.

    Scale thinking would make blogging like I do useless; network thinking makes it valuable, even if there are just 3 readers, myself included. It’s ‘small b’ blogging as Tom Critchlow wrote a few months ago. “Small b blogging is learning to write and think with the network.” Or as I usually describe it: thinking out loud, and having distributed conversations around it. Big B blogging, Tom writes, in contrast “is written for large audiences. Too much content on the web is designed for scale” and pageviews, where individual bloggers seem to mimic mass media companies. Because that is the only example they encounter.

    Back in April I wrote about how my blogging had changed since I reduced my Facebook activity last fall. I needed to create more space again to think and write, and FB was eroding my capacity to do so. Since my break with FB I have written more than I had in a long time, and my average weekly activity was higher than ever in the past 16 years. In April I wondered whether that would keep up in the second quarter of this year, so here are the numbers for the first half of 2018.

    First, the number of postings was 203 in this first half of 2018, or an average of 7 to 8 per week. Both as a total number and as a weekly average this is more than I have blogged since 2002, even measured on a yearly basis. (See the graphs in my previous posting Back to the Blog, the Numbers.)

    In mid-April I added a stream of micro-postings to this blog, which helps explain part of the large jump in the number of postings in the first graph below. What microblogging also does, however, is get the small bits, references and random thoughts out of my head, leaving more space to write posts with more content. I’ve written 84 ‘proper’ blog posts in the last 6 months, of which 50 since adding the microblog in mid-April, so it has pushed up all my writing.
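As a quick sanity check on these averages (a small sketch; the exact week counts are my own approximation, assuming 26 weeks in the half-year and the microblog starting after week 15):

```python
# Rough check of the posting averages mentioned above.
# Assumptions: the first half of 2018 spans 26 weeks, and the
# microblog started mid-April, leaving ~11 weeks with microblogging.

total_posts = 203
weeks = 26
overall_rate = total_posts / weeks          # ≈ 7.8, i.e. "7 to 8 per week"

proper_posts = 84                           # 'proper' blog posts in 6 months
since_microblog = 50                        # proper posts since mid-April
weeks_with_microblog = weeks - 15
weeks_before = 15

rate_after = since_microblog / weeks_with_microblog            # proper posts/week after
rate_before = (proper_posts - since_microblog) / weeks_before  # proper posts/week before

print(round(overall_rate, 1), round(rate_before, 1), round(rate_after, 1))
```

The rate of ‘proper’ posts roughly doubles after the microblog starts, which is consistent with the claim that it pushed up all the writing, not just the micro-postings.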


    Blogposts in 2018 per month. July shows up because week 26 ends on July 1st, which had 2 postings.


    Blogposts in 2018 per week; the microblog started in week 15.

    Let’s look at how that compares to previous months and years.


    Number of posts per month since 2016. Leaving FB in October 2017 started a strong uptick.

    I feel I have found my writing rhythm again. So tracking the number of postings going forward is likely mostly of interest in terms of ‘proper’ postings and the topics covered, and less to see whether I blog at all. My steps away from FB have paid off, and reconfiguring my information strategies for more quality is the next phase.

    Some links I thought worth reading the past few days

    The second founder of WhatsApp, Jan Koum, has left Facebook, apparently over differences in dealing with encryption and the sharing of WhatsApp data. The other founder, Brian Acton, had already left Facebook last September over similar issues. He donated $50 million to the non-profit Signal Foundation earlier this year, and stated he wanted to work on transparent, open-source development and uncompromising data protection. (Koum on the other hand said he was going to spend time on collecting Porsches….) Previously the European Union fined Facebook 110 million Euro for lying about matching up WhatsApp data with Facebook profiles when Facebook acquired WhatsApp in 2014. Facebook at the time said it couldn’t match WhatsApp and Facebook accounts automatically, then 2 years later did precisely that, while the technology for it already existed in 2014 and Facebook was aware of it. Facebook says “errors made in its 2014 filings were not intentional”. Another “we’re sorry, honestly” moment for Facebook in a 15-year-long apology tour that started even before its inception.

    I have WhatsApp on my phone but never use it to initiate contact. Some in my network however don’t use any alternatives.

    The gold standard for messaging apps is Signal, by Open Whisper Systems. Other applications such as WhatsApp, FB Messenger and Skype have actually incorporated Signal’s encryption technology (it’s open after all), but in untestable ways (they’re not open after all). Signal is available on your phone and as a desktop app (paired with your phone). It does require you to disclose a phone number, which is a drawback. I prefer using Signal, but its uptake is slow in western countries.

    Other possible apps using end-to-end encryption are:
    Threema, a Switzerland-based application, which I also use, though not with many contacts. Trust levels in the application are partly based on exchanging keys when meeting face to face, adding a non-tech layer. It also claims not to store metadata (anonymous use is possible, no phone number necessary, no logging of who communicates with whom, contact lists and groups stay locally on your device etc.). Yet the app itself isn’t open for inspection.

    Telegram (originating in Russia, but now banned there for not handing over encryption keys to the Russian authorities, and also banned in Iran, where it has 40 million users, 25% of its global user population). I don’t use Telegram, and don’t know many in my network who do.

    Interestingly, the uptake of encrypted messaging is very high in countries that rank high on the corruption perception index. It also shows how slowly Signal is growing in other countries.

    VPN tools allow you to circumvent the blocking of an app by pretending to be in a different country. However, VPNs, a standard tool in businesses for giving employees remote access, are themselves banned in various countries (or only allowed from ‘approved’ VPN suppliers, which basically means bans of a messaging app will still be enforced).

    Want to message me? Use Signal. Use Threema if you don’t want to disclose a phone number.

    Many tech companies are rushing to arrange compliance with the GDPR, Europe’s new data protection regulation. What has landed in my inbox thus far is not encouraging. Like Facebook, other platforms clearly struggle with, or hope to get away with partially or completely ignoring, the concepts of informed consent, unforced consent and proving consent. One would suspect the latter, as Facebook’s removal of 1.5 billion users from EU jurisdiction is a clear step to reduce potential exposure.

    Where consent by the data subject is the basis for data collection: informed consent means consent needs to be given explicitly for each specific use of person-related data, based on an explanation, clear to laypeople, of the reason for collecting the data and how precisely it will be used.
    Unforced means consent cannot be tied to the core services of the controlling/processing company when that data isn’t necessary to perform the service. In other words, “if you don’t like it, delete your account” is forced consent. Otherwise the right to revoke one or several of the consents given becomes impossible to exercise.
    Additionally, a company needs to be able to show that consent has been given, where consent is claimed as the basis for data collection.

    Instead I got this email from Twitter earlier today:

    “We encourage you to read both documents in full, and to contact us as described in our Privacy Policy if you have questions.”

    and then

    followed by

    You can also choose to deactivate your Twitter account.

    The first two bits mean consent is not informed, and that it’s not even explicit consent but merely assumed consent. The last bit means it is forced. On top of that, Twitter will not be able to show that consent was given (as it is merely assumed from using their service). That’s not how this is meant to work. Non-compliant, in other words. (IANAL though)

    Some links I think worth reading today.

    It seems, from a preview for journalists, that the GDPR changes Facebook will be making to its privacy controls, and especially to the data controls a user has, are rather unimpressive. I had hoped that along with the new option to select ranges of your data for download, you would also be able to delete specific ranges of data. This would be a welcome change, as the current options are deleting every single data item by hand, or deleting everything by deleting your account. Under the GDPR I had expected more control over my data on FB.

    It also seems they keep the design imbalanced, favouring ‘let us do anything’ as the simplest route for users to click through, presenting other options in a very low-key way, and still not making the account deletion option directly accessible in your settings.

    They may or may not be deemed to have done enough towards implementing GDPR by the data protection authorities in the EU after May 25th, but that’s of little use to anyone now.

    So my intention to delete my FB history still means the full deletion of my account. Which will be effective end of this week, when the 14 day grace period ends.

    I disengaged from Facebook (FB) last October, mostly because I wanted to create more space for paying attention, and for active, not merely responsive, reflection and writing, and had realised that the balance between the beneficial and destructive aspects of FB had tilted too far to the destructive side.

    My intention was to keep my FB account, as it serves as a primary channel to some professional contacts and groups, and FB Messenger is the primary channel for some. However, I wanted to get rid of my FB history: all the likes, birthday wishes etc. Deleting material is possible, but the implementation is completely impractical: every element needs to be deleted separately. Every like needs to be unliked, every comment deleted, and every posting on your own wall or someone else’s wall not just deleted but the deletion confirmed as well. There’s no bulk deletion option. I tried a Chrome plugin that promised to go through the activity log and ‘click’ all those separate delete buttons, but it didn’t work. The result is that deleting your data from Facebook means deleting every single thing you ever wrote or clicked, which can easily take 30 to 45 minutes for just a single month’s worth of likes and comments. Now aggregate that over the number of years you actively used FB (about 5 years in my case, after 7 years of passive usage).
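To get a feel for why manual deletion is impractical, a back-of-the-envelope estimate from the figures above (30 to 45 minutes per month, roughly 5 years of active use):

```python
# Estimate of total manual deletion time, using the figures above:
# 30-45 minutes per month of likes/comments, ~5 years of active use.

minutes_per_month_low, minutes_per_month_high = 30, 45
months_active = 5 * 12  # ~5 years of active posting

low_hours = minutes_per_month_low * months_active / 60
high_hours = minutes_per_month_high * months_active / 60
print(f"{low_hours:.0f} to {high_hours:.0f} hours of clicking")
```

That is a full working week, at minimum, spent clicking delete buttons one by one.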

    The only viable path to delete your FB data therefore is currently to delete the account entirely. I wonder if it will be different after May, when the GDPR is fully enforced.

    Not that deletion of your account is easy either. You don’t have full control over deletion. The link to do so is not available in your settings interface, but only through the help pages, and it is presented as submitting a request. After you confirm deletion, you receive an e-mail that deletion of your data will commence after 14 days. Logging back in in that period stops the clock. I suspect this will no longer be enough when the GDPR enters into force, but it is what it currently is.

    Being away from FB for a longer time, with the account deactivated, had the effect that when I did log back in (to attempt to delete more of my FB history), the FB timeline felt very bland. Much like how watching tv was once not to be missed, and then it wasn’t missed at all. This made me realise that saying FB was the primary channel for some contacts which I wouldn’t want to throw away, might actually be a cop-out, the last stand of FOMO. So FB, by making it hard to delete data while keeping the account, made it easy to decide to delete my account altogether.

    Once the data has been deleted (which can take up to 90 days according to FB after the 14 day grace period), I might create a new account, with which to pursue the benefits of FB, but avoid the destructive side and with 12 years of Facebook history wiped. Be seeing you!


    FB’s mail confirming they’ll delete my account by the end of April.

    Stephanie Booth, a long-time blogging connection, has been writing about reducing her Facebook usage and increasing her blogging. At one point she says

    As the current “delete Facebook” wave hits, I wonder if there will be any kind of rolling back, at any time, to a less algorithmic way to access information, and people. Algorithms came to help us deal with scale. I’ve long said that the advantage of communication and connection in the digital world is scale. But how much is too much?

    I very much still believe there’s no such thing as information overload, and fully agree with Stephanie that the possible scale of networks and connections is one of the key affordances of our digital world. My RSS-based filtering, as described in 2005, worked better when dealing with more information than with less. Our information strategies need to reflect, and be part of, the underlying complexity of our lives.

    Algorithms can help us with that scale, just not the kind of algorithms FB deploys around us. For algorithms to help, like any tool, they need to be ‘smaller’ than us, as I wrote in my networked agency manifesto. We need to be able to control their settings, tinker with them, deploy them and stop them as we see fit. The current application of algorithms, as they usually need lots of data to perform, more or less demands a centralised platform like FB to work. The algorithms that will really help us scale will be the ones we can use for our own particular scaling needs. For that, the creation, maintenance and usage of algorithms needs to have a much lower threshold than it has now. I placed algorithms in my ‘agency map’ for that reason.
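As an illustration of what an algorithm ‘smaller’ than us might look like, here is a minimal sketch of a self-controlled feed filter: every rule sits in a plain structure the user edits directly, and nothing happens at an opaque back-end. The keywords and weights are entirely made up for illustration, not anything from my actual 2005 setup.

```python
# A minimal, user-controlled filtering sketch: every rule is visible
# and editable by the person running it, unlike a platform's opaque
# timeline algorithm. Keywords and weights here are invented examples.

MY_RULES = {
    "boost_keywords": {"federation": 2.0, "agency": 1.5},
    "mute_keywords": {"outrage", "viral"},
    "minimum_score": 1.0,
}

def score(item_title: str, rules: dict) -> float:
    """Score a feed item; the user can change the rules at any time."""
    title = item_title.lower()
    if any(word in title for word in rules["mute_keywords"]):
        return 0.0  # muted items drop out entirely
    total = 1.0  # baseline: everything passes unless muted
    for word, weight in rules["boost_keywords"].items():
        if word in title:
            total += weight
    return total

def filter_feed(titles, rules):
    """Keep only items that meet the user's own threshold."""
    return [t for t in titles if score(t, rules) >= rules["minimum_score"]]
```

The point is not the scoring logic, which is trivial, but the locus of control: deploying, altering and stopping the filter are all in the hands of the person it serves.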

    Going back to a less algorithmic way of dealing with information isn’t an option, nor something to desire, I think. But we do need algorithms that really serve us and perform to our information needs. We need fewer algorithms that purport to aid us in dealing with the daily river of newsy stuff, but really commoditise us at the back-end.

    As a next step in rethinking my approach to using Facebook, I have started deleting my Facebook history. FB only lets you delete things by hand: posting by posting, like by like, comment by comment. Which takes about as long as, or longer than, the original time spent posting or liking. So I am using a Chrome plugin to do it for me by pretending to be me, going through all the delete and unlike links. I’m currently deleting 2014 data, to see how well that works. 2014 is the first full year in which I posted more than just the RSS feed of my blogposts, whereas 2013 and the years before, back to October 2006, basically only contain my RSS feed, which holds public material anyway.
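Such a plugin essentially walks the activity log and ‘clicks’ each delete control. A rough sketch of the first half of that job, collecting the per-item delete links from a page, assuming entirely hypothetical markup (the class name below is invented; Facebook’s real markup is different and changes often, which is exactly why such plugins break):

```python
# Sketch: collect delete/unlike action links from an activity-log page.
# The CSS class name "action-delete" is invented for illustration; a real
# plugin would have to target whatever markup Facebook currently uses.
from html.parser import HTMLParser

class DeleteLinkCollector(HTMLParser):
    """Gathers hrefs of anchors marked as delete/unlike controls."""

    def __init__(self):
        super().__init__()
        self.links = []

    def handle_starttag(self, tag, attrs):
        attrs = dict(attrs)
        # Hypothetical marker class for delete/unlike controls.
        if tag == "a" and "action-delete" in attrs.get("class", ""):
            self.links.append(attrs.get("href"))

def collect_delete_links(page_html: str) -> list:
    parser = DeleteLinkCollector()
    parser.feed(page_html)
    return parser.links
```

The second half of the job, triggering each link and confirming the deletion dialog, is what makes these plugins slow and fragile in practice.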