Some things I thought worth reading in the past few days

  • A good read on how machine learning (ML) currently merely obscures human bias by moving it into the training data and the code, buying peace of mind through pretend objectivity. Claiming that it's 'the algorithm deciding' turns ML into a kind of digital alchemy. It introduced some fun terms to me, like fauxtomation and Potemkin AI: Plausible Disavowal – Why pretend that machines can be creative?
  • These new Google patents show how problematic the current smart home efforts are, including their precursors, the Alexa and Echo microphones in your house. They are stripping you of agency, not providing it. These particular ones also nudge you to treat your children much the way surveillance capitalism treats you: as a suspect to be watched, relationships denuded of the subtle human capability to trust. Agency only comes from being in full control of your tools. Adding someone else's tools (here not just Google's but your health insurer's, your landlord's etc.) to your home doesn't make it smart but turns it into a self-censorship-promoting escape room. A fractal of the panopticon. We need to start designing more technology based on distributed use, not on a centralised controller: Google's New Patents Aim to Make Your Home a Data Mine
  • An excellent NYT article about Facebook's slide to the dark side: when the student dorm-room excuse ("we didn't realise, we messed up, but we'll fix it for the future") fails as a defence, you weaponise your own data-driven machine against your critics, thus proving those critics right. Weaponising your own platform isn't surprising, but it is very sobering and telling. Will it be a tipping point in how the public views FB? Delay, Deny and Deflect: How Facebook's Leaders Fought Through Crisis
  • Some takeaways from the article just mentioned that we should keep top of mind when interacting with or talking about Facebook: FB knew very early on about being used to influence the US 2016 election and chose not to act. FB feared backlash from specific user groups and opted to unevenly enforce its terms of service/community guidelines. Cambridge Analytica is not an isolated abuse, but a concrete example of the wider issue. FB weaponised its own platform to oppose criticism: How Facebook Wrestled With Scandal: 6 Key Takeaways From The Times's Investigation
  • There really is no plausible deniability for FB's execs regarding their "in-house fake news shop": Facebook's Top Brass Say They Knew Nothing About Definers. Don't Believe Them. So when you do need to admit it, you fall back on the 'we messed up, we'll do better going forward' tactic.
  • As Aral Balkan says, that’s the real issue at hand because “Cambridge Analytica and Facebook have the same business model. If Cambridge Analytica can sway elections and referenda with a relatively small subset of Facebook’s data, imagine what Facebook can and does do with the full set.“: We were warned about Cambridge Analytica. Why didn’t we listen?
  • [update] Apparently all the commotion is causing Zuckerberg to think FB is 'at war', with everyone it seems, which is problematic for a company whose mission is to open up and connect the world, and which is based on a perception of trust. A bunker mentality also probably doesn't bode well for FB's corporate culture, and hence its future: Facebook At War.

Something that strikes me as odd in how fake news is being addressed is that the focus lies almost exclusively on information production and distribution, not on the skills and strategies of the entity taking information in. Partly this is understandable, as forcing transparency on how your information might have been influenced is helpful (especially to see whether what you are presented with is the same as what others / everyone else is presented with). But otherwise it's as if those receiving information are treated as passive consumers, not as agents in their own right.

“Our best defense against hostile influence, whatever its vector, is to invest in critical thinking skills at all levels of the population so that outlandish claims are seen for what they truly are: emotional exploitation for political or monetary gain”, wrote Nina Jankowicz on how Finnish society instills critical thinking skills.

The question of course is whether governments truly want to inoculate society, or merely want to deflect disinformation and manipulation from specific sources. If the latter, it's easier to understand where the focus on technology-oriented solutions, or ones that presume centralised efforts, comes from.

In networks, smartness needs to be at the endpoints, not in the core. There's a lack of attention to the information strategies and the filtering and interpreting tactics of those receiving information. Crap detection skills need to be developed, for instance, and societies have a duty to self-inoculate. I think the obligation to explain* applies here too: showing others what you do and how.

Here’s a list of postings about my information habits. They’re not fixed, and currently I’m in the process of describing them again, and taking a critical look at them. What are your information habits, have you ever put them into words?

*The obligation to explain is something I've adopted from my friend Peter Rukavina: "The benefits of a rich, open pool of knowledge are so great that those who have learned have an obligation to share what they've learned."

Some links I thought worth reading the past few days

  • On how blockchain attempts to create fake scarcity in the digital realm, and why banks etc. are therefore all over it: On scarcity and the blockchain by Jaap-Henk Hoepman
  • Doc Searl’s has consistently good blogposts about the adtech business, and how it is detrimental to publishers and citizens alike. In this blogpost he sees hope for publishing. His lists on adverts and ad tech I think should be on all our minds: Is this a turning point for publishing?
  • Doc Searl’s wrote this one in 2017: How to plug the publishing revenue drain – The Graph – Medium
  • Offline figures prominently in my information routines, but it usually doesn't in my tools. It turns out there is a movement to put offline front and centre as a design principle: Designing Offline-First Web Apps
  • Hoodie is a backendless tool for building web apps, with an offline-first starting point: hood.ie intro
  • A Berlin-based company that puts offline first as its foremost design principle: Neighbourhoodie – Offline First
  • And then there are Service Workers, about which Jeremy Keith has just published a book: Going Offline
  • Haven’t tested it yet, but this type of glue we need much more of, to reduce the cost of leaving silos, and to allow people to walk several walled gardens at the same time as a precursor to that: Granary