Hossein Derakhshan makes an important effort to find more precise language for misinformation (or rather mis-, dis- and mal-information). In this Medium article he takes a closer look at the different combinations of actors and targets, distinguishing between state actors, non-state entities and the public.

Table by Hossein Derakhshan, from his article DisInfo Wars

One of his conclusions is

…that from all the categories, those three where non-state organisations are targeted with dis-/malinformation (i.e. SN, NN, and PN) are the most effective in enabling the agents to reach their malicious goals. Best example is still how US and UK state organisations duped independent and professional media outlets such as the New York Times into selling the war with Iraq to the public.
[…]
The model, thus, encourages concentrating funds and efforts on non-state organisations to help them resist information warfare.

He goes on to say that protecting the public against such agents is too costly, or too complicated:

the public is easy to target but very hard (and expensive) to protect – mainly because of their vast numbers, their affective tendencies, and the uncertainty about the kind and degree of the impact of bad information on their minds

I feel this is where our individual civic duty comes into play as inoculation: doing crap detection, calling misinformation out when possible, or at the very least not spreading it.

Some of the things I found worth reading in the past few days:

  • Although this article seems to confuse regulatory separation with technological separation, it does make an attempt at formulating the geopolitical aspects of the internet and data: There May Soon Be Three Internets. America’s Won’t Necessarily Be the Best
  • Interesting, yet it basically boils down to actively exercising your ‘free will’. It assumes a blank slate for the hacking, as if I haven’t deliberately sought out information and contacts on certain topics, and then it suggests doing precisely that as a remedy. The key quote for me here is “Humans are hacked through pre-existing fears, hatreds, biases and cravings. Hackers cannot create fear or hatred out of nothing. But when they discover what people already fear and hate it is easy to push the relevant emotional buttons and provoke even greater fury. If people cannot get to know themselves by their own efforts, perhaps the same technology the hackers use can be turned around and serve to protect us. Just as your computer has an antivirus program that screens for malware, maybe we need an antivirus for the brain. Your AI sidekick will learn by experience that you have a particular weakness – whether for funny cat videos or for infuriating Trump stories – and would block them on your behalf.”: Yuval Noah Harari on the myth of freedom
  • This is an important issue, always. I recognise it from my work for the World Bank and UN agencies. Is what you’re doing actually helping, or is it shoring up authorities that don’t match your values? And are you able to recognise it, and withdraw, when you cross the line from the former to the latter? I’ve known entrepreneurs who kept a ban-list of client sectors, governments and companies, but often it isn’t that clear-cut. I’ve avoided engagements in various countries over the years, but every client engagement can be rationalised: How McKinsey Has Helped Raise the Stature of Authoritarian Governments, and, when the consequences come back to bite, Malaysia files charges against Goldman Sachs
  • This seems like a useful list to check for next books to read. I definitely enjoyed reading the work of Chimamanda Ngozi Adichie and Nnedi Okorafor last year: My year of reading African women, by Gary Younge

Some things I thought worth reading in the past days:

  • A good read on how machine learning (ML) currently merely obfuscates human bias, by moving it into the training data and the code, to arrive at peace of mind from pretend objectivity. By claiming that it’s ‘the algorithm deciding’, you turn ML into a kind of digital alchemy. It introduced some fun terms to me, like fauxtomation and Potemkin AI (see the sketch after this list for how bias migrates into training data): Plausible Disavowal – Why pretend that machines can be creative?
  • These new Google patents show how problematic the current smart home efforts are, including their precursors, the Alexa and Echo microphones in your house. They are stripping you of agency, not providing it. These particular ones also nudge you to treat your children much the way surveillance capitalism treats you: as a suspect to be watched, with relationships denuded of the subtle human capability to trust. Agency only comes from being in full control of your tools. Adding someone else’s tools (here not just Google’s, but your health insurer’s, your landlord’s etc.) to your home doesn’t make it smart, but turns it into a self-censorship-promoting escape room. A fractal of the panopticon. We need to start designing more technology based on distributed use, not on a centralised controller: Google’s New Patents Aim to Make Your Home a Data Mine
  • An excellent NYT article about Facebook’s slide to the dark side: when the student dorm-room excuse (“we didn’t realise, we messed up, but we’ll fix it for the future”) fails, you weaponise your own data-driven machine against its critics, thus proving those critics right. Weaponising your own platform isn’t surprising, but it is very sobering and telling. Will it be a tipping point in how the public views FB? Delay, Deny and Deflect: How Facebook’s Leaders Fought Through Crisis
  • Some takeaways from the article just mentioned that we should keep top of mind when interacting with or talking about Facebook: FB knew very early on about being used to influence the US 2016 election and chose not to act. FB feared backlash from specific user groups and opted to unevenly enforce its terms of service and community guidelines. Cambridge Analytica is not an isolated abuse, but a concrete example of the wider issue. FB weaponised its own platform to oppose criticism: How Facebook Wrestled With Scandal: 6 Key Takeaways From The Times’s Investigation
  • There really is no plausible deniability for FB’s execs on their “in-house fake news shop”: Facebook’s Top Brass Say They Knew Nothing About Definers. Don’t Believe Them. So when you have to admit it, you fall back on the ‘we messed up, we’ll do better going forward’ tactic.
  • As Aral Balkan says, that’s the real issue at hand, because “Cambridge Analytica and Facebook have the same business model. If Cambridge Analytica can sway elections and referenda with a relatively small subset of Facebook’s data, imagine what Facebook can and does do with the full set.”: We were warned about Cambridge Analytica. Why didn’t we listen?
  • [update] Apparently all the commotion is causing Zuckerberg to think FB is ‘at war‘, with everyone it seems, which is problematic for a company whose mission is to open up and connect the world, and whose business rests on a perception of trust. A bunker mentality also probably doesn’t bode well for FB’s corporate culture, and hence its future: Facebook At War.
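
To make the fauxtomation point from the Plausible Disavowal item above concrete, here is a minimal, hypothetical Python sketch of how human bias survives the move into ‘the algorithm’. The data, the groups and the decision rule are all invented for illustration; the point is that nothing in the pipeline removes the bias, it only relocates it from human decisions into the labels the model learns from.

```python
from collections import defaultdict

# Invented historical decisions made by biased humans: group "A"
# applicants were approved far more often than group "B" applicants
# with comparable qualification scores.
history = [
    # (group, score, approved_by_human)
    ("A", 7, True), ("A", 5, True), ("A", 4, True), ("A", 6, True),
    ("B", 7, False), ("B", 5, False), ("B", 6, True), ("B", 4, False),
]

# "Training": estimate the approval rate per group from the labels.
counts = defaultdict(lambda: [0, 0])  # group -> [approvals, total]
for group, _score, approved in history:
    counts[group][0] += approved
    counts[group][1] += 1

def model(group: str) -> bool:
    """Predict approval by majority vote within the group: the bias
    baked into the labels becomes the model's 'objective' rule."""
    approvals, total = counts[group]
    return approvals / total >= 0.5

# Two identical applicants, different groups: 'the algorithm decides'
# differently, because the human bias now lives in the training data.
print(model("A"))  # True  (4 of 4 approved in the history)
print(model("B"))  # False (1 of 4 approved in the history)
```

A real ML system would fit a classifier rather than count approval rates, but the mechanism is the same: if the labels encode discrimination, the learned model reproduces it, now with a veneer of objectivity.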