Some things I thought worth reading in the past days

  • A good read on how machine learning (ML) currently merely obfuscates human bias, by moving it into the training data and the code, to arrive at peace of mind through pretend objectivity. By claiming that it’s ‘the algorithm deciding’ you turn ML into a kind of digital alchemy. It introduced some fun terms to me, like fauxtomation, and Potemkin AI: Plausible Disavowal – Why pretend that machines can be creative?
  • These new Google patents show how problematic the current smart home efforts are, including their precursors, the Alexa and Echo microphones in your house. They strip you of agency instead of providing it. These particular patents also nudge you to treat your children much the way surveillance capitalism treats you: as a suspect to be watched, with relationships denuded of the subtle human capability to trust. Agency only comes from being in full control of your tools. Adding someone else’s tools (here not just Google’s but your health insurer’s, your landlord’s etc.) to your home doesn’t make it smart, but turns it into a self-censorship-promoting escape room. A fractal of the panopticon. We need to start designing more technology that is based on distributed use, not on a centralised controller: Google’s New Patents Aim to Make Your Home a Data Mine
  • An excellent article by the NYT about Facebook’s slide to the dark side. This is what happens when the student dorm room “we didn’t realise, we messed up, but we’ll fix it for the future” defence fails, and you weaponise your own data-driven machine against its critics, thus proving those critics right. Weaponising your own platform isn’t surprising, but it is very sobering and telling. Will it be a tipping point in how the public views FB? Delay, Deny and Deflect: How Facebook’s Leaders Fought Through Crisis
  • Some takeaways from the article just mentioned that we should keep top of mind when interacting with or talking about Facebook: FB knew very early on about being used to influence the US 2016 election and chose not to act. FB feared backlash from specific user groups and opted to unevenly enforce their terms of service/community guidelines. Cambridge Analytica is not an isolated abuse, but a concrete example of the wider issue. FB weaponised their own platform to oppose criticism: How Facebook Wrestled With Scandal: 6 Key Takeaways From The Times’s Investigation
  • There really is no plausible deniability for FB’s execs on their “in-house fake news shop”: Facebook’s Top Brass Say They Knew Nothing About Definers. Don’t Believe Them. So when you do need to admit it, you fall back on the ‘we messed up, we’ll do better going forward’ tactic.
  • As Aral Balkan says, that’s the real issue at hand, because “Cambridge Analytica and Facebook have the same business model. If Cambridge Analytica can sway elections and referenda with a relatively small subset of Facebook’s data, imagine what Facebook can and does do with the full set.”: We were warned about Cambridge Analytica. Why didn’t we listen?
  • [update] Apparently all the commotion is causing Zuckerberg to think FB is ‘at war’, with everyone it seems, which is problematic for a company whose mission is to open up and connect the world, and which is based on a perception of trust. A bunker mentality also probably doesn’t bode well for FB’s corporate culture and hence its future: Facebook At War.

Peter in his blog pointed to a fascinating posting by Robin Sloan about ‘sentence gradients’. His posting describes how he created a tool that makes gradients out of text, much like the color gradients we know. It uses neural networks (neuronal networks we called them when I was at university). Neural networks, in other words machine learning, are used to represent texts as numbers. (Color gradients can be expressed as numbers too, e.g. along just one dimension; if you keep adding dimensions you can represent things that branch off in multiple directions as numbers as well.) Sentences are more complex to represent numerically, but once you can, it is possible, just like with colors, to find sentences that lie numerically between a starting sentence and an ending sentence. Robin Sloan demonstrates the code for it in his blog (go there and try it!), and it creates fascinating results. A rough sketch of the idea follows below.
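
To make the mechanics a bit more tangible, here is a minimal sketch of that interpolation step. This is my own illustration, not Robin Sloan’s code: I’m assuming an off-the-shelf sentence-embedding model (sentence-transformers with the ‘all-MiniLM-L6-v2’ model) to turn sentences into vectors, and instead of generating new sentences from the model’s latent space, as his tool does, I simply pick the closest sentence from a small pool of candidates for each in-between point.

```python
# Sketch only: sentence 'gradients' by interpolating between two sentence vectors.
# Assumptions: the sentence-transformers library and the 'all-MiniLM-L6-v2' model;
# in-between sentences are picked by nearest neighbour from a candidate pool rather
# than generated by the model, unlike Sloan's actual tool.
import numpy as np
from sentence_transformers import SentenceTransformer

model = SentenceTransformer("all-MiniLM-L6-v2")

def sentence_gradient(start, end, candidates, steps=5):
    """Return `steps` sentences that lie numerically between `start` and `end`."""
    start_vec, end_vec = model.encode([start, end])
    pool_vecs = model.encode(candidates)

    gradient = []
    for t in np.linspace(0.0, 1.0, steps):
        # linear interpolation in embedding space, like stepping through a color gradient
        point = (1 - t) * start_vec + t * end_vec
        # cosine similarity of the interpolated point to every candidate sentence
        sims = pool_vecs @ point / (
            np.linalg.norm(pool_vecs, axis=1) * np.linalg.norm(point)
        )
        gradient.append(candidates[int(np.argmax(sims))])
    return gradient

# Example: interpolate between a first and last line, drawing the in-between
# steps from a small pool of (made-up) candidate sentences.
pool = [
    "You were gone before I had thought of it.",
    "It was always your way to leave without a word.",
    "Your meaning seems to me just as it used to be.",
    "Saying good-bye never seemed worth the trouble to you.",
]
print(sentence_gradient(
    "It was your way, my dear, to vanish without a word.",
    "Good-bye is not worth while.",
    pool,
))
```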

Mostly the results are fascinating, I think, because our minds are hardwired to find meaning. So when we see a list of sentences we want, we need, we very much need, to find the intended meaning that turns that list into a text.

I immediately thought of other texts that are sometimes harder to fully grasp, but where you know or assume there must be deeper meaning: poems.

So I took a random poem from one of Elmine’s books, and entered the first and last sentence into the tool to make a sentence gradient.

The result was:

I think it is a marvellous coincidence that the word Ceremony comes up.
The original poem is by Thomas Hardy, and titled ‘Without Ceremony’. (Hardy died in 1928, so the poem is in the public domain and can be shown below.)

Without Ceremony

It was your way, my dear,
To vanish without a word
When callers, friends, or kin
Had left, and I hastened in
To rejoin you, as I inferred.

And when you’d a mind to career
Off anywhere – say to town –
You were all on a sudden gone
Before I had thought thereon
Or noticed your trunks were down.

So, now that you disappear
For ever in that swift style,
Your meaning seems to me
Just as it used to be:
‘Good-bye is not worth while’