Another good find by Neil Mather for me to read a few times more. My first reaction is that in my mind p2p networks weren’t primarily about evading surveillance, evading copyright, or maintaining anonymity, but about network resilience and not having someone with power over the ‘off-switch’ for the entire network. These days surveillance and anonymity are more important, and should get more attention in the design stage.

I find it slightly odd that the dark web and e.g. Tor aren’t mentioned in any meaningful way in the article.

Another element I find odd is how the author talks about extremists using federated tools: “Can or should a federated network accept ideologies that are antithetical to its organic politics? Regardless of the answer, it is alarming that the community and its protocol leadership could both be motivated by a distrust of centralised social media, and be blindsided by a situation that was inevitable given the common ground found between ideologies that had been forced from popular platforms one way or another.”
It ignores that by going the federated route extremists lose two things they enjoyed on centralised platforms: amplification and being linked to the mainstream. In a federated setting I, with my personal instance, and every other instance decide for ourselves whom to federate with or not. There’s nothing for ‘a federated network to accept’; each instance does its own accepting. There’s no algorithmic rage-engine to amplify the extreme. There’s no standpoint for ‘the federated network’ to take, just nodes doing their own thing. Power at the edges.
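A minimal sketch of what per-instance acceptance looks like in practice (hypothetical code, not Mastodon’s or anyone’s actual implementation): every instance admin keeps their own blocklist, and the federation decision never rises above the individual node.

```python
# Hypothetical sketch: federation is accepted or refused per instance,
# never by 'the network' as a whole. Names and logic are illustrative.

BLOCKED_DOMAINS = {"extremist.example"}  # curated by this instance's admin alone

def accepts_federation(actor: str) -> bool:
    """Decide locally whether to federate with the actor's home instance."""
    domain = actor.rsplit("@", 1)[-1]
    return domain not in BLOCKED_DOMAINS

# Each instance runs its own copy of this check against its own blocklist,
# so there is no network-wide standpoint to take: power at the edges.
```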

Also I think that some of the vulnerabilities and attack surfaces listed (Napster, Pirate Bay) build on the one aspect in each case that still had a centralised nature, that still held some power in a centre.

Otherwise good read, with good points made that I want to revisit and think through more.

Bookmarked This is Fine: Optimism & Emergency in the P2P Network

…driven by the desire for platform commons and community self-determination. These are goals that are fundamentally at odds with – and a response to – the incumbent platforms of social media, music and movie distribution and data storage. As we enter the 2020s, centralised power and decentralised communities are on the verge of outright conflict for the control of the digital public space. The resilience of centralised networks and the political organisation of their owners remains significantly underestimated by protocol activists. At the same time, the decentralised networks and the communities they serve have never been more vulnerable. The peer-to-peer community is dangerously unprepared for a crisis-fuelled future that has very suddenly arrived at their door.

Nick Punt writes a worthwhile post (found via Roland Tanglao): “De-Escalating Social Media: Designing humility and forgiveness into social media products”.

He writes

This is why it’s my belief that as designed today, social media is out of balance. It is far easier to escalate than it is to de-escalate, and this is a major problem that companies like Twitter and Facebook need to address.

This got me thinking about what particular use cases need de-escalation, and whether there’s something simple we can do to test the waters and address these types of problems.

And goes on to explore how to create a path for admitting mistakes on Twitter. This currently isn’t encouraged by Twitter’s design. You see no social reinforcement, as no others visibly admit mistakes. You do see many people piling onto someone for whatever perceived slight, and you do see people’s reflex of digging in when attacked.

Punt suggests three bits of added functionality for Twitter:

  • The ability to add a ‘mea culpa’ to a tweet in the shape of “@ton_zylstra indicated they made a mistake in this tweet”. Doing that immediately stops the amplification of those messages: no more replies, likes or retweets without comment. Retweet with comment is still possible, to amplify the correction as opposed to the original message.
  • Surfacing corrections: those that have seen the original tweet in their timelines will also get presented with the correction.
  • Enabling forgiveness: works just like likes, but then to forgive the original poster for the mistake, as a form of positive reinforcement.
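The first and third suggestions can be made concrete as a small data model (a hypothetical sketch of the proposed behaviour; Twitter’s actual internals are obviously not public, and the names here are mine):

```python
from dataclasses import dataclass, field

@dataclass
class Tweet:
    author: str
    text: str
    mea_culpa: bool = False               # author has flagged this tweet as a mistake
    forgiven_by: set = field(default_factory=set)

    def can_amplify(self) -> bool:
        # After a mea culpa, replies, likes and plain retweets stop;
        # only retweet-with-comment remains, amplifying the correction.
        return not self.mea_culpa

    def forgive(self, user: str) -> None:
        # Forgiveness works like a like, as positive reinforcement
        # for admitting the mistake.
        if self.mea_culpa:
            self.forgiven_by.add(user)

tweet = Tweet("ton_zylstra", "a tweet with a mistake in it")
tweet.mea_culpa = True      # author adds the mea culpa
tweet.forgive("some_reader")
```

Surfacing the correction to those who saw the original (the second suggestion) is a timeline concern and falls outside this sketch.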

I like this line of thinking, although I think it won’t be added to existing siloed networks. This type of nudging towards constructive behaviour, as well as adding specific types of friction, is however of interest. Maybe it is easier for other platforms and newer players to adopt as a distinguishing feature, e.g. in Mastodon.

Of course it’s in direct conflict with FB’s business model but

social networks should reintroduce friction into their sharing mechanisms. Think of it as the digital equivalent of social distancing.

makes a lot of sense otherwise. There’s no viable path in doing only content moderation or filtering. Another option is breaking monopolistic silos up by requiring open APIs for them to be seen as true platforms. That too will reduce amplification, as it puts the selection into the hands of a wider variety of clients built on top of such a true platform. Of course that too is anathema to their business model.

Came across this article from last year, The new dot com bubble is here: it’s called online advertising. It takes a look at online advertising’s effectiveness. It seems the selection effect is strong but not accounted for, because the metrics are measured only after it has already occurred.

“It is crucial for advertisers to distinguish such a selection effect (people see your ad, but were already going to click, buy, register, or download) from the advertising effect (people see your ad, and that’s why they start clicking, buying, registering, downloading).”

They don’t.

All the data gathering, all the highly individual targeting, apparently means advertisers are reaching people they would already reach. Now people just click on a link the advertising company is paying extra for.
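The selection effect can be illustrated with a toy simulation (entirely made-up probabilities, no relation to eBay’s or anyone’s actual data): most recorded ‘ad clicks’ come from people who intended to visit anyway, so switching ads off barely moves total traffic, even though click metrics would credit the ads with all of it.

```python
import random

random.seed(42)  # fixed seed so the toy run is reproducible

def visits(ads_on: bool, n_users: int = 100_000) -> int:
    """Count site visits in a toy population (illustrative probabilities)."""
    total = 0
    for _ in range(n_users):
        already_going = random.random() < 0.05          # selection effect
        persuaded = ads_on and random.random() < 0.001  # true advertising effect
        if already_going or persuaded:
            total += 1
    return total

with_ads = visits(True)
without_ads = visits(False)
# The gap between the two is small: the ads mostly intercept traffic
# that would have arrived through ordinary links anyway.
```

This is essentially what the eBay experiment described below measured with a real holdout: turn the ads off and see whether the traffic actually drops.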

For eBay there was an opportunity in 2012 to experiment with what would happen if they stopped online advertising. Three months later, the results were clear: all the traffic that had previously come from paid links was now coming in through ordinary links. Tadelis had been right all along. Annually, eBay was burning a good $20m on ads targeting the keyword ‘eBay’. (Blake et al 2015, Econometrica Vol. 83, 1, pp 155-174. DOI 10.3982/ECTA12423, PDF on Sci-Hub)

It’s about a market of a quarter of a trillion dollars governed by irrationality. It’s about knowables, about how even the biggest data sets don’t always provide insight.

So, the next time when some site wants to emotionally blackmail you to please disable your adtech blockers, because they’ve led themselves to believe that undermining your privacy is the only way they can continue to exist, don’t feel guilty. Adtech has to go, you’re offering up your privacy for magical thinking. Shields up!

Ian Forrester over at Cubic Garden has submitted a GDPR request to Clearview AI, the alt-right linked company that is hawking its facial recognition database (based on scraped online images) to law enforcement as well as commercial outfits. Should be interesting to follow along. Recently IBM stopped facial recognition work (previously they seemed to me not up to speed with CC licensing), and others like Amazon and Microsoft did too when it comes to law enforcement. Facial recognition is extremely sensitive to bias.

Facial recognition 1, by EFF, license CC BY

Long-winded, but the point is this: to stop us externalising the destructive costs of our societies onto the future, make that future the litmus test of everything, in the form of benchmarking every decision on how it impedes or improves the rights and lives of children, putting their human rights as the keystone of every decision.

Bookmarked A simple plan for repairing our society: we need new human rights, and this is how we get them. by Vinay Gupta

It’s very hard to get adults to reason properly about the human rights of other adults, because we always tend to say “well, their conditions are their fault.” Lot of black people wind up in jail? “That’s either bad policing, or bad behavior, or both” says the adult analysis. “Lot of black children are getting substandard educations” well, this is clearly not their fault. You can say their parents are responsible, and basically abandon these kids to the mercy of their environment, whatever random spot they were born in, or you can say “the children have fundamental rights as children and these rights require us to act on their behalf as a society” and, for example, really seriously invest in and fix education. You see what I’m saying? We can get leverage on issues like race in America by using the human rights of children, free from moral responsibility for their fates, as a universal standard by which to measure our obligations. The same kind of logic applies to the environment: “is this commons being handed over to the children, its future owners, intact, or is it being degraded in a manner that violates their rights.” That gets you concepts like natural parks protection from fracking etc. very nicely.

In short, making the rights of children fully explicit, and enshrining them in our legal systems may be the shortest path forwards to creating a world in which we, as adults, are also protected. But the children first: none of this is their fault, and they should be protected as best we can.

And a rights framework for children, something simple, reasonably universal, clear and easy to work with is certainly possible. We can do this.