Today is Global Ethics Day. My colleague Emily wanted to mark it, given our increasing involvement in information ethics, and organised an informal online get-together, a Global Ethics Day party, with a focus on data ethics. We showed the participants our work on an ethical reference for using geodata, and the thesis work our newest colleague Pauline finished this spring on ethical leadership within municipal governments. I was asked to kick the event off with some remarks to spark discussion.

I took the opportunity to define and launch a new moniker, Ethics as a Practice (EaaP).(1)
The impulse for doing that comes from two things I dislike about how I see organisations deal with the ethics of working with data, and of using data to directly inform decisions.

The first is treating the philosophy of technology, and information and data ethics in general, as a purely philosophical and scientific debate. Due to that abstraction it has no immediate bearing on the things organisations, I and others do in practice. Worse, it regularly approaches actual problems purely from that abstraction, ending up posing ethical questions I think are irrelevant to the reality on the ground. An example is MIT’s notion that classical trolley problems have bearing on how to create autonomous vehicles. That seems to stem from not appreciating that saying ‘autonomous vehicle’ does not mean the vehicle is an independent actor to which blame etc. can be assigned: ‘autonomous’ merely means the vehicle is independent of its previous driver, while otherwise remaining fully embedded in a wide variety of other dependencies. Not autonomous at all, no ghost in the machine.


The campus of University of Twente, where they do some great ethics work w.r.t. technology. But in itself it’s not sufficient. (image by me, CC BY SA)

The second is seeing ‘Ethics by design’ as a sufficient fix. I dislike that because it carries two assumptions that usually go unacknowledged. In practice, ethics by design seems to be perceived as ethics being a concern only in the design phase of a new technology, process, approach or method. Yet at least 95% of what organisations and professionals deal with isn’t new but existing, and as a result it remains out of scope of ethical considerations. That amounts to assuming that everything which already exists has been thoroughly ethically evaluated, which isn’t true at all, not even for existing data collection. Ethics plays no role in existing data governance, for instance, and data governance usually doesn’t cover data collection choices or deletion/archiving.
The other assumption conveyed by the term ‘ethics by design’ is that once the design phase is completed, ethics has been sufficiently dealt with. The result, with 95% of our environment remaining the same, is that ethics by design is forward looking but not backwards compatible. Ethics by design is seen as doing enough, but it isn’t enough at all.


Ethics by design in itself does not provide absolution (image by Jordanhill School D&T Dept, license CC BY)

Our everyday actions and choices in our work are the expression of our individual and organisational values. The ‘ethics by design’ label sidesteps that everyday reality.

Taken together, ethics as academic endeavour and ethics by design result in ethics basically being outsourced to someone specific outside or inside the organisation, or at best to a specific person in your team, and it starts being perceived as something external delivered to your work reality. Ethics as a Service (EaaS), one might say: a service that takes care of the ethical aspects. That perception means you yourself can stop thinking about ethics; it has been allocated, and you can just take its results and run with them. The privacy officer does privacy, the QA officer does quality assurance, the CISO does information security, and the ethics officer covers everything ethical… meaning I can carry on as usual. (Enron, for example, had a Code of Ethics, but it had no bearing on the practical work or the decisions taken.)

That perception of EaaS, ethics as an externally provided service to your work, has real detrimental consequences. It easily becomes an outside irritant to the execution of your work: someone telling you ‘no’ when you really want to do something, or a bureaucratic template to fill in so you can claim compliance (much as privacy, quality and regulations are often treated). Ticking the boxes on a checklist without actual checks. That way it becomes something overly reductionist, which denies and ignores the complexity of everyday knowledge work.


Externally applied ethics become an irritant (image by Iain Watson, license CC BY)

Ethical questions and answers are actually an integral part of the complexity of your work. Your work is the place where clear boundaries can be set (by the organisation, by general ethics, by law), ánd the place where you can notice as well as introduce behavioural patterns and choices. Complexity can only be addressed from within that complexity, not through outside intervention. Ethics therefore needs to be dealt with from within the complexity of actual work, as one of its ingredients.

Placing ethical considerations in the midst of the complexity of our work means that the spot where ethics is expressed in real work choices overlaps with where such aspects are considered. It makes EaaS as a stand-alone thing impossible, and instead brings those considerations into your everyday work, not as an external thing but as an ingredient.

That is what I mean by Ethics as a Practice: you use academic and organisational output, and ethics is considered in the design stage, but never in a way that absolves you of your professional responsibilities.
It still means setting principles and hard boundaries from the organisational perspective, but also ongoing active reflection on them and on the heuristics that guide your choices, and it actively seeks out good practice. It never assumes a yes or no to an ethical question by default, to be qualified or rationalised later, but neither does it approach those questions as neutral (since existing principles and boundaries are applied).(2) That way, (data) ethical considerations become an ethics of your agency as a professional, informing your ability to act. It embraces the actual complexity of issues, acknowledges that daily reality is messy, engages all relevant stakeholders, and deliberately seeks out a community of peers to spot good practices.

Ethics is part and parcel of your daily messy work, it’s your practice to hone. (image by Neil Cummings, license CC BY SA)

Ethics as a Practice (EaaP) is a call to see yourself as an ethics practitioner, and a member of a community of practice of such practitioners, not as someone ethics ‘is done to’. Ethics is part and parcel of your daily messy work, it’s your practice to hone. Our meet-up today was a step to have such an exchange between peers.

I ended my remarks with a bit of a joke, saying EaaP exists so you can always ‘do the next right thing’, a quote from a Disney movie my 4-year-old watches, and added a photo of a handwritten numbered list headed ‘things to do’ that I had visibly altered into a ‘right things to do’ list.

(1) the ‘..as a Practice’ notion I took from Anne-Laure Le Cunff’s Ness Labs posting that mentioned ‘playfulness as a practice’.
(2) not starting from yes or no, nor from a neutral position, taken from the mediation theory by University of Twente’s prof Peter Paul Verbeek

Protesters in Belarus are pulling off the masks of policemen, so that these men will think twice before being seen using violence on protesters. Behind that is an effort to then ferret out their names and personal details. There is a fine line to tread here between exposing policemen, to stop the dehumanisation of protesters that masks allow them, and that exposure escalating into vigilante violence. It does remind me of a tactic described in Cory Doctorow’s novel Walkaway, where doxxing policemen is used to create videos in which family members sympathetic to the cause ask them to stop the violence. It’s one thing to beat up someone anonymously while masked; it’s another to have your mother, brother, aunt or grandfather berate you for it in public media.

“The only way to stop violence is to pull off the masks, in both the literal and metaphorical sense. An officer who is no longer anonymous will think twice before he grabs, beats or kidnaps someone,” said the founder of Black Book of Belarus, a channel on the app Telegram devoted to “de-anonymising” police officers, with more than 100,000 subscribers.

Bookmarked This is Fine: Optimism & Emergency in the P2P Network
...driven by the desire for platform commons and community self-determination. These are goals that are fundamentally at odds with – and a response to – the incumbent platforms of social media, music and movie distribution and data storage. As we enter the 2020s, centralised power and decentralised communities are on the verge of outright conflict for the control of the digital public space. The resilience of centralised networks and the political organisation of their owners remains significantly underestimated by protocol activists. At the same time, the decentralised networks and the communities they serve have never been more vulnerable. The peer-to-peer community is dangerously unprepared for a crisis-fuelled future that has very suddenly arrived at their door.

Another good find by Neil Mather, for me to read a few more times. A first reaction I have is that in my mind p2p networks weren’t primarily about evading surveillance, evading copyright, or maintaining anonymity, but about network resilience and not having someone with power over an ‘off-switch’ for the entire network. These days surveillance and anonymity are more important, and should get more attention in the design stage.

I find it slightly odd that the dark web and e.g. TOR aren’t mentioned in any meaningful way in the article.

Another element I find odd is how the author talks about extremists using federated tools: “Can or should a federated network accept ideologies that are antithetical to its organic politics? Regardless of the answer, it is alarming that the community and its protocol leadership could both be motivated by a distrust of centralised social media, and be blindsided by a situation that was inevitable given the common ground found between ideologies that had been forced from popular platforms one way or another.”
It ignores that by going the federated route extremists lose two things they enjoyed on centralised platforms: amplification and being linked to the mainstream. In a federated setting each instance, including my own personal one, decides for itself whom to federate with. There’s nothing for ‘a federated network to accept’; each instance does its own accepting. There’s no algorithmic rage-engine to amplify the extreme. There’s no standpoint for ‘the federated network’ to take, just nodes doing their own thing. Power at the edges.
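That per-instance nature of the decision can be made concrete in a few lines. This is a toy sketch of my own, not code from any real ActivityPub server; the class and names are invented for illustration. The point it shows: each node keeps its own blocklist and consults no one else, so there is no place where ‘the network’ accepts or rejects anyone.

```python
# Toy illustration (not a real fediverse implementation): federation
# decisions are purely local. Each instance holds its own blocklist,
# and no global accept/reject for "the network" exists anywhere.

class Instance:
    def __init__(self, domain, blocked=None):
        self.domain = domain
        self.blocked = set(blocked or [])  # domains this instance refuses

    def accepts(self, other_domain):
        # A purely local decision; no other node is consulted.
        return other_domain not in self.blocked

# Two instances can make opposite choices about the same peer:
mine = Instance("zylstra.example", blocked={"extremist.example"})
other = Instance("elsewhere.example")

print(mine.accepts("extremist.example"))   # False: my instance defederates
print(other.accepts("extremist.example"))  # True: their choice, not mine
```

Whether a peer is acceptable is answered separately on every node, which is exactly why asking what ‘the federated network’ should accept is the wrong question.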

Also, I think some of the vulnerabilities and attack surfaces listed (Napster, Pirate Bay) hinge on the one aspect of those systems that was still centralised in nature, the part that still held some power in a centre.

Otherwise good read, with good points made that I want to revisit and think through more.

Nick Punt writes a worthwhile post (found via Roland Tanglao), “De-Escalating Social Media: Designing humility and forgiveness into social media products”.

He writes

This is why it’s my belief that as designed today, social media is out of balance. It is far easier to escalate than it is to de-escalate, and this is a major problem that companies like Twitter and Facebook need to address.

This got me thinking about what particular use cases need de-escalation, and whether there’s something simple we can do to test the waters and address these types of problems.

And he goes on to explore how to create a path for admitting mistakes on Twitter. This currently isn’t encouraged by Twitter’s design: you see no social reinforcement, as no others visibly admit mistakes. You do see many people piling onto someone for whatever perceived slight, and you do see people’s reflex of digging in when attacked.

Punt suggests three bits of added functionality for Twitter:

  • The ability to add a ‘mea culpa’ to a tweet, in the shape of “@ton_zylstra indicated they made a mistake in this tweet”. Doing that immediately stops the amplification of the message: no more replies, likes or plain retweets. Retweet-with-comment remains possible, so the correction, as opposed to the original message, can still be amplified.
  • Surfacing corrections: those that have seen the original tweet in their timelines will also get presented with the correction.
  • Enabling forgiveness: works just like likes, but then to forgive the original poster for the mistake, as a form of positive reinforcement.
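To see how the three pieces hang together, here is a hedged sketch of the rules as data and functions. All names are my own invention, not Twitter’s API or anything Punt specifies; it is only meant to show that the mechanism is small: a flag that freezes amplification, an audience set for surfacing the correction, and a forgiveness counter.

```python
# A sketch (invented names, not Twitter's API) of Punt's three suggestions:
# mea culpa, surfacing corrections, and forgiveness.

from dataclasses import dataclass, field

@dataclass
class Tweet:
    author: str
    text: str
    mea_culpa: bool = False                      # author marked this a mistake
    seen_by: set = field(default_factory=set)    # who saw the original tweet
    forgiven_by: set = field(default_factory=set)

def allowed_interactions(tweet):
    # A mea culpa freezes amplification: no replies, likes, or plain
    # retweets; quote-retweets stay allowed so the correction can spread.
    if tweet.mea_culpa:
        return {"quote_retweet", "forgive"}
    return {"reply", "like", "retweet", "quote_retweet"}

def correction_audience(tweet):
    # Everyone who saw the original gets shown the correction.
    return tweet.seen_by

def forgive(tweet, user):
    # Works like a like, but as positive reinforcement for admitting error.
    if tweet.mea_culpa:
        tweet.forgiven_by.add(user)
```

The design choice worth noticing is that nothing is deleted: the original stays visible, only its amplification paths are cut, which is what makes admitting a mistake cheap rather than punishing.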

I like this line of thinking, although I doubt it will be added to existing siloed networks. This type of nudging towards constructive behaviour, as well as adding specific types of friction, is of interest though. Maybe it is easier for other platforms and newer players, e.g. Mastodon, to adopt as a distinguishing feature.

Of course it’s in direct conflict with FB’s business model but

social networks should reintroduce friction into their sharing mechanisms. Think of it as the digital equivalent of social distancing.

makes a lot of sense otherwise. There’s no viable path in doing only content moderation or filtering. Another option is breaking the monopolistic silos up by requiring open APIs for them to count as true platforms. That too would reduce amplification, as it puts selection into the hands of a wider variety of clients built on top of such a true platform. Of course that too is anathema to their business model.

Came across this article from last year, The new dot com bubble is here: it’s called online advertising. It takes a look at online advertising’s effectiveness. The selection effect appears to be strong, but goes unaccounted for, because the metrics are gathered after it has already occurred.

“It is crucial for advertisers to distinguish such a selection effect (people see your ad, but were already going to click, buy, register, or download) from the advertising effect (people see your ad, and that’s why they start clicking, buying, registering, downloading).”

They don’t.

All the data gathering, all the highly individual targeting, apparently means advertisers are reaching people they would have reached anyway. Now those people just click on a link the advertising company is paying extra for.
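The selection effect is easy to demonstrate with a toy simulation, my own construction rather than anything from the article. Targeting here preferentially shows ads to people who already intend to buy, and the ad changes no one’s mind; yet the attribution metric, conversions among ad-clickers, looks spectacular compared to the base intent rate.

```python
# Toy simulation of a pure selection effect: ads reach mostly people who
# were going to buy anyway, so click-based metrics overstate ad impact.
# All numbers are illustrative assumptions, not figures from the article.

import random

random.seed(0)

def simulate(n=100_000, intent_rate=0.05, targeting_skew=10.0):
    clicks = purchases_via_click = purchases_total = 0
    for _ in range(n):
        intends = random.random() < intent_rate
        # Targeting skews ad clicks heavily towards people with intent.
        p_click = 0.5 if intends else 0.5 / targeting_skew
        clicked = random.random() < p_click
        # Pure selection effect: seeing the ad changes intent not at all.
        buys = intends
        clicks += clicked
        purchases_total += buys
        purchases_via_click += clicked and buys
    return clicks, purchases_via_click, purchases_total

clicks, via_click, total = simulate()
# Attribution credits the ad with every purchase that followed a click,
# even though removing all ads would not lose a single sale.
print(f"apparent conversion rate among clickers: {via_click / clicks:.2%}")
print(f"base intent rate in the population: {total / 100_000:.2%}")
```

The apparent conversion rate among clickers comes out far above the population’s base intent, purely because of who was shown the ad, which is the distinction between selection effect and advertising effect the article describes, and what eBay found when it paused its keyword ads.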

For eBay there was an opportunity in 2012 to experiment with what would happen if they stopped online advertising. Three months later, the results were clear: all the traffic that had previously come from paid links was now coming in through ordinary links. Tadelis had been right all along. Annually, eBay was burning a good $20m on ads targeting the keyword ‘eBay’. (Blake et al 2015, Econometrica Vol. 83, 1, pp 155-174. DOI 10.3982/ECTA12423, PDF on Sci-Hub)

It’s about a market of a quarter of a trillion dollars governed by irrationality. It’s about knowables, about how even the biggest data sets don’t always provide insight.

So, the next time when some site wants to emotionally blackmail you to please disable your adtech blockers, because they’ve led themselves to believe that undermining your privacy is the only way they can continue to exist, don’t feel guilty. Adtech has to go, you’re offering up your privacy for magical thinking. Shields up!