My colleagues Emily and Frank have over the past months been contributing our company’s work on ethical data use to the W3C’s Spatial Data on the Web Interest Group.

The W3C has now published a draft document on the responsible use of spatial data, inviting comments and feedback. It is not a normative document but aims to promote discussion. Comments can be filed directly through the Github link mentioned, or through the group’s mailing list (subscribe, archives).

The purpose of this document is to raise awareness of the ethical responsibilities of both providers and users of spatial data on the web. While there is considerable discussion of data ethics in general, this document illustrates the issues specifically associated with the nature of spatial data and both the benefits and risks of sharing this information implicitly and explicitly on the web.

Spatial data may be seen as a fingerprint: for an individual, every combination of their location in space, time, and theme is unique. The collection and sharing of individuals’ spatial data can lead to beneficial insights and services, but it can also compromise citizens’ privacy. This, in turn, may make them vulnerable to governmental overreach, tracking, discrimination, unwanted advertising, and so forth. Hence, spatial data must be handled with due care. But what is careful, and what is careless? Let’s discuss this.
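To make the fingerprint point a bit more tangible, here is a minimal, hypothetical sketch (all persons, places, and observations below are made up) of how just a few coarse location-and-time points can act as a quasi-identifier in a set of movement records:

```python
# Toy illustration of spatio-temporal data as a quasi-identifier.
# All names, places and observations below are invented for this example.

records = {
    "person_a": {("station", 8), ("office_district", 9), ("gym", 18), ("suburb_3", 23)},
    "person_b": {("station", 8), ("office_district", 9), ("suburb_1", 22)},
    "person_c": {("harbour", 7), ("office_district", 9), ("gym", 18)},
}

def matching_persons(observations):
    """Return everyone whose movement records contain all given (area, hour) points."""
    return [person for person, obs in records.items() if observations <= obs]

# A single coarse observation still matches several people...
print(matching_persons({("office_district", 9)}))  # ['person_a', 'person_b', 'person_c']

# ...but a handful of points already singles out one individual.
print(matching_persons({("station", 8), ("gym", 18), ("suburb_3", 23)}))  # ['person_a']
```

The same mechanism is behind the often-cited finding that only a handful of spatio-temporal points suffices to re-identify most people in large mobility datasets.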

"Here"
2013 artwork by Jon Thomson and Alison Craighead. Located at the Greenwich Meridian, the sign marks the distance from itself in miles around the globe. Image by Alex Liivet, license CC BY

In the oil industry it is common to have every meeting start with a ‘safety moment’: one of the meeting’s participants shares or discusses something that has to do with a safe work environment. This keeps safety in view of all involved, and helps reduce the number of safety-related incidents in oil companies.

Recently I wondered if every meeting in data-rich environments should start with an ethics moment, where one of the participants raises a point concerning information ethics: a reminder, a practical issue, or something to reflect on before moving on to the next item on the meeting’s agenda. As I wrote in Ethics as a Practice, we have to find a way of positioning ethical considerations and choices as an integral part of professionalism in the self-image of (data using) professionals. This might be one way of doing that.

I’m participating in IndieWebCamp East 2020. It’s nominally held on the US East Coast, but like everything else it’s online. The 6 hr time difference makes it doable to take in at least part of it.

The first introductory talk today was by David Dylan Thomas, which I thoroughly enjoyed. He’s a content strategist, and takes acknowledging the existence of cognitive biases (and the difficulty of overcoming them, even when you try) as a perspective on content strategy. How do you design to mitigate bias? How do you use bias to design for good? It’s been the basis for his podcast series.

A short, 106-page book was published this fall, and after the talk I bought it and uploaded it to my reader. Looking forward to reading it!

Today it is Global Ethics Day. My colleague Emily wanted to mark it given our increasing involvement in information ethics, and organised an informal online get-together, a Global Ethics Day party, with a focus on data ethics. We showed the participants our work on an ethical reference for using geodata, and the thesis work our newest colleague Pauline finished this spring on ethical leadership within municipal governments. I was asked to kick the event off with some remarks to spark discussion.

I took the opportunity to define and launch a new moniker, Ethics as a Practice (EaaP).(1)
The impulse to do so comes from two things I dislike in how I see organisations deal with the ethics of working with data, and of using data to directly inform decisions.

The first concerns treating the philosophy of technology, and information and data ethics in general, as a purely philosophical and scientific debate. Because of that level of abstraction it has no immediate bearing on the things organisations, I and others do in practice. Worse, it regularly approaches actual problems purely from that abstraction, ending up posing ethical questions I think are irrelevant to the reality on the ground. An example is MIT’s notion that classical trolley problems have bearing on how to create autonomous vehicles. That seems to me to miss the point that saying ‘autonomous vehicle’ does not mean the vehicle is an independent actor to which blame etc. can be assigned: ‘autonomous’ merely means the vehicle is independent from its previous driver, while remaining fully embedded in a wide variety of other dependencies. Not autonomous at all, no ghost in the machine.


The campus of the University of Twente, where they do some great ethics work w.r.t. technology. But in itself it’s not sufficient. (image by me, CC BY SA)

The second concerns seeing ‘ethics by design’ as a sufficient fix. I dislike that because it carries two assumptions that are usually not acknowledged. In practice, ethics by design seems to be perceived as ethics being a concern only in the design phase of a new technology, process, approach or method. Yet at least 95% of what organisations and professionals deal with isn’t new but existing, and so remains out of scope of ethical considerations. The implicit assumption is that everything that already exists has been thoroughly ethically evaluated, which isn’t true, certainly not when it comes to existing data collection. Ethics has no role at all in existing data governance, for instance, and data governance usually doesn’t cover choices about data collection or about deletion and archiving.
The other assumption conveyed by the term ‘ethics by design’ is that once the design phase is completed, ethics has been sufficiently dealt with. The result, with 95% of our environment remaining the same, is that ethics by design is forward-looking but not backwards-compatible. Ethics by design is seen as doing enough, but it isn’t enough at all.


Ethics by design in itself does not provide absolution (image by Jordanhill School D&T Dept, license CC BY)

Our everyday actions and choices in our work are the expression of our individual and organisational values. The ‘ethics by design’ label sidesteps that everyday reality.

Taken together, ethics as an academic endeavour and ethics by design result in ethics basically being outsourced to someone specific outside or inside the organisation, or at best to a specific person in your team. It then starts being perceived as something external that is delivered to your work reality. Ethics as a Service (EaaS), one might say: a service that takes care of the ethical aspects. That perception means you yourself can stop thinking about ethics; it has been allocated, and you can just take its results and run with them. The privacy officer does privacy, the QA officer does quality assurance, the CISO does information security, and the ethics officer covers everything ethical… meaning I can carry on as usual. (E.g. Enron had a Code of Ethics, but it had no bearing on the practical work or the decisions taken.)

That perception of EaaS, ethics as an externally provided service to your work, has real detrimental consequences. It easily becomes an outside irritant to the execution of your work: someone telling you ‘no’ when you really want to do something; a bureaucratic template to fill in to be able to claim compliance (similar to how privacy, quality, and regulations are often treated); ticking the boxes on a checklist without actual checks. That way it becomes something overly reductionist, which denies and ignores the complexity of everyday knowledge work.


Externally applied ethics become an irritant (image by Iain Watson, license CC BY)

Ethical questions and answers are actually an integral part of the complexity of your work. Your work is the place where clear boundaries can be set (by the organisation, by general ethics, by law), ánd the place where you can notice as well as introduce behavioural patterns and choices. Complexity can only be addressed from within that complexity, not through an outside intervention. Ethics therefore needs to be dealt with from within the complexity of actual work, as one of its ingredients.

Placing ethical considerations in the midst of the complexity of our work means that the spot where ethics is expressed in real work choices overlaps with the spot where such aspects are considered. It makes EaaS as a stand-alone thing impossible, and instead brings those considerations into your everyday work not as an external thing but as an ingredient.

That is what I mean by Ethics as a Practice: you use academic and organisational output, and ethics is considered in the design stage, but never in a way that absolves you from your professional responsibilities.
It still means setting principles and hard boundaries from the organisational perspective, but also ongoing, active reflection on them and on the heuristics that guide your choices, and it actively seeks out good practice. It never assumes a yes or no to an ethical question by default, later to be qualified or rationalised, but it also does not approach those questions as neutral (as existing principles and boundaries are applied).(2) That way (data) ethical considerations become an ethics of your agency as a professional, informing your ability to act. It embraces the actual complexity of issues, acknowledges that daily reality is messy, engages all relevant stakeholders, and deliberately seeks out a community of peers to spot good practices.

Ethics is part and parcel of your daily messy work, it’s your practice to hone. (image by Neil Cummings, license CC BY SA)

Ethics as a Practice (EaaP) is a call to see yourself as an ethics practitioner, and a member of a community of practice of such practitioners, not as someone ethics ‘is done to’. Ethics is part and parcel of your daily messy work, it’s your practice to hone. Our meet-up today was a step to have such an exchange between peers.

I ended my remarks with a bit of a joke, saying EaaP is so you can always do the next right thing, a quote from a Disney movie my 4-year-old watches, and added a photo of a handwritten numbered list headed ‘things to do’ that I visibly altered so it became a ‘right things to do’ list.

(1) The ‘…as a Practice’ notion I took from Anne-Laure Le Cunff’s Ness Labs posting that mentioned ‘playfulness as a practice’.
(2) Not starting from yes or no, nor from a neutral position, is taken from the mediation theory of the University of Twente’s prof. Peter-Paul Verbeek.

Protesters in Belarus are pulling off the masks of policemen, because then these men will think twice before being seen to use violence on protesters. Behind that is an effort to then ferret out their names and personal details. There is a fine line to tread here between exposing policemen, to stop the dehumanisation of protesters that masks allow them, and that exposure escalating into vigilante violence. It does remind me of a tactic described in Cory Doctorow‘s novel Walkaway, where doxxing policemen is used to create videos in which family members sympathetic to the cause ask them to stop the violence. It’s one thing to beat up someone anonymously while masked; it’s another having your mother, brother, aunt or grandfather berate you for it in public media.

“The only way to stop violence is to pull off the masks, in both the literal and metaphorical sense. An officer who is no longer anonymous will think twice before he grabs, beats or kidnaps someone,” said the founder of Black Book of Belarus, a channel on the app Telegram devoted to “de-anonymising” police officers, with more than 100,000 subscribers.

Bookmarked This is Fine: Optimism & Emergency in the P2P Network (newdesigncongress.org)
...driven by the desire for platform commons and community self-determination. These are goals that are fundamentally at odds with – and a response to – the incumbent platforms of social media, music and movie distribution and data storage. As we enter the 2020s, centralised power and decentralised communities are on the verge of outright conflict for the control of the digital public space. The resilience of centralised networks and the political organisation of their owners remains significantly underestimated by protocol activists. At the same time, the decentralised networks and the communities they serve have never been more vulnerable. The peer-to-peer community is dangerously unprepared for a crisis-fuelled future that has very suddenly arrived at their door.

Another good find by Neil Mather, for me to read a few times more. A first reaction I have is that in my mind p2p networks weren’t primarily about evading surveillance, evading copyright, or maintaining anonymity, but about network resilience and not having someone with power over the ‘off-switch’ for the entire network. These days surveillance and anonymity are more important, and should gain more attention in the design stage.

I find it slightly odd that the dark web and e.g. TOR aren’t mentioned in any meaningful way in the article.

Another element I find odd is how the author talks about extremists using federated tools: “Can or should a federated network accept ideologies that are antithetical to its organic politics? Regardless of the answer, it is alarming that the community and its protocol leadership could both be motivated by a distrust of centralised social media, and be blindsided by a situation that was inevitable given the common ground found between ideologies that had been forced from popular platforms one way or another.”
It ignores that by going the federated route extremists lose two things they enjoyed on centralised platforms: amplification and being linked to the mainstream. In a federated setting I, with my personal instance, and every other instance decide for ourselves whom to federate with or not. There’s nothing for ‘a federated network’ to accept; each instance does its own accepting. There’s no algorithmic rage-engine to amplify the extreme. There’s no standpoint for ‘the federated network’ to take, just nodes doing their own thing. Power at the edges.
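As a toy model of that point (instance names and policies are invented, and this is not any real ActivityPub implementation), the ‘acceptance’ lives per instance rather than anywhere in ‘the network’:

```python
# Toy model of federation policy: each instance decides for itself whom it exchanges posts with.
# Domain names and blocklists are invented for illustration; no real protocol is implemented here.

from dataclasses import dataclass, field

@dataclass
class Instance:
    domain: str
    blocked_domains: set = field(default_factory=set)

    def accepts(self, other: "Instance") -> bool:
        # A purely local decision: no network-wide authority is consulted.
        return other.domain not in self.blocked_domains

personal = Instance("my-instance.example", blocked_domains={"extremist.example"})
community = Instance("community.example")
extremist = Instance("extremist.example")

print(personal.accepts(extremist))   # False: my instance refuses, on its own authority
print(community.accepts(extremist))  # True: their choice, not 'the network's'
```

There is no single switch to flip for ‘the network’; refusal, like acceptance, happens at the edges.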

Also, I think some of the vulnerabilities and attack surfaces listed (Napster, The Pirate Bay) build on the one aspect in those contexts that was still centralised, the part that still held some power in a centre.

Otherwise good read, with good points made that I want to revisit and think through more.