Via Iskander Smit an interesting editorial on practices in digital ethics landed in my inbox: Translating Principles into Practices of Digital Ethics: Five Risks of Being Unethical, by Luciano Floridi, director of the Digital Ethics Lab at the Oxford Internet Institute.

It lists five groups of practices that subvert (by being distracting or destructive) the actual application of ethical principles in digital technology. They are very recognisable from ethical discussions I’ve been in or witnessed. The paper also provides some pointers on how to address them.
I will list and quote the five practices, and add a sixth that I come across regularly in my own work. Together they are digital ethics dark patterns, so to speak:

  1. ethics shopping: “the malpractice of choosing, adapting, or revising (“mixing and matching”) ethical principles, guidelines, codes, frameworks, or other similar standards (especially but not only in the ethics of AI), from a variety of available offers, in order to retrofit some pre-existing behaviours (choices, processes, strategies, etc.), and hence justify them a posteriori, instead of implementing or improving new behaviours by benchmarking them against public, ethical standards.”
  2. ethics bluewashing: the malpractice of making unsubstantiated or misleading claims about, or implementing superficial measures in favour of, the ethical values and benefits of digital processes, products, services, or other solutions in order to appear more digitally ethical than one is.
  3. ethics lobbying: the malpractice of exploiting digital ethics to delay, revise, replace, or avoid good and necessary legislation (or its enforcement) about the design, development, and deployment of digital processes, products, services, or other solutions. (can you say big tech?)
  4. ethics dumping: the malpractice of (a) exporting research activities about digital processes, products, services, or other solutions, in other contexts or places (e.g. by European organisations outside the EU) in ways that would be ethically unacceptable in the context or place of origin and (b) importing the outcomes of such unethical research activities.
  5. ethics shirking: the malpractice of doing increasingly less “ethical work” (such as fulfilling duties, exercising rights, and honouring commitments) in a given context the lower the return of such ethical work in that context is mistakenly perceived to be.
To this list I want to add a sixth, based on observations in my work in various organisations and on what pops up ethics-related in my feed reader:

  6. ethics futurising: the malpractice of discussing ethics, and even hiring ethics advisors, only for technology and processes that are 10 years into the future for the organisation in question, e.g. AI ethics in a company that as yet has nothing to do with AI. Meanwhile that same ethical soul-searching is not applied to currently relevant and used practices, technologies or processes. It has an element of ethics bluewashing in it (pattern 2: being seen as ethical rather than being ethical), but there is something else at play as well: a blind spot to ethical questions being relevant in reflecting on current digital practices, tech choices, processes and e.g. data collection methods, the assumption being that current practice is all right.
I find this sixth one both distracting and destructive: it lets an organisation believe it is on top of digital ethics issues, while it is all unrelated to their core activities. As a result, staff currently involved in real questions are left to their own devices. Which means that, for instance, data protection officers are lonely figures, often all but ignored by their organisations until after a (legal) issue arises.

Jerome Velociter has an interesting riff on how Diaspora, Mastodon and similar decentralised and federated tools are failing their true potential (ht Frank Meeuwsen).

He says that these decentralised federated applications are trying to mimic the existing platforms too much.

They are attempts at rebuilding decentralized Facebook and Twitter

This tendency has multiple faces
I very much recognise this tendency, in this specific example as well as more generally in digital disruption / transformation.

It is recognisable in discussions around ‘fake news’ and media literacy, where the underlying assumption often is that you can build your own ‘perfect’ news or media platform, for real this time.

It is visible within Mastodon in the missing long tail and the persisting dominance of a few large instances. The absence of a long tail means Mastodon isn’t very decentralised, let alone distributed. In short, most Mastodon users are as much in silos as they were on Facebook or Twitter, just with a less generic group of people around them. It’s just that these new silos aren’t run by corporations, but by some individual. Which is actually worse from a responsibility and liability viewpoint.

It is also visible in the discussion in the Mastodon community on whether the EU Copyright Directive means there’s a need for upload filters for Mastodon. This worry really only makes sense if you think of Mastodon as similar to Facebook or Twitter. In terms of full distribution and federation it makes no sense at all, and I feel Mastodon’s layout tricks people into thinking it is a platform.

I recognise this type of effect from other technologies as well, e.g. in what regularly happens in local exchange trading systems (LETS), i.e. alternative currency schemes. There too I’ve witnessed them falter because the users kept making their alternative currency the same as national fiat currencies: precisely the thing they said they were trying to get away from, throwing away all the different possibilities for agency and control they had for the taking.

Dump mimicry as design pattern
So I fully agree with Jerome when he says distributed and federated apps will need to come into their own by using other design patterns, not the design patterns of the current big platforms (which will all go the way of Ecademy, Orkut, Ryze, Jaiku, MySpace, Hyves and a plethora of other YASNs; if you don’t know what those were: that’s precisely the point).

In the case of Mastodon, one such copied design pattern that can be done away with is the public-facing pages and timelines; there are other patterns that can serve discoverability, for instance. Another likely pattern to throw out is the Tweetdeck-style interface itself. Both changes would make it look less like a platform and more like conversations.

Tools need to provide agency and reach
Tools are tools because they provide agency: they let us do things that would otherwise be harder or impossible. Tools are tools because they provide reach, as extensions of our physical presence, not just across space but also across time. For a very long time I have been convinced that tools need to be smaller than us, otherwise they’re not tools of real value. Smaller than us (see item 7 in my agency manifesto) means that the tool is under the full control of the group of users using it. In that sense Facebook groups, for example, are failed tools, because someone outside those groups controls the off-switch. The original promise of social software, when it was mostly blogs and wikis, and before it morphed into social media, was that it made publishing, interaction between writers and readers, and iterating on each other’s work ‘smaller’ than the writers using it. Distributed conversations, as well as emergent networks and communities, were the empowering result of that novel agency.

Jerome also points to something else I think is important:

In my opinion the first step is to build products that have value for the individual, and let the social aspects, the network effects, sublime this value. Value at the individual level can be many things. Let me organise my thoughts, let me curate “my” web, etc.

I don’t fully agree with the individual-versus-network distinction, though. Instead of just the individual, you can also put small coherent groups within a single context at the centre: the unit of agency in networked agency. So I’d rather talk about tools that are useful as a single instance (regardless of who is using it), and even more useful across instances.

Like the blogs mentioned above, and mentioned by Jerome too. This blog has value for me on its own, without any readers but me. It becomes more valuable as others react, and even more so when others write in their own space in response and distributed conversations emerge, with technology making it discoverable when others write about something posted here. Like the thermometer in my garden that tells me the temperature, but has additional value in a network of thermometers mapping my city’s microclimates. Or like 3D printers, which can be put to use on their own, but are more useful when designs are shared among printer owners, and more useful still when multiple printer owners work together to create more complex artefacts (such as the network of people that print bespoke hand prostheses).

We do indeed need to spend more energy designing tools that really take distribution and federation as their starting point. Tools that are ‘smaller’ than us, so that user groups control their own tools and have the freedom to tinker. This applies not just to online social tools, but to any software tool, and just as much to connected products and the entire maker scene.

That holds true the other way. What does get measured, and how that is measured, is as much a political choice as what doesn’t. Metric design is political, also in the private sector. #ethicsbydesign

This is a naive exercise to explore what ethics by design would look like for networked agency. There’s plenty of discussion about ethics by design in various places. Mostly in machine learning, where algorithmic bias is a very real issue already, and where other discussions, such as the one around automated driving, are misguided for lack of imagination and scope. It’s also an ongoing concern in adtech, especially since we know business practices aren’t limited to selling you stuff but extend to deceiving you to sell political ideas. Data governance is an area where I encounter ethics by design as a topic on a regular basis, in decisions on what data to collect or not, and in questions of balancing or combining the need for transparency with the need for data protection. But I want to leave that aside, also because many organisations in those areas have already failed their customers and users, which would make this posting a complaint and not constructive.

My current interest is in exploring what ethics means, and can be done by design, in the context of networked agency, and by extension a new civil society emerging in distributed digital transformation. A naive approach helps me find a first batch of questions and angles.

The notions that are the building blocks of networked agency are a starting point. Ethical questions follow directly from those building blocks.

First there are the building blocks related to the agency element in networked agency. These are technology and methods/processes with low adoption thresholds, striking power, resilience and agility.
a) For the technologies and methods/processes involved, the relevant issues concern who controls those tools, how these tools can be deployed by their users, and whether a user group can alter the tools, adapt them to new needs and tinker with them.
b) Low thresholds of adoption need an exploration of what those thresholds are and how they play out for different groups. These are thresholds of a technological and financial nature, but also barriers concerning knowledge, practicality, usability and understandability.
c) Striking power, the actual acting part of agency, raises the question whether a tool provides actual agency or is really a pacifier. Not every action or activity constitutes agency; it’s why words like slacktivism and clicktivism have emerged.
d) Resilience in networked agency is about reducing the vulnerability to propagating failures from outside the group, and the manner in which mitigation is possible. Reduction of critical dependencies outside the group’s scope of control is something to consider here. That also works in reverse. Are you creating dependencies for others? In a similar vein, are you externalising costs onto others? Are you causing unintended consequences elsewhere, and can you be aware of them arising, or pre-empt them?
e) Agility in networked agency is about spotting and leveraging opportunities relative to your own needs in your wider network. Are you able to do that from a constructive perspective, or only from a competitive/scarcity one? Do your opportunities come at the cost of other groups? When you leverage opportunities, are you externalising costs or claiming exclusivity? In a networked environment externalised costs will return as feedback to your system: networks almost by definition are endless repeats of the prisoner’s dilemma (see the sketch below). Another side of this is the ways in which you can provide leverage to others while creating your own, or when to be the lever in a situation yourself.
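To make that prisoner’s dilemma point concrete, here is a minimal iterated prisoner’s dilemma simulation, a sketch of my own (plain Python; the payoff values are the standard textbook ones, and the two strategies are illustrative picks, not from the original post). A strategy that externalises costs onto a cooperator wins a single encounter, but once interactions repeat, as they do in a network, the gains evaporate.

```python
# Minimal iterated prisoner's dilemma: why externalising costs backfires
# in a network of repeated interactions. Payoffs are the textbook values:
# both cooperate -> 3 each, both defect -> 1 each,
# lone defector -> 5, the exploited cooperator -> 0.
PAYOFF = {
    ("C", "C"): (3, 3),
    ("C", "D"): (0, 5),
    ("D", "C"): (5, 0),
    ("D", "D"): (1, 1),
}

def tit_for_tat(history):
    """Cooperate first, then mirror the opponent's previous move."""
    return "C" if not history else history[-1]

def always_defect(history):
    """Externalise costs every round."""
    return "D"

def play(strategy_a, strategy_b, rounds=100):
    score_a = score_b = 0
    moves_a, moves_b = [], []
    for _ in range(rounds):
        a = strategy_a(moves_b)  # each strategy sees the opponent's history
        b = strategy_b(moves_a)
        pa, pb = PAYOFF[(a, b)]
        score_a += pa
        score_b += pb
        moves_a.append(a)
        moves_b.append(b)
    return score_a, score_b

print(play(always_defect, tit_for_tat))  # defector wins round 1, then stalls: (104, 99)
print(play(tit_for_tat, tit_for_tat))    # sustained cooperation compounds: (300, 300)
```

The defector gains once and then stalls at mutual defection, while sustained cooperation compounds: that is the feedback loop the paragraph above describes.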

Second there are notions that follow from the networked part of networked agency. The unit of agency in networked agency is a group of people that share some relationship (team, family, organisation, location, interest, history, etc.) and together act upon a need shared across that group. This introduces three levels at which to evaluate ethical questions: the individual in a group, the group itself, and between groups in a network. Group dynamics are thus firmly put into focus: power, control, ownership, voice, inclusion, decision making, conflict resolution, dependencies within a group, reciprocity, mutuality, verifiability, boundaries, trust, contributions, engagement, and reputations.
This in part translates back to the agency part, in terms of technology and the skills to work with it. Skills won’t be evenly distributed in groups seeking agency, which potentially introduces power asymmetries when unique capabilities create de-facto gatekeepers or single points of failure. These may perhaps be counteracted with some mutual dependencies. More important, likely, is operational transparency within a group, so that the group can see such issues arise and calling them out is a normal thing to do, not something that has a threshold in itself. Operational transparency might build on an obligation to explain, which is also a logical element in ensuring (networked) agility.

I will try to put the output of this first exercise into an overview. I am not sure yet what will be most useful here: a tree-like map, a network, or a matrix. A next step is fleshing out the ethical issues in play, then projecting them onto specific technologies, methods and group settings, to see what specific actions or design principles emerge from that.

Some links I thought worth reading the past few days

To celebrate the launch of the GDPR last week Friday, Jaap-Henk Hoepman released his ‘little blue book’ (PDF) on Privacy Design Strategies (with a CC-BY-NC license). Hoepman is an associate professor with the Digital Security group of the ICS department at Radboud University.

I heard him speak a few months ago at a Tech Solidarity meet-up, and enjoyed his insights and pragmatic approaches (PDF slides here).

Data protection by design (together with a ‘state of the art’ requirement) forms the forward-looking part of the GDPR, where the minimum requirements are always evolving. The GDPR is designed to have a rising floor that way.
The little blue book has an easy-to-understand outline, which divides doing privacy by design into 8 strategies, each accompanied by a number of tactics, all of which can be used in parallel.

Those 8 strategies are divided into 2 groups: data oriented strategies and process oriented strategies.

Data oriented strategies:
Minimise (tactics: Select, Exclude, Strip, Destroy)
Separate (tactics: Isolate, Distribute)
Abstract (tactics: Summarise, Group, Perturb)
Hide (tactics: Restrict, Obfuscate, Dissociate, Mix)

Process oriented strategies:
Inform (tactics: Supply, Explain, Notify)
Control (tactics: Consent, Choose, Update, Retract)
Enforce (tactics: Create, Maintain, Uphold)
Demonstrate (tactics: Record, Audit, Report)

All come with examples, and the final chapters provide suggestions on how to apply them in an organisation.
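As a rough illustration of how a few of these tactics translate into code, here is a minimal Python sketch of my own (the record fields, noise level and salt are illustrative assumptions, not examples from the book), combining Minimise/Strip, Abstract/Perturb and Hide/Obfuscate on a single record:

```python
import hashlib
import random

def minimise(record, needed_fields):
    """Minimise (Select/Strip): keep only the fields needed for the purpose."""
    return {k: v for k, v in record.items() if k in needed_fields}

def perturb_age(age, jitter=3):
    """Abstract (Perturb): add noise so individual values become
    imprecise while aggregates stay usable."""
    return age + random.randint(-jitter, jitter)

def obfuscate_id(user_id, salt):
    """Hide (Obfuscate): replace a direct identifier with a salted hash."""
    return hashlib.sha256((salt + user_id).encode()).hexdigest()[:16]

# Illustrative record; 'name' is stripped, 'age' perturbed, 'user_id' hashed.
record = {"user_id": "u-1234", "name": "Alice", "age": 34, "city": "Utrecht"}

safe = minimise(record, needed_fields={"user_id", "age", "city"})
safe["age"] = perturb_age(safe["age"])
safe["user_id"] = obfuscate_id(safe["user_id"], salt="per-dataset-secret")
print(safe)  # e.g. {'user_id': '1f3a...', 'age': 36, 'city': 'Utrecht'}
```

The point of the parallel use the book describes is visible even in this toy example: each tactic reduces a different kind of exposure, and none of them gets in the way of the others.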

Today I was at a session at the Ministry of the Interior in The Hague on the GDPR, organised by the centre of expertise on open government.
It made me realise how I actually approach the GDPR, and how I see all the overblown reactions to it, like sending all of us a heap of mail to re-request consent where none is needed, or even taking your website or personal blog offline. I find I approach the GDPR like a quality assurance (QA) system.

One key change with the GDPR is that organisations can now be audited on their preventive data protection measures, which of course already mimics QA. (Beyond that, the GDPR is mostly an incremental change to the previous law, except that the people described by your data now have articulated rights that apply globally, and that it has a new set of teeth in the form of substantial penalties.)

AVG mindmap
My colleague Paul facilitated the session and showed this mindmap of GDPR aspects. I think it misses the more future-oriented parts.

The session today had three brief presentations.

In one, a student showed some results from his thesis research on the implementation of the GDPR, for which he had spoken with a lot of data protection officers, or DPOs. These are mandatory roles for all public sector bodies, and also for some specific types of data processing companies. One of the surprising outcomes is that some of these DPOs saw themselves, and were seen, as ‘outposts’ of the data protection authority, in other words as enforcers or even potentially as moles. This is not conducive to a DPO fulfilling the part of their role that is about raising awareness of and sensitivity to data protection issues. It strongly reminded me of when, 20 years ago, I was involved in creating a QA system from scratch for my then employer. Some of my colleagues saw the role of the quality assurance manager as policing their work. It took effort to show that we were not building a straitjacket around them that kept them within strict boundaries, but providing a solid skeleton to grow on and move faster. Audits then are not hunts for compliance breaches but a way to make emergent changes in the way people work visible, and to incorporate the professionally justified ones into that skeleton.

In another presentation, a civil servant of the Ministry talked about creating a register of all person-related data being processed. What stood out most for me was the (rightly) pragmatic approach they took in describing current practices and data collections inside the organisation. This is a key element of QA as well: you work from descriptions of what happens, not of what ’should’ or ‘ideally’ happens. QA is a practice rooted in pragmatism, where once that practice is described and agreed, it will be audited.
Of course, in the case of the Ministry it helps that they only have tasks mandated by law, so the grounds for processing are clear by default, and where they are not, the data should not be collected. This reduces the range of potential grey areas. Similarly for security measures: they already need to adhere to national security guidelines (the national baseline information security), which likewise helps them avoid new measures, proves compliance, and provides an auditable security requirement to go with it. This no doubt helped them take that pragmatic approach, which, as said, is at the core of QA: taking your cues from what is really happening in the organisation, from what the professionals are really doing.

A third presentation dealt with open standards for both processes and technologies, by the national Forum for Standardisation. Since 2008 a growing list of standards, currently some 40 or so, is mandatory for Dutch public sector bodies. In this list you find a range of elements that are ready-made to help with GDPR compliance, in terms of support for the rights of those described by the data (such as the right to export and portability), preventive technological security measures, and ‘by design’ data protection measures. Some of these are ISO norms themselves, or, like the mentioned national baseline information security, compliant derivatives of such ISO norms.

These elements, the ‘police’ versus ‘counsel’ perspective on the role of a DPO, the pragmatism that needs to underpin actions, and the building blocks readily found elsewhere in your own practice that are already based on QA principles, made me realise and better articulate how I’ve been viewing the GDPR all along: as a quality assurance system for data protection.

With a quality assurance system you can still famously produce concrete swimming vests, but at least it will be done consistently. Likewise, with the GDPR you will still be able to do all kinds of things with data. Big Data and developing machine learning systems are hard but hopefully worthwhile to do; with the GDPR it will just be hard in a slightly different way, helped by establishing some baselines and testing core assumptions, while making your purposes and ways of working available for scrutiny. Introducing QA does not change the way an organisation works, unless it really doesn’t have its house in order. Likewise the GDPR won’t change your organisation much if you have your house in order either.

From the QA perspective on the GDPR, it is perfectly clear why it has a moving baseline (through its ‘by design’ and ‘state of the art’ requirements). From the QA perspective it is also perfectly clear how this connects to the way Europe is positioning itself geopolitically in the race concerning AI: the policing perspective, after all, only leads to a luddite stance on AI, which is not what the EU is doing, far from it. From that it is clear how the legislator intends the thrust of the GDPR. As QA, really.

Ethics by design is adding ethical choices and values to a design process as non-functional requirements, which are then turned into functional specifications.

E.g. when you want to count the size of a group of people by taking a picture of them, adding the value of safeguarding privacy to the requirements might mean the picture is intentionally made grainy by the camera. A grainy picture still allows you to count the number of people in the photo, but their actual faces are never captured or stored.
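A minimal sketch of what that functional specification could look like, assuming the Pillow imaging library and an illustrative target resolution (the file names and the 64x48 threshold are my own assumptions, not a validated limit for face recognition):

```python
from PIL import Image  # pip install Pillow

# Hypothetical functional spec derived from the privacy requirement:
# "stored images must be too coarse for faces to be recognisable,
# while heads remain countable". The 64x48 target is an illustrative
# assumption.
COARSE_SIZE = (64, 48)

def capture_for_counting(path_in: str, path_out: str) -> None:
    """Degrade the image before storage; only the grainy version is
    kept. In a real camera this step would run inside the capture
    pipeline, so the full-resolution frame never persists; here we
    simulate it on an existing file."""
    img = Image.open(path_in)
    grainy = img.resize(COARSE_SIZE, Image.NEAREST)
    # Scale back up so a human (or a simple blob counter) can still
    # count heads, while facial detail stays destroyed.
    grainy.resize(img.size, Image.NEAREST).save(path_out)

capture_for_counting("crowd.jpg", "crowd_grainy.jpg")  # file names illustrative
```

The design choice is that the degradation happens as early as possible in the pipeline: the privacy guarantee then holds by construction, rather than depending on later deletion policies.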

When it comes to data governance and machine learning, Europe’s stance towards safeguarding civic rights and enlightenment values is a unique perspective to take in a geopolitical context. Data is a very valuable resource. In the US, large corporations and intelligence services have created enormous data lakes without many restraints, resulting in a tremendous power asymmetry and an objectification of the individual. This is surveillance capitalism.
China, and others like Russia, have created or are creating large national data spaces in which the individual is made fully transparent, described by connecting most if not all data sources and making them accessible to government, and where the resulting data patterns have direct consequences for citizens. This is data-driven authoritarian rule.
Europe cannot compete with either of those two models, but it can provide a competing perspective on data usage by creating a path of responsible innovation in which data is as much combined and connected as elsewhere in the world, yet with values and ethical boundaries designed into its core. With the GDPR the EU is already setting a new de-facto global standard. Doing more along similar lines, not just in terms of regulation but also in terms of infrastructure (Estonia’s X-Road for instance), is the opportunity Europe has.

Some pointers:
My blogpost Ethics by Design
A naive exploration of ethics around networked agency.
A paper (PDF) on Value Sensitive Design
The French report For a Meaningful Artificial Intelligence (PDF), which drives France’s €1.5 billion investment in value-based AI.

Last week I had the pleasure to attend and to speak at the annual FOSS4G conference. This gathering of the community around free and open source software in the geo-sector took place in Bonn, in what used to be the German parliament. I’ve posted the outline, slides and video of my keynote already at my company’s website, but am now also crossposting it here.

Speaking in the former plenary room of the German Parliament. Photo by Bart van den Eijnden

In my talk I outlined that it is often hard to see the real impact of open data, and explored the reasons why. I ended with a call upon the FOSS4G community to be an active force in driving ethics by design in re-using data.

Impact is often hard to see, because measurement takes effort
Firstly, because it takes a lot of effort to map out all the network effects, for instance when doing micro-economic studies like we did for ESA, or when you need to look for many small and varied impacts, both social and economic. This is especially true if you take a ‘publish and it will happen’ approach. Spotting impact becomes much easier if you already know what type of impact you want to achieve, and then publish data sets you think may enable other stakeholders to create it. Around real issues, in real contexts, it is much easier to spot the real impact of publishing and re-using open data. It does require that the published data is serious, as serious as the issues. It also requires openness: that is what brings new stakeholders into play and creates new perspectives on agency, so that impact results. Openness needs to be vigorously defended because of this, and the FOSS4G community is well suited to do that, as openness is part of its value set.

Impact is often hard to see, because of fragmentation in availability
Secondly, because impact often results from combinations of data sets, and the current reality is that data provision is mostly much too fragmented to allow interesting combinations. Some of the specific data sets, or the right timeframe or geographic scope, might be missing, making interesting re-uses impossible.
Emerging national data infrastructures, such as those Denmark and the Netherlands have been creating, are a good fix for this. They combine several core government data sets into a system and open it up as much as possible. Think of cadastral records, maps, persons, companies, addresses and buildings.
Geo data is at the heart of all this (maps, addresses, buildings, plots, objects), which turns it into the linking pin for many re-uses in which otherwise diverse data sets are combined, as the sketch below illustrates.
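A small pandas sketch of what ‘linking pin’ means in practice (all dataset and column names are made up for illustration): two otherwise unrelated data sets become combinable only because both carry a building reference from the geo base registries.

```python
import pandas as pd

# Two otherwise unrelated data sets; column names and values are
# illustrative. Both carry a building reference from the (geo) base
# registries, which is what makes them combinable at all.
energy = pd.DataFrame({
    "building_id": ["B001", "B002", "B003"],
    "gas_use_m3": [1200, 800, 1500],
})
permits = pd.DataFrame({
    "building_id": ["B002", "B003"],
    "permit_type": ["solar panels", "extension"],
})
buildings = pd.DataFrame({
    "building_id": ["B001", "B002", "B003"],
    "lat": [52.09, 52.10, 52.08],
    "lon": [5.12, 5.11, 5.13],
})

# The geo registry acts as the linking pin: joining through it puts
# energy use and permits on the same map. It is also exactly where
# privacy questions start, once buildings map onto households.
combined = (
    energy
    .merge(permits, on="building_id", how="left")
    .merge(buildings, on="building_id")
)
print(combined)
```

Note that the map itself never appears in this result; the geo reference does all the combining work invisibly, which is exactly the shift described below.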

Geo is the linking pin, and its role is shifting: ethics by design needed
Because geo-data is the linking pin, its role is shifting. First of all, it puts geo-data at the very heart of every privacy discussion around open data. Combinations of data sets can quickly become privacy issues, with geo-data as the combinator. Privacy and other ethical questions arise even more now that geo-data is no longer about relatively static maps, but sensors are putting many more objects, as well as human beings, on the map in real time.
At the same time, geo-data is becoming less visible in these combinations. ‘The map’ is not necessarily a significant part of the result of combining data sets, just a catalyst on the way to getting there. Will geo-data be a neutral ingredient, or an ingredient with a strong attitude? An attitude that aims to actively promulgate ethical choices, not just concerning privacy, but also concerning what are statistically responsible combinations, and which steps towards an in itself legal result are and are not legal themselves? As with defending openness, the FOSS4G community is in a good position to push these ethical questions forward in the geo community, and to find ways of incorporating them directly in the tools it builds and uses.

The video of the keynote has been published by the FOSS4G conference organisers.
Slides are available from Slideshare.