Bookmarked: Online Course ‘Philosophy and Ethics of Human-Technology Relations and Design’ (by Peter-Paul Verbeek)

Treating myself to this as a refresher, and to help me get back into reading more deeply around these subjects. I have a whole stack of stuff lined up, but need a bit of a push to start reading more earnestly.

A new edition of our online course ‘Philosophy of Technology and Design’ will start May 4 2020. A crash course in the philosophy and ethics of human-technology relations and design in three weeks, four hours per week,….

Peter-Paul Verbeek

Ethics is the expression of values in actual behaviour. So when you want to do data ethics, it is about practical issues and about reconsidering entrenched routines. In the past few weeks I successfully challenged some routine steps in a client’s organisation, resulting in better and more ethical use of data. The provision of subsidies to individuals is arranged by specific regulations. The regulations describe the conditions and limitations for getting a subsidy, and specify a set of requirements for applying for a subsidy grant. Such subsidy regulations, once agreed, have legal status.

With the client we’re experimenting with making it vastly less of an effort for both the requester and the client to process a request, as only then does it make sense to provide smaller subsidies to individual citizens. Currently there is a rather high lower limit for subsidies: otherwise the costs of processing a request would be higher than the sum involved, and the administrative demands on the requester would be too big compared to the benefits received. Such a situation typically leads to low uptake of the available funding and to ineffective spending, both of which lower the intended impact (in this case reducing energy usage and CO2 emissions).

In a regular situation the drafting of the regulation and the later creation of an application form would be fully separate steps, and the form would probably blindly do what the regulation implies or demands, and also introduce some overshoot out of caution.

Our approach was different. I took the regulation and lifted out all criteria that require some sort of test, and all demands that need a piece of information or data. Next, for each of those criteria and demands I marked what data would satisfy them, the different ways that data could be collected, and what role it plays in the process. The final step is listing the fields needed in the form and/or those suggested by the form designers, and determining how filling those fields can be made easier for an applicant (e.g. by providing pick lists).

A representation of the steps taken / overview drawn

What this drawing of connections allows is to ask questions about the need and desirability of collecting a specific piece of data. It also allows us to see what changing a field in the form means for how well the form complies with the regulation, and which fields and data flows need to change when you change the regulation.
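As a rough illustration of the kind of overview this produces, here is a minimal sketch in Python of such a traceability map. The criterion, data and field names are entirely hypothetical, made up only to show the kinds of questions the mapping lets you ask: which form fields are not backed by any criterion, and which criteria the form does not cover.

```python
# A minimal, hypothetical sketch of a regulation -> data -> form traceability map.
# Criterion, data and field names are invented for illustration only.

# Each regulatory criterion maps to the data that would satisfy it.
criteria_to_data = {
    "applicant lives in the municipality": ["home address"],
    "measure reduces energy usage": ["type of insulation measure"],
    "invoice amount above minimum": ["invoice amount"],
}

# Each form field maps to the data it collects.
form_fields_to_data = {
    "address": "home address",
    "measure type": "type of insulation measure",
    "invoice amount": "invoice amount",
    "date of birth": "birth date",  # collected 'out of caution', not tied to any criterion
}

required_data = {d for data in criteria_to_data.values() for d in data}
collected_data = set(form_fields_to_data.values())

# Fields that collect data no criterion asks for: candidates for removal.
unneeded_fields = [f for f, d in form_fields_to_data.items() if d not in required_data]

# Criteria for which the form collects no data at all: compliance gaps.
uncovered_criteria = [c for c, data in criteria_to_data.items()
                      if not any(d in collected_data for d in data)]

print("Fields not backed by any criterion:", unneeded_fields)
print("Criteria not covered by the form:", uncovered_criteria)
```

The same structure can be read in the other direction: change a criterion in the regulation and you immediately see which data and which form fields are affected.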

Allowing these questions to be asked led to the realisation that several hard demands for information in the draft regulation actually play no role in determining eligibility for the subsidy involved (they were simply a holdover from another regulation that was used as a template, and something the drafters thought was ‘nice to have’). As we were involved early, we could still influence the draft regulation, and those unneeded hard demands were removed just before the regulation came up for an approval vote. Now that we are designing the form, it also allows us to ask whether a field is really needed, where the organisation is being overcautious about an unlikely scenario of abuse, or where a field does not match an actual requirement in the regulation.

Questioning the need for specific data, showing how collecting it would complicate the client’s work because it comes with added responsibilities, and being able to ask those questions before the regulation was set in stone, allowed us to end up with a more responsible approach that simultaneously reduced the administrative hoops for both applicant and client to jump through. The more ethical approach is now also the more efficient and effective one, but only because we were there at the start. Had we asked those questions after the regulation was set, doing the ethically better thing would have come at a higher cost.

The tangible steps taken are small but have real impact, even if that impact would likely only have become manifest had we not taken those steps: things that have less friction get noticed less. Baby steps for data ethics, therefore, but I call it a win.

Via Iskander Smit an interesting editorial on practices in digital ethics landed in my inbox: Translating Principles into Practices of Digital Ethics: Five Risks of Being Unethical, by Luciano Floridi, director of the Digital Ethics Lab at the Oxford Internet Institute.

It lists five groups of practices that subvert (by being distracting or destructive) ethical principles actually being applied in digital technology. They are very recognisable from ethical discussions I’ve been in or witnessed. The paper also provides some pointers on how to address them.
I will list and quote the five practices, and add a sixth that I come across regularly in my own work. Together they are digital ethics dark patterns, so to speak:

  1. ethics shopping: the malpractice of choosing, adapting, or revising (“mixing and matching”) ethical principles, guidelines, codes, frameworks, or other similar standards (especially but not only in the ethics of AI), from a variety of available offers, in order to retrofit some pre-existing behaviours (choices, processes, strategies, etc.), and hence justify them a posteriori, instead of implementing or improving new behaviours by benchmarking them against public, ethical standards.
  2. ethics bluewashing: the malpractice of making unsubstantiated or misleading claims about, or implementing superficial measures in favour of, the ethical values and benefits of digital processes, products, services, or other solutions in order to appear more digitally ethical than one is.
  3. ethics lobbying: the malpractice of exploiting digital ethics to delay, revise, replace, or avoid good and necessary legislation (or its enforcement) about the design, development, and deployment of digital processes, products, services, or other solutions. (can you say big tech?)
  4. ethics dumping: the malpractice of (a) exporting research activities about digital processes, products, services, or other solutions, in other contexts or places (e.g. by European organisations outside the EU) in ways that would be ethically unacceptable in the context or place of origin and (b) importing the outcomes of such unethical research activities.
  5. ethics shirking: the malpractice of doing increasingly less “ethical work” (such as fulfilling duties, respecting rights, and honouring commitments) in a given context the lower the return of such ethical work in that context is mistakenly perceived to be.
To which I want to add a sixth, based on observations in my work in various organisations and on what pops up ethics-related in my feed reader:

  6. ethics futurising: the malpractice of discussing ethics, and even hiring ethics advisors, only for technology and processes that are ten years into the future for the organisation in question, e.g. AI ethics in a company that as yet has nothing to do with AI, while the same ethical soul-searching is not applied to currently relevant and used practices, technologies or processes. It has an element of ethics bluewashing in it (pattern 2, being seen as ethical rather than being ethical), but there is something else at play as well: a blind spot for the relevance of ethical questions in reflecting on current digital practices, tech choices, processes and e.g. data collection methods, the assumption being that current practice is all right.

I find this sixth one both distracting and destructive: it lets an organisation believe it is on top of digital ethics issues, while that belief is unrelated to its core activities. As a result, staff currently involved in real questions are left to their own devices, which means that, for instance, data protection officers are lonely figures, often all but ignored by their organisations until after a (legal) issue arises.

Jerome Velociter has an interesting riff on how Diaspora, Mastodon and similar decentralised and federated tools are failing to live up to their true potential (ht Frank Meeuwsen).

He says that these decentralised federated applications are trying to mimic the existing platforms too much.

They are attempts at rebuilding decentralized Facebook and Twitter

This tendency has multiple faces
I very much recognise this tendency, in this specific example as well as in general in digital disruption / transformation.

It is recognisable in discussions around ‘fake news’ and media literacy, where the underlying assumption often is that you should build your own ‘perfect’ news or media platform, for real this time.

It is visible within Mastodon in the missing long tail and the persistent dominance of a few large instances. The absence of a long tail means Mastodon isn’t very decentralised, let alone distributed. In short, most Mastodon users are as much in silos as they were on Facebook or Twitter, just with a less generic group of people around them. It’s just that these new silos aren’t run by corporations but by some individual, which is actually worse from a responsibility and liability viewpoint.

It is also visible in how the Mastodon community is discussing whether the EU Copyright Directive means there’s a need for upload filters for Mastodon. This worry really only makes sense if you think of Mastodon as similar to Facebook or Twitter. In terms of full distribution and federation it makes no sense at all, and I feel Mastodon’s layout tricks people into thinking it is a platform.

I recognise this type of effect from other types of technology as well, e.g. in what regularly happens in local exchange trading systems (LETS), i.e. alternative currency schemes. There too I’ve witnessed them falter because the users kept making their alternative currency the same as national fiat currencies: precisely the thing they said they were trying to get away from, throwing away all the different possibilities for agency and control they had for the taking.

Dump mimicry as a design pattern
So I fully agree with Jerome when he says distributed and federated apps will need to come into their own by using other design patterns, not by copying the design patterns of the current big platforms (which will all go the way of ecademy, orkut, ryze, jaiku, myspace, hyves and a plethora of other YASNs; if you don’t know what those were: that’s precisely the point).

In the case of Mastodon one such copied design pattern that can be done away with is the public-facing pages and timelines; other patterns can be used for discoverability, for instance. Another likely pattern to throw out is the TweetDeck-style interface itself. Dropping both would make it look less like a platform and more like conversations.

Tools need to provide agency and reach
Tools are tools because they provide agency: they let us do things that would otherwise be harder or impossible. Tools are tools because they provide reach, as extensions of our physical presence, not just across space but also across time. For a very long time I have been convinced that tools need to be smaller than us, otherwise they’re not tools of real value. Smaller than us (see item 7 in my agency manifesto) means that the tool is under the full control of the group of users using it. In that sense e.g. Facebook groups are failed tools, because someone outside those groups controls the off-switch. The original promise of social software, when it was mostly blogs and wikis, and before it morphed into social media, was that it made publishing, interaction between writers and readers, and iterating on each other’s work ‘smaller’ than the writers. Distributed conversations as well as emergent networks and communities were the empowering result of that novel agency.

Jerome also points to something else I think is important:

In my opinion the first step is to build products that have value for the individual, and let the social aspects, the network effects, sublime this value. Value at the individual level can be many things. Let me organise my thoughts, let me curate “my” web, etc.

I don’t fully agree with the individual-versus-network distinction, though. To me, instead of just the individual, you can also put small coherent groups within a single context there: the unit of agency in networked agency. So I’d rather talk about tools that are useful as a single instance (regardless of who is using it), and even more useful across instances.

Like the blogs mentioned above, and mentioned by Jerome too. This blog has value for me on its own, without any readers but me. It becomes more valuable as others react, and even more so when others write in their own space in response and distributed conversations emerge, with technology making it discoverable when others write about something posted here. Like the thermometer in my garden that tells me the temperature, but has additional value in a network of thermometers mapping my city’s microclimates. Or like 3D printers, which can be put to use on their own, but are more useful when designs are shared among printer owners, and more useful still when multiple printer owners work together to create more complex artefacts (such as the network of people that print bespoke hand prostheses).

We do indeed need to spend more energy designing tools that really take distribution and federation as a starting point, tools that are ‘smaller’ than us, so that user groups control their own tools and have the freedom to tinker. This applies not just to online social tools, but to any software tool, and just as much to connected products and the entire maker scene.

In reply to a message by Owen Boswarva on Twitter

That holds true the other way around as well. What does get measured, and how it is measured, is as much a political choice as what doesn’t. Metric design is political, also in the private sector. #ethicsbydesign

@darkgreener Tricky. It’s difficult to avoid the inference that some gaps in public data are mixed up with political choices (such as austerity cuts and privatisation), and the desire to avoid scrutiny of policy failures.

Owen Boswarva

This is a naive exercise to explore what ethics by design would look like for networked agency. There’s plenty of discussion about ethics by design in various places, mostly in machine learning, where algorithmic bias is already a very real issue, and where other discussions, such as those around automated driving, are misguided for lack of imagination and scope. It’s also an ongoing concern in adtech, especially since we know business practices don’t limit themselves to selling you stuff but also extend to deceiving you to sell political ideas. Data governance is an area where I encounter ethics by design as a topic on a regular basis, in decisions on what data to collect or not, and in questions of balancing or combining the need for transparency with the need for data protection. But I want to leave that aside, also because many organisations in those areas have already failed their customers and users, which would make this posting a complaint rather than something constructive.

My current interest is in exploring what ethics means, and what can be done by design, in the context of networked agency and, by extension, of a new civil society emerging from distributed digital transformation. A naive approach helps me find a first batch of questions and angles.

The notions that are the building blocks of networked agency are a starting point. Ethical questions follow directly from those building blocks.

First there are the building blocks related to the agency element in networked agency. These are technology and methods/processes, low thresholds of adoption, striking power, resilience and agility.
a) For the technologies and methods/processes involved, the relevant issues concern who controls those tools, how these tools can be deployed by their users, and whether a user group can alter the tools, adapt them to new needs and tinker with them.
b) Low thresholds of adoption need an exploration of what those thresholds are and how they play out for different groups. These are thresholds of a technological and financial nature, but also barriers concerning knowledge, practicality, usability, and understandability.
c) Striking power, the actual acting part of agency, raises questions about whether a tool provides actual agency or is in fact a pacifier. Not every action or activity constitutes agency; it’s why words like slacktivism and clicktivism have emerged.
d) Resilience in networked agency is about reducing the vulnerability to failures propagating from outside the group, and about the manner in which mitigation is possible. Reducing critical dependencies outside the group’s scope of control is something to consider here. That also works in reverse: are you creating dependencies for others? In a similar vein, are you externalising costs onto others? Are you causing unintended consequences elsewhere, and can you be aware of them arising, or pre-empt them?
e) Agility in networked agency is about spotting and leveraging opportunities relative to your own needs in your wider network. Are you able to do that from a constructive perspective, or only from a competitive/scarcity one? Do your opportunities come at the cost of other groups? When you leverage opportunities, are you externalising costs or claiming exclusivity? In a networked environment, externalised costs will return as feedback to your system; networks are almost by definition endless repeats of the prisoner’s dilemma. Another side of this is the ways in which you can provide leverage to others while creating your own, or when to be the lever in a situation.

Second, there are notions that follow from the networked part of networked agency. The unit of agency in networked agency is a group of people that share some relationship (team, family, organisation, location, interest, history, etc.) and that together act upon a need shared across that group. This introduces three levels at which to evaluate ethical questions: the level of the individual in a group, the level of the group itself, and between groups in a network. Group dynamics are thus firmly put into focus: power, control, ownership, voice, inclusion, decision making, conflict resolution, dependencies within a group, reciprocity, mutuality, verifiability, boundaries, trust, contributions, engagement, and reputations.
This in part translates back to the agency part, in terms of technology and the skills to work with it. Skills won’t be evenly distributed in groups seeking agency, so they potentially introduce power asymmetries when unique capabilities mean de facto gatekeepers or single points of failure are introduced. These may perhaps be counteracted with some mutual dependencies. More likely, operational transparency within a group is of greater importance, so that the group can see such issues arise and calling them out is a normal thing to do, not something that has a threshold in itself. Operational transparency might build on an obligation to explain, which is also a logical element in ensuring (networked) agility.

I will try to put the output of this first exercise in an overview. I’m not sure what will be most useful here: a tree-like map, a network, or a matrix. A next step is fleshing out the ethical issues in play, and then projecting them onto, for instance, specific technologies, methods and group settings, to see what specific actions or design principles emerge from that.
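As one naive illustration of what such an overview could look like, here is a sketch in Python of a simple tree: the building blocks as branches, with the ethical questions from above attached as leaves in abbreviated form. The grouping and wording are my own shorthand, not a settled structure, and a matrix or network view could be generated from the same data later.

```python
# A sketch of the overview as a simple tree: building blocks of networked agency
# with the ethical questions noted above attached. Wording abbreviated from the text.

overview = {
    "agency": {
        "technology and methods/processes": [
            "who controls the tools?",
            "can the user group alter, adapt and tinker with them?",
        ],
        "low thresholds of adoption": [
            "which technological, financial, knowledge and usability barriers exist, and for whom?",
        ],
        "striking power": [
            "does the tool provide actual agency, or is it a pacifier?",
        ],
        "resilience": [
            "which critical dependencies lie outside the group's control?",
            "are we creating dependencies, externalised costs or unintended consequences for others?",
        ],
        "agility": [
            "do opportunities come at the cost of other groups?",
            "how can we provide leverage to others while creating our own?",
        ],
    },
    "network": {
        "individual in a group": ["voice", "inclusion", "contributions"],
        "the group itself": ["power", "ownership", "decision making", "operational transparency"],
        "between groups": ["reciprocity", "boundaries", "trust", "reputations"],
    },
}

def walk(node, depth=0):
    """Print the tree with indentation, one level per building block or question."""
    for key, value in node.items():
        print("  " * depth + key)
        if isinstance(value, dict):
            walk(value, depth + 1)
        else:
            for question in value:
                print("  " * (depth + 1) + question)

walk(overview)
```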