This is a naive exercise to explore what ethics by design would look like for networked agency. There’s plenty of discussion about ethics by design in various places, mostly in machine learning, where algorithmic bias is already a very real issue, and where other discussions, such as those around automated driving, are misguided for lack of imagination and scope. It’s also an ongoing concern in adtech, especially since we know business practices don’t limit themselves to selling you stuff but also deceive you in order to sell political ideas. Data governance is an area where I encounter ethics by design as a topic on a regular basis, in decisions on what data to collect or not, and in questions of balancing or combining the need for transparency with the need for data protection. But I want to leave that aside, also because many organisations in those areas have already failed their customers and users, which would make this posting a complaint rather than something constructive.

My current interest is in exploring what ethics means, and what can be done by design, in the context of networked agency and, by extension, of a new civil society emerging through distributed digital transformation. A naive approach helps me find a first batch of questions and angles.

The notions that are the building blocks of networked agency are a starting point. Ethical questions follow directly from those building blocks.

First there are the building blocks related to the agency element in networked agency: technology and methods/processes, low thresholds of adoption, striking power, resilience and agility.
a) For the technologies and methods/processes involved, the relevant issues are who controls those tools, how these tools can be deployed by their users, and whether a user group can alter the tools, adapt them to new needs and tinker with them.
b) Low thresholds of adoption call for an exploration of what those thresholds are and how they play out for different groups. These are thresholds of a technological and financial nature, but also barriers concerning knowledge, practicality, usability, and understandability.
c) Striking power, the actual acting part of agency, raises questions about whether a tool provides real agency or is in fact a pacifier. Not every action or activity constitutes agency; it’s why words like slacktivism and clicktivism have emerged.
d) Resilience in networked agency is about reducing vulnerability to failures propagating from outside the group, and about the manner in which mitigation is possible. Reducing critical dependencies outside the group’s scope of control is something to consider here. That also works in reverse: are you creating dependencies for others? In a similar vein, are you externalising costs onto others? Are you causing unintended consequences elsewhere, and can you become aware of them arising, or pre-empt them?
e) Agility in networked agency is about spotting and leveraging opportunities relative to your own needs in your wider network. Are you able to do that from a constructive perspective, or only a competitive/scarcity one? Do your opportunities come at the cost of other groups? When you leverage opportunities, are you externalising costs or claiming exclusivity? In a networked environment externalising costs will return as feedback to your system: networks are almost by definition endless repeats of the prisoner’s dilemma. Another side of this is in which ways you can provide leverage to others while creating your own, or when to be the lever in someone else’s situation.

Second there are notions that follow from the networked part of networked agency. The unit of agency in networked agency is a group of people that share some relationship (team, family, org, location, interest, history, etc.) and that together act upon a need shared across that group. This introduces three levels at which to evaluate ethical questions: at the level of the individual in a group, at the level of the group itself, and between groups in a network. Group dynamics are thus firmly put into focus: power, control, ownership, voice, inclusion, decision making, conflict resolution, dependencies within a group, reciprocity, mutuality, verifiability, boundaries, trust, contributions, engagement, and reputations.
This in part translates back to the agency part, in terms of technology and the skills to work with it. Skills won’t be evenly distributed in groups seeking agency, which can introduce power asymmetries when unique capabilities turn someone into a de-facto gatekeeper or single point of failure. These may be counteracted by building in some mutual dependencies, but more likely operational transparency within the group is of greater importance, so that the group can see such issues arise and calling them out is a normal thing to do, not something that has a threshold in itself. Operational transparency might build on an obligation to explain, which is also a logical element in ensuring (networked) agility.

The output of this first exercise I will try to put into an overview, though I’m not sure yet what will be useful here: a tree-like map, a network, or a matrix. A next step is fleshing out the ethical issues in play, then projecting them onto, for instance, specific technologies, methods and group settings, to see what specific actions or design principles emerge from that.

One reaction on “Ethics by Design in Networked Agency”

  1. Ethics by design is adding ethical choices and values to a design process as non-functional requirements, which are then turned into functional specifications.
    E.g. when you want to count the number of people in a group by taking a picture of them, adding the value of safeguarding privacy to the requirements might mean the picture is intentionally made grainy by the camera. A grainier picture still allows you to count the number of people in the photo, but their actual faces were never captured and stored (see the code sketch after the pointers below).
    When it comes to data governance and machine learning, Europe’s stance towards safeguarding civic rights and enlightenment values is a unique perspective to take in a geopolitical context. Data is a very valuable resource. In the US, large corporations and intelligence services have created enormous data lakes without much restraint, resulting in a tremendous power asymmetry and an objectification of the individual. This is surveillance capitalism.
    China, and others like Russia, have created or are creating large national data spaces in which the individual is made fully transparent and described, by connecting most if not all data sources and making them accessible to government, and in which the resulting data patterns have direct consequences for citizens. This is data-driven authoritarian rule.
    Europe cannot compete with either of those two models, but it can provide a competing perspective on data usage by creating a path of responsible innovation, in which data is combined and connected as much as elsewhere in the world, yet with values and ethical boundaries designed into its core. With the GDPR the EU is already setting a new de-facto global standard; doing more along similar lines, not just in terms of regulation but also in terms of infrastructure (Estonia’s X-road for instance), is the opportunity Europe has.
    Some pointers:
    My blogpost Ethics by Design
    A naive exploration of ethics around networked agency.
    A paper (PDF) on Value Sensitive Design
    The French report For a Meaningful Artificial Intelligence (PDF), which drives France’s 1.5 billion investment in value-based AI.
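    To make the grainy-picture example above a bit more concrete, here is a minimal sketch of what such a functional specification could look like in code. It assumes Python with the Pillow library; the file names, the graining factor and the function name are illustrative choices of mine, not part of the original example.

```python
# A sketch of 'privacy as a functional specification': store only a deliberately
# grainy version of a captured image, so a group can still be counted while
# individual faces are never retained. Assumes the Pillow library is installed;
# file names, graining factor and function name are illustrative.
from PIL import Image

GRAIN_FACTOR = 16  # assumption: 1/16th of the resolution suffices to count heads


def degrade_for_counting(path_in: str, path_out: str, factor: int = GRAIN_FACTOR) -> None:
    """Save only a grainy version of the image at path_in to path_out."""
    img = Image.open(path_in)
    # Downsample so faces become unrecognisable...
    small = img.resize((max(1, img.width // factor), max(1, img.height // factor)))
    # ...then scale back up with a blocky filter, keeping the grainy look.
    grainy = small.resize(img.size, resample=Image.NEAREST)
    grainy.save(path_out)
    # The full-resolution capture at path_in can now be discarded; it never
    # needs to leave the camera.


if __name__ == "__main__":
    degrade_for_counting("capture.jpg", "stored_for_counting.jpg")
```

    Counting would then be done only on the grainy file; whether a factor of 16 is enough to protect faces yet still count heads depends on the camera and the scene, so that number is itself a design decision to test against the stated value.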
