Bookmarked Online Course Philosophy and Ethics of Human-Technology Relations and Design (by Peter Paul Verbeek)

Treating myself to this as a refresher, and to help me get back into reading more deeply around these subjects. Have a whole stack of stuff lined up, but need a bit of push to start reading more earnestly.

A new edition of our online course ‘Philosophy of Technology and Design’ will start May 4 2020. A crash course in the philosophy and ethics of human-technology relations and design in three weeks, four hours per week…

Peter Paul Verbeek

Probably the top left gives the most realistic information. Image by Brooke Novak, license CC BY

An organisation that says it wants to work data-driven and sees ethics as a key design ingredient needs, imho, to take a very close look at how it sets KPIs and other indicators. I recently came across an organisation that claims both those things, but whose process of setting indicators looks to have been left as a naive exercise to internal teams.

To begin with, indicators easily become goals in themselves, and people will start gaming the measurement system to hit the set targets. (Think of call centres picking up the phone and then immediately disconnecting, because they are scored on the number of calls answered within three rings, while the length of those calls isn’t checked.)
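The call-centre case can be put in a toy sketch. All names and numbers below are invented for illustration: one function computes the KPI that is actually measured, another computes what the organisation really cares about but doesn’t measure, and the gamed behaviour maximises the first while wrecking the second.

```python
# Toy sketch (invented data) of how a single KPI invites gaming:
# agents are scored only on "answered within 3 rings", so picking up
# and disconnecting maximises the score while actual service collapses.

def kpi_answered_quickly(calls):
    """Share of calls answered within 3 rings -- the only thing measured."""
    return sum(1 for c in calls if c["rings"] <= 3) / len(calls)

def share_actually_helped(calls):
    """What we really care about, but don't measure."""
    return sum(1 for c in calls if c["resolved"]) / len(calls)

# Honest behaviour: answer when free, actually handle the call.
honest = [{"rings": 2, "resolved": True}, {"rings": 5, "resolved": True},
          {"rings": 6, "resolved": True}, {"rings": 3, "resolved": True}]

# Gamed behaviour: always pick up within 3 rings, then drop most calls.
gamed = [{"rings": 1, "resolved": False}, {"rings": 2, "resolved": False},
         {"rings": 1, "resolved": False}, {"rings": 2, "resolved": True}]

print(kpi_answered_quickly(honest), share_actually_helped(honest))  # 0.5 1.0
print(kpi_answered_quickly(gamed), share_actually_helped(gamed))    # 1.0 0.25
```

The gamed team looks twice as good on the dashboard while helping a quarter as many callers, which is the whole point: the target is met, the goal isn’t.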

Measurement also isn’t neutral. It’s an expression of values, regardless of whether you’ve articulated those values. When you measure, for instance, the number of traffic deaths as an indicator of road safety, but not injuries or accidents as such, nor their locations, you’ll end up minimising traffic deaths but not maximising road safety. The absence of deaths isn’t the presence of road safety: death is just one expression, albeit the most irreparable one, of the consequences of unsafety. Different measurements lead to different real-life actions and outcomes.

‘Gauges’ by Adam Kent, license CC BY

When you set indicators, you need to evaluate what they cover and, more importantly, what they don’t cover. You need to check whether the overall set of indicators is balanced, since some indicators by definition deteriorate when others improve (so balance has to be sought deliberately). And you need to check whether the assumptions behind each indicator have been made explicit and, where needed, addressed.

Otherwise you are bound to end up with blind spots, lack of balance, and potential injustices. Defined indicators also determine what data gets collected, and thus what your playing field is when you have a ‘data driven’ way of working. That way any blind spot, lack of balance or injustice will end up even more profoundly in your decisions. Where indicators mostly look back in time at output, data-driven use of the data underlying those indicators actively determines actions and thus (part of) future output, turning your indicators into a much more direct and sometimes even automated feedback loop.
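The road-safety example from earlier can show this feedback loop in miniature. This is a sketch with invented road segments and numbers: because the indicator counts only deaths, only deaths get collected, and the ‘data driven’ decision that follows can’t see the segment with the real injury problem.

```python
# Minimal sketch (invented data): indicator choice becomes a feedback loop.
# Only what the indicator measures gets collected, and only what is
# collected can drive the next round of decisions.

incidents = [  # what actually happened on each road segment
    {"segment": "A", "deaths": 2, "injuries": 1},
    {"segment": "B", "deaths": 0, "injuries": 40},
    {"segment": "C", "deaths": 1, "injuries": 5},
]

# The indicator is 'traffic deaths', so deaths are all that gets recorded.
collected = {i["segment"]: i["deaths"] for i in incidents}

# 'Data driven' decision: put the safety budget where the collected data points.
priority = max(collected, key=collected.get)
print(priority)  # "A" -- segment B, with 40 injuries, never even shows up
```

The blind spot isn’t a one-off reporting gap: every budgeting round repeats the same selection, so segment B stays invisible for as long as the indicator stands.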

Only if you’ve deliberately defined your true north can you use your measurements to determine the direction of your next steps. ‘Compass’ by Anthony, license CC BY ND

To me there seems to be something fundamentally wrong with plans I come across in which companies would pay people for access to their personal data. This isn’t a well-articulated objection yet; it just feels like the entire framing of the issue is off, so the next paragraphs are a first attempt to jot down a few notions.

To me it looks very much like a projection by companies on people of what companies themselves would do: treating data as an asset you own outright and then charging for access. So that those companies can keep doing what they were doing with data about you. It doesn’t strike me as taking the person behind that data as the starting point, nor their interests. The starting point of any line of reasoning needs to be the person the data is about, not the entity intending to use the data.

Those plans make data release, or consent for using it, fully transactional. There are several things intuitively wrong with this.

One thing it does is put everything in the context of single transactions between an individual like you or me and the company wanting to use data about you. That seems an active attempt to distract from the notion that there’s power in numbers. Reducing it to me dealing with a company, and you dealing with them separately, makes it less likely that groups of people will act in concert. It also distracts from the huge power difference between me selling some data attributes to some corp on one side, and that corp amassing those attributes over wide swaths of the population on the other.

Another thing is that it implies the value is in the data you likely think of as yours: your date of birth, residence, some conscious preferences, the type of car you drive, health care issues, finances, etc. But a lot of the value is in data you don’t actually have but create all the time: your behaviour over time, clicks on a site, reading speed and pauses in an e-book, minutes watched of a movie, engagement with online videos, the cell towers your phone pinged, your car computer’s logs about your driving style, likes, and so on. It’s not that the data you think of as your own is without value, but it feels like the magician wants you to focus on the flower in his left hand so you don’t notice what he does with his right.
On top of that, it also means that whatever they offer to pay you will be too cheap: your data is never worth much in itself, only in aggregate. Offering to pay on an individual-transaction basis is an escape for companies, not an emancipation of citizens.

One more element is the suggestion that once such a transaction has taken place everything is fine: all rights have been transferred (even if limited to a specific context and use case) and all obligations have been met. That strikes me as extremely reductionist. When it comes to copyright, authors can transfer some rights, but usually not the moral rights to their work. I feel something similar is at play here: moral rights attached to data that describes a person, which can’t be transferred when the data is transacted. Is it ok to manipulate you into a specific bubble and influence how you vote, if they paid you first for the type of data they needed to do that to you? I think the EU GDPR takes that approach too, taking moral rights into account. It’s not about ownership of data per se, but about the rights I have when the data you hold describes me, regardless of whether it was collected with consent.

The whole notion of ownership is difficult in itself. As stated above, a lot of data about me is not necessarily data I am aware of creating or ‘having’, and that I likely see no need to collect about myself. Unless paying me is meant as an incentive to start collecting stuff about me for the sole purpose of selling it to a company, which then doesn’t need my consent, nor has to make the effort of collecting it about me itself.

There are also instances where me being the only one able to decide to share or withhold some data would mean risks or negative impact for others. It’s why cadastral records and company beneficial-ownership records are public: so you can verify that the house or company I’m trying to sell you is mine to sell, and who else has a stake or claim on the same asset, and to what amount. Similar cases might be made for new and closely guarded kinds of data, such as DNA profiles. Is it your sole individual right to keep such data closed, or does society have a reasonable claim to it, for instance in the search for a cure for cancer?

All that is to say that seeing data as a mere commodity is a very limited take, and that ownership of data isn’t a clear-cut thing. Because of its content, as well as its provenance. And because it is digital, data has non-rivalrous and non-exclusionary characteristics, making it akin to a public good. There is definitely a communal and network side to holding, sharing and processing data, currently conveniently ignored in discussions about data ownership.

In short, talking about paying for personal data and about data lockers under my control seems to be a framing that presents data issues as straightforward, but it doesn’t solve any of data’s ethical aspects; it just pretends they’re taken care of, so that things may continue as usual. And that’s even before looking into the potential unintended consequences of such payments.

Some links I thought worth reading the past few days

  • Peter Rukavina pointed me to this excellent posting on voting, in the context of violence as a state monopoly and how a vote contributes to that violence. It’s this type of long-form blogging that I often find so valuable, as it shows you the detailed reasoning of the author. Where on FB or Twitter would you find such argumentation, and how would it ever surface in an algorithmic timeline? Added Edward Hasbrouck to my feedreader: The Practical Nomad blog: To vote, or not to vote?
  • This quote is very interesting. Earlier in the conversation Stephen Downes mentions that “networks are grown, not constructed” (true for communities too). Tanya Dorey adds how, from the perspective of indigenous or other marginalised groups, ‘facts’ may be different, and that arriving at truth therefore is a process: “For me, “truth growing” needs to involve systems, opportunities, communities, networks, etc. that cause critical engagement with ideas, beliefs and ways of thinking that are foreign, perhaps even contrary to our own. And not just on the content level, but embedded within the fabric of the system et al itself.“: A conversation during EL30.mooc.ca on truth, data, networks and graphs.
  • This article has a ‘but’ title, but actually is a ‘yes, and’. Saying ethics isn’t enough because we also need “A society-wide debate on values and on how we want to live in the digital age” is saying the same thing. The real money quote though is “political parties should be able to review technology through the lens of their specific world-views and formulate political positions accordingly. A party that has no position on how their values relate to digital technology or the environment cannot be expected to develop any useful agenda for the challenges we are facing in the 21st century.” : Gartner calls Digital Ethics a strategic trend for 2019 – but ethics are not enough
  • A Dutch essay on post-truth. It says the issue isn’t the end of truth, but rather that everyone now claims truth for themselves. Pits Foucault’s parrhesia, speaking truth to power, against the populists: Waarheidsspreken in tijden van ‘post-truth’: Foucault, ‘parrèsia’ en populisme
  • When talking about networked agency and specifically resilience, increasingly addressing infrastructure dependencies gets important. When you run decentralised tools so that your instance is still useful when others are down, then all of a sudden your ISP and energy supplier are a potential risk too: disaster.radio | a disaster-resilient communications network powered by the sun
  • On the amplification of hate speech. It’s not about the speech to me, but about the amplification and the societal acceptability that signals, and illusion of being mainstream it creates: Opinion | I Thought the Web Would Stop Hate, Not Spread It
  • One of the essential elements of the EU GDPR is that it applies to anyone holding data about EU citizens. As such it can set a de facto standard globally. As with environmental standards, market players will tend to use one standard for their products rather than several, so the most stringent one tops the list. It’s an element in how data is of geopolitical importance these days. This link is an example of how GDPR is being adopted in South Africa: Four essential pillars of GDPR compliance
  • A great story how open source tools played a key role in dealing with the Sierra Leone Ebola crisis a few years ago: How Open Source Software Helped End Ebola – iDT Labs – Medium
  • This seems like a platform of groups working towards their own networked agency, solving issues for their own context and then pushing them into the network: GIG – we are what we create together
  • An article on the limits on current AI, and the elusiveness of meaning: Opinion | Artificial Intelligence Hits the Barrier of Meaning
