Based on a leaked version, I find my first reading of the yet-to-be-published EU Regulation on the European Approach for Artificial Intelligence pretty good. It takes a logical approach, laid out in the 92 recitals preceding the articles, based on risk assessment, where erosion of human and citizen rights, or risk to key infrastructure, services, and product safety, is deemed high risk by definition. High risk means stricter conditions, following some of the building blocks of the GDPR, also when it comes to governance and penalties. Those conditions are tied to being allowed to put a product on the market, and to how products perform in practice (not just how they’re intended to perform). I find that an elegant combination: risk assessment based on citizen rights and critical systems, connected to well-worn mechanisms of market access and market monitoring. It places those conditions on both producers and users, as well as other parties involved along the supply chain. The EU approaches to data and AI align well this way, it seems, and translate the European geopolitical proposition concerning data and AI, centered on civic rights, into codified law. That codification, like the GDPR, is how the EU exports its norms elsewhere.

The text should be published soon by the EC, and I’ll attempt a more detailed write-up then.

The Open State Foundation and SETUP are launching the SOS Tech Awards, focused on transparency and accountability in the digital society.

The Glass & Black Box Awards are about openness and transparency. The Dode & Levende Mussen Award (‘Dead & Living Sparrows Award’) is about taking responsibility after technological failures, and the degree to which companies and governments not only apologise but actually change their behaviour. The nominees will be announced in the coming weeks. The SOS Tech Awards will be presented on Tuesday 23 March 2021 via a livestream from the central Bibliotheek Utrecht.

(In the interest of transparency: I am a board member of the Open State Foundation.)

Two bookmarks, concerning GDPR enforcement. The GDPR is an EU law with global reach and as such ‘exports’ the current European approach to data protection as a key citizen right. National Data Protection Agencies (DPAs) are tasked with enforcing the GDPR against companies not complying with its rules. The potential fines for non-compliance are very steep, but much depends on DPAs being active. At this point, two years after GDPR enforcement commenced, various DPAs seem understaffed, indecisive, or to be dragging their feet.

Now the DPAs are being sued by citizens to force them to do their job properly. The Luxembourg DPA is being sued over its surprising ruling that the GDPR is basically unenforceable outside the EU (which isn’t true, as it could block services into the EU, seize assets, etc.). And there’s a case before the CJEU, based on the Irish DPA being extremely slow in starting investigations of the Big Tech companies registered within its jurisdiction, that would allow other national DPAs to start their own cases against these companies. (Normally the DPA of the country where a company is registered is responsible, but in certain cases the DPAs of the complaining citizens’ countries of residence can get involved too.)

The DPAs are the main factor in whether the GDPR is an actual force for data protection or an empty gesture. And it seems the patience of various EU citizens with DPAs taking up their defined role is running out. Rightly so.

Today is Global Ethics Day. My colleague Emily wanted to mark it, given our increasing involvement in information ethics, and organised an informal online get-together, a Global Ethics Day party, with a focus on data ethics. We showed the participants our work on an ethical reference for using geodata, and the thesis work our newest colleague Pauline finished this spring on ethical leadership within municipal governments. I was asked to kick the event off with some remarks to spark discussion.

I took the opportunity to define and launch a new moniker, Ethics as a Practice (EaaP).(1)
The impulse to do so comes from two things I have a certain dislike for in how I see organisations deal with the ethics of working with data, and of using data to directly inform decisions.

The first concerns treating the philosophy of technology, and information and data ethics in general, as a purely philosophical and scientific debate. Due to that abstraction it has no immediate bearing on the things organisations, I, and others do in practice. Worse, it regularly approaches actual problems purely from that abstraction, ending up posing ethical questions I think are irrelevant to the reality on the ground. An example is MIT’s notion that classical trolley problems have bearing on how to create autonomous vehicles. That seems to me to miss that saying ‘autonomous vehicle’ does not mean the vehicle is an independent actor to which blame etc. can be assigned: ‘autonomous’ merely means the vehicle is independent of its previous driver, but otherwise fully embedded in a wide variety of other dependencies. Not autonomous at all, no ghost in the machine.


The campus of the University of Twente, where they do some great ethics work w.r.t. technology. But in itself it’s not sufficient. (image by me, CC BY SA)

The second concerns seeing ‘ethics by design’ as a sufficient fix. I dislike that because it carries two assumptions that usually go unacknowledged. In practice, ethics by design seems to be perceived as ethics being a concern only in the design phase of a new technology, process, approach, or method. Yet at least 95% of what organisations and professionals deal with isn’t new but existing, and as a result it remains out of scope of ethical consideration. The implied assumption is that everything that already exists has been thoroughly ethically evaluated, which isn’t true at all, not even for existing data collection. Ethics plays no role at all in existing data governance, for instance, and data governance usually doesn’t cover choices about data collection or its deletion/archiving.
The other assumption conveyed by the term ‘ethics by design’ is that once the design phase is completed, ethics has been sufficiently dealt with. The result, with 95% of our environment remaining the same, is that ethics by design is forward-looking but not backwards-compatible. Ethics by design is seen as doing enough, but it isn’t enough at all.


Ethics by design in itself does not provide absolution (image by Jordanhill School D&T Dept, license CC BY)

Our everyday actions and choices in our work are the expression of our individual and organisational values. The ‘ethics by design’ label sidesteps that everyday reality.

Taken together, ethics as an academic endeavour and ethics by design result in ethics basically being outsourced to someone specific outside or inside the organisation, or at best to a specific person in your team, and ethics then starts being perceived as something external, delivered to your work reality. Ethics as a Service (EaaS), one might say: a service that takes care of the ethical aspects. That perception means you yourself can stop thinking about ethics; it’s been allocated, and you can just take its results and run with them. The privacy officer does privacy, the QA officer does quality assurance, the CISO does information security, and the ethics officer covers everything ethical… meaning I can carry on as usual. (E.g. Enron had a Code of Ethics, but it had no bearing on the practical work or decisions taken.)

That perception of EaaS, of ethics as an externally provided service to your work, has real detrimental consequences. It easily becomes an outside irritant to the execution of your work. Someone telling you ‘no’ when you really want to do something. A bureaucratic template to fill in so you can claim compliance (similar to how privacy, quality, and regulations are often treated). Ticking the boxes on a checklist without actual checks. That way it becomes something overly reductionist, which denies and ignores the complexity of everyday knowledge work.


Externally applied ethics become an irritant (image by Iain Watson, license CC BY)

Ethical questions and answers are actually an integral part of the complexity of your work. Your work is the place where clear boundaries can be set (by the organisation, by general ethics, by law), and the place where you can notice as well as introduce behavioural patterns and choices. Complexity can only be addressed from within that complexity, not by outside intervention. Ethics therefore needs to be dealt with from within the complexity of actual work, as one of its ingredients.

Placing ethical considerations in the midst of the complexity of our work means that the spot where ethics is expressed in real work choices overlaps with the spot where such aspects are considered. It makes EaaS as a stand-alone thing impossible, and instead brings those considerations into your everyday work, not as an external thing but as an ingredient.

That is what I mean by Ethics as a Practice: you use academic and organisational output, and ethics is considered in the design stage, but never in a way that absolves you of your professional responsibilities.
It still means setting principles and hard boundaries from the organisational perspective, but also an ongoing active reflection on them and on the heuristics that guide your choices, and it actively seeks out good practice. It never assumes a yes or no to an ethical question by default, later to be qualified or rationalised, but also does not approach those questions as neutral (as existing principles and boundaries are applied).(2) That way (data) ethical considerations become an ethics of your agency as a professional, informing your ability to act. It embraces the actual complexity of issues, acknowledges that daily reality is messy, engages all relevant stakeholders, and deliberately seeks out a community of peers to spot good practices.

Ethics is part and parcel of your daily messy work, it’s your practice to hone. (image by Neil Cummings, license CC BY SA)

Ethics as a Practice (EaaP) is a call to see yourself as an ethics practitioner, and a member of a community of practice of such practitioners, not as someone ethics ‘is done to’. Ethics is part and parcel of your daily messy work, it’s your practice to hone. Our meet-up today was a step to have such an exchange between peers.

I ended my remarks with a bit of a joke, saying EaaP is there so you can always ‘do the next right thing’, a quote from a Disney movie my 4-year-old watches, and added a photo of a handwritten numbered list headed ‘things to do’ that I visibly altered so it became a ‘right things to do’ list.

(1) the ‘..as a Practice’ notion I took from Anne-Laure Le Cunff’s Ness Labs posting that mentioned ‘playfulness as a practice’.
(2) not starting from yes or no, nor from a neutral position, taken from the mediation theory of the University of Twente’s Prof. Peter-Paul Verbeek

I just realised that this Friday it will be a month since I started using markdown text files and Obsidian for notes, and that I have not used my local WordPress install at all during that time, nor Evernote much. I made 4 notes in EN in a month: 1 bookmark, 1 shopping list, and 2 call logs, compared to 47 notes the month before.

Day logs and work notes are now in markdown files, and internal wiki pages are now my Garden of the Forking Path notes in markdown files. Those were previously in my local WP install. Bookmarks aren’t mindlessly sent to Evernote at the touch of a button anymore, with the vague intention of reading them later and/or having them come up in a search at some point in the future. Reading ‘later’ never really works for me (Instapaper never succeeded in landing in my workflow). So now either I read it and want to keep it for reference by adding a snapshot to Zotero, or I did not read it and trust that if it’s important it will resurface at some point. Other elements of my Evernote use I’ve quite naturally recreated on the go in text files: folders for each of my areas of activity match up with what I have as Notebooks in EN.

It feels like coming full circle, as I have for the most part been taking notes in simple text files since the late ’80s. I started paying for Evernote in 2010, after using the free version for a while, and used a wiki in parallel to text files for note taking for a number of years before that (2004-2008, I think). Text files always had my preference, as they’re fast and easy to create, but what was missing was a way to connect them, add tags, etc., and that was always the sticking point. Tools like Obsidian, Foam, and others like them are mere viewers on top of those text files in my file system. Viewers that add useful things like visualising connections, and showing multiple queries on the underlying files in parallel. They add what was missing. So after a month, I am getting more convinced that I am on a path to ditching Evernote.
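To illustrate what I mean by ‘viewers on top of plain text files’, here’s a minimal sketch of the kind of information such a tool derives from the files themselves. The `notes` folder name and the `[[wikilink]]` syntax are assumptions for the example; this is not Obsidian’s or Foam’s actual code, just the underlying idea that the connections live in the plain files.

```python
# Minimal sketch: derive a link graph from a folder of markdown notes,
# the kind of connection data a tool like Obsidian or Foam visualises.
# The notes stay plain text files; this "viewer" only reads them.
import re
from pathlib import Path

WIKILINK = re.compile(r"\[\[([^\]|#]+)")  # matches [[Note Title]] style links

def link_graph(notes_dir):
    """Map each note (by filename stem) to the set of notes it links to."""
    graph = {}
    for path in Path(notes_dir).rglob("*.md"):
        text = path.read_text(encoding="utf-8")
        graph[path.stem] = {match.strip() for match in WIKILINK.findall(text)}
    return graph

if __name__ == "__main__":
    # 'notes' is a placeholder for wherever the markdown files live.
    for note, links in sorted(link_graph("notes").items()):
        print(f"{note} -> {', '.join(sorted(links)) or '(no links)'}")
```

The point of the sketch: everything the fancy viewer shows is recoverable from the text files alone, which is exactly why switching or dropping the tool later costs nothing.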

Time to start syncing some of my notes folders to my phone (through NextCloud), and choose a good editor for Android, so I can add/use/edit them there too.

Really interesting step for IRMA: they’re now offering BigBlueButton-enabled videoconferencing for meetings where participants have their identities verified.

IRMA is a Dutch mobile app that allows you to share specific aspects of your identity with different parties, relevant to a specific context. For instance, if you have to prove you’re over 18 to order an alcoholic beverage, showing your ID is the current norm. But that discloses much more than just your age, as it shows your ID number, full name, date and place of birth, etc. IRMA is an app you can preload with verified identity aspects, such as your date of birth as registered in the local government’s citizens database, which you can then disclose partially where needed. When ordering a drink, you can show the bartender that you’re ‘over 18’ as verified by your municipality, without having to show your actual date of birth or your full name.
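As a rough illustration of that principle, disclosing a derived attribute instead of the underlying data, here’s a hypothetical sketch. It is emphatically not IRMA’s actual protocol (IRMA builds on Idemix attribute-based credentials with public-key cryptography); the shared demo key and function names are assumptions to keep the sketch self-contained.

```python
# Hypothetical sketch of selective disclosure: an issuer derives an
# 'over 18' attribute from the date of birth, and the holder discloses
# only that attribute. NOT IRMA's real protocol; a shared HMAC key
# stands in for real public-key credentials to keep this self-contained.
import hmac
import hashlib
from datetime import date

ISSUER_KEY = b"municipal-registry-demo-key"  # placeholder secret

def issue_attribute(name, value):
    """Issuer signs a single derived attribute, e.g. over18=true."""
    message = f"{name}={value}".encode()
    signature = hmac.new(ISSUER_KEY, message, hashlib.sha256).hexdigest()
    return {"name": name, "value": value, "signature": signature}

def verify_attribute(attr):
    """Verifier (the bartender's app) checks the issuer's signature.
    It only ever sees 'over18=true', never date of birth or name."""
    message = f"{attr['name']}={attr['value']}".encode()
    expected = hmac.new(ISSUER_KEY, message, hashlib.sha256).hexdigest()
    return hmac.compare_digest(expected, attr["signature"])

# The issuer (the municipal registry) knows the full date of birth ...
dob = date(1990, 5, 17)
today = date.today()
age = today.year - dob.year - ((today.month, today.day) < (dob.month, dob.day))

# ... but only the derived attribute ends up being disclosed.
credential = issue_attribute("over18", str(age >= 18).lower())
print(verify_attribute(credential))  # True: age verified, birthdate undisclosed
```

The design point the sketch tries to make visible: the verifier’s trust is in the issuer’s signature on the derived claim, so the raw personal data never needs to leave the registry at all.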

In our pandemic age, video conferencing has grown enormously, including for conversations where identity is important: conversations between patients and doctors, job interviews, conversations with your bank, exams, etc. IRMA-Meet now offers BigBlueButton video calls from their site, where all participants have been verified on the identity aspects relevant to the call.

Looking forward to hearing user experiences.