Since the start of this year I have been actively tracking the suite of new European laws being proposed on digitisation and data. Together they are the expression in law of the geopolitical position the EU is taking on everything digital and data, and all the proposed laws follow the same logic and reasoning. Taken together they shape how Europe wants to use the potential and benefits of digitisation and data use, including specifically for a range of societal challenges, while defending and strengthening citizen rights. Of course other EU legal initiatives sometimes point in different directions in parallel (e.g. EU copyright regulations leading to upload filters, and the attempts at backdooring end-to-end encryption in messaging apps for mass surveillance), but that is precisely why this suite of regulations stands out to me. Where other legal initiatives often seem to stand on their own, and bear the marks of lobbying and singular industry interests, this group of measures all builds on the same logic and reads as internally consistent, as well as the expression of an actual vision.

My work is to help translate the proposed legal framework into how it will impact and provide opportunity to large Dutch government data holders and policy departments, and to build connections and networks between all kinds of stakeholders around relevant societal issues and related use cases. This is to shape the transition from the data-provision-oriented INSPIRE programme (sharing and harmonising geo-data across the EU) to an approach oriented on use needs and benefits (reasoning from a societal issue to solve, with a network of relevant parties, towards the data that can provide agency for reaching a solution). My work follows directly from the research I did last year to establish a list of EU-wide high value data sets to be opened, in which I dived deeply into all government data and its governance concerning earth observation, environment and meteorology, while other team members did the same for geo-data, statistics, company registers, and mobility.

All the elements in the proposed legal framework will be decided upon in the coming year or so, and will probably enter into force after a two-year grace period. So by 2025 this should be in place. In the meantime many organisations, as well as public funding, will focus on implementing elements of it even while nothing is mandatory yet. As with the GDPR, the legal framework, once in place, will also be a mechanism for exporting the notions and values expressed in it to the rest of the world. This is because compliance is tied to EU market access and to having EU citizens as clients wherever they are.

One element of the framework is already in place: the GDPR. The newly proposed elements mimic the GDPR's fine structure for non-compliance.
The new elements take the EU Digital Compass and the EU Digital Rights and Principles, for which a public consultation is open until 2 September, as their starting point.

The new proposed laws are:

Digital Markets Act (download), which applies to all dominant market parties, in terms of platform providers as well as physical network providers, that de facto are gatekeepers to access by both citizens and market entities. It aims for a digital unified market, and sets requirements for interoperability, ‘service neutrality’ of platforms, and to prevent lock-in. Proposed in November 2020.

Digital Services Act (download), applies to both gatekeepers (see previous point) and other digital service providers that act as intermediaries. Aims for a level playing field and diversity of service providers, protection of citizen rights, and requires transparency and accountability mechanisms. Proposed in November 2020.

AI Regulatory Proposal (download), does not regulate AI technology itself, but the EU market access of AI applications and their usage. Market access is based on an assessment of risk to citizen rights and to safety (think of use in vehicles etc). It's a CE mark for AI. It periodically updates a list of technologies considered within scope, and a list of areas that count as high risk. With increasing risk, more stringent requirements on transparency, accountability and explainability are set. It creates GDPR-style national and European authorities for complaints and enforcement. Responsibilities are given to the producer of an application, to distributors, as well as to users of such an application. It's the world's first attempt at regulating AI, and I think it is rather elegant in tying market access to citizen rights. Proposed in April 2021.

Data Governance Act (download), makes government held data that isn’t available under open data regulations available for use (but not for sharing), introduces the European dataspace (created from multiple sectoral data spaces), mandates EU wide interoperable infrastructure around which data governance and standardisation practices are positioned, and coins the concept of data altruism (meaning you can securely share your personal data or company confidential data for specific temporary use cases). This law aims at making more data available for usage, if not for (public) sharing. Proposed November 2020.

Data Act, currently open for public consultation until 2 September 2021. Will introduce rules around the possibilities the Data Governance Act creates, will set conditions and requirements for B2B cross-border and cross-sectoral data sharing, for B2G data sharing in the context of societal challenges, and will set transparency and accountability requirements for them. To be proposed towards the end of 2021.

Open Data Directive, which sets the conditions and requirements for open government data (building on the national access to information regulations in the member states, hence also the Data Governance Act, which does not build on national access regimes). The Open Data Directive was proposed in 2018 and decided in 2019, as the new iteration of the preceding Public Sector Information directives. It should have been transposed into national law by 1 July 2021, but not all member states have done so (in fact the Netherlands has only recently started the work). An important element in this Directive is the EU High Value Data list, which will make publication of open data through APIs and machine readable bulk download mandatory for all EU member states for the data listed. As mentioned above, last year I was part of the research team that did the impact assessments and proposed the policy options for that list (I led the research for earth observation, environment and meteorology). The implementation act for the EU High Value Data list will be published in September, and I expect it to e.g. add an open data requirement to most of the INSPIRE themes.

Most of the elements in this list are proposed as Acts, meaning they will have power of law across the EU as soon as they are agreed between the European Parliament, the EU Council of heads of government and the European Commission, and don't require transposition into national law first. Also of note is that currently ongoing revisions and evaluations of connected EU directives (INSPIRE, ITS etc.) are being shaped along the lines of the Acts mentioned above. This means that more specific data-oriented regulations closer to specific policy domains are already being changed in this direction. Similarly, policy proposals such as the European Green Deal are very clearly building on the EU digital and data strategies to achieve and monitor those policy ambitions. All in all it will be a very interesting few years in which this legal framework develops and gets applied, as it is a new fundamental wave of changes after the role the initial PSI Directive and INSPIRE Directive had 15 to 20 years ago, with a much wider scope and much more at stake.

The geopolitics of digitisation and data. Image ‘Risk Board Game’ by Rob Bertholf, license CC BY

On my first reading of the yet-to-be-published EU Regulation on the European Approach for Artificial Intelligence, based on a leaked version, I find it pretty good. It takes a logical approach, laid out in the 92 recitals preceding the articles, based on risk assessment, where erosion of human and citizen rights, or risk to key infrastructure, services and product safety, is deemed high risk by definition. High risk means stricter conditions, following some of the building blocks of the GDPR, also when it comes to governance and penalties. Those conditions are tied to being allowed to put a product on the market, and to how products perform in practice (not just how they're intended). I find that an elegant combination: risk assessment based on citizen rights and critical systems, connected to well-worn mechanisms of market access and market monitoring. It places those conditions on both producers and users, as well as other parties involved along the supply chain. The EU approaches to data and AI align well this way, it seems, and express the European geopolitical proposition concerning data and AI, centered on civic rights, in codified law. That codification, like the GDPR, is how the EU exports its norms elsewhere.

The text should be published soon by the EC, and I’ll try a write-up in more detail then.

This afternoon I attended a Geonovum session on digital twins that use geodata. I saw some nice examples, and good questions and reservations passing by. The Kadaster, as a large data holder in this field, naturally came up a lot as well.

The Kadaster Labs page was also mentioned, an interesting overview of various things the Kadaster has tried out technically around 3d, linked data and more. For inspiration and for re-use.

Another nice, accessible example mentioned is the 3d version of Amsterdam and Utrecht in the making, built in the game engine Unity. It provides a 3D version of the city that simply works in your browser, in which you can explore existing data but also visualise interventions. Nicely accessible because of that browser access, but especially because whatever you bring into view can also be downloaded again as geometric data and re-used in your own software. Including information about the subsurface, such as sewers. Amsterdam and Utrecht work in a single development team, so they are genuinely collaborating.

A screenshot of 3d.amsterdam.nl in my browser, looking at Amsterdam Centraal Station and across the IJ towards Noord, from a few hundred metres up.

Not everything called a digital twin is a digital twin. Often the dynamic side of the data, the time axis, is still missing, while the visual presentation sometimes suggests otherwise to users. That is already a pitfall of many dashboards: poor quality or usability of the underlying data gets glossed over by the presentation, let alone when we are also impressed by pretty 3d visualisations and moving elements. That everyone is busily experimenting with digital twins around public issues also means there is little attention yet for how to connect all those digital versions of our environment, and for which practices and standards you need to do so.

I installed delta.chat on my phone, to play with, nudged by Frank’s posting. It’s an E2E encrypted chat application with a twist: it uses e-mail as infrastructure. You set it up like an e-mail client, giving it access to one of your e-mail accounts. It will then use your e-mail account to send PGP encrypted messages.

So it’s actually a tool that brings you encrypted mail without the usual hassle of PGP set-up. Because it uses mail, you can find your messages in your regular mail archive (but encrypted), and you can contact anyone from the app as long as you have an e-mail address for them. The first message you send will be unencrypted (because neither you nor the app knows whether the receiver has delta.chat installed); afterwards messages will be encrypted, as the apps will have exchanged public encryption keys. Using e-mail means it’s robust, it doesn’t suffer from ‘there’s no one on here’, and there’s no silo lock-in. It also doesn’t need your phone number. It does ask for access to your contacts, which I denied, as it is not at all a given that people will run delta.chat with the e-mail addresses they normally use.
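That key exchange works opportunistically: every outgoing mail carries the sender's public key, so after one round trip both sides can encrypt (Delta.chat follows the Autocrypt approach for this). A toy sketch of the flow, where the `Peer` class, its string "keys" and the labelled "encryption" wrapper are illustrative stand-ins, not Delta.chat's actual code or real PGP:

```python
# Toy model of opportunistic, Autocrypt-style encryption over e-mail.
# Real clients attach an OpenPGP public key to every mail; here keys
# are plain strings and "encryption" is just a labelled wrapper.

from dataclasses import dataclass, field

@dataclass
class Peer:
    name: str
    public_key: str                                   # announced in every outgoing mail
    known_keys: dict = field(default_factory=dict)    # sender name -> their public key

    def send(self, other: "Peer", text: str) -> str:
        """Send a message; encrypt only if we already saw the peer's key."""
        if other.name in self.known_keys:
            wire = f"pgp-encrypted({self.known_keys[other.name]}):{text}"
        else:
            wire = f"plaintext:{text}"                # first contact goes out unencrypted
        other.receive(self.name, self.public_key)     # our key travels with the mail
        return wire

    def receive(self, sender: str, sender_key: str) -> None:
        # Remember the key carried by every incoming mail.
        self.known_keys[sender] = sender_key

ton = Peer("ton", "KEY-TON")
frank = Peer("frank", "KEY-FRANK")

first = ton.send(frank, "hi")      # no key for frank yet -> plaintext
reply = frank.send(ton, "hello")   # frank learned ton's key -> encrypted
second = ton.send(frank, "again")  # now ton knows frank's key too
```

This is why only the very first message in a conversation is readable in the mail archive, and everything after it is not.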

I’ve tied it to my gmail address for now (ton dot zijlstra at gmail, ping me on delta.chat if you use it), because I wanted to have an easy interface to check what is going on in my inbox, and I have gmail on my phone anyway (even if I don’t use it for anything). I may switch over to a dedicated e-mail address later.

Some screenshots to illustrate:

How my initial exchange with Frank looked in Delta.chat


How my message to Frank looked in my mail. As it’s the first message it was unencrypted.


How I received Frank’s reply, which has an encrypted attachment.


The encrypted attachment when opened in a text editor shows it’s PGP.

I haven’t explored whether I can export my keys from Delta.chat. If you can’t, I have no way of opening those messages without Delta.chat. It’s a local tool only, so I suspect I might be able to get access to the keys outside of the app.

It seems to me e-readers don’t fully exploit the affordances digital publishing provides, specifically when it comes to non-linear reading of non-fiction.
My Nova2 at least allows me to see the table of contents alongside my current page, as well as my notes. This makes flipping back and forth easier. Kindle doesn’t.

But other things that would be possible are missing. With a paper book you have an immediate sense of both the size of the document and your current point within it. My e-reader can show me I am at 12% or position 123 of 456, but not a visual cue that doesn’t require interpretation.

More importantly, my e-readers don’t let me manipulate a book the way a digital text should allow. Why can’t I collapse a document in various ways? E.g. show me the first and last paragraph of each chapter. Now add in all subheadings. Now add in the first and last sentences of each subsection, and show all images. Etc. More advanced things would be e.g. highlighting referenced books that are also in my library, and being able to jump between them. Or am I overlooking functionalities in my e-readers?
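The collapsing I have in mind is a simple operation once a book is treated as structured data. A minimal sketch, assuming a hypothetical representation of chapters as lists of paragraphs (not any real e-book format), where higher detail levels progressively add middle paragraphs back in:

```python
# Toy sketch: progressively "collapse" a book to its outer paragraphs.
# A chapter is a (title, paragraphs) pair; level 0 shows only the first
# and last paragraph, each higher level adds one more paragraph back.

def collapse(chapters, level=0):
    """Return a condensed view of the book at the given detail level."""
    view = []
    for title, paragraphs in chapters:
        if len(paragraphs) <= 2 + level:
            shown = paragraphs                # short chapter: show everything
        else:
            shown = [paragraphs[0]] + paragraphs[1:1 + level] + [paragraphs[-1]]
        view.append((title, shown))
    return view

book = [
    ("Ch 1", ["intro", "detail a", "detail b", "conclusion"]),
    ("Ch 2", ["setup", "argument", "summary"]),
]

collapsed = collapse(book)            # only first and last paragraph of each chapter
expanded = collapse(book, level=1)    # one middle paragraph added back
```

An e-reader could bind such levels to a pinch gesture, zooming in and out of the text the way map applications zoom in and out of detail.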

Also welcome: more publishers that sell a combination of the physical and digital book.

How do you read non-linearly in e-books? What are your practices?

Today it is Global Ethics Day. My colleague Emily wanted to mark it given our increasing involvement in information ethics, and organised an informal online get together, a Global Ethics Day party, with a focus on data ethics. We showed the participants our work on an ethical reference for using geodata, and the thesis work our newest colleague Pauline finished this spring on ethical leadership within municipal governments. I was asked to kick the event off with some remarks to spark discussion.

I took the opportunity to define and launch a new moniker, Ethics as a Practice (EaaP).(1)
The impulse for me to do that comes out of two things that I have a certain dislike for, in how I see organisations deal with the ethics of working with data and using data to directly inform decisions.

The first concerns treating the philosophy of technology, and information and data ethics in general, as a purely philosophical and scientific debate. Due to abstraction, it then has no immediate bearing on the things organisations, I and others do in practice. Worse, it regularly approaches actual problems purely from that abstraction, ending up posing ethical questions I think are irrelevant to reality on the ground. An example would be MIT’s notion that classical trolley problems have a bearing on how to create autonomous vehicles. It seems to me they don’t appreciate that saying ‘autonomous vehicle’ does not mean the vehicle is an independent actor to which blame etc. can be assigned, and that ‘autonomous’ merely means that a vehicle is independent from its previous driver, but otherwise fully embedded in a wide variety of other dependencies. Not autonomous at all, no ghost in the machine.


The campus of University of Twente, where they do some great ethics work w.r.t. technology. But in itself it’s not sufficient. (image by me, CC BY SA)

The second concerns seeing ‘ethics by design’ as a sufficient fix. I dislike that because it carries two assumptions that are usually not acknowledged. In practice, ethics by design seems to be perceived as ethics being a concern only in the design phase of a new technology, process, approach or method. Whereas at least 95% of what organisations and professionals deal with isn’t new but existing, and as a result remains out of scope of ethical considerations. It assumes that everything that already exists has been thoroughly ethically evaluated, which isn’t true at all, even when it comes to existing data collection. Ethics has no role at all in existing data governance, for instance, and data governance usually doesn’t cover data collection choices or data deletion/archiving.
The other assumption conveyed by the term ‘ethics by design’ is that once the design phase is completed, ethics has been sufficiently dealt with. The result, with 95% of our environment remaining the same, is that ethics by design is forward looking but not backwards compatible. Ethics by design is seen as doing enough, but it isn’t enough at all.


Ethics by design in itself does not provide absolution (image by Jordanhill School D&T Dept, license CC BY)

Our everyday actions and choices in our work are the expression of our individual and organisational values. The ‘ethics by design’ label sidesteps that everyday reality.

Both taken together, ethics as academic endeavour and ethics by design, result in ethics basically being outsourced to someone specific outside or inside the organisation, or at best to a specific person in your team, and it starts getting perceived as something external being delivered to your work reality. Ethics as a Service (EaaS) one might say, a service that takes care of the ethical aspects. That perception means you yourself can stop thinking about ethics, it’s been allocated, and you can just take its results and run with them. The privacy officer does privacy, the QA officer does quality assurance, the CISO does information security, and the ethics officer covers everything ethical… meaning I can carry on as usual. (E.g. Enron had a Code of Ethics, but it had no bearing on the practical work or the decisions taken.)

That perception of EaaS, ethics as an externally provided service to your work has real detrimental consequences. It easily becomes an outside irritant to the execution of your work. Someone telling you ‘no’ when you really want to do something. A bureaucratic template to fill in to be able to claim compliance (similarly as how privacy, quality, regulations are often treated). Ticking the boxes on a checklist without actual checks. That way it becomes something overly reductionist, which denies and ignores the complexity of everyday knowledge work.


Externally applied ethics become an irritant (image by Iain Watson, license CC BY)

Ethical questions and answers are actually an integral part of the complexity of your work. Your work is the place where clear boundaries can be set (by the organisation, by general ethics, by law), and the place where you can notice as well as introduce behavioural patterns and choices. Complexity can only be addressed from within that complexity, not as an outside intervention. Ethics therefore needs to be dealt with from within the complexity of actual work, as one of its ingredients.

Placing ethics considerations in the midst of the complexity of our work means that the spot where ethics are expressed in real work choices overlaps with the spot where such aspects are considered. It makes EaaS as a stand-alone thing impossible, and instead brings those considerations into your everyday work, not as an external thing but as an ingredient.

That is what I mean by Ethics as a Practice. Where you use academic and organisational output, where ethics is considered in the design stage, but never to absolve you from your professional responsibilities.
It still means setting principles and hard boundaries from the organisational perspective, but also an ongoing active reflection on them and on the heuristics that guide your choices, and it actively seeks out good practice. It never assumes a yes or no to an ethical question by default, later to be qualified or rationalised, but also does not approach those questions as neutral (as existing principles and boundaries are applied).(2) That way (data) ethical considerations become an ethics of your agency as a professional, informing your ability to act. It embraces the actual complexity of issues, acknowledges that daily reality is messy, engages all relevant stakeholders, and deliberately seeks out a community of peers to spot good practices.

Ethics is part and parcel of your daily messy work, it’s your practice to hone. (image by Neil Cummings, license CC BY SA)

Ethics as a Practice (EaaP) is a call to see yourself as an ethics practitioner, and a member of a community of practice of such practitioners, not as someone ethics ‘is done to’. Ethics is part and parcel of your daily messy work, it’s your practice to hone. Our meet-up today was a step to have such an exchange between peers.

I ended my remarks with a bit of a joke, saying EaaP exists so you can always ‘do the next right thing’, a quote from a Disney movie my 4-year-old watches, and I added a photo of a handwritten numbered list headed ‘things to do’ that I visibly altered so it became a ‘right things to do’ list.

(1) the ‘..as a Practice’ notion I took from Anne-Laure Le Cunff’s Ness Labs posting that mentioned ‘playfulness as a practice’.
(2) not starting from yes or no, nor from a neutral position, taken from the mediation theory by University of Twente’s prof Peter Paul Verbeek