A few weeks ago I had an hour-long conversation with Bart Ensink of Little Rocket about my work and my company The Green Land. That conversation is now available as the sixth episode of the Datadriftig podcast. We 'ran into each other' in the interactions on a Mastodon thread in December. Little Rocket is an e-business company that makes data more usable for its business clients. It is based in Enschede, so I paid a visit to the city where I lived until six years ago, and Enschede and the University of Twente came up repeatedly in the conversation.
Last Tuesday I provided the opening keynote at BeGeo, the annual conference of Belgium’s geospatial sector, organised by the Belgian National Geographic Institute. My talk was part of the opening plenary session, after the welcome by NGI’s administrator-general Ingrid Vanden Berghe, and opening remarks by Belgian Minister for Defence Ludivine Dedonder.
With both the Digital and Data strategies the EU is shaping a geopolitical proposition with respect to digitisation and data. Anchored to maximising societal benefits and strengthening citizen rights and European values as measures of success, a wide range of novel legal instruments is being created. Those instruments provide companies, citizens, knowledge institutes and governments alike with new opportunities as well as responsibilities concerning the use of data. The formation of a single European market for data sharing and usage, the EU data space, also has consequences for all applications, including AI and digital twins, that depend on that data and produce data in return.
Geo-data has a key role to play, not just in the Green Deal data space, and finds itself at the centre of a variety of ethical issues, as it is often the linchpin connecting other data sources. The EU data space will shape the environment in which data and geo-data are shared and used for the coming decade, and requires elevating the role and visibility of geo-data across other sectors. I explored the EU data space as geo-data's new frontier, to provide the audience with an additional perspective and some questions for their participation at the BeGeo conference.
The slides are embedded below; you can also embed them in your own website, or download them as a PDF.
Bookmarked Google engineer put on leave after saying AI chatbot has become sentient (by Richard Luscombe in The Guardian)
A curious and interesting case is emerging at Google: one of its engineers claims that a chatbot AI (LaMDA) the company created has become sentient. The engineer has been suspended for discussing confidential information in public. There is, however, an intriguing tell about Google's approach to ethics in how it phrases its statement on the matter: "He is a software engineer, not an ethicist". In other words, the engineer should not worry about ethics; they've got ethicists on the payroll for that. Worrying about ethics is not the engineer's job. That perception means you yourself can stop thinking about ethics: it's been allocated, and you can just take its results and run with them. The privacy officer does privacy, the QA officer does quality assurance, the CISO does information security, and the ethics officer covers everything ethical... meaning I can carry on as usual. I read that as a giant admission of how Google perceives ethics, and that ethics washing is its main aim. Treating ethics as a practice is clearly not welcomed, going by that statement. Maybe they should open a conversation with that LaMDA AI chatbot about those ethics to help determine the program's sentience 🙂
Google said it suspended Lemoine for breaching confidentiality policies by publishing the conversations with LaMDA online, and said in a statement that he was employed as a software engineer, not an ethicist.
At the end of July the once-every-four-years hacker camp, this edition titled May Contain Hackers, will take place. As usual it falls in the midst of the summer holidays, meaning that as a parent of a school-age kid I won't be able to make it in person. However, I'm very pleased that my company and our team, together with friends from our immediate professional network (not coincidentally veterans of E's 2018 birthday unconference), are working together to host one of the Villages at MCH2022!
Our Village is called 'ethisch party' in Dutch, 'ethical party' in English. Pronounced in Dutch, 'ethisch' sounds like the English '80s'. Therefore it's listed as the Village '80s Party', 'putting the 80s back into the ethics'. Data ethics is the general context for the Village's programme.
You’re welcome to join and get involved!
Bookmarked Data altruism: how the EU is screwing up a good idea (by Winfried Veil)
I find this an unconvincing critique of the data altruism concept in the new EU Data Governance Act (caveat: the final consolidated text of the new law has not been published yet).
“If the EU had truly wanted to facilitate processing of personal data for altruistic purposes, it could have lifted the requirements of the GDPR”
GDPR slackened for common-good purposes? Loosening citizen rights requirements? That assumes common-good purposes can be defined well enough not to endanger citizen rights: turtles all the way down. The GDPR is a foundational building block, one the author, some googling shows, is disappointed with, having had some first-hand experience in its drafting process. The GDPR is a quality-assurance instrument: like ISO-style QA systems, it doesn't make anything impossible or disallowed per se, but it does require you to organise things responsibly up front. That most organisations have implemented it as a compliance checklist applied post hoc is, to me, the primary reason it is perceived as a 'straitjacket' and the primary reason for the GDPR-related breaches that do occur.
It is also worth noting that data altruism covers data that falls outside the GDPR: it's not just about personally identifiable data, but also about otherwise non-public or confidential organisational data.
The article suggests it makes it harder for data altruistic entities to do something that already now can be done under the GDPR by anyone, by adding even more rules.
The GDPR pertains to the grounds for data collection in the context of usage specified at the time of collection, whereas data altruism is also aimed at non-specified and not-yet-known future uses of data collected here and now. As such it covers an element the GDPR does not address, and offers a path out of the purpose limitation the GDPR stipulates. It's no surprise that a data altruism entity must comply with both the GDPR and a new set of rules, because those additional rules do not add to the GDPR responsibilities but cover other activities. The type of entity envisioned already exists in the Netherlands: common-good oriented entities called public benefit organisations, ANBIs. These too do not absolve you from other legal obligations, or loosen the rules for you. On the contrary, they also carry additional (public) accountability requirements, similar to those described in the DGA (centrally registered, must publish annual reports). The DGA creates ANBIs for data, Data-ANBIs. I've been involved in data projects that could have benefited from that possibility but that in the end never happened, because they couldn't be made to work without such a legal instrument.
To me the biggest blind spot in the criticism is that each of the examples cited as probably more hindered than helped by the new rules is a single project that sets up its own data collection process. That is what I think data altruism is least useful for. You won't set up a data altruism entity for your own project, because by then you already know what you want the data for and start collecting it after designing the project. Data altruism is useful as a general-purpose data-holding entity, without pre-existing project designs, where later, with the data already collected, projects like the examples cited become applicants to use the data held. A data altruistic entity will not cater to or be created for a single project, but will serve data as a utility to many projects.

I envision that universities, or better yet networks of universities, will set up their own data altruistic entities, catering to e.g. medical or social research in general. This is useful because there are currently many examples where leaving the data handling requirements to the research team is the source of not just GDPR breaches but also other ethical problems with data use. Having one or more fitting data altruistic entities to turn to as a data source will save individual projects like the examples mentioned a lot of time and hassle: no data collection to set up, no consent or other legal grounds to obtain from each individual respondent, no need to build up enough trust in your project. All of that is reduced to guaranteeing responsible data use and convincing an ethical board that your project is set up responsibly, so that you get access to pre-existing data sources with pre-existing trust structures.
It seems to me that the sentences cited below require much more thorough argumentation than the article and accompanying PDF attempt to provide. Ever since I've been involved in open data I've seen plenty of data innovations, especially if you switch your 'only unicorns count' filter off. The barriers that do unintentionally exist typically stem more from the lack of a unified market for data in Europe, which is exactly what the DGA (and the GDPR) is aimed at.
“So long as the anti-processing straitjacket of the GDPR is not loosened even a little for altruistic purposes, there will be little hope for data innovations from Europe.” “In any case, the EU’s bureaucratic ideas threaten to stifle any altruism.”
The AdTech industry club has long used a highly irritating pseudo-consent form (you know the kind: one click to give away everything, a day of clicks to deny consent). Today the good news is that IAB's 'Transparency and Consent Framework' has been deemed illegal by the EU data protection authorities, because it is neither transparent nor has any meaningful connection with the word consent. This verdict had been expected since last November. It impacts over 1,000 companies who as IAB members pay for the privilege of IAB violating the GDPR on their behalf, among them Google, Amazon and Microsoft, but also, to my surprise, Automattic (WordPress), of whom I expect much better.
It should also impact the real-time bidding system for adverts (OpenRTB) that builds on the data involved. This decision isn't about that real-time bidding system itself, but it does draw welcome attention to "the great risks to the fundamental rights and freedoms of the data subjects posed by OpenRTB, in particular in view of the large scale of personal data involved, the profiling activities, the prediction of behaviour, and the ensuing surveillance". Which amounts to 'please bring some complaints about OpenRTB before us asap'.
The decision finds IAB non-compliant with no fewer than 11 different GDPR articles. The Belgian DPA called IAB negligent and the TCF systematically deficient. IAB must provide, within 2 months, a plan to reach compliance within at most 6 months. Every day beyond those two deadlines will cost 5,000 euros. A fine of 250,000 euros has also been ordered.
I am grateful to the organisations who brought this complaint, among which is the Dutch foundation 'Bits of Freedom', which I support financially. The Timelex law office, with whom I had the pleasure of working closely in the past, deserves thanks for its legal assistance in this complaint.
Ceterum censeo: AdTech is fundamentally incompatible with the GDPR, and needs to die.