Bookmarked Google engineer put on leave after saying AI chatbot has become sentient (by Richard Luscombe in The Guardian)

A curious and interesting case is emerging from Google: one of its engineers claims that an AI chatbot they created (LaMDA) has become sentient. The engineer has been suspended for discussing confidential information in public. There is, however, an intriguing tell about Google's approach to ethics in how they phrase their statement about it: "He is a software engineer, not an ethicist". In other words, the engineer should not worry about ethics, they've got ethicists on the payroll for that. Worrying about ethics is not the engineer's job. That perception means you yourself can stop thinking about ethics: it's been allocated, and you can just take its results and run with them. The privacy officer does privacy, the QA officer does quality assurance, the CISO does information security, and the ethics officer covers everything ethical… meaning I can carry on as usual. I read that as a giant admission of how Google perceives ethics, and that ethics washing is their main aim. Going by that statement, treating ethics as a practice is definitely not welcomed. Maybe they should open a conversation with that LaMDA AI chatbot about those ethics to help determine the program's sentience 🙂

Google said it suspended Lemoine for breaching confidentiality policies by publishing the conversations with LaMDA online, and said in a statement that he was employed as a software engineer, not an ethicist.

At the end of July the once-every-four-years mass hacker event, this edition titled May Contain Hackers, will take place. As usual it falls in the midst of the summer holidays, meaning that as a parent of a school-age kid I won't be able to make it in person. However I'm very pleased that my company and our team, together with friends from our immediate professional network (not coincidentally veterans of E's 2018 birthday unconference), are working together to host one of the Villages at MCH2022!

Our Village is called 'ethisch party' in Dutch, 'ethical party' in English. The Dutch 'ethisch' sounds like the English 'eighties', so it's listed as the Village 80s Party, 'putting the 80s back into the ethics'. Data ethics is the general context for the Village's program.

You’re welcome to join and get involved!

Bookmarked Data altruism: how the EU is screwing up a good idea (by Winfried Veil)

I find this an unconvincing critique of the data altruism concept in the new EU Data Governance Act (caveat: the final consolidated text of the new law has not been published yet).

“If the EU had truly wanted to facilitate processing of personal data for altruistic purposes, it could have lifted the requirements of the GDPR”
GDPR slackened for common-good purposes? Loosen the requirements that protect citizens' rights? It assumes common-good purposes can be defined well enough not to endanger citizens' rights, turtles all the way down. The GDPR is a foundational building block, one the author, some googling shows, is disappointed with, having had some first-hand experience in its writing process. The GDPR is a quality assurance instrument, meaning that, like ISO-style QA systems, it doesn't make anything impossible or unallowed per se, but does require you to organise things responsibly upfront. That most organisations have implemented it as a compliance checklist to be applied post hoc is, to me, the primary reason it is perceived as a "straitjacket" and the primary cause of the GDPR-related breaches that do occur.
It is also worth noting that data altruism covers data not covered by the GDPR: not just personally identifiable data, but also otherwise non-public or confidential organisational data.

The article suggests the DGA makes it harder for data-altruistic entities to do something anyone can already do under the GDPR, by adding even more rules.
The GDPR pertains to the grounds for data collection in the context of a usage specified at the time of collection. Data altruism, by contrast, is also aimed at unspecified and not-yet-known future uses of data collected here and now. As such it covers an element the GDPR does not address, and offers a path out of the purpose limitation the GDPR stipulates. It's no surprise that a data altruism entity needs to comply with both the GDPR and a new set of rules, because those additional rules do not add to the GDPR responsibilities but cover other activities. The type of entity envisioned already exists in the Netherlands: common-good oriented entities called public benefit organisations, ANBI's. These too do not absolve you from other legal obligations, or loosen the rules for you. On the contrary, these too have additional (public) accountability requirements, similar to those described in the DGA (centrally registered, must publish yearly reports). The DGA creates ANBI's for data, Data-ANBI's. I've been involved in data projects that could have benefited from that possibility, but that in the end never happened because they couldn't be made to work without such a legal instrument.

To me the biggest blind spot in the criticism is that each of the examples cited as probably more hindered than helped by the new rules is a single project that sets up its own data collection processes. That is what I think data altruism is least useful for. You won't be setting up a data altruism entity for a single project, because by then you already know what you want the data for and you start collecting it after designing the project. Data altruism is useful for a general-purpose data-holding entity, without pre-existing project designs, where later, with the data already collected, projects like the examples cited become applicants to use the data held. A data-altruistic entity will not cater to or be created for a single project, but will serve data as a utility to many projects. I envision that universities, or better yet networks of universities, will set up their own data-altruistic entities, to cater to e.g. medical or social research in general. This is useful because there are currently many examples where leaving the data requirements to the research team is the source of not just GDPR breaches but also other ethical problems with data use. It will save individual projects, such as the examples mentioned, a lot of time and hassle if there are one or more fitting data-altruistic entities for them to go to as a data source. There will then be no need for data collection, no need to obtain consent or another legal ground from each individual respondent, and no need to build up enough trust in your project. All of that is reduced to guaranteeing responsible data use and convincing an ethics board that your project is set up responsibly, so that you get access to pre-existing data sources with pre-existing trust structures.

It seems to me the sentences cited below require much more thorough argumentation than the article and accompanying PDF provide. Ever since I've been involved in open data I've seen plenty of data innovations, especially if you switch your 'only unicorns count' filter off. The barriers that unintentionally do exist typically stem from the lack of a unified market for data in Europe, something the DGA (and the GDPR) is actually aimed at addressing.

“So long as the anti-processing straitjacket of the GDPR is not loosened even a little for altruistic purposes, there will be little hope for data innovations from Europe.” “In any case, the EU’s bureaucratic ideas threaten to stifle any altruism.”

Winfried Veil

The AdTech industry club IAB has long used a highly irritating pseudo-consent form (you know the kind: it takes one click to give away everything, and a day of clicking to deny consent). Today the good news is that IAB's 'Transparency and Consent Framework' has been deemed illegal by the EU data protection authorities, because it is neither transparent nor has any meaningful connection with the word consent. This verdict had been expected since last November. It impacts the over 1000 companies who as IAB members pay for the privilege of IAB violating the GDPR for them, among which Google, Amazon and Microsoft, but also, to my surprise, Automattic (WordPress), of whom I expect much better.

It should also impact the real-time bidding system for adverts (OpenRTB) that is based on the data involved. This decision isn't about that real-time bidding system, but it does draw welcome attention to "the great risks to the fundamental rights and freedoms of the data subjects posed by OpenRTB, in particular in view of the large scale of personal data involved, the profiling activities, the prediction of behaviour, and the ensuing surveillance". Which amounts to 'please bring some complaints about OpenRTB before us asap'.

The decision finds IAB non-compliant with no fewer than 11 different GDPR articles. The Belgian DPA calls IAB negligent and the TCF systematically deficient. IAB must within 2 months provide a plan to reach compliance within at most 6 months. Every day beyond those two time limits will cost 5,000 Euro. A fine of 250,000 Euro has also been ordered.

I am grateful to the organisations that brought this complaint, among which is the Dutch foundation Bits of Freedom, which I support financially. The Timelex law office, with whom I had the pleasure of working closely in the past, deserves thanks for its legal assistance in this complaint.

Ceterum censeo, AdTech is fundamentally incompatible with the GDPR, and needs to die.

The spam about the GDPR and CCPA I received last week turns out to be part of a study by the US-based Princeton University, with one of the researchers recently having joined the Dutch Radboud University. The mails sent out more recently apparently had a link to the project page added, I assume in light of feedback received, and that link was then shared in my Mastodon timeline by someone who, as a Mastodon moderator, had received these mails.

I sent a mail to the research team explaining my complaint about the mails I received. I also approached the Radboud University’s Digital Security (RU DiS) research group where one of the researchers works, and filed a complaint there.
In the past few days I’ve had e-mail exchanges with the research team, as well as with the RU DiS department head. All those I approached have been very responsive and willing to provide information, which I very much appreciate.

That doesn't make the mails I received OK though. The research team itself may have come to the same conclusion, as they informed me they've stopped sending out new mails for now. They have also added a FAQ to the project page. [UPDATE 2021-12-19 Jonathan Mayer, the Principal Investigator in this Princeton research project, has now issued an apology. These are welcome words.]

On the research

The research project is interested in how companies have set up their processes for responding to requests for data access under the European General Data Protection Regulation (GDPR) and the California Consumer Privacy Act (CCPA). They also intended these requests for organisations that don't a priori fall within the scope of those acts. Both acts are intended to set a norm for those not covered by them: the GDPR is written to export the EU's norms for data protection to the rest of the world, and the CCPA is set up to encourage companies not active in California to follow its rules regardless. So far I have no issues.

How I ended up in the list of sites approached

My blog is a personal website, so it falls outside the declared scope of the study (companies). It can't fall under the CCPA, as that only applies to businesses (that do business in California, with a certain turnover, or that sell data). It is less clear whether it falls under the GDPR: in my reading it doesn't, but at the same time I have written a personal data protection policy as if it does (out of professional interest). So how did I end up in Princeton's list of site owners to approach? In my conversation with one of the researchers they indicated that the list of sites to approach was a selection taken from the Tranco list. That list combines the results of various lists of the 1 million most popular websites, such as Alexa (soon to be discontinued), Cisco Umbrella, and Majestic Million. My URL is in both the Alexa and the Majestic list. Cisco's list looks at DNS requests for domains on their hardware, and unsurprisingly I'm not in their current list, as it is based on today's web traffic. The Majestic list seems to use backlinks to a site as a ranking factor. This favours old websites, as they build up a sediment of such backlinks over time. Weblogs that are some 20 years old, such as mine, for instance. Unsurprising then that blogs like Dave's, David's, and those of other longtime blogging friends feature in the list. In the graph below you see my and their blogs as they rank in the Tranco list.


The relative positions of the blogs of several old time blogging friends and myself in the Tranco list of over 1 million sites.
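
As an aside, for anyone curious how such a lookup works in practice: below is a minimal sketch, assuming you have downloaded a Tranco-style list as a plain CSV of rank,domain pairs. The file name and the domains in it are placeholders, not the actual sites from the graph.

```python
# Minimal sketch: look up domain ranks in a popularity list such as Tranco.
# Assumption: a locally downloaded CSV where each line reads "rank,domain".
# The file name and the domains below are placeholders.
import csv

LIST_FILE = "tranco_list.csv"                  # hypothetical local copy of the list
DOMAINS = {"example-blog.org", "example.net"}  # placeholder domains to look up


def load_ranks(path, domains):
    """Return a dict mapping each requested domain to its rank, if present in the list."""
    ranks = {}
    with open(path, newline="") as f:
        for row in csv.reader(f):
            rank, domain = row[0], row[1]
            if domain in domains:
                ranks[domain] = int(rank)
            if len(ranks) == len(domains):
                break  # found everything we were looking for
    return ranks


if __name__ == "__main__":
    for domain, rank in sorted(load_ranks(LIST_FILE, DOMAINS).items(), key=lambda kv: kv[1]):
        print(f"{domain} is ranked {rank:,}")
```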

That I might be on the long list when the Tranco list is used makes sense. However, the research group says they used filtering and categorisation to then select the websites to approach. A meaningful selection seems less likely, given that they approached personal sites like mine (and, judging by other online comments on the mails sent, other personal sites as well).

Still it’s wrong

The research was designed by Princeton's computer science department, and, they say, was discussed with Princeton's Institutional Review Board (IRB). During this process the team 'extensively discussed potential risks of our study, and took measures to minimize undue burden on websites, especially websites with less traffic and resources'.
The IRB concluded the research doesn't constitute human subject research. True from a design perspective, but, as shown by me as a private individual receiving their e-mails, not true in practice. That would have required a better determination of which sites to approach and which not.

The e-mails sent out for this study are also worryingly problematic in two respects.
First, they pretend to be actual e-mails from individuals; nowhere is it made clear that this is research. On top of that, the names used for these individuals are clearly fake, and the domains from which the e-mails were sent also easily raise suspicion. Furthermore the request lacks any context: an individual with a real request would never use a generic text, or refer to the domain name rather than the actual name of a website. This makes it unclear to recipients what the purpose of the e-mails actually is. That is not only true for individuals or small non-profits; it would be confusing and suspicious to every recipient, even if the inquiries had been limited to major corporations. I'm sure that negatively impacts the results, and thus the validity of the conclusions. It also means many recipients will have spent time evaluating, or worse, bringing in advice on, how to deal with these suspicious-looking requests.

Second, the wording of the e-mails makes it worse. The mails have a legalese ring to them (e.g. stating it is not a formal data access request at this time though one might still follow, another thing a real individual would never phrase that way). What is worse, each mail ends with a suggested legal threat. They say that a response is required within a month based on Article 12 of the GDPR, or within 45 days based on Section 1798.130 of the California Civil Code. Both those statements are lies: Article 12 GDPR sets a response deadline for data access requests, which these mails emphasise they are not, and the same goes for the California Civil Code.

It's exactly this wording, with false legal threats and without any context to evaluate what the purpose of the e-mails is, that makes people worry and spend time, or even money, figuring out what they might be exposed to. As an individual I decided to ignore the mails; others didn't. But would you, if you were a small non-profit or another business without the in-house legal knowledge to deal with this? Precisely those who have some knowledge of the GDPR or CCPA, but not enough to be fully sure of themselves, will spend unnecessary time on these requests. Princeton is thus externalising a burden and cost onto website owners, falsifying the very thing Princeton states about aiming to "minimize undue burden on websites". Using the word websites obfuscates that every mail has to be answered by a real person. They could have just mailed me, asking me straight up, for their research, whether I have a process for the GDPR in place. I would have replied and been done with it.

Filed complaint

Originally I had filed a complaint with the Digital Security research team at Radboud University, as they are named as partners in the study. Yesterday I withdrew that complaint, as they weren't part of the study design but have merely recently hired one of the researchers involved. Nevertheless they informed me they have alerted their own ethics board about this, to take lessons from it w.r.t. guidelines and good practices, even though the head of department told me it is now too late to prevent damage. At the same time, he wrote, they cannot let it pass, because "Even if privacy researchers do these projects with the best of intentions, it doesn't mean they aren't required to set them up well".
It also means I will refile my complaint with Princeton's Review Board. Meanwhile this has spilled out online (which is what you get if you target the 1 million most popular websites…), and judging by the responses to a tone-deaf tweet by one of the researchers, I am not the only one filing a complaint.

Others blogging about this study:
Questions About GDPR Data Access Process Spam from Virginia
Free Radical: CCPA Scam
What’s the deal with those weird GDPR emails?
I Was Part of a Human Subject Research Study Without My Consent

This is quite something to read. The Irish data protection authority is where most GDPR complaints against US tech companies like Facebook end up, because the European activities of these companies are registered there. It has been quite clear in the past few years how enormously slow the Irish DPA has been in dealing with those complaints, up to the point where the other DPAs complained about it, and up to the point where the European DPA intervened to set higher fine levels than the Irish DPA had suggested when a decision finally was made. Now noyb has published documents they obtained, which show how the Irish DPA tried to get the other national DPAs to accept a general guideline it had worked out with Facebook in advance. It would allow Facebook to contractually do away with informed consent by adding boilerplate consent to their TOS. This has been FB's defence until now: that there is a contract between user and FB, which makes consent unnecessary. I've seen this elsewhere w.r.t. transparency and open data in the past as well, where government entities tried to prevent transparency contractually. Contractually circumventing and doing away with general legal requirements isn't admissible, however, yet that is precisely what the Irish DPA attempted to make possible here through an EU DPA guideline.

Reading this, the noticeable lack of progress by the Irish DPA seems to be not because of limited resources (as has been an issue in other Member States), but because it has been actively working to undermine the intent and impact of the GDPR itself. Its response to realising that adtech is not workable under the GDPR seems to be to sabotage the GDPR.

The Irish DPA failed to get the other DPAs to accept a contractual consent bypass, and that is the right and expected outcome. That leaves us with what this says about the Irish DPA: that they attempted it in the first place, replacing their role as regulator with that of lobbyist:

It renders the Irish DPA unfit for purpose.