The spam about GDPR and CCPA I received last week turns out to be part of a study by the US-based Princeton University, with one of the researchers having recently joined the Dutch Radboud University. The more recently sent mails apparently had a link to the project page added, I assume in light of feedback received. That link was then shared in my Mastodon timeline by someone who, as a Mastodon moderator, had received these mails.

I sent a mail to the research team explaining my complaint about the mails I received. I also approached Radboud University’s Digital Security (RU DiS) research group, where one of the researchers works, and filed a complaint there.
In the past few days I’ve had e-mail exchanges with the research team, as well as with the RU DiS department head. All those I approached have been very responsive and willing to provide information, which I very much appreciate.

That doesn’t make the mails I received ok though. The research team itself may have reached the same conclusion, as they informed me they’ve stopped sending out new mails for now. They have also added a FAQ to the project page. [UPDATE 2021-12-19 Jonathan Mayer, the Principal Investigator in this Princeton research project, has now issued an apology. These are welcome words.]

On the research

The research project is interested in how companies have set up their processes for responding to data access requests under the European General Data Protection Regulation (GDPR) and the California Consumer Privacy Act (CCPA). They also intended these requests for organisations that don’t a priori fall within the scope of those acts. Both acts are meant to set a norm beyond those directly covered by them: the GDPR is written to export the EU’s norms for data protection to the rest of the world, and the CCPA is set up to encourage companies not active in California to follow its rules regardless. So far I have no issues.

How I ended up in the list of sites approached

My blog is a personal website, so it falls outside the declared scope of the study (companies). It can’t fall under the CCPA, as that act only applies to businesses (those doing business in California, above a certain turnover, or selling data). It is less clear whether it falls under the GDPR: in my reading of the GDPR it doesn’t, but at the same time I have written a personal data protection policy as if it does (out of professional interest).

So how did I end up on Princeton’s list of site owners to approach? In my conversation with one of the researchers, they indicated that the list of sites to approach was a selection taken from the Tranco list. That list combines the results of various lists of the 1 million most popular websites, such as Alexa (soon to be discontinued), Cisco Umbrella, and Majestic Million. My URL is in both the Alexa and the Majestic lists. Cisco’s list looks at DNS requests for domains passing through their hardware, and unsurprisingly I’m not in their current list, as it is based on today’s web traffic. The Majestic list seems to use backlinks to a site as a ranking factor. This favors old websites, as they build up a sediment of such backlinks over time. Weblogs that are some 20 years old, like mine, fit that pattern. Unsurprising then that blogs like Dave‘s, David‘s, and those of other longtime blogging friends feature in the list. In the graph below you see my and their blogs as they rank in the Tranco list.


The relative positions of the blogs of several old time blogging friends and myself in the Tranco list of over 1 million sites.
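For readers curious where their own domain sits, the Tranco list is published as a downloadable ranking. Below is a minimal sketch in Python of looking up a few domains in it; the exact download URL and the zipped ‘rank,domain’ CSV layout are assumptions on my part (check tranco-list.eu for the current details), and the example domains are simply my own and Dave’s.

```python
# Minimal sketch: look up domains in the Tranco top-1M list.
# Assumptions: the list is a zipped CSV of "rank,domain" rows at the URL
# below (illustrative; see https://tranco-list.eu for the real location).
import csv
import io
import urllib.request
import zipfile

TRANCO_ZIP_URL = "https://tranco-list.eu/top-1m.csv.zip"  # assumed URL

def tranco_ranks(domains):
    """Return a {domain: rank} dict for the given domains, if listed."""
    wanted = {d.lower() for d in domains}
    ranks = {}
    with urllib.request.urlopen(TRANCO_ZIP_URL) as resp:
        archive = zipfile.ZipFile(io.BytesIO(resp.read()))
        # Assume the archive holds a single CSV file.
        with archive.open(archive.namelist()[0]) as f:
            reader = csv.reader(io.TextIOWrapper(f, encoding="utf-8"))
            for rank, domain in reader:
                if domain in wanted:
                    ranks[domain] = int(rank)
                    if len(ranks) == len(wanted):
                        break  # stop once every requested domain is found
    return ranks

if __name__ == "__main__":
    print(tranco_ranks(["zylstra.org", "scripting.com"]))
```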

That I might be on the long list when the Tranco list is used makes sense. However, the research group says they used filtering and categorisation to then select the websites to approach. That the selection was meaningful seems less likely, given that they approached personal sites like mine (and, judging by online comments on the mails sent, other similar sites as well).

Still it’s wrong

The research was designed by Princeton’s computer science department and, they say, discussed with Princeton’s Institutional Review Board (IRB). During this process the team ‘extensively discussed potential risks of our study, and took measures to minimize undue burden on websites, especially websites with less traffic and resources’.
The IRB concluded the research doesn’t constitute human subject research. True from a design perspective, but, as shown by me receiving their e-mails as a private individual, not true in practice. A better determination of which sites to approach and which not would have been needed for that.

The e-mails sent out for this study are also worryingly problematic in two respects:
First, they pretend to be actual e-mails from individuals; nowhere is it made clear that this is research. On top of that, the names used for these individuals are clearly fake, and the domains from which the e-mails were sent also easily raise suspicion. Furthermore the requests lack any context: an individual with a real request would never use a generic text, or refer to the domain name rather than the actual name of a website. This makes it unclear to recipients what the purpose of the e-mails is. That is not only true for individuals or small non-profits; these mails would be confusing and suspicious to every recipient, even if the researchers had limited their inquiries to major corporations. I’m sure that negatively impacts the results, and thus the validity of the conclusions. It also means many recipients will have spent time evaluating, or worse, bringing in advice on, how to deal with these suspicious-looking requests.

Second, the wording of the e-mails makes it worse. The mails have a legalese ring to them (e.g. stating it is not a formal data access request at this time, though one might still follow; again something a real individual would not phrase like that). What is worse, each mail ends with a suggested legal threat: it says that a response is required within a month based on Article 12 of the GDPR, or within 45 days based on Section 1798.130 of the California Civil Code. Both those statements are lies. Article 12 GDPR sets a response deadline for actual data access requests, which this mail emphasises it is not, and the same is true for the California Civil Code.

It’s exactly this wording, with false legal threats and lacking any context to evaluate the purpose of the e-mails, that makes people worry and spend time, or even money, figuring out what they might be exposed to. As an individual I decided to ignore the mails; others didn’t. But would you, as a small non-profit or another business without the in-house legal knowledge to deal with this? Precisely those who have some knowledge of the GDPR or CCPA, but not enough to be fully sure of themselves, will spend unnecessary time on these requests. Princeton is thus externalising a burden and cost onto website owners, falsifying the very thing Princeton states about aiming to “minimize undue burden on websites”. Using the word websites obfuscates that every mail has to be answered by a real person. They could have just mailed me, saying straight up that it was for their research, and asked whether I have a process for the GDPR in place. I would have replied and been done with it.

Filed complaint

Originally I had filed a complaint with the Digital Security research team at Radboud University, as they are named as partners in the study. Yesterday I withdrew that complaint, as they weren’t part of the study design and only recently hired one of the researchers involved. Nevertheless they informed me they have alerted their own ethics board about this, to take lessons from it w.r.t. guidelines and good practices, even as the head of department said to me it is now too late to prevent damage. At the same time, he wrote, they cannot let it pass, because “Even if privacy researchers do these projects with the best of intentions, it doesn’t mean they aren’t required to set them up well”.
It also means that I will refile my complaint with Princeton’s Institutional Review Board. Meanwhile this has spilled out online (it’s what you get if you target the 1 million most popular websites…), and judging by the responses to a tone-deaf tweet by one of the researchers, I am not the only one filing a complaint.

Others blogging about this study:
Questions About GDPR Data Access Process Spam from Virginia
Free Radical: CCPA Scam
What’s the deal with those weird GDPR emails?
I Was Part of a Human Subject Research Study Without My Consent

This is quite something to read. The Irish data protection authority is where most GDPR complaints against US tech companies like Facebook end up, because the European activities of these companies are registered there. It has been quite clear in the past few years how enormously slow the Irish DPA has been in dealing with those complaints, up to the point where the other DPAs complained about it, and up to the point where the European Data Protection Board intervened to set higher fine levels than the Irish DPA suggested when a decision finally was made. Now noyb has published documents it obtained that show how the Irish DPA tried to get the other national DPAs to accept a general guideline it had worked out with Facebook in advance. It would allow Facebook to contractually do away with informed consent by adding boilerplate consent to their TOS. This has been FB’s defense until now: that there is a contract between user and FB, which makes consent unnecessary. I’ve seen this elsewhere w.r.t. transparency and open data in the past as well, where government entities tried to prevent transparency contractually. Contractually circumventing and doing away with general legal requirements isn’t admissible, however; yet that is precisely what the Irish DPA attempted to make possible here through an EU DPA guideline.

Reading this, the noticeable lack of progress by the Irish DPA seems not to be because of limited resources (as has been an issue in other member states), but because it has been actively working to undermine the intent and impact of the GDPR itself. Its response to realising that adtech is not workable under the GDPR seems to be to sabotage the GDPR.

The Irish DPA failed to get the other DPAs to accept a contractual consent bypass, and that is the right and expected outcome. That leaves us with what the attempt itself says about the Irish DPA: that it tried to swap its role as regulator for that of lobbyist:

It renders the Irish DPA unfit for purpose.

Bookmarked Meta’s failed Giphy deal could end Big Tech’s spending spree (by Ars Technica)

This is indeed a very interesting decision by the UK competition and markets authority, and I recognise what Ars Technica writes. It’s not just a relevant decision in its own right, it’s also part of an emergent pattern, various components of which are zeroing in on large siloed market players. In the EU the Digital Markets Act was approved in recent weeks by both the council of member state ministers and the European Parliament, with the negotiation of a final shared text to be finished by next spring. The EU ministers also agreed the Digital Services Act between the member states (the EP still needs to vote on it in committee). The DMA and DSA set requirements w.r.t. interoperability, service neutrality and portability, democratic control, and disinformation. On top of the ongoing competition complaints and data protection complaints, this will lead to new investigations of FB et al., if not to immediate changes in the functionality and accessibility of their platforms. And then there’s also the incoming AI Regulation, which classifies manipulation of people’s opinions and sentiments as a high-risk and, to a certain extent, prohibited application. That has implications for algorithmic timelines and the profile-based sharing of material in those timelines. All of these, the competition issues, GDPR issues, DMA and DSA issues, and AI risk mitigation, will hit FB and other big platforms simultaneously in the near future. They’re interconnected and reinforce each other. That awareness is already shining through in decisions made by competent authorities and judges here and now. Not just within the EU, but also outside it, as the European GDPR, DMA, DSA and AI acts are deliberate export vehicles for the norms written down within them.

…the strange position taken by Britain’s competition watchdog in choosing to block Meta’s takeover of GIF repository Giphy. Meta, the UK’s Competition and Markets Authority (CMA) ruled, must now sell all the GIFs—just 19 months after it reportedly paid $400 million for them. It’s a bold move—and a global first. … regulators everywhere will now be on high alert for what the legal world calls “killer acquisitions”—where an established company buys an innovative startup in an attempt to squash the competition it could pose in the future.

Morgan Meaker, wired.com / Ars Technica

Bookmarked Een coronapas is repressief, niet progressief (“A corona pass is repressive, not progressive”) (by Jaap Henk Hoepman)

Hoepman gets ahead of the facts (there is as yet no agreement on changing the law towards a 2G policy, yet he presents it as a fait accompli), but he adequately describes the situation, or rather The Situation. And how the arsonists hijack the debate, or rather how the other parties let them hijack the narrative. Which in turn puts a brake on open debate. (Though I should add that I have seen quite a few people who originally voiced reasonable and well-argued criticism shift, correction: slide, towards the conspiracy corner, and that this has made me less inclined to give others the benefit of the doubt.)

Moreover, we must not forget that the situation we are now in is the consequence of choices made earlier. That makes this plea a progressive one as well. … We must not let the resistance against a repressive, discriminatory, and coercion-based answer to a problem caused by neo-liberal policy be hijacked by the far right! A large group of people, initially including myself, feel very reluctant to speak out because of this.

Jaap Henk Hoepman

I don’t use iCloud in any way. I don’t have anything to sync, as this is my only Apple device, and I don’t let my Mac store things in its Apple keychain. So I block connections to iCloud; there is no reason known to me for them to exist, other than Apple being overly eager in collecting data.

This results in my Mac making 16 attempts per second to reach iCloud on a keychain-related domain: over 10 million times this week. I allowed the connection once this week to see if that would quiet subsequent attempts, but the crazy rate of connection attempts resumed not long afterwards. At 16 Hz it’s just a few attempts per second shy of being within hearing range, otherwise I’d hear my Mac doing it 😉


Little Snitch showing what’s going on. Click to enlarge.
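As a sanity check on that rate, the arithmetic is simple (a minimal sketch, assuming the roughly 10 million attempts were spread evenly over the week):

```python
# Back-of-the-envelope check: 10 million connection attempts in one week,
# assumed to be spread evenly over all 604,800 seconds of that week.
attempts = 10_000_000
seconds_per_week = 7 * 24 * 3600  # 604,800
print(f"{attempts / seconds_per_week:.1f} attempts per second")  # ~16.5
```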

Bookmarked Dust Rising: Machine learning and the ontology of the real (by David Weinberger)

I am looking forward to reading this. I will need to set aside some time to be able to really focus, given the author and the amount of time taken to write it.

…an article I worked on for a couple of years. It’s only 2,200 words, but they were hard words to find because the ideas were, and are, hard for me. … The article argues, roughly, that the sorts of generalizations that machine learning models embody are very different from the sort of generalizations the West has taken as the truths that matter.

David Weinberger