Over at Netzpolitik two leaked draft texts for new EC proposals w.r.t. data and digital legislation have been published. I’ve been reading them the past days, though I haven’t finished yet. In a week the final proposals should be announced by the EC. That they have been leaked beforehand tells you there are differences of opinion within the EC on this, giving outsiders a way to read ahead and mount criticism in time.

The EC’s goals for digital regulation this period are simplification, consistency and clarity. In consultations for the upcoming European Data Union Strategy, I and others put forward to please not merely interpret ‘simplification’ as rule slashing. Simplification can also mean making it much easier to demonstrate compliance. And it would also help if the EC would come out and say the quiet part out loud: that a lot of what is now presented by third parties as cumbersome regulation is in reality malicious compliance by those same third parties. The annoying cookie walls of the past years, for instance, are not in any way required by regulation; they are just the single most annoying way for third parties to deal with it, chosen so that you might think the EC is the problem. Tracking is the problem; that adtech is fundamentally in conflict with the rules is the problem. It’s not a ‘compliance burden’ if your actions bump into the law. That’s properly called ‘illegal actions’. Simplification, in short, could also mean much clearer enforcement of existing rules, as most digital regulation now carries very little in the way of actual consequences for third parties, and none that rise above the ‘cost of doing business’.

There are two ‘Omnibus’ proposals in the works, meaning a proposal that makes changes to a number of existing laws at the same time.

One deals with data regulations. It amends the Data Act in such a way that the Data Governance Act, the Open Data Directive and the Free Flow of Non-Personal Data Regulation all get repealed, and mostly incorporated into the Data Act. I’m still working my way through the meaning of that; at 90 pages of text it’s not a quick read. But one thing stands out immediately to me: the Open Data rules until now were a Directive, meaning every Member State would create a national law to implement it. The entirety now gets added to a Regulation (Act), meaning it applies directly across the EU. This is something I and others have long (like since 2008 more or less) called for, because as a directive there are differences between countries in how open data gets interpreted. What can be open data is currently based on the national information access regimes and not on a unified European notion. I still need to explore how that would play out in the new Omnibus. This first Omnibus also touches the GDPR, and that is something to be careful about too.

The other Omnibus is aimed at the AI Act and the GDPR. I haven’t looked at this one at all yet. But around the web I see fears and first takes that the GDPR will get weakened to feed AI model training, among other things by stretching the notion of ‘legitimate interest’ in ways that make Facebook’s attempted interpretation of the term in the past years seem conservative. It used to be that legitimate should be read as ‘lawful’ (e.g. I need your name if I’m to send you an invoice, because I’m legally obliged to put it on the invoice), but we seem to be shifting to an interpretation of legitimate as ‘justifiable’, and at that in the very generic sense of ‘well, I have my reasons, ok?’. Another step, judging by what others have posted, seems to do away with the notion that inferring data can constitute collecting personal data (as in, I did not ask you about your religion and store that, but I inferred it from tracking your visits to websites of houses of worship).

In a week we will know what the proposals of the EC really are. Until then I will be reading the leaked drafts, to see what mechanisms are being created, dumped and altered.

Favorited EDPB Urgent Binding Decision on processing of personal data for behavioural advertising by Meta by EDPB

This is very good news. The European Data Protection Board, at the request of the Norwegian DPA, has issued a binding decision instructing the Irish DPA to impose a ban on the processing of personal data for behavioural targeting by Meta. Meta must cease processing that data within two weeks. Norway already concluded a few years ago that adtech is mostly illegal, but European cases based on the GDPR, in force since 2018, moved through the system at a glacial pace, in part because of a co-opted and dysfunctional Irish Data Protection Commission. Meta’s ‘pay for privacy’ ploy is also torpedoed by this decision. This is grounds for celebration, even if it will likely lead to legal challenges first. And it is grounds for congratulations to NOYB and Max Schrems, whose complaints, filed the first minute GDPR enforcement started in 2018, kicked off the process of which this is a result.

…take, within two weeks, final measures regarding Meta Ireland Limited (Meta IE) and to impose a ban on the processing of personal data for behavioural advertising on the legal bases of contract and legitimate interest across the entire European Economic Area (EEA).

European Data Protection Board

In discussions about data usage and sharing and who has a measure of control over what data gets used and shared how, we easily say ‘my data’ or get told about what you can do with ‘your data’ in a platform.

‘My data’.

While it sounds clear enough, I think it is a very imprecise thing to say. It distracts from a range of issues about control over data, and causes confusion in public discourse and in addressing those issues. Such distraction is often deliberate.

Which one of these is ‘my data’?

  • Data that I purposefully collected (e.g. temperature readings from my garden), but isn’t about me.
  • Data that I purposefully collected (e.g. daily scale readings, quantified self), that is about me.
  • Data that is present on a device I own or external storage service, that isn’t about me but about my work, my learning, my chores, people I know.
  • Data that describes me, but was government created and always rests in government databases (e.g. birth/marriage registry, diplomas, university grades, criminal records, real estate ownership), parts of which I often reproduce/share in other contexts while not being the authoritative source (anniversaries, home address, CV).
  • Data that describes me, but was private sector created and always rests in private sector databases (e.g. credit ratings, mortgage history, insurance and coverage used, pension, phone location and usage, hotel stays, flights boarded).
  • Data that describes me, that I entered into my profiles on online platforms.
  • Data that I created, ‘user generated content’, and shared through platforms.
  • Data that I caused to be through my behaviour, collected by devices or platforms I use (clicks through sites, time spent on a page, how I drive my car, my e-reading habits, any IoT device I used/interacted with, my social graphs), none of which is ever within my span of control, likely not accessible to me, and I may not even be aware it exists.
  • Data that was inferred about me from patterns in data that I caused to be through my behaviour, none of which is ever within my span of control, and which I mostly don’t know about or even suspect exists. Which may say things I don’t know about myself (moods, mental health) or that I may not have made explicit anywhere (political or religious orientation, sexual orientation, medical conditions, pregnancy, etc.)

Most of the data that holds details about me wasn’t created by me, and wasn’t within my span of control at any time.
Most of the data I purposefully created or have or had in my span of control, isn’t about me but about my environment, about other people near me, things external and of interest to me.

They’re all ‘my data’. Yet whenever someone says ‘my data’, and definitely when someone says ‘your data’, that entire scope isn’t what is indicated. ‘My data’ as a label easily hides the complicated variety of data we are talking about. And regularly, specifically when someone says ‘your data’, hiding parts of the list is deliberate.
The last bullets, the data we created through our behaviour and what is inferred about us, are what the big social media platforms always keep out of sight when they say ‘your data’. Because that’s the data their business models run on. It’s never part of the package when you click ‘export my data’ in a platform.

The core issues aren’t about whether it is ‘my data’ in terms of control or provenance. The core issues are about what others can/cannot and will/won’t do with any data that describes me or is circumstantial to me, regardless of whose span of control such data resides in, or where it came from.

There are also two problematic suggestions packed into the phrase ‘my data’.
One is that by saying ‘my data’ you are also made individually responsible for the data involved. While this is partly true (mostly in the sense of not carelessly leaving stuff all over webforms and accounts), almost all responsibility for the data about you resides with those using it. It’s others’ actions with data concerning you that require responsibility and accountability, and that should require your voice being taken into account. "Nothing about us, without us" holds true for data too.
The other is that ‘my data’ is easily interpreted and positioned as ownership. That is a sleight of hand. Property claims and citizen rights are very different things and different areas of law. If ‘your data’ is your property, all that is left is to haggle about price, and each context is framed as merely transactional. It’s not in my own interest to see my data or myself as a commodity. It’s not a level playing field when I’m left to negotiate my price with a global online platform. That’s so asymmetric that there’s only one possible outcome. Which is the point of suggesting ownership as opposed to framing it as human rights. Contracts are the preferred tool of the biggest party, rights the tool of the individual.

Saying ‘my data’ and ‘your data’ is too imprecise. Be precise, don’t let others determine the framing.

Bookmarked 1.2 billion euro fine for Facebook as a result of EDPB binding decision (by European Data Protection Board)

Finally a complaint against Facebook w.r.t. the GDPR has been decided by the Irish Data Protection Authority. This after the EDPB instructed the Irish DPA to do so in a binding decision (PDF) in April. The Irish DPA has been extremely slow in cases against big tech companies, to the point where they became co-opted by Facebook in trying to convince the other European DPAs to fundamentally undermine the GDPR. The fine is still mild compared to what was possible, but at 1.2 billion Euro it is the largest in the GDPR’s history. Facebook is also instructed to bring their operations in line with the GDPR, e.g. by ensuring data from EU based users is only stored and processed in the EU. This as there is currently no way of ensuring GDPR compliance if any data gets transferred to the USA, in the absence of an adequacy agreement between the EU and the US government.

A predictable response by FB is a threat to withdraw from the EU market. This would be welcome imo in cleaning up public discourse and battling disinformation, but is very unlikely to happen. The EU is Meta’s biggest market after their home market the US. I’d rather see FB finally realise that their current adtech models are not possible under the GDPR, and start using the GDPR as it is meant to be used: as a quality assurance tool, under which you can do almost anything, provided you arrange what needs to be arranged up front and during your business operations.

This fine … was imposed for Meta’s transfers of personal data to the U.S. on the basis of standard contractual clauses (SCCs) since 16 July 2020. Furthermore, Meta has been ordered to bring its data transfers into compliance with the GDPR.

EDPB

This looks like a very welcome development: The European Commission (EC) is to ask for status updates on all international GDPR cases from all the Member State Data Protection Authorities (DPAs) every other month. This in response to a formal complaint, started in 2021 by the Irish Council for Civil Liberties, about the foot-dragging of the Irish DPA in their investigations of BigTech cases (BigTech mostly has its EU activities domiciled in Ireland).

The GDPR, the EU’s data protection regulation, has been in force since mid 2018. Since then many cases have been progressing extremely slowly. To a large extent because Ireland’s DPA seems to have been subject to regulatory capture by BigTech, up to the point where it defies direct instructions from the European Data Protection Board and takes an outlier position relative to all other European DPAs.

With two-monthly status updates on specific ongoing cases now being requested by the EC from each Member State, this is a step up from the multi-year self-reporting by Member States that is usually used to determine potential infringements. This should have an impact on the consistency with which the GDPR gets applied, and above all on ensuring cases are resolved at adequate speed. The glacial pace of bigger cases risks eroding confidence in the GDPR, especially if smaller cases do get dealt with (the local butcher getting fined for sloppy marketing, while Facebook makes billions from person-targeted ads without people’s consent).

So kudos to ICCL for filing the complaint and working with the EU Ombudsman on this, and to the EC for taking it as an opportunity to much more closely monitor GDPR enforcement.

The Dutch government may be leaving Facebook. In Germany, government use of Facebook was halted by the German data protection authority in 2021, because Meta (of course) does not comply with the GDPR. The Dutch DPA (AP) has not made such rulings itself since 2018, because Facebook is established in Ireland. The German authority shows no such restraint and proceeds from its own responsibility. Rather than wait far too long for an Irish ruling on Facebook, it takes a critical look at itself and concludes that its own government, at any rate, cannot meet its own GDPR obligations by maintaining Facebook pages.

State Secretary for Digitalisation Alexandra van Huffelen may now be preparing a decision along the same lines. Rightly so, it seems to me. Meta itself does not comply with the GDPR, and moreover the general exchange of European personal data with the US currently has no legal basis at all.

The government should set a good example itself in online interaction with citizens and in handling its own data. This applies to Meta, to Twitter, but also to cloud services and the Microsoft lock-in the government largely finds itself in. No longer using Facebook itself is a modest first signal, one that already seems surprisingly difficult for the government to give clearly.

I hope the State Secretary makes the decision soon.

Operating a Facebook fan page in a data-protection-compliant way is not possible, Kelber wrote in a letter to all federal ministries and highest federal authorities.

Bundesdatenschutzbeauftragte