Amazon has been fined €746 million by the Luxembourg DPA (where Amazon’s EU activities are based). In its response Amazon shows it isn’t willing to publicly acknowledge that it even understands the EU data protection rules.

“There has been no data breach, and no customer data has been exposed to any third party. These facts are undisputed,” said an Amazon spokesperson according to TechCrunch.

Those facts are of course undisputed, because a data breach or exposure of data to third parties is not a prerequisite for being in breach of GDPR rules. Using the data yourself in ways that aren’t allowed is reason enough in itself for fines of up to 4% of your global yearly turnover. In Amazon’s case the fine isn’t even a third of a percentage point of their turnover, so about a day’s worth of turnover for them: they’re being let off pretty lightly, actually, compared to what is possible under the GDPR.
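As a back-of-the-envelope check of that comparison (a rough sketch: the turnover figure below is an approximation of Amazon’s reported 2020 net sales converted to euro, not a number from the ruling):

```typescript
// Rough back-of-the-envelope arithmetic; the turnover figure is an assumed
// approximation of Amazon's 2020 net sales, not a figure from the DPA ruling.
const fineEur = 746e6;           // the Luxembourg DPA fine, in euro
const yearlyTurnoverEur = 330e9; // assumed global yearly turnover, in euro

const fineShare = (fineEur / yearlyTurnoverEur) * 100; // share of yearly turnover
const oneDayOfTurnover = yearlyTurnoverEur / 365;      // turnover per day
const gdprCeiling = 0.04 * yearlyTurnoverEur;          // GDPR cap of 4% of turnover

console.log(`${fineShare.toFixed(2)}% of yearly turnover`);                          // ≈ 0.23%
console.log(`one day of turnover ≈ €${(oneDayOfTurnover / 1e6).toFixed(0)} million`); // ≈ €904 million
console.log(`GDPR ceiling ≈ €${(gdprCeiling / 1e9).toFixed(1)} billion`);             // ≈ €13.2 billion
```

On those assumed figures the fine is somewhat less than a single day of turnover, and well under a tenth of what the GDPR allows for.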

How Amazon uses the data it collects, not any breach or some such, is the actual reason for the complaint by La Quadrature du Net (PDF) filed with the Luxembourg DPA: the complaint “alleges that Amazon manipulates customers for commercial means by choosing what advertising and information they receive.” (emphasis mine)

The complaint and the ruling lay bare the key fact Amazon and other tech companies aren’t willing to publicly comment upon: adtech in general is in breach of the GDPR.

There are a range of other complaints along these lines being processed by various DPAs in the EU, though for some of those it will be a long wait, as e.g. the Irish DPA is working at a snail’s pace w.r.t. complaints against Apple and Facebook. (The slow speed of the Irish DPA is itself now the subject of a complaint.)

Meanwhile two new European laws have been proposed that don’t chime with the current modus operandi of Amazon et al, the Digital Markets Act and the Digital Services Act, both of which contain even bigger potential fines than the GDPR for non-compliance w.r.t. e.g. interoperability, service neutrality, and transparency and accountability measures. And of course there are the European anti-trust charges against Amazon as well.

Amazon will of course appeal, but it can only ever be an attempt to gaslight and gloss over the fundamental conflict between adtech and GDPR. Let’s hope the Luxembourg DPA continues to see through that.

My first reading of the yet-to-be-published EU Regulation on the European Approach for Artificial Intelligence, based on a leaked version, leaves me with a pretty positive impression. It takes a logical approach, laid out in the 92 recitals preceding the articles, based on risk assessment, where erosion of human and citizens’ rights, or risk to key infrastructure, services and product safety, is deemed high risk by definition. High risk means stricter conditions, following some of the building blocks of the GDPR, also when it comes to governance and penalties. Those conditions are tied to being allowed to put a product on the market, and to how products perform in practice (not just how they’re intended to perform). I find that an elegant combination: risk assessment based on citizen rights and critical systems, connected to well-worn mechanisms of market access and market monitoring. It places those conditions on both producers and users, as well as other parties involved along the supply chain.

The EU approaches to data and AI align well this way, it seems, and translate the European geopolitical proposition concerning data and AI, centered on civic rights, into codified law. That codification, like the GDPR, is how the EU exports its norms to elsewhere.

The text should be published soon by the EC, and I’ll try a write-up in more detail then.

Two bookmarks, concerning GDPR enforcement. The GDPR is an EU law with global reach and as such ‘exports’ the current European approach to data protection as a key citizen right. National Data Protection Authorities (DPAs) are tasked with enforcing the GDPR against companies not complying with its rules. The potential fines for non-compliance are very steep, but much depends on DPAs being active. Various DPAs at this point, two years after GDPR enforcement commenced, seem understaffed, indecisive, or dragging their feet.

Now the DPAs are being sued by citizens to force them to do their job properly. The Luxembourg DPA is being sued over its surprising ruling that the GDPR is basically unenforceable outside the EU (which isn’t true, as it could block services into the EU, seize assets, etc.). And there’s a case before the CJEU, based on the Irish DPA being extremely slow in starting investigations of the Big Tech companies registered within its jurisdiction, that would allow other national DPAs to start their own cases against these companies. (Normally the DPA of the country where a company is registered is responsible, but in certain cases the DPAs of the complaining citizen’s country of residence can get involved too.)

The DPAs are the main factor in whether the GDPR is an actual force for data protection or an empty gesture. And it seems various EU citizens are running out of patience waiting for DPAs to take up their defined role. Rightly so.

My colleagues Emily and Frank have in the past months been contributing our company’s work on ethical data use to the W3C’s Spatial Data on the Web Interest Group.

The W3C has now published a draft document on the responsible use of spatial data, to invite comments and feedback. It is not a normative document but aims to promote discussion. Comments can be filed directly via the GitHub link mentioned, or through the group’s mailing list (subscribe, archives).

The purpose of this document is to raise awareness of the ethical responsibilities of both providers and users of spatial data on the web. While there is considerable discussion of data ethics in general, this document illustrates the issues specifically associated with the nature of spatial data and both the benefits and risks of sharing this information implicitly and explicitly on the web.

Spatial data may be seen as a fingerprint: For an individual every combination of their location in space, time, and theme is unique. The collection and sharing of individuals’ spatial data can lead to beneficial insights and services, but it can also compromise citizens’ privacy. This, in turn, may make them vulnerable to governmental overreach, tracking, discrimination, unwanted advertisement, and so forth. Hence, spatial data must be handled with due care. But what is careful, and what is careless? Let’s discuss this.

"Here"
2013 artwork by Jon Thomson and Alison Craighead. Located at the Greenwich Meridian, the sign marks the distance from itself in miles around the globe. Image by Alex Liivet, license CC-BY
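To make the ‘fingerprint’ point in the quoted passage above concrete, here is a toy sketch (hypothetical names and made-up data, not taken from the W3C draft) of how quickly a few coarse place-and-time observations single out one person in a dataset:

```typescript
// Toy illustration with made-up data: a handful of coarse (place, hour)
// observations is often enough to single out one individual in a dataset.
type Visit = { place: string; hour: number };

const traces: Record<string, Visit[]> = {
  alice: [{ place: "station", hour: 8 }, { place: "office", hour: 9 }, { place: "gym", hour: 18 }],
  bob:   [{ place: "station", hour: 8 }, { place: "office", hour: 9 }, { place: "pub", hour: 18 }],
  carol: [{ place: "school",  hour: 8 }, { place: "office", hour: 9 }, { place: "gym", hour: 18 }],
};

// Which people in the dataset match every observed (place, hour) point?
function matching(observations: Visit[]): string[] {
  return Object.entries(traces)
    .filter(([, trace]) =>
      observations.every(o => trace.some(v => v.place === o.place && v.hour === o.hour)))
    .map(([name]) => name);
}

console.log(matching([{ place: "station", hour: 8 }]));                             // ["alice", "bob"]
console.log(matching([{ place: "station", hour: 8 }, { place: "gym", hour: 18 }])); // ["alice"]
```

With three people and three observations each this is trivially small, but the same effect is what makes real-world location traces so identifying.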

Bookmarked Facebook says it may quit Europe over ban on sharing data with US (The Guardian)
Facebook has warned that it may pull out of Europe if the Irish data protection commissioner enforces a ban on sharing data with the US, after a landmark ruling by the European court of justice found in July that there were insufficient safeguards against snooping by US intelligence agencies.

Never issue a threat you’re not really willing to follow up on… FB says it might stop serving EU citizens because it isn’t allowed to transfer their data to US servers over data protection concerns. To me it would seem good news if the FB data-kraken would withdraw its tentacles. It is also an open admission that they can’t provide their service if it is not tied to adtech and the rage-fed algorithmic timeline built on detailed data collection. Call that bluff, I’d say.

Bookmarked Only 9% of visitors give GDPR consent to be tracked (markosaric.com)
Privacy regulations such as the GDPR say that you need to seek permission from your website visitors before tracking them. Most GDPR consent banner implementations are deliberately engineered to be difficult to use and are full of dark patterns that are illegal according to the law. […] If you implement a proper GDPR consent banner, a vast majority of visitors will most probably decline to give you consent. 91% to be exact out of 19,000 visitors in my study.

GDPR and adtech tracking cannot be reconciled, a point the bookmark above shows once more: 91% will not provide consent when given a clear, unambiguous choice. GDPR enforcement needs a boost. So that adtech may die.

Marko Saric points to various options available to adtech users: targeted ads for consenting visitors only, showing ads based just on the page visited (as he says, “Google made their first billions that way”), using GDPR-compliant statistics tools, and switching to more ethical monetisation methods. Publishers trying to get consent without offering a clear way to not opt in (it isn’t about opting out: the GDPR requires informed and unforced consent through opt-in, no consent is the default, and refusing may not impact the service), while most web surfers don’t want to share their data, will likely result in blanket solutions like ad and tracker blocking becoming the browser default. As Saric says, most advertisers are very aware that visitors don’t want to be tracked; they might just be waiting until GDPR enforcement actively stops them and the cash stops coming in (FB e.g. has some $6 billion reasons every single month to continue tracking you).
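To illustrate what the opt-in requirement amounts to in practice, here is a minimal sketch (hypothetical helper names and script URL, not taken from Saric’s post): no tracking loads until the visitor actively accepts, declining is just as easy, and the page works the same either way.

```typescript
// Minimal sketch of GDPR-style opt-in consent (hypothetical names throughout).
// The default is "no consent": nothing tracking-related loads until the visitor
// actively accepts, and declining leaves the service fully functional.
type Consent = "accepted" | "declined" | null;

function storedConsent(): Consent {
  return (localStorage.getItem("tracking-consent") as Consent) ?? null;
}

function loadTrackingScript(): void {
  // Only ever called after an explicit opt-in; the src is a placeholder.
  const script = document.createElement("script");
  script.src = "/assumed-analytics.js";
  document.head.appendChild(script);
}

// onAccept / onDecline would be wired to the banner's two equally prominent buttons.
function onAccept(): void {
  localStorage.setItem("tracking-consent", "accepted");
  loadTrackingScript();
}

function onDecline(): void {
  // One click, as easy as accepting, and no nagging on later visits.
  localStorage.setItem("tracking-consent", "declined");
}

// On page load: an absent or declined choice simply means no tracking.
if (storedConsent() === "accepted") {
  loadTrackingScript();
}
```

The point of the sketch is the default: consent is something the visitor grants, not something the site assumes.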

(ht Peter O’Shaughnessy)