My first reading of the yet-to-be-published EU Regulation on the European Approach for Artificial Intelligence, based on a leaked version, leaves me pretty positive. It takes a logical approach, laid out in the 92 recitals preceding the articles, based on risk assessment: anything that erodes human and citizen rights, or poses a risk to key infrastructure, services, or product safety, is deemed high risk by definition. High risk means stricter conditions, following some of the building blocks of the GDPR, also when it comes to governance and penalties. Those conditions are tied to being allowed to put a product on the market, and to how products perform in practice (not just how they’re intended to perform). I find that an elegant combination: risk assessment based on citizen rights and critical systems, connected to the well-worn mechanisms of market access and market monitoring. It places those conditions on producers and users alike, as well as on other parties along the supply chain. The EU approaches to data and AI seem to align well this way, and translate the European geopolitical proposition concerning data and AI, centered on civic rights, into codified law. That codification, like the GDPR, is how the EU exports its norms elsewhere.
The text should be published soon by the EC, and I’ll try a write-up in more detail then.
Two bookmarks, concerning GDPR enforcement. The GDPR is an EU law with global reach and as such ‘exports’ the current European approach to data protection as a key citizen right. National Data Protection Agencies (DPAs) are tasked with enforcing the GDPR against companies that don’t comply with its rules. The potential fines for non-compliance are very steep, but much depends on DPAs being active. At this point, two years after GDPR enforcement commenced, various DPAs seem understaffed, indecisive, or to be dragging their feet.
Now the DPAs are being sued by citizens to force them to do their job properly. The Luxembourg DPA is being sued over its surprising ruling that the GDPR is basically unenforceable outside the EU (which isn’t true, as the EU could block services into the EU, seize assets, etc.). And there’s a case before the CJEU, based on the Irish DPA being extremely slow to start investigations of the Big Tech companies registered within its jurisdiction, that would allow other national DPAs to start their own cases against these companies. (Normally the DPA of the country where a company is registered is responsible, but in certain cases the DPAs of the complaining citizens’ countries of residence can get involved too.)
The DPAs are the main factor in whether the GDPR is an actual force for data protection or an empty gesture. And it seems various EU citizens are running out of patience with DPAs taking up their defined role. Rightly so.
My colleagues Emily and Frank have in the past months been contributing our company’s work on ethical data use to the W3C’s Spatial Data on the Web Interest Group.
The W3C has now published a draft document on the responsible use of spatial data, to invite comments and feedback. It is not a normative document but aims to promote discussion. Comments can be filed directly via the GitHub link mentioned, or through the group’s mailing list (subscribe, archives).
“The purpose of this document is to raise awareness of the ethical responsibilities of both providers and users of spatial data on the web. While there is considerable discussion of data ethics in general, this document illustrates the issues specifically associated with the nature of spatial data and both the benefits and risks of sharing this information implicitly and explicitly on the web.
Spatial data may be seen as a fingerprint: For an individual every combination of their location in space, time, and theme is unique. The collection and sharing of individuals spatial data can lead to beneficial insights and services, but it can also compromise citizens’ privacy. This, in turn, may make them vulnerable to governmental overreach, tracking, discrimination, unwanted advertisement, and so forth. Hence, spatial data must be handled with due care. But what is careful, and what is careless? Let’s discuss this.”
2013 artwork by Jon Thomson and Alison Craighead. Located at the Greenwich Meridian, the sign marks the distance from itself in miles around the globe. Image by Alex Liivet, license CC-BY
Facebook has warned that it may pull out of Europe if the Irish data protection commissioner enforces a ban on sharing data with the US, after a landmark ruling by the European court of justice found in July that there were insufficient safeguards against snooping by US intelligence agencies.
Never issue a threat you’re not really willing to follow up on… FB says it might stop servicing EU citizens because it isn’t allowed to transfer their data to US servers over data protection concerns. To me it would seem good news if the FB data-kraken would withdraw its tentacles. It is also an open admission that they can’t provide their service if it is not tied to adtech and the rage-fed algorithmic timeline built on detailed data collection. Call it, I’d say.
Privacy regulations such as the GDPR say that you need to seek permission from your website visitors before tracking them.
Most GDPR consent banner implementations are deliberately engineered to be difficult to use and are full of dark patterns that are illegal according to the law… If you implement a proper GDPR consent banner, a vast majority of visitors will most probably decline to give you consent. 91% to be exact out of 19,000 visitors in my study.
GDPR and adtech tracking cannot be reconciled, a point the bookmark below shows once more: 91% will not provide consent when given a clear unambiguous choice. GDPR enforcement needs a boost. So that adtech may die.
Marko Saric points to various options available to adtech users: targeted ads for consenting visitors only, ads based simply on the page visited (as he says, “Google made their first billions that way“), GDPR-compliant statistics tools, and a switch to more ethical monetisation methods. A likely result of publishers seeking consent without offering a clear way to decline (it’s not about opting out; the GDPR requires informed and unforced consent through opt-in, with no consent as the default, and declining may not impact the service), while most web surfers don’t want to share their data, will be blanket solutions such as browsers blocking ads and trackers by default. As Saric says, most advertisers are very aware that visitors don’t want to be tracked; they may just be waiting until GDPR enforcement actively stops them and the cash stops coming in (FB e.g. has some $6 billion reasons every single month to continue tracking you).
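The opt-in rule is simple to state in code: no consent is the default, and tracking may only start after an explicit, unforced yes. A minimal sketch (the function name and the “tracking-consent” storage key are illustrative, not taken from any real consent library):

```javascript
// Minimal sketch of the GDPR opt-in default: consent must be an explicit,
// affirmative choice, so anything else (no stored answer, a dismissed
// banner, an earlier "denied") falls through to "do not track".
function shouldTrack(storedChoice) {
  // Only an explicit "granted" enables tracking; null, undefined,
  // "denied", or any other value means the tracker is never loaded.
  return storedChoice === "granted";
}

// In a browser you would gate the tracker on this check, for example:
//   if (shouldTrack(localStorage.getItem("tracking-consent"))) {
//     loadAnalytics(); // inject the tracking script only after opt-in
//   }
// Declining must not degrade the site: the page renders the same either way.
```

Note the direction of the default: the careless pattern is to treat anything short of an explicit “no” as consent, which is exactly the dark pattern the GDPR forbids.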
(ht Peter O’Shaughnessy)
Ian Forrester over at Cubic Garden has submitted a GDPR request to Clearview AI, the alt-right-linked company that is hawking its facial recognition database (based on scraped online images) to law enforcement as well as commercial outfits. Should be interesting to follow along. Recently IBM stopped facial recognition work (they previously seemed to me not up to speed with CC licensing), and others like Amazon and Microsoft did too when it comes to law enforcement. Facial recognition is extremely sensitive to bias.
Facial recognition 1, by EFF, license CC BY