Category Archives: ethics

Data Protection By Design, The Little Blue Book

To celebrate the launch of the GDPR last Friday, Jaap-Henk Hoepman released his ‘little blue book’ (PDF) on Privacy Design Strategies (with a CC-BY-NC license). Hoepman is an associate professor with the Digital Security group of the ICS department at Radboud University.

I heard him speak a few months ago at a Tech Solidarity meet-up, and enjoyed his insights and pragmatic approaches (PDF slides here).

Data protection by design (together with a ‘state of the art’ requirement) forms the forward-looking part of the GDPR, where the minimum requirements keep evolving. The GDPR is designed to have a rising floor that way.
The little blue book has an easy-to-understand outline, which breaks privacy by design down into 8 strategies, each accompanied by a number of tactics, all of which can be used in parallel.

Those 8 strategies are divided into 2 groups: data oriented strategies and process oriented strategies.

Data oriented strategies:
Minimise (tactics: Select, Exclude, Strip, Destroy)
Separate (tactics: Isolate, Distribute)
Abstract (tactics: Summarise, Group, Perturb)
Hide (tactics: Restrict, Obfuscate, Dissociate, Mix)

Process oriented strategies:
Inform (tactics: Supply, Explain, Notify)
Control (tactics: Consent, Choose, Update, Retract)
Enforce (tactics: Create, Maintain, Uphold)
Demonstrate (tactics: Record, Audit, Report)

All come with examples, and the final chapters provide suggestions on how to apply them in an organisation.
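As a rough illustration (my own, not from the book), the outline is simple enough to capture in a small data structure, for instance to turn into a checklist for a design review. The strategy and tactic names below are the ones from the book; the checklist function is just a hypothetical example of how you might use them:

```python
# Hoepman's privacy design strategies and their tactics, as listed above,
# captured in a dictionary so they can be used as a design review checklist.
PRIVACY_DESIGN_STRATEGIES = {
    # data oriented strategies
    "Minimise": ["Select", "Exclude", "Strip", "Destroy"],
    "Separate": ["Isolate", "Distribute"],
    "Abstract": ["Summarise", "Group", "Perturb"],
    "Hide": ["Restrict", "Obfuscate", "Dissociate", "Mix"],
    # process oriented strategies
    "Inform": ["Supply", "Explain", "Notify"],
    "Control": ["Consent", "Choose", "Update", "Retract"],
    "Enforce": ["Create", "Maintain", "Uphold"],
    "Demonstrate": ["Record", "Audit", "Report"],
}


def review_checklist() -> list[str]:
    """One question per tactic, to walk through during a privacy design review."""
    return [
        f"{strategy} / {tactic}: how does the design apply this tactic?"
        for strategy, tactics in PRIVACY_DESIGN_STRATEGIES.items()
        for tactic in tactics
    ]


if __name__ == "__main__":
    for question in review_checklist():
        print(question)
```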

WP Values Tracking You At $30 Per Year

The Washington Post now has a premium ‘EU’ option, suggesting you pay more for them to comply with the GDPR.

Reading what the offer entails of course shows something different.
The basic offer is the price you pay to read their site, but you must give consent for them to track you and to serve targeted ads.
The premium offer is the price you pay for a completely ad-free, and thus tracking-free, version of the WP, akin to what various other outlets and e.g. many mobile apps do too.

This of course has little to do with GDPR compliance. For the free and basic subscriptions they still need to be compliant with the GDPR, but you enter into a contract that includes your consent to get to that compliance. They will still need to explain to you what they collect and what they do with it, for instance. And they do, e.g. by listing all the partners they exchange visitor data with.

The premium version gives you an ad-free WP, so the issue of GDPR compliance doesn’t even come up (except of course for things like commenting, which is easy to handle). Which is an admission of two things:

  1. They don’t see any justification for how their ads work other than getting consent from a reader. And they see no hassle-free way to provide informed consent options, or granular controls to readers, that doesn’t impact the way ad-tech works, without running afoul of the rule that consent cannot be tied to core services (like visiting their website).
  2. They value tracking you at $30 per year.

Of course their free service is still forced consent, and thus runs afoul of the GDPR, as you cannot see their website at all without it.

Yet, just to peruse an occasional article, e.g. when following a link, that forced consent is nothing your browser can’t handle with a blocker or two, and a VPN if you want. After all, your browser is your castle.

GDPR as QA

Today I was at a session on the GDPR at the Ministry for Interior Affairs in The Hague, organised by the centre of expertise on open government.
It made me realise how I actually approach the GDPR, and how I see all the overblown reactions to it, like sending all of us a heap of mail to re-request consent where none is needed, or even taking your website or personal blog offline. I find I approach the GDPR like I approach a quality assurance (QA) system.

One key change with the GDPR is that organisations can now be audited on their preventive data protection measures, which of course already mimics QA. (Beyond that, the GDPR is mostly an incremental change to the previous law, except that the people described by your data now have articulated rights that apply globally, and that it has a new set of teeth in the form of substantial penalties.)

AVG (the Dutch term for the GDPR) mindmap
My colleague Paul facilitated the session and showed this mindmap of GDPR aspects. I think it misses the more future-oriented parts.

The session today had three brief presentations.

In one, a student showed some results from his thesis research on the implementation of the GDPR, for which he had spoken with a lot of data protection officers (DPOs). These are mandatory roles for all public sector bodies, and also mandatory for some specific types of data processing companies. One of the surprising outcomes is that some of these DPOs saw themselves, and were seen, as ‘outposts’ of the data protection authority, in other words as enforcers or even potentially as moles. That is not conducive to a DPO fulfilling the part of their role that is about raising awareness of and sensitivity to data protection issues. It strongly reminded me of when, 20 years ago, I was involved in creating a QA system from scratch for my then employer. Some of my colleagues saw the role of the quality assurance manager as policing their work. It took effort to show that we were not building a straitjacket around them to keep them within strict boundaries, but providing a solid skeleton to grow on and move faster, and that audits are not hunts for compliance breaches but a way to make visible how the way people work changes over time, and to incorporate the professionally justified changes into that skeleton.

In another presentation a civil servant of the Ministry talked about creating a register of all person-related data being processed. What stood out most for me was the (rightly) pragmatic approach they took in describing current practices and data collections inside the organisation. This is a key element of QA as well: you work from descriptions of what happens, not of what ’should’ or ‘ideally’ happens. QA is a practice rooted in pragmatism, where once a practice is described and agreed it will be audited.
Of course in the case of the Ministry it helps that they only have tasks mandated by law, so the grounds for processing are clear by default, and if they are not, the data should not be collected. This reduces the range of potential grey areas. Similarly for security measures: they already need to adhere to national security guidelines (the national baseline for information security), which likewise avoids having to invent new measures, proves compliance for them, and provides an auditable security requirement to go with it. This no doubt helped them take that pragmatic approach, which, as in QA, takes its cues from what is really happening in the organisation, from what the professionals are really doing.

A third presentation, by the national Forum for Standardisation, dealt with open standards for both processes and technologies. Since 2008 a growing list, currently of some 40 standards, is mandatory for Dutch public sector bodies. In this list you find a range of elements that are ready-made to help with GDPR compliance: support for the rights of those described by the data, such as the right to export and portability, as well as preventive technological security measures and ‘by design’ data protection measures. Some of these are ISO norms themselves, or, like the national baseline for information security mentioned above, a compliant derivative of such ISO norms.

These elements, the ‘police’ versus ‘counsel’ perspective on the role of a DPO, the pragmatism that needs to underpin actions, and the building blocks readily found elsewhere in your own practice that are already based on QA principles, made me realise and better articulate how I’ve been viewing the GDPR all along: as a quality assurance system for data protection.

With a quality assurance system you can still famously produce concrete swimming vests, but at least it will be done consistently. Likewise with the GDPR you will still be able to do all kinds of things with data. Big data and developing machine learning systems are hard but hopefully worthwhile to do. With the GDPR it will just be hard in a slightly different way, but it will also be helped by establishing some baselines and testing core assumptions, while making your purposes and ways of working available for scrutiny. Introducing QA does not change the way an organisation works, unless it really doesn’t have its house in order. Likewise the GDPR won’t change your organisation much if you have your house in order.

From the QA perspective on the GDPR it is perfectly clear why it has a moving baseline (through its ‘by design’ and ‘state of the art’ requirements). From the same perspective it is perfectly clear how it connects to the way Europe is positioning itself geopolitically in the race concerning AI. The policing perspective, after all, only leads to a Luddite stance concerning AI, which is not what the EU is doing, far from it. From that it is clear how the legislator intends the thrust of the GDPR: as QA, really.

This Blog Is Now GDPR Compliant

At least I think it is… Personal blogs don’t need to comply with the new European personal data protection regulation (already in force, but enforceable from May 25th, next week), says Article 2.2.c. However my blog does have a link with my professional activities, as I blog here about professional interests. One of those interests is data protection (the more you’re active in transparency and open data, the more you also start caring about data protection).

In the past few weeks Frank Meeuwsen has been writing about how to get his blog GDPR compliant (GDPR and the IndieWeb 1, 2 and 3, all in Dutch), and Peter Rukavina has been following suit. Like yours, my e-mail inbox is overflowing with GDPR related messages and requests from all the various web services and mailing lists I’m using. I had been thinking about adding a GDPR statement to this blog, but clearly needed a final nudge.

That nudge came this morning as I updated the Jetpack plugin of my WordPress blog. WordPress is the software I use to create this website, and Jetpack is a module for it, made by the same company that makes WordPress itself, Automattic. After the update, I got a pop-up stating that in my settings a new option now exists called “Privacy Policy”, which comes with a guide and suggested texts to be GDPR compliant. I was pleasantly surprised by this step by Automattic.

So I used that to write a data protection policy for this site. It is rather trivial in the sense that this website doesn’t do much, yet it is also surprisingly complicated, as there are many different potential rabbit holes to go down: not just comments or webmentions, but also the server logs my hosting provider keeps, statistics tools (some of which I don’t use but cannot switch off either), third party plugins for WordPress, embedded material from data hungry platforms like YouTube, etc. I have a relatively bare-bones blog (over the years I made it ever more minimalistic, most recently stripping out things like sharing buttons), and still, as I ask myself questions that normally only legal departments would ask, there are many aspects to consider. That is of course the whole point: that we ask these types of questions more often, not just of ourselves, but of every service provider we engage with.

The resulting Data Protection Policy is now available from the menu above.

Suggested Reading: Open Science, Apologies, Dark Patterns and more

Some links I think worth reading today.

Twitter Not GDPR Compliant (nor Flickr, nor ….)

Many tech companies are rushing to arrange compliance with the GDPR, Europe’s new data protection regulation. What I have seen landing in my inbox thus far is not encouraging. Like Facebook, other platforms clearly struggle with, or hope to get away with, partially or completely ignoring the concepts of informed consent, unforced consent and proving consent. One would suspect the latter, as Facebook’s removal of 1.5 billion users from EU jurisdiction is a clear step to reduce potential exposure.

Where consent by the data subject is the basis for data collection: informed consent means consent needs to be explicitly given for each specific use of person-related data, based on an explanation, clear to laypeople, of the reason for collecting the data and precisely how it will be used.
Unforced means consent cannot be tied to core services of the controlling/processing company when that data isn’t necessary to perform the service. In other words, “if you don’t like it, delete your account” is forced consent. Otherwise the right to revoke one or several of the consents given becomes impossible to exercise in practice.
Additionally, a company needs to be able to show that consent has been given, where consent is claimed as the basis for data collection.
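To make that a bit more tangible, here is a minimal sketch of what recording specific, revocable and demonstrable consent per purpose could look like. It is only an illustration of the three requirements above; the names (ConsentRecord, ConsentLedger) are hypothetical and nothing here is prescribed by the GDPR itself:

```python
from dataclasses import dataclass, field
from datetime import datetime, timezone
from typing import Optional


@dataclass
class ConsentRecord:
    """One explicit consent, given for one specific purpose (illustrative only)."""
    purpose: str                  # one specific use, e.g. "newsletter", never a bundle
    explanation: str              # the plain-language explanation shown to the person
    given_at: datetime
    revoked_at: Optional[datetime] = None

    @property
    def active(self) -> bool:
        return self.revoked_at is None


@dataclass
class ConsentLedger:
    """Per-person ledger, so consent can be demonstrated and revoked at any time."""
    records: list = field(default_factory=list)

    def give(self, purpose: str, explanation: str) -> ConsentRecord:
        record = ConsentRecord(purpose, explanation, datetime.now(timezone.utc))
        self.records.append(record)
        return record

    def revoke(self, purpose: str) -> None:
        for record in self.records:
            if record.purpose == purpose and record.active:
                record.revoked_at = datetime.now(timezone.utc)

    def may_process(self, purpose: str) -> bool:
        """Processing for a purpose is only allowed while consent for it is active."""
        return any(r.purpose == purpose and r.active for r in self.records)
```

In such a model “accept everything or delete your account” simply has no place: each purpose stands on its own and can be withdrawn on its own.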

Instead I got this email from Twitter earlier today:

“We encourage you to read both documents in full, and to contact us as described in our Privacy Policy if you have questions.”

and then

followed by

You can also choose to deactivate your Twitter account.

The first two bits mean consent is not informed, and that it’s not even explicit consent, but merely assumed consent. The last bit means it is forced. On top of that, Twitter will not be able to show that consent was given (as it is merely assumed from using their service). That’s not how this is meant to work. Non-compliant, in other words. (IANAL though.)

GDPR as De Facto Norm: Sonos Speakers

Just received an email from Sonos (the streaming speaker system) about the changes they are making to their privacy statement. Like with FB in my previous posting, this is triggered by the GDPR becoming enforceable at the end of May.

The mail reads in part:

We’ve made these changes to comply with the high demands made by the GDPR, a law adopted in the European Union. Because we think that all owners of Sonos equipment deserve these protections, we are implementing these changes globally.

This is precisely the hoped-for effect, I think. Setting high standards in a key market will lift those standards globally, as it is usually more efficient to work internally according to one standard than to maintain two or more in parallel. Good to see it happening, as it is a starting point for positioning Europe as a distinct player in global data politics, with ethics by design as the distinctive proposition. The GDPR isn’t written as a source of red tape and compliance costs, but to level the playing field and enable companies to compete by building on data protection compliance (by demanding ‘data protection by design’ and following the ‘state of the art’, which are both rising thresholds). Non-compliance in turn is becoming the more costly option (if the GDPR really gets enforced, that is).

Macron’s 1.5 Billion for Values, Data and AI

Data, especially lots of it, is the feedstock of machine learning and algorithms. And there’s a race on for who will lead in these fields. This gives data a geopolitical dimension, and makes it a key strategic resource of nations. In between the vast data lakes in corporate silos in the US and the national data spaces geared towards data-driven authoritarianism as in China, what is the European answer, what is the proposition Europe can make to the world? Ethics-based AI. “Enlightenment Inside”.

Last month French President Macron announced spending 1.5 billion on AI in the coming years. Wired published an interview with Macron. Below is an extended quote of what I think are the key statements.

AI will raise a lot of issues in ethics, in politics, it will question our democracy and our collective preferences… It could totally dismantle our national cohesion and the way we live together. This leads me to the conclusion that this huge technological revolution is in fact a political revolution… Europe has not exactly the same collective preferences as US or China. If we want to defend our way to deal with privacy, our collective preference for individual freedom versus technological progress, integrity of human beings and human DNA, if you want to manage your own choice of society, your choice of civilization, you have to be able to be an acting part of this AI revolution. That’s the condition of having a say in designing and defining the rules of AI. That is one of the main reasons why I want to be part of this revolution and even to be one of its leaders. I want to frame the discussion at a global scale… The key driver should not only be technological progress, but human progress. This is a huge issue. I do believe that Europe is a place where we are able to assert collective preferences and articulate them with universal values.

Macron’s actions are largely based on the report by French MP and Fields Medal-winning mathematician Cédric Villani, For a Meaningful Artificial Intelligence (PDF).

Ethics by Design

My current thinking about what to bring to my open data and data governance work, as well as to technology development, especially in the context of networked agency, can be summarised under the moniker ‘ethics by design’. In a practical sense this means setting non-functional requirements at the start of a design or development process, or when tweaking or altering existing systems and processes: requirements that reflect the values you want to safeguard or ensure, or the potential negative consequences you want to mitigate. Privacy, power asymmetries, individual autonomy, equality, and democratic control are examples of this.

Today I attended the ‘Big Data Festival’ in The Hague, organised by the Dutch Ministry of Infrastructure and Water Management. Here several government organisations presented themselves and the work they do using data as an intensive resource. Stuff that speaks to the technologist in me. In parallel there were various presentations and workshops, and there I was most interested in what was said about ethical issues around data.

Author and interviewer Bas Heijne set the scene at the start by pointing to the contrast between the technology optimism concerning digitisation of years back and the more dystopian discussion now (triggered by things like the Cambridge Analytica scandal and cyberwars), and sought the balance in the middle. I think that contrast is largely due to the difference in assumptions underneath the utopian and dystopian views. The techno-optimist perspective, at least in the web scene I frequented in the late 90’s and early 00’s, assumed the tools would be in the hands of individuals, who would independently weave the world wide web, smart at the edges and dumb at the center. The dystopian views, including those of early critics like Jaron Lanier, assumed, and were proven at least partly right, a centralisation into walled gardens where individuals are mere passive users or objects, and no longer subjects with autonomy. These assumptions lead to wildly different development paths concerning power distribution, equality and agency.

In the afternoon a session with professor Jeroen van den Hoven of Delft University focused on making the ethical challenges more tangible, as well as pointing to the beginnings of practical ways to address them. It was the second time I heard him present in a month. A few weeks ago I attended an Ethics and Internet of Things workshop at the University of Twente, organised by the UNESCO World Commission on the Ethics of Science and Technology (COMEST). There he gave a very worthwhile presentation as well.


Van den Hoven “if we don’t design for our values…”

What I call ethics by design, a term I first heard from prof. Valerie Frissen, Van den Hoven calls value sensitive design. That term sounds more pragmatic, but I feel it conveys the point less strongly. This time he also incorporated the geopolitical aspects of data governance, which echoed what Rob van Kranenburg (IoT Council, Next Generation Internet) presented at that workshop last month (and which I really should write down separately). It was good to hear it reinforced for today’s audience of mainly civil servants, as currently there is a certain level of naivety in how governments, mainly local ones, collaborate with commercial partners around data collection and e.g. sensors in public space.

A (malfunctioning) billboard at Utrecht Central Station a few days ago, with an ill-considered camera in a public space (to measure engagement with adverts). Civic resistance taped over the camera.

Value sensitive design, said Van den Hoven, should seek to combine the power of technology with ethical values into services and products, instead of treating it as a dilemma with an either/or choice, which is the usual way it is framed: social networking OR privacy, security OR privacy, surveillance capitalism OR personal autonomy, smart cities OR human messiness and serendipity. In value sensitive design it is about ensuring the individual remains a subject in the philosophical sense, and not merely the object on which data-based services feed. By addressing both values and technological benefits as the same design challenge (security AND privacy, etc.), one creates a path for responsible innovation.

The audience saw responsibilities for both individual citizens and governments in building that path, and no one thought turning one’s back on technology and retreating to fictitious simpler times would work, although some doubted whether there was still room to stem the tide.

Cory Doctorow’s Walkaway: Hey, I Could Help Do That!

In a case of synchronicity I read Cory Doctorow’s novel Walkaway when I was ill recently, just as Bryan Alexander scheduled it for his near-future science fiction reading group. I loved reading the book, and in contrast to some of Doctorow’s other works the storyline kept working for me until the end.

Amazingly, Bryan has managed to get Doctorow to participate in a webcast as part of the Future Trends in learning series he hosts. The session is planned for May 16th, and I’ve marked my calendar for it.

In the comments Vanessa Vaile shares two worthwhile links. One is an interesting recording from May last year at the New York Public Library, in which Doctorow and Edward Snowden discuss some of the elements and underlying topics and dynamics of the Walkaway novel.

The other is a review on TOR.com that resonates a lot with me. The reviewer writes how, in contrast with lots of other science fiction that takes one large idea or large change and extrapolates from it, Doctorow takes a number of smaller ideas and smaller changes, and then works out how those might interplay and weave new complexities, where the impact on “manufacturing, politics, the economy, wealth disparity, diversity, privilege, partying, music, sex, beer, drugs, information security, tech bubbles, law, and law enforcement” is all presented in one go.

It seems futuristic, until you realize that all of these things exist today.
….. most of it could start right now, if it’s the world we choose to create.

By not having any one idea jump too far from reality, Walkaway demonstrates how close we are, right now, to enormous promise and imminent peril.

That is precisely the effect reading Walkaway had on me, leading me to think about how I could contribute to bringing some of the described effects about, and how some of those things I was, and am, already trying to create as part of my own workflow and information processes.