Public Spaces is an effort to reshape the internet experience towards a much larger emphasis on, well, public spaces. Currently most online public debate takes place in silos provided by monopolistic corporations, where public values will always be trumped by value extraction, regardless of the externalised costs to communities, ethics, and society. Today the Public Spaces 2022 conference took place. I watched the 2021 edition online, but this time decided to be in the room, to have time to interact with other participants and to see who consider themselves part of this effort. Public Spaces is supported by 50 or so organisations, one of which I’m a board member of. Despite that nominal involvement I am still somewhat unclear about the purpose of Public Spaces as a movement, as opposed to an intention. This first day of the two-day conference didn’t make that clearer to me, but the actual sessions and conversations were definitely worthwhile.
Some first observations that I jotted down on the way home, below the photo taken just before the start of the conference.
a) In the audience and on stage there were some familiar faces, but mostly people unknown to me. A good thing, as it demonstrates how many new entrants there are into these discussions. At the same time there was also a notable absence of faces, e.g. from the organisations that are part of the Public Spaces effort. Maybe they would rather show up tomorrow, when the deputy minister is also present. As an awareness-raising exercise, despite this still being a rather niche and like-minded audience, this conference is certainly valuable.
b) That value was, I think, mostly in the attention given to explaining some of the newly agreed European laws: the Digital Markets Act, the Digital Services Act and the Data Governance Act. For most of the audience this seemed to be their first actual encounter with what those laws say, and one panel moderator, upon hearing their contents, was visibly surprised that this was already decided regulation and not stuck somewhere in a long and slow pipeline of debate and lobbying.
c) It was very good to hear people on stage actually speaking enthusiastically about the things these new laws deliver, despite being cautious about the pace of implementation and when we’ll see the actual impact of these rules. Lotje Beek of Bits of Freedom was enthusiastic about the Digital Services Act and I applaud the work BoF has done in the past years on this. (Disclosure: I’m on the board of an NGO that joins forces with BoF and Waag, organiser of this conference, in the so-called Digital Four, which lobbies the Dutch government on digital affairs.)
Similarly Kim van Sparrentak, MEP for the Greens, talked with energy about the Digital Markets Act. This was very important I think, as it helps impress on the audience the need to engage with these new laws and the tools they provide.
d) The opening talk by Miriam Rasch I enjoyed a lot. Her earlier book Friction, I felt, lacked some of the deeper understanding of the technologies involved needed to build its conclusions and arguments on, so I was interested in hearing her talk in person. The focus today was more on her second book Autonomy. I bought it and will read it, also to clarify whether some of the things I think I heard are my misunderstanding or part of the ideas expressed in the book.
Rasch positions autonomy as the key thing to guard and strengthen. She doesn’t mean autonomy in the sense of being fully disconnected from everyone else in your decisions, but in a more interdependent way: to make your own choices, within the network of relationships around you. Also as an emotionally rooted thing, which I thought a useful insight. Still, she does position it as something exclusively individual. At the same time she seems to equate autonomy with agency, and I think agency does not merely reside at the individual level but also in groups of relationships in a given context (I call it networked agency). It seemed a very westernised, individualistic viewpoint, one that I think sets you up for less autonomy, because it pits you individually against the much bigger systems and structures that erode your autonomy, dumping you in a very asymmetric power struggle.
A second thing that stood out to me is how she expresses the me-against-the-system issue as one of autonomy versus automation. It’s a nice alliteration, but I don’t accept that juxtaposition. It’s definitely the case that automation is frequently used to dehumanise lots of decisions, and thus erodes the autonomy of those being decided about. But to me that is not inherent in automation. When you have the logic of (corporate) bureaucracies doing the automation, you’ll end up with automation that mimics that logic. If I do the automation, it will mimic my logic. I use automation a lot for my own purposes (personal software), and it increases my agency; it’s a direct expression of my autonomy (or that of the groups I’m part of).
There’s more to be said, in a separate post, among other things about the three or four thinking exercises she took us through to explore autonomy as a concept for ourselves. After all, it wouldn’t make us more autonomous if she prescribed us her definition of autonomy, precisely because she underscores that it’s not a purely rational concept but an emotional one as well.
e) Prof. Tamar Sharon of Radboud University spoke about the influence tech companies gain in domains other than tech itself, because their technology is expanded into or used in domains such as health, education, spatial planning and media. She calls these sphere transgressions. This may bring value, but may also be problematic. She showed a very cool tool that visualises how various tech companies are influential in domains you don’t immediately associate them with. A good thinking aid, I think, also for the upcoming discussion about sectoral European data spaces, and for staying alert to the pitfall of it turning into a tech-dominated discussion rather than one about societal benefit and impact.
f) Kudos to the conference organisers. Every panel composition was nicely balanced, which shows good care in curating the programme and having tapped into a high-quality network. I know from experience that it takes deliberate effort to make it so. Also, the catering was fully vegetarian and vegan, with no words wasted on it, just by default. That’s the way to go.
I find this an unconvincing critique of the data altruism concept in the new EU Data Governance Act (caveat: the final consolidated text of the new law has not been published yet).
“If the EU had truly wanted to facilitate processing of personal data for altruistic purposes, it could have lifted the requirements of the GDPR”
GDPR slackened for common good purposes? Loosen citizen rights requirements? That assumes common good purposes can be defined well enough not to endanger citizen rights: turtles all the way down. The GDPR is a foundational block, one in which the author, some googling shows, is disappointed, having had some first-hand experience in its writing process. The GDPR is a quality assurance instrument, meaning that, as with ISO-style QA systems, it doesn’t make anything impossible or disallowed per se, but does require you to organise it responsibly upfront. That most organisations have implemented it as a compliance checklist to be applied post hoc is, to me, the primary reason it is perceived as a “straitjacket” and for the GDPR-related breaches that do occur.
It is also worth noting that data altruism covers data not covered by the GDPR. It’s not just about personally identifiable data, but also about otherwise non-public or confidential organisational data.
The article suggests that, by adding even more rules, it makes it harder for data altruistic entities to do something that can already be done under the GDPR by anyone.
The GDPR pertains to the grounds for data collection in the context of the usage specified at the time of collection, whereas data altruism is also aimed at non-specified and not yet known future use of data collected here and now. As such it covers an element the GDPR does not address, and offers a path out of the purpose binding the GDPR stipulates. It’s no surprise that a data altruism entity needs to comply with both the GDPR and a new set of rules, because those additional rules do not add to the GDPR responsibilities but cover other activities. The type of entity envisioned already exists in the Netherlands: common good oriented entities called public benefit organisations, ANBIs. These too do not absolve you from other legal obligations, or loosen the rules for you. On the contrary, they too have additional (public) accountability requirements, similar to those described in the DGA (centrally registered, must publish annual reports). The DGA creates ANBIs for data, Data-ANBIs. I’ve been involved in data projects that could have benefited from that possibility but that never happened in the end, because it couldn’t be made to work without this legal instrument.
To me the biggest blind spot in the criticism is that each of the examples cited as probably more hindered than helped by the new rules is a single project that sets up its own data collection process. That’s what I think data altruism is least useful for. You won’t be setting up a data altruism entity for your own project, because by then you already know what you want the data for, and you start collecting it after designing the project. Data altruism is useful for a general purpose data-holding entity, without pre-existing project designs, where later, with the data already collected, projects such as those cited as examples become applicants to use the data held. A data altruistic entity will not cater to or be created for a single project, but will serve data as a utility to many projects.
I envision that universities, or better yet networks of universities, will set up their own data altruistic entities, to cater to e.g. medical or social research in general. This is useful because there are currently many examples where leaving the data requirements to the research team is the source of not just GDPR breaches but also other ethical problems with data use. It will save individual projects, such as the examples mentioned, a lot of time and hassle if there are one or more fitting data altruistic entities for them to go to as a data source. There will then be no need to set up data collection, no need to obtain consent or another ground for data collection from each individual respondent, or to create enough trust in your project. All of that is reduced to guaranteeing responsible data use and convincing an ethics board that your project is set up responsibly, so that you get access to pre-existing data sources with pre-existing trust structures.
It seems to me the sentences cited below require much more thorough argumentation than the article and accompanying PDF provide. Ever since I’ve been involved in open data I’ve seen plenty of data innovations, especially if you switch your ‘only unicorns count’ filter off. Barriers that unintentionally do exist typically stem more from the lack of a unified market for data in Europe, something the DGA (and the GDPR) is actually aimed at.
“So long as the anti-processing straitjacket of the GDPR is not loosened even a little for altruistic purposes, there will be little hope for data innovations from Europe.” “In any case, the EU’s bureaucratic ideas threaten to stifle any altruism.”
This is the presentation I gave at the Open Belgium 2018 Conference in Louvain-la-Neuve this week, titled ‘The role and value of data inventories, a key step towards mature data governance’. The slides are embedded further below, and as PDF download at grnl.eu/in. It’s a long read (some 3000 words), so I’ll start with a summary.
Summary, TL;DR
The quality of information housekeeping in local governments is often lacking.
Things like security, openness and privacy are safeguarded by putting a separate fence for each around the organisation, but those safeguards lack detailed insight into data structures and effective corresponding processes. As archiving, security, openness and privacy in a digitised environment are basically inseparable, doing ‘everything by design’ is the only option, and the only effective way to do that is at the level of the data itself. Fences are inefficient and ineffective, and the GDPR, through its obligations, will show how the privacy fence fails, forcing organisations to act. Doing data governance only for privacy makes little sense; doing it for openness, security and archiving at the same time is the logical step. Having good, detailed inventories of your data holdings is a useful instrument to start asking the hard questions and have meaningful conversations. It additionally allows local government to deploy open or shared data as a policy instrument, and releasing the inventory itself will help articulate civic demand for data. We’ve done a range of these inventories with local governments.
1: High time for mature data governance in local and regional government
Digitisation changes how we look at things like openness, privacy, security and archiving, as it creates new affordances now that the content and its medium have become decoupled. It creates new forms of usage, and new needs to manage those. As a result of that e.g. archivists find they now need to be involved at the very start of digital information processes, whereas earlier their work would basically start when the boxes of papers were delivered to them.
The reality is that local and regional governments have barely begun to fully embrace and leverage the affordances that digitisation provides them with. It shows in how most of them deal with information security, openness and privacy: by building three fences.
Security is mostly interpreted as keeping other people out, so a fence is put between the organisation and the outside world. Inside it nothing much is changed. Similarly a second fence is put in place for determining openness. What is open can reach the outside world, and the fence is there to do the filtering. Finally privacy is also dealt with by a fence, either around the entire organisation or a specific system, keeping unwanted eyes out. All fences are a barrier between outside and in, and within the organisation usually no further measures are taken. All three fences exist separately from each other, as stand alone fixes for their singular purpose.
The first fence: security
In the Netherlands a ‘baseline information security’ standard applies to local governments, and it determines what information should be regarded as business critical. Something is business critical if its downtime will stop public service delivery, or if its lack of quality has immediate negative consequences for decision making (e.g. decisions on benefits impacting citizens). Uptime and downtime are mostly about IT infrastructure, dependencies and service level agreements, and those fit the fence tactic quite well. Quality in the context of security is about ensuring data is tamper-free, doing audits, input checks, and knowing sources. That requires a data-centric approach, and it doesn’t fit the fence-around-the-organisation tactic.
The second fence: openness
Openness of local government information is mostly on request, or at best handled as a process separate from regular operational routines. Yet the stated end game is that everything should be actively open by design, meaning everything that can be made public is published the moment it is publishable. We also see that open data is becoming infrastructure in some domains. The implementation of the digitisation of the law on public spaces requires all involved stakeholders to have the same (access to) information. Many public sector bodies, both local ones and central ones like the cadastral office, have concluded that doing this through open data is the most viable way. For both the desired end game and for using open data as infrastructure, the fence tactic is very inefficient.
At the same time the data sovereignty of local governments is under threat. They increasingly collaborate in networks or outsource parts of their processes. In most contracts no attention is paid to data, other than in generic terms in the general procurement conditions. We’ve come across a variety of examples where this results in 1) governments not being able to provide data to citizens, even though by law they should be able to, 2) governments not being able to access their own data, receiving only the resulting graphs and reports, or 3) the slowest partner in a network determining the speed of disclosure. In short, the fence tactic is also ineffective. A more data-centric approach is needed.
The third fence: personal data protection
Mostly privacy is dealt with by identifying privacy sensitive material (but not what, where and when), and locking it down by putting up the third fence. The new EU privacy regulation, the GDPR, which will be enforced from May this year, is seen as a source of uncertainty by local governments. It is also responded to in the accustomed way: reinforcing the fence by making a ‘better’ list of what personal data is used within the organisation, while still not paying much attention to processes, nor to the shape and form of the personal data.
However, in the case of the GDPR, if it will indeed be enforced, this will not be enough.
GDPR an opportunity for ‘everything by design’
The GDPR confers rights on the people described by data, like the right to review, to portability, and to be forgotten. It also demands compliance ‘by design’, and ‘state of the art’. This can only be done if you are able to turn the rights of the GDPR into queries on your data, and have (automated) processes in place to deal with requests. It cannot be done with a ‘better’ fence. In the case of the GDPR, the first data-related law that takes the affordances of digitisation as a given, the fence tactic is set to fail spectacularly. This makes the GDPR a great opportunity to move to a data focus, not just for privacy by design, but to do openness, archiving and information security (in terms of quality) by design at the same time, as they are converging aspects of the same thing and can no longer be meaningfully separated. Detailed knowledge about your data structures is needed for that.
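To make ‘rights as queries’ tangible, here is a minimal sketch in Python with SQLite. The table, column names and identifiers are invented for illustration and this is not a compliance tool; the point is only that once you know exactly which fields hold personal data, an access or erasure request becomes a single query rather than an organisation-wide search.

```python
# Minimal sketch: hypothetical table, columns and identifiers, illustrating
# that with detailed knowledge of where personal data lives, two GDPR rights
# become plain queries plus a bit of process around them.
import sqlite3

conn = sqlite3.connect(":memory:")
conn.row_factory = sqlite3.Row
conn.executescript("""
    CREATE TABLE benefit_applications (   -- an operational data set
        id INTEGER PRIMARY KEY,
        citizen_id TEXT,                  -- personal data
        address TEXT,                     -- personal data
        amount_eur REAL,                  -- not personal on its own
        decided_on TEXT
    );
    INSERT INTO benefit_applications VALUES
        (1, 'BSN-123', 'Main Street 1', 950.0, '2018-01-15'),
        (2, 'BSN-456', 'Canal Side 9', 780.0, '2018-02-02');
""")

def subject_access_request(citizen_id):
    """Right of access: return every record held about one person."""
    rows = conn.execute(
        "SELECT * FROM benefit_applications WHERE citizen_id = ?",
        (citizen_id,),
    ).fetchall()
    return [{key: row[key] for key in row.keys()} for row in rows]

def right_to_be_forgotten(citizen_id):
    """Right to erasure: remove the person's records, report how many."""
    deleted = conn.execute(
        "DELETE FROM benefit_applications WHERE citizen_id = ?",
        (citizen_id,),
    ).rowcount
    conn.commit()
    return deleted

print(subject_access_request("BSN-123"))  # answered with one query
print(right_to_be_forgotten("BSN-456"))   # handled with one query
```

Real systems are of course messier, but the shape of the solution stays the same: knowing in detail which data you hold, in a form that can be queried.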
Local governments inadvertently admit fence-tactic is failing
Governments already clearly, yet indirectly, admit that the fences don’t really work as a tactic.
Local governments have been loudly complaining for years about the feared costs of compliance, concerning both openness and privacy. Drilling down into those complaints reveals that the feared costs concern the time and effort involved in e.g. dealing with requests. Because there’s only a fence, and usually no processes or detailed knowledge of the data they hold, every request becomes an expedition for answers. If local governments had detailed insight into their data structures, data content, and the systems in use, the cost of compliance would be zero, or at least indistinguishable from the rest of operations. Dealing with a request would be nothing more than running a query against their systems.
Complaints about compliance costs are essentially an admission that governments do not have their house in order when it comes to data.
The interviews I did with various stakeholders as part of the evaluation of the PSI Directive confirm this: the biggest obstacle stakeholders perceive to being more open and to realising impact with open data is the low quality of information systems and processes. It blocks fully leveraging the affordances digitisation brings.
Towards mature data governance, by making inventory
Changing tactics is needed: doing away with the three fences and focusing on having detailed knowledge of their data. It means combining what are now separate and disconnected activities (information security, openness, archiving and personal data protection) into ‘everything by design’. Basically it means turning all you know about your data into metadata that becomes part of your data, so that it is easy to see which parts of a specific data set contain what type of person-related data, which data fields are public, which subset is business critical, which records have third party rights attached, or which records need to be deleted after a specific amount of time. Don’t man the fences, where every check is always extra work, but let the data itself tell exactly what is or isn’t possible, allowed, meant or needed. Getting there starts with making an inventory of what data a local or regional government currently holds, and describing that data in detailed operational, legal and technological terms.
Mature digital data governance: all aspects about the data are part of the data, allowing all processes and decisions access to all relevant material in determining what’s possible.
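As a minimal sketch of what ‘aspects about the data as part of the data’ could look like in practice (all field names and classifications below are invented for illustration, not a prescribed schema), consider field-level governance metadata that travels with a data set:

```python
# A minimal sketch with invented field names and classifications: every field
# in a data set carries its own governance metadata, so questions about
# openness, privacy, security and retention become simple lookups.
from dataclasses import dataclass
from typing import Optional

@dataclass
class FieldGovernance:
    name: str
    personal_data: bool             # GDPR relevance
    public: bool                    # may this field be openly published?
    business_critical: bool         # information security baseline relevance
    third_party_rights: bool        # e.g. licensed or contractor-owned content
    retention_years: Optional[int]  # archiving: delete after N years, None = keep

# a hypothetical data set on parking permits, described field by field
parking_permits = [
    FieldGovernance("permit_id", personal_data=False, public=True,
                    business_critical=True, third_party_rights=False,
                    retention_years=None),
    FieldGovernance("licence_plate", personal_data=True, public=False,
                    business_critical=True, third_party_rights=False,
                    retention_years=5),
    FieldGovernance("zone_code", personal_data=False, public=True,
                    business_critical=False, third_party_rights=False,
                    retention_years=None),
]

# governance questions then become one-liners instead of manual checks
publishable = [f.name for f in parking_permits if f.public and not f.personal_data]
gdpr_relevant = [f.name for f in parking_permits if f.personal_data]
print(publishable)    # ['permit_id', 'zone_code']
print(gdpr_relevant)  # ['licence_plate']
```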
2: Ways local government data inventories are useful
Inventories are a key first step in doing away with the ineffective fences and towards mature data governance. Inventories are also useful as an instrument for several other purposes.
Local is where you are, but not the data pro’s
There’s a clear reason why local governments don’t have their house in order when it comes to data.
Most of our lives are local. The streets we live on, the shopping center we frequent, the schools we attend, the spaces we park in, the quality of life in our neighbourhood, the parks we walk our dogs in, the public transport we use for our commutes. All those acts are local.
Local governments have a wide variety of tasks, reflecting the variety of our acts. They hold a corresponding variety of data, connected to all those different tasks. Yet local governments are not data professionals. Unlike singular-task, data-heavy national government bodies, like the Cadastre, the Meteo institute or the department for motor vehicles, local governments usually don’t have the capacity or capability. As a result local governments mostly don’t know their own data, and haven’t established effective processes that build on that data knowledge. Inventories are a first step. Inventories point to where contracts, procurement and collaboration lead to a loss of needed data sovereignty. Inventories also allow determining what, from a technology perspective, is a smooth transition path to the actively-open-by-design end game local governments envision.
Open data as a policy instrument
Where local governments want to use the data they have as a way to enable others to act differently or in support of policy goals, they need to know in detail which data they hold and what can be done with it. Using open data as policy instrument means creating new connections between stakeholders around a policy issue, by putting the data into play. To be able to see which data could be published to engage certain stakeholders it takes knowing what you have, what it contains, and in which shape you have it first.
Better articulated citizen demands for data
Making public a list of what you have is also important here, as it invites new demand for your data. It allows people to be aware of what data exists, and to contemplate whether they have a use case for it. If a data set hasn’t been published yet, its existence is still discoverable, so they can request it. It also enables local government to extend the data they publish based on actual demand, rather than on assumed demand or blindly. This increases the likelihood the data will be used, and increases the socio-economic impact.
Emerging data
More and more new data is emerging, from sensor networks in public and private spaces. This way new stakeholders and citizens are becoming agents in the public space, where they meet up with local governments. New relationships, and new choices, result. For instance the sensor in my garden measuring temperature and humidity is part of the citizen-initiated Measure your city network, but also an element in the local government’s climate change adaptation policies. For local governments as regulators, as guardians of public space, as data collectors, and as sources of transparency, this is a rebalancing of their position. It again takes knowing what data you own, and how it relates to and complements what others collect and own. Only then is a local government able to weave a network with those stakeholders that connects data into valuable agency for all involved. (We’ve built a guidance tool, in Dutch, for the role of local government with regard to sensors in public spaces.)
Having detailed data inventories is a way for local governments to start having the right conversations on all these points.
3: Getting to inventories
To create useful and detailed inventories, as I and my colleagues did for half a dozen local governments, some elements are key in my view. We looked at structured data collections only, and disregarded the thousands of individual one-off spreadsheets. They are not irrelevant, but they make it hard to see the wood for the trees. We then scored all those data sets on up to 80(!) different facets, concerning policy domain, internal usage, current availability, technical details, legal aspects, concerns, etc. A key element in doing that is not making any assumptions:
don’t assume your list of applications will tell you what data you have. Not all your listed apps will be used, others won’t be on the list, and none of it tells you in detail what data is actually processed in them; at best it’s a generic pointer
don’t assume information management knows it all, as shadow information processes will exist outside of their view
don’t assume people can tell you how they do their work when you ask them, as their description and rationalisation of their acts will not match up with reality; let them also show you
don’t assume people know the details of the data they work with, sit down with them and look at it together
don’t assume what it says on the tin is correct, as you’ll find things that don’t belong there (we’ve e.g. found domestic abuse data in a data set on litter in public spaces)
Doing an inventory well means
diving deeply into which applications are actually used,
talking to every unit in the organisation about their actual work and seeing it being done,
looking closely at data structures and real data content,
looking closely at current metadata and its quality
separately looking at large projects and programs as they tend to have their own information systems,
going through external communications as it may refer to internally held data not listed elsewhere,
looking at (procurement and collaboration) contracts to determine what claims others might have on data,
and then cross-referencing it all, and bringing it together in one giant list, scored on up to 80 facets.
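To give an impression of what one entry in such a cross-referenced list could look like, here is a much-reduced sketch: the names are invented and only a handful of the roughly 80 facets are shown, so treat it as an illustration of the structure, not the actual scoring sheet.

```python
# A much-reduced sketch of one record in the final cross-referenced inventory:
# invented names, and only a handful of the roughly 80 facets we score.
from dataclasses import dataclass, asdict

@dataclass
class DataSetInventoryRecord:
    title: str
    policy_domain: str           # e.g. 'public space', 'social affairs'
    owning_unit: str
    application: str             # the system actually used, not the one on the official list
    contains_personal_data: bool
    could_be_public: bool
    currently_published: bool
    third_party_claims: str      # found in procurement / collaboration contracts
    data_format: str
    update_frequency: str

record = DataSetInventoryRecord(
    title="Litter reports in public space",
    policy_domain="public space",
    owning_unit="City maintenance",
    application="FixIt reporting tool",          # hypothetical system name
    contains_personal_data=True,                 # e.g. reporter contact details
    could_be_public=True,                        # after stripping personal fields
    currently_published=False,
    third_party_claims="none found in contracts",
    data_format="relational database",
    update_frequency="daily",
)

print(asdict(record))  # one row in the inventory, queryable and comparable
```

Describing every structured data set in the same way is what makes the inventory something you can query and compare, rather than a one-off report.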
Another essential part, especially to ensure the resulting inventory will be used as an instrument, is ensuring from the start the involvement and buy-in of the various parts of local government that are usually islands (IT, IM, legal, policy departments, archivists, domain experts, data experts), so that the inventory becomes something of which a variety of detailed questions can be asked.
We’ve followed various paths to do inventories: sometimes on our own as an external team, sometimes in close cooperation with a client team, sometimes as a guide for a client team while their operational colleagues do the actual work. All three yield very useful results, but there’s a balance to strike between consistency and accuracy, the amount of feasible buy-in, and the way the hand-over is planned, so that the inventory becomes an instrument in future data discussions.
What comes out as raw numbers is often counter-intuitive to local government itself. Some 98% of the data typically held by Dutch provinces can be public, although usually only some 20% is made public (15% as open data, usually geo-data). At local level the numbers are a bit different, as local governments hold much more person-related data (concerning social benefits for instance, chronic care, and the persons register). About 67% of local data could be public, but usually only some 5% is. This means there’s still a huge gap between what can be open and what is actually open. That gap is basically invisible if a local government deploys the three fences, and as a consequence they run on assumptions and overestimate the amount of data that needs the heaviest protection. The gap becomes visible by looking in depth at the data on all pertinent aspects, by doing an inventory.
(Interested in doing an inventory of the data your organisations holds? Do get in touch.)