Granularity (photo by Emily, license: CC-BY-NC)

A client, having previously focused on increasing the volume of open data it provides, is now looking to improve data quality. One element of this is increasing the level of detail of the data already published. They asked for input on how one can approach and define granularity. I formulated some thoughts for them as input, which I am now posting here as well.

Data granularity in general is the level of detail a data set provides. This granularity can be thought of in two dimensions:
a) whether a combination of data elements in the set is presented in one field or split out into multiple fields: atomisation
b) the relative level of detail the data in a set represents: resolution

On Atomisation
Improving this type of granularity can be done by looking at the structure of a data set itself. Are there fields within a data set that can be reliably separated into two or more fields? Common examples are separating first and last names, zipcodes and cities, streets and house numbers, organisations and departments, or keyword collections (tags, themes) into single keywords. This allows for more sophisticated queries on the data, as well as more ways it can potentially be related to or combined with other data sets.
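To make this concrete, here is a minimal Python sketch of atomisation; the record layout and field names are hypothetical, and real data would of course need checks that combined fields actually follow the assumed format before splitting:

```python
# Minimal sketch of atomisation: splitting combined fields into atomic ones.
# The record layout and field names are made-up examples.
record = {"name": "Jane de Vries", "address": "Stationsstraat 12, 3511 EB Utrecht"}

street_part, city_part = record["address"].split(", ")
street, house_number = street_part.rsplit(" ", 1)
zipcode, city = city_part[:7], city_part[8:]
first_name, last_name = record["name"].split(" ", 1)

atomised = {
    "first_name": first_name,
    "last_name": last_name,
    "street": street,
    "house_number": house_number,
    "zipcode": zipcode,
    "city": city,
}
# Each atomic field can now be queried or joined on individually,
# e.g. grouping by zipcode, or matching streets against another data set.
```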

For currently published data sets, this type of granularity can be improved by looking at the existing data structure directly, or by asking the provider whether they combined any fields into a single field when they created the data set for publication.

This type of granularity increase changes the structure of the data but not the data itself. It improves the usability of the data, without improving the use value of the data. The data in terms of information content stays the same, but does become easier to work with.

On Resolution
Resolution can have multiple components, such as frequency of renewal, time frames represented, geographic resolution, or splitting categories into sub-categories or multilevel taxonomies. An example is how one can publish the average daily temperature in a region. Let's assume it is currently published monthly, with one value per day. The resolution of such a value can be increased in multiple ways: publish the average daily temperature daily, not monthly; split the average daily temperature for the region into an average per sensor in that region (geographic resolution); split the single sensor average into hourly or even more frequent actual readings. The highest resolution would be publishing individual sensor readings continuously, in real time.
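As a small illustration of these resolution levels, here is a sketch using pandas; the readings, sensor ids and frequencies are invented for the example:

```python
# Sketch of publishing the same sensor data at different resolutions.
import pandas as pd

# Hypothetical 15-minute temperature readings from two sensors in one region.
readings = pd.DataFrame(
    {
        "timestamp": pd.date_range("2018-06-01", periods=96, freq="15min"),
        "sensor_id": ["s1", "s2"] * 48,
        "temperature": 18.0,  # dummy values
    }
)
ts = readings.set_index("timestamp")

# Lowest resolution: one average per day for the whole region.
daily_region = ts["temperature"].resample("D").mean()

# Higher geographic resolution: a daily average per sensor.
daily_per_sensor = ts.groupby("sensor_id")["temperature"].resample("D").mean()

# Higher temporal resolution: hourly averages per sensor.
hourly_per_sensor = ts.groupby("sensor_id")["temperature"].resample("h").mean()

# The highest resolution would be publishing `readings` itself, in real time.
```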

Improving resolution can only be done in collaboration with the holder of the actual source of the data. What level of improvement can be attained is determined by:

  1. The level of granularity and frequency at which the data is currently collected by the data holder.
  2. The level of granularity or aggregation at which the data is used by the data holder for their public tasks.
  3. The level of granularity or aggregation at which the data meets professional standards.

Item 1 provides an absolute limit to what can be done: what isn't collected cannot be published. Usually, however, data is not used internally in the exact form it was collected either. In terms of access to information, the practical limit to what can be published is usually the way the data is available internally for the data holder's public tasks; internal systems and IT choices are usually shaped accordingly. Generally data holders can reliably provide data at the level of item 2, because that is what they work with themselves.

However, there are reasons why data sometimes cannot be publicly provided the same way it is available to the data holder internally. These can be reasons of privacy or common professional standards. For instance energy companies have data on energy usage per household, but in the Netherlands such data is aggregated to groups of at least 10 households before publication because of privacy concerns. National statistics agencies comply with international standards concerning how data is published for external use. Census data for instance will never be published in the way it was collected, but only at various levels of aggregation.
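As a sketch of what such privacy-driven aggregation can look like in practice (the column names, numbers and threshold handling are illustrative assumptions, not the energy companies' actual method):

```python
# Sketch: only publish energy usage aggregated over groups of at least
# K households, suppressing smaller groups. Purely illustrative.
import pandas as pd

K = 10  # minimum group size before publication

# Hypothetical per-household yearly usage in two neighbourhoods.
usage = pd.DataFrame(
    {
        "neighbourhood": ["A"] * 12 + ["B"] * 4,
        "household_id": range(16),
        "kwh_year": [3000 + 10 * i for i in range(16)],
    }
)

grouped = usage.groupby("neighbourhood").agg(
    households=("household_id", "count"),
    avg_kwh_year=("kwh_year", "mean"),
)

# Withhold groups smaller than K instead of publishing them.
publishable = grouped[grouped["households"] >= K]
print(publishable)  # neighbourhood B (only 4 households) is suppressed
```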

Discussions on the desired level of resolution need to take place in collaboration with potential re-users of the data, not just the data holders. At what point does data become useful for different or novel types of usage? When does it meet needs adequately?

Together with data holders and potential data re-users the balance needs to be struck between re-use value and considerations of e.g. privacy and professional standards.

This type of granularity increase changes the content of the data. It improves the usage value of the data as it allows new types of queries on the data, and enables more nuanced contextualisation in combination with other datasets.

Last week I presented to a provincial procurement team about how to better support open data efforts. Below is what I presented and discussed.

Open data as policy instrument and the legal framework demand better procurement

Publishing open data creates new activity. It does so in two ways: it allows existing stakeholders to do more themselves or do things differently, and it allows people who could not participate before to become active as well. We've seen for instance how opening up provincial and national geographic data increases the independent usage of that data by local governments. We've also seen how, for instance, the Dutch hiking association started using national geographic data to create and better document routes. To the surprise of the Cadastre a whole new area of usage appeared as well, by cultural organisations that had never requested such data before. So open data is an enabler for agency.

If as a government data holder you know this effect takes place, you can also try to achieve it deliberately. For policy domains and groups of stakeholders where you would like to see more activity, publishing data then becomes an instrument for achieving your own policy goals. Next to regulation and financing, publishing open data is a new, third policy instrument. It also happens to be the cheapest of the three to deploy.

Open data in the EU has a legal framework in which, over time, more things are mandated. There is a right to re-use. Upon request data holders must be able to provide machine-readable data formats. In the Netherlands open standards have been compulsory for government entities since 2008. Exclusive access to government data for re-use is, except for a few very strictly regulated situations, illegal.

To be able to comply with the legal framework, and to actively use open data as a policy instrument, public sector bodies must pay more attention to how they acquire data, and as a consequence to what happens during procurement processes. If they don't, the government entity's data sovereignty is strongly diminished, which carries costs.

Procurement awareness needed on multiple levels

The goal is to ensure full data sovereignty. This means paying real attention to various things on different levels of abstraction around procurement.

  • Ensure data is received in open standards and the usual domain-specific standards
  • Ensure that when reports are received, the underlying data (e.g. for graphs and tables) is also received
  • Ensure that when information products are received (maps, visualisations), the data used for them is also received
  • Ensure procurement and collaboration contracts do not preclude sharing data with third parties, apart from on grounds already mentioned as exceptions in the law on freedom of information and re-use
  • Ensure that when raw data is provided to service providers, that data remains available to the government entity
  • Ensure that when data is collected by external entities who in turn outsource the collection, all parties involved know the data falls under the decision-making power of the government entity
  • Ensure that in collaborations you do not sign away decision power over the data you contribute, that you have rights to the data you collectively create, and that there are as few restrictions as possible on the data others contribute.

What could go wrong?

Unless you always pay attention to these points, you run the risk of losing your data sovereignty. This can lead to situations where a government entity is no longer able to comply with its own legal obligations concerning data provision and transparency.

A few existing examples of what can go wrong:

  • A province is counting bicycle traffic through a network of sensors it deployed itself. The data is transmitted directly to a service provider in a different country. The province can see dashboards and download reports, but has no access to the sensor data itself and cannot download it. So while a citizen requesting the data could not be provided with it, the service provider does base commercial services on that and other data it receives, giving it de facto exclusive access.
  • Another province outsources bird inventory counts to nature preservation organisations, who in turn rely on volunteers to do the bird watching. The province pays for the effort. When it comes to sharing the data publicly, the nature preservation organisations say their volunteers actually own the data, so nothing can be publicly shared. This is untrue for multiple reasons (database rights do not apply, and it is a paid-for effort, so the procurement terms unequivocally transfer any such rights, should they exist, to the province), but as the province doesn't want to waste time on this, nor wants to get into a fight, it leaves it be, resulting in the data not being made available.
  • An energy network provider pools many different data sources concerning energy usage in its service area from a network of collaborating entities, both private and public. It also publishes a lot of open data already. As part of the national effort towards energy transition it receives many data requests from local governments, housing associations and other entities. It would like to provide data, as it sees this as a way of contributing to an essential public task (energy transition), but still says no to data requests in 60% of all cases, because it can't figure out which contractual obligations apply to which parts of the data, or cannot reconcile conflicting or ambiguous contract clauses concerning the data.
  • All provinces pool data concerning economic activity and the labour market in a private foundation in which private entities also participate. That foundation sells data subscriptions. Currently it also publishes some open data, but if any of the provinces would like to do more, they would have to wait for full agreement. The slowest in the group would determine the actual level of transparency.
  • A province has outsourced the creation of a 'heat transition atlas', mapping the potential for moving homes away from natural-gas heating using various alternatives. The resulting interactive website contains different data layers, but those data layers are themselves unavailable. Although there is a general list of which data sources have been used, the site does not precisely state its sources, nor provide details on how the data has been transformed for the website.

In all cases the public sector data holder has put itself in a position that could have been prevented had it paid more attention at the time of procurement or when entering into the collaboration. All these situations can be fixed later on, but that requires additional effort, time and cost, which would have been unnecessary if dealt with during procurement.

But we have procurement regulations already!

What about procurement regulations? We have those, so don't they cover all this? Mostly not, it turns out.

  • Terms of procurement talk about rights transfer for all deliverables, but in many cases the data involved isn't listed as a deliverable, so it is not covered by those terms.
  • The terms talk about transfer of database rights, but those hardly ever apply, as usually the scale of data collection and of structuring it into a database is limited.
  • Concerning research there is some mention of also transferring the data concerned, but many reports aren't research but consultancy services.

In the general regulations that apply to provincial procurement, the word 'data' is only used in the context of personal data protection, as the Dutch plural of 'date', and in the context of data carriers (hard drives etc.). The word 'standards' never occurs, nor are there references to data formats (even though legal obligations exist for government entities concerning both).

The procurement terms are neither broad enough, nor detailed enough.

How to improve the situation

So what needs to be put in place to ensure government entities arrange their data needs correctly during procurement? How to plug the holes? A few things at the very least:

  • Likely, when it comes to standards and formats (which may differ per domain), the only viable place is in the mandatory technical requirements in a call for tender / request for proposals.
  • To get the data behind graphs, tables, info products and reports, including a list of resources and transformations applied, it needs to be specified in the list of deliverables.
  • Collaboration contracts entered into should always have articles on sharing the data you contribute, being able to share the data resulting from the collaboration, and rules about data that others contribute.

It is important to realise that you cannot contract away any mandatory transparency, open data, or data governance obligations. Any resulting issues will mean time-consuming and likely costly repair activities.

Who needs to be involved

In order to prevent the costs of repair or mitigation of consequences, there are a number of questions concerning who should be doing what, inside a government entity.

  • What needs to be arranged at the point of tender, and who will check it?
  • What needs to be part of every project start (e.g. checklists, data paragraphs), is the project manager aware of this, and who will check it?
  • Who will check the data aspects when a contract is written and signed?
  • Who will check at the time of delivery whether the data requirements are met?
  • What part of this is a matter of awareness and operations, and what needs to be done through regulation?

Our work in the next steps

We intend to assist the province involved in making sure procurement better enables data sharing from now on. Steps we are currently taking to move this forward are:

  • We’ve put data sovereignty into the organisation’s strategy document, and tied it into the overall improvement of data governance.
  • With the information management department we’ll visit all main procurers to discuss and propose actions
  • We’ll likely build one or more checklists for different aspects
  • We’ll work with a three-person team from the procurement department to more deeply embed data awareness and amend procurement processes

All this is basically a preventative step to ensure the province has its house in order concerning data.

Today I was at a session at the Ministry for Interior Affairs in The Hague on the GDPR, organised by the center of expertise on open government.
It made me realise how I actually approach the GDPR, and how I see all the overblown reactions to it, like sending all of us a heap of mail to re-request consent where none is needed, or even taking your website or personal blog offline. I find I approach the GDPR like I approach a quality assurance (QA) system.

One key change with the GDPR is that organisations can now be audited on their preventive data protection measures, which of course already mimics QA. (Beyond that the GDPR is mostly an incremental change to the previous law, except that the people described by your data now have articulated rights that apply globally, and the law has a new set of teeth in the form of substantial penalties.)


My colleague Paul facilitated the session and showed this mindmap of GDPR aspects. I think it misses the more future-oriented parts.

The session today had three brief presentations.

In one, a student showed some results from his thesis research on the implementation of the GDPR, for which he had spoken with many data protection officers (DPOs). These are mandatory roles for all public sector bodies, and also for some specific types of data processing companies. One of the surprising outcomes is that some of these DPOs saw themselves, and were seen as, ‘outposts’ of the data protection authority, in other words as enforcers or even potentially as moles. This is not conducive to a DPO fulfilling the part of their role that is about raising awareness of and sensitivity to data protection issues. It strongly reminded me of when, 20 years ago, I was involved in creating a QA system from scratch for my then employer. Some of my colleagues saw the role of the quality assurance manager as policing their work. It took effort to show that we were not building a straightjacket around them to keep them within strict boundaries, but providing a solid skeleton to grow on and move faster, and that audits are not hunts for breaches of compliance but a way to make emergent changes in the way people work visible, and to incorporate the professionally justified ones into that skeleton.

In another presentation a civil servant described the Ministry’s work on creating a register of all person-related data being processed. What stood out most for me was the (rightly) pragmatic approach they took in describing current practices and data collections inside the organisation. This is a key element of QA as well: you work from descriptions of what actually happens, not of what ’should’ or ‘ideally’ happens. Once that practice is described and agreed, it can be audited.
Of course in the case of the Ministry it helps that it only has tasks mandated by law, so the grounds for processing are clear by default, and where they are not, the data should not be collected. This reduces the range of potential grey areas. Similarly for security measures: the Ministry already needs to adhere to national security guidelines (the national baseline information security), which helps avoid inventing new measures, proves compliance, and provides an auditable security requirement to go with it. This no doubt helped them take that pragmatic approach. Pragmatism is at the core of QA: it takes its cues from what is really happening in the organisation, from what the professionals are really doing.

A third presentation dealt with open standards, for both processes and technologies, by the national Forum for Standardisation. Since 2008 a growing list, currently of some 40 standards, is mandatory for Dutch public sector bodies. In this list you find a range of elements that are ready-made to help with GDPR compliance, in terms of support for the rights of those described by the data (such as the right to export and portability), preventive technological security measures, and ‘by design’ data protection measures. Some of these are ISO norms themselves, or, like the mentioned national baseline information security, compliant derivatives of such ISO norms.

These elements, the ‘police’ versus ‘counsel’ perspective on the role of a DPO, the pragmatism that needs to underpin actions, and the building blocks readily found elsewhere in your own practice that are already based on QA principles, made me realise and better articulate how I’ve been viewing the GDPR all along: as a quality assurance system for data protection.

With a quality assurance system you can still famously produce concrete swimming vests, but at least it will be done consistently. Likewise, with the GDPR you will still be able to do all kinds of things with data. Big data and developing machine learning systems are hard but hopefully worthwhile things to do. With the GDPR they will just be hard in a slightly different way, and they will also be helped by establishing some baselines and testing core assumptions, while making your purposes and ways of working available for scrutiny. Introducing QA does not change the way an organisation works, unless it really doesn’t have its house in order. Likewise, the GDPR won’t change your organisation much if you have your house in order.

From the QA perspective on the GDPR, it is perfectly clear why it has a moving baseline (through its ‘by design’ and ‘state of the art’ requirements). From the QA perspective on the GDPR, it is perfectly clear what the connection is to how Europe is positioning itself geopolitically in the race concerning AI. The policing perspective, after all, only leads to a luddite stance on AI, which is not what the EU is doing, far from it. From that it is clear how the legislator intends the thrust of the GDPR: as QA, really.

Funny how #datagovernance companies publishing #gdpr compliance guides aren’t compliant themselves when asking for personal data for downloads: no explicit opt-ins, hidden opt-ins (such as hitting download also subscribing you to their newsletter), no specific explanations of what data will be used how, and asking for more personal information than necessary.

This is the presentation I gave at the Open Belgium 2018 Conference in Louvain-la-Neuve this week, titled ‘The role and value of data inventories, a key step towards mature data governance’. The slides are embedded further below, and as PDF download at grnl.eu/in. It’s a long read (some 3000 words), so I’ll start with a summary.

Summary, TL;DR

The quality of information management in local governments is often lacking.
Things like security, openness and privacy are safeguarded by putting a separate fence for each around the organisation, but those safeguards lack detailed insight into data structures and the effective processes to go with it. As archiving, security, openness and privacy in a digitised environment are basically inseparable, doing ‘everything by design’ is the only option, and the only effective way to do that is at the level of the data itself. Fences are inefficient and ineffective, and the GDPR, through its obligations, will show how the privacy fence fails, forcing organisations to act. Doing data governance only for privacy is senseless; doing it for openness, security and archiving at the same time is logical. Having good, detailed inventories of your data holdings is a useful instrument to start asking the hard questions and having meaningful conversations. It additionally allows local government to deploy open or shared data as a policy instrument, and releasing the inventory itself will help articulate civic demand for data. We’ve done a range of these inventories with local governments.

Data Inventories for Local Data Governance by Ton Zijlstra

1: High time for mature data governance in local and regional government

High time! (clock in Louvain-la-Neuve)
Digitisation changes how we look at things like openness, privacy, security and archiving, as it creates new affordances now that content and its medium have become decoupled. It creates new forms of usage, and new needs to manage those. As a result, archivists for example find they now need to be involved at the very start of digital information processes, whereas earlier their work would basically start when the boxes of papers were delivered to them.

The reality is that local and regional governments have barely begun to fully embrace and leverage the affordances that digitisation provides them with. It shows in how most of them deal with information security, openness and privacy: by building three fences.

Security is mostly interpreted as keeping other people out, so a fence is put between the organisation and the outside world; inside it, not much is changed. Similarly a second fence is put in place to determine openness: what is open can reach the outside world, and the fence does the filtering. Finally privacy is also dealt with by a fence, either around the entire organisation or around a specific system, keeping unwanted eyes out. All fences are a barrier between outside and inside, and within the organisation usually no further measures are taken. All three fences exist separately from each other, as stand-alone fixes for their singular purposes.

The first fence: security
In the Netherlands a ‘baseline information security’ standard applies to local governments, and it determines what information should be regarded as business critical. Something is business critical if its downtime would stop public service delivery, or if its lack of quality has immediate negative consequences for decision making (e.g. decisions on benefits impacting citizens). Uptime and downtime are mostly about IT infrastructure, dependencies and service level agreements, and those fit the fence tactic quite well. Quality in the context of security is about ensuring data is tamper-free, doing audits and input checks, and knowing your sources. That requires a data-centric approach, and it doesn’t fit the fence-around-the-organisation tactic.
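As a sketch of what such a data-centric quality measure could look like (hashing records at ingest and verifying them before use is my illustration here, not something prescribed by the baseline standard):

```python
# Sketch of a data-centric tamper check: store a hash per record when it
# enters the system, verify it before the record feeds a decision.
import hashlib
import json

def record_hash(record: dict) -> str:
    # Serialise deterministically so the same content always hashes the same.
    return hashlib.sha256(json.dumps(record, sort_keys=True).encode()).hexdigest()

record = {"id": 1, "benefit_eur": 432.10}
stored_hash = record_hash(record)  # kept at ingest, e.g. in an audit table

# Later, before using the record for a decision on benefits:
assert record_hash(record) == stored_hash, "record was modified since ingest"
```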


The second fence: openness
Openness of local government information is mostly on request, or at best a process separate from regular operational routines. Yet the stated end game is that everything should be actively open by design, meaning everything that can be made public is published the moment it is publishable. We also see that open data is becoming infrastructure in some domains. The implementation of the digitisation of the law on public spaces requires all involved stakeholders to have the same (access to) information. Many public sector bodies, both local ones and central ones like the cadastral office, have concluded that doing this through open data is the most viable way. For both the desired end game and for open data as infrastructure, the fence tactic is however very inefficient.
At the same time the data sovereignty of local governments is under threat, as they increasingly collaborate in networks or outsource parts of their processes. In most contracts no attention is paid to data, other than in generic terms in the general procurement conditions. We’ve come across a variety of examples where this results in 1) governments not being able to provide data to citizens, even though by law they should be able to, 2) governments not being able to access their own data, only the resulting graphs and reports, or 3) the slowest partner in a network determining the speed of disclosure. In short, the fence tactic is also ineffective. A more data-centric approach is needed.

The third fence: personal data protection
Mostly privacy is dealt with by identifying privacy-sensitive material (but not what, where and when), and locking it down behind the third fence. The new EU privacy regulation, the GDPR, which will be enforced from May this year, is seen as a source of uncertainty by local governments. It is also being responded to in the accustomed way: reinforcing the fence by making a ‘better’ list of what personal data is used within the organisation, while still not paying much attention to processes, nor to the shape and form of the personal data.
However, in the case of the GDPR, if it is indeed really enforced, this will not be enough.

GDPR an opportunity for ‘everything by design’
The GDPR confers rights on the people described by data, like the right to review, to portability, and to be forgotten. It also demands that compliance is done ‘by design’ and is ‘state of the art’. This can only be done by design if you are able to turn the rights of the GDPR into queries on your data, and have (automated) processes in place to deal with requests. It cannot be done with a ‘better’ fence. In the case of the GDPR, the first data-related law that takes the affordances of digitisation as a given, the fence tactic is set to fail spectacularly. This makes the GDPR a great opportunity to move to a data focus, not just for privacy by design, but to do openness, archiving and information security (in terms of quality) by design at the same time, as they are converging aspects of the same thing and can no longer be meaningfully separated. Detailed knowledge about your data structures is then needed.
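To illustrate what ‘rights as queries’ could mean, here is a minimal sketch; the table layout and the idea of tagging each record with the subject it describes are my assumptions, and a real implementation would span many systems:

```python
# Minimal sketch of turning GDPR rights into queries, assuming every stored
# record is tagged with the person (subject) it describes.
import sqlite3

con = sqlite3.connect(":memory:")
con.execute("CREATE TABLE records (subject_id TEXT, field TEXT, value TEXT)")
con.execute("INSERT INTO records VALUES ('p42', 'address', 'Stationsstraat 12')")

# Right of access / portability: export everything held on one person.
export = con.execute(
    "SELECT field, value FROM records WHERE subject_id = ?", ("p42",)
).fetchall()

# Right to be forgotten: delete everything held on one person.
con.execute("DELETE FROM records WHERE subject_id = ?", ("p42",))
con.commit()
```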

Local governments inadvertently admit the fence tactic is failing
Governments already clearly, yet indirectly, admit that the fences don’t really work as a tactic.
Local governments have been loudly complaining for years about the feared costs of compliance, concerning both openness and privacy. Drilling down into those complaints reveals that the feared costs concern the time and effort involved in e.g. dealing with requests. Because there’s only a fence, and usually no processes or detailed knowledge of the data they hold, every request becomes an expedition for answers. If local governments had detailed insight into their data structures, data content, and systems in use, the cost of compliance would be zero, or at least indistinguishable from the rest of operations: dealing with a request would be nothing more than running a query against their systems.

Complaints about compliance costs are essentially an admission that governments do not have their house in order when it comes to data.
The interviews I did with various stakeholders as part of the evaluation of the PSI Directive confirm this: the biggest obstacle stakeholders perceive to being more open and to realising impact with open data is the low quality of information systems and processes. It blocks fully leveraging the affordances digitisation brings.

Towards mature data governance, by making an inventory
Changing tactics is needed: doing away with the three fences, focusing on having detailed knowledge of the data, and combining what are now separate and disconnected activities (information security, openness, archiving and personal data protection) into ‘everything by design’. Basically it means turning all you know about your data into metadata that becomes part of your data, so that it is easy to see which parts of a specific data set contain what type of person-related data, which data fields are public, which subset is business critical, which records have third-party rights attached, or which records need to be deleted after a specific amount of time. Don’t man the fences, where every check is always extra work, but let the data itself tell exactly what is or isn’t possible, allowed, meant or needed. Getting there starts with making an inventory of what data a local or regional government currently holds, and describing it in detailed operational, legal and technological terms.

Mature digital data governance: all aspects about the data are part of the data, allowing all processes and decisions access to all relevant material in determining what’s possible.
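A minimal sketch of what metadata as part of the data could look like; the schema below is an illustrative assumption, not an existing standard:

```python
# Field-level metadata travelling with the data, so openness, privacy,
# security and archiving can be decided per field rather than per fence.
from dataclasses import dataclass
from typing import Optional

@dataclass
class FieldMeta:
    name: str
    personal_data: bool             # relevant under the GDPR?
    public: bool                    # publishable as open data?
    business_critical: bool         # falls under the security baseline?
    retention_years: Optional[int]  # archiving term; None = keep indefinitely
    third_party_rights: bool        # do others hold claims on this field?

# Hypothetical description of a data set on social benefits.
benefit_fields = [
    FieldMeta("citizen_id", True, False, True, 7, False),
    FieldMeta("benefit_amount", True, False, True, 7, False),
    FieldMeta("neighbourhood", False, True, False, None, False),
]

# The data can now 'tell' what is possible, e.g. which fields may be published:
publishable = [f.name for f in benefit_fields if f.public and not f.personal_data]
```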

2: Ways local government data inventories are useful

Inventories are a key first step in doing away with the ineffective fences and towards mature data governance. Inventories are also useful as an instrument for several other purposes.

Local is where you are, but not the data pros
There’s a clear reason why local governments don’t have their house in order when it comes to data.
Most of our lives are local. The streets we live on, the shopping center we frequent, the schools we attend, the spaces we park in, the quality of life in our neighbourhood, the parks we walk our dogs in, the public transport we use for our commutes. All those acts are local.
Local governments have a wide variety of tasks, reflecting the variety of our acts. They hold a corresponding variety of data, connected to all those different tasks. Yet local governments are not data professionals. Unlike singular-task, data-heavy national government bodies, like the Cadastre, the Meteo institute or the department for motor vehicles, local governments usually don’t have the capacity or capability. As a result local governments mostly don’t know their own data, and haven’t established effective processes that build on such data knowledge. Inventories are a first step. Inventories point to where contracts, procurement and collaboration lead to loss of needed data sovereignty. They also allow determining what, from a technology perspective, is a smooth transition path to the actively open by design end-game local governments envision.

Open data as a policy instrument
Where local governments want to use the data they have as a way to enable others to act differently or in support of policy goals, they need to know in detail which data they hold and what can be done with it. Using open data as a policy instrument means creating new connections between stakeholders around a policy issue, by putting the data into play. Seeing which data could be published to engage certain stakeholders first takes knowing what you have, what it contains, and in what shape you have it.

Better articulated citizen demands for data
Making public a list of what you have is also important here, as it invites new demand for your data. It allows people to be aware of what data exists and to contemplate whether they have a use case for it. Even if a data set hasn’t been published yet, its existence is discoverable, so they can request it. It also enables local government to extend the data they publish based on actual demand, not on assumed demand or blindly. This increases the likelihood the data will be used, and increases its socio-economic impact.

Emerging data
More and more new data is emerging, from sensor networks in public and private spaces. This way new stakeholders and citizens are becoming agents in the public space, where they meet up with local governments. New relationships, and new choices, result. For instance the sensor in my garden measuring temperature and humidity is part of the citizen-initiated Measure your city network, but also an element in the local government’s climate change adaptation policies. For local governments as regulators, as guardians of public space, as data collectors, and as sources of transparency, this is a rebalancing of their position. It again takes knowing what data you own and how it relates to and complements what others collect and own. Only then is a local government able to weave a network with those stakeholders that connects data into valuable agency for all involved. (We’ve built a guidance tool, in Dutch, for the role of local government with regard to sensors in public spaces)

Having detailed data inventories is a way for local governments to start having the right conversations on all these points.

3: Getting to inventories

To create useful and detailed inventories, as my colleagues and I did for half a dozen local governments, some elements are key in my view. We looked at structured data collections only, disregarding the thousands of individual one-off spreadsheets: they are not irrelevant, but they make it hard to see the wood for the trees. We then scored all those data sets on up to 80(!) different facets, concerning policy domain, internal usage, current availability, technical details, legal aspects, concerns, etc. A key element in doing that is not making any assumptions:

  • don’t assume your list of applications will tell you what data you have: not all listed apps will be in use, others won’t be on the list, and none of it tells you in detail what data is actually processed in them, only a generic pointer
  • don’t assume information management knows it all, as shadow information processes will exist outside of their view
  • don’t assume people know when you ask them how they do their work, as their description and rationalisation of their acts will not match up with reality;
    let them also show you
  • don’t assume people know the details of the data they work with; sit down with them and look at it together
  • don’t assume what it says on the tin is correct, as you’ll find things that don’t belong there (we’ve e.g. found domestic abuse data in a data set on litter in public spaces)

Doing an inventory well means:

  • diving deeply into which applications are actually used,
  • talking to every unit in the organisation about their actual work and seeing it being done,
  • looking closely at data structures and real data content,
  • looking closely at current metadata and its quality
  • separately looking at large projects and programs as they tend to have their own information systems,
  • going through external communications as it may refer to internally held data not listed elsewhere,
  • looking at (procurement and collaboration) contracts to determine what claims others might have on the data,
  • and then cross-referencing it all, and bringing it together in one giant list, scored on up to 80 facets.
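As an illustration, one entry in such an inventory might look like the sketch below; the facet names and values are invented examples, a small fraction of the up to 80 facets we actually scored:

```python
# Illustrative sketch of a single facet-scored inventory entry.
dataset_entry = {
    "name": "litter reports in public spaces",
    "policy_domain": "public space maintenance",
    "internal_usage": ["planning", "contract monitoring"],
    "currently_available": "internal only",
    "format": "csv export from case system",
    "contains_personal_data": True,   # found during content inspection
    "third_party_claims": False,      # checked against contracts
    "can_be_public": "after removing reporter details",
}
# Cross-referencing hundreds of such entries is what surfaces the gap
# between what can be open and what is actually open.
```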

Another essential part, especially to ensure the resulting inventory will be used as an instrument, is ensuring from the start the involvement and buy-in of the various parts of local government that usually are islands (IT, IM, legal, policy departments, archivists, domain experts, data experts), so that the inventory becomes something of which a variety of detailed questions can be asked.

Bring the islands together. (photo Dmitry Teslya, CC-BY)

We’ve followed various paths to doing inventories: sometimes on our own as an external team, sometimes in close cooperation with a client team, sometimes as a guide for a client team whose operational colleagues do the actual work. All three yield very useful results, but there’s a balance to strike between consistency and accuracy, the amount of feasible buy-in, and the way the hand-over is planned, so that the inventory becomes an instrument in future data discussions.

The raw numbers that come out are often themselves counter-intuitive to local governments. Some 98% of the data typically held by Dutch provinces can be public, although usually only some 20% is made public (15% as open data, mostly geo-data). At the local level the numbers are different, as local governments hold much more person-related data (concerning social benefits, chronic care, and the persons register, for instance). About 67% of local data could be public, but usually only some 5% is. This means there’s still a huge gap between what can be open and what is actually open. That gap is basically invisible to a local government that deploys the three fences; as a consequence they run on assumptions and overestimate the amount of data that needs the heaviest protection. The gap becomes visible by looking in depth at the data on all pertinent aspects, by doing an inventory.

(Interested in doing an inventory of the data your organisation holds? Do get in touch.)