Today, at the Open Belgium 2019 conference, I gave a brief presentation of the framework for measuring open data impact that I created for UNDP Serbia last year.
The framework is meant to be relatable and usable by individual organisations on their own, and is based on how existing cases, papers and research have tried to establish such impact.
Here are the slides.
Measuring open data impact by Ton Zijlstra, at Open Belgium 2019
This is the full transcript of my presentation:
Last Friday, when Pieter Colpaert tweeted the talks he intended to visit (Hi Pieter!), he said two things. First he said that after the coffee break it starts to get difficult, and that’s true: measuring impact is a difficult topic. And he asked about measuring impact: how can you possibly do that? He’s right to be cautious.
Because our everyday perception of impact, and of how to detect it, is often too simplistic. “Where’s the next Google?”, the EC asked years ago. But it’s the wrong question. We will only know in 20 years, when it is the new tech giant. Today it is likely a small start-up of four people with laptops and one idea, somewhere in Lithuania or Bulgaria, and framed this way we are by definition not able to recognise it. Asking for the killer app for open data is a similarly wrong question.
When it comes to impact, we seem to want one straightforward big thing. Hundreds of billions of euro impact in the EU as a whole, made up of a handful of wildly successful things. But what does that actually mean for you, a local government? And while you’re looking for that big impact you are missing all the smaller craters in this same picture, and also the bigger ones if they don’t translate easily into money.
Over the years, however, a range of studies, cases and research papers have documented specific impacts and effects. My colleagues and I started collecting those a long time ago, and I used them to help contextualise potential impacts, first for the Flemish government and last year for the Serbian government: to show what observed impact in, for instance, a Spanish sector would mean in the corresponding Belgian context, or how a global prediction correlates to the Serbian economy and government strategies.
The UNDP in Serbia asked me to extend that with a proposal for indicators to measure impact as they move forward with new open data action plans, as a follow-up to the national readiness assessment I did for them earlier. I took the existing studies and looked at what they had tried to measure, what the common patterns are, and what exactly they had looked at. I turned that into a framework for impact measurement.
In the following minutes I will address three things. First, what makes measuring impact so hard. Second, what the common patterns across existing research are. Third, how, avoiding the pitfalls and using the commonalities, we can build a framework that in itself is an indicator.

Let’s first talk about the things that make measuring impact hard.
Judging by the available studies and cases, there are several issues that make any easy answers to the question of open data impact impossible. There is a range of reasons measurement is hard; I’ll highlight a few.
Context is key. If you don’t know what you’re looking at, or why, no measurement makes much sense. And you can only know that in specific contexts, but specifying contexts takes effort. It forces the question: where do you WANT impact?
Another issue is showing the impact of many small increments. Like how every Dutch person looks at this most used open data app every morning, the rain radar. How often has it changed a decision from taking the car to taking a bike? What does it mean in terms of congestion reduction, or emission reduction? Can you meaningfully quantify that at all?
Also important is who is asking for measurement. In one of my first jobs, my employer didn’t have email for all yet, so I asked for it. In response the MD asked me to put together the business case for email. This is a classic response when you don’t want to change anything. Often asking for measurement is meant to block change. Because they know you cannot predict the future. Motives shape measurements. The contextualisation of impact elsewhere to Flanders and Serbia in part took place because of this. Use existing answers against such a tactic.
Maturity and completeness of both the provision side, government, and the demand side, re-users, determine in equal measure what is possible at all in terms of open data impact. If there is no mature provision side, in the end nothing will happen. If provision is perfect but the demand side isn’t mature, it still doesn’t matter. Impact demands similar levels of maturity on both sides. It demands acknowledging interdependencies. And where that maturity is lacking, tracking impact means looking at different sets of indicators.
Measurements often motivate people to game the system. Especially single measurements. When number of datasets was still a metric for national portals the French opened with over 350k datasets. But really it was just a few dozen, which they had split according to departments and municipalities. So a balance is needed, with multiple indicators that point in different directions.
Open data, especially open core government registers, can be seen as infrastructure. But we actually don’t know how infrastructure creates impact. We know that building roads usually has a certain impact (investment correlates to a certain % rise in GDP), but we don’t know how it does so. Seeing open data as infrastructure is a logical approach (the consensus seems that the potential impact is about 2% of GDP), but it doesn’t help us much to measure impact or see how it creates that.
Network effects exist, but they are very costly to track: first order, second order, third order, higher order effects. We’re doing case studies for ESA on how satellite data gets used. We can establish network effects, for instance how ice breakers in the Gulf of Bothnia use satellite data in ways that ultimately reduce supermarket prices, but doing 24 such cases is a multi-year effort.
E pur si muove! “And yet it moves”, Galileo said. The same is true for open data. Most measurements are proxies: they show something moving, without necessarily showing the thing that is doing the moving. Open data often is a silent actor, or a long-range one. Yet still it moves.
Yet still it moves. And if we look at the patterns across established studies, that is indeed what we see. There are commonalities in what movement we see. In the list on the slide the last point, that open data is a policy instrument, is key. We know publishing data enables other stakeholders to act. When you do that on purpose, you turn open data into a policy instrument, the cheapest one you have next to regulation and financing.
We all know the story of the drunk who lost his keys. He was searching under the light of a street lamp. Someone who came to help asked if he had lost the keys there. No, the drunk said, but at least there is light here. The same is true for open data. If you know what you published it for, you will at least be able to recognise relevant impact, if not all the impact it creates. Using it as a policy instrument is like switching on the lights.
Dealing with lack of maturity means having different indicators for every step of the way: not just seeing if impact occurs, but also if the right things are being done to make impact possible. Lead and lag indicators, in other words.
The framework, then, is built from what has been used to establish impact in the past, and what we see in our projects as useful approaches. The point here is that we are not overly simplifying measurement, but adapting it to whatever the context of a data provider or user is. Also, there’s never just one measurement, so a balanced approach is possible and the system can’t be gamed. It covers various levels of maturity, from your first open dataset all the way to network effects. And you see that indicators that by themselves are too simple can still be used.
Additionally the framework itself is a large-scale sensor. If one indicator moves, you should see movement in other indicators over time as well. If you throw a stone in the pond, you should see ripples propagate. This means that if you start with data provision indicators only, you should see measurements in other phases pick up. This allows you both to use a set of indicators across all phases, and to move to more progressive ones when you outgrow the initial ones.

Finally, some recommendations.
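The “framework as sensor” idea can be illustrated with a toy sketch. The phase names, indicators and numbers below are all hypothetical, invented for illustration; they are not taken from the actual framework:

```python
# Toy sketch: indicators are grouped by maturity phase, and movement in an
# early phase should later show up in subsequent phases. All names and
# numbers here are made up for illustration.

phases = ["provision", "re-use", "effects"]  # ordered from early to mature

# Yearly measurements per phase (hypothetical values).
measurements = {
    "provision": [10, 25, 40],   # e.g. datasets published
    "re-use":    [0, 5, 18],     # e.g. known re-users
    "effects":   [0, 0, 3],      # e.g. documented impact cases
}

def is_moving(series):
    """An indicator 'moves' if its latest value grew versus the previous one."""
    return len(series) >= 2 and series[-1] > series[-2]

# Sensor check: if an earlier phase moved, later phases should eventually move too.
for earlier, later in zip(phases, phases[1:]):
    if is_moving(measurements[earlier]) and not is_moving(measurements[later]):
        print(f"'{earlier}' is moving but '{later}' is not - investigate the gap")
    else:
        print(f"'{earlier}' -> '{later}': movement propagating")
```

The design choice mirrored here is that the check compares adjacent phases, so a stalled phase is flagged where the ripple stops rather than as one global pass/fail verdict.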
Some final thoughts. If you publish by default as integral part of processes, measuring impact, or building a business case is not needed as such. But measurement is very helpful in the transition to that end game. Core data and core policy elements, and their stakeholders are key. Measurement needs to be designed up front. Using open data as policy instrument lets you define the impact you are looking for at the least. The framework is the measurement: Only micro-economic studies really establish specific economic impact, but they only work in mature situations and cost a lot of effort, so you need to know when you are ready for them. But measurement can start wherever you are, with indicators that reflect the overall open data maturity level you are at, while looking both back and forwards. And because measurement can be done, as a data holder you should be doing it.
For the UNDP in Serbia, I made an overview of existing studies into the impact of open data. I’ve done something similar for the Flemish government a few years ago, so I had a good list of studies to start from. I updated that first list with more recent publications, resulting in a list of 45 studies from the past 10 years. The UNDP also asked me to suggest a measurement framework. Here’s a summary overview of some of the things I formulated in the report. I’ll start with 10 things that make measuring impact hard, and in a later post zoom in on what makes measuring impact doable.
While it is tempting to ask for a ‘killer app’ or ‘the next tech giant’ as proof of impact of open data, establishing the socio-economic impact of open data cannot depend on that. Both because answering such a question is only possible with long term hindsight which doesn’t help make decisions in the here and now, as well as because it would ignore the diversity of types of impacts of varying sizes known to be possible with open data. Judging by the available studies and cases there are several issues that make any easy answers to the question of open data impact impossible.
1 Dealing with variety and aggregating small increments
There are different varieties of impact, in all shapes and sizes. If an individual stakeholder, such as a citizen, does a very small thing based on open data, like making a different decision on some day, how do we express that value? Can it be expressed at all? E.g. in the Netherlands the open data based rain radar is used daily by most cyclists, to see if they can get to the railway station dry, had better wait ten minutes, or should rather take the car. The impact of a decision to cycle can mean lower individual costs (no car usage), personal health benefits, economic benefits (lower traffic congestion), environmental benefits (lower emissions) etc., but is nearly impossible to quantify meaningfully as a single act. Only where such decisions are stimulated, e.g. by providing open data that allows much smarter, multi-modal route planning, may aggregate effects become visible, such as a reduction of traffic congestion hours in a year, general health benefits for the population, or a reduction in traffic fatalities, which can be much better expressed as a monetary value to the economy.
2 Spotting new entrants, and tracking SMEs
The existing research shows that previously inactive stakeholders, and small to medium sized enterprises are better positioned to create benefits with open data. Smaller absolute improvements are of bigger value to them relatively, compared to e.g. larger corporations. Such large corporations usually overcome data access barriers with their size and capital. To them open data may even mean creating new competitive vulnerabilities at the lower end of their markets. (As a result larger corporations are more likely to say they have no problem with paying for data, as that protects market incumbents with the price of data as a barrier to entry.) This also means that establishing impacts requires simultaneously mapping new emerging stakeholders and aggregating that range of smaller impacts, which both can be hard to do (see point 1).
3 Network effects are costly to track
The research shows the presence of network effects, meaning that the impact of open data is not contained in, or even mostly specific to, the first order of re-use of that data. Causal effects as well as second and higher order forms of re-use regularly occur and quickly become, certainly in aggregate, much higher than the value of the original form of re-use. For instance the European Space Agency (ESA) commissioned my company for a study into the impact of open satellite data for ice breakers in the Gulf of Bothnia. The direct impact for ice breakers is saving costs on helicopters and fuel, as the satellite data makes determining where the ice is thinnest much easier. But the aggregate value of the consequences of that is much higher: it creates a much higher predictability of ships and the (food) products they carry arriving in Finnish harbours, which means lower stocks are needed to ensure supply of these goods. This reverberates across the entire supply chain, saving costs in logistics and allowing lower retail prices across Finland. When mapping such higher order and network effects, every step further down the chain of causality shows that while the bandwidth of value created increases, the certainty that open data is the primary contributing factor decreases. Such studies are also time consuming and costly. It is often unlikely and unrealistic to expect data holders to go to such lengths to establish impact. The mentioned ESA example, for instance, is part of a series of over 20 such case studies ESA commissioned over the course of 5 years, at considerable cost.
4 Comparison needs context
Without context, of a specific domain or a specific issue, it is hard to assess benefits and compare them with their associated costs, which is often the underlying question concerning the impact of open data: does it weigh up against the costs of open data efforts? Even though in general open data efforts shouldn’t be costly, how does some type of open data benefit compare to the costs and benefits of other actions? Such comparisons can be made in a specific context (e.g. comparing the cost and benefit of open data for route planning with other measures to fight traffic congestion, such as increasing the number of lanes on a motorway, or increasing the availability of public transport).
5 Open data maturity determines impact and type of measurement possible
Because open data provisioning is a prerequisite for it having any impact, the availability of data and the maturity of open data efforts determine not only how much impact can be expected, but also what can be measured (mature impact might be measured as impact on e.g. traffic congestion hours in a year, while early impact might be measured in how the number of re-users of a dataset is still steadily growing year over year).
6 Demand side maturity determines impact and type of measurement possible
Whether open data creates much impact is not only dependent on the availability of open data and the maturity of the supply side, even if that is, as mentioned, a prerequisite. Impact, judging by the existing research, is certain to emerge, but the size and timing of such impact depend on a wide range of other factors on the demand side as well, including things such as the skills and capabilities of stakeholders, time to market, location and timing. An idea for open data re-use that finds no traction in France, because the initiators can’t bring it to fruition or because the potential French demand is too low, may well find its way to success in Bulgaria or Spain, because local circumstances and markets differ. In the Serbian national open data readiness assessment I performed for the World Bank and the UNDP in 2015, this is reflected in the various dimensions assessed, which cover both supply and demand, as well as general aspects of Serbian infrastructure and society.
7 We don’t understand how infrastructure creates impact
The notion of broad open data provision as public infrastructure (such as the UK, Netherlands, Denmark and Belgium are already doing, and Switzerland is starting to do) further underlines the difficulty of establishing the general impact of open data on e.g. growth. The point that infrastructure (such as roads, telecoms, electricity) is important to growth is broadly acknowledged, with the corresponding acceptance of that within policy making. This acceptance of quantity and quality of infrastructure increasing human and physical capital however does not mean that it is clear how much what type of infrastructure contributes at what time to economic production and growth. Public capital is often used as a proxy to ascertain the impact of infrastructure on growth. Consensus is that there is a positive elasticity, meaning that an increase in public capital results in an increase in GDP, averaging at around 0.08, but varying across studies and types of infrastructure. Assuming such positive elasticity extends to open data provision as infrastructure (and we have very good reasons to do so), it will result in GDP growth, but without a clear view overall as to how much.
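As a back-of-the-envelope illustration of what an elasticity of around 0.08 implies, here is a small sketch. The elasticity figure comes from the text above; the GDP and public capital figures are invented purely for the arithmetic:

```python
# Back-of-the-envelope use of an output elasticity of public capital.
# An elasticity of ~0.08 means: a 1% increase in public capital corresponds,
# on average, to a 0.08% increase in GDP. GDP and the size of the capital
# increase below are illustrative assumptions, not real data.

elasticity = 0.08                 # consensus average cited above
gdp = 500e9                       # hypothetical GDP, in euros
public_capital_increase = 0.10    # hypothetical 10% increase in public capital

gdp_growth_pct = elasticity * public_capital_increase * 100  # in percent
gdp_growth_abs = gdp * elasticity * public_capital_increase  # in euros

print(f"Expected GDP growth: {gdp_growth_pct:.1f}% "
      f"(~{gdp_growth_abs / 1e9:.0f} billion euro)")
```

The sketch also shows the limitation named in the text: the multiplication tells you how much growth to expect, but nothing about the mechanism by which the infrastructure produces it.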
8 E pur si muove
Most measurements concerning open data impact need to be understood as proxies. They are not measuring how open data is creating impact directly, but from measuring a certain movement it can be surmised that something is doing the moving. Where opening data can be assumed to be doing the moving, and where opening data was a deliberate effort to create such movement, impact can then be assessed. We may not be able to easily see it, but still it moves.
9 Motives often shape measurements
Apart from the difficulty of measuring impact and the effort involved in doing so, there is also the question of why such impact assessments are needed. Is an impact assessment needed to create support for ongoing open data efforts, or to make existing efforts sustainable? Is an impact measurement needed for comparison with specific costs for a specific data holder? Is it to be used for evaluation of open data policies in general? In other words, in whose perception should an impact measurement be meaningful?
The purpose of impact assessments for open data further determines and/or limits the way such assessments can be shaped.
10 Measurements get gamed, become targets
Finally, with any type of measurement, there needs to be awareness that those with a stake in a measurement are likely to try and game the system, especially where measurements determine funding for further projects, or the continuation of an effort. This must lead to caution when determining indicators; measurements easily become a target in themselves. For instance in the early days of national open data portals being launched worldwide, a simple metric often reported was the number of datasets a portal contained. This is an example of a ‘point’ measurement that can easily be gamed, for instance by subdividing a dataset into several subsets. The first version of the national portal of a major EU member did precisely that and boasted several hundred thousand datasets at launch, which were mostly small subsets of a bigger whole. It briefly made for good headlines, but did not make for impact.
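The gaming problem can be made concrete with a toy example (all numbers invented): splitting one national dataset into one subset per municipality inflates a raw dataset count, while a paired indicator such as average records per dataset immediately exposes the move.

```python
# Toy illustration of why a single 'number of datasets' metric is gameable,
# and why pairing it with a second indicator restores balance.
# All names and numbers are invented.

# One national dataset of 1,000,000 records...
portal_before = [{"name": "addresses", "records": 1_000_000}]

# ...republished as one subset per municipality (here: 500 municipalities).
portal_after = [
    {"name": f"addresses-municipality-{i}", "records": 2_000} for i in range(500)
]

def dataset_count(portal):
    return len(portal)

def avg_records(portal):
    return sum(d["records"] for d in portal) / len(portal)

# The point metric explodes (1 -> 500), but the paired metric collapses:
# nothing new was actually published.
print(dataset_count(portal_before), avg_records(portal_before))
print(dataset_count(portal_after), avg_records(portal_after))
```

This is the “multiple indicators pointing in different directions” balance the talk argues for: any single number can be optimised, but optimising one tends to visibly distort the others.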
In a second part I will take a closer look at what these 10 points mean for designing a measurement framework to track open data impact.
.@ton_zylstra explains why it is so hard to simply answer the eternal question from policy makers: “How should we measure the impact of Open Data?”
Contextualise!
Slides: grnl.eu/frame#OpenBelgium19
There were several points made in the conversation after my presentation yesterday at Open Belgium 2019. This is a brief overview to capture them here.
1) One remark was about the balance between privacy and openness, and asking about (negative) privacy impacts.
The framework assumes government as the party interested in measurement (given that that was the assignment for which it was created). Government-held open data is by default not personal data, as re-use rules are based on access regimes which in turn all exclude personal data (with a few separately regulated exceptions). What I took away from the remark is that, since we know new privacy and other ethical issues may arise from working with data combinations, it might be of interest to formulate indicators that try to track negative outcomes or spot unintended consequences, in the same way as we are trying to track positive signals.
2) One question was about if I had included all economic modelling work in academia etc.
I didn’t. This isn’t academic research either. It seeks to apply lessons already learned. What was included were existing documented cases, studies and research papers looking at various aspects of open data impact. Some of those are academic publications, some aren’t. What I took from those studies is two things: what exactly did they look at (and what did they find), and how did they assess a specific impact? The ‘what’ was used as potential indicator, the ‘how’ as the method. It is of interest to keep tracking new research as it gets published, to augment the framework.
3) Is this academic research?
No, its primary aim is to be a practical instrument for data holders as well as national open data policy makers. It is not meant to establish scientific truth, or to completely quantify impact once and for all. It’s meant to establish whether there are signs the right steps are being taken, and whether that results in visible impact. The aim, and this connects to the previous question as well, is to avoid extensive modelling techniques and to favour indicators we know work, where the methods are straightforward. This is to ensure that government data holders are capable of doing these measurements themselves, and of using the framework actively as an instrument.
4) Does it include citizen science (open data) efforts?
This is an interesting one (asked by Lukas of Luftdaten.info). The framework currently does include, in a way, the existence and emergence of citizen science projects, as these would come up in any stakeholder mapping attempts and in any emerging ecosystem tracking, and as examples of using government open data (as context and background for citizen science measurements). But the framework doesn’t look at the impact of such efforts, not in terms of socio-economic impact and not in terms of government being a potential user of citizen science data. Again, the framework is there to make visible the impact of government opening up data. That said, I think it’s not very difficult to adapt the framework to track a citizen science project’s own impact. Adding citizen science projects in a more direct way, as indicators for the framework itself, is harder I think, as it needs more clarification of how it ties into the impact of open government data.
5) Is this based only on papers, or also on approaching groups, and people ‘feeling’ the impact?
This was connected to the citizen science bit. Yes, the framework is based on existing documented material only. And although a range of those studies base themselves on interviewing or surveying various stakeholders, that is not a default or deliberate part of how the framework was created. I do however recognise the value of, for instance, participatory narrative inquiry that makes the real experiences of people visible, and the patterns across those experiences. Including that sort of measurement would be useful, especially for the social and societal impacts of open data. But currently none of the studies that were re-used in the framework took that approach. It does make me think about how one could set up something like that to monitor the impact of e.g. local government open data initiatives.
All presentations and videos of the Open Knowledge Belgium 2019 #openbelgium19 conference early March are now online. Find them here 2019.openbelgium.be/presentations. Slides and transcript of my talk I had blogged earlier.
It’s the end of December, and we’re about to enjoy the company of dear friends to bring in the new year, as is our usual tradition. This means it is time for my annual year in review posting, the ‘Tadaa!’ list.
Nine years ago I started writing end-of-year blogposts listing the things that happened that year that gave me a feeling of accomplishment, that make me say ‘Tadaa!’, so this is the tenth edition (See the 2018, 2017, 2016, 2015, 2014, 2013, 2012, 2011 and 2010 editions). I am usually moving forwards to the next thing as soon as something is finished, and that often means I forget to celebrate or even acknowledge things during the year. Sometimes I forget things completely (a few years ago I completely forgot I organised a national level conference at the end of a project). My sense of awareness has improved in the past few years, especially since I posted week notes for the past 18 months. Still it remains a good way to reflect on the past 12 months and list the things that gave me a sense of accomplishment. So, here’s this year’s Tadaa!-list:
Visiting Open Knowledge Belgium to present the open data impact measurement framework I developed as part of an assignment for the UNDP in 2018. The way it accommodates different levels of maturity on both the provision and demand side of open data, and looks at both lead and lag indicators, allows the entire framework to be a sensor: you should see the impact of actions propagate through indicators on subsequent levels. This allows you to look backwards and forwards with the framework, providing a sense of direction and speed as well as of current status. I’m currently deploying those notions with a client organisation for more balanced and ethical measurement and data collection.
When my project portfolio stabilised on a few bigger things, rather than a range of smaller things, I felt restless at first (there should be more chaos around me!), but I slowly recognised it as an opportunity to read, learn, and do more of the stuff on my endless backlog.
Those few bigger things allow me to more deeply understand client organisations I do them in, and see more of my work and input evolve into results within an organisation. The clients involved seem to be very happy with the results so far, and I actually heard and accepted their positive feedback. Normally I’d dismiss such compliments.
Found a more stable footing for my company and in working and balancing with the other partners. We are now in a much better place than last year: organisationally, as a team, and financially.
We opened up offices in Utrecht for my company, meaning we now have space available to host people and events. We used some of that new opportunity, organising a few meet-ups, an unconference and hosting the Open Nederland general assembly meeting, but it is something I’d like to do more of: setting a rhythm of making our offices more of a hub in our network.
Got to be there for friends, and friends got to be there for me. Thank you.
Visited Peter, Catherine and Oliver on PEI for the Crafting {:} a Life unconference. The importance of spending time together in unhurried conversations can’t be overestimated.
Gave a keynote at the Coder Dojo NL conference. It turned out to be a more human and less abstract version of my Networked Agency keynote at SOTN in 2018, helping me to better phrase my own thoughts on how technology, agency and being human interplay.
Organised 2 IndieWebCamps with Frank Meeuwsen, basically bringing the IndieWeb to the Netherlands. I enjoyed working with Frank, after having been out of touch for a while. Meeting over dinner at Ewout’s early last year, blogging about independent web technology, Elmine’s birthday unconference and visiting an IndieWebCamp in Germany together all in 2018, reconnected us, leading to organising two successful events in both Utrecht and Amsterdam, putting two new cities on the IndieWeb map.
Kept up the blogging (for the 17th year), making my site(s) even more central to the way I process and share info by doing things like syndicating to Twitter and Mastodon from my site, and not treating Twitter as a place where I write original content.
Enjoying every day still how much more central in the country we now live, how so many more things are now within easy reach. Events I can visit in the evening in Amsterdam, Utrecht, Rotterdam or The Hague, without the need to book a hotel, because I can be back home within an hour. How it allows us to let Y experience she’s part of a wider family, because it’s now so much easier to spend time with E’s brothers and cousins and my sisters. How comfortable our house is, and how I enjoy spending time and working in our garden.
Celebrated the 50th birthday of a dear friend. We all go back at least 25 years, from when we were all at university, and roommates in various constellations. M said she felt privileged to have all of us around the table that night, that all of us responded to her invitation. She’s right, and all of us realised it: it is a privilege. The combination of making the effort to hang out together, and doing that consistently over many years, creates value and depth and a sense of connectedness by itself. Regardless of what happened and happens to any of us, that always stands.
Finally attended Techfestival, for its third edition, having had to decline the invitations to the previous two. Was there to get inspired, take the pulse of the European tech scene, and as part of the Copenhagen 150 helped create the Techpledge. Participating in that process gave me a few insights into my own role and motivations in the development and use of technology.
Getting into an operational rhythm with the new director and me in my role as the chairman of the Open State Foundation. Working in that role opened up my mind again to notions about openness and good governance that I lost track of a bit focussing on the commercial work I do in this area with my company. It rekindles the activist side of me more again.
Working with my Open NL colleagues, yet another angle of open content, seen from the licensing perspective. Enjoyed giving a presentation on Creative Commons in Leeuwarden as part of the Open Access Week events organised by the local public and higher education libraries in that city.
Visited some conferences without having an active contribution to the program. It felt like a luxury to just dip in and out of sessions and talks on a whim.
Finding a bit more mental space and time to dive deeper into some topics, such as ethics with regard to data collection and usage, information hygiene and security, AI, and distributed technologies.
Worked in Belgium, Denmark, Canada and Germany, which together amounts to the smallest amount of yearly travel I have done in the last decade. Travel is a habit, Bryan said to me a few years back, and it’s true. I felt the withdrawal symptoms this year: I missed travel, I need it, and as a result I especially enjoyed my trips to Denmark and Canada. In the coming year there should be an opportunity to work in SE Asia again, and I’m on the lookout for more activities across the EU member states.
Presented in Germany, in German, for the first time in years. Again something I’d like to do more of, although I find it difficult to create opportunities to work there. The event opened my eyes to the totally different level of digitisation in Germany. There’s a world to gain there, and there should be opportunities in contributing to that.
Hosted an unconference at the Saxion University of Applied Sciences in Enschede, in celebration of the 15th anniversary of its industrial design department. Its head, Karin van Beurden, asked me to do this as she had experienced our birthday unconferences and thought it a great way to celebrate something in a way that is intellectually challenging and has a bite to it. This year saw a rise in unconferences I organised, facilitated or attended (7), and I find there’s an entire post-BarCamp generation completely unfamiliar with the concept. I fully intend to do more of this next year, as part of the community efforts of my company. We did one on our office rooftop this year, but I really want this to become a series.
Spent a lot of time (every Friday) with Y, and (on weekends) with the three of us. Y is at an age where her action radius is growing, and the type of activities we can undertake have more substance to them. I love how her observational skills and mind work, and the types of questions she is now asking.
Taking opportunities to visit exhibits when they arise. Allowing myself the 60 or so minutes to explore. Like when I visited the Chihuly exhibit in Groningen when I was in the city for an appointment and happened to walk past the museum.
This post is not about it, but I have tangible notions about what I want to do and focus on in the coming months, more than I had a year ago. Part of that is what I learned from the things above that gave me a sense of accomplishment. Part of that is the realisation E and I need to better stimulate and reinforce each others professional activities. That is a good thing too.
In 2019 I worked 1756 hours, which is about 36 hours per week worked. This is above my actual 4 day work week, and I still aim to reduce it further, but it’s stable compared to 2016-2018, which is a good thing. Especially considering it was well over 2400 in 2011 and higher before.
I read 48 books, less than one a week, but including a handful of non-fiction, and nicely evenly spread out over the year, not in bursts. I did not succeed in reading significantly more non-fiction, although I did buy quite a number of books, so there’s a significant stack waiting for me, just as there is a range of fiction works still waiting for my attention. I don’t think I need to buy more books in the coming 4 or even 6 months, but I will have to learn to keep the bedside lamp on longer, as I have a surprising number of paper books waiting for me after years of e-books only.
We’ll see off the year in the company of dear friends in the Swiss mountainside, and return early 2020. Onwards!