We’re in a time where whatever is presented to us as discourse on Facebook, Twitter or any of the other platforms out there may or may not come from humans: it may just as well come from bots, or from a person or group with a specific agenda, irrespective of what you say or how you respond. We’ve seen it at the political level, with outside influence on elections, we’ve seen it in things like Gamergate, and in critiques of the last Star Wars movie. It creates damage on a societal level, and it damages people individually. To quote Angela Watercutter, the author of the mentioned Star Wars article,

…it gets harder and harder to have an honest discussion […] when some of the speakers are just there to throw kerosene on a flame war. And when that happens, when it’s impossible to know which sentiments are real and what motivates the people sharing them, discourse crumbles. Every discussion […] could turn into a […] fight — if we let it.

Discourse disintegrates, I think, specifically when there’s no meaningful social context in which it takes place, and no social connections between the speakers in that discourse. The effect stems not just from not really knowing who you’re conversing with, but more importantly from the fact that on a general platform anyone can bring themselves into the conversation, or worse, force themselves into it. Which is why you should never wade into newspaper comments, even though we all read them at times, because watching discourse crumble from the sidelines has a certain addictive quality. This can happen because participants themselves don’t control the setting of any conversation they are part of, and none of those conversations are limited to a specific (social) context.

Unlike in your living room, over drinks in a pub, or at a party with friends of friends of friends. There you know someone. Or if you don’t, you know them in that setting, you know their behaviour at that event thus far. All have skin in the game, as misbehaviour has immediate social consequences. Social connectedness is a necessary context for discourse, stemming either from personal connections or from the setting of the place or event in which it takes place. Online discourse often lacks both, discourse crumbles, entropy ensues. Without consequence for those causing the crumbling. Which makes it fascinating when the missing social context is retroactively restored, outing the misbehaving parties, such as in the book I once bought by Tinkebell, in which she matches death threats she received against the senders’ very normal Facebook profiles.

Two elements are therefore needed, I find: one that determines who can be part of which discourse, and one that gives control over the context of that discourse. They are point 2 and point 6 in my manifesto on networked agency.

  • Our platforms need to mimic human networks much more closely: our networks are never ‘all in one mix’ but a tapestry of overlapping and distinct groups and contexts. Yet centralised platforms put us all in the same space.
  • Our platforms also need to be ‘smaller’ than the group using them, meaning a group can deploy, alter, maintain and administer a platform for their specific context. Of course you can still be a troll in such a setting, but you can no longer be one without a cost, as your peers can all act themselves and collectively.
  • This is unlike e.g. FB, where by design defending against trollish behaviour takes more effort than being a troll, and trolling never carries a cost for the troll. There must, in short, be a finite social distance between speakers for discourse to be possible. Platforms that dilute that, or allow for infinite social distance, are where discourse crumbles.

    This points to federation (a platform within the control of a specific group, interconnected with other groups doing the same), and decentralisation (individuals running a platform for one, and interconnecting those). Doug Belshaw recently wrote in a post titled ‘Time to ignore and withdraw?‘ about how he first saw individuals running their own Mastodon instance as quirky and weird, until he read a blog post by Laura Kalbag in which she writes about why you should run Mastodon yourself if possible:

    Everything I post is under my control on my server. I can guarantee that my Mastodon instance won’t start profiling me, or posting ads, or inviting Nazis to tea, because I am the boss of my instance. I have access to all my content for all time, and only my web host or Internet Service Provider can block my access (as with any self-hosted site.) And all blocking and filtering rules are under my control—you can block and filter what you want as an individual on another person’s instance, but you have no say in who/what they block and filter for the whole instance.

    Similarly I recently wrote,

    The logical end point of the distributed web and federated services is running your own individual instance. Much as in the way I run my own blog, I want my own Mastodon instance.

    I also do see a place for federation, where a group of people from a single context run an instance of a platform. A group of neighbours, a sports team, a project team, some other association, but always settings where damaging behaviour carries a cost because social distance is finite and context defined, even if temporary or emergent.

    During his keynote at the Partos Innovation Festival Kenyan designer Mark Kamau mentioned that “45% of Kenya’s GDP was mobile.” That is an impressive statistic, so I wondered if I could verify it. With some public and open data, it was easy to follow up.

    World Bank data pegs Kenya’s GDP in 2016 at some 72 billion USD.
    Kenya’s central bank publishes monthly figures on the volume of transactions through mobile; for September 2018 it reports 327 billion KSh, while the lowest monthly figure is February at 300 billion. With 100 KSh being roughly equivalent to 1 USD, the monthly transaction volume exceeds 3 billion USD every month. For a year this means at least 3*12=36 billion USD, or about half of the 2016 GDP figure. An amazing volume.
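    As a quick back-of-the-envelope check, the same arithmetic in a few lines of Python (figures as quoted above; the 100 KSh to 1 USD rate is a rough approximation):

```python
# Back-of-the-envelope check of the 'mobile GDP' claim, using the figures quoted above.
gdp_2016_usd = 72e9     # Kenya's 2016 GDP, World Bank
monthly_ksh = 300e9     # lowest monthly mobile transaction volume (February), Central Bank of Kenya
ksh_per_usd = 100       # rough exchange rate

monthly_usd = monthly_ksh / ksh_per_usd   # 3 billion USD per month
yearly_usd = 12 * monthly_usd             # 36 billion USD per year
share = yearly_usd / gdp_2016_usd         # ~0.5, i.e. about half of 2016 GDP

print(f"{yearly_usd / 1e9:.0f} bn USD per year, {share:.0%} of 2016 GDP")
```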

    It was a beautiful day in Amsterdam, while I walked to the venue through the eastern harbour area

    Today I was in Amsterdam, participating in the Partos Innovation Festival, a yearly meet-up of those working on change and innovation in development and humanitarian aid. It was a much larger gathering than I had expected, and through the day I encountered a wide variety of projects and ideas. It was clear I normally operate in different environments, as some of the projects were making (technology) choices that wouldn’t have been made elsewhere. Clearly all of us work within the constraints of the capabilities, experience and knowledge available to us in our networks and sectors. The day started with two worthwhile keynotes, one by Kenyan designer Mark Kamau, one by human rights lawyer Tulika Srivastava from India.


    The reason I attended was that I was a jury member for one of the 5 innovation awards presented today, the Dutch Coalition for Humanitarian Innovation’s “Best Humanitarian Innovation Award”. Together with Klaas Hernamdt (we go back a long time in the FabLabs network) and Suzanne Laszlo, general director of UNICEF Netherlands, I had the pleasure of judging a shortlist of 8 projects, from which we had already selected 3 nominees two weeks ago. Today the winner was announced: Optimus, which through data analysis and optimisation models helps the WFP save millions of dollars while distributing food of the same nutritional value to those most in need. In trial runs this allowed the WFP to feed 100,000 more people at the same cost. This is crucial, as food aid continuously struggles to get enough funding.

    While Optimus were deserved winners, I must say the other two finalists came close. Of the overall 40 points they could get in our judging method, all three ended up within 2.5 points of each other, while the other 5 shortlisted projects fell further behind. Personally I liked Translators Without Borders very much as well; they ended up in second place. Twice in the past week I also had the pleasure of meeting Animesh Prakash of Oxfam India, whose cheap and distributed early flood warning system came third. It seems to me his effort might benefit from building closer ties to the maker community in India, and I will try to assist him in doing that.

    Klaas handing the award to the winners of the Optimus project, with the day’s moderator Marina Diboma

    In just over a week I will be joining the Nuremberg IndieWebCamp, together with Frank Meeuwsen. As I said earlier, like Frank, I’m wondering what I could be working on, talking about, or sharing at the event. Especially as the event is set up to not just talk but also build things.

    So I went through my blog posts of the past months that concerned the indie web, and made a list of potential things. They are of varying feasibility and scope, so I can probably strike off quite a few, and should likely go for the simplest one, which could also be re-used as a building block for some of the less easy options. The list contains 13 things (does that have a name, a collection of 13 things, like ‘odd dozen’ or something? Yes it does: a baker’s dozen, see comment by Ric below.). They fall into a few categories: webmention related, rss reader related, more conceptual issues, and hardware/software combinations.

    1. Getting WebMention to display the way I want, within the Sempress theme I’m using here. The creator of the theme, Matthias Pfefferle, may be present at the event. Specifically I want to get some proper quotes displayed underneath my postings, and also understand much better what webmention data is stored and where, and how to manipulate it.
    2. Building a growing list of IndieWeb sites by harvesting successful webmentions from my server logs, and publish that in a re-usable (micro-)format (so that you could slowly map the Indieweb over time)
    3. Make it much easier for myself to blog from mobile, or mail to my blog, using the MicroPub protocol, e.g. using the micropublish client (a rough sketch of what this could look like follows below the list).
    4. Dive into the TinyTinyRSS datastructure to better understand it. First to be able to add tags to feeds (not articles), as per my wishlist for RSS reader functionality.
    5. Make basic visualisation possible on top of TinyTinyRSS database, as a step to a reading mode based on pattern detection
    6. Allow better search across TinyTinyRSS, full text, to support the reading mode of searching material around specific questions I hold
    7. Adding machine translation to TinyTinyRSS, so I can diversify my reading, and compare original to its translation on a post by post basis
    8. Visualising conversations across blogs, both for understanding the network dynamics involved and for discovery
    9. Digging up my old postings 2003-2005 about my information strategies and re-formulate them for networked agency and 2018
    10. Find a way of displaying content (not just postings, but parts of postings) limited to a specific audience, using IndieAuth.
    11. Formulate my Networked Agency principles, along the lines of the IndieWeb principles, for ‘indietech’ and ‘indiemethods’
    12. Attempt to run FreedomBone on a Raspberry Pi, as it contains a range of tools, including GnuSocial for social networking. (Don’t forget to bring a R Pi for it)
    13. Automatically harvest my Kindle highlights and notes and store them locally in a way I can re-use.

    These are the options. Now I need to pick something that is actually doable with my limited coding skills, yet also challenges me to learn/do something new.
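    For item 3, something like the following minimal sketch could work, assuming a Micropub endpoint and an IndieAuth access token are already in place (the endpoint URL and token below are placeholders):

```python
import requests

# Placeholder values; in practice the endpoint is discovered from the site's
# HTML head and the token is obtained via IndieAuth.
MICROPUB_ENDPOINT = "https://example.com/micropub"
ACCESS_TOKEN = "my-indieauth-token"

def post_note(content: str) -> str:
    """Publish a simple note (h-entry) through the Micropub protocol."""
    response = requests.post(
        MICROPUB_ENDPOINT,
        headers={"Authorization": f"Bearer {ACCESS_TOKEN}"},
        data={"h": "entry", "content": content},
    )
    response.raise_for_status()
    # On success the endpoint returns 201/202 with the new post's URL in the Location header.
    return response.headers.get("Location", "")

if __name__ == "__main__":
    print(post_note("Posted from my phone, via Micropub."))
```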

    Previously I had tried to get GNU Social running on my own hosted domain, as a way to interact with Mastodon. I did not get it to work, for reasons unclear to me: I could follow people on Mastodon, but would not receive their messages, nor would they see mine.

    This morning I saw the message below in my Mastodon timeline.

    It originates from Peter Rukavina’s own GNU Social install. So at least he got the ‘sending mentions’ part working. He is also able to receive my replies, as my responses show up underneath his original message. Including, it seems, ones whose visibility I had limited.

    Now I am curious to compare notes. Which version of GNU Social? Any tweaks? Does Peter receive my timeline? How do permissions propagate (I only let people follow me after I approve them)? And more. I notice that his URL structures are different from those in my GNU Social install for instance.

    I was a bit surprised to see a Dutch title above one of Peter’s blog posts. It referred to the blog of Marco Derksen, which I follow. I think Peter may have found it in the list of blogs I follow (in OPML) that I publish.

    Peter read it through machine translation. Reading the posting made me realise I only follow blogs in the languages I can read, but that that is limiting my awareness of what others across Europe and beyond blog about.

    So I think I need to extend my existing list of demands for an RSS reader with built-in machine translation. As both Tiny Tiny RSS, which I self-host, and Google Translate have APIs, it should be possible to turn that into a script.
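    A first rough sketch of such a script, assuming a self-hosted Tiny Tiny RSS instance with API access enabled and a Google Cloud Translation API key (the URLs and credentials below are placeholders):

```python
import requests

# Placeholder values for a self-hosted TT-RSS instance and a Google Translate API key.
TTRSS_API = "https://example.com/tt-rss/api/"
TTRSS_USER, TTRSS_PASSWORD = "ton", "secret"
TRANSLATE_API = "https://translation.googleapis.com/language/translate/v2"
GOOGLE_API_KEY = "my-api-key"

def ttrss(payload: dict) -> dict:
    """Call the Tiny Tiny RSS JSON API and return the 'content' part of the response."""
    return requests.post(TTRSS_API, json=payload).json()["content"]

# Log in, then fetch the latest headlines across all feeds (feed_id -4 means 'all articles').
session = ttrss({"op": "login", "user": TTRSS_USER, "password": TTRSS_PASSWORD})["session_id"]
headlines = ttrss({"op": "getHeadlines", "sid": session, "feed_id": -4, "limit": 10})

for item in headlines:
    # Ask the Translation API for an English version of each title.
    result = requests.post(
        TRANSLATE_API,
        params={"key": GOOGLE_API_KEY},
        data={"q": item["title"], "target": "en"},
    ).json()
    translated = result["data"]["translations"][0]["translatedText"]
    print(f"{item['title']}  ->  {translated}")
```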

    We spent a lovely day in sunny Breda today at the BredaPhoto Festival, titled To Infinity and Beyond. The weather was perfect and we even had lunch outside.

    Some images.

    Walking through Breda

    Work by Kenta Cobayashi (festival page)

    The artist and someone else’s work. (Video interview with Jeroen Bocken, work on the wall by Maija Tammi.)

    Three data visualisations photographed by Jos Jansen: Criminal relationship network (University of Amsterdam), Lidar images of trees (University of Amsterdam), Probability function of the Higgs Boson (NIKHEF).

    Pictures of Aldermen of medium sized cities, with grey buzz cuts…Jan Dirk van der Burg serialises photos found online into weird patterns and categories.

    Image deemed controversial by Iran’s ministry for culture. From the Qajar series by Shadi Ghadirian

    Antony Cairns, IBM CTY1, city photos on IBM punch cards.

    Open after 8:00, close before 17:00. Note on a door at Breda city archive.

    Empty lunch cafe in Breda city center, as everyone was outside enjoying the sun.

    We had a good day, but I found the photo festival lacking cohesion and a narrative, binding it all into the theme To Infinity and Beyond.

    As I haven’t yet succeeded in getting Mastodon to run on a Raspberry Pi, nor in running a GNU Social instance on my hosting package that actually federates, I’ve opted for an intermediate solution to running my own Mastodon instance.

    Key in all this is satisfying three dimensions: control, flexibility and ease of use. My earlier attempts satisfy the control and flexibility dimensions, but as I have a hard time getting them to work, they do not satisfy the ease of use dimension yet.

    At the same time I did not want to keep using Mastodon on a generic server much longer, as it builds up a history there, and every conversation adds to the cost of leaving.

    The logical end point of the distributed web and federated services is running your own individual instance. Much as in the way I run my own blog, I want my own Mastodon instance.

    Such an individual instance needs to be within my own scope of control. This means having it at a domain I own, and being able to move everything to a different server at will.

    There is a hoster, Masto.host, run by Hugo Gameiro, who provides Mastodon hosting as a monthly subscription. As it allows me to use my own domain name, and provides me with admin privileges on the Mastodon instance, this is a workable solution. When I succeed in getting my own instance of Mastodon running on the Raspberry Pi, I can simply move the entire instance from Masto.host to it.

    Working with Hugo at Masto.host was straightforward. After registering for the service, Hugo got in touch with me to ensure the DNS settings on my own domain were correct, and briefly afterwards everything was up and running.
    Frank Meeuwsen, who started using Masto.host last month, kindly wrote up a ‘moving your mastodon account’ guide on his blog (in Dutch). I followed most of that, to ensure a smooth transition.

    Using Mastodon? Do follow me at https://m.tzyl.nl/@ton.

    Screenshots of my old Mastodon.cloud account, and my new one on my own domain. And the goodbye and hello world messages from both.

    Today I had a bit of time to try running Mastodon on Raspberry Pi again. Last week I got stuck as some of the mentioned dependencies in the Mastodon installation guide could not be installed. As the step where I got stuck deals with a different Linux version, I tried simply skipping to the next step.

    From the linked guide, the steps for the Ubuntu dependencies, the node.js repository and the yarn repository did not work.
    The step after that, for various other dependencies, did work again (and actually includes yarn).
    Then follow a few steps that need to be executed as the dedicated mastodon user. Installing Ruby and node.js works fine, as do almost all steps to install the Ruby and node.js dependencies. The final two dependency steps, bundle install and yarn install, throw errors, although at least parts of the bundle install command do get executed. These are the last two steps before actually configuring the installation, so it feels like being nearly there.

    I’d have to dive deeply into the log files to see what wasn’t installed and what is missing. Not sure if I will easily find time to do so, or if I would actually understand what the log files tell me. It is also unclear whether there is a relationship with the three steps I skipped earlier in the process because they didn’t work.

    This Tuesday 2 October sees the annual event of the Dutch Coalition for Humanitarian Innovation. The coalition consists of government entities, knowledge institutions, academia, businesses, and humanitarian organisations in the Netherlands. Together they aim to develop and scale new solutions to increase impact and reduce costs of humanitarian action.

    I was asked to join this year’s jury for DCHI’s innovation award. There is a jury award and a public award. For the jury award 8 projects were shortlisted, from which the jury has now selected 3 finalists that were announced last Friday. The public award winner will be selected from the same short list.
    At DCHI’s annual event this Tuesday the public award winner will be announced, followed by closing remarks by the Minister for Development Cooperation, Sigrid Kaag, who is very experienced when it comes to international development. The jury award will be presented to the winner on October 11th at the Partos Innovation Festival.

    The three finalists my colleagues and I in the jury selected are all very interesting, so I briefly want to list them here.

    Optimus by the UN’s World Food Program and Tilburg University
    Data analysis and mathematical modelling optimise supply and distribution, also by taking into account locally available food and conditions. Optimisation means delivering the same nutritional value with less effort. It has been successfully used in Syria, Iraq, Yemen and Ethiopia. In Iraq it helped save a million USD per month, allowing the program to provide an additional 100,000 people in need with food packages. (link in Dutch)

    Quotidian early warning solutions by Oxfam India
    Flood prediction models in India are accurate, but flooding still causes many fatalities, often because it is not possible to reach and warn everyone in time. Oxfam India came up with ways to integrate early warning systems with existing local infrastructure, and so created a low cost option for real time distribution of flood warnings.

    Words of Relief by Translators without Borders
    Being able to provide key information to people in need depends on having that information in the right language. Information only saves lives if those who need it understand it. Translators without Borders creates glossaries which can be used for humanitarian response. Their Gamayun initiative wants to bring 20 underserved languages online by creating such glossaries and providing those as open data to all who can use them. They see it as a key tool for equality as well. In a slightly different setting I saw this work in practice: during the Syrian refugee wave in Germany, at a hackathon I attended, such glossaries were used to build apps to help refugees navigate German bureaucracy and find the help they needed.

    These three projects are very different in terms of the technology used, the issues they address, and the way they involve the communities concerned, and all three are highly fascinating.

    Saturday I visited the Maker Faire in Eindhoven. Jeroen of the Frysklab team invited me to come along, when their mobile FabLab was parked in our courtyard for Smart Stuff That Matters. They had arranged a touring car to take a group of librarians and educators to the Maker Faire, and invited me to join the bus ride. So I took a train to Apeldoorn and then a taxi out to a truck stop where the bus was scheduled to stop for a coffee break, and then joined them for the rest of the drive down south.

    The Maker Faire was filled with all kinds of makers showing their projects, and there was a track with 30 minute slots for various talks.
    It was fun to walk around and meet up with lots of people I know. However, lots of the projects shown seemed to lack a purpose beyond the initial fascination with technological possibilities. There were many education oriented projects as well, and many kids happily trying their hand at them. From a networked agency point of view there were not that many projects that aimed for collective capabilities.

    Some images, and a line or two of comment.

    En-able, a network of volunteers 3d-printing prosthetics, was present. I talked to the volunteer in the image, with his steampunk prosthetic device. They printed 18 hand and arm prosthetics for kids in the Netherlands last year, and 10 so far this year. Children need new prosthetics every 3 to 6 months, and 3d printing them saves a lot of cost and time. You even get to customise them with colours, and your favourite cartoon figure or super hero.

    3d printing with concrete, a project in which our local FabLab Amersfoort is involved. Didn’t get to see the printer working alas.

    Novelty 3d printing of portraits.

    Building your own electronic music devices.

    Bringing LED-farming to your home, open source. Astroplant is an educational citizen science project, supported by ESA.

    Robot football team versus kids team. Quite a few educational projects around robotics were shown. Mostly from a university of applied sciences, but with efforts now branching out to preceding education levels. Chatted to Ronald Scheer who’s deeply involved in this (and who participated in our Smart Stuff That Matters unconference).

    A good way to showcase a wide range of Microbit projects by school children. I can see this mounted on a classroom wall.

    An open source, 3d-printed, Arduino-controlled android. But what is it for? Open source robotics in general is of interest of course. There were also remote controlled robots, which were quite a lot of fun, as the video shows.


    At the fringe of the event there was some steam punk going on.

    Building with cardboard boxes for children. Makedo is an Australian brand, and next to their kits, you can find additional tools and elements as 3d printable designs online.

    The Frysklab team presented the new Dutch language Data Detox kit, which they translated from the English version the Berlin based Tactical Tech Collective created.

    For the UNDP in Serbia, I made an overview of existing studies into the impact of open data. I did something similar for the Flemish government a few years ago, so I had a good list of studies to start from. I updated that first list with more recent publications, resulting in a list of 45 studies from the past 10 years. The UNDP also asked me to suggest a measurement framework. Here’s a summary overview of some of the things I formulated in the report. I’ll start with 10 things that make measuring impact hard, and in a later post zoom in on what makes measuring impact doable.

    While it is tempting to ask for a ‘killer app’ or ‘the next tech giant’ as proof of impact of open data, establishing the socio-economic impact of open data cannot depend on that. Both because answering such a question is only possible with long-term hindsight, which doesn’t help make decisions in the here and now, and because it would ignore the diversity of types and sizes of impact known to be possible with open data. Judging by the available studies and cases, there are several issues that make any easy answer to the question of open data impact impossible.

    1 Dealing with variety and aggregating small increments

    There are different varieties of impact, in all shapes and sizes. If an individual stakeholder, such as a citizen, does a very small thing based on open data, like making a different decision on some day, how do we express that value? Can it be expressed at all? E.g. in the Netherlands the open data based rain radar is used daily by most cyclists, to see whether they can get to the railway station dry, whether they’d better wait ten minutes, or whether they should rather take the car. The impact of a decision to cycle can mean lower individual costs (no car usage), personal health benefits, economic benefits (lower traffic congestion), environmental benefits (lower emissions) etc., but is nearly impossible to quantify meaningfully as a single act. Only where such decisions are stimulated, e.g. by providing open data that allows much smarter, multi-modal route planning, may aggregate effects become visible, such as a reduction of traffic congestion hours in a year, general health benefits for the population, or a reduction in traffic fatalities, which can be much better expressed as a monetary value to the economy.

    2 Spotting new entrants, and tracking SME’s

    The existing research shows that previously inactive stakeholders, and small to medium sized enterprises are better positioned to create benefits with open data. Smaller absolute improvements are of bigger value to them relatively, compared to e.g. larger corporations. Such large corporations usually overcome data access barriers with their size and capital. To them open data may even mean creating new competitive vulnerabilities at the lower end of their markets. (As a result larger corporations are more likely to say they have no problem with paying for data, as that protects market incumbents with the price of data as a barrier to entry.) This also means that establishing impacts requires simultaneously mapping new emerging stakeholders and aggregating that range of smaller impacts, which both can be hard to do (see point 1).

    3 Network effects are costly to track

    The research shows the presence of network effects, meaning that the impact of open data is not contained in, or even mostly specific to, the first order of re-use of that data. Causal effects as well as second and higher order forms of re-use regularly occur and quickly become, certainly in aggregate, much larger than the value of the original form of re-use. For instance the European Space Agency (ESA) commissioned my company for a study into the impact of open satellite data for ice breakers in the Gulf of Bothnia. The direct impact for ice breakers is saving costs on helicopters and fuel, as the satellite data makes determining where the ice is thinnest much easier. But the aggregate value of the consequences of that is much higher: it creates a much higher predictability of ships and the (food) products they carry arriving in Finnish harbours, which means lower stocks are needed to ensure supply of these goods. This reverberates across the entire supply chain, saving costs in logistics and allowing lower retail prices across Finland. When mapping such higher order and network effects, every step further down the chain of causality shows that while the bandwidth of value created increases, the certainty that open data is the primary contributing factor decreases. Such studies are also time consuming and costly. It is often unlikely and unrealistic to expect data holders to go to such lengths to establish impact. The mentioned ESA example is part of a series of over 20 such case studies ESA commissioned over the course of 5 years, at considerable cost.

    4 Comparison needs context

    Without the context of a specific domain or a specific issue, it is hard to assess benefits and compare them against their associated costs, which is often the underlying question concerning the impact of open data: do the benefits weigh up against the costs of open data efforts? Even though in general open data efforts shouldn’t be costly, how does some type of open data benefit compare to the costs and benefits of other actions? Such comparisons can only be made in a specific context (e.g. comparing the cost and benefit of open data for route planning with other measures to fight traffic congestion, such as increasing the number of lanes on a motorway, or increasing the availability of public transport).

    5 Open data maturity determines impact and type of measurement possible

    Because open data provisioning is a prerequisite for it having any impact, the availability of data and the maturity of open data efforts determine not only how much impact can be expected, but also what can be measured (mature impact might be measured as an effect on e.g. traffic congestion hours in a year, while early impact might be measured by how the number of re-users of a data set is still steadily growing year over year).

    6 Demand side maturity determines impact and type of measurement possible

    Whether open data creates much impact does not only depend on the availability of open data and the maturity of the supply side, even if that is, as mentioned, a prerequisite. Impact, judging by the existing research, is certain to emerge, but the size and timing of such impact depend on a wide range of demand-side factors as well, including things such as the skills and capabilities of stakeholders, time to market, location and timing. An idea for open data re-use that finds no traction in France, because the initiators can’t bring it to fruition or because the potential French demand is too low, may well find its way to success in Bulgaria or Spain, because local circumstances and markets differ. In the Serbian national open data readiness assessment, which I performed for the World Bank and the UNDP in 2015, this is reflected in the various dimensions assessed, which cover both supply and demand, as well as general aspects of Serbian infrastructure and society.

    7 We don’t understand how infrastructure creates impact

    The notion of broad open data provision as public infrastructure (as the UK, the Netherlands, Denmark and Belgium are already doing, and Switzerland is starting to do) further underlines the difficulty of establishing the general impact of open data on e.g. growth. That infrastructure (such as roads, telecoms, electricity) is important to growth is broadly acknowledged, with the corresponding acceptance of that within policy making. This acceptance that quantity and quality of infrastructure increase human and physical capital does not, however, mean that it is clear how much a given type of infrastructure contributes to economic production and growth at a given time. Public capital is often used as a proxy to ascertain the impact of infrastructure on growth. Consensus is that there is a positive elasticity, meaning that an increase in public capital results in an increase in GDP, averaging around 0.08 but varying across studies and types of infrastructure. Assuming such positive elasticity extends to open data provision as infrastructure (and we have very good reasons to do so), it will result in GDP growth, but without a clear view overall as to how much.
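    As a rough illustration of what such an elasticity figure means, taking the 0.08 average at face value (Y denoting GDP and K public capital):

```latex
% Output elasticity of public capital: a 1% increase in public capital K
% corresponds on average to a ~0.08% increase in GDP Y.
\varepsilon = \frac{\Delta Y / Y}{\Delta K / K} \approx 0.08
\qquad\Rightarrow\qquad
\frac{\Delta K}{K} = 10\% \;\Rightarrow\; \frac{\Delta Y}{Y} \approx 0.8\%
```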

    8 E pur si muove

    Most measurements concerning open data impact need to be understood as proxies. They are not measuring how open data is creating impact directly, but from measuring a certain movement it can be surmised that something is doing the moving. Where opening data can be assumed to be doing the moving, and where opening data was a deliberate effort to create such movement, impact can then be assessed. We may not be able to easily see it, but still it moves.

    9 Motives often shape measurements

    Apart from the difficulty of measuring impact and the effort involved in doing so, there is also the question of why such impact assessments are needed. Is an impact assessment needed to create support for ongoing open data efforts, or to make existing efforts sustainable? Is an impact measurement needed for comparison with specific costs for a specific data holder? Is it to be used for evaluation of open data policies in general? In other words, in whose perception should an impact measurement be meaningful?
    The purpose of impact assessments for open data further determines and/or limits the way such assessments can be shaped.

    10 Measurements get gamed, become targets

    Finally, with any type of measurement, there needs to be awareness that those with a stake in a measurement are likely to try and game the system. Especially so where measurements determine funding for further projects, or the continuation of an effort. This must lead to caution when determining indicators: measurements easily become targets in themselves. For instance in the early days of national open data portals being launched worldwide, a simple metric often reported was the number of datasets a portal contained. This is an example of a ‘point’ measurement that can easily be gamed, for instance by subdividing a dataset into several subsets. The first version of the national portal of a major EU member did precisely that, and boasted several hundred thousand datasets at launch, which were mostly small subsets of a bigger whole. It briefly made for good headlines, but did not make for impact.

    In a second part I will take a closer look at what these 10 points mean for designing a measurement framework to track open data impact.

    This week I am in Novi Sad for the plenary of the Assembly of European Regions. Novi Sad is the capital of Vojvodina, a member region, and the host for the plenary meetings of the AER.

    I took part in a panel to discuss the opportunities of open data at regional level. The other panelists were my Serbian UNDP colleague Slobodan Markovic, Brigitte Lutz of the Vienna open data portal (whom I hadn’t met in years), Margreet Nieuwenhuis of the European open data portal, and Geert-Jan Waasdorp who uses open data about the European labour market commercially.

    Below are the notes I used for my panel contributions:

    Open data is a key building block for any policy plan. The Serbian government certainly treats it as such, judging by the PM’s message we just heard, and the same should be true for regional governments.

    Open data from an organisational stand point is only sustainable if it is directly connected to primary policy processes, and not just an additional step or effort after the ‘real’ work has been done. It’s only sustainable if it means something for your own work as regional administration.

    We know that open data allows people and organisations to take new actions. These by themselves or in aggregate have impact on policy domains. E.g. parents choosing schools for their children or finding housing, multimodal route planning, etc.

    So if you know this effect exists, you can use it on purpose. Publish data to enable external stakeholders. You need to ask yourself: around which policy issues do you want to enable more activity? Which stakeholders do you want to enable or nudge? Which data will be helpful for that, if put into the hands of those stakeholders?

    This makes open data a policy instrument. Next to funding and regulation, publishing open data for others to use is a way to influence stakeholder behaviour. By enabling them and partnering with them.
    It is actually your cheapest policy instrument, as the cost of data collection is always a sunk cost as part of your public task.

    Positioning open data this way, as a policy instrument, requires building connections between your policy issues, external stakeholders and their issues, and the data relevant in that context.

    This requires going outside to listen to stakeholders and understand the issues they want to solve, the things they care about. You need to avoid making any assumptions.

    We worked with various regional governments in the Netherlands, including the two Dutch AER members Flevoland and Gelderland. With them we learned that having those outside conversations is maybe the hardest part. To create conversations between a policy domain expert, an internal data expert, and the external stakeholders. There’s often a certain apprehension to reach out like that and have an open ended conversation on equal footing. From those conversations you learn different things. That your counterparts are also professionals interested in achieving results and using the available data responsibly. That the ways in which others have shaped their routines and processes are usually invisible to you, and may be surprising to you.
    In Flevoland there’s a program for large scale maintenance on bridges and water locks in the coming 4 years. One of the provincial aims was to reduce hindrance. But an open question was what constitutes hindrance to different stakeholders. Only by talking to e.g. farmers did it become clear that the maintenance plans themselves were less relevant than changes in those plans: a farmer rents equipment a week before some work needs to be done on the fields. If within that week a bridge unexpectedly becomes blocked, it means he can’t reach his fields with the rented equipment and damage is done. Also relevant is exploring which channels are useful to stakeholders for data dissemination. Finding channels that are already used by stakeholders, or channels that connect to those, is key. You can’t assume people will use whatever special channel you may think of building.

    Whether it is about bridge maintenance, archeology, nitrate deposition, better usage of Interreg subsidies, or flash flooding after rain fall, talking about open data in terms of innovation and job creation is hollow and meaningless if it is not connected to one of those real issues. Only real issues motivate action.

    Complex issues rarely have simple solutions. That is true for mobility, energy transition, demographic pressure on public services, emission reduction, and everything else regional governments are dealing with. None of this can be fixed by an administration on its own. So you benefit from enabling others to do their part. This includes local governments as a stakeholder group. Your own public sector data is one of the easiest available enablers in your arsenal.

    In the past few days I tried a second experiment to run my own Mastodon instance. Both to actually get a result, but also to learn how easy or hard it is to do. The first round I tried running something on a hosted domain. This second round I tried to get something running on a Raspberry Pi.

    The Raspberry Pi is a 35 Euro computer, making it very useful for stand-alone solutions or as a cheap hardware environment to learn things like programming.

    Installing Debian Linux on the Raspberry Pi

    I found this guide by Wim Vanderbauwhede, which describes installing both Mastodon and Pleroma on a Raspberry Pi 3. I ordered a Raspberry Pi 3 and received it earlier this week. Wim’s guide points to another guide on how to install Ruby on Rails and PostgreSQL on a Raspberry Pi. That link however was dead, and the website offline, but archive.org had stored several snapshots, which I saved to Evernote.

    Installing Ruby on Rails went fine using that guide, as did installing PostgreSQL. Then I returned to Wim’s guide, which now pointed to the Mastodon installation guide. This is where the process currently fails for me: I can’t add the mentioned Ubuntu repositories, nor the node.js one.

    So for now I’m stalled. I’ll try to get back to it later next week.

    Last week the 2nd annual Techfestival took place in Copenhagen. As part of this there was a 48 hour think tank of 150 people (the ‘Copenhagen 150‘), looking to build the Copenhagen Catalogue as a follow-up to last year’s Copenhagen Letter, of which I am a signee. Thomas, initiator of the Techfestival, had invited me to join the CPH150, but I had to decline because of previous commitments I could not reschedule. I’d have loved to contribute however, as the event’s, and even more the think tank’s, concerns are right at the heart of my own. My concept of networked agency and the way I think about how we should shape technology to empower people in different ways run parallel to how Thomas described the purpose of the CPH150 48 hour think tank at its start last week.

    For me the unit of agency is the individual and a group of meaningful relationships in a specific context, a networked agency. The power to act towards meaningful results and change lies in that group, not in the individual. The technology and methods that such a group deploys need to be chosen deliberately. And those tools need to be fully within scope of the group itself. To control, alter, extend, tinker, maintain, share etc. Such tools therefore need very low adoption thresholds. Tools also need to be useful on their own, but great when federated with other instances of those tools. So that knowledge and information, learning and experimentation can flow freely, yet still can take place locally in the (temporary) absence of such wider (global) connections. Our current internet silos such as Facebook and Twitter clearly do not match this description. But most other technologies aren’t shaped along those lines either.

    As Heinz remarked earlier, musing about our unconference, effective practices cannot be separated from the relationships in which you live. I added that the tools (both technology and methods) likewise cannot be meaningfully separated from the practices. Just as in those relationships you cannot fully separate the hyperlocal, the local, the regional and the global, due to the many interdependencies and the complexity involved: what you do has wider impact, and what others do, as well as global issues, expresses itself in your local context too.

    So the CPH150 think tank’s effort to create a list of principles that takes a human and her relationships as the starting point for thinking about how to design tools and how to create structures, institutions and networks fits right in with that.

    Our friend Lee Bryant has a good description of how he perceived the CPH150 think tank, and what he shared there. Read the whole thing.

    Meanwhile the results are up: 150 principles called the Copenhagen Catalogue, beautifully presented. You can become signatory to those principles you deem most valuable to stick to.