Shakingtree Award

Today I attended the presentation of this year’s Shaking Tree Award. This annual award started in 2016, and is named after my friend Niels, who received the first award during his ‘last lecture‘. Niels died a year ago. The Ministry of Healthcare has pledged to continue the award, in the spirit of Niels’ efforts: shake up the system, fight unneeded and kafkaesque bureaucracy, have hands-on experience with the system at ‘the receiving end’ so you know what you’re talking about, have a sense of humor to go with it, and be able to ‘dance with the system’.

The meeting was attended by a diverse range of people from the healthcare domain, Niels’ family, and of course a smattering of Niels’ friends.

Before presenting this year’s nominees and the award, time was given to remembering Niels and the reason for this award. This was followed by two conversations, each between a previous winner or nominee and a representative of an institution they struggled with. First were Annette Stekelenburg and Ria Dijkstra, operations manager at a health care insurer. Annette has a son who needs tube feeding to survive. This situation will not change. Yet every year they need to apply for approval to continue receiving the materials needed. Annette and Ria had a frank conversation about what happened when Annette publicly announced she was fed up with this yearly bureaucracy that should be unneeded. Dijkstra explained how the insurer thought it had already changed the rules, making the renewal once every 5 years, but the suppliers never knew, and forms are still being sent out in the insurer’s name that don’t actually exist anymore.

The second conversation was between Kathi Künnen, a previous nominee, and Betsie Gerrits, department head at UWV, the government agency in charge of employee insurance. Kathi is 29 and has incurable cancer. Because of that she has been classified as 100% incapable of working, yet there are lots of phases where she actually does want to work. 25% of young professionals with cancer have an incurable form, and most want to remain active as long as possible. Yet the system tells them their ‘earning capacity is 0’, and with a stamp like that there’s no way to find paid activity. Here too, the conversation first of all made the two parties at the table see each other as individual human beings. And from that, energy and potential solutions follow. Kathi said she needs reassurance that there can be administrative certainty (other than being tossed out as worthless), as her own life is fluid enough as it is and changing all the time.

I thought both conversations were impressive, and the type of thing we need much more of. Once you get past the frustration, anger and disbelief that often play a role too, you can see the actual human being at the other side of the table. Dancing with the system is, in part, being able to have these conversations.

The award was presented by the previous winner, Tim Kroesbergen. The Ministry’s secretary general Erik Gerritsen hosted the event, with Maarten den Braber as MC. The jury, consisting of Sanne (Niels’ wife) and the previous two winners, Annette Stekelenburg and Tim Kroesbergen, made their choice known from amongst the three nominees: Eva Westerhoff, Elianne Speksnijder and Geert-Jan den Hengst. All three nominees were presented through a video, as well as a conversation about their experiences.

Eva Westerhoff is a disability rights advocate & accessibility consultant who happens to be deaf. Alongside her job at a bank, she does lots of volunteer work on diversity, inclusion & accessibility in information, communication & tech. She’s been knocking on doors at the Healthcare Ministry for over 20 years. Today she said that because of the political cycle, it seems you need to do everything again every four years or so, to keep awareness high enough.

Elianne Speksnijder is a professional fashion model, photographer and storyteller. Lyme disease and epilepsy landed her in a wheelchair when she was 15, an age which, as she said today, brings enough difficulties as it is. It took her a decade to accept that her wheels were a permanent part of her life. She’s 28 now, a woman with ambitions ‘on wheels’. When she was a teenager she sorely missed a role model (or rolling model, as the Dutch word ‘rolmodel’ can mean both). Now she is setting out to be that role model herself. She hopes for much more inclusivity in media, and challenges companies about it.

Geert-Jan den Hengst is a 48 year old father of two adult children. He has MS and has been living for the last decade or so in an environment that provides 24/7 care. His laptop is his core conduit to the rest of the world. Writing is a need for him. He blogs on his own blog, and writes for the local football team’s website, various media in his hometown, and more. At the heart of his writing are everyday observations. He says he is “not a political animal, so I need to stay close to my everyday life in what I do”. Often those observations are examples of how life can be made impractical for someone in his position. He mentioned an early example that got him started: for the local football stadium all types of tickets could be bought online, except tickets for wheelchair access. People in wheelchairs needed to come buy their tickets in person. The group least likely to be able to do that easily.

From all three nominees, I think the main takeaway is taking the time to share and listen to the actual stories of people, especially when things get complicated or complex. This is not news, there’s a reason I’ve been active in participatory narrative inquiry and sense making for a long time, but it bears repeating. Stories are our main way of ‘measurement’ in complex situations: to catch what’s going on for real, to spot the actual (not just the intended) consequences of our actions, structures and regulations, to see the edge cases, and to find the knobs to turn towards getting better results (and to know what better actually is).

Jury chairman Tim Kroesbergen, after reading the jury motivations for all three nominees, announced Eva Westerhoff as the new Shaking Tree Award winner.

The Shaking Tree Award statuette (photo by Henk-Jan Winkeldermaat, CC by-nc-sa)

Inside the Ministry a poem by Merel Morre, which she wrote in honor of Niels ‘Shakingtree’, is painted on the wall.
A rough translation reads (anything unpoetic is all my doing):

outside

shake goals awake
jump past rules
dance joints wider
dream chances free

out of bounds
outside limitation
it grows
as it grows

tree high
dream high
where it lighter
but never stops

In the ministry’s central hall all the pillars show a face of someone with the words “I care”. That and the poem are promising signs of commitment to the actual stories of people. The Ministry still has 24 statuettes in stock for the Shaking Tree Award, so there’s a likelihood they will keep the annual award up as well. But as this year’s winner Eva Westerhoff warned, every 4 years the politics changes, so it’s better to make sure.

The faces in the Ministry with the text ‘I care’

Aaron Swartz would have turned 32 on November 8th. He died five years and ten months ago, and since then the annual Aaron Swartz weekend, like this weekend, takes place with all kinds of hackathons and events in his memory. At the time of his suicide Swartz was being prosecuted for downloading material in bulk from JSTOR, a scientific papers archive (even though he had legitimate access to it).

In 2014 the Smart New World exhibition took place in Kunsthalle Düsseldorf, which Elmine and I visited. Part of it was the installation “18.591 Articles Sold By JSTOR for $19 = $353.229”, with those 18,591 articles printed out, showing what precisely is behind the paywall, and what Swartz was downloading. Articles, like those shown, from the 19th century, long since in the public domain, sold for $19 each. After Swartz’ death JSTOR started making a small percentage of their public domain content freely accessible, limited to a handful of papers per month.

The Düsseldorf exhibit was impressive, as it showed the sheer volume of material, but also the triviality of most of it: a long tail of documents in extremely low demand, treated the same as recent papers in high demand.

Photos from the Smart New World exhibition, Kunsthalle Düsseldorf

Scientific journal publishers are increasingly a burden on the scientific world, rent-seeking gatekeepers. Their original value-adding role, that of multiplication and distribution to increase access, has been completely eroded, if not actually fully reversed.

This is a start at describing and exploring more fully a distributed version of digitisation, digitalisation and specifically digital transformation, and at stating why I think bringing distributed / networked thinking into them matters.

Digitising stuff, digitalising routines, the regular way

Over the past decades many more of the things around us became digitised, and in recent years much of what we do, our daily routines and work processes, has become digitalised. Many of those digitalised processes are merely digitised replicas of their paper predecessors. Asking for a government permit for instance, or online banking. There’s nothing there that wasn’t there in the paper version. Sometimes even small steps in those processes still force you to use paper. At the start of this year I had to apply for a declaration that my company had never been involved in procurement fraud. All the forms I needed for it (30 pages in total!) were digitised and I filled them out online, but when it came to sending them in, I had to print the PDF resulting from those 30 pages, and send it through snail mail. I have no doubt that the receiving government office’s first step was to scan it all before processing it. Online banking similarly is just a digitised paper process. Why don’t all online bank accounts provide nifty visualisation, filtering and financial planning tools (like alerts for dates due, saving towards a goal, maintaining a buffer etc.), now that everything is digital? The reason we laugh at Little Britain’s ‘computer says no’ sketches is because we recognise all too well the frustration of organisations blindly trusting their digitalised processes, and never acknowledging or addressing their crappy implementation, or the extra work and route-arounds their indifference inflicts.
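
To make that banking example concrete, here is a minimal sketch of the kind of buffer alert I mean, with made-up account data (no real banking API involved), just to show how little is needed once the data is digital:

```python
from datetime import date

# Made-up account data, standing in for what any online banking
# backend already holds once the process is digitised.
balance = 1200.00
buffer_target = 1000.00  # the buffer the account holder wants to keep
upcoming_payments = [
    {"due": date(2018, 11, 25), "amount": 350.00, "label": "rent"},
    {"due": date(2018, 11, 28), "amount": 80.00, "label": "utilities"},
]

# Alert when known upcoming payments would eat into the desired buffer.
projected = balance - sum(p["amount"] for p in upcoming_payments)
if projected < buffer_target:
    print(f"Warning: projected balance {projected:.2f} is below your "
          f"buffer of {buffer_target:.2f}")
    for p in upcoming_payments:
        print(f"  {p['due']} {p['label']}: -{p['amount']:.2f}")
```

Trivial to build on top of data the bank already has, which is rather the point.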

Digital transformation, digital societies

Digital transformation is the accumulated societal impact of all those digital artefacts and digitalised processes, even if they’re incomplete or half-baked. Digital transformation is why I have access to all those books in the long tail that never reached the shelves of any of the book shops I visited in decades past, yet now come to my e-reader instantly, resulting in me reading more and across a wider spectrum than ever before. Digital transformation is also the impact on elections of almost individually targeted, data-driven Facebook advertising, based on minutely profiling undecided voters.

Digital transformation is often invoked these days, in my work often also in the context of development and the sustainable development goals.
Yet it often feels to me that for most intents and purposes this digital transformation is done to us and about us, but is not of us. It’s a bit like the smart city visions corporations like Siemens and Samsung push(ed), which were basically devoid of life and humanity: quality of life reduced to and equated with security alone, in sterilised cities, ignoring that people are the key actors, as Adam Greenfield critiqued in 2013.

Human digital networks: distributed digital transformation

The Internet is a marvellous thing. At least it is when we use it actively, to assist us in our routines and in our efforts to change, learn and reach out. As social animals, our human interaction has always been networked: we fluently switch between contexts, degrees of trust and disclosure, and route around undesired connections. In that sense human interaction and the internet’s original design principle closely match up; they’re both distributed. In contrast, most digitalisation and digital transformation happens from the perspective of organisations and silos. Centralised things, where some decide for the many.

To escape that ‘done to us, about us, not of us’, I think we need to approach digitisation, digitalisation and digital transformation from a distributed perspective, matching up our own inherently networked humanity with our (for the past 30 years) networked global digital infrastructure. We need to think in terms of distributed digital transformation. Distributed digital transformation (making our own digital societal impact), building on distributed digitisation (making our things digital), and on distributed digitalisation (making our routines digital).

Signs of distributed digitisation and digitalisation

Distributed digitisation can already be seen in things like the quantified self movement, where individuals create data around themselves to use for themselves. Or in the sensors I have in the garden. Those garden measurements are part of something you can call distributed digitalisation, where a network of similar sensors creates a map of our city that informs climate adaptation efforts by local government. My evolving information strategies, with a few automated parts, and the interplay of different protocols and self-proposed standards that make up the Indieweb also are examples of distributed digitalisation. My Networked Agency framework, where small groups of relationships fix something of value with low threshold digital technology, and network/digital based methods and processes, is distributed digitisation and distributed digitalisation combined into a design aid for group action.
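
As a hedged sketch of what that distributed pattern can look like in practice: a garden sensor publishing its readings over MQTT, a common open IoT protocol, to a broker that you or your community runs. The broker address and topic are hypothetical placeholders:

```python
import json
import paho.mqtt.client as mqtt  # pip install paho-mqtt (1.x API)

# A hypothetical reading; in practice this would come from sensor hardware.
reading = {"sensor": "garden-01", "temperature_c": 11.2, "humidity_pct": 87}

# Publish to an open broker run by you or your community,
# rather than into a vendor's silo. Address and topic are placeholders.
client = mqtt.Client()
client.connect("broker.example.org", 1883)
client.publish("city/sensors/garden-01", json.dumps(reading))
client.disconnect()
```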

Distributed digital transformation needs a macroscope for the new civil society

Distributed digital transformation, distributed societal impact, seems a bit more elusive though.
Civil society is increasingly distributed too, that much is clear to me. New coops, p2p groups, and networks of individual actors emerge all over the world. However, they are largely invisible to, for instance, the classic interaction between government and incumbent civil society, and usually cut off from the scaffolding and support structures that ‘classic’ activities can build on to get started, because they’re not organised ‘the right way’, not clearly representative of a larger whole. Bootstrapping is their only path. As a result these initiatives are only perceived as single elements, and the scale they actually (can) achieve as a network remains invisible. Often even in the eyes of those single elements themselves.

Our societies, including the nodes that make up the network of this new type of civil society, lack the perception to recognise the ‘invisible hand of networks’. A few years ago I discussed with a few people, directors of entities in that new civil society fabric, how it is that we can’t seem to make our newly arranged collective voices heard, our collective efforts and results seen, and our collective power of agency recognised and sought out for collaboration. We’re too used, it seems, to aggregating all those things, collapsing them into the single voice of a mouthpiece that has the weight of numbers behind it, in order to be heard. We need to learn to see the cumulative impact of a multitude of efforts, while simultaneously keeping all those efforts visible on their own. There are, I think, so many initiatives that are great examples of how distributed digitalisation leads to transformation, but they are largely invisible outside their own context, and not widely networked and connected enough to reach their own full potential. They are valuable on their own, but would be even more valuable to themselves and others when federated; the federation part is mostly missing.
We need to find a better way to see the big picture, while also seeing all pixels it consists of. A macroscope, a distributed digital transformation macroscope.

The Twitter-like platform Gab has been forced offline, as its payment providers, hosting provider and domain provider all told it its business was no longer welcome. The platform is home to people with extremist views claiming their freedom of speech is under threat. At issue is of course where that speech becomes calling for violence, such as by the Gab user who, driven by anti-semitic hate, horribly murdered 11 people in Pittsburgh last weekend.

Will we see an uptick in the use of federated sites such as Mastodon when platforms like Gab that are much more public disappear?

This I think isn’t about extremists being ‘driven underground’, but about denying calls for violence, such as those that happened on Gab, a place in public discourse. An uptick in the use of federated sites would be a good development, as federation allows much smaller groups to get together around something, whatever it is. In reverse that means no one else needs to be confronted with it either, if they don’t want to. Within the federation of Mastodon sites, I regularly come across instances listing other instances they do not connect to, and for which reasons. It puts the power of supporting welcomed behaviour and pushing back on unwelcome behaviour in the hands of more people than just Twitter or Facebook management: every person running a Mastodon instance (and you can have your own instance).


Example of an instance denying another instance to be federated with it
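
Mastodon lets each instance’s administrator maintain such domain blocks. As a toy illustration of the underlying idea, not Mastodon’s actual code, a federation check against a locally controlled blocklist could look like this:

```python
# Toy sketch of instance-level federation policy: each instance keeps
# its own blocklist, under the control of its own admin. Domains are
# hypothetical examples.
BLOCKED_DOMAINS = {"hate.example", "spam.example"}

def accept_from(sender: str) -> bool:
    """Accept an incoming federated message unless its home instance is blocked."""
    domain = sender.rsplit("@", 1)[-1]
    return domain not in BLOCKED_DOMAINS

print(accept_from("alice@mastodon.example"))  # True
print(accept_from("troll@hate.example"))      # False
```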

That sort of moderation can still be hard, even if the moderator-to-member ratio is already much better than on the main platforms. But that just points the way to the long tail of much smaller instances, more individual ones even. It means it becomes easier for individuals and small groups to shun small cells, echo chambers and rage bubbles, and not accidentally end up in them or be forcefully drawn into them while having other conversations, like what can happen on Twitter. See my earlier posting on the disintegration of discourse. You then can do what networks do well: route around the stuff you perceive as damage or non-functional. It creates a stronger power symmetry and communication symmetry. It also denies extremists a wider platform. Yes, they can still call for violence, which remains just as despicable. Yes, they can still blame Others for anything and be hateful of them. But they will be doing it in their own back yard (or Mastodon instance), not in the park where you like to walk your dog or do your morning run (or Twitter). They will not have a podium bigger than warranted, and will not have visibility beyond their own in-crowd. And they will have to deal with more pushback and reality whenever they step outside such a bubble, without the pleasant illusion that ‘everyone on Twitter agrees with me’.

We’re in a time where whatever is presented to us as discourse on Facebook, Twitter or any of the other platforms out there may or may not come from humans, bots, or someone or a group with a specific agenda, irrespective of what you say or respond. We’ve seen it at the political level, with outside influences on elections; we see it in things like Gamergate, and in critiques of the last Star Wars movie. It creates damage on a societal level, and it damages people individually. To quote Angela Watercutter, the author of the mentioned Star Wars article,

…it gets harder and harder to have an honest discussion […] when some of the speakers are just there to throw kerosene on a flame war. And when that happens, when it’s impossible to know which sentiments are real and what motivates the people sharing them, discourse crumbles. Every discussion […] could turn into a […] fight — if we let it.

Discourse disintegrates, I think, specifically when there’s no meaningful social context in which it takes place, nor social connections between speakers in that discourse. The effect stems not just from the fact that you can’t or don’t really know whom you’re conversing with, but, I think more importantly, from anyone on a general platform being able to bring themselves into the conversation, or worse, force themselves into it. Which is why you should never wade into newspaper comments, even though we all read them at times, because watching discourse crumble from the sidelines has a certain addictive quality. That this can happen is because participants themselves don’t control the setting of any conversation they are part of, and none of those conversations are limited to a specific (social) context.

Unlike in your living room, over drinks in a pub, or at a party with friends of friends of friends. There you know someone. Or if you don’t, you know them in that setting, you know their behaviour at that event thus far. All have skin in the game, as misbehaviour has immediate social consequences. Social connectedness is a necessary context for discourse, stemming either from personal connections or from the setting of the place or event it takes place in. Online discourse often lacks both; discourse crumbles, entropy ensues. Without consequence for those causing the crumbling. Which makes it fascinating when missing social context is retroactively restored, outing the misbehaving parties, such as in the book I once bought by Tinkebell, in which she matches death threats she received against the senders’ very normal Facebook profiles.

Two elements are therefore needed, I find: one, determining who can be part of which discourse, and two, control over the context of that discourse. They are point 2 and point 6 in my manifesto on networked agency.

  • Our platforms need to mimic human networks much more closely: our networks are never ‘all in one mix’ but a tapestry of overlapping and distinct groups and contexts. Yet centralised platforms put us all in the same space.
  • Our platforms also need to be ‘smaller’ than the group using them, meaning a group can deploy, alter, maintain and administrate a platform for their specific context. Of course you can still be a troll in such a setting, but you can no longer be one without a cost, as your peers can all act themselves and collectively. This is unlike e.g. FB, where by design defending against trollish behaviour takes more effort than being a troll, and never carries a cost for the troll. There must, in short, be a finite social distance between speakers for discourse to be possible. Platforms that dilute that, or allow for infinite social distance, are where discourse can crumble.

This points to federation (a platform within the control of a specific group, interconnected with other groups doing the same), and decentralisation (individuals running a platform for one, and interconnecting them). Doug Belshaw recently wrote, in a post titled ‘Time to ignore and withdraw?‘, about how he first saw individuals running their own Mastodon instance as quirky and weird. Until he read a blogpost by Laura Kalbag in which she writes about why you should run Mastodon yourself if possible:

    Everything I post is under my control on my server. I can guarantee that my Mastodon instance won’t start profiling me, or posting ads, or inviting Nazis to tea, because I am the boss of my instance. I have access to all my content for all time, and only my web host or Internet Service Provider can block my access (as with any self-hosted site.) And all blocking and filtering rules are under my control—you can block and filter what you want as an individual on another person’s instance, but you have no say in who/what they block and filter for the whole instance.

    Similarly I recently wrote,

    The logical end point of the distributed web and federated services is running your own individual instance. Much as in the way I run my own blog, I want my own Mastodon instance.

    I also do see a place for federation, where a group of people from a single context run an instance of a platform. A group of neighbours, a sports team, a project team, some other association, but always settings where damaging behaviour carries a cost because social distance is finite and context defined, even if temporary or emergent.

For the UNDP in Serbia, I made an overview of existing studies into the impact of open data. I had done something similar for the Flemish government a few years ago, so I had a good list of studies to start from. I updated that first list with more recent publications, resulting in a list of 45 studies from the past 10 years. The UNDP also asked me to suggest a measurement framework. Here’s a summary overview of some of the things I formulated in the report. I’ll start with 10 things that make measuring impact hard, and in a later post zoom in on what makes measuring impact doable.

While it is tempting to ask for a ‘killer app’ or ‘the next tech giant’ as proof of impact of open data, establishing the socio-economic impact of open data cannot depend on that, both because answering such a question is only possible with long-term hindsight, which doesn’t help make decisions in the here and now, and because it would ignore the diversity of types of impact, of varying sizes, known to be possible with open data. Judging by the available studies and cases, there are several issues that make any easy answer to the question of open data impact impossible.

    1 Dealing with variety and aggregating small increments

There are different varieties of impact, in all shapes and sizes. If an individual stakeholder, such as a citizen, does a very small thing based on open data, like making a different decision on some day, how do we express that value? Can it be expressed at all? E.g. in the Netherlands the open data based rain radar is used daily by most cyclists, to see if they can get to the railway station dry, better wait ten minutes, or rather take the car. The impact of a decision to cycle can mean lower individual costs (no car usage), personal health benefits, economic benefits (lower traffic congestion), environmental benefits (lower emissions) etc., but is nearly impossible to quantify meaningfully as a single act. Only where such decisions are stimulated, e.g. by providing open data that allows much smarter, multi-modal route planning, do aggregate effects become visible, such as a reduction of traffic congestion hours in a year, general health benefits for the population, or a reduction in traffic fatalities, which can be much better expressed as a monetary value to the economy.
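
To make the aggregation problem concrete with deliberately made-up numbers: a single changed decision is worth cents at most and is invisible in any statistic, while the aggregate is not.

```python
# All numbers are invented for illustration only.
users_per_day = 500_000      # cyclists checking the rain radar
changed_share = 0.05         # share of checks that change a decision
value_per_decision = 0.10    # euro: avoided congestion, emissions, etc.

daily_value = users_per_day * changed_share * value_per_decision
print("Value per single decision: ~0.10 euro (unmeasurable on its own)")
print(f"Aggregate value per year: {daily_value * 365:,.0f} euro")  # 912,500
```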

2 Spotting new entrants, and tracking SMEs

The existing research shows that previously inactive stakeholders and small to medium sized enterprises are better positioned to create benefits with open data. Smaller absolute improvements are relatively of bigger value to them than to e.g. larger corporations. Such large corporations usually overcome data access barriers with their size and capital. To them open data may even mean creating new competitive vulnerabilities at the lower end of their markets. (As a result larger corporations are more likely to say they have no problem with paying for data, as the price of data is a barrier to entry that protects market incumbents.) This also means that establishing impact requires simultaneously mapping new emerging stakeholders and aggregating that range of smaller impacts, both of which can be hard to do (see point 1).

    3 Network effects are costly to track

The research shows the presence of network effects, meaning that the impact of open data is not contained in, or even mostly specific to, the first order of re-use of that data. Causal effects as well as second and higher order forms of re-use regularly occur and quickly become, certainly in aggregate, much larger than the value of the original form of re-use. For instance the European Space Agency (ESA) commissioned my company for a study into the impact of open satellite data for ice breakers in the Gulf of Bothnia. The direct impact for ice breakers is saving costs on helicopters and fuel, as the satellite data makes determining where the ice is thinnest much easier. But the aggregate value of the consequences of that is much higher: it creates a much higher predictability of ships and the (food) products they carry arriving in Finnish harbours, which means lower stocks are needed to ensure supply of these goods. This reverberates across the entire supply chain, saving costs in logistics and allowing lower retail prices across Finland. When mapping such higher order and network effects, every step further down the chain of causality shows that while the bandwidth of value created increases, the certainty that open data is the primary contributing factor decreases. Such studies are also time consuming and costly. It is often unlikely and unrealistic to expect data holders to go through such lengths to establish impact. The mentioned ESA example is part of a series of over 20 such case studies ESA commissioned over the course of 5 years, at considerable cost.

    4 Comparison needs context

Without the context of a specific domain or a specific issue, it is hard to assess benefits and compare them with their associated costs, which is often the underlying question concerning the impact of open data: does it weigh up against the costs of open data efforts? Even though in general open data efforts shouldn’t be costly, how does some type of open data benefit compare to the costs and benefits of other actions? Such comparisons can only be made in a specific context (e.g. comparing the cost and benefit of open data for route planning with other measures to fight traffic congestion, such as increasing the number of lanes on a motorway, or increasing the availability of public transport).

    5 Open data maturity determines impact and type of measurement possible

Because open data provisioning is a prerequisite for it having any impact, the availability of data and the maturity of open data efforts determine not only how much impact can be expected, but also what can be measured (mature impact might be measured as impact on e.g. traffic congestion hours in a year, while early impact might be measured by how the number of re-users of a data set is still steadily growing year over year).

    6 Demand side maturity determines impact and type of measurement possible

Whether open data creates much impact is not only dependent on the availability of open data and the maturity of the supply side, even if that is, as mentioned, a prerequisite. Impact, judging by the existing research, is certain to emerge, but the size and timing of such impact depend on a wide range of other factors on the demand side as well, including things such as the skills and capabilities of stakeholders, time to market, location and timing. An idea for open data re-use that may find no traction in France because the initiators can’t bring it to fruition, or because the potential French demand is too low, may well find its way to success in Bulgaria or Spain, because local circumstances and markets differ. In the Serbian national open data readiness assessment I performed for the World Bank and the UNDP in 2015, this is reflected in the various dimensions assessed, which cover both supply and demand, as well as general aspects of Serbian infrastructure and society.

    7 We don’t understand how infrastructure creates impact

The notion of broad open data provision as public infrastructure (something the UK, Netherlands, Denmark and Belgium are already doing, and Switzerland is starting to do) further underlines the difficulty of establishing the general impact of open data on e.g. growth. That infrastructure (such as roads, telecoms, electricity) is important to growth is broadly acknowledged, with corresponding acceptance within policy making. This acceptance that the quantity and quality of infrastructure increase human and physical capital does not, however, mean it is clear how much a given type of infrastructure contributes at what time to economic production and growth. Public capital is often used as a proxy to ascertain the impact of infrastructure on growth. The consensus is that there is a positive elasticity, meaning that an increase in public capital results in an increase in GDP, averaging around 0.08, but varying across studies and types of infrastructure. Assuming such positive elasticity extends to open data provision as infrastructure (and we have very good reasons to do so), it will result in GDP growth, but without a clear view overall as to how much.
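
Expressed as a formula, that average elasticity of about 0.08 says:

\[ \varepsilon = \frac{\partial \ln(\mathrm{GDP})}{\partial \ln(K_{\mathrm{public}})} \approx 0.08 \]

i.e. a 1% increase in public capital corresponds, on average, to roughly a 0.08% increase in GDP, with the caveat that the estimate varies considerably across studies and infrastructure types.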

    8 E pur si muove

    Most measurements concerning open data impact need to be understood as proxies. They are not measuring how open data is creating impact directly, but from measuring a certain movement it can be surmised that something is doing the moving. Where opening data can be assumed to be doing the moving, and where opening data was a deliberate effort to create such movement, impact can then be assessed. We may not be able to easily see it, but still it moves.

    9 Motives often shape measurements

    Apart from the difficulty of measuring impact and the effort involved in doing so, there is also the question of why such impact assessments are needed. Is an impact assessment needed to create support for ongoing open data efforts, or to make existing efforts sustainable? Is an impact measurement needed for comparison with specific costs for a specific data holder? Is it to be used for evaluation of open data policies in general? In other words, in whose perception should an impact measurement be meaningful?
    The purpose of impact assessments for open data further determines and/or limits the way such assessments can be shaped.

    10 Measurements get gamed, become targets

Finally, with any type of measurement, there needs to be awareness that those with a stake or interest in a measurement are likely to try and game the system, especially where measurements determine funding for further projects, or the continuation of an effort. This must lead to caution when determining indicators, as measurements easily become targets in themselves. For instance, in the early days of national open data portals being launched worldwide, a simple metric often reported was the number of datasets a portal contained. This is an example of a ‘point’ measurement that can be easily gamed, for instance by subdividing a dataset into several subsets. The first version of the national portal of a major EU member did precisely that, and boasted several hundred thousand datasets at launch, which were mostly small subsets of a bigger whole. It briefly made for good headlines, but did not make for impact.

    In a second part I will take a closer look at what these 10 points mean for designing a measurement framework to track open data impact.

When I talk about Networked Agency, I talk about reducing the barrier to entry for all kinds of technology as well as working methods that we know work well in a fully networked situation. Reducing those barriers allows others to adopt these tools more easily and find power in a refound ability to act. Networked agency needs tech and methods that can be easily deployed by groups, and that work even better when federated across groups and the globe-spanning digital human network.

    The IndieWeb’s principles (own your own data, use tools that work well on their own, and better when federated, avoid silos as the primary place of where you post content) fit well with that notion.

    Recently I said that I was coming back to a lot of my material on information strategies and metablogging from 2003-2006, but now with more urgency and a change in scope. Frank asked what I meant, and I answered

    that the principles of the open web (free to use, alter, tinker, control, trust by you/your group) also apply to other techs (for instance energy production, blockchain, biohacking, open source hardware, cheap computing hardware, algorithms, IoT sensors and actuators) and methods (p2p, community building, social media usage/production, group facilitation etc.). Only then are they truly empowering, otherwise you’re just the person it is ‘done to’.

Blockchain isn’t empowering you to run your own local currency if you can only run it on de facto centralised infrastructure, where you’re exposed to propagating negative externalities. Whether it is sudden Ethereum forks, or the majority of BTC transactions being run on opaque Chinese computing clusters. It is empowering only if it is yours to deploy for a specific use. Until you can e.g. easily run a blockchain-based LETS for your neighbourhood or home town on nodes that are Raspberry Pis attached to the LETS members’ routers, there is no reliable agency in blockchain.
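
The core of such a neighbourhood LETS is tiny, blockchain or not: a mutual credit ledger in which balances start at zero and always sum to zero. A sketch with hypothetical names and amounts; replicating the transaction log across those Raspberry Pi nodes is the part a blockchain would add:

```python
from collections import defaultdict

# Minimal mutual-credit (LETS) ledger: every credit is someone else's
# debit, so all balances always sum to zero.
balances = defaultdict(int)
log = []  # append-only transaction log, the bit a blockchain would replicate

def transfer(payer: str, payee: str, amount: int, memo: str = "") -> None:
    balances[payer] -= amount
    balances[payee] += amount
    log.append((payer, payee, amount, memo))

transfer("alice", "bob", 5, "two hours of gardening")
transfer("bob", "carol", 3, "bread")
print(dict(balances))          # {'alice': -5, 'bob': 2, 'carol': 3}
print(sum(balances.values()))  # 0, by construction
```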

IoT is not empowering if it means Amazon is listening in on all your conversations, or your fire alarm sensors run through centralised infrastructure operated by a telco. It is empowering if you can easily deploy your own sensors and have them communicate with an open infrastructure for which you can run your own gateway or trust your neighbour’s gateway, and on top of which your group does its own data crunching.
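
The receiving end of that picture, again as a sketch with placeholder names: your own gateway subscribing to the sensors’ readings and doing the group’s data crunching locally.

```python
import json
import paho.mqtt.client as mqtt  # pip install paho-mqtt (1.x API)

readings = []

def on_message(client, userdata, msg):
    # Collect readings and crunch them locally, on hardware the group runs.
    readings.append(json.loads(msg.payload))
    temps = [r["temperature_c"] for r in readings]
    print(f"{len(temps)} readings, average {sum(temps) / len(temps):.1f} C")

client = mqtt.Client()
client.on_message = on_message
client.connect("gateway.local", 1883)  # your own gateway, not a vendor cloud
client.subscribe("city/sensors/#")     # placeholder topic tree
client.loop_forever()
```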

Community building methods are not empowering if they are only used to purposefully draw you closer to a clothing brand or football club so they can sell you more of their stuff, where tribalism is used to drive sales. They are empowering if you can, within your own direct environment, use those methods to strengthen local community relationships, learn how to collectively accommodate differences in opinions, needs, strengths and weaknesses, and reorient yourselves as a group in time to keep momentum. Dave Winer spoke about working together at State of the Net, and 3 years ago wrote about working together in the context of the open web. To work together there are all kinds of methods, but like community building methods, they aren’t widely known or adopted.

So what applies to the open web and IndieWeb, I see applying to any technology and method we think helps increase the agency of groups in our networked world. More so as technologies and methods often need to be used in tandem. All these tools need to be ‘smaller’ than us, be ours. This is a key element of Networked Agency, next to seeing the group, you and a set of meaningful relationships, as the unit of agency.

    Not just IndieWeb. More IndieTech. More IndieMethods.

    How would the ‘Generations‘ model of the IndieWeb look if transposed to IndieTech and IndieMethods? What is Selfdogfooding when it comes to methods?

    More on this in the coming months I think, and in the runup to ‘Smart Stuff That Matters‘ late August.

    Came across this post by Ruben Verborgh from last December, “Paradigm Shifts for the Decentralised Web“.

    I find it helpful because of how it puts different aspects of wanting to decentralise the web into words. Ruben Verborgh mentions 3 simultaneous shifts:

    1) End-users own their data, which is the one mostly highlighted in light of things like the Cambridge Analytica / Facebook scandal.

2) Apps become views, when they are disconnected from the data, as they are no longer the single way to see that data.

    3) Interfaces become queries, when data is spread out over many sources.

    Those last two specifically help me think of decentralisation in different ways. Do read the whole thing.

Yesterday at State of the Net I showed some of the work I did with the great Frysklab team, letting a school class find power in creating their own solutions. We had, I think, a very nicely working triad of talks in our session: Hossein Derakhshan first, me in the middle, followed by Dave Snowden. In his talk, Dave referenced my preceding one, saying the projects I showed needed scaling to alter anything. Although I know Dave Snowden didn’t mean his call for scale that way, often when I hear it, it is rooted in the demand-for-ever-more-growth type of systems we know cannot be sustained in a closed world system like earth’s. The small world syndrome, as I named it at Shift 2010, will come biting.

    It so often also assumes there needs to be one person or entity doing the scaling, a scaler. Distributed networks don’t need a scaler per se.
The internet was not created that way, nor was the Web. Who scaled RSS? Some people moved it forward more than others, for certain, but it was scaled by unconnected people, people recognising a possibility to fruitfully build on others’ work for something they personally felt a need for. Dave Winer spread it with Userland, made it more useful, and added the possibility of having the payload be something other than just text, letting it carry podcasts. We owe him a lot for the actual existence of this basic piece of web plumbing. Matt Mullenweg of WordPress and Ben and Mena Trott of Movable Type helped it forward by adding RSS to their blogging tools, so people like me could use it ‘out of the box’. But it actually scaled because bloggers like me wanted to connect. We recognised the value of making it easy for others to follow us, and for us to follow the writings of others. So I and others created our own templates, starting from copying something someone else had already made and figuring out how to use RSS. It is still how I adopt most of my tools. Every node in a network is a scaler: by doing something because it is of value to them in the moment, because it changes them, they by extension add themselves to the growing number of nodes doing it. Some nodes may take a stronger interest in spreading something, convincing others to adopt it, but that’s about it. You might say the source of scaling is the invisible hand of networks.

    That’s why I fully agree with Chris Hardie that in the open web, all the tools you create need to have the potentiality of the network effect built in. Of course, when something is too difficult for most to copy or adapt, then there won’t be this network effect. Which is why most of the services we see currently dominating online experiences, the ones that shocked Hossein upon returning from his awful forced absence, are centralised services made very easy to use. Where someone was purposefully aiming for scale, because their business depended on it once they recognised their service had the potential to scale.

Dave Winer yesterday suggested the blogosphere is likely bigger now than when it was so dominantly visible in the ‘00s, when your blogpost of that day could be Google’s top hit for a specific topic, and when I could be found just by my first name. But it is so much less visible than before, precisely because it is not centralised and the extravagant centralised silos stand out so much. The blogosphere diminished itself as well, however, Dave Winer responded to Hossein Derakhshan’s talk yesterday.

People still blog, more people blog than before, but we no longer build the same number of connections across blogs. Connections we were so in awe of when our writing first proved to have the power to create them. Me and many others, bloggers all, suckered ourselves into feeling blog posts needed to be more like reporting, like essays, and took our conversations to the comments on Facebook. Facebook which, as Hossein Derakhshan pointed out, makes such a travesty of what web links are, by allowing them only as separate from the text you write on Facebook. It treats all links as references to articles, not allowing them to be embedded in the text, or more than one link to be presented meaningfully. That further reinforced the blog-posts-as-articles notion. That further killed the link as a way of weaving a web of distributed conversations, a potential source of meaning. Turned the web, turned your timeline, into TV, as Hossein phrased it.

Hoder on ‘book-internet’ (blogs) and ‘tv-internet’ (FB et al). Tweet by Anna Masera

    I switched off my tv ages ago. And switched off my FB tv-reincarnate nine months ago. In favour of allowing myself more time to write as thinking out loud, to have conversations.

After the conference, as we sat enjoying an Italian late Friday afternoon over drinks, Adriana Lukas and I talked about the salons of old. How we both have created settings like that through the years, Quantified Self meetings, BlogWalks, Birthday Unconferences, and how we approached online sharing like that too. To just add some of my and your ramblings to the mix. Starting somewhere in the middle, following a few threads of thought and intuitions, adding a few links (as ambient humanity), and ending without conclusions. Open ended. Just leaving it here.

In an open letter (PDF) a range of institutions call upon their respective European governments to create ELLIS, the European Lab for Learning and Intelligent Systems. It’s an effort to fortify against brain drain, and instead attract top talent to Europe. It points to Europe’s currently weak position in AI between what is happening in the USA and in China, adding a geo-political dimension. The letter calls not so much for an institution with a large headcount, but for a commitment to long-term funding to attract and keep the right people. These are similar reasons to those that led to the founding of CERN, now a global center for physics (and a key driver of things like open access to research and open research data), and more recently the European Molecular Biology Laboratory.

At the core the signatories see France and Germany as most likely to act to start this inter-governmental initiative. It seems this nicely builds upon French president Macron’s announcement in late March to invest heavily in AI, and to keep and attract the right people for it. He too definitely sees the European dimension to this, and even puts European and enlightenment values at the core of it, although he acted within his primary scope of agency, France itself.

    (via this Guardian article)

    Wired is calling for an RSS revival.

    RSS is the most important piece of internet plumbing for following new content from a wide range of sources. It allows you to download new updates from your favourite sites automatically and read them at your leisure. Dave Winer, forever dedicated to the open web, created it.

    I used to be a very heavy RSS user. I tracked hundreds of sources on a daily basis. Not as news but as a way to stay informed about the activities and thoughts of people I was interested in. At some point, that stopped working. Popular RSS readers were discontinued, most notably Google’s RSS reader, many people migrated to the Facebook timeline, platforms like Twitter stopped providing RSS feeds to make you visit their platform, and many people stopped blogging. But with FB in the spotlight, there is some interest in refocusing on the open web, and with it on RSS.

Currently I am repopulating my RSS reading ‘antenna’ from scratch, following around 100 people again.
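
For anyone rebuilding such an antenna: fetching the newest posts from a list of feeds takes little more than the feedparser library. The feed URLs below are placeholders for the roughly 100 blogs followed.

```python
import feedparser  # pip install feedparser

# Placeholder feed list; in practice this would come from an OPML export.
feeds = [
    "https://example.com/blog/feed/",
    "https://example.org/feed.xml",
]

for url in feeds:
    parsed = feedparser.parse(url)
    for entry in parsed.entries[:3]:  # three newest items per source
        print(parsed.feed.get("title", url), "-", entry.get("title", "(untitled)"))
```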

Wired in its call for an RSS revival suggests a few RSS readers. I, as I always have, use a desktop RSS reader, which currently is ReadKit. The FB timeline, in contrast, presents stuff to you based on its algorithmic decisions. As mentioned, I definitely would like smarter ways of shaping my own information diet, but with me in control and not as the one being commoditised.

So it’s good to read that RSS reader builders are looking at precisely that.
“Machines can have a big role in helping understand the information, so algorithms can be very useful, but for that they have to be transparent and the user has to feel in control. What’s missing today with the black-box algorithms is where they look over your shoulder, and don’t trust you to be able to tell what’s right,” says Edwin Khodabakchian, cofounder and CEO of RSS reader Feedly (which currently has 14 million users). That is more or less precisely my reasoning as well.

    Russia is trying to block Telegram, an end-to-end encrypted messaging app. The reason for blocking is that Telegram refused to provide keys to the authorities with which messages can be decrypted. Not for a specific case, but for listening into general traffic.

Asking for keys (even if technologically possible) to have a general backdoor is a very bad idea. It will always be misused by others. And yes, you do have something to hide. Your internet banking is encrypted, your VPN connection from home to your work computer is too. You use passwords on websites, mail accounts and your wifi. If you don’t have anything to hide, please leave your Facebook login details along with your banking details in the comments. I promise I won’t use them. The point isn’t whether I or the government keep our promises (and we might not), it’s that others definitely won’t.

As a result of Telegram not providing the keys, Russia is now trying to block people from using it. This has resulted in millions of IP addresses being blocked, more than one IP address for each of the around 14 million Telegram users in Russia (Telegram reports about 200 million users globally per month). This is because the service partly runs on servers in Amazon and Google data centers, and those are being blocked wholesale. This impacts other services as well, which use the same data centers to flexibly scale their computing needs. The blocking attempts aren’t working though.

It shows how fully distributed systems are hard to stamp out; they merely pop up somewhere else. The internet routes around damage, it is what it was designed to do.

Let’s see if actions will now be taken by Russian authorities against persons and assets of Telegram, as that really is the only (potential, not guaranteed) way to stamp out something: dismantling it. In the case of Telegram, a private company, there are indeed people and assets one could target. And Telegram is pledging to deploy those assets in resisting. Yet dismantling Telegram, even if successful and disregarding other costs and consequences for a government, defeats the original purpose of wanting to listen in on message traffic. Traffic will easily move to other encrypted tools, like Signal, while new, even more distributed applications will also emerge in response.

    Summary:

• General backdoors: bad idea, regardless of whether you can trust the one you give backdoor access to.
• Blocking is hard to do with distributed systems.
• If you don’t accept attempts to do either by data-driven authoritarian governments, you need to accept that the same objections to general backdoor access apply in other situations where you think the stated aim has more merit.
• Do use an encrypted messaging app, like Signal, as much as possible.

Data, especially lots of it, is the feedstock of machine learning and algorithms. And there’s a race on for who will lead in these fields. This gives data a geopolitical dimension, and makes it a key strategic resource of nations. Between the vast data lakes in corporate silos in the US and the national data spaces geared towards data-driven authoritarianism as in China, what is the European answer, what is the proposition Europe can make the world? Ethics-based AI. “Enlightenment Inside”.

Last month French President Macron announced spending 1.5 billion euro on AI in the coming years. Wired published an interview with Macron. Below is an extended quote of what I think are key statements.

AI will raise a lot of issues in ethics, in politics, it will question our democracy and our collective preferences… It could totally dismantle our national cohesion and the way we live together. This leads me to the conclusion that this huge technological revolution is in fact a political revolution… Europe has not exactly the same collective preferences as US or China. If we want to defend our way to deal with privacy, our collective preference for individual freedom versus technological progress, integrity of human beings and human DNA, if you want to manage your own choice of society, your choice of civilization, you have to be able to be an acting part of this AI revolution. That’s the condition of having a say in designing and defining the rules of AI. That is one of the main reasons why I want to be part of this revolution and even to be one of its leaders. I want to frame the discussion at a global scale… The key driver should not only be technological progress, but human progress. This is a huge issue. I do believe that Europe is a place where we are able to assert collective preferences and articulate them with universal values.

Macron’s actions are largely based on the report by French MP and Fields Medal-winning mathematician Cédric Villani, For a Meaningful Artificial Intelligence (PDF).

    My current thinking about what to bring to my open data and data governance work, as well as to technology development, especially in the context of networked agency, can be summarised under the moniker ‘ethics by design’. In a practical sense this means setting non-functional requirements at the start of a design or development process, or when tweaking or altering existing systems and processes. Non-functional requirements that reflect the values you want to safeguard or ensure, or potential negative consequences you want to mitigate. Privacy, power asymmetries, individual autonomy, equality, and democratic control are examples of this.

    Today I attended the ‘Big Data Festival’ in The Hague, organised by the Dutch Ministry of Infrastructure and Water Management. Here several government organisations presented themselves and the work they do using data as an intensive resource. Stuff that speaks to the technologist in me. In parallel there were various presentations and workshops, and there I was most interested in what was said about ethical issues around data.

Author and interviewer Bas Heijne set the scene at the start by pointing to the contrast between the technology optimism concerning digitisation of years back and the more dystopian discussion of today (triggered by things like the Cambridge Analytica scandal and cyberwars), and sought the balance in the middle. I think that contrast is largely due to the difference in assumptions underneath the utopian and dystopian views. The techno-optimist perspective, at least in the web scene I frequented in the late ’90s and early ’00s, assumed the tools would be in the hands of individuals, who would independently weave the world wide web, smart at the edges and dumb at the center. The dystopian views, including those of early critics like Jaron Lanier, assumed, and have been proven at least partly right, centralisation into walled gardens where individuals are mere passive users or an object, no longer a subject with autonomy. These assumptions imply wildly different development paths concerning power distribution, equality and agency.

In the afternoon a session with professor Jeroen van den Hoven of Delft University focused on making the ethical challenges more tangible, as well as pointing to the beginnings of practical ways to address them. It was the second time I heard him present in a month. A few weeks ago I attended an Ethics and Internet of Things workshop at the University of Twente, organised by the UNESCO World Commission on the Ethics of Science and Technology (COMEST). There he gave a very worthwhile presentation as well.


    Van den Hoven “if we don’t design for our values…”

What I call ethics by design, a term I first heard from prof Valerie Frissen, Van den Hoven calls value sensitive design. That term sounds more pragmatic, but I feel it conveys the point less strongly. This time he also incorporated the geopolitical aspects of data governance, which echoed what Rob van Kranenburg (IoT Council, Next Generation Internet) presented at that workshop last month (and which I really should write down separately). It was good to hear it reinforced for today’s audience of mainly civil servants, as currently there is a certain level of naivety in how (mainly local) governments collaborate with commercial partners around data collection and e.g. sensors in public space.

(Malfunctioning) billboard at Utrecht Central Station a few days ago, with a not-thought-through camera in a public space (to measure engagement with adverts). Civic resistance taped over the camera.

    Value sensitive design, said Van den Hoven, should seek to combine the power of technology with the ethical values, into services and products. Instead of treating it as a dilemma with an either/or choice, which is the usual way it is framed: Social networking OR privacy, security OR privacy, surveillance capitalism OR personal autonomy, smart cities OR human messiness and serendipity. In value sensitive design it is about ensuring the individual is still a subject in the philosophical sense, and not merely the object on which data based services feed. By addressing both values and technological benefits as the same design challenge (security AND privacy, etc.), one creates a path for responsible innovation.

The audience saw responsibilities for both individual citizens and governments in building that path, and none thought turning one’s back on technology, towards fictitious simpler times, would work, although some doubted whether there was still room to stem the tide.

Stephanie Booth, a long-time blogging connection, has been writing about reducing her Facebook usage and increasing her blogging. At one point she says

    As the current “delete Facebook” wave hits, I wonder if there will be any kind of rolling back, at any time, to a less algorithmic way to access information, and people. Algorithms came to help us deal with scale. I’ve long said that the advantage of communication and connection in the digital world is scale. But how much is too much?

I very much still believe there’s no such thing as information overload, and fully agree with Stephanie that the possible scale of networks and connections is one of the key affordances of our digital world. My RSS-based filtering, as described in 2005, worked better when dealing with more information than with less. Our information strategies need to reflect and be part of the underlying complexity of our lives.

Algorithms can help us with that scale, just not the algorithms that FB uses on us. For algorithms to help, like any tool, they need to be ‘smaller’ than us, as I wrote in my networked agency manifesto. We need to be able to control their settings, tinker with them, deploy them and stop them as we see fit. The current application of algorithms, as they usually need lots of data to perform, sort of demands a centralised platform like FB to work. The algorithms that will really help us scale will be the ones we can use for our own particular scaling needs. For that, the creation, maintenance and usage of algorithms needs to have a much lower threshold than now. This is why I placed algorithms in my ‘agency map‘.
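
As a sketch of what an algorithm ‘smaller than us’ can mean: a feed filter whose weights are a plain dictionary the reader edits, rather than a platform’s hidden model. Keywords and weights here are arbitrary examples, mine to change at will.

```python
# A deliberately small, user-owned ranking algorithm: every setting is
# visible and editable, nothing is learned behind the reader's back.
weights = {"open data": 3, "federation": 2, "press release": -2}

def score(title: str) -> int:
    t = title.lower()
    return sum(w for term, w in weights.items() if term in t)

items = [
    "Open data impact measurement frameworks",
    "Yet another blockchain press release",
    "Federation between small communities",
]
for title in sorted(items, key=score, reverse=True):
    print(score(title), title)
```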

Going back to a less algorithmic way of dealing with information isn’t an option, nor something to desire, I think. But we do need algorithms that really serve us, that perform to our information needs. We need fewer algorithms that purport to aid us in dealing with the daily river of newsy stuff, but really commoditise us at the back-end.