After my initial posting on this yesterday, Greg shares a few more quotes from his students. It reminds me of what both teachers and students said at the end of my 2008 project at Rotterdam University of Applied Sciences. There, a group of teachers explored how to use digital technology, blogs and the myriad of social web tools, both to support their own learning and to change their teaching. The sentiments expressed are similar, if you look at the quotes in the last two sections (change yourself, change your students) of my 2009 posting about it. What jumps out most for me is the sense of agency, the power that comes from discovering that agency.
Next week it will be 50 years since Doug Engelbart (1925-2013) and his team demonstrated all that has come to define interactive computing. Five decades on, we still haven’t turned everything in that live demo into routine daily things: the mouse, video conferencing, word processing, outlining, drag and drop, digital mind mapping, real time collaborative editing from multiple locations. In 1968 it was all already there, yet in 2018 we are still catching up with several aspects of that live demonstrated vision. Doug Engelbart and his team ushered in the interactive computing era to “augment human intellect”, and on the 50th anniversary of The Demo a symposium will ask what augmenting the human intellect can look like in the 21st century.
A screenshot of Doug Engelbart during the 1968 demo
The 1968 demo was later named ‘the Mother of all Demos’. I first saw it in its entirety at the 2005 Reboot conference in Copenhagen, after which Doug Engelbart had a video conversation with us. To me it was a great example, not merely of prototyping new tech, but most of all of proposing a coherent and expansive vision of how different technological components, human networked interaction and routines can together be used to create new agency and new possibilities. To ‘augment human intellect’ indeed. That to me is the crux: to look at the entire constellation of humans, our connections, routines, methods and processes, our technological tools, and the impact we want to achieve. Others may well assume I’m a techno-optimist, but I don’t think I am. I am generally an optimist, yes, but to me what is key is our humanity, and creating tools and methods that enhance and support it. Tech as tools, in context, not tech as a solution on its own. It’s what my networked agency framework is about, and what I try to express in its manifesto.
Paul Duplantis has blogged about where the planned symposium, and more importantly where we in general, may take the internet and the web as our tools.
This is a very interesting article to read. Vectaury, a small French adtech company, has been ordered to stop using, and to delete, the personal data of tens of millions of Europeans, as it cannot show proper consent as required under the GDPR. Of interest here is that Vectaury tried to show consent using an industry-wide template by the IAB. A French judge has ruled this is not enough. This is an early sign that, as Doc Searls says, the GDPR is able to put a stake through the heart of ad-tech, though at the speed of legal proceedings, and provided enforcement goes forward.
A month after the verdict, Vectaury’s website still proudly claims that they’re GDPR compliant because they use the concept of a ‘consent management provider’. Yet that is exactly what has now been ruled insufficient to show actual consent.
This Twitter thread by NYT’s Robin Berjon about the case is also interesting.
Some things I thought worth reading in the past days
- A good read on how machine learning (ML) currently merely obfuscates human bias, by moving it into the training data and the coding, to arrive at peace of mind from pretend objectivity. Claiming that it’s ‘the algorithm deciding’ makes ML a kind of digital alchemy. It introduced some fun terms to me, like fauxtomation and Potemkin AI: Plausible Disavowal – Why pretend that machines can be creative?
- These new Google patents show how problematic the current smart home efforts are, including their precursors, the Alexa and Echo microphones in your house. They are stripping you of agency, not providing it. These particular ones also nudge you to treat your children much the way surveillance capitalism treats you: as a suspect to be watched, with relationships denuded of the subtle human capability to trust. Agency only comes from being in full control of your tools. Adding someone else’s tools (here not just Google’s, but your health insurer’s, your landlord’s etc.) to your home doesn’t make it smart, but turns it into a self-censorship promoting escape room. A fractal of the panopticon. We need to start designing more technology that is based on distributed use, not on a centralised controller: Google’s New Patents Aim to Make Your Home a Data Mine
- An excellent article by the NYT about Facebook’s slide to the dark side: when the student dorm room excuse of “we didn’t realise, we messed up, but we’ll fix it for the future” fails as a defence, you weaponise your own data driven machine against your critics, thus proving those critics right. Weaponising your own platform isn’t surprising, but it is very sobering and telling. Will it be a tipping point in how the public views FB? Delay, Deny and Deflect: How Facebook’s Leaders Fought Through Crisis
- Some takeaways from the article just mentioned that we should keep top of mind when interacting with or talking about Facebook: FB knew very early on about being used to influence the 2016 US election and chose not to act. FB feared backlash from specific user groups and opted to unevenly enforce their terms of service/community guidelines. Cambridge Analytica is not an isolated abuse, but a concrete example of the wider issue. FB weaponised their own platform to oppose criticism: How Facebook Wrestled With Scandal: 6 Key Takeaways From The Times’s Investigation
- There really is no plausible deniability for FB’s execs on their “in-house fake news shop”: Facebook’s Top Brass Say They Knew Nothing About Definers. Don’t Believe Them. So when you need to admit it, you fall back on the ‘we messed up, we’ll do better going forward’ tactic.
- As Aral Balkan says, that’s the real issue at hand, because “Cambridge Analytica and Facebook have the same business model. If Cambridge Analytica can sway elections and referenda with a relatively small subset of Facebook’s data, imagine what Facebook can and does do with the full set.”: We were warned about Cambridge Analytica. Why didn’t we listen?
- [update] Apparently all the commotion is causing Zuckerberg to think FB is ‘at war’, with everyone it seems. That is problematic for a company whose mission is to open up and connect the world, and which is based on a perception of trust. A bunker mentality also probably doesn’t bode well for FB’s corporate culture and hence its future: Facebook At War.
Today I attended the presentation of this year’s Shaking Tree Award. This annual award started in 2016, and is named after my friend Niels, who received the first award during his ‘last lecture’. Niels died a year ago. The Ministry of Healthcare has pledged to keep the award going, in the spirit of Niels’ efforts: shake up the system, fight unneeded and kafkaesque bureaucracy, have hands-on experience with the system at ‘the receiving end’ so you know what you’re talking about, have a sense of humor to go with it, and be able to ‘dance with the system’.
The meeting was attended by a diverse range of people, from the healthcare domain, Niels’ family, and of course a smattering of Niels’ friends.
Before presenting this year’s nominees and the award, time was given to remembering Niels and the reason for this award. This was followed by two conversations, each between a previous winner or nominee and a representative of an institution they struggled with. First were Annette Stekelenburg and Ria Dijkstra, manager of operations at a health care insurer. Annette has a son who needs tube feeding to survive. This situation will not change. Yet every year they need to apply for approval to continue receiving the materials needed. Annette and Ria had a frank conversation about what happened when Annette publicly announced she was fed up with this yearly bureaucracy that should be unneeded. Dijkstra explained how the insurer thought they had already changed the rules, making the renewal once every 5 years, but the suppliers never knew, and forms are still being sent out in the insurer’s name that don’t actually exist anymore.
The second conversation was between Kathi Künnen, a previous nominee, and Betsie Gerrits, department head at UWV, the government agency in charge of employee insurance. Kathi is 29 and has incurable cancer. Because of that she has been determined to be 100% incapable of working, yet there are lots of phases in which she actually does want to work. 25% of young professionals with cancer have an incurable form, and most want to remain active as long as possible. Yet the system tells them their ‘earning capacity is 0’, and with a stamp like that there’s no way to find paid activity. Here too, the conversation first of all made the two parties at the table see each other as individual human beings, and from that, energy and potential solutions follow. Kathi said she needs reassurance that there can be administrative certainty (other than being tossed out as worthless), as her own life is fluid enough as it is and changing all the time.
I thought both conversations were impressive, and the type of thing we need much more of. Once you get past the frustration, anger and disbelief that often play a role too, you can see the actual human being at the other side of the table. Dancing with the system is, in part, being able to have these conversations.
The award was presented by the previous winner, Tim Kroesbergen. The secretary general of the Ministry, Erik Gerritsen, hosted the event, with Maarten den Braber as MC. The jury, consisting of Sanne (Niels’ wife) and the previous two winners, Annette Stekelenburg and Tim Kroesbergen, made their choice known from amongst the three nominees: Eva Westerhoff, Elianne Speksnijder and Geert-Jan den Hengst. Each nominee was presented with a video, as well as a conversation about their experiences.
Eva Westerhoff is a disability rights advocate & accessibility consultant who happens to be deaf. Next to her job at a bank, she does lots of volunteer work on diversity, inclusion & accessibility in information, communication & tech. She’s been knocking on doors in the Healthcare Ministry for over 20 years. Today she said that because of the political cycle, it seems you need to do everything again every four years or so, to keep awareness high enough.
Elianne Speksnijder is a professional fashion model, photographer and storyteller. Lyme disease and epilepsy landed her in a wheelchair when she was 15; as she said today, an age which brings enough difficulties as it is. It took her a decade to accept that her wheels were a permanent part of her life. She’s 28 now, a woman with ambitions ‘on wheels’. When she was a teenager she sorely missed a role model (or rolling model, as the Dutch word ‘rolmodel’ can mean both). Now she is setting out to be that role model herself. She hopes for much more inclusivity in media, and challenges companies about it.
Geert-Jan den Hengst is a 48-year-old father of two adult children. He has MS and has been living for the last decade or so in an environment that provides 24/7 care. His laptop is his core conduit to the rest of the world. Writing is a need for him. He writes on his own blog, for the local football team’s website, for various media in his hometown, and more. At the heart of his writing are everyday observations. He says he is “not a political animal, so I need to stay close to my everyday life in what I do”. Often those observations are examples of how life can be made impractical for someone in his position. He mentioned an early example that got him started: for the local football stadium all types of tickets could be bought online, except tickets for wheelchair access. People in wheelchairs needed to come buy their tickets in person: the group least likely to be able to do that easily.
From all three nominees, I think the main takeaway is taking the time to share and listen to the actual stories of people, especially when things get complicated or complex. This isn’t news (there’s a reason I’ve been active in participatory narrative inquiry and sensemaking for a long time), but it bears repeating. Stories are our main way of ‘measurement’ in complex situations: to catch what’s going on for real, to spot the actual (not just the intended) consequences of our actions, structures and regulations, to see the edge cases, and to find the knobs to turn towards better results (and to know what better actually is).
Jury chairman Tim Kroesbergen, after reading the jury’s motivations for all three nominees, announced Eva Westerhoff as the new Shaking Tree Award winner.
Inside the Ministry a poem by Merel Morre, which she wrote in honor of Niels ‘Shakingtree’, is painted on the wall.
A rough translation reads (anything unpoetic is all my doing):
shake goals awake
jump past rules
dance joints wider
dream chances free
out of bounds
as it grows
where it lighter
but never stops
In the ministry’s central hall all the pillars show a face of someone with the words “I care”. That and the poem are promising signs of commitment to the actual stories of people. The Ministry still has 24 statuettes in stock for the Shaking Tree Award, so it is likely they will keep the annual award up as well. But as this year’s winner Eva Westerhoff warned, every four years the politics change, so it’s better to make sure.
Aaron Swartz would have turned 32 on November 8th. He died five years and 10 months ago, and since then the annual Aaron Swartz weekend, like this weekend, takes place with all kinds of hackathons and events in his memory. At the time of his suicide Swartz was being prosecuted for downloading material in bulk from JSTOR, a scientific papers archive (even though he had legitimate access to it).
In 2014 the Smart New World exhibition took place in Kunsthalle Düsseldorf, which Elmine and I visited. Part of it was the installation “18.591 Articles Sold By JSTOR for $19 = $353.229”, with those 18,591 articles printed out, showing what precisely is behind the paywall, and what Swartz was downloading. Articles, like those shown, from the 19th century, long since in the public domain, sold for $19 each. After Swartz’ death JSTOR started making a small percentage of their public domain content freely accessible, limited to a handful of papers per month.
The Düsseldorf exhibit was impressive, as it showed both the volume of the material and the triviality of most of it: a long tail of documents in extremely low demand, treated the same as recent papers in high demand.
Scientific journal publishers are increasingly a burden on the scientific world, rent-seeking gatekeepers. Their original value added role, that of multiplication and distribution to increase access, has been completely eroded, if not actually fully reversed.
This is a start to more fully describe and explore a distributed version of digitisation, digitalisation and specifically digital transformation, and state why I think bringing distributed / networked thinking into them matters.
Digitising stuff, digitalising routines, the regular way
Over the past decades many more of the things around us have become digitised, and in recent years much of what we do, our daily routines and work processes, has become digitalised. Many of those digitalised processes are merely digitised replicas of their paper predecessors. Asking for a government permit, for instance, or online banking. There’s nothing there that wasn’t there in the paper version. Sometimes small steps in those processes even still force you to use paper. At the start of this year I had to apply for a declaration that my company had never been involved in procurement fraud. All the forms I needed for it (30 pages in total!) were digitised and I filled them out online, but when it came to sending it in, I had to print the resulting PDF and send it through snail mail. I have no doubt that the receiving government office’s first step was to scan it all before processing it. Online banking similarly is just a digitised paper process. Why don’t all online bank accounts provide nifty visualisation, filtering and financial planning tools (like alerts for dates due, saving towards a goal, maintaining a buffer etc.), now that everything is digital? The reason we laugh at Little Britain’s ‘computer says no’ sketches is that we recognise all too well the frustration of organisations blindly trusting their digitalised processes, never acknowledging or addressing their crappy implementation, or the extra work and route-arounds their indifference inflicts.
Digital transformation, digital societies
Digital transformation is the accumulated societal impact of all those digital artefacts and digitalised processes, even if they’re incomplete or half-baked. Digital transformation is why I have access to all those books in the long tail that never reached the shelves of any of the book shops I visited in decades past, yet now come to my e-reader instantly, resulting in me reading more, and across a wider spectrum, than ever before. Digital transformation is also the impact on elections of data-driven Facebook advertising, targeted almost individually by minutely profiling undecided voters.
Digital transformation is frequently invoked these days, in my work often in the context of development and the sustainable development goals.
Yet it often feels to me that for most intents and purposes this digital transformation is done to us and about us, but not of us. It’s a bit like the smart city visions corporations like Siemens and Samsung push(ed), which were basically devoid of life and humanity: quality of life reduced to and equated with security only, in sterilised cities, ignoring that people are the key actors, as critiqued by Adam Greenfield in 2013.
Human digital networks: distributed digital transformation
The Internet is a marvellous thing. At least it is when we use it actively, to assist us in our routines and in our efforts to change, learn and reach out. As social animals, our human interaction has always been networked: we fluently switch between contexts, degrees of trust and disclosure, and route around undesired connections. In that sense human interaction and the internet’s original design principle closely match up: they’re both distributed. In contrast, most digitalisation and digital transformation happens from the perspective of organisations and silos. Centralised things, where some decide for the many.
To escape that ‘done to us, about us, not of us’, I think we need to approach digitisation, digitalisation and digital transformation from a distributed perspective, matching up our own inherently networked humanity with our newly (for the past 30 years) networked global digital infrastructure. We need to think in terms of distributed digital transformation: making our own digital societal impact, building on distributed digitisation (making our things digital) and distributed digitalisation (making our routines digital).
Signs of distributed digitisation and digitalisation
Distributed digitisation can already be seen in things like the quantified self movement, where individuals create data around themselves to use for themselves. Or in the sensors I have in the garden. Those garden measurements are part of something you can call distributed digitalisation, where a network of similar sensors creates a map of our city that informs climate adaptation efforts by local government. My evolving information strategies, with a few automated parts, and the interplay of different protocols and self-proposed standards that make up the Indieweb, are also examples of distributed digitalisation. My Networked Agency framework, where small groups of relationships fix something of value with low threshold digital technology, and network/digital based methods and processes, is distributed digitisation and distributed digitalisation combined into a design aid for group action.
Distributed digital transformation needs a macroscope for the new civil society
Distributed digital transformation, distributed societal impact seems a bit more elusive though.
Civil society is increasingly distributed too, that much is clear to me. New coops, p2p groups and networks of individual actors emerge all over the world. However, they are largely invisible to, for instance, the classic interaction between government and incumbent civil society, and usually cut off from the scaffolding and support structures that ‘classic’ activities can build on to get started, because they’re not organised ‘the right way’, not clearly representative of a larger whole. Bootstrapping is their only path. As a result these initiatives are perceived only as single elements, and the scale they actually (can) achieve as a network remains invisible, often even in the eyes of those single elements themselves.
Our societies, including the nodes that make up the network of this new type of civil society, lack the perception to recognise the ‘invisible hand of networks’. A few years ago I discussed with a few people, directors of entities in that new civil society fabric, why we can’t seem to make our newly arranged collective voices heard, our collective efforts and results seen, and our collective power of agency recognised and sought out for collaboration. We’re too used, it seems, to aggregating all those things, collapsing them into the single voice of a mouthpiece that has the weight of numbers behind it, in order to be heard. We need to learn to see the cumulative impact of a multitude of efforts, while simultaneously keeping all those efforts visible on their own. There exist many initiatives that I think are great examples of how distributed digitalisation leads to transformation, but they are largely invisible outside their own context, and not widely networked and connected enough to reach their own full potential. They are valuable on their own, but would be even more valuable to themselves and others when federated; the federation part is mostly missing.
We need to find a better way to see the big picture, while also seeing all pixels it consists of. A macroscope, a distributed digital transformation macroscope.
The Twitter-like platform Gab has been forced offline, as their payment providers, hosting provider and domain provider all told them their business was no longer welcome. The platform is home to people with extremist views claiming their freedom of speech is under threat. At issue is of course where that speech becomes calling for violence, such as by the Gab-user who horribly murdered 11 people last weekend in Pittsburgh driven by anti-semitic hate.
Will we see an uptick in the use of federated sites such as Mastodon when platforms like Gab that are much more public disappear?
This I think isn’t about extremists being ‘driven underground’, but about denying calls for violence, such as those made on Gab, a place in public discourse. An uptick in the use of federated sites would be a good development, as federation allows much smaller groups to get together around something, whatever it is. In reverse, that means no-one else needs to be confronted with it either, if they don’t want to. Within the federation of Mastodon sites, I regularly come across instances listing other instances they do not connect to, and for which reasons. It puts the power of supporting welcomed behaviour and pushing back on unwelcome behaviour in the hands of more people, namely every person running a Mastodon instance (and you can have your own instance), rather than just Twitter or Facebook management.
An example of one instance denying federation with another
That sort of moderation can still be hard, even if the moderator to member ratio is already much better than on the main platforms. But that just points the way to the long tail of much smaller instances, even individual ones. It becomes easier for individuals and small groups to shun small cells, echo chambers and rage bubbles, and to avoid accidentally ending up in them or being forcefully drawn into them while having other conversations, as can happen on Twitter. See my earlier posting on the disintegration of discourse. You can then do what networks do well: route around the stuff you perceive as damage or non-functional. It creates a stronger power symmetry and communication symmetry. It also denies extremists a wider platform. Yes, they can still call for violence, which remains just as despicable. Yes, they can still blame Others for anything and be hateful of them. But they will be doing it in their own back yard (or Mastodon instance), not in the park where you like to walk your dog or do your morning run (or Twitter). They will not have a podium bigger than warranted, nor visibility beyond their own in-crowd. And they will have to deal with more pushback and reality whenever they step outside such a bubble, without the pleasant illusion that ‘everyone on Twitter agrees with me’.
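The federation model described above can be reduced to a simple principle: every instance keeps its own blocklist and simply refuses to exchange posts with instances on it. The toy sketch below illustrates that principle only; the class, instance names and data structure are invented for illustration and are not Mastodon’s actual implementation.

```python
# Toy model of federated moderation: each instance decides for itself
# which other instances it federates with. Purely illustrative, not
# Mastodon's real code or data model.

class Instance:
    def __init__(self, name):
        self.name = name
        self.blocked = set()  # domains this instance refuses to federate with

    def block(self, domain):
        """Add a domain to this instance's own blocklist."""
        self.blocked.add(domain)

    def accepts_from(self, other):
        """Posts from `other` only arrive here if we haven't blocked it."""
        return other.name not in self.blocked

park = Instance("park.example")         # a general-interest community
cell = Instance("ragebubble.example")   # an instance hosting calls for violence

# The park's admin decides, for their own users, not to federate:
park.block("ragebubble.example")

print(park.accepts_from(cell))  # False: the blocked instance has no audience here
print(cell.accepts_from(park))  # True: blocking is per-instance, not global
```

The point of the sketch is the asymmetry with centralised platforms: the decision sits with each instance admin (possibly an instance of one), not with a single company’s management.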
We’re in a time where whatever is presented to us as discourse on Facebook, Twitter or any of the other platforms may or may not come from humans, bots, or someone/a group with a specific agenda, irrespective of what you say or respond. We’ve seen it at the political level, with outside influences on elections, and we see it in things like Gamergate and in critiques of the last Star Wars movie. It creates damage on a societal level, and it damages people individually. To quote Angela Watercutter, the author of the mentioned Star Wars article,
…it gets harder and harder to have an honest discussion […] when some of the speakers are just there to throw kerosene on a flame war. And when that happens, when it’s impossible to know which sentiments are real and what motivates the people sharing them, discourse crumbles. Every discussion […] could turn into a […] fight — if we let it.
Discourse disintegrates, I think, specifically when there’s no meaningful social context in which it takes place, nor social connections between the speakers in that discourse. The effect stems not just from not really knowing who you’re conversing with, but, more importantly I think, from anyone on a general platform being able to bring themselves into the conversation, or worse, force themselves into it. Which is why you should never wade into newspaper comments, even though we all read them at times, because watching discourse crumble from the sidelines has a certain addictive quality. This can happen because participants don’t themselves control the setting of any conversation they are part of, and none of those conversations are limited to a specific (social) context.
Unlike in your living room, over drinks in a pub, or at a party with friends of friends of friends. There you know someone. Or if you don’t, you know them in that setting, you know their behaviour at that event thus far. All have skin in the game, as misbehaviour has immediate social consequences. Social connectedness is a necessary context for discourse, stemming either from personal connections or from the setting of the place or event it takes place in. Online discourse often lacks both, discourse crumbles, entropy ensues. Without consequence for those causing the crumbling. Which makes it fascinating when missing social context is retroactively restored, outing the misbehaving parties, such as in the book I once bought by Tinkebell, in which she matches death threats she received against the senders’ very normal Facebook profiles.
Two elements therefore are needed I find, one in terms of determining who can be part of which discourse, and two in terms of control over the context of that discourse. They are point 2 and point 6 in my manifesto on networked agency.
This is unlike e.g. FB, where by design the cost of defending against trollish behaviour takes more effort than being a troll, and never carries a cost for the troll. There must, in short, be a finite social distance between speakers for discourse to be possible. Platforms that dilute that, or allow for infinite social distance, are where discourse can crumble.
This points to federation (a platform within the control of a specific group, interconnected with other groups doing the same), and decentralisation (individuals running a platform for one, and interconnecting them). Doug Belshaw recently wrote in a post titled ‘Time to ignore and withdraw?’ about how he first saw individuals running their own Mastodon instance as quirky and weird. Until he read a blog post by Laura Kalbag about why you should run Mastodon yourself if possible:
Everything I post is under my control on my server. I can guarantee that my Mastodon instance won’t start profiling me, or posting ads, or inviting Nazis to tea, because I am the boss of my instance. I have access to all my content for all time, and only my web host or Internet Service Provider can block my access (as with any self-hosted site.) And all blocking and filtering rules are under my control—you can block and filter what you want as an individual on another person’s instance, but you have no say in who/what they block and filter for the whole instance.
Similarly I recently wrote,
The logical end point of the distributed web and federated services is running your own individual instance. Much as in the way I run my own blog, I want my own Mastodon instance.
I also do see a place for federation, where a group of people from a single context run an instance of a platform. A group of neighbours, a sports team, a project team, some other association, but always settings where damaging behaviour carries a cost because social distance is finite and context defined, even if temporary or emergent.
For the UNDP in Serbia, I made an overview of existing studies into the impact of open data. I had done something similar for the Flemish government a few years ago, so I had a good list of studies to start from. I updated that first list with more recent publications, resulting in a list of 45 studies from the past 10 years. The UNDP also asked me to suggest a measurement framework. Here’s a summary overview of some of the things I formulated in the report. I’ll start with 10 things that make measuring impact hard, and in a later post zoom in on what makes measuring impact doable.
While it is tempting to ask for a ‘killer app’ or ‘the next tech giant’ as proof of the impact of open data, establishing the socio-economic impact of open data cannot depend on that. Both because answering such a question is only possible with long-term hindsight, which doesn’t help make decisions in the here and now, and because it would ignore the diversity of types and sizes of impact known to be possible with open data. Judging by the available studies and cases, there are several issues that make easy answers to the question of open data impact impossible.
1 Dealing with variety and aggregating small increments
There are different varieties of impact, in all shapes and sizes. If an individual stakeholder, such as a citizen, does a very small thing based on open data, like making a different decision on some day, how do we express that value? Can it be expressed at all? E.g. in the Netherlands the open data based rain radar is used daily by most cyclists, to see if they can get to the railway station dry, had better wait ten minutes, or should rather take the car. The impact of a decision to cycle can mean lower individual costs (no car usage), personal health benefits, economic benefits (lower traffic congestion), environmental benefits (lower emissions) etc., but is nearly impossible to quantify meaningfully as a single act. Only where such decisions are stimulated, e.g. by providing open data that allows much smarter, multi-modal route planning, do aggregate effects become visible, such as a reduction of traffic congestion hours in a year, general health benefits for the population, or a reduction in traffic fatalities, which can be much better expressed as a monetary value to the economy.
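To make the aggregation problem concrete, here is a toy calculation. Every figure in it is invented purely for illustration (they come from no study, including the rain radar case above); the point is only the shape of the arithmetic: a value that is too small to measure per decision becomes visible when multiplied across a population.

```python
# Illustrative only: all numbers below are invented, not from any study.
cyclists_using_rain_radar = 1_000_000  # hypothetical daily users of the rain radar
avoided_car_trips_per_year = 20        # hypothetical, per cyclist
saved_cost_per_trip = 0.50             # euros; fuel, congestion, emissions lumped together

# Per individual the yearly benefit is trivial and hardly worth measuring...
individual_value = avoided_car_trips_per_year * saved_cost_per_trip

# ...but aggregated across all users it becomes visible at the macro level.
aggregate_value = cyclists_using_rain_radar * individual_value

print(f"Per cyclist per year: {individual_value:.2f} euro")  # 10.00 euro
print(f"Aggregate per year: {aggregate_value:,.0f} euro")    # 10,000,000 euro
```

The measurement difficulty in point 1 is exactly that the left-hand factors are unknown in practice: nobody records how many small decisions were changed by the data, so the aggregate can only be estimated indirectly.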
2 Spotting new entrants, and tracking SMEs
The existing research shows that previously inactive stakeholders and small to medium-sized enterprises are better positioned to create benefits with open data. Relatively speaking, smaller absolute improvements are of bigger value to them than to e.g. larger corporations. Such large corporations usually overcome data access barriers with their size and capital. To them open data may even mean creating new competitive vulnerabilities at the lower end of their markets. (As a result larger corporations are more likely to say they have no problem paying for data, as the price of data acts as a barrier to entry that protects market incumbents.) This also means that establishing impacts requires simultaneously mapping newly emerging stakeholders and aggregating that range of smaller impacts, both of which can be hard to do (see point 1).
3 Network effects are costly to track
The research shows the presence of network effects, meaning that the impact of open data is not contained in, or even mostly specific to, the first order of re-use of that data. Causal effects as well as second and higher order forms of re-use regularly occur and quickly become, certainly in aggregate, much higher than the value of the original form of re-use. For instance, the European Space Agency (ESA) commissioned my company for a study into the impact of open satellite data for ice breakers in the Gulf of Bothnia. The direct impact for ice breakers is saving costs on helicopters and fuel, as the satellite data makes determining where the ice is thinnest much easier. But the aggregate value of the consequences of that is much higher: it creates much higher predictability of ships, and the (food) products they carry, arriving in Finnish harbours, which means lower stocks are needed to ensure supply of these goods. This reverberates across the entire supply chain, saving costs in logistics and allowing lower retail prices across Finland. When mapping such higher order and network effects, every step further down the chain of causality shows that while the bandwidth of value created increases, the certainty that open data is the primary contributing factor decreases. Such studies are also time consuming and costly, and it is often unlikely and unrealistic to expect data holders to go through such lengths to establish impact. The ESA example mentioned, for instance, is part of a series of over 20 such case studies ESA commissioned over the course of 5 years, at considerable cost.
4 Comparison needs context
Without the context of a specific domain or a specific issue, it is hard to assess benefits and compare them to their associated costs, which is often the underlying question concerning the impact of open data: does it weigh up against the costs of open data efforts? Even though in general open data efforts shouldn’t be costly, how does some type of open data benefit compare to the costs and benefits of other actions? Such comparisons can be made in a specific context (e.g. comparing the cost and benefit of open data for route planning with other measures to fight traffic congestion, such as increasing the number of lanes on a motorway, or increasing the availability of public transport).
5 Open data maturity determines impact and type of measurement possible
Because open data provisioning is a prerequisite for it having any impact, the availability of data and the maturity of open data efforts determine not only how much impact can be expected, but also what can be measured (mature impact might be measured as impact on e.g. traffic congestion hours in a year, while early impact might be measured by whether the number of re-users of a data set is still steadily growing year over year).
6 Demand side maturity determines impact and type of measurement possible
Whether open data creates much impact is not only dependent on the availability of open data and the maturity of the supply side, even if, as mentioned, that is a prerequisite. Impact, judging by the existing research, is certain to emerge, but the size and timing of such impact depend on a wide range of other factors on the demand side as well, including things such as the skills and capabilities of stakeholders, time to market, location and timing. An idea for open data re-use that finds no traction in France, because the initiators can’t bring it to fruition or because the potential French demand is too low, may well find its way to success in Bulgaria or Spain, because local circumstances and markets differ. In the Serbian national open data readiness assessment I performed for the World Bank and the UNDP in 2015, this is reflected in the various dimensions assessed, which cover both supply and demand, as well as general aspects of Serbian infrastructure and society.
7 We don’t understand how infrastructure creates impact
The notion of broad open data provision as public infrastructure (such as the UK, Netherlands, Denmark and Belgium are already doing, and Switzerland is starting to do) further underlines the difficulty of establishing the general impact of open data on e.g. growth. That infrastructure (such as roads, telecoms, electricity) is important to growth is broadly acknowledged and accepted in policy making. This acceptance that the quantity and quality of infrastructure increase human and physical capital does not, however, mean it is clear how much a given type of infrastructure contributes, at what time, to economic production and growth. Public capital is often used as a proxy to ascertain the impact of infrastructure on growth. Consensus is that there is a positive elasticity, meaning that an increase in public capital results in an increase in GDP, averaging around 0.08 but varying across studies and types of infrastructure. Assuming such positive elasticity extends to open data provision as infrastructure (and we have very good reasons to do so), it will result in GDP growth, but without a clear overall view as to how much.
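The elasticity figure can be made concrete with a back-of-the-envelope calculation. The 0.08 average comes from the studies discussed above; the size of the capital increase is an assumed example, not a real policy figure:

```python
# Back-of-the-envelope: output elasticity of public capital.
# elasticity = (% change in GDP) / (% change in public capital)

elasticity = 0.08            # average across studies, per the text above
capital_increase_pct = 10.0  # assumed example: public capital grows 10%

# With an elasticity of 0.08, a 10% increase in public capital
# corresponds to roughly a 0.8% increase in GDP.
gdp_growth_pct = elasticity * capital_increase_pct
print(f"expected GDP growth: {gdp_growth_pct:.1f}%")
```

This shows both why the consensus matters (the sign and rough magnitude are clear) and why it does not settle the question for open data: the elasticity varies across studies and infrastructure types, so the multiplier itself is uncertain.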
8 Measurements are proxies
Most measurements concerning open data impact need to be understood as proxies. They do not measure how open data creates impact directly; rather, from measuring a certain movement it can be surmised that something is doing the moving. Where opening data can be assumed to be doing the moving, and where opening data was a deliberate effort to create such movement, impact can then be assessed. We may not be able to easily see it, but still it moves.
9 Motives often shape measurements
Apart from the difficulty of measuring impact and the effort involved in doing so, there is also the question of why such impact assessments are needed. Is an impact assessment needed to create support for ongoing open data efforts, or to make existing efforts sustainable? Is an impact measurement needed for comparison with specific costs for a specific data holder? Is it to be used for evaluation of open data policies in general? In other words, in whose perception should an impact measurement be meaningful?
The purpose of impact assessments for open data further determines and/or limits the way such assessments can be shaped.
10 Measurements get gamed, become targets
Finally, with any type of measurement, there needs to be awareness that those with a stake in a measurement are likely to try and game the system, especially where measurements determine funding for further projects, or the continuation of an effort. This must lead to caution when determining indicators: measurements easily become targets in themselves. For instance, in the early days of national open data portals being launched worldwide, a simple metric often reported was the number of datasets a portal contained. This is an example of a ‘point’ measurement that can easily be gamed, for instance by subdividing a dataset into several subsets. The first version of the national portal of a major EU member did precisely that and boasted several hundred thousand datasets at launch, which were mostly small subsets of a bigger whole. It briefly made for good headlines, but did not make for impact.
In a second part I will take a closer look at what these 10 points mean for designing a measurement framework to track open data impact.
When I talk about Networked Agency, I talk about reducing the barrier to entry for all kinds of technology, as well as working methods, that we know work well in a fully networked situation. Reducing those barriers allows others to adopt these tools more easily and find power in a newfound ability to act. Networked agency needs tech and methods that can be easily deployed by groups, and that work even better when federated across groups and the globe-spanning digital human network.
The IndieWeb’s principles (own your own data, use tools that work well on their own and better when federated, avoid silos as the primary place where you post content) fit well with that notion.
Recently I said that I was coming back to a lot of my material on information strategies and metablogging from 2003-2006, but now with more urgency and a change in scope. Frank asked what I meant, and I answered:
“that the principles of the open web (free to use, alter, tinker, control, trust by you/your group) also apply to other techs (for instance energy production, blockchain, biohacking, open source hardware, cheap computing hardware, algorithms, IoT sensors and actuators) and methods (p2p, community building, social media usage/production, group facilitation etc.). Only then are they truly empowering, otherwise you’re just the person it is ‘done to’.”
Blockchain isn’t empowering you to run your own local currency if you can only run it on de-facto centralised infrastructure, where you’re exposed to propagating negative externalities, whether sudden Ethereum forks or the majority of BTC transactions being run on opaque Chinese computing clusters. It is empowering only if it is yours to deploy for a specific use. Until you can e.g. run a blockchain-based LETS easily for your neighbourhood or home town, on nodes that are Raspberry Pis attached to the LETS members’ routers, there is no reliable agency in blockchain.
IoT is not empowering if it means Amazon is listening into all your conversations, or your fire alarm sensors run through centralised infrastructure run by a telco. It is empowering if you can easily deploy your own sensors and have them communicate to an open infrastructure for which you can run your own gateway or trust your neighbour’s gateway. And on top of which your group does their own data crunching.
Community building methods are not empowering if they are only used to purposefully draw you closer to a clothing brand or football club so they can sell you more of their stuff, where tribalism is used to drive sales. They are empowering if you can, with your own direct environment, use those methods to strengthen local community relationships, learn how to collectively accommodate differences in opinions, needs, strengths and weaknesses, and reorient yourself as a group in a timely way to keep momentum. Dave Winer spoke about working together at State of the Net, and 3 years ago wrote about working together in the context of the open web. To work together there are all kinds of methods, but like community building methods, they aren’t widely known or adopted.
So, what applies to the open web and IndieWeb, I see applies to any technology and method we think helps increase the agency of groups in our networked world. More so as technologies and methods often need to be used in tandem. All these tools need to be ‘smaller’ than us, be ours. This is a key element of Networked Agency, next to seeing the group, you and a set of meaningful relationships, as the unit of agency.
Not just IndieWeb. More IndieTech. More IndieMethods.
More on this in the coming months I think, and in the runup to ‘Smart Stuff That Matters‘ late August.
Came across this post by Ruben Verborgh from last December, “Paradigm Shifts for the Decentralised Web“.
I find it helpful because of how it puts different aspects of wanting to decentralise the web into words. Ruben Verborgh mentions 3 simultaneous shifts:
1) End-users own their data, which is the one mostly highlighted in light of things like the Cambridge Analytica / Facebook scandal.
2) Apps become views, when they are disconnected from the data, as they are no longer the single way to see that data
3) Interfaces become queries, when data is spread out over many sources.
Those last two specifically help me think of decentralisation in different ways. Do read the whole thing.
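A minimal sketch of what the second and third shifts could mean in practice: the app is no longer tied to one silo’s database, but is a view over a query that merges the same kind of data from several user-controlled sources. The source names and record layout below are invented for illustration only:

```python
# Sketch: an app as a 'view' over data spread across decentralised
# sources, per Verborgh's shifts 2 and 3. Sources and records are
# hypothetical stand-ins for e.g. personal data pods.

personal_pod = [
    {"date": "2018-04-01", "text": "Post stored on my own domain"},
]
friends_pod = [
    {"date": "2018-04-02", "text": "Post fetched from a friend's pod"},
]

def timeline(*sources):
    """Merge records from any number of sources into one chronological view."""
    merged = [item for source in sources for item in source]
    return sorted(merged, key=lambda item: item["date"])

for item in timeline(personal_pod, friends_pod):
    print(item["date"], item["text"])
```

The design point is that the timeline function owns no data: add or remove a source and the view adapts, which is exactly what a silo’s hard-wired interface cannot do.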
Yesterday at State of the Net I showed some of the work I did with the great Frysklab team, letting a school class find power in creating their own solutions. We had, I think, a very nicely working triad of talks in our session: Hossein Derakshan first, me in the middle, followed by Dave Snowden. In his talk, Dave referenced my preceding one, saying it needed scaling for the projects I showed to alter anything. Although I know Dave Snowden didn’t mean his call for scale that way, often when I hear it, it is rooted in the demand-for-ever-more-growth type of systems we know cannot be sustained in a closed world system like earth’s. The small world syndrome, as I named it at Shift 2010, will come biting.
It so often also assumes there needs to be one person or entity doing the scaling, a scaler. Distributed networks don’t need a scaler per se.
The internet was not created that way, nor was the Web. Who scaled RSS? Some people moved it forward more than others, certainly, but they were unconnected people, simply recognising a possibility to fruitfully build on others’ work for something they personally felt was needed. Dave Winer spread it with Userland, made it more useful, and added the possibility of having the payload be something other than just text, letting it carry podcasts. We owe him a lot for the actual existence of this basic piece of web plumbing. Matt Mullenweg of WordPress and Ben and Mena Trott of Movable Type helped it forward by adding RSS to their blogging tools, so people like me could use it ‘out of the box’. But it actually scaled because bloggers like me wanted to connect. We recognised the value of making it easy for others to follow us, and for us to follow the writings of others. So I and others created our own templates, starting from copying something someone else had already made and figuring out how to use RSS. It is still how I adopt most of my tools. Every node in a network is a scaler: by doing something because it is of value to them in the moment, they change themselves, and by extension add themselves to the growing number of nodes doing it. Some nodes may take a stronger interest in spreading something, convincing others to adopt it, but that’s about it. You might say the source of scaling is the invisible hand of networks.
That’s why I fully agree with Chris Hardie that in the open web, all the tools you create need to have the potentiality of the network effect built in. Of course, when something is too difficult for most to copy or adapt, then there won’t be this network effect. Which is why most of the services we see currently dominating online experiences, the ones that shocked Hossein upon returning from his awful forced absence, are centralised services made very easy to use. Where someone was purposefully aiming for scale, because their business depended on it once they recognised their service had the potential to scale.
Dave Winer yesterday suggested the blogosphere is likely bigger now than in the ‘00s, when it was so dominantly visible, when your blog post could be Google’s top hit for a specific topic, when I could be found just on my first name. But it is so much less visible than before, precisely because it is not centralised, while the extravagant centralised silos stand out so much. However, as Dave Winer responded to Hossein Derakshan’s talk yesterday, the blogosphere also diminished itself.
People still blog, more people blog than before, but we no longer build the same amount of connections across blogs, connections we were so in awe of when our writing first proved to have the power to create them. I and many others, bloggers all, suckered ourselves into feeling blog posts needed to be more like reporting, like essays, and took our conversations to the comments on Facebook. Facebook, which, as Hossein Derakshan pointed out, makes such a travesty of what web links are by allowing them only as separate from the text you write on Facebook: it treats all links as references to articles, not allowing you to embed them in the text, or to present more than one link meaningfully. That further reinforced the blog-posts-as-articles notion. That further killed the link as a way of weaving a web of distributed conversations, a potential source of meaning. It turned the web, turned your timeline, into TV, as Hossein phrased it.
Hoder on ‘book-internet’ (blogs) and ‘tv-internet’ (FB et al) Tweet by Anna Masera
After the conference, as we sat there enjoying an Italian late Friday afternoon over drinks, Adriana Lukas and I talked about the Salons of old. How we both have created settings like that through the years, Quantified Self meetings, BlogWalks, Birthday Unconferences, and how we approached online sharing like that too. To just add some of my and your ramblings to the mix. Starting somewhere in the middle, following a few threads of thought and intuitions, adding a few links (as ambient humanity), and ending without conclusions. Open ended. Just leaving it here.
In an open letter (PDF), a range of institutions calls upon their respective European governments to create ELLIS, the European Lab for Learning and Intelligent Systems. It’s an effort to fortify against brain drain, and instead attract top talent to Europe. It points to Europe’s currently weak position in AI, between what is happening in the USA and in China, adding a geo-political dimension. The letter calls not so much for an institution with a large headcount, but for commitment to long-term funding to attract and keep the right people. Similar reasons led to the founding of CERN, now a global center for physics (and a key driver of things like open access to research and open research data), and more recently the European Molecular Biology Laboratory.
At the core the signatories see France and Germany as most likely to act to start this inter-governmental initiative. It seems this nicely builds upon the announcement by French president Macron late March to invest heavily in AI, and to keep and attract the right people for it. He too definitely sees the European dimension to this, even puts European and enlightenment values at the core of it, although he acted within his primary scope of agency, France itself.
(via this Guardian article)
RSS is the most important piece of internet plumbing for following new content from a wide range of sources. It allows you to download new updates from your favourite sites automatically and read them at your leisure. Dave Winer, forever dedicated to the open web, created it.
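At its core an RSS feed is just an XML document listing items; a reader fetches it periodically and shows you what is new. A minimal sketch of reading one with Python’s standard library (the feed content below is a made-up example; a real reader would fetch the XML over HTTP from each followed site’s feed URL):

```python
# Minimal sketch: parsing an RSS 2.0 feed with the standard library.
# The feed XML is a made-up example standing in for a fetched feed.
import xml.etree.ElementTree as ET

feed_xml = """<rss version="2.0">
  <channel>
    <title>Example blog</title>
    <item>
      <title>New post</title>
      <link>https://example.com/new-post</link>
      <pubDate>Fri, 01 Jun 2018 10:00:00 GMT</pubDate>
    </item>
  </channel>
</rss>"""

root = ET.fromstring(feed_xml)
for item in root.iter("item"):
    # Each <item> carries a title and a link back to the source site,
    # which is all a reader needs to list updates from many blogs.
    print(item.findtext("title"), "->", item.findtext("link"))
```

The simplicity is the point: because the format is this plain, anyone could add a feed to their blog template or write a reader, which is part of why RSS spread without a central scaler.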
I used to be a very heavy RSS user. I tracked hundreds of sources on a daily basis. Not as news but as a way to stay informed about the activities and thoughts of people I was interested in. At some point, that stopped working. Popular RSS readers were discontinued, most notably Google’s RSS reader, many people migrated to the Facebook timeline, platforms like Twitter stopped providing RSS feeds to make you visit their platform, and many people stopped blogging. But with FB in the spotlight, there is some interest in refocusing on the open web, and with it on RSS.
Currently I am repopulating from scratch my RSS reading ‘antenna’, following around 100 people again.
Wired in its call for an RSS revival suggests a few RSS readers. I, as I always have, use a desktop RSS reader, which currently is ReadKit. The FB timeline presents stuff to you based on their algorithmic decisions. As mentioned I definitely would like to have smarter ways of shaping my own information diet, but then with me in control and not the one being commoditised.
So it’s good to read that RSS Reader builders are looking at precisely that.
“Machines can have a big role in helping understand the information, so algorithms can be very useful, but for that they have to be transparent and the user has to feel in control. What’s missing today with the black-box algorithms is where they look over your shoulder, and don’t trust you to be able to tell what’s right,” says Edwin Khodabakchian, co-founder and CEO of RSS reader Feedly (which currently has 14 million users). That is more or less precisely my reasoning as well.