When I talk about Networked Agency, I talk about reducing the barrier to entry for all kinds of technology as well as working methods that we know work well in a fully networked situation. Reducing those barriers allows others to adopt these tools more easily and find power in a regained ability to act. Networked agency needs tech and methods that can be easily deployed by groups, and that work even better when federated across groups and the globe-spanning digital human network.

The IndieWeb’s principles (own your own data, use tools that work well on their own and better when federated, avoid silos as the primary place where you post content) fit well with that notion.

Recently I said that I was coming back to a lot of my material on information strategies and metablogging from 2003-2006, but now with more urgency and a change in scope. Frank asked what I meant, and I answered

that the principles of the open web (free to use, alter, tinker with, control, and trust by you/your group) also apply to other techs (for instance energy production, blockchain, biohacking, open source hardware, cheap computing hardware, algorithms, IoT sensors and actuators) and methods (p2p, community building, social media usage/production, group facilitation etc.). Only then are they truly empowering; otherwise you’re just the person it is ‘done to’.

Blockchain isn’t empowering you to run your own local currency if you can only run it on de-facto centralised infrastructure, where you’re exposed to propagating negative externalities, whether sudden Ethereum forks or the majority of BTC transactions being run on opaque Chinese computing clusters. It is empowering only if it is yours to deploy for a specific use. Until you can, say, easily run a blockchain-based LETS for your neighbourhood or home town on nodes that are Raspberry Pis attached to the LETS members’ routers, there is no reliable agency in blockchain.

IoT is not empowering if it means Amazon is listening in on all your conversations, or your fire alarm sensors run through centralised infrastructure operated by a telco. It is empowering if you can easily deploy your own sensors and have them communicate with an open infrastructure for which you can run your own gateway or trust your neighbour’s gateway, and on top of which your group does its own data crunching.

Community building methods are not empowering if they are only used to purposefully draw you closer to a clothing brand or football club so they can sell you more of their stuff, where tribalism is used to drive sales. They are empowering if you can, with your own direct environment, use those methods to strengthen local community relationships, learn how to collectively accommodate differences in opinions, needs, strengths and weaknesses, and reorient yourselves as a group in time to keep momentum. Dave Winer spoke about working together at State of the Net, and three years ago wrote about working together in the context of the open web. There are all kinds of methods for working together, but like community building methods, they aren’t widely known or adopted.

So what applies to the open web and the IndieWeb applies, as I see it, to any technology and method we think helps increase the agency of groups in our networked world. More so as technologies and methods often need to be used in tandem. All these tools need to be ‘smaller’ than us, be ours. This is a key element of Networked Agency, next to seeing the group, you and a set of meaningful relationships, as the unit of agency.

Not just IndieWeb. More IndieTech. More IndieMethods.

How would the ‘Generations’ model of the IndieWeb look if transposed to IndieTech and IndieMethods? What is Selfdogfooding when it comes to methods?

More on this in the coming months I think, and in the run-up to ‘Smart Stuff That Matters’ late August.

Came across this post by Ruben Verborgh from last December, “Paradigm Shifts for the Decentralised Web”.

I find it helpful because of how it puts different aspects of wanting to decentralise the web into words. Ruben Verborgh mentions 3 simultaneous shifts:

1) End-users own their data, which is the one mostly highlighted in light of things like the Cambridge Analytica / Facebook scandal.

2) Apps become views, when they are disconnected from the data, as they are no longer the single way to see that data.

3) Interfaces become queries, when data is spread out over many sources.

Those last two specifically help me think of decentralisation in different ways. Do read the whole thing.

Dave Winer, one of the earliest bloggers, if not the earliest, asks what became of the blogosphere. It was a topic of the conversations in Trieste two weeks ago at State of the Net, where we both were on the program.

I get what he says about losing the center, and seeing that center as a corporation back then. This is much in the way Tantek Celik talked about the silos first being friendly and made by the people we knew, but then getting sold, which I wrote about yesterday. Creating a new center, or centers, is worthwhile, I concur with Dave, and if it can’t be a company at the center, then maybe it should be a network, or an organisational manifestation thereof such as a cooperative. An expression of networked agency.

Because of that I wonder about Dave’s last point “There used to be a communication network among bloggers, but that’s gone now.”

I asked (on Facebook), “What to you was that previous communications network, and what was it built on? What type of communications would you like to see re-emerge?” The answer is about being able to discover other bloggers, like Dave’s Weblogs.com platform used to do (and still does, but most updates are spam).

Blogs to me are distributed conversations. Look at the unbridled enthusiasm I expressed 11 years ago when I wrote about five years of blogging in this space, and the list of people I then regarded as my regular blogging conversation partners. It is currently harder to create those conversations, and it has become harder for me to notice when something I write is reacted to. Much of the IndieWeb discussion is about at least being able to discover all online facets of someone from their own domain, and pulling responses back there too. Something I still need to explore: how to do that in a way that fits me.

In terms of communication and connecting, it would be great if I could explore the blogosphere much as in the picture below. Created by Anjo Anjewierden and presented at the AOIR conference in Chicago in 2005 by Lilia Efimova, it shows a representation of my blog network based on text analysis of my and other people’s blogs. It’s a pretty good picture of what my blog ‘neighbourhood’ looked like then.

Or this one, also by Anjo Anjewierden, from 2008, titled “the big one”. It shows conversations between my and others’ blogs. Grey boxes are conversations across blogs (the bigger the box, the more blog postings), the other dots are postings that refer to such a conversation but aren’t part of it. Top left, a box is ‘opened up’ to show there are different postings (coloured dots) inside it.

Makes me want to have a personal crawler that maps out connections between blogs! Are there any ‘personalised’ crawlers out there?
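Thinking out loud about what such a personal crawler might start out as: a script that fetches a few seed blogs and tallies which other domains they link to. Everything below (the seed list, the crude link extraction, the depth of one page) is a placeholder sketch for illustration, not an existing tool.

```python
# Rough sketch of a 'personal' blog crawler: start from a few seed blogs,
# collect outbound links from their front pages, and tally which domains
# they point to. Seed URLs are placeholders.
import re
from collections import Counter
from urllib.parse import urlparse

import requests

SEEDS = [
    "https://blog-one.example.org/",
    "https://blog-two.example.net/",
]

def outbound_domains(url: str) -> Counter:
    """Fetch a page and count the external domains it links to."""
    try:
        html = requests.get(url, timeout=10).text
    except requests.RequestException:
        return Counter()
    links = re.findall(r'href="(https?://[^"]+)"', html)  # crude, good enough for a sketch
    own = urlparse(url).netloc
    return Counter(
        urlparse(link).netloc
        for link in links
        if urlparse(link).netloc and urlparse(link).netloc != own
    )

if __name__ == "__main__":
    for seed in SEEDS:
        print(seed)
        for domain, count in outbound_domains(seed).most_common(10):
            print(f"  {domain}: {count} links")
```

From there it would be a small step to feed the resulting domain counts into a graph and draw a ‘neighbourhood’ picture like the ones above.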

When Hossein Derakhshan came back online after a six-year absence in 2015, he was shocked to find how the once free-flowing web had ended up in walled gardens and silos. Musing about what he presented at State of the Net earlier this month, I came across Frank Meeuwsen’s posting about the IndieWeb Summit starting today in Portland (livestream on YouTube). That sent me off on a short trip around the IndieWeb and related topics.

I came across this 2014 video of Tantek Celik. (He, Chris Messina and Andy Smith organised the first ever BarCamp in 2005, followed by a second one in Amsterdam where I met the latter two and many other fellow bloggers/techies.)

In his talk he looks back at how the web got siloed, and talks from a pure techie perspective about much the same things Hoder wrote about in 2015 and talked about this month. He places ‘peak open web’ in 2003, just before the web 2.0 silos came along. Those first silos (like Flickr, Delicious etc.) were ‘friendly silos’. We knew the people who built them, and we trusted their values; we assumed the open web was how it was, is and would remain.

The friendly silos got sold; other, less friendly silos emerged.
The silos have three things that make them hugely attractive. One is a ‘dark pattern’: adding functionality that feeds your dopamine cravings, such as like and heart buttons. The other two are where the open web is severely lacking: the seamless integration of both reading and writing into one user interface, making it very easy to respond to others or add to the river of content, and the ability to find people and walk the social graph, by jumping from a friend to their list of friends and so on. The open web never got there. We had things like Qumana that tried to combine reading and writing, but it never really took off. We had FOAF, but it never became easy.

So, Tantek and others set out in 2011 to promote the open web, the IndieWeb, starting from those notions: owning your data and content, and federating to participate. In his slides he briefly touches upon many small things he did, some of which I realised I could quickly adopt or do.
So I

  • added IndieAuth to my site (using the IndieAuth WP plugin), so that I can use my own website to authenticate on other services such as my user profile on the IndieWeb wiki (a bit like Facebook Connect, but from my own server).
  • added new sharing buttons to this site that don’t track you simply by being displayed (using the GDPR-compliant Shariff plugin), which include Diaspora and Mastodon sharing buttons
  • followed Tantek’s notion of staying in control of the URLs you share, e.g. by using your own URLs such as zylstra.eu/source/apple-evernote-wordpress to redirect to my GitHub project of that name (so should GitHub be eaten alive after Microsoft’s takeover, you can run your own git node or migrate, while the URLs stay valid).
  • decided to go to IndieWeb Camp in Nuremberg in October, together with Frank Meeuwsen

My stroll on the IndieWeb this morning leaves me with two things:

  • I really need to more deeply explore how to build loops between various services and my site, so that for all kinds of interactions my site is the actual repository of content (a minimal Webmention sketch follows after this list). This likely also means making posting much easier for myself. The remaining challenge is my need to more fluidly cater to different circles of social distance / trust, layers that aren’t public but open to friends.
  • The IndieWeb concept is more or less the same as what I think any technology or method should be to create networked agency: within control of the group that deploys it, useful on its own, more useful federated, and easy enough to use so my neighbours can adopt it.
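As a first, minimal building block for such loops, here is a rough sketch of sending a Webmention, the IndieWeb way of telling another site that you linked to it. The endpoint discovery step is simplified (a proper implementation parses the HTML instead of using a regex), and the URLs in the example are placeholders; this is an illustration, not the code I actually run.

```python
# Minimal Webmention sender: discover the target's webmention endpoint,
# then POST 'source' and 'target' to it (per the W3C Webmention spec).
import re
from typing import Optional
from urllib.parse import urljoin

import requests

def discover_endpoint(target: str) -> Optional[str]:
    resp = requests.get(target, timeout=10)
    # 1. HTTP Link header, e.g. Link: <https://example.com/webmention>; rel="webmention"
    if "webmention" in resp.links:
        return urljoin(target, resp.links["webmention"]["url"])
    # 2. Crude fallback: a <link rel="webmention" href="..."> in the HTML
    match = re.search(r'<link[^>]+rel="webmention"[^>]+href="([^"]+)"', resp.text)
    return urljoin(target, match.group(1)) if match else None

def send_webmention(source: str, target: str) -> int:
    endpoint = discover_endpoint(target)
    if endpoint is None:
        raise ValueError(f"No webmention endpoint found for {target}")
    resp = requests.post(endpoint, data={"source": source, "target": target}, timeout=10)
    return resp.status_code  # 201/202 means the receiver accepted it

# Example (placeholder URLs):
# send_webmention("https://myblog.example/reply-to-foo", "https://otherblog.example/foo")
```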

At State of the Net yesterday I used the concept of macroscopes. I talked about how many people don’t really feel where their place is in the face of global changes, like climate change, ageing, the pressures on rules and institutions, the apparent precarity of global financial systems. That many feel that, whatever their actions, they will have no influence on those changes. That many feel so much of the change around them is being done to them, merely happens to them, like the weather.
Macroscopes provide a perspective that may address such feelings of powerlessness, and help us in the search for meaning.

Macroscopes, being the opposite of microscopes, allow us to see how our personal situation fits into a wider global whole. The term comes from John Thackara, in the context of social and ecological design. He says a macroscope “allows us to see what the aggregation of many small interactions looks like when added together”. It makes the processes and systems that surround us visible and knowable.

I first encountered the term macroscope at the 2009 Reboot conference in Copenhagen where Matt Webb in his opening keynote invoked Thackara.
Matt Webb also rephrased what a macroscope is: “a macroscope shows you where you are, and where within something much bigger, simultaneously. To understand something much bigger than you in a human way, at human scale, in your heart.” His way of phrasing it has stayed with me over the past years. I like it very much because it adds human emotion to the concept of macroscopes. It gives us a sense of having a place, a sense of meaning. As meaning is deeply emotional.

Seeing the small … and the bigger picture simultaneously. (Chuck Close self portrait, 1995, at Drents Museum)

Later, in his on-stage conversation at State of the Net, Dave Winer remarked that for Donald Trump’s base MAGA is such a source of meaning, and I think he’s right. Even though it’s mostly an expression of hope of the kind I typified in my talk as salvationism. (Someone will come along and make everything better: a populist, an authoritarian, a deity, or speakers pontificating on stage.)

I’ve encountered macroscopes that worked for people in organisations, though sometimes they can appear very contrived viewed from the outside. The man who cleans the urinals at an airport and says he’s ensuring 40 million people per year have a pleasant and safe trip is clearly using a macroscope effectively. It’s one I can empathise with as aiming for great hospitality, but it also feels a bit contrived, because so many other things at an airport, such as the cattle prodding at security and the leg room on your plane, so clearly don’t chime with it.

In the Netherlands I encountered two examples of working macroscopes. Everyone I met at the Court of Audit reflexively compares every idea and proposal to the way their institution’s role is described in the constitution. Not out of caution, but out of a real sense of purpose: working on behalf of the people to check how government spends its money. The other was the motto of the government engineering department responsible for water works and coastal defences: “Keeping our feet dry”. With so much of our country below sea level, and the catastrophic floods of 1953 seared into our collective memory, it’s a highly evocative macroscope that draws an immediate emotional response. They have since watered it down, and now it’s back to something bloodless and bland, likely the result of a dreary mission statement workshop.

In my talk I positioned networked agency as a macroscope. Globe-spanning digital networks and our human networks are, to my mind, very similar in the way they behave, and hugely overlapping. So much so that they can be treated as one: we should think in terms of human digital networks. There is meaning, the deeply felt kind of meaning, to be found in doing something together with a group. There’s also a tremendous sense of power to be felt from the ability to solve something for yourself as a group. Seeing your group as part, as a distinctive node or local manifestation, of the earth-wide human digital network allows you to act in your own way as part of global changes, and to see the interdependencies. That also lets you see how to build upon the opportunities that emerge from the global network, while being able to disconnect or shield yourself from negative things propagating over it. Hence my call to build tools (technologies and methods) that are useful on their own within a group, as a singular instance, but more useful when federated with other instances across the global network. Tools shaped like that mean no one but the group using them can switch them off, and the group can afford to disconnect from the wider whole on occasion.

A good quote from Thomas Madsen-Mygdal:

4 billion dollar ico yesterday.
Seen a generation and tech with big potential end up in ipo games, greed, speculation and short term thinking – “saw the beast of capitalism directly in it’s eyes” is my mental image of the dotcom bubble.
A natural consequence of any technology cycle I rationally know, but just sad to see generations repeating previous mistakes over and over.

In the comments Martin von Haller Grønbaek points to what happened after the dotcom bubble burst, a tremendous wave of innovation. So blockchain tech is set to blossom after the impending ICO crash.

The Washington Post now has a premium ‘EU’ option, suggesting you pay more for them to comply with the GDPR.

Reading what the offer entails of course shows something different. The basic offer is the price you pay to read their site, but you must give consent for them to track you and serve you targeted ads. The premium offer is the price you pay for a completely ad-free, and thus tracking-free, version of the WP, akin to what various other outlets and e.g. many mobile apps do too.

This of course has little to do with GDPR compliance. For the free and basic subscription they still need to be compliant with the GDPR, but you enter into a contract that includes your consent, which gets them to that compliance. They will still need to explain to you what they collect and what they do with it, for instance. And they do, e.g. by listing all the partners they exchange visitor data with.

The premium version gives you an ad-free WP, so the issue of GDPR compliance doesn’t even come up (except of course for things like commenting, which is easy to handle). Which is an admission of two things:

  1. They don’t see any justification for how their ads work other than getting consent from a reader. And they see no hassle-free way to offer readers informed consent options or granular controls that doesn’t impact the way their ad-tech works, without running afoul of the rule that consent cannot be tied to core services (like visiting their website).
  2. They value tracking you at $30 per year.

Of course their free service is still forced consent, and thus runs afoul of the GDPR, as you cannot see their website at all without it.

Yet, just to peruse an occasional article, e.g. when following a link, that forced consent is nothing your browser can’t handle with a blocker or two, and a VPN if you want. After all, your browser is your castle.

Today I was at a session at the Ministry for Interior Affairs in The Hague on the GDPR, organised by the center of expertise on open government. It made me realise how I actually approach the GDPR, and how I see all the overblown reactions to it, like sending all of us a heap of mail to re-request consent where none is needed, or even taking your website or personal blog offline. I find I approach the GDPR like I approach a quality assurance (QA) system.

One key change with the GDPR is that organisations can now be audited on their preventive data protection measures, which of course already mimics QA. (Apart from that, the GDPR is mostly an incremental change to the previous law, except that the people described by your data now have articulated rights that apply globally, and that it has a new set of teeth in the form of substantial penalties.)

AVG mindmap
My colleague Paul facilitated the session and showed this mindmap of GDPR aspects (AVG is the Dutch abbreviation for the GDPR). I think it misses the more future-oriented parts.

The session today had three brief presentations.

In one, a student showed some results from his thesis research on the implementation of the GDPR, for which he had spoken with a lot of data protection officers, or DPOs. These are mandatory roles for all public sector bodies, and also for some specific types of data processing companies. One of the surprising outcomes is that some of these DPOs saw themselves, and were seen, as ‘outposts’ of the data protection authority, in other words as enforcers or even potentially as moles. This is not conducive to the part of a DPO’s role that is about raising awareness of and sensitivity to data protection issues. It strongly reminded me of when, 20 years ago, I was involved in creating a QA system from scratch for my then employer. Some of my colleagues saw the role of the quality assurance manager as policing their work. It took effort to show that we were not building a straitjacket around them to keep them within strict boundaries, but providing a solid skeleton to grow on and move faster. Where audits are not hunts for breaches of compliance, but a way to make emergent changes in the way people work visible, and to incorporate the professionally justified ones into that skeleton.

In another presentation a civil servant of the Ministry talked about creating a register of all person-related data being processed. What stood out most for me was the (rightly) pragmatic approach they took in describing current practices and data collections inside the organisation. This is a key element of QA as well. You work from descriptions of what happens, not of what ‘should’ or ‘ideally’ happens. QA is a practice rooted in pragmatism, where once that practice is described and agreed it will be audited.
Of course in the case of the Ministry it helps that they only have tasks mandated by law, so the grounds for processing are clear by default, and where they aren’t, the data should not be collected. This reduces the range of potential grey areas. Similarly for security measures: they already need to adhere to national security guidelines (the national baseline information security), which likewise helps avoid new measures, proves compliance, and provides an auditable security requirement to go with it. This no doubt helped them take that pragmatic approach, which is at the core of QA as well: it takes its cues from what is really happening in the organisation, from what the professionals are really doing.

A third presentation, by the national Forum for Standardisation, dealt with open standards for both processes and technologies. Since 2008 a growing list, currently of some 40 standards, is mandatory for Dutch public sector bodies. In this list you find a range of elements that are ready-made to help with GDPR compliance: support for the rights of those described by the data, such as the right to data export and portability, preventive technological security measures, and ‘by design’ data protection measures. Some of these are ISO norms themselves, or, like the mentioned national baseline information security, compliant derivatives of such norms.

These elements, the ‘police’ versus ‘counsel’ perspective on the role of a DPO, the pragmatism that needs to underpin actions, and the building blocks readily found elsewhere in your own practice and already based on QA principles, made me realise and better articulate how I’ve been viewing the GDPR all along: as a quality assurance system for data protection.

With a quality assurance system you can still famously produce concrete swimming vests, but at least it will be done consistently. Likewise, with the GDPR you will still be able to do all kinds of things with data. Big data and developing machine learning systems are hard but hopefully worthwhile to do. With the GDPR it will just be hard in a slightly different way, but it will also be helped by establishing some baselines and testing core assumptions, while making your purposes and ways of working available for scrutiny. Introducing QA does not change the way an organisation works, unless it really doesn’t have its house in order. Likewise the GDPR won’t change your organisation much if you have your house in order.

From the QA perspective on the GDPR, it is perfectly clear why it has a moving baseline (through its ‘by design’ and ‘state of the art’ requirements). From the same perspective it is also perfectly clear how it connects to the way Europe is positioning itself geopolitically in the race concerning AI. The policing perspective, after all, only leads to a luddite stance concerning AI, which is not what the EU is doing, far from it. From that it is clear how the legislator intends the thrust of the GDPR: as QA, really.

At least I think it is…

Personal blogs don’t need to comply with the new European personal data protection regulation (already in force, but enforceable from next week, May 25th), says Article 2.2.c. However, my blog does have a link with my professional activities, as I blog here about professional interests. One of those interests is data protection (the more you’re active in transparency and open data, the more you also start caring about data protection).

In the past few weeks Frank Meeuwsen has been writing about how to get his blog GDPR compliant (GDPR and the IndieWeb 1, 2 and 3, all in Dutch), and Peter Rukavina has been following suit. Like yours, my e-mail inbox is overflowing with GDPR related messages and requests from all the various web services and mailing lists I’m using. I had been thinking about adding a GDPR statement to this blog, but clearly needed a final nudge.

That nudge came this morning as I updated the Jetpack plugin of my WordPress blog. WordPress is the software I use to create this website, and Jetpack is a module for it, made by the same company that makes WordPress itself, Automattic. After the update, I got a pop-up stating that in my settings a new option now exists called “Privacy Policy”, which comes with a guide and suggested texts to be GDPR compliant. I was pleasantly surprised by this step by Automattic.

So I used that to write a data protection policy for this site. It is rather trivial in the sense that this website doesn’t do much, yet it is also surprisingly complicated, as there are many different potential rabbit holes to go down. It concerns not just comments or webmentions, but also the server logs my web hoster keeps, statistics tools (some of which I don’t use but cannot switch off either), third-party plugins for WordPress, embedded material from data-hungry platforms like YouTube, etc. I have a relatively bare-bones blog (over the years I made it ever more minimalistic, most recently stripping out things like sharing buttons), and still, asking myself questions that normally only legal departments would ask, there are many aspects to consider. That is of course the whole point: that we ask these types of questions more often, not just of ourselves, but of every service provider we engage with.

The resulting Data Protection Policy is now available from the menu above.

Over the years there have been several things I’ve automated in my workflow. This week it was posting from Evernote to WordPress, saving me over 60 minutes per week. Years ago I automated starting a project, which saves me about 20 minutes each time I start a new project (of whatever type), by populating my various workflow tools with the right things for it. I use Android on my phone, and my to-do application Things is Mac-only, so at some point I wrote a little script that allows me to jot down tasks on my phone that then get sent to Things. As Things can now process email, that has become obsolete. I have also written tiny scripts that allow me to link to Evernote notes and Things items from inside other applications.

I’m still working on a chat-based script in my terminal that takes me through my daily starting routine, as well as my daily closing routine. This is to take the ‘bookkeeping’ character away, and to make it easier for me to, for instance, track a range of lead indicators.
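To make that concrete, a bare-bones sketch of what such a routine script could look like; the questions, the example lead indicator and the file name are made-up placeholders, not my actual routine.

```python
# Sketch of a chat-like terminal script for a daily routine,
# appending the answers to a CSV log for later review.
import csv
from datetime import date
from pathlib import Path

QUESTIONS = [
    "What is the one thing that must get done today?",
    "How many outreach emails did you send yesterday?",  # example lead indicator
    "Energy level this morning (1-5)?",
]

LOGFILE = Path("daily_routine.csv")

def run_routine() -> None:
    answers = [date.today().isoformat()]
    for question in QUESTIONS:
        answers.append(input(question + " ").strip())
    is_new_file = not LOGFILE.exists()
    with LOGFILE.open("a", newline="") as f:
        writer = csv.writer(f)
        if is_new_file:
            writer.writerow(["date"] + QUESTIONS)  # header row on first run
        writer.writerow(answers)
    print("Logged. Have a good day.")

if __name__ == "__main__":
    run_routine()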

I know many others, like Peter Rukavina or Frank Meeuwsen also automate stuff for themselves, and if you search online the sheer range of examples you can find is enormous. Yet, I find there is much to learn from hearing directly from others what they automate, how and why it is important to them, as the context of where something fits in their workflow is crucial information.

What are the things you automate? Apart from the full-on techie things, like starting a new virtual server on Amazon, I mean. The more mundane day-to-day things in your workflow, beyond keyboard shortcuts? And have you published how you do that somewhere online?

I’ve finished building an AppleScript for automatically creating a Suggested Reading blogpost from my Evernote bookmarks quicker than I thought.

Mostly because in my previous posting on this I had, in an example of blogging as thinking out loud, already created a list of steps I wanted to take. That made it easier to build the step-by-step solution in AppleScript and find online examples where needed.

Other key ingredients were the AppleScript Language Guide, the Evernote dictionary for AppleScript (which contains the objects from Evernote available to AppleScript), the Evernote query language (for retrieving material from Evernote), and the Postie plugin documentation (which I use to mail to WordPress).

In the end I spent most of my time getting the syntax right for talking to the WordPress plugin Postie. By testing it multiple times I ultimately got the sequence of elements right.

The resulting script is on duty from now on. I automatically call it every Monday afternoon. The result is automatically mailed to my WordPress installation, which saves it as a posting with a publication date set for Tuesday afternoon. This allows me time to review or edit the posting if I want, but if I don’t, WordPress will go ahead and post it.
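Purely for illustration, the mailing step expressed as a Python sketch. My actual script does this in AppleScript via Apple Mail; the SMTP server, credentials and addresses below are placeholders, not real configuration.

```python
# Mail a generated post body to the address Postie watches,
# so WordPress turns the e-mail into a (scheduled) posting.
import smtplib
from email.message import EmailMessage

def mail_post_to_wordpress(subject: str, body: str) -> None:
    msg = EmailMessage()
    msg["From"] = "me@example.org"
    msg["To"] = "secret-postie-address@example.org"  # the post-by-mail address Postie checks
    msg["Subject"] = subject  # Postie uses the subject as the post title by default
    msg.set_content(body)

    with smtplib.SMTP_SSL("smtp.example.org", 465) as smtp:
        smtp.login("me@example.org", "app-password-here")
        smtp.send_message(msg)

# Example:
# mail_post_to_wordpress("Suggested Readings", "<ul><li>...</li></ul>")
```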

There is still some room for improvement. First, I currently use Apple Mail to send the posting to WordPress. My default mail tool is Thunderbird, so I had to configure Mail for this, which I would rather not have done. Second, the tags from Evernote that I use in the title of the posting aren’t capitalised yet, which I would prefer. Good enough for now though.

I’ve posted the code to my GitHub account, where it is available under an open license. For my own reference I also posted it in the wiki pages of this blog.

The bookmarks to use, as listed in Evernote…

…and the resulting posting scheduled in WordPress

Jan Koum, the second founder of WhatsApp, has left Facebook, apparently over differences in dealing with encryption and the sharing of WhatsApp data. The other founder, Brian Acton, had already left Facebook last September over similar issues. He donated $50 million to the non-profit Signal Foundation earlier this year, and stated he wanted to work on transparent, open-source development and uncompromising data protection. (Koum on the other hand said he was going to spend time collecting Porsches….) Previously the European Union fined Facebook 110 million Euro for lying about matching up WhatsApp data with Facebook profiles when Facebook acquired WhatsApp in 2014. Facebook at the time said it couldn’t match WhatsApp and Facebook accounts automatically, then two years later did precisely that, even though the technology for it already existed in 2014 and Facebook was aware of it. Facebook said the “errors made in its 2014 filings were not intentional”. Another “we’re sorry, honestly” moment for Facebook in a 15-year-long apology tour going back to even before its inception.

I have WhatsApp on my phone but never use it to initiate contact. Some in my network however don’t use any alternatives.

The gold standard for messaging apps is Signal by Open Whisper Systems. Other applications such as WhatsApp, FB Messenger and Skype have actually incorporated Signal’s encryption technology (it’s open after all), but in untestable ways (they’re not open after all). Signal is available on your phone and as a desktop app (paired with your phone). It does require you to disclose a phone number, which is a drawback. I prefer using Signal, but its uptake is slow in Western countries.

Other possible apps using end-to-end encryption are:
Threema, a Switzerland-based application, which I also use, but not with many contacts. Trust levels in the application are partly based on exchanging keys when meeting face to face, adding a non-tech layer. It also claims not to store metadata (anonymous use is possible, no phone number necessary, no logging of who communicates with whom, contact lists and groups stay locally on your device, etc.). Yet the app itself isn’t open for inspection.

Telegram (originating in Russia, but now banned there for not handing over encryption keys to the Russian authorities, and also banned in Iran, where it has 40 million users, 25% of its global user population). I don’t use Telegram, and don’t know many in my network who do.

Interestingly, the rise in the use of encrypted messaging is very high in countries that rank high on the corruption perception index. The same data also shows how slowly Signal is growing in other countries.

VPN tools allow you to circumvent the blocking of an app by pretending to be in a different country. However VPN use, a standard practice in businesses for giving employees remote access, is itself banned in various countries (or only allowed via ‘approved’ VPN suppliers, which basically means bans of a messaging app will still be enforced).

Want to message me? Use Signal. Use Threema if you don’t want to disclose a phone number.

The US government is looking at whether to start charging again for providing satellite imagery and data from the Landsat satellites, according to an article in Nature.

Officials at the Department of the Interior, which oversees the USGS, have asked a federal advisory committee to explore how putting a price on Landsat data might affect scientists and other users; the panel’s analysis is due later this year. And the USDA is contemplating a plan to institute fees for its data as early as 2019.

To “explore how putting a price on Landsat data might affect” the users of the data will result in predictable answers, I feel.

  • Digital data held by government, such as Landsat imagery, is both non-rivalrous and non-exclusionary.
  • The initial production costs of such data may be very high, and surely are in the case of satellite data, as it involves space launches. Yet these costs are incurred in the execution of a public, mandated task, and as such are sunk costs. They are not incurred so that others can re-use the data, but for an internal task anyway (such as, in this case, national security).
  • The copying and distribution costs of additional copies of such digital data are marginal, tending to zero.
  • Government-held data usually, and certainly in the case of satellite data, constitutes a (near) monopoly, with no easily available alternatives. As a consequence price elasticity is above 1: when the price of such data is reduced, demand rises non-linearly. The inverse is also true: setting a price for government data that is currently free will not mean all current users pay; it will mean a disproportionate part of current usage simply evaporates, and usage will be much lower both in the number of users and in the volume of usage per user.
  • Data sales from one public entity to another publicly funded one, such as academic institutions in this case, are always a net loss to the public sector, due to administration, transaction and enforcement costs. It moves money from one pocket to another of the same outfit, but the transfer itself costs money.
  • The (socio-economic) value of re-use of such data is always higher than the possible revenue from selling it. That value also accrues to the public sector in the form of additional tax revenue, and over time the lost sales revenue will always be smaller than that. Free provision, or provision at most at marginal cost (the true incremental cost of providing the data to one single additional user), is economically the only logical path.
  • Additionally, the value of data re-use is not limited to the first order of re-use (in this case e.g. the academic research it enables), but has ‘downstream’ higher-order and network effects: for example the value that such research results create in society, in this case in agriculture, public health and climate impact mitigation. ‘Upstream’ value is also derived from re-use, e.g. in the form of data quality improvement.

This precisely was why the data was made free in 2008 in the first place:

Since the USGS made the data freely available, the rate at which users download it has jumped 100-fold. The images have enabled groundbreaking studies of changes in forests, surface water, and cities, among other topics. Searching Google Scholar for “Landsat” turns up nearly 100,000 papers published since 2008.

That 100-fold jump in usage? That’s the price elasticity above 1 I mentioned. It is a regularly occurring pattern when fees for data are dropped, whether it concerns statistical, meteorological, hydrological, cadastral, business register or indeed satellite data.

The economic benefit of the free Landsat data was estimated by the USGS in 2013 at $2 billion per year, while the programme costs about $80 million per year. That’s an ROI factor of 25 for the US government. If the total combined tax burden (payroll, sales/VAT, income, profit, dividend etc.) on that economic benefit were only as low as 4%, it would still be no loss to the US government.
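Spelled out with those same numbers:

```python
# Back-of-the-envelope arithmetic from the USGS estimate above.
benefit = 2_000_000_000   # estimated yearly economic benefit of free Landsat data, in $
cost = 80_000_000         # yearly programme cost, in $

roi_factor = benefit / cost          # 25.0
breakeven_tax_rate = cost / benefit  # 0.04: a 4% combined tax take on the benefit
                                     # already covers the programme cost
print(roi_factor, breakeven_tax_rate)
```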

It’s not surprising, then, that when a committee was asked in 2012 to look into reinstating fees for Landsat data, it concluded:

“Landsat benefits far outweigh the cost”. Charging money for the satellite data would waste money, stifle science and innovation, and hamper the government’s ability to monitor national security, the panel added. “It is in the U.S. national interest to fund and distribute Landsat data to the public without cost now and in the future.”

European satellite data open by design

In contrast, the European Space Agency’s Copernicus programme, a multi-year effort to launch a range of Sentinel satellites for earth observation, is designed to provide free and open data. In fact my company, together with EARSC, has in the past two years documented, and in the coming three years will document, over 25 cases establishing the socio-economic impact of the usage of this data, to show both primary and network effects: for instance for ice breakers in Finnish waters, Swedish forestry management, Danish precision farming, and Dutch gas mains preventative maintenance and infrastructure subsidence.

(Nature article found via Tuula Packalen)

In an open letter (PDF), a range of institutions call upon their respective European governments to create ELLIS, the European Lab for Learning and Intelligent Systems. It’s an effort to counter brain drain, and instead attract top talent to Europe. It points to Europe’s currently weak position in AI, between what is happening in the USA and in China, adding a geopolitical dimension. The letter calls not so much for an institution with a large headcount, but for a commitment to long-term funding to attract and keep the right people. Similar reasons led to the founding of CERN, now a global center for physics (and a key driver of things like open access to research and open research data), and more recently of the European Molecular Biology Laboratory.

At the core, the signatories see France and Germany as most likely to start this intergovernmental initiative. It seems this nicely builds upon French president Macron’s announcement in late March to invest heavily in AI, and to keep and attract the right people for it. He too definitely sees the European dimension to this, and even puts European and enlightenment values at the core of it, although he acted within his primary scope of agency, France itself.

(via this Guardian article)

Wired is calling for an RSS revival.

RSS is the most important piece of internet plumbing for following new content from a wide range of sources. It allows you to download new updates from your favourite sites automatically and read them at your leisure. Dave Winer, forever dedicated to the open web, created it.

I used to be a very heavy RSS user. I tracked hundreds of sources on a daily basis, not as news but as a way to stay informed about the activities and thoughts of people I was interested in. At some point that stopped working: popular RSS readers were discontinued, most notably Google’s RSS reader; many people migrated to the Facebook timeline; platforms like Twitter stopped providing RSS feeds to make you visit their platform; and many people stopped blogging. But with FB in the spotlight, there is some renewed interest in refocusing on the open web, and with it on RSS.

Currently I am repopulating from scratch my RSS reading ‘antenna’, following around 100 people again.
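For anyone rebuilding an antenna of their own, the core of it is also trivially scriptable. A minimal sketch using the feedparser library (the feed URLs below are placeholders):

```python
# Poll a list of feeds and print the most recent items per feed.
# Requires the feedparser library (pip install feedparser).
import feedparser

FEEDS = [
    "https://blog-one.example.org/feed/",
    "https://blog-two.example.net/rss.xml",
]

for url in FEEDS:
    feed = feedparser.parse(url)
    print(feed.feed.get("title", url))
    for entry in feed.entries[:5]:  # the five most recent items
        print("  -", entry.get("title", "(untitled)"), entry.get("link", ""))
```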

Wired in its call for an RSS revival suggests a few RSS readers. I, as I always have, use a desktop RSS reader, currently ReadKit. The FB timeline presents stuff to you based on its algorithmic decisions. As mentioned, I definitely would like smarter ways of shaping my own information diet, but with me in control and not as the one being commoditised.

So it’s good to read that RSS reader builders are looking at precisely that. “Machines can have a big role in helping understand the information, so algorithms can be very useful, but for that they have to be transparent and the user has to feel in control. What’s missing today with the black-box algorithms is where they look over your shoulder, and don’t trust you to be able to tell what’s right,” says Edwin Khodabakchian, cofounder and CEO of RSS reader Feedly (which currently has 14 million users). That is more or less precisely my reasoning as well.
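To make “transparent and in control of the user” concrete: in its simplest form that could be a ranking whose weights are nothing more than a list the reader edits themselves. The keywords, weights and example entries below are made up for illustration; real readers like Feedly obviously do far more.

```python
# A deliberately transparent feed ranking: the 'algorithm' is just a dict
# of keyword weights that the user maintains, applied to title and summary.
WEIGHTS = {"indieweb": 3.0, "gdpr": 2.0, "agency": 2.5, "blockchain": -1.0}

def score(title: str, summary: str = "") -> float:
    text = (title + " " + summary).lower()
    return sum(weight for keyword, weight in WEIGHTS.items() if keyword in text)

entries = [
    {"title": "IndieWeb Summit notes", "summary": "owning your data"},
    {"title": "Yet another blockchain ICO", "summary": "to the moon"},
    {"title": "GDPR as quality assurance", "summary": "data protection and agency"},
]

for entry in sorted(entries, key=lambda e: score(e["title"], e["summary"]), reverse=True):
    print(round(score(entry["title"], entry["summary"]), 1), entry["title"])
```

Crude, certainly, but every part of it is inspectable and changeable by the person using it, which is the point.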