A project I’m involved in has won funding from the SIDN Fund. SIDN is the Dutch domain name authority, and they run a fund to stimulate innovative internet use, to build a ‘stronger internet for all’.
Together with the Open Nederland association (the collective of makers behind the Dutch Creative Commons chapter, of which I’m a board member), we received funding for our new project “Filter me niet!” (“Don’t filter me!”)

With the new EU Copyright Directive, the position of copyright holders will be in flux over the coming two years. Online platforms will become responsible for ensuring copyright compliance on the content you upload. In practice this means that YouTube, Facebook, and all those other platforms will filter out content whenever they have doubts about its origin, license or metadata. For makers this is a direct threat, as they run the risk of seeing their uploads blocked even when they clearly hold the necessary rights. False positives are already a very common phenomenon, and this will likely get worse.

With Filtermeniet.nl (“Don’t filter me”) we want to aid makers who want to upload their work, by inserting a bit of advice and assistance right at the moment they hit that upload button. We’ll create a tool, guide and information source for Dutch media makers, through which they can declare the license that fits them best and improve their metadata, in order to lower the risk of being automatically filtered out for the wrong reasons.

Yesterday I once again realised the importance of watching how others work with their tools. During the demos of what people had worked on during IndieWebCamp Utrecht, I was watching remotely as Frank demoed his OPML importer for Microsub servers. At some point he started sending messages to his Microsub server’s API, and launched Postman for it. That was my first takeaway from his demo. I decided to look Postman up, install it, and resolved to blog about the importance of sharing your set-up and showing people your workflows.

Then Peter independently, from a different cause, beat me to it with “You do it like that?”.

So consider this reinforcement of that message!

Björn Wijers demoing, with Dylan, Neil and Julia in the photo looking on

Most of yesterday’s participants returned today to get under the hood of their websites and build something. I didn’t attend in person, but participated remotely in the opening session this morning and the demos this afternoon. The demo session has just concluded, and some cool things were created, or at least started. Here are a few:

Frank Meeuwsen worked on an OPML importer for Aperture, a Microsub server. This makes it possible to import the feeds from your existing RSS reader into your Microsub server, which is very useful when migrating to a new way of reading online content.

Jeremy Cherfas worked on displaying his GPS tracks on his site, using Compass.

Rosemary Orchard, building on that, created the option of sharing her geolocation on her site for a specified number of minutes.

Neil Mather set up a separate WordPress install to experiment with ActivityPub, and succeeded in sending messages from WordPress to Mastodon and receiving replies back.

Björn Wijers wrote a tool that grabs book descriptions from GoodReads for him to post to his blog when he finishes a book.

Martijn van der Ven picked up on Djoerd Hiemstra’s session yesterday on federated search, and created a search tool that searches within the weblogs of IndieWeb community members.

That concludes the first IndieWebCamp in Utrecht, with a shout-out to all who contributed.

This is a quick exploration of my current and preferred feed reading patterns, as part of my activities for Day 2, the hack day, of IndieWebCamp Utrecht.

I currently use a standalone RSS reader, which only consumes RSS feeds. I am also experimenting with TinyTinyRSS, a self-hosted feed-grabber and reader. I am attracted to TinyTinyRSS because 1) it has a database I can access, 2) it can create an RSS feed from any selection I make, and 3) it publishes a ‘live’ OPML file of the feeds I track, which I use as the blogroll in my sidebar.

What I miss is being able to follow ‘any’ feed, for instance JSON feeds, which would allow tracking anything that has an API: #topics on Twitter, or people’s tweets. Or adding newsletters, so I can keep them out of my mail client and read them in my reader instead. And there are things that I think don’t have feeds, but for which I might be able to create them, e.g. URLs mentioned in Slack channels, or the conversation notes I take (currently in Evernote).
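As an illustration of how little is needed to give something a feed: a minimal sketch in Python that wraps a list of collected URLs (say, links pulled from a Slack export) in RSS 2.0. The function name and the input format are my own assumptions, not an existing tool:

```python
from email.utils import formatdate
from xml.sax.saxutils import escape

def urls_to_rss(title, link, items):
    """Build a minimal RSS 2.0 feed from (url, title) pairs.

    `items` could come from anywhere that lacks a feed, such as
    URLs mentioned in a Slack channel or notes in Evernote.
    """
    entries = "".join(
        "<item><title>{t}</title><link>{u}</link>"
        "<guid>{u}</guid></item>".format(t=escape(t), u=escape(u))
        for u, t in items
    )
    return (
        '<?xml version="1.0" encoding="UTF-8"?>'
        '<rss version="2.0"><channel>'
        "<title>{title}</title><link>{link}</link>"
        "<lastBuildDate>{date}</lastBuildDate>{entries}"
        "</channel></rss>"
    ).format(title=escape(title), link=escape(link),
             date=formatdate(), entries=entries)

feed = urls_to_rss(
    "Links from Slack",
    "https://example.org/slack-links",
    [("https://example.org/article", "An article someone shared")],
)
```

Serve the resulting string from any URL and every feed reader can subscribe to it; the harder part is the plumbing that collects the items in the first place.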

Using IndieWeb building blocks: the attraction of IndieWeb here is that it makes a distinction between collecting / grabbing feeds and reading them. A Microsub server grabs and stores feeds. A Microsub client then is the actual reader.
Combined with Micropub, the ability to post to your own site from a different client, this allows directly sharing or responding from within a reader. In the background, Webmention then works its magic of pulling all that together, so that the full interaction can be shown on my blog.
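To make the Micropub side concrete: per the W3C Micropub spec, creating a note is a single form-encoded POST to your site’s micropub endpoint, authorised with an IndieAuth token. A sketch, where the endpoint URL and token are placeholders (a real client discovers the endpoint from a `rel="micropub"` link on your homepage):

```python
import urllib.parse
import urllib.request

# Placeholders: substitute your own endpoint and IndieAuth token.
MICROPUB_ENDPOINT = "https://example.org/micropub"
ACCESS_TOKEN = "xxxx"

# A Micropub "create" request: h=entry makes a new post (a note),
# in-reply-to turns it into a reply to someone else's URL.
body = urllib.parse.urlencode({
    "h": "entry",
    "content": "Replying straight from my reader!",
    "in-reply-to": "https://example.com/some-post",
}).encode()

req = urllib.request.Request(
    MICROPUB_ENDPOINT,
    data=body,
    headers={"Authorization": "Bearer " + ACCESS_TOKEN,
             "Content-Type": "application/x-www-form-urlencoded"},
    method="POST",
)
# urllib.request.urlopen(req) would send it; on success the server
# responds 201 with a Location header pointing at the new post.
```

This is why a Microsub reader can offer ‘reply’ buttons at all: the reader only needs to know your micropub endpoint, and your own site does the publishing.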

The sharing buttons in a (Microsub client) reader like Monocle are ‘like’, ‘repost’ and ‘reply’. That list is too short for my taste. Bookmarking, ‘repost with short remarks’ and ‘turn into a draft for a long-form post’ are obvious additions. But there’s another range of things to add around sharing into channels that aren’t my website, aren’t a website at all, or aren’t fully public.

To get things under my own control, I first want to run my own Microsub server, so that the collected feeds are somewhere I can access, and so that I can start experimenting with collecting types of feeds that aren’t RSS.

It was a beautiful morning, cycling along the canal in Utrecht, for the first IndieWebCamp there. In the offices of shoppagina.nl about a dozen people found each other for a day of discussions, demos and other sessions on matters of independent web activities. As organisers, Frank and I aimed not just to discuss the IndieWeb as such, but also to tap into the more general growing awareness of what the silos mean for online discourse, and to seek connection with other initiatives and movements of similarly minded people.

Frank’s opening keynote

After Frank kicked off and introduced the key concepts of the IndieWeb, we did an introduction round of everyone there. Some familiar faces, from last year’s IndieWebCamp in Nürnberg and from last night’s early bird dinner, but also new ones. Here’s a list with their (personal) websites.

Sebastiaan http://seblog.nl
Rosemary http://rosemaryorchard.com/
Jeremy https://jeremycherfas.net
Neil http://doubleloop.net/
Martijn https://vanderven.se/martijn/
Ewout http://www.onedaycompany.nl/
Björn https://burobjorn.nl
Harold http://www.storyconnect.nl/
Dylan http://dylanharris.org
Frank http://diggingthedigital.com
Djoerd https://djoerdhiemstra.com/
Ton https://www.zylstra.org/blog
Johan https://driaans.nl
Julia http://attentionfair.com

After the intros we collectively created the schedule, the part of the program I facilitated.

The program, transcribed here with links to notes and videos

Halfway through the first session I attended, on the IndieWeb building blocks, an urgent family matter meant I had to leave, just as Frank and I were starting to prepare lunch.

Later in the afternoon I remotely followed the etherpad notes and the live stream of a few sessions. Things that stood out for me:

Federated Search
Djoerd Hiemstra talked us through federated search. Search currently isn’t on the radar of IndieWeb efforts, but if the IndieWeb is about taking back control, search cannot be a blind spot. Search is your gateway to the web, which means there’s huge potential for manipulation; federated search is a way of trying to work around that. Interestingly, the tool Djoerd and his team at Twente University developed doesn’t try to build a new database of its own to power yet another search tool. I take this as a good sign: the novel shouldn’t mimic what it is trying to replace or defeat.

Another session was an interesting discussion about how to discover new people and new sources that are worthwhile to follow, and how those tactics translate to IndieWeb tools. Frank rightly suggested a distinction between discovery, how to find others, and discoverability, how to be findable yourself. For me this session comes close to the topic I had suggested for the schedule, people-centered navigation and personal information strategies; as I had to leave, that session didn’t happen. I will need to go through the notes once more to see what I can take from this.

In another session, Sebastiaan took us through the interplay of Microsub servers (which fetch feeds), readers (which are normally bundled with the feed fetcher, but not in the IndieWeb), and how Webmention and Micropub enable directly responding and sharing from your reader interface. This is the core bit I need to match more closely with my own information strategies. One element is that IndieWeb discussions assume sharing is always about online sharing, but I never think of it only that way. Processing information means putting it in a variety of channels: some might be online, but others might be e-mails to clients or peers. It may mean a bookmark on my blog, an addition to a curated bookmark collection, or a note stored in my notes collection.

Day 2: building stuff
The second day, tomorrow, is about taking little steps to build things. I will again follow the proceedings remotely as far as possible. The notes of the sessions about reading and discovery are good angles for me to start from. I’d like to try to scope out my specs for reading, processing and writing/sharing in more detail, and hopefully do a small thing like running a reader locally to tinker with.

This looks like very useful work, by over 65 authors and a team of editors including Mor Rubinstein and Tim Davies: The State of Open Data

A little over a decade has passed since open data became a real topic globally and in the EU. I had my first discussions about open data in the spring of 2008, and started my first open data project, for the Dutch Ministry for the Interior, in January 2009. The State of Open Data looks at what has been achieved around the world over that decade since, but also looks forward:

How will open data initiatives respond to new concerns about privacy, inclusion, and artificial intelligence? And what can we learn from the last decade in order to deliver impact where it is most needed? The State of Open Data brings together over 65 authors from around the world to address these questions and to take stock of the real progress made to date across sectors and around the world, uncovering the issues that will shape the future of open data in the years to come.

Over 18 months the authors and editors worked to pull all this material together. That is quite an impressive effort. I look forward to working my way through the various parts in the coming time. Next to the online version African Minds has made a hard copy version available, as well as a free downloadable PDF. That PDF comes in at 594 pages, so don’t expect to take it all in in one sitting.

Hossein Derakhshan makes an important effort to find more precise language to describe misinformation (or rather mis-, dis-, and mal-information). In this Medium article, he takes a closer look at the different combinations of actors and targets, along the lines of states, non-state entities and the public.

Table by Hossein Derakhshan, from article DisInfo Wars

One of his conclusions is

…that from all the categories, those three where non-state organisations are targeted with dis-/malinfomation (i.e. SN, NN, and PN) are the most effective in enabling the agents to reach their malicious goals. Best example is still how US and UK state organisations duped independent and professional media outlets such as the New York Times into selling the war with Iraq to the public.
The model, thus, encourages to concentrate funds and efforts on non-state organisations to help them resist information warfare.

He goes on to say that public protection against public agents is too costly, or too complicated:

the public is easy to target but very hard (and expensive) to protect – mainly because of their vast numbers, their affective tendencies, and the uncertainty about the kind and degree of the impact of bad information on their minds

I feel that this is where our individual civic duty to do crap detection, and call it out when possible, or at least not spread it, comes into play as inoculation.

Jerome Velociter has an interesting riff on how Diaspora, Mastodon and similar decentralised and federated tools are failing their true potential (ht Frank Meeuwsen).

He says that these decentralised federated applications are trying to mimic the existing platforms too much.

They are attempts at rebuilding decentralized Facebook and Twitter

This tendency has multiple faces
I very much recognise this tendency, for this specific example, as well as in general for digital disruption / transformation.

It is recognisable in discussions around ‘fake news’ and media literacy, where the underlying assumption often is to build your own ‘perfect’ news or media platform, for real this time.

It is visible within Mastodon in the missing long tail and the persisting dominance of a few large instances. The absence of a long tail means Mastodon isn’t very decentralised, let alone distributed. In short, most Mastodon users are as much in silos as they were on Facebook or Twitter, just with a less generic group of people around them. Except these new silos aren’t run by corporations but by some individual, which is actually worse from a responsibility and liability viewpoint.

It is also visible in how the Mastodon community is discussing whether the EU Copyright Directive means there’s a need for upload filters for Mastodon. This worry really only makes sense if you think of Mastodon as similar to Facebook or Twitter. In terms of full distribution and federation it makes no sense at all, and I feel Mastodon’s layout tricks people into thinking it is a platform.

I recognise this type of effect from other technologies as well, e.g. in local exchange trading systems (LETS), i.e. alternative currency schemes. There too I’ve witnessed them falter because the users kept making their alternative currency the same as national fiat currencies: precisely the thing they said they were trying to get away from, throwing away all the different possibilities for agency and control they had for the taking.

Dump mimicry as design pattern
So I fully agree with Jerome when he says distributed and federated apps will need to come into their own by using other design patterns, not the design patterns of the current big platforms (which will all go the way of Ecademy, Orkut, Ryze, Jaiku, MySpace, Hyves and a plethora of other YASNs; if you don’t know what those were, that’s precisely the point).

In the case of Mastodon, one such copied design pattern that can be done away with is the public-facing pages and timelines; other patterns can provide discoverability, for instance. Another likely pattern to throw out is the Tweetdeck-style interface itself. Both changes would make it look less like a platform and more like conversations.

Tools need to provide agency and reach
Tools are tools because they provide agency: they let us do things that would otherwise be harder or impossible. Tools are tools because they provide reach, as extensions of our physical presence, not just across space but also across time. For a very long time I have been convinced that tools need to be smaller than us, otherwise they’re not tools of real value. Smaller than us (see item 7 in my agency manifesto) means that the tool is under the full control of the group of users using it. In that sense e.g. Facebook groups are failed tools, because someone outside those groups controls the off-switch. The original promise of social software, when it was mostly blogs and wikis, and before it morphed into social media, was that it made publishing, interaction between writers and readers, and iterating on each other’s work ‘smaller’ than writers. Distributed conversations as well as emergent networks and communities were the empowering result of that novel agency.

Jerome also points to something else I think is important

In my opinion the first step is to build products that have value for the individual, and let the social aspects, the network effects, sublime this value. Value at the individual level can be many things. Let me organise my thoughts, let me curate “my” web, etc.

Although I don’t fully agree with the individual-versus-network distinction. To me, instead of just the individual, you can also put small coherent groups within a single context there: the unit of agency in networked agency. So I’d rather talk about tools that are useful as a single instance (regardless of who is using it), and even more useful across instances.

Like blogs mentioned above and mentioned by Jerome too. This blog has value for me on its own, without any readers but me. It becomes more valuable as others react, but even more so when others write in their own space as response and distributed conversations emerge, with technology making it discoverable when others write about something posted here. Like the thermometer in my garden that tells me the temperature, but has additional value in a network of thermometers mapping my city’s microclimates. Or like 3D printers which can be put to use on their own, but can be used even better when designs are shared among printer owners, and used even better when multiple printer owners work together to create more complex artefacts (such as the network of people that print bespoke hand prostheses).

It is indeed needed to spend more energy designing tools that really take distribution and federation as a starting point. That are ‘smaller’ than us, so that user groups control their own tools and have freedom to tinker. This applies to not just online social tools, but to any software tool, and to connected products and the entire maker scene just as much.

The Mastodon community worries about whether the new EU copyright directive (which won’t enter into force for 2 years) will mean upload filters being necessary for the use of the ActivityPub protocol.

I can’t logically see why that would be, but only because I don’t compare Mastodon to e.g. Twitter or Facebook. If you do, then I suspect the worry is logical.

Mastodon is a server and a client for the ActivityPub protocol. In a fully distributed instance of Mastodon you would have only a small group of users, or just one. This is the case in my Mastodon instance, which only I use. (As yet the Mastodon universe isn’t very distributed or decentralised at all, there’s no long tail.)

The ActivityPub protocol basically provides an outbox and an inbox for messages. Others can come and get the messages you make available to them from your outbox, and your server can itself deliver messages from your outbox into someone else’s inbox.

The Mastodon server can make what you put into your outbox publicly available to all that way. Others can put messages for you in your inbox, and the Mastodon client can publicly show what you receive in your inbox.
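To make concrete how unlike an ‘upload’ this is: an ActivityPub message is just a JSON activity delivered to one specific inbox. A minimal sketch of a ‘Create’ activity addressed to a single person, with no public addressing at all (the actor, note and recipient URLs are placeholders):

```python
import json

# A minimal ActivityStreams "Create" activity, the kind of message an
# ActivityPub server delivers to a recipient's inbox. Note the "to"
# field: it names one recipient, not the special #Public collection,
# so nothing here is published to a platform.
activity = {
    "@context": "https://www.w3.org/ns/activitystreams",
    "type": "Create",
    "id": "https://example.org/activities/1",
    "actor": "https://example.org/users/ton",
    "to": ["https://example.net/users/frank"],
    "object": {
        "type": "Note",
        "id": "https://example.org/notes/1",
        "attributedTo": "https://example.org/users/ton",
        "content": "Hello from my own instance!",
    },
}
payload = json.dumps(activity)
# A server would POST this payload to the recipient's inbox URL with
# Content-Type: application/activity+json (plus an HTTP Signature).
```

Making a message public is nothing more than also addressing it to the ActivityStreams Public collection; leave that out and it is one-to-one messaging.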

But making anything public isn’t necessary at all. In fact, I don’t need the public-facing profile and message timeline on my Mastodon instance at all; they are non-essential. Without such pages there’s no way to argue that the messages I receive in my inbox are uploaded by others to a platform, falling within scope of a potential upload filter requirement.

My Mastodon instance isn’t a platform, and the messages others send to it aren’t uploads. The existence and form of other ActivityPub clients and servers demonstrates that neatly. I currently send ActivityPub messages from my weblog as well, without them being visible on my blog, and I can receive them in my Mastodon, or any other ActivityPub client, without them being visible to others, just as I can read any answers to such a message on the back-end of my blog without it being visible to anyone but me and the sender(s). Essentially ActivityPub is one-to-one messaging with the ability to do one-to-many and many-to-many as well.

The logical end game of decentralisation is full distribution into instances with only individuals or tight-knit groups, federated where useful. The way the Mastodon client is laid out (sort of like Tweetdeck) suggests we’re dealing with a platform-like thing, but that’s all it is: just layout. I could give my e-mail client a similar layout (one column with mail threads from my most contacted peers, one with mails just to me, one with all mails sent through the same mail server, one with all mails received from other mail servers by this one). That would however not turn my mail server plus client into a platform. It would still be e-mail.

Mastodon’s layout confuses matters by trying to be like Twitter and Tweetdeck instead of being its own thing, and I posit that all ‘upload filter’ worries stem from this confusion.

Bryan Alexander writes a thoughtful post about media literacy, specifically in the US context, and in relation to the role of education, in response to an ongoing conversation on it:

How should we best teach digital and media literacy?  How can such teaching respond to today’s politically and technologically polarized milieu? Last week a discussion brewed across Twitter…

Towards the end of his critical discussion he adds:

One more point: I’m a bit surprised to not see more calls for the open web in this conversation. If we want to get away from platforms we see as multiply dangerous (Facebook in particular, it seems), then we could posit some better sites. I’m for RSS and the blogosphere. Others may plump for Mastodon.

I think this is an important aspect. To me the open web is about agency: the power to do something, to act. In this case, to critically engage with information flows and contribute your own perspectives on your own website.

Every centralised platform or web silo you use means an implicit vulnerability to being kicked off by the company behind it, for arbitrary and not just valid reasons. Even while using it, hard borders are drawn around the ways you can share, interact or connect to others, to protect the business behind it. Facebook forces you to share links outside your commentary and doesn’t allow inline hyperlinking, which is actually the web’s standard. Your Facebook account can’t directly interact with my Twitter account, not because of technological limitations but because both wish to be silos monopolising your online conversations.

On the open web you acknowledge the existence of various platforms, silos and whatnot, but the interaction circles around your own online space: your own platform-of-1, which monopolises your own interaction but puts that monopoly in your own hands, and which makes no assumptions whatsoever about what others do, other than expecting them to use core internet standards and protocols. Your platform-of-1 is your online presence, like this website, from which you alone determine what you share, post and link to, in what way it is presented, and who can see what.

This includes pushing things into silos. For instance, I post to Twitter and respond to others on Twitter from my own website, and reactions on Twitter come back to me on my website. (Not Facebook, which no longer allows posting or peeking over its fence.)

This is a source of agency. For me as an individual, as much as for a group. There’s a marked difference between a protest group coordinating themselves on a Facebook group, and e.g. Edgeryders, a network of changemakers building sustainable projects for the common good, which runs their own group platform to interact using Discourse. A direct difference in agency to be able to shape the way you interact versus having to follow predefined common denominator functionality, and an indirect difference in resilience against push-back from others (does someone else control your off-switch?).

In media literacy, as much as in other, complexity-induced, aspects of our connected lives, agency of both you and yours, a networked agency is a key ingredient. Not to build your own competing platforms or media outlets to the existing ones, a common misconceived and unvoiced underlying assumption I feel (“we’ll build the perfect news platform ourselves!”), but to be in control yourself of what comes at you and what flows out from you. You still very well may end up in a bubble of uncritical bias, yet it will be one of your own making, not the making of whichever company happens to run the most popular platform du jour. The open web is your toolkit in gaining and maintaining this agency.

Replied to The powers of digital literacies: responding to danah boyd and all (Bryan Alexander)

A new weblog has been started by Anna Powell-Smith, called Missing Numbers:

Missing Numbers is a blog about the data that the government should collect and measure in the UK, but doesn’t.

I expect that whatever she finds in missing data within the UK public sector, similar or matching examples can be found in other countries, such as here in the Netherlands.

One such Dutch example is the election results per candidate per polling station. The election council (Kiesraad) that certifies election results only needs the aggregated results per municipality, and that is what it keeps track of. Local governments of course have this data immediately after counting the votes, but after providing that data to the Kiesraad their role is finished.

The Open State Foundation (disclosure: I’m the current chairman of its board) has in recent years worked towards making results per polling station available as open data. In the recent provincial and water authority elections, the Minister for the Interior called upon municipalities to publish these results as machine-readable data. About 25% complied; the Open State Foundation, in collaboration with national media, requested the remaining data files to arrive at a complete data set. This way, for the first time, this data now exists as a national data set available to the public.

Viz of all polling station results of the recent elections by the Volkskrant national paper

Added Missing Numbers to my feedreader.

Well, yes, some of that social ‘cost of leaving’ plays a role. Yet:

It’s part of my company’s journey towards better information security and data protection. Leaving silos, and Slack is just as much a silo as Facebook, albeit with a different business model, is part of that. Similarly, we’re starting to use our own cloud, in order not to use Google Docs, OneDrive and the like. Our clients have different (and contradictory) rules concerning some of those silos, and we want to offer our own environment in which we can collaborate with clients as well. So our cloud and our Slack replacement run on our own server in a Dutch data center, which also makes it easier to demonstrate GDPR compliance.

Within the company I’m the only heavy Slack user, taking part in about half a dozen Slack spaces. Still 90% of my Slack interaction is within my company.

Importing our Slack history into Rocket.chat, and giving our Rocket.chat space a URL that still mentions Slack, help make for a soft landing. Rocket.chat’s interface is similar to Slack’s as well.

Our cloud integrates well with Rocket, better than with Slack.

For mobile, having another app is hardly an issue, given we all have half a dozen chat apps already.
For desktop, the switch will be less automatic, but adding Rocket.chat to the dock will help.

So there will be an adaptation cost, but I’m optimistic it will be low, given our starting point. Over time I’ll reflect on how it went.

Screenshot of Rocket.chat with our previous Slack history loaded

Replied to a post by Frank Meeuwsen

What really interests me about these services is the switching cost for the other users, especially the mental switch. I can imagine that your current conversation partners in Slack have more Slack connections of their own; then it’s convenient to have everything together in one Slack app. Rocketchat then feels like “yet another app”, which can make transition and acceptance harder. I’m curious how you’ll deal with that!

The Netherlands has the lushest and tastiest grass in the world, according to discerning geese, and millions flock to Dutch fields because of it. Farmers would rather use the grass for their dairy cows, and don’t like the damage the geese cause to their fields. To reduce that damage, geese are scared away, their nests are spiked, and they are hunted. Each year some 80,000 geese are shot in the province of South Holland alone. The issue is that the Dutch don’t eat much wild goose, and hunters don’t like to hunt if they know the game won’t be eaten. The provincial government’s role in the case of these geese is that it compensates farmers for damage to their fields.

20190414 005 Cadzand, Grote Canadese gans
“All your base belong to us…”, Canada geese in a Dutch field (photo Jac Janssen, CC-BY)

In our open data work with the Province South-Holland we’re looking for opportunities where data can be used to increase the agency of both the province itself and external stakeholders. Part of that is talking to those stakeholders to better understand their work, the things they struggle with, and how that relates to the policy aims of the province.

So a few days ago my colleague Rik and I met up at a farm outside Leiden, amidst those grass fields the geese love, with several hunters, a civil servant, and the CEO of Hollands Wild, which sells game meat to both restaurants and retail. We discussed the particular issues of hunting geese (and inspected some recently shot ones), the effort of dressing game, and the difficulties of cultivating demand for goose. Although a goose fetches a hunter just 25 cents, butchering geese is very labour-intensive and not automated, which makes the consumable meat very expensive: too expensive for low-end use (e.g. in pet food), and even for high-end use, where it needs to compete with much more popular types of game such as hare, venison and wild duck. We tasted some marinated raw goose meat and goose carpaccio. Data isn’t needed to improve communication between stakeholders on the production side (unless a market for fresh game emerges, in contrast to the current distribution of frozen products only), but it might play a role in the distribution part of the supply chain.

Today, with the little one, I sought out a local shop that carries Hollands Wild’s products. I bought some goose meat, and tonight we enjoyed some cold-smoked goose. One goose down, 79,999 to go.



Yesterday afternoon and evening we held an open house at our new The Green Land offices in a late 19th century former technical school building in the heart of Utrecht. It was a pleasant event with some 30 people attending, some good conversations with people I hadn’t met before, and an informal dinner party afterwards.



My colleague Frank saying a few words, with Utrecht’s 13th century cathedral tower in the background. Word is, you’re only truly in Utrecht, if you can see the tower from your windows