My friend Peter has been blogging for exactly 20 years as of yesterday. His blog is a real commonplace book, and far more than mine an eclectic mixture of personal things, professional interests, and the rhythm of life in his hometown. When you keep that up long enough, decades even, it stops being a random collection and becomes a body of work, an œuvre. Œuvre really is the right word: by Peter’s own count he has written 2.67 million words. The average novel runs about eighty thousand words, so Peter’s blog is over 33 novels long. Most novelists aren’t that prolific.

I’m a regular reader of Peter’s blog exactly because of that quirky mix of observations, travelogues, personal things, snippets of code, and reflection. To me the way he weaves all those things into one is what we also tried to achieve with our Smart Stuff That Matters unconference (in which Peter participated): bringing global (technology) developments back to the size of your own home life, your own city life, and letting your life inform how you want to shape and use the tools available to you. So that, in the words of Heinz Wittenbrink (another participant last summer), when you discuss themes important to you, you are actually discussing the details of your own life:

To list the themes [….of the sessions I attended…] fails to express what was special about the unconference: that you meet people or meet them again, for whom these themes are personal themes, so that they are actually talking about their lives when they talk about them. At an unconference like this one does not try to create results that can be broadcast in abstracted formulations, but through learning about different practices and discussing them, extend your own living practice and view it from new perspectives. These practices or ways of living cannot be separated from the relationships in which and with which you live, and the relationships you create or change at such an event like this.

Heinz’ last sentence in that quote, “These practices or ways of living cannot be separated from the relationships in which and with which you live”, is true and important as well.

Of the 20 years Peter has kept up his blog, I’ve known him for 14 years almost to the day, ever since we met in a Copenhagen hotel lobby in mid-June 2005. Our first meeting was aptly technology mediated. As Heinz wrote, our blogging practices cannot be separated from the relationships in which we live. Traces of our connections are visible through the years: building on each other’s thinking, meeting up in various places, visiting each other’s homes. Not just our specific connection, but the shared connections to so many others.
Blogging isn’t just a reflection of our lives, but is an active part in weaving the connections that make up our lives.

Next week Peter organises an unconference called Crafting {:} a Life. Modelled after Elmine’s and my unconference birthday parties, Peter is celebrating not just the 20-year milestone of his blog and his company: he is celebrating, in the same way he blogs, the completeness of our lives, both the sweet and the bitter. It’s in that contrast where beauty lives for me, and how we can appreciate the value of the connections we weave. Elmine and I will go visit Peter and his family, with 50 or so others, on Prince Edward Island in Canada in a few days.

Peter recently said about our and other participants’ coming to Canada from Europe:
It is humbling to consider that each of these old friends is coming across the ocean to join us for the simple act of spending time together talking about life for a while.
We felt the same every time we did an edition of our unconferences. Elmine wrote last summer, looking back on her birthday unconference Smart Stuff That Matters:

How do you put into words how much it means to you that friends travel [literally] across the world to attend your birthday party? … How can I describe how much it means to me to be able to connect all those people Ton and I collected in our lives, bring them together in the same space and for all of them to hit it off? That they all openly exchanged life stories, inspired each other, geeked out together, built robots together?

“The simple act of spending time together talking about life for a while” is a rather rich and powerful thing to do, Peter, one which packs the full complexity of being human. We look forward to seeing you, Catherine and Oliver next week, as well as the 50 or so others that came to form your ‘global village’ while you were engaged in blogging a life.

I need to write more extensively about two things that I for now want to link / bookmark here, both coming from Neil Mather.

One is local-first software, an article by Ink & Switch:

In this article we propose “local-first software”: a set of principles for software that enables both collaboration and ownership for users. Local-first ideals include the ability to work offline and collaborate across multiple devices, while also improving the security, privacy, long-term preservation, and user control of data.

This resonates with me on two frequencies: one is the notion that tools need to be useful on their own, and more useful when connected across instances; the other is that information strategies and agency in my mind correlate with social distance.

The second thing is Neil’s reference to Gevulot. At IndieWebCamp Utrecht one session took place around oversharing and conditional sharing. Gevulot is a device that allows for very precise contextual sharing, in the SF trilogy The Quantum Thief by Finnish author Hannu Rajaniemi (previously mentioned in this blog).

Gevulot is a form of privacy practised in the Oubliette. It involved complex cryptography and the exchange of public and private keys, to ensure that individuals only shared that information or sensory data that they wished to. Gevulot was disabled in agoras.

This resonates again with information strategies and the role of social distance, but also with how I think our tools need to align with how we humans actually interact, such as flexibly and fluently switching between different levels of disclosure for different aspects of our lives in conversation with someone. That link to a posting on what I’d like my tools to do is from 2006, and my more recent description of an ideal reader is still consistent with it over a decade later (albeit from the reading perspective, not the sharing perspective). From now on, gevulot is definitely the shorthand I will use for this type of exploration.
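What gevulot-style contextual sharing could look like in a tool can be sketched very simply: every field of a post carries a minimum audience level, and a viewer only receives the fields their relationship unlocks. All names and levels below are hypothetical illustrations, not an actual design.

```python
# Toy sketch of gevulot-style contextual sharing: each field of a post is
# tagged with the most public audience allowed to see it, and a viewer only
# gets the fields their relationship level unlocks.

AUDIENCES = ["public", "acquaintance", "friend", "family"]  # increasing intimacy

def visible_fields(post, viewer_level):
    """Return only the fields the viewer's relationship level unlocks."""
    rank = AUDIENCES.index(viewer_level)
    return {
        field: value
        for field, (value, min_audience) in post.items()
        if AUDIENCES.index(min_audience) <= rank
    }

post = {
    "title": ("Back from PEI", "public"),
    "location": ("Charlottetown", "friend"),
    "health_note": ("jet lag, nothing worse", "family"),
}

print(visible_fields(post, "public"))   # only the title
print(visible_fields(post, "friend"))   # title and location
```

The point of the sketch is the data structure, not the crypto: in the novels gevulot is enforced cryptographically, while here the disclosure levels are simply part of how a post is stored.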

Yesterday we had our monthly all-hands meeting at my company. In these meetings we allocate some time to various things to increase our team’s knowledge and skills. This time we looked at information security, and I gave a little intro to start a longer-term discussion and effort to raise the level of information security in our company.

When people discuss information security it’s often along the lines of ‘if I want to do it right I’d have to go full paranoid, and that is completely over the top, so I won’t bother with it at all’. This is akin to saying that because it makes no sense to turn your home into an impenetrable fortress against invaders, you’ll just leave the door standing open. In practice you’ll do something in between those two extremes, and have locks on the door.

Fortress or open door? That’s a false dilemma. (fortress by Ryan Lea, CC-BY, open door by Hartwig HKD, CC-BY-SA and locked door by Robert Montalvo CC-BY)

You know the locks on your door won’t keep out very determined burglars or, say, a SWAT team, but they will raise the time and effort needed for less determined invaders to a point where they are discouraged.
At the same time, keeping the door closed and locked isn’t just useful to keep out burglars, but also serves to keep out the wind, rain, leaves and dust blowing in from the street.
Similarly, in information security you won’t keep out determined government three-letter agencies, but there too there are basic hygiene measures and a variety of measures that raise the cost of more casual or less determined attacks. Like preventative measures at home, information security can be viewed as layers on a spectrum.

I tried to tease out those layers, from the most basic to the most intensive:

  1. hygiene
  2. keeping your files available
  3. basic steps against loss or theft, also on the road
  4. protect client information, and compliance
  5. secure communication and exchanges
  6. preventing danger to others
  7. travelling across borders outside of the Schengen area
  8. active defence against being targeted
  9. active defence against being targeted by state actors

For each of those levels there are multiple dimensions to consider. First of all, in recent years a new group of actors interested in your data has clearly emerged: the tech companies for whom adtech is the business model, tracking you as much as they can get away with. This adds the need for measures at all but the most intensive levels, but especially means the basic levels intensify.
Then there’s the difference between individual measures, and what can be arranged at the level of our organisation, and how those two interplay.

Practically each level can be divided first along the lines of our two primary devices, laptop and phone. Second, there’s a distinction between technological measures, and behaviour (operational security).

the list of levels, and the distinction in dimensions as I showed them yesterday

I provided examples of how that plays out at the more basic levels, and at the most intensive level. E.g. at the level of hygiene, technological measures you can think of are firewalls, spam and virus filters, a privacy screen, ad blockers and tracker blockers, and using safer browsers. Behavioural measures are not clicking links before checking what they lead to, recognising phishing attempts, not plugging in USB sticks from others, using unique user names and passwords, using different browsers for different tasks, and switching off wifi, Bluetooth and GPS (on mobile) when you’re not specifically using them.
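As an illustration of the first behavioural measure, checking what a link actually leads to: phishing links often put a trusted brand at the start of the hostname, while the part that matters sits at the end. A small sketch (the URL below is made up):

```python
# Pull the actual host out of a URL, so a deceptive "paypal.com" prefix
# stands out. The registrable domain is at the END of the hostname.
from urllib.parse import urlparse

def real_host(url):
    """Return the host a URL actually points at."""
    return urlparse(url).netloc.lower()

link = "https://paypal.com.account-verify.example/login"
print(real_host(link))  # paypal.com.account-verify.example, not paypal.com
```

Reading the host from right to left (example ← account-verify ← paypal.com) is exactly the habit the behavioural measure tries to instil; the code just makes the parsing explicit.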

Over the years working on open data I’ve become increasingly aware of and concerned about information security, and since early 2014 I’ve been actively engaging with it. I’m more or less at level 7 of the list above, and with the company I think we need to be at level 5 at least, whereas some of us haven’t quite reached level 1 at the moment. From the examples I gave, and by showing some of the (simple) things I do, we had a conversation about the most pressing questions and issues each of us has. This we’ll use to sequence steps. We’ll create short FAQs and/or how-to sheets, we’ll suggest tools and behavioural measures, suggest what needs a collective choice, and provide help with adoption/implementation. I feel this gives us a ‘gentle’ approach, one that avoids the overwhelm that leads to not taking measures at all.

The first things people mentioned because they were worried about them are: usernames/passwords, e-mail, trackers, VPNs, and handling copies of IDs.
So we’ll take those as starting points.

If you want to read up on information security and operational security around your devices, the dearly missed Arjen Kamphuis’s book on information security for journalists is a very useful resource. My approach as described is more geared to the actual context of the people involved, what I know about their habits and routines, and the context of our work and typical projects.

Via Iskander Smit an interesting editorial on practices in digital ethics landed in my inbox: Translating Principles into Practices of Digital Ethics: Five Risks of Being Unethical, by Luciano Floridi, director of the Digital Ethics lab of Oxford University’s Internet Institute.

It lists 5 groups of practices that subvert (by being distracting or destructive) actual ethical principles being applied in digital technology. They are very recognisable from ethical discussions I’ve been in or witnessed. The paper also provides some pointers on how to address them.
I will list and quote the five practices, and add a sixth that I come across regularly in my own work. Together they are digital ethics dark patterns so to speak:

  1. ethics shopping: “the malpractice of choosing, adapting, or revising (“mixing and matching”) ethical principles, guidelines, codes, frameworks, or other similar standards (especially but not only in the ethics of AI), from a variety of available offers, in order to retrofit some pre-existing behaviours (choices, processes, strategies, etc.), and hence justify them a posteriori, instead of implementing or improving new behaviours by benchmarking them against public, ethical standards.”
  2. ethics bluewashing: the malpractice of making unsubstantiated or misleading claims about, or implementing superficial measures in favour of, the ethical values and benefits of digital processes, products, services, or other solutions in order to appear more digitally ethical than one is.
  3. ethics lobbying: the malpractice of exploiting digital ethics to delay, revise, replace, or avoid good and necessary legislation (or its enforcement) about the design, development, and deployment of digital processes, products, services, or other solutions. (can you say big tech?)
  4. ethics dumping: the malpractice of (a) exporting research activities about digital processes, products, services, or other solutions, in other contexts or places (e.g. by European organisations outside the EU) in ways that would be ethically unacceptable in the context or place of origin and (b) importing the outcomes of such unethical research activities.
  5. ethics shirking: the malpractice of doing increasingly less “ethical work” (such as fulfilling duties, respecting rights, and honouring commitments) in a given context the lower the return of such ethical work in that context is mistakenly perceived to be.
To this I want to add a sixth, based on observations in my work in various organisations and on what pops up ethics-related in my feedreader:

  6. ethics futurising: the malpractice of discussing ethics, and even hiring ethics advisors, only for technology and processes that are 10 years into the future for the organisation in question. E.g. AI ethics in a company that as yet has nothing to do with AI. At the same time that same ethical soul-searching is not applied to currently relevant and used practices, technology or processes. It has a part of ethics bluewashing in it (pattern 2, being seen as ethical rather than being ethical), but there’s something else at play as well: a blind spot for ethical questions being relevant in reflecting on current digital practices, tech choices, processes and e.g. data collection methods, the assumption being that current practice is all right.

I find this sixth one both distracting and destructive: it lets an organisation believe they are on top of digital ethics issues, while it is all unrelated to their core activities. As a result staff currently involved in real questions are left to their own devices, which means that for instance data protection officers are lonely figures, often all but ignored by their organisations until after a (legal) issue arises.

Late June, I will be contributing to a course at The Hague Academy for Local Governance on Integrity and Anti-Corruption efforts. My part in this course will look at the role and use of open data and transparency efforts.

I will show how open data helps create transparency, through e.g. public procurement data, land registry and land bank data, ultimate beneficial ownership data, public spending, (farming) subsidies, politicians’ expenses, etc., and the role of ‘many eyes’ in such cases.
I will certainly talk about how open data can create new agency, levelling the playing field between citizens and government entities (like local budget monitoring does). Also the role of investigative journalism (like Follow the Money here in NL), especially the cross-border variety, leaks, novel research groups like Bellingcat, and crowd sourced efforts to wade through large responses made to Freedom of Information requests, mapping impact of civil war, or detecting war crimes. All examples of, let’s call it ‘Data Driven Daylight’. I probably will also need to talk a bit about data provenance and data governance, as well as how understanding the basics of technology is a prerequisite if you have a role in preventing and detecting integrity and corruption issues.

My experiences in open data, work for the World Bank, and the UNDP (for which I contributed to an anti-corruption training a few years ago), as well as my role as board member of the leading Dutch transparency NGO Open State Foundation will be the basis.

A project I’m involved in has won funding from the SIDN Fund. SIDN is the Dutch domain name authority, and it runs a fund to promote, innovate, and stimulate internet use, to build a ‘stronger internet for all’.
With the Open Nederland association, the collective of makers behind the Dutch Creative Commons Chapter, of which I’m a board member, we received funding for our new project “Filter me niet!” (Don’t filter me.)

With the new EU Copyright Directive, the position of copyright holders will be in flux for the coming two years. Online platforms will be responsible for ensuring copyright compliance for the content you upload. In practice this will mean that YouTube, Facebook, and all those other platforms will filter out content where they have doubts concerning origin, license or metadata. For makers this is a direct threat, as they run the risk of seeing their uploads blocked even when they clearly hold the needed copyright. False positives are already a very common phenomenon, and this will likely get worse.

With Filtermeniet.nl (Don’t filter me) we want to aid makers who want to upload their work, by inserting a bit of advice and assistance right when they are about to hit that upload button. We’ll create a tool, guide and information source for Dutch media makers, through which they can declare the license that fits them best, as well as improve metadata, in order to lower the risk of being automatically filtered out for the wrong reasons.
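One concrete form such assistance could take is generating a machine-readable license declaration to accompany the upload, for instance the widely used rel="license" markup pattern. This is a minimal sketch with illustrative names, not the actual Filtermeniet.nl design:

```python
# Generate an HTML snippet that declares the license of a work in a
# machine-readable way, using the rel="license" attribution pattern.
def license_snippet(title, author, license_url):
    """Return an HTML attribution line with a machine-readable license link."""
    return (
        f'<p>"{title}" by {author} is licensed under '
        f'<a rel="license" href="{license_url}">{license_url}</a></p>'
    )

html = license_snippet(
    "Zomer in Utrecht",
    "A. Maker",
    "https://creativecommons.org/licenses/by/4.0/",
)
print(html)
```

Markup like this doesn’t guarantee an upload filter will honour it, but it is exactly the kind of explicit license and metadata signal that makes a false positive easier to contest.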

Yesterday I realised once again the importance of watching how others work with their tools. During the demos of what people worked on during IndieWebCamp Utrecht I was watching remotely as Frank demoed his OPML importer for Microsub servers. At some point he started sending messages to his Microsub server’s API, and launched Postman for it. That was my first takeaway from his demo. I decided to look Postman up, install it, and resolved to blog about the importance of sharing your set-up and showing people your workflows.

Then Peter independently, from a different cause, beat me to it with “You do it like that?”.

So consider this reinforcement of that message!

Björn Wijers demoing, with Dylan, Neil and Julia in the photo looking on

Most of yesterday’s participants returned today to get under the hood of their websites and build something. I didn’t attend in person, but participated remotely in the opening session this morning, and the demos this afternoon. The demo session has just concluded and some cool things were created, or at least started. Here are a few:

Frank Meeuwsen worked on an OPML importer for Aperture, a Microsub server. This way it is possible to import the feeds from your existing RSS reader into your Microsub server. Very useful to aid migrating to a new way of reading online content.

Jeremy Cherfas worked on displaying his GPS tracks on his site, using Compass.

Rosemary Orchard, extending on that, created the option of sharing her geo location on her site for a specified number of minutes.

Neil Mather installed a separate WordPress install to experiment with ActivityPub, and succeeded in sending messages from WordPress to Mastodon, and receive back replies.

Björn Wijers wrote a tool that grabs book descriptions from GoodReads for him to post to his blog when he finishes a book.

Martijn van der Ven picked up on Djoerd Hiemstra’s session yesterday on federated search, and created a search tool that searches within the weblogs of IndieWeb community members.

That concludes the first IndieWebCamp in Utrecht, with a shout-out to all who contributed.

This is a quick exploration of my current and preferred feed reading patterns, as part of my activities for Day 2, the hack day, of IndieWebCamp Utrecht.

I currently use a standalone RSS reader, which only consumes RSS feeds. I also experiment with TinyTinyRSS, which is a self-hosted feed grabber and reader. I am attracted to TinyTinyRSS because 1) it has a database I can access, 2) it can create RSS from any selection I make, and 3) it publishes a ‘live’ OPML file of the feeds I track, which I use as the blogroll in my sidebar.
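That ‘live’ OPML file is what makes the blogroll possible: anything that can parse OPML can re-render the subscription list. A minimal sketch, using a hand-made sample in the standard OPML shape (outline elements with an xmlUrl attribute); the feeds shown are illustrative:

```python
# Turn an OPML subscription list (as published by e.g. TinyTinyRSS)
# into (title, feed URL) pairs, usable as a blogroll.
import xml.etree.ElementTree as ET

def feeds_from_opml(opml_text):
    """Extract (title, feed URL) pairs from an OPML subscription list."""
    root = ET.fromstring(opml_text)
    return [
        (o.get("title") or o.get("text"), o.get("xmlUrl"))
        for o in root.iter("outline")
        if o.get("xmlUrl")  # skip folder outlines, which have no feed URL
    ]

sample = """<opml version="2.0"><body>
  <outline text="Peter Rukavina" xmlUrl="https://ruk.ca/rss.xml"/>
  <outline text="Frank Meeuwsen" xmlUrl="https://diggingthedigital.com/feed/"/>
</body></opml>"""

for title, url in feeds_from_opml(sample):
    print(title, url)
```

The same few lines, pointed at the live OPML URL instead of a string, are all a sidebar blogroll needs.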

What I miss is being able to follow ‘any’ feed, for instance JSON feeds which would allow tracking anything that has an API. Tracking #topics on Twitter, or people’s tweets. Or adding newsletters, so I can keep them out of my mail client, and add them to my reader. And there are things that I think don’t have feeds, but I might be able to create them. E.g. URLs mentioned in Slack channels, or conversation notes I take (currently in Evernote).

Using IndieWeb building blocks: the attraction of IndieWeb here is that it makes a distinction between collecting / grabbing feeds and reading them. A Microsub server grabs and stores feeds. A Microsub client then is the actual reader.
Combined with Micropub, the ability to post to your own site from a different client, this allows directly sharing or responding from a reader. In the background Webmention then works its magic of pulling all that together, so that the full interaction can be shown on my blog.
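The core of what a Microsub client does can be sketched in a few lines: fetch a channel’s timeline from the server and render the entries. The JSON below is a hand-made sample in the jf2-style shape the Microsub spec uses; a real client would GET it from your server’s endpoint (roughly `?action=timeline&channel=...`, with an auth token), and the URLs and names here are made up:

```python
# Render a Microsub timeline response: the server returns JSON with an
# "items" list of jf2 entries, each with author, name, and url.
import json

sample_response = json.loads("""{
  "items": [
    {"type": "entry", "name": "IndieWebCamp Utrecht wrap-up",
     "url": "https://example.org/wrap-up",
     "author": {"type": "card", "name": "Frank"}},
    {"type": "entry", "name": "On gevulot",
     "url": "https://example.org/gevulot",
     "author": {"type": "card", "name": "Ton"}}
  ]
}""")

def render_timeline(response):
    """Flatten timeline entries into printable one-liners."""
    return [
        f'{item["author"]["name"]}: {item["name"]} ({item["url"]})'
        for item in response.get("items", [])
        if item.get("type") == "entry"
    ]

for line in render_timeline(sample_response):
    print(line)
```

The separation is the point: the server owns fetching and storage, so a reader like this stays a thin rendering layer you can swap out at will.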

The sharing buttons in a (Microsub client) reader like Monocle are ‘like’, ‘repost’ and ‘reply’. This list is too short for my taste. Bookmarking, ‘repost with short remarks’ and ‘turn into a draft for long form’ are obvious additions. But there’s another range of things to add about sharing into channels that aren’t my website, or not a website at all, and channels that aren’t fully public.

To get things under my own control, first I want to run my own microsub server, so I have the collected feeds somewhere I can access. And so I can start experimenting with collecting types of feeds that aren’t RSS.

It was a beautiful morning, cycling along the canal in Utrecht, for the first IndieWebCamp. In the offices of shoppagina.nl about a dozen people found each other for a day of discussions, demos and other sessions on matters of independent web activities. As organisers, Frank and I aimed to not just discuss the IndieWeb as such, but also to tap into the more general growing awareness of what the silos mean for online discourse, and to seek connection with other initiatives and movements of similarly minded people.

Frank’s opening keynote

After Frank kicked off and introduced the key concepts of IndieWeb, we did an introduction round of everyone there. Some familiar faces, from last year’s IndieWebCamp in Nürnberg and from last night’s early bird dinner, but also new ones. Here’s a list with their (personal) websites.

Sebastiaan http://seblog.nl
Rosemary http://rosemaryorchard.com/
Jeremy https://jeremycherfas.net
Neil http://doubleloop.net/
Martijn https://vanderven.se/martijn/
Ewout http://www.onedaycompany.nl/
Björn https://burobjorn.nl
Harold http://www.storyconnect.nl/
Dylan http://dylanharris.org
Frank http://diggingthedigital.com
Djoerd https://djoerdhiemstra.com/
Ton https://www.zylstra.org/blog
Johan https://driaans.nl
Julia http://attentionfair.com

After the intros we collectively created the schedule, the part of the program I facilitated.

The program, transcribed here with links to notes and videos

Halfway through the first session I attended, on the IndieWeb building blocks, an urgent family matter meant I had to leave, just as Frank and I were starting to prepare lunch.

Later in the afternoon I remotely followed the etherpad notes and the live stream of a few sessions. Things that stood out for me:

Federated Search
Djoerd Hiemstra talked us through federated search. Search currently isn’t on the radar of IndieWeb efforts, but if IndieWeb is about taking back control, search cannot be a blind spot. Search being your gateway to the web means there’s a huge potential for manipulation. Federated search is a way of trying to work around that. Interestingly, the tool Djoerd and his team at the University of Twente developed doesn’t try to build a new but essentially similar database to get to a different search tool. This I take as a good sign: the novel shouldn’t mimic what it is trying to replace or defeat.

Discovery
This was an interesting discussion about how to discover new people and new sources that are worthwhile to follow, and how those tactics translate to IndieWeb tools. Frank rightly suggested a distinction between discovery, how to find others, and discoverability, how to be findable yourself. For me this session comes close to the topic I had suggested for the schedule, people-centered navigation and personal information strategies. As I had to leave, that session didn’t happen. I will need to go through the notes once more, to see what I can take from this.

Readers
Sebastiaan took us all through the interplay of microsub servers (that fetch feeds), readers (which are normally connected to the feed fetcher, but not in the IndieWeb), and how webmention and micropub enable directly responding and sharing from your reader interface. This is the core bit I need to match more closely with my own information strategies. One element is that IndieWeb discussions assume sharing is always about online sharing. But I never only think of it that way. Processing information means putting it in a variety of channels, some might be online, but others might be e-mails to clients or peers. It may mean bookmarked on my blog, or added to a curated bookmark collection, or stored with a note in my notes collection.

Day 2: building stuff
The second day, tomorrow, is about taking little steps to build things. I will again follow the proceedings remotely as far as possible. But the notes of the sessions about reading and discovery are good angles for me to start. I’d like to try to scope out my specs for reading, processing and writing/sharing in more detail, and hopefully do a small thing to run a reader locally to tinker with.

This looks like a very useful work, by over 65 authors and a team of editors including Mor Rubinstein and Tim Davies: The State of Open Data.

A little over a decade has passed since open data became a real topic globally and in the EU. I had my first discussions about open data in the spring of 2008, and started my first open data project, for the Dutch Ministry for the Interior, in January 2009. The State of Open Data looks at what has been achieved around the world over that decade since, but also looks forward:

How will open data initiatives respond to new concerns about privacy, inclusion, and artificial intelligence? And what can we learn from the last decade in order to deliver impact where it is most needed? The State of Open Data brings together over 65 authors from around the world to address these questions and to take stock of the real progress made to date across sectors and around the world, uncovering the issues that will shape the future of open data in the years to come.

Over 18 months the authors and editors worked to pull all this material together. That is quite an impressive effort. I look forward to working my way through the various parts in the coming time. Next to the online version African Minds has made a hard copy version available, as well as a free downloadable PDF. That PDF comes in at 594 pages, so don’t expect to take it all in in one sitting.

Hossein Derakhshan makes an important effort to find more precise language to describe misinformation (or rather mis-, dis- and mal-information). In this Medium article, he takes a closer look at the different combinations of actors and targets, along the lines of state entities, non-state entities and the public.

Table by Hossein Derakhshan, from article DisInfo Wars

One of his conclusions is

…that from all the categories, those three where non-state organisations are targeted with dis-/malinfomation (i.e. SN, NN, and PN) are the most effective in enabling the agents to reach their malicious goals. Best example is still how US and UK state organisations duped independent and professional media outlets such as the New York Times into selling the war with Iraq to the public.
….
The model, thus, encourages to concentrate funds and efforts on non-state organisations to help them resist information warfare.

He goes on to say that public protection against public agents is too costly, or too complicated:

the public is easy to target but very hard (and expensive) to protect – mainly because of their vast numbers, their affective tendencies, and the uncertainty about the kind and degree of the impact of bad information on their minds

I feel that this is where our individual civic duty to do crap detection, and call it out when possible, or at least not spread it, comes into play as inoculation.

Jerome Velociter has an interesting riff on how Diaspora, Mastodon and similar decentralised and federated tools are failing their true potential (ht Frank Meeuwsen).

He says that these decentralised federated applications are trying to mimic the existing platforms too much.

They are attempts at rebuilding decentralized Facebook and Twitter

This tendency has multiple faces
I very much recognise this tendency, for this specific example, as well as in general for digital disruption / transformation.

It is recognisable in discussions around ‘fake news’ and media literacy where the underlying assumption often is to build your own ‘perfect’ news or media platform for real this time.

It is visible within Mastodon in the missing long tail, and the persisting dominance of a few large instances. The absence of a long tail means Mastodon isn’t very decentralised, let alone distributed. In short, most Mastodon users are as much in silos as they were on Facebook or Twitter, just with a less generic group of people around them. It’s just that these new silos aren’t run by corporations, but by some individual. Which is actually worse from a responsibility and liability viewpoint.

It is also visible in how there’s a discussion in the Mastodon community on whether the EU Copyright Directive means there’s a need for upload filters for Mastodon. This worry really only makes sense if you think of Mastodon as similar to Facebook or Twitter. But in terms of full distribution and federation, it makes no sense at all, and I feel Mastodon’s lay-out tricks people into thinking it is a platform.

This type of effect I recognise from other types of technology as well, e.g. in local exchange trading systems (LETS), i.e. alternative currency schemes. There too I’ve witnessed them faltering because the users kept making their alternative currency the same as national fiat currencies: precisely the thing they said they were trying to get away from, thereby throwing away all the different possibilities of agency and control they had for the taking.

Dump mimicry as design pattern
So I fully agree with Jerome when he says distributed and federated apps will need to come into their own by using other design patterns. Not by using the design patterns of current big platforms (who will all go the way of ecademy, orkut, ryze, jaiku, myspace, hyves and a plethora of other YASNs. If you don’t know what those were: that’s precisely the point).

In the case of Mastodon one such copied design pattern that can be done away with is the public facing pages and timelines. There are other patterns that can be used for discoverability for instance. Another likely pattern to throw out is the Tweetdeck style interface itself. Both will serve to make it look less like a platform and more like conversations.

Tools need to provide agency and reach
Tools are tools because they provide agency, they let us do things that would otherwise be harder or impossible. Tools are tools because they provide reach, as extensions of our physical presence, not just across space but also across time. For a very long time I have been convinced that tools need to be smaller than us, otherwise they’re not tools of real value. Smaller than us (see item 7 in my agency manifesto) means that the tool is under the full control of the group of users using it. In that sense e.g. Facebook groups are failed tools, because someone outside those groups controls the off-switch. The original promise of social software, when they were mostly blogs and wikis, and before they morphed into social media, was that it made publishing, interaction between writers and readers, and iterating on each other’s work ‘smaller’ than the writers themselves. Distributed conversations as well as emergent networks and communities were the empowering result of that novel agency.

Jerome also points to something else I think is important:

In my opinion the first step is to build products that have value for the individual, and let the social aspects, the network effects, sublime this value. Value at the individual level can be many things. Let me organise my thoughts, let me curate “my” web, etc.

I don’t fully agree with the individual-versus-network distinction, though. To me, in place of just the individual you can also put small coherent groups within a single context: the unit of agency in networked agency. So I’d rather talk about tools that are useful as a single instance (regardless of who uses it), and even more useful across instances.

Like blogs mentioned above and mentioned by Jerome too. This blog has value for me on its own, without any readers but me. It becomes more valuable as others react, but even more so when others write in their own space as response and distributed conversations emerge, with technology making it discoverable when others write about something posted here. Like the thermometer in my garden that tells me the temperature, but has additional value in a network of thermometers mapping my city’s microclimates. Or like 3D printers which can be put to use on their own, but can be used even better when designs are shared among printer owners, and used even better when multiple printer owners work together to create more complex artefacts (such as the network of people that print bespoke hand prostheses).

We indeed need to spend more energy designing tools that truly take distribution and federation as a starting point. Tools that are ‘smaller’ than us, so that user groups control their own tools and have the freedom to tinker. This applies not just to online social tools, but to any software tool, and just as much to connected products and the entire maker scene.

The Mastodon community worries about whether the new EU copyright directive (which won’t enter into force for 2 years) will mean upload filters being necessary for the use of the ActivityPub protocol.

I can’t logically see why that would be the case, but only because I don’t compare Mastodon to e.g. Twitter or Facebook. If you do make that comparison, I suspect the worry is logical.

Mastodon is a server and a client for the ActivityPub protocol. In a fully distributed instance of Mastodon you would have only a small group of users, or just one. This is the case in my Mastodon instance, which only I use. (As yet the Mastodon universe isn’t very distributed or decentralised at all, there’s no long tail.)

The ActivityPub protocol basically provides an outbox and an inbox for messages. Others can come and get the messages you make available to them from your outbox, and your server can itself deliver messages from your outbox into someone else’s inbox.

That way the Mastodon server can make what you put into your outbox publicly available to all. Others can put messages for you in your inbox, and the Mastodon client can publicly show what you receive in your inbox.
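As a rough sketch (not Mastodon’s actual implementation, and with made-up example URLs), an ActivityPub message is just a JSON document: a ‘Create’ activity wrapping a Note, which a server delivers by POSTing it to the recipient’s inbox URL.

```python
import json

# A minimal ActivityStreams "Create" activity wrapping a Note.
# A server would place this in the sender's outbox and deliver it
# by POSTing the JSON to the recipient's inbox endpoint, e.g.
#   POST https://other.example/users/peter/inbox
# All actor and recipient URLs here are hypothetical examples.
activity = {
    "@context": "https://www.w3.org/ns/activitystreams",
    "type": "Create",
    "actor": "https://social.example/users/ton",
    "to": ["https://other.example/users/peter"],
    "object": {
        "type": "Note",
        "content": "Hello from my own single-user instance",
    },
}

print(json.dumps(activity, indent=2))
```

Nothing in this exchange requires a public page anywhere: it is one server handing one document to another server’s inbox.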

But making anything public isn’t necessary at all. In fact, I don’t need the public-facing profile and message timeline on my Mastodon instance at all; they are non-essential. Without such pages there’s no way to argue that the messages I receive in my inbox are uploaded by others to a platform, falling within the scope of a potential upload filter requirement.

My Mastodon instance isn’t a platform, and the messages others send to it aren’t uploads. The existence and form of other ActivityPub clients and servers demonstrate that neatly. I currently send ActivityPub messages from my weblog as well, without them being visible on my blog. I can receive them in my Mastodon instance, or any other AP client, without them being visible to others, just as I can read any answers to such a message on the back-end of my blog without them being visible to anyone but me and the sender(s). Essentially AP is more like one-to-one messaging, with the ability to do one-to-many and many-to-many as well.
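The one-to-one versus one-to-many distinction lives entirely in an activity’s addressing fields. ActivityPub defines a special ‘Public’ collection URI; only messages explicitly addressed to it are publicly visible. A small sketch, with hypothetical actor and follower-collection URLs:

```python
# ActivityPub's special collection URI meaning "publicly visible".
PUBLIC = "https://www.w3.org/ns/activitystreams#Public"

def is_public(activity):
    """An activity is public only if explicitly addressed to the
    Public collection; otherwise it is direct messaging."""
    recipients = activity.get("to", []) + activity.get("cc", [])
    return PUBLIC in recipients

# One-to-one: addressed to a single actor, nothing public about it.
direct = {"to": ["https://other.example/users/peter"]}

# One-to-many/public: addressed to the Public collection,
# cc'ed to the sender's followers collection.
broadcast = {
    "to": [PUBLIC],
    "cc": ["https://social.example/users/ton/followers"],
}

print(is_public(direct))     # not public: plain messaging
print(is_public(broadcast))  # public: visible to all
```

The same protocol machinery carries both; the ‘platform’ feel is purely a matter of which recipients you list.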

The logical end game of decentralisation is full distribution into instances with only individuals or tight-knit groups, federated where useful. The way the Mastodon client is laid out (sort of like Tweetdeck) suggests we’re dealing with a platform-like thing, but that’s all it is: just lay-out. I could give my e-mail client a similar lay-out (one column with mail threads from my most contacted peers, one with mails just to me, one with all mails sent through the same mail server, one with all mails received by this server from other mail servers). That would however not turn my mail server plus client into a platform. It would still be e-mail.

Mastodon’s lay-out is confusing matters by trying to be like Twitter and Tweetdeck instead of being its own thing, and I posit all ‘upload filter’ worries stem from this confusion.