As a long-time netizen it is easy to forget that for many people now online, their entire experience of the internet takes place within the web silos. I frequent silos too, but I have also kept a place well outside of them for over two decades. When you’ve never ‘played outside’, building your own space beyond the silos can be an eye-opener. Greg McVerry pointed to the blog of one of his students, who described the experience of stepping outside the silos (emphasis mine):

The fact that I now have a place where I can do that, where I can publish my thoughts whenever I want in a place open for people to read and to not be afraid of doing so, is liberating. I’ve always wanted a space online to call my own. I’m so tired of all the endless perfection I see on social media. My space, “Life Chapter by Chapter” is real. It’s me, personified by a website. And though this post is not digitally enhanced in any way, I love it because it’s representative of the bottom line of what I’ve learned in EDU 106. I’m my own person on this site, I’m not defined by Instagram, Facebook, or Twitter. I can post what I want, when I want, how I want. It’s a beautiful thing.

That’s a beautiful thing, indeed. Maybe this is the bit that Frank Meeuwsen and I need to take as the key to the story when writing a book, as Elja challenged us today (in Dutch).

Alnwick Garden - Walled Garden
There’s a world outside the walled garden. (Photo by Gail Johnson, CC-BY-NC)

Next week it will be 50 years since Doug Engelbart (1925-2013) and his team demonstrated all that has come to define interactive computing. Five decades on, we still haven’t turned everything in that live demo into routine daily things: the mouse, video conferencing, word processing, outlining, drag and drop, digital mind mapping, real-time collaborative editing from multiple locations. In 1968 it was all already there, yet in 2018 we are still catching up with several aspects of that live-demonstrated vision. Doug Engelbart and his team ushered in the interactive computing era to “augment human intellect”, and on the 50th anniversary of The Demo a symposium will ask what augmenting the human intellect can look like in the 21st century.


A screenshot of Doug Engelbart during the 1968 demo

The 1968 demo was later named ‘the Mother of all Demos’. I first saw it in its entirety at the 2005 Reboot conference in Copenhagen, after which Doug Engelbart had a video conversation with us. To me it was a great example, not merely of prototyping new tech, but most of all of proposing a coherent and expansive vision of how different technological components, human networked interaction, and routines can together be used to create new agency and new possibilities. To ‘augment human intellect’ indeed. That to me is the crux: looking at the entire constellation of humans, our connections, routines, methods and processes, and our technological tools, and how together they achieve our desired impact. Others likely think of me as a techno-optimist, but I don’t think I am. I am generally an optimist, yes, but to me what is key is our humanity, and creating tools and methods that enhance and support it. Tech as tools, in context, not tech as a solution on its own. It’s what my networked agency framework is about, and what I try to express in its manifesto.

Paul Duplantis has blogged about where the planned symposium, and more importantly us in general, may take the internet and the web as our tools.

Doug Engelbart on video from Calif.
Doug Engelbart on screen in 2005, during a video chat after watching the 1968 Demo at Reboot 7

Some links I thought worth reading the past few days

Last weekend during the Berlin IndieWeb Camp, Aaron Parecki gave a brief overview of where he and the community stand concerning the ‘social reader’. This is of interest to me because for as long as I have been reading RSS, I have been doing by hand what he described doing more automatically.

These are some notes I made watching the live stream of the event.

Compared to the algorithmic timelines of FB, Twitter and Instagram, which show you what they decide to show you, the Social Reader is about taking control: follow the things you want, in the order that you want.
RSS readers were and are like that. But RSS reading never went past linear reading of all the posts from all your feeds in reverse chronological order. No playing around with how these feeds are presented to you, and no way to take actions from within the reader based on the things you read (sharing, posting, bookmarking, flagging, storing etc.): there are no action buttons in your feed reader, other than ‘mark as unread’ or ‘archive’.

In the IndieWeb world publishing works well, Aaron said, but reading has been an issue (at least once it goes beyond reading a blog and commenting).
That’s why he built Monocle and Aperture. Aperture takes in all kinds of feeds (RSS, JSON, Twitter, even scripts pushing material to it) and groups them into channels. Monocle is a reader on top of that, presenting those channels in a nice way. To that he added action buttons, like reply etc. Those actions you initiate directly in the reader, and they always post to your own site. The other, already existing IndieWeb building blocks then send your response to the original source of the item you’re responding to. See Aaron’s posting from last March, “Building an IndieWeb Reader”, with screenshots, to get a feeling for how it all looks in practice.
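Aaron’s actual implementation lives in his own PHP services, but the action buttons rest on the Micropub protocol: a form-encoded POST to your own site’s endpoint, authorised with an IndieAuth token. Purely as an illustration of that flow, here is a hedged Python sketch; the endpoint URL and token are placeholders, and nothing is actually sent.

```python
# Sketch of the Micropub request a social reader builds when you hit "reply".
# MICROPUB_ENDPOINT and ACCESS_TOKEN are made-up placeholders.
from urllib.parse import urlencode
from urllib.request import Request

MICROPUB_ENDPOINT = "https://example.com/micropub"  # discovered from your homepage
ACCESS_TOKEN = "xyz"  # obtained via IndieAuth

def build_reply(in_reply_to: str, content: str) -> Request:
    body = urlencode({
        "h": "entry",                # create an h-entry post on your own site
        "in-reply-to": in_reply_to,  # the item you are reacting to in the reader
        "content": content,
    }).encode()
    return Request(
        MICROPUB_ENDPOINT,
        data=body,
        headers={
            "Authorization": f"Bearer {ACCESS_TOKEN}",
            "Content-Type": "application/x-www-form-urlencoded",
        },
        method="POST",
    )

# Build (but do not send) a reply to an item seen in the reader.
req = build_reply("https://aaronparecki.com/2018/03/12/17/building-an-indieweb-reader",
                  "Great overview, thanks!")
print(req.full_url, req.get_method())
```

From there, webmention delivery (the "already existing building blocks") notifies the original source of the reply.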

The power of this set-up is that it separates the layers: how you collect material, how you work on that material, and how you present content. It looked great when Aaron demo’d it when I met him at IWC Nürnberg two weeks earlier.

For me, part of the actions I’d like to take are definitely outside the scope of my own website, or at the very least outside its public part. See what I wrote about my ideal feed reader. Part of the automation of actions I’d want to point to different workflows on my own laptop, for instance: feeding into desk research, material for client updates, and things like that.

I’m interested in running things like Aperture and Monocle locally, but a first step is exploring them in the form Aaron provides for test driving. Aperture works fine, but I can’t yet get Monocle to work for me. This is, I guess, the same issue I ran into two weeks ago: my site doesn’t support sending authorisation headers.

Today at 14:07, sixteen years ago, I published my first blogpost. The first few months I posted on Blogger, but after 6 months, deciding having a blog was no longer just an experiment, I moved to my own domain, where it has resided since. First it was hosted on a server I ran from my home; later I moved to a hosting package for more reliability.

Interestingly, in that first blogpost only the links to personal domains still work; all the others have since become obsolete. Radio Userland no longer exists, nor does the Knowledge Board platform that I mention and even refer to as a place to find out more about me. In my first blogpost I also link to an image that was hosted on my server at home, using the subdomain name my internet provider gave me back then. That provider was sold in 2006 and that subdomain no longer exists either. Blogger itself does still exist and even keeps my old Blogger.com blog alive. But Google has of course shown frequently that it can and does kill services at short notice, or suspend your account.

The only original link in that first posting that still works is the one to David Gurteen’s blog, hosted on his own domain gurteen.com; his blogpost actually preserves some of the things I wrote at the now-gone Knowledge Board. Although the original link to Lilia’s blogpost on Radio Userland no longer works, I could repair it because she moved to her own domain in the same week I launched my blog. The link to Seb’s Radio Userland site has been preserved in archive.org. Which goes to show: if you care about your own data, your own writing, your own journal of thoughts, you need to be able to control the way your creative output can be accessed online. Otherwise it’s just a bit of content that serves as platform fodder.

So in a sense my very first blogpost in hindsight is a ringing endorsement for the IndieWeb principle of staying in control of your stuff. That goes further than having your own domain, but it’s a key building block.

Last year the anniversary of this blog coincided with leaving Facebook and returning to writing in this space more. That certainly worked out. Maybe I should use this date for a yearly reflection on how my online behaviours do or don’t aid my networked agency.

As a first step to better understand the different layers of adding microformats to my site (what is currently done by the theme, what by plugins etc.), I decided to start with: what is supposed to go where?

I made a post-it map on my wall to create an overview for myself. The map corresponds to the front page of my blog.

Green is content, pink is h- elements, blue u- elements, and yellow p- elements, with the little square ones covering dt- elements and rel values. All this is based on the information provided at http://microformats.org/wiki/Main_Page, and not on what my site actually does. So the next step is comparing what I expect to be there based on this map with what is actually there. This map is also a useful step towards adding microformats to the u-design theme myself.

Mapping microformats on my site
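Comparing the map with what the site actually emits could later be automated. As a stdlib-only sketch of that comparison step, the snippet below walks an HTML fragment and buckets the class names by microformats prefix; the sample HTML is made up, not taken from my actual theme.

```python
# Bucket microformats class names by prefix:
# h- (root objects), u- (URL properties), p- (plain-text properties),
# dt- (date/time properties), e- (embedded markup).
from collections import defaultdict
from html.parser import HTMLParser

class MfClassCollector(HTMLParser):
    PREFIXES = ("h-", "u-", "p-", "dt-", "e-")

    def __init__(self):
        super().__init__()
        self.found = defaultdict(set)

    def handle_starttag(self, tag, attrs):
        for name, value in attrs:
            if name != "class" or not value:
                continue
            for cls in value.split():
                for prefix in self.PREFIXES:
                    if cls.startswith(prefix):
                        self.found[prefix].add(cls)

# A made-up fragment standing in for one post on the front page.
html = """
<article class="h-entry">
  <a class="u-url" href="/post/1"><span class="p-name">Title</span></a>
  <time class="dt-published" datetime="2018-11-03">3 Nov 2018</time>
</article>
"""
collector = MfClassCollector()
collector.feed(html)
print(dict(collector.found))
```

Running this over the real front page and diffing the result against the post-it map would show exactly which expected microformats are missing.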

This weekend the IndieWeb Camp Berlin is taking place. I attended the Nürnberg edition two weeks ago. This morning I remotely followed the livestream for a bit, while working on a few website-related things in the spirit of IndieWeb.

I put the livestream up on the wall, to more easily follow what is demo’d on screen, keeping my standing desk screen(s) focused on the work in front of me.


From the recent posting on Mastodon and its current lack of a long tail, I want to highlight a specific notion, which is why I am posting it here separately: the notion that a long tail in tool usage is a measure of distribution, and as such a proxy for networked agency. [A long tail is defined as the bottom 80% of certain things making up over 50% of a ‘market’. The 80% least-sold books in the world make up more than 50% of total book sales. The 80% smallest Mastodon instances, on the other hand, account for less than 15% of all Mastodon users, so that is not a long tail.]
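That bracketed definition translates directly into a check one could run on instance statistics. A small sketch; the numbers below are illustrative distributions, not real Mastodon or book-market data.

```python
# Long-tail test: do the smallest 80% of instances hold at least 50% of users?
def has_long_tail(sizes, bottom_share=0.8, threshold=0.5):
    """sizes: user counts per instance (any order)."""
    ordered = sorted(sizes)                    # smallest instances first
    cutoff = int(len(ordered) * bottom_share)  # index bounding the bottom 80%
    return sum(ordered[:cutoff]) / sum(ordered) >= threshold

# Book-sales-like spread: many modest sellers -> the bottom 80% dominates.
print(has_long_tail([1] * 80 + [2] * 20))    # long tail present

# Mastodon-like spread: a few huge instances -> bottom 80% holds almost nothing.
print(has_long_tail([1] * 80 + [100] * 20))  # no long tail
```

The same function works for any tool whose usage can be counted per installation, which is what makes the long tail usable as a general proxy for distribution.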

To me being able to deploy and control your own tools (both technology and methods), as a small group of connected individuals, is a source of agency, of empowerment. I call this Networked Agency, as opposed to individual agency. Networked also means that running your own tool is useful in itself, and even more useful when connected to other instances of the same tool. It is useful for me to have this blog even if I am its only reader, but my blog is even more useful to me because it creates conversations with other bloggers, it creates relationships. That ‘more useful when connected’ is why distributed technology is important. It allows you to do your own thing while being connected to the wider world, but you’re not dependent on that wider world to be able to do your own thing.

Whether a technology or method supports a distributed mode, in other words, is an important feature to look for when deciding whether to use it. Another aspect is the threshold to adoption of such a tool. If it is too high, it is unlikely that people will use it, and the actual distribution will be very low, even if in theory the tool supports it. Looking at the distribution of usage is then a good measure of a tool’s success. Are more people using it individually or in small groups, or are more people using it in a centralised way? That is what a long tail describes: at least 50% of usage takes place in the 80% of smallest occurrences.

In June I spoke at State of the Net in Trieste, where I talked about Networked Agency. One of the issues raised there in response was about scale, as in “what you propose will never scale”. I interpreted that as a ‘centralist’ remark, and not a ‘distributed’ view, as it implied somebody specific would do the scaling. In response I wrote about the ‘invisible hand of networks‘:

“Every node in a network is a scaler, by doing something because it is of value to themselves in the moment, changes them, and by extension adding themselves to the growing number of nodes doing it. Some nodes may take a stronger interest in spreading something, convincing others to adopt something, but that’s about it. You might say the source of scaling is the invisible hand of networks.”

In part it is a pun on the ‘invisible hand of markets’, but it is also a bit of hand-waving, as I didn’t actually have precise notions of how that would need to work at the time of writing. Thinking about the long tail that is missing in Mastodon, and thus about Mastodon not yet delivering the distributed social networking experience it is intended for, allows me to make the ‘invisible hand of networks’ a bit more visible, I think.

If we want to see distributed tools get more traction, that really should not come from a central entity doing the scaling; that will create counter-productive effects. Most of the Mastodon promotion comes from the first few moderators, who as a consequence now run large de facto centralised services, where 77% of all participants are housed on 0.7% (25 of over 3400) of servers. The adage goes that in networks smartness needs to be at the edges, and that means that promoting adoption needs to come from those edges, not the core: extending the edges, expanding the frontier. In the case of Mastodon that means the outreach needs to come from the smallest instances towards their immediate environment.

Long tail forming as an adoption pattern is then a good way to see if broad distribution is being achieved.
Likely elements of promoting from the edge, which form the ‘invisible hand of networks’ doing the scaling, are I suspect:

  • Show and tell: how one instance of a tool has value to you, and how connected instances have more value
  • Being able to explain core concepts (distribution, federation, agency) in contextually meaningful ways
  • Being able to explain how you can discover others using the same tool, that you might want to connect to
  • Lower thresholds of adoption (technically, financially, socially, intellectually)
  • Reach out to groups and people close to you (geographically, socially, intellectually), that you think would derive value from adoption. Your contextual knowledge is key to adoption.
  • Help those you reach out to set up their own tools, or if that is still too hard, ‘take them in’ and allow them the use of your own tools (so they at least can experience if it has value to them, building motivation to do it themselves)
  • Document and share all you do. In Bruce Sterling’s words: it’s not experimenting if you’re not publishing about it.

An adoption-inducing setting: Frank Meeuwsen explaining his steps in leaving online silos like Facebook and Twitter, and doing more on the open web. In our living room, during my wife’s birthday party.

Today the 2.6 version of Mastodon was released. It now has built-in support for “rel=me”, which allows verification: I can show on my Mastodon profile a link to my site and prove that both are under my control.

Rel=me is something you add to a link on your own site, to indicate that the page or site you link to also belongs to you. The page you link to needs to link back to your site, making it reciprocal. This is machine readable, and allows others to establish that different pages are under the control of the same person or entity.

On my own site I use ‘rel=me’ in the about section in the right hand column. First, if you check the html source of my page, you’ll see that I say that this site (zylstra.org/blog) is my primary site, by making it the only link that has a ‘u-uid’ class (uid is unique id). It also has rel=”me”, meaning the relationship I have with the linked site, is that it is me: 

class="u-url url u-uid uid" rel="me" href="https://www.zylstra.org/blog"

Further down in that About segment you find other links, to my Mastodon and Twitter profiles. If you look at those links you will see it says:

rel="me" href="https://m.tzyl.nl/@ton"

saying my Mastodon profile is also me, and similarly to say that a specific Twitter profile is also me (I maintain other Twitter profiles as well, but they’re not me personally; they belong to my company etc.):

rel="me" href="https://twitter.com/ton_zylstra"

To close the loop that allows verification that that is true, both my Mastodon profile and my Twitter profile need to link back to my site in a way that machines can check. For Twitter that is easiest: it has a specific place in a user profile for a website address. Like in the image below.

In Mastodon I can add multiple URLs to my profile, but there was no way to explicitly say that a specific link is the one representing my online identity. Now I can add a rel=me link in my Mastodon profile, so that my website and my Mastodon profile link to each other in a verifiable way, proving both are under my control. As you can see in the image below, it was already available on a single instance for testing purposes (the green mark signifies verification with the linked site), and with today’s release it is available to all Mastodon instances.

So how is verifying that the same person controls different pages useful? It can show that another Twitter profile with my name is not me, because there’s no two-way link between that profile and my site. If you have multiple rel=me references, it becomes harder for others to fake specific parts of your online identity. Further, it allows additional functionality, like logging in on one site using credentials from another site you control. It also makes network mapping and discovery possible: site X links with rel=me to profile X on Twitter. There X follows Y, and Y’s profile says site Y is under her control. Now I know that the authors of site X and site Y are somehow connected, and if I follow site X, I may find it interesting to also regularly read site Y.
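The verification loop described above boils down to a reciprocal-link check: page A is verified against page B when each carries a rel="me" link pointing at the other. A stdlib-only sketch of that logic; the HTML fragments are simplified stand-ins for real profile pages, and fetching them over the network is left out.

```python
# Collect rel="me" links from an HTML page, then check reciprocity.
from html.parser import HTMLParser

class RelMeCollector(HTMLParser):
    def __init__(self):
        super().__init__()
        self.links = set()

    def handle_starttag(self, tag, attrs):
        a = dict(attrs)
        # rel can hold multiple space-separated values, so split before testing.
        if tag in ("a", "link") and "me" in (a.get("rel") or "").split():
            self.links.add((a.get("href") or "").rstrip("/"))

def rel_me_links(html):
    c = RelMeCollector()
    c.feed(html)
    return c.links

def verified(url_a, html_a, url_b, html_b):
    """True when A rel=me-links to B and B rel=me-links back to A."""
    return (url_b.rstrip("/") in rel_me_links(html_a)
            and url_a.rstrip("/") in rel_me_links(html_b))

site = '<a rel="me" href="https://m.tzyl.nl/@ton">Mastodon</a>'
profile = '<a rel="me" href="https://www.zylstra.org/blog">Blog</a>'
print(verified("https://www.zylstra.org/blog", site,
               "https://m.tzyl.nl/@ton", profile))
```

This is presumably close to what Mastodon 2.6 does before showing the green verification mark, though its actual implementation is its own.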

As soon as my Mastodon instance has been updated to the latest version, which will likely be sometime today, I will add the rel=me in my Mastodon profile, making the link between this site and that profile verifiable.

[UPDATE] It now works on my Mastodon instance:

So, I’m posting this using Quill, a micropub client that I am running on my own laptop (see steps I took). I intend this as a step towards being able to draft postings offline (which I am used to doing, usually in a text editor or Evernote), as well as posting without using the WordPress back-end.

Aim: run Quill locally, to write draft posts offline (and later maybe see if I can store drafts locally).

(I run MAMP PRO on my Mac; I also run a WordPress install locally, with all IndieWeb plugins enabled and the Sempress theme.)

Quill: https://github.com/aaronpk/Quill

I downloaded Quill and installed it in http://localhost:8888/quill.
The installation instructions mention using Composer to install a range of dependencies. I did not know what that was, so I had to Google around to find out it is a tool to install PHP dependencies. I followed the instructions at https://getcomposer.org/download/ to install Composer on my Mac.
Then I could call the URL http://localhost:8888/quill/public/index.php fine.

However it doesn’t load images correctly, and links don’t work, as they are relative to http://localhost:8888/ and not http://localhost:8888/quill/public.

Aaron, who created Quill, told me Quill expects to run as a root domain.
So: I added a host quill.test on port 80 in my MAMP set-up, with the /public as root folder. Now Quill loads fine and URLs work.

To get it to work right with MySQL on my laptop I added a database called quill. I had first created a new user, but that didn’t work, so I used the existing root user instead. I also had to run this SQL query to create the table that Quill uses.

After that it worked fine. Next up, thinking about how I’d like to change Quill, as an offline tool for me to prepare postings. Also want to experiment with using it to post to different blogs.

Just quickly jotting some thoughts down about bookmarking, as part of a more general effort of creating an accurate current overview of my information strategies.

Currently I store all my bookmarks in Evernote, saving the full article or PDF (not just the URL, removing the risk of it being unavailable later, or behind a paywall). I sometimes add a brief annotation at the start, and may add one or more tags.

I store bookmarks to Evernote from my browser on the laptop, but also frequently from my mobile, where I pick them out of various timelines.
There are several reasons I store bookmarks.

  • I store predictions people make, to be able to revisit them later, and check on whether they came true or not.
  • I store newspaper articles to preserve how certain events were depicted at the time they happened (without the historic reinterpretation that usually follows later)
  • I store pages for later reading (replacing Instapaper)
  • I store bookmarks for sharing in (collated) blogposts, or on Twitter, or to send to a specific person (‘hey, this looks like what you were looking for last week’)
  • I store bookmarks around topics I am currently interested in, as resource for later or current desk research, or for a current project.
  • I store bookmarks as reminders (‘maybe this restaurant is a place to go to sometime when next in Berlin’, ‘possible family trip’, ‘possible interesting conference to attend’)

In the past, when I still used Delicious, when it still had a social networking function, I also used bookmarking for discovery of people. Because social tools work in triangles (as I said in 2006), I would check in Delicious who else had bookmarked something, and with which tags they did so. The larger the difference in tags (e.g. I’d tag ‘knowledge management’ and they’d tag ‘medication’) or difference in jargon (me ‘complexity’, they ‘wicked_problem’, another ‘intractable’), the likelier someone was part of communities different from mine, yet focusing on the same things. Then I’d seek out their blog etc., and start following their RSS feeds. It was a good way to find people based on professional interests and extend my informal learning network. A way to diversify my inputs for various topics.
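That tag-difference heuristic can be made concrete with a simple distance measure over tag sets: the less overlap with my own vocabulary for the same bookmarked URL, the likelier the other person sits in a different community around the same topic. A sketch, using made-up tag sets modelled on the examples above.

```python
# Rank other people who bookmarked the same URL by how different
# their tag vocabulary is from mine (1 - Jaccard similarity).
def tag_distance(mine: set, theirs: set) -> float:
    """0.0 = identical vocabularies, 1.0 = completely disjoint."""
    union = mine | theirs
    return 1 - len(mine & theirs) / len(union) if union else 0.0

my_tags = {"knowledge_management", "complexity"}
others = {
    "alice": {"knowledge_management", "km"},  # close to my own vocabulary
    "bob": {"medication", "intractable"},     # same link, different world
}

# Sort most-distant first: those are the interesting feeds to go explore.
for person, tags in sorted(others.items(),
                           key=lambda kv: -tag_distance(my_tags, kv[1])):
    print(person, round(tag_distance(my_tags, tags), 2))
```

A decentralised bookmarking tool could offer exactly this ranking as its discovery feature.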

A visualisation of Kars Alfrink’s Delicious bookmarks, based on usage of tags, 2006, CC-BY

Looking at that list of uses, I notice it is a mixture of things that can be public, things that can be public to some, and things that are just for my eyes. I also know that I don’t like publishing single bookmarks to my blog, unless I have an extended annotation to publish with them (more a reflection on or response to a link than just a bookmark). Single bookmarks posted to a blog I experience as cluttering up the timeline (though they could live on a different page perhaps).
The tagging is key as a filing mechanism, and annotation can be a helpful hint to my future self why I stored it, as much as a thought or an association.

When I think of ‘bringing bookmarking home’ in the sense of using only non-silo tools and owning the data myself, several aspects are important:

  • The elements I need to store: URL, date/time stored, full article/pdf, title, tags, notes. Having a full local copy of a page or PDF is a must-have for me; you can’t rely on something still being there the next time you visit a URL.
  • The things I want to be able to do with it are mostly a filtering on tags I think (connecting it to one or more persons, interests, projects, channels etc.), and then having different actions/processes tied to that filtering.
  • I’d want to have the bookmarks available offline on my laptop, as well as available for sharing across devices.
  • It would be great if there was something that would allow the social networking type of bookmarking I described, or make it possible in decentralised fashion
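The storage elements and the tag-based filtering from the list above could be sketched as a simple local data structure; the field names here are my own guesses, not the schema of any existing tool.

```python
# A minimal local bookmark record plus the core tag-filtering action.
from dataclasses import dataclass, field
from datetime import datetime, timezone

@dataclass
class Bookmark:
    url: str
    title: str
    archived_copy: bytes  # full page or PDF, stored locally, not just the URL
    stored_at: datetime = field(default_factory=lambda: datetime.now(timezone.utc))
    tags: set = field(default_factory=set)  # persons, interests, projects, channels
    notes: str = ""                         # hint to my future self

def filter_by_tags(bookmarks, wanted):
    """Select bookmarks whose tags overlap a wanted set of tags."""
    return [b for b in bookmarks if b.tags & set(wanted)]

b = Bookmark("https://example.com/article", "An article", b"<html>...</html>",
             tags={"desk_research", "client_x"})
print([x.title for x in filter_by_tags([b], {"client_x"})])
```

Different actions and workflows (sharing, desk research, reminders) could then hang off different tag filters.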

When I look at the available open source bookmarking tools I could self-host, I notice that the ability to save full pages/documents and the offline functionality are mostly the missing elements. So maybe I should try to glue something together from building blocks found elsewhere.

What do you use for bookmarking? How do you use bookmarks?

A summary overview of changes I made to this site, to make it more fully an IndieWeb hub / my core online presence.

Theme related tweaks

  • Created child theme of Sempress, to be able to change appearance and functions
  • Renamed comments to reactions (as they contain likes, reposts, mentions etc.)
    in the entry-footer template and the comments template
  • Removed h-card microformats, and put in a generic link to my about page for the author in the Sempress function sempress_posted_on. Without a link to the author, mentions show up as anonymous elsewhere.
  • Removed the sharing buttons I used (although they were GDPR compliant through the Shariff plugin); I felt they got in the way a lot.

Functionality related tweaks

Now that we’ve visited the Nürnberg IndieWeb Camp this weekend, Frank Meeuwsen and I are thinking about doing an IndieWeb Camp in the Netherlands sometime next spring. Likely in Utrecht, although if we find a good event to piggyback on elsewhere, it can be someplace else.

If you want to get involved with organising this, do ping me.

At IndieWeb Camp Nürnberg today I worked on changing the way my site displays webmentions. Like I wrote earlier, I would like for all webmentions to have a snippet of the linking article, so you get some context to decide if you want to go to that article or not.

It used to be that way in the past with pingbacks, but my webmentions get shown as “Peter mentioned this on ruk.ca”.

After hunting down where in my site this gets determined, I ended up in a file of my Semantic Linkbacks plugin, called class-linkbacks-handler.php. In this file I altered the “get_comment_type_excerpts” function (which sets the template for a webmention) and the “comment_text_excerpt” function, where that template gets filled. I also altered the maximum length of webmentions that are shown in their entirety. My solution takes a snippet from the start of the webmention; later I will change it to take a snippet from around the specific place where it links to my site. But at least I succeeded in changing this, and now know where to do it.
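The “snippet from around the place where it links to my site” idea could work roughly as follows: find the anchor pointing at my site, strip the source to plain text, and cut a window of text around the anchor’s visible text. The plugin itself is PHP, so this is just the logic sketched in Python, with a made-up example post.

```python
# Cut a plain-text snippet around the link back to my site.
import re

def snippet_around_link(html: str, my_url: str, radius: int = 80) -> str:
    # Find the anchor that links to my site and keep its visible text.
    m = re.search(r'<a\b[^>]*href="%s"[^>]*>(.*?)</a>' % re.escape(my_url),
                  html, re.S | re.I)
    # Strip all tags and collapse whitespace to get plain text.
    text = re.sub(r"\s+", " ", re.sub(r"<[^>]+>", " ", html)).strip()
    if not m:
        return text[:2 * radius]  # fall back to the start of the post
    anchor = re.sub(r"\s+", " ", re.sub(r"<[^>]+>", " ", m.group(1))).strip()
    pos = text.find(anchor)
    start, end = max(0, pos - radius), pos + len(anchor) + radius
    return ("…" if start else "") + text[start:end] + ("…" if end < len(text) else "")

post = ('<p>Some long intro about readers.</p>'
        '<p>I agree with <a href="https://www.zylstra.org/blog/x">what Ton wrote</a> '
        'about webmention display.</p>')
print(snippet_around_link(post, "https://www.zylstra.org/blog/x", radius=30))
```

The same windowing logic, ported to PHP inside comment_text_excerpt, would replace the current take-from-the-start approach.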

When the next update of this plugin takes place I will need to take care, as then my changes will get overwritten. But that too is less important for now.


The webmentions for this posting are now shown as a snippet from the source, below the sentence that was previously the only thing shown.