Last weekend during the Berlin IndieWeb Camp, Aaron Parecki gave a brief overview of where he, and the IndieWeb community, stand concerning the ‘social reader’. This is of interest to me because for as long as I have been reading RSS, I have been doing by hand what he described doing more automatically.

These are some notes I made watching the live stream of the event.

Compared to the algorithmic timelines of FB, Twitter and Instagram, that show you what they decide to show you, the Social Reader is about taking control: follow the things you want, in the order that you want.
RSS readers were and are like that. But RSS reading never went past linear reading of all the posts from all your feeds in reverse chronological order. No playing around with how these feeds are presented to you. And no possibility to take actions from within the reader based on the things you read (sharing, posting, bookmarking, flagging, storing etc.): there are no action buttons in your feed reader, other than mark as unread or archive.

In the IndieWeb world, publishing works well, Aaron said, but reading has been an issue (at least if it goes beyond reading a blog and commenting).
That’s why he built Monocle and Aperture. Aperture takes all kinds of feeds, RSS, JSON, Twitter, and even scripts pushing material to it. These are grouped in channels. Monocle is a reader on top of that, which presents those channels in a nice way. Then he added action buttons to it, like reply etc. Those actions you initiate directly in the reader, and they always post to your own site. The other already existing IndieWeb building blocks then send it to the original source of the item you’re responding to. See Aaron’s posting from last March with screenshots, “Building an IndieWeb Reader”, to get a feeling for how it all looks in practice.
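As I understand it, those action buttons boil down to Micropub requests to your own site. A minimal sketch of what a ‘reply’ amounts to (my illustration, not Monocle’s actual code; the endpoint URL, token and content are placeholders):

<?php
// A reader's "reply" button posts to your own Micropub endpoint;
// your site then sends a webmention to the original article.
$ch = curl_init('https://example.org/micropub'); // placeholder endpoint
curl_setopt_array($ch, [
  CURLOPT_POST           => true,
  CURLOPT_RETURNTRANSFER => true,
  CURLOPT_HTTPHEADER     => ['Authorization: Bearer YOUR_ACCESS_TOKEN'],
  CURLOPT_POSTFIELDS     => http_build_query([
    'h'           => 'entry',
    'in-reply-to' => 'https://example.com/original-post',
    'content'     => 'My reply, posted from the reader to my own site.',
  ]),
]);
curl_exec($ch);
curl_close($ch);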

The power of this set-up is that it separates the layers: how you collect material, how you work on that material, and how you present content. It looked great when Aaron demo’d it at IWC Nürnberg two weeks earlier.

For me, part of the actions I’d like to take are definitely outside the scope of my own website, or at the very least outside the public part of my website. See what I wrote about my ideal feed reader. Part of the automation of actions I’d want to point to different workflows on my own laptop, for instance to feed into desk research, material for client updates, and things like that.

I’m interested in running things like Aperture and Monocle locally, but a first step is exploring them in the form Aaron provides them to test drive. Aperture works fine. But I can’t yet get Monocle to work for me. This is, I guess, the same issue I ran into two weeks ago, with how my site doesn’t support sending authorisation headers.

As a first step to better understand the different layers of adding microformats to my site (what is currently done by the theme, what by plugins etc.), I decided to start with: what is supposed to go where?

I made a post-it map on my wall to create an overview for myself. The map corresponds to the front page of my blog.

Green is content, pink is h- elements, blue u- elements, and yellow p- elements, with the little square ones covering dt- elements and rel attributes. All this is based on the information provided on the microformats wiki (http://microformats.org/wiki/Main_Page), and not on what my site actually does. So the next step is a comparison of what I expect to be there based on this map with what is actually there. This map is also a useful step towards seeing how to add microformats to the U-design theme myself.
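To make the legend concrete, this is roughly the markup the map describes for a single posting on the front page (an illustrative sketch based on the microformats wiki, not my site’s actual output):

<article class="h-entry">
  <h2 class="p-name"><a class="u-url" href="https://www.example.org/a-post">Post title</a></h2>
  <time class="dt-published" datetime="2018-11-03">3 November 2018</time>
  <div class="e-content">The content of the posting…</div>
  <a class="p-author h-card" href="https://www.example.org">Author name</a>
</article>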

Mapping microformats on my site

Earlier this week I discussed microformats with Elmine. Microformats make your website machine readable, allowing other computers and applications to e.g. find my contact information, and the metadata of my postings.

It was a discussion that branched off a conversation on online representation and marketing. I currently use the Sempress theme on this blog as it does microformats pretty well as far as I can tell, but it doesn’t look all that nice. Previously I had used the Microformats plugin in a regular theme, but that didn’t work really well (the plugin is not at fault, it’s a best effort).

Ideally I’d like to add microformats to other sites I use, not just this blog. That means I’d like to add them to a generic theme like U-design, as I think it would be less effort to add microformats to U-design than to make Sempress look better and more generic for my sites. Elmine approached the creators of U-design, but microformats are not on their list of priorities. They do already support schema.org out of the box.

The steps I think I need to make:

  • Map out visually where I want to use which microformats, and how. [UPDATE: done]
  • Take a much closer look at the code of the existing Microformats plugin, as well as the functions.php file of the Sempress theme, to see how the first hooks into existing themes, and how the second shapes the microformats added to the html of the page (a sketch of that first mechanism follows below).
  • Determine if one or the other is usable with U-design as is, or alternatively which parts to re-use / adapt
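My current understanding of that first mechanism (a generic sketch of how a plugin can hook microformats into an existing theme, not the Microformats plugin’s literal code):

<?php
// WordPress themes print class lists via post_class() and body_class().
// A plugin can filter those lists and add microformats classes without
// touching the theme's templates.
add_filter( 'post_class', function ( $classes ) {
  $classes[] = 'h-entry'; // mark each posting as an h-entry
  return $classes;
} );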

From the recent posting on Mastodon and its currently missing long tail, I want to highlight a specific notion, and that’s why I am posting it here separately. It is the notion that tool usage having a long tail is a measure of distribution, and as such a proxy for networked agency. [A long tail is defined as the bottom 80% of certain things making up over 50% of a ‘market’. The 80% least sold books in the world make up more than 50% of total book sales. The 80% smallest Mastodon instances on the other hand account for less than 15% of all Mastodon users, so it’s not a long tail.]

To me being able to deploy and control your own tools (both technology and methods), as a small group of connected individuals, is a source of agency, of empowerment. I call this Networked Agency, as opposed to individual agency. Networked also means that running your own tool is useful in itself, and even more useful when connected to other instances of the same tool. It is useful for me to have this blog even if I am its only reader, but my blog is even more useful to me because it creates conversations with other bloggers, it creates relationships. That ‘more useful when connected’ is why distributed technology is important. It allows you to do your own thing while being connected to the wider world, but you’re not dependent on that wider world to be able to do your own thing.

Whether a technology or method supports a distributed mode is, in other words, an important feature to look for when deciding whether to use it. Another aspect is the threshold to adoption of such a tool. If it is too high, it is unlikely that people will use it, and the actual distribution will be very low, even if in theory the tool supports it. Looking at the distribution of usage is then a good measure of a tool’s success. Are more people using it individually or in small groups, or are more people using it in a centralised way? That is what a long tail describes: at least 50% of usage takes place in the 80% of smallest occurrences.

In June I spoke at State of the Net in Trieste about Networked Agency. One of the issues raised there in response was about scale, as in “what you propose will never scale”. I interpreted that as a ‘centralist’ remark, not a ‘distributed’ view, as it implied somebody specific would have to do the scaling. In response I wrote about the ‘invisible hand of networks‘:

“Every node in a network is a scaler, by doing something because it is of value to themselves in the moment, changes them, and by extension adding themselves to the growing number of nodes doing it. Some nodes may take a stronger interest in spreading something, convincing others to adopt something, but that’s about it. You might say the source of scaling is the invisible hand of networks.”

In part it is a pun on the ‘invisible hand of markets’, but it is also a bit of hand waving, as I didn’t actually have precise notions of how that would need to work at the time of writing. Thinking about the long tail that is missing in Mastodon, and thus about Mastodon not yet delivering the distributed social networking experience it is intended for, allows me to make the ‘invisible hand of networks’ a bit more visible, I think.

If we want to see distributed tools get more traction, that push really should not come from a central entity doing the scaling; it would create counter-productive effects. Most of the Mastodon promotion comes from the first few moderators, who as a consequence now run large de-facto centralised services, where 77% of all participants are housed on 0.7% (25 of over 3400) of servers. ‘In networks, smartness needs to be at the edges’, goes the adage, and that means that promoting adoption needs to come from those edges, not the core, to extend the edges, to expand the frontier. In the case of Mastodon that means the outreach needs to come from the smallest instances towards their immediate environment.

A long tail forming as the adoption pattern is then a good way to see whether broad distribution is being achieved.
Likely elements of promoting from the edges, which form the ‘invisible hand of networks’ doing the scaling, are I suspect:

  • Show and tell: how one instance of a tool has value to you, and how connected instances have more value
  • Being able to explain core concepts (distribution, federation, agency) in contextually meaningful ways
  • Being able to explain how you can discover others using the same tool, that you might want to connect to
  • Lower thresholds of adoption (technically, financially, socially, intellectually)
  • Reach out to groups and people close to you (geographically, socially, intellectually), that you think would derive value from adoption. Your contextual knowledge is key to adoption.
  • Help those you reach out to set up their own tools, or if that is still too hard, ‘take them in’ and allow them the use of your own tools (so they at least can experience if it has value to them, building motivation to do it themselves)
  • Document and share all you do. In Bruce Sterling’s words: it’s not experimenting if you’re not publishing about it.

An adoption-inducing setting: Frank Meeuwsen explaining his steps in leaving online silos like Facebook, Twitter, and doing more on the open web. In our living room, during my wife’s birthday party.

Today the 2.6 version of Mastodon has been released. It now has built-in support for “rel=me”, which allows verification: I can show on my Mastodon profile a link to my site, and prove that both are under my control.

Rel=me is something you add to a link on your own site, to indicate that the page or site you link to also belongs to you. The page you link to needs to link back to your site, making it reciprocal. This is machine readable, and allows others to establish that different pages are under the control of the same person or entity.

On my own site I use ‘rel=me’ in the about section in the right hand column. First, if you check the html source of my page, you’ll see that I say that this site (zylstra.org/blog) is my primary site, by making it the only link that has a ‘u-uid’ class (uid is unique id). It also has rel="me", meaning that the relationship I have with the linked site is: it is me:

class="u-url url u-uid uid" rel="me" href='https://www.zylstra.org/blog'

Further down in that About segment you find other links, to my Mastodon and Twitter profiles. If you look at those links you will see it says:

rel="me" href='https://m.tzyl.nl/@ton'

saying my Mastodon profile is also me, and similarly to say that a specific Twitter profile is also me (I maintain other Twitter profiles as well, but those are not me; they belong to my company etc.):

rel="me" href='https://twitter.com/ton_zylstra'

To close the loop and allow verification that this is true, both my Mastodon profile and my Twitter profile need to link back to my site in a way that machines can check. For Twitter that is easiest: it has a specific place in a user profile for a website address, like in the image below.

In Mastodon I could add multiple URLs to my profile, but there was no way to explicitly say that a specific link is the one that represents my online identity. Now I can add a rel=me link in my Mastodon profile, so that my website and my Mastodon profile link to each other in a verifiable way, proving both are under my control. As you can see in the image below, it was already available on a single instance for testing purposes (the green mark signifies verification with the linked site), and with today’s release it is available to all Mastodon instances.
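Boiled down, the verifiable loop is just two links pointing at each other (schematic; the first is the link on my site quoted above, the second is how Mastodon renders the website field in my profile):

<!-- on my site -->
<a rel="me" href="https://m.tzyl.nl/@ton">Mastodon</a>

<!-- on my Mastodon profile -->
<a rel="me" href="https://www.zylstra.org/blog">www.zylstra.org/blog</a>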

So how is verification of control over different pages by the same person useful? It can show that another Twitter profile with my name is not me, because there’s no two-way link between that profile and my site. If you have multiple rel=me references it becomes harder for others to fake specific parts of your online identity. Further, it allows additional functionality, like logging in on a different site using credentials from a site you control. It also makes network mapping and discovery possible. Site X links with rel=me to profile X on Twitter. There X follows Y, and Y’s profile says site Y is under her control. Now I know that the authors of site X and site Y are somehow connected. If I’m following site X, I may find it interesting to also regularly read site Y.

As soon as my Mastodon instance has been updated to the latest version, which will likely be sometime today, I will add the rel=me in my Mastodon profile, making the link between this site and that profile verifiable.

[UPDATE] It now works on my Mastodon instance:

[TL;DR: A long tail is needed for distributed technology to be sustainable, I think, otherwise it’s just centralisation and single points of failure in a different form. A long tail means the bottom 80% take over 50% of a market, and the top 20% under 50%. Mastodon currently has over 85% of its participants in the top 20% of instances, and it’s worse than that, as 77% of participants are in 0.7% of instances. Just 15% are in the bottom 80% of instances. There’s a power law distribution, but it’s not a long tail. What can Mastodon do to get there, and to sustainability?]

On 6 October 2016 Mastodon was launched, and its originator Eugen Rochko looks back in a blogpost on the journey of the past two years.

I joined on 7 April 2017, 6 months after its launch, at the Mastodon.cloud instance. I posted some messages for a month, then fell quiet for half a year. A few messages last March, and then I started using it more frequently last month, in the run-up to figuring out how to run Mastodon for myself (which for now means a hosted solution, but still aiming for running it from the home router). It’s now part of my daily information diet, but no guarantee yet it will last, although being certain I have ‘my half’ of the conversation on a domain I own helps a lot towards maintaining worthwhile exchanges.

Eugen’s blogpost is rightfully proud of what has been accomplished. It’s not yet proof of the sustainability of federated solutions though, as he suggests it is.

He shares a few interesting numbers about the usage of Mastodon. The median size of the 3460 known instances is 8 users. In total there are 1,627,557 registered accounts. The largest instance has 415,941 members, while the top 3 together have 52% of users, meaning numbers 2 and 3 average 215,194 accounts. The 25 largest instances have 77%, or 1,253,219 members, meaning that numbers 4-25 average 18,495 users. As the median is 8, the smallest 1730 instances have at most 8 × 1730 = 13,840 users between them. It also means that instances 26 through 1730 have at least 360,498 members, or an average of 211 each. This tells us there’s a Pareto power law distribution: the top 20% of instances hold at least 85% of users at the moment. That also means there is no long tail, just a stub that holds at most 15% of Mastodon users. For a long tail to exist, the smallest 80% of instances should account for over 50% of users, over three times more than the current number.
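For those who want to check, the arithmetic behind those deductions (a quick back-of-envelope sketch, using only the figures Eugen published):

<?php
$accounts  = 1627557; // total registered accounts
$instances = 3460;    // known instances
$largest   = 415941;  // largest instance

$top3     = 0.52 * $accounts;        // top 3 hold 52% ≈ 846,330
$avg2and3 = ($top3 - $largest) / 2;  // numbers 2 and 3: ≈ 215,194 each

$top25    = 0.77 * $accounts;        // top 25 hold 77% ≈ 1,253,219
$avg4to25 = ($top25 - $top3) / 22;   // numbers 4-25: ≈ 18,495 each

// The median instance size is 8, so the smallest half holds at most:
$smallestHalf = 8 * ($instances / 2);             // 13,840 users

// Which leaves at least this for instances 26 through 1730:
$middle    = $accounts - $top25 - $smallestHalf;  // ≈ 360,498
$avgMiddle = $middle / (1730 - 25);               // ≈ 211 each

// Long tail test: do the smallest 80% of instances hold over 50% of
// users? The top 25 (0.7%) alone hold 77%, so clearly not.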

As the purpose of Mastodon is distribution, where federation allows everyone to connect regardless of their instance (sort of like e-mail), I think Mastodon can only be deemed sustainable if there is a true long tail. Meaning that, while the number of users goes up, the number of instances should go up at a faster rate, so that over 50% of all Mastodon users end up on the 80% smallest or even individual instances. In the current numbers we should be most interested in the 50% of instances that now have 8 or fewer users, and find out what drives those instances, so we may have many many more of them. We should also think about what a bigger-to-smaller-instances funnel for members could look like, and not just leave it to chance. That the top 25 Mastodon instances, just 0.7% of the total, currently hold 77% of all users is very problematic from a sustainability perspective, because that level of concentration is completely at odds with the stated purpose of Mastodon: distribution.

Eugen Rochko in his anniversary posting points at a critical article from April 2017 in Mashable, implying that its critic has been proven definitively wrong. I disagree. While many of the ‘predictions’ in that article are indeed silly, it also contains a few hints as to where sustainability may be found. The critic doesn’t get federation (yet likely uses e-mail every day), and complains about discovery (yet is likely relieved not all his personal e-mail addresses are to be found in Google). If we can’t explain distribution and federation, and can’t or don’t communicate how discovery works in such a setting, then we won’t be able to make a long tail grow. For more people to adopt small or individual instances we need to bring the threshold for running your own instance way down, and then way down again: to the level of at most one click installing a script on any regular hosting service, and creating a first account.

Using open protocols, like ActivityPub which Mastodon supports, is key in getting more people out of walled gardens and silos, and on the open web. Tracking its adoption is a useful measure of success, but 2 years of existence is not a sign of sustainability at all. What Eugen Rochko has kicked off with Mastodon is valuable and very laudable, but we have barely started getting to where we need to be for it to stick.

Sebastiaan at IWC Nürnberg last weekend did some cool stuff visualising the feeds he follows, as well as finding a way to surface stuff from outside his feeds because people in his feeds talk about it or like it. That is very exciting to me, as it creates a peripheral view, and really puts your network to use as a filter. He follows up with a good posting on readers.

Towards the end of that posting there’s some discussion of how to combat feed overwhelm.
That, Sebastiaan, reminds me of what I wrote about my feed reading strategies in 2005 (take a look at the images there, they help in understanding the text that follows).

I think it is useful to think not just of what you yourself consume in terms of feeds, and how to optimise that, but also in terms of the feedback loops you need/want back to the authors of some of your feeds.

Your network is a filter, and a certain level of feedback is needed to be able to spot patterns that lift signals above the noise, the peripheral vision you described. Both individually and collectively. But too much feedback creates echo-chambers. So the overall quality of your network / network’s feeds and interaction is part of the equation in thinking about feed overwhelm. It introduces needs for alternating and deliberate phases of divergence and convergence, and being able to judge diversity and quality of your network.

It’s in that regard very important to realise that there’s a key factor not present in your feeds that is enormously useful for filtering: your own personal knowledge about the author of a feed. If you can tag feeds with what you know of their authors (e.g. coder, Berlin, Drupal), and with how you perceive the social distance between you and them (from significant other to total stranger), you can do even more visualising, by asking questions like “what are the topics that European front-end developers I know are excited about this week”, or by visualising what communities are talking about. Social distance is also a factor in dealing with overwhelm: I for instance read a handful of people important to me every day when they have posted, and others I skip if I don’t have time, which is why I group my feeds by social distance.
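A minimal sketch of what that could look like (a hypothetical data structure, nothing my current reader actually stores):

<?php
// Feeds tagged with author facts and a social distance
// (0 = significant other .. 5 = total stranger).
$feeds = [
  ['url' => 'https://example.org/feed', 'tags' => ['coder', 'berlin', 'front-end'], 'distance' => 2],
  ['url' => 'https://example.com/rss',  'tags' => ['coder', 'drupal'],              'distance' => 4],
];

// "What are front-end coders I know personally posting?" becomes:
$selection = array_filter($feeds, function ($feed) {
  return in_array('front-end', $feed['tags'], true) && $feed['distance'] <= 2;
});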

Finally, overwhelm is more likely if you approach feeds as drinking from a tap. But again, you know things that are not present in your feeds: current interests you have, questions you have, things you’re working on. A listener is more likely to hear the things that are close to them. This points less to a river-of-news approach, and more to an active interrogation of feeds, based on your personal ‘agenda’, at a time of your choosing.

Fear of missing out is not important, especially not when the feedback loops between authors that I mentioned above exist. If something is a signal of some sort, and not noise, it will bounce around your network-as-a-filter for a while, and is likely to still be there in some form when you next take a look. If it is important and you overlooked it, it will come up again when you look another time.

Also see my posting about my ideal feedreader, from a few months ago.

At IndieWeb Camp Nürnberg today I worked on changing the way my site displays webmentions. Like I wrote earlier, I would like for all webmentions to have a snippet of the linking article, so you get some context to decide if you want to go to that article or not.

It used to be that way with pingbacks, but my webmentions get shown as “Peter mentioned this on ruk.ca”.

After hunting down where in my site this gets determined, I ended up in a file of my Semantic Linkbacks plugin, called class-linkbacks-handler.php. In this file I altered the “get_comment_type_excerpts” function (which sets the template for a webmention), and the “comment_text_excerpt” function, where that template gets filled. I also altered the maximum length of webmentions that are shown in their entirety. My solution takes a snippet from the start of the webmention; later I will change it to take a snippet from around the specific place where it links to my site. But at least I succeeded in changing this, and now know where to do it.
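Roughly the kind of change involved (a sketch from memory, not the plugin’s literal code; the template string and the use of wp_trim_words are my paraphrase):

<?php
// The template for a webmention, extended to carry an excerpt
// instead of only "%1$s mentioned this on %2$s".
$template = '%1$s mentioned this on %2$s: "%3$s"';

// Where the template gets filled, take a snippet from the start of
// the source post (later: from around the actual link to my site).
$excerpt = wp_trim_words( $source_content, 40, '…' );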

When the next update of this plugin takes place I will need to take care, as then my changes will get overwritten. But that too is less important for now.


The webmentions for this posting are now shown as a snippet from the source, below the sentence that was previously the only thing shown.

Task 1 now complete. My blog declared 16 h-cards, one for each time my name was mentioned as author under the 15 blogposts on my front page, and 1 in the side bar. That last one is the only one I want to have, so I wanted to remove those underneath blogposts.

To do that, I had to create a child theme of the theme I use, Sempress. I created it on my hosting server directly, not through WordPress.

In the original theme I hunted down the function used to show the author information for each posting, the sempress_posted_on function. I did this by viewing the various Sempress files in the WordPress internal theme editor. Then I copied it over to my child theme and changed it: I simply removed the bits that turned my name into a link, and the h-card elements declared as classes around it. There’s no need to link to my author page here: I’m the only author, I don’t have a profile page, and if you look at the ‘author archive’ it is simply a list of all the postings on this site.
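For reference, the mechanics of that override (a sketch, assuming Sempress wraps its template functions in function_exists() checks, as parent themes commonly do; a child theme’s functions.php loads first, so its definition wins):

<?php
/*
 In the child theme's style.css header, which makes it a child of Sempress:
   Theme Name: Sempress Child
   Template:   sempress
*/

// In the child theme's functions.php: a stripped-down version of
// sempress_posted_on(), without the author link and h-card classes.
function sempress_posted_on() {
  printf(
    '<span class="dt-published published">%1$s</span> by %2$s',
    esc_html( get_the_date() ),
    esc_html( get_the_author() )
  );
}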

I also cleaned up my single remaining h-card, adding a “p-note” class so that the blurb becomes part of the h-card, and making sure it lists the e-mail addresses correctly now.

The child theme I created will be useful for changing the way webmentions are presented on my blog as well.

A great effect of spending a day in the same room with 20 or so geekily inclined others is that you hear about a lot of examples, tools and services. And geek is as geek does, I try them out on the spot. Today this helped me become aware that something is wrong on my server with the OAuth authentication I run. I thought it was working fine, as it is no problem to actually use it, for instance to log in with my own domain name at the IndieWeb wiki. But when interacting with my micropublishing endpoint not all goes well.

Today I noticed that:

  • When I try to post from micropublish.net, I can log in, but when I try to post I get an ‘unauthorized’ error
  • When I try to use the Omnibear Firefox add-on it authorises ok, but then endlessly tries to load the list of syndication targets
  • When I use Quill to post, it posts fine, but does not load the list of syndication targets

Those missing syndication targets (now that I understand what they are from today’s sessions) were what first caught my eye. Testing the micropublish endpoint on my server myself I got the correct response, but Quill turned out to get ‘unauthorized’ as the response to that request, just like micropublish.net did for posting.

The endpoint gives a correct response

In WordPress my IndieAuth plugin has a diagnostic tool, and running that, it turns out an authorisation header is not sent out.

Which seems to be causing the problems. Reading the links provided, it seemed that, as with XML-RPC, my hoster was actively blocking that header. [UPDATE: It is not, the header is just not available in the way the server currently runs PHP.] The result is exactly the same experience as I had with XML-RPC: it seems to be only half working (the ‘safe’ uses work, while the rest fails). There’s a workaround, renaming the headers that get sent out, and implementing that workaround is a thing for me to do tomorrow, to see if I can get around being unauthorised. [UPDATE: That workaround has not worked so far.]
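For context, the usual shape of that workaround (a sketch of the commonly suggested fix, not necessarily what my hoster needs): rewrite the header into an environment variable Apache does pass on, and check the places it may end up in PHP.

<?php
/*
 The commonly suggested lines for .htaccess:
   RewriteEngine On
   RewriteRule .* - [E=HTTP_AUTHORIZATION:%{HTTP:Authorization}]
*/

// On the PHP side, the header can then surface in either of these:
$auth = $_SERVER['HTTP_AUTHORIZATION']
  ?? $_SERVER['REDIRECT_HTTP_AUTHORIZATION']
  ?? null;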

We’re in a time where whatever is presented to us as discourse on Facebook, Twitter or any of the other platforms out there may or may not come from humans, bots, or someone or a group with a specific agenda, irrespective of what you say or respond. We’ve seen it at the political level, with outside influences on elections; we see it in things like Gamergate, and in critiques of the last Star Wars movie. It creates damage on a societal level, and it damages people individually. To quote Angela Watercutter, the author of the mentioned Star Wars article,

…it gets harder and harder to have an honest discussion […] when some of the speakers are just there to throw kerosene on a flame war. And when that happens, when it’s impossible to know which sentiments are real and what motivates the people sharing them, discourse crumbles. Every discussion […] could turn into a […] fight — if we let it.

Discourse disintegrates, I think, specifically when there’s no meaningful social context in which it takes place, nor social connections between speakers in that discourse. The effect stems not just from not really knowing who you’re conversing with, but more importantly, I think, from anyone on a general platform being able to bring themselves into the conversation, or worse, force themselves into it. Which is why you should never wade into newspaper comments, even though we all read them at times, because watching discourse crumble from the sidelines has a certain addictive quality. This can happen because participants themselves don’t control the setting of any conversation they are part of, and none of those conversations are limited to a specific (social) context.

Unlike in your living room, over drinks in a pub, or at a party with friends of friends of friends. There you know someone. Or if you don’t, you know them in that setting, you know their behaviour at that event thus far. All have skin in the game, as misbehaviour has immediate social consequences. Social connectedness is a necessary context for discourse, either stemming from personal connections, or from the setting of the place or event it takes place in. Online discourse often lacks both, discourse crumbles, entropy ensues. Without consequence for those causing the crumbling. Which makes it fascinating when missing social context is retroactively restored, outing the misbehaving parties, such as in the book I once bought by Tinkebell, in which she matches death threats she received with the senders’ very normal Facebook profiles.

Two elements are therefore needed, I find: one, determining who can be part of which discourse; two, control over the context of that discourse. They are point 2 and point 6 in my manifesto on networked agency.

  • Our platforms need to mimic human networks much more closely: our networks are never ‘all in one mix’ but a tapestry of overlapping and distinct groups and contexts. Yet centralised platforms put us all in the same space.
  • Our platforms also need to be ‘smaller’ than the group using them, meaning a group can deploy, alter, maintain and administrate a platform for their specific context. Of course you can still be a troll in such a setting, but you can no longer be one without a cost, as your peers can all act themselves and collectively.

This is unlike e.g. FB, where by design defending against trollish behaviour takes more effort than being a troll, which never carries a cost for the troll. There must, in short, be a finite social distance between speakers for discourse to be possible. Platforms that dilute that, or allow for infinite social distance, are where discourse can crumble.

This points to federation (a platform within the control of a specific group, interconnected with other groups doing the same), and decentralisation (individuals running a platform for one, and interconnecting those). Doug Belshaw recently wrote, in a post titled ‘Time to ignore and withdraw?‘, about how he first saw individuals running their own Mastodon instance as quirky and weird, until he read a blogpost by Laura Kalbag, where she writes about why you should run Mastodon yourself if possible:

    Everything I post is under my control on my server. I can guarantee that my Mastodon instance won’t start profiling me, or posting ads, or inviting Nazis to tea, because I am the boss of my instance. I have access to all my content for all time, and only my web host or Internet Service Provider can block my access (as with any self-hosted site.) And all blocking and filtering rules are under my control—you can block and filter what you want as an individual on another person’s instance, but you have no say in who/what they block and filter for the whole instance.

Similarly I recently wrote,

    The logical end point of the distributed web and federated services is running your own individual instance. Much as in the way I run my own blog, I want my own Mastodon instance.

I also do see a place for federation, where a group of people from a single context run an instance of a platform. A group of neighbours, a sports team, a project team, some other association, but always settings where damaging behaviour carries a cost, because social distance is finite and context defined, even if temporary or emergent.

In just over a week I will be joining the Nuremberg IndieWebCamp, together with Frank Meeuwsen. As I said earlier, like Frank, I’m wondering what I could be working on, talking about, or sharing at the event. Especially as the event is set up to not just talk but also build things.

So I went through my blogpostings of the past months that concerned the IndieWeb, and made a list of potential things. They are of varying feasibility and scope, so I can probably strike off quite a few, and should likely go for the simplest one, which could also be re-used as a building block for some of the less easy options. The list contains 13 things (does a collection of 13 things have a name, like ‘odd dozen’ or something? Yes it does: a baker’s dozen, see the comment by Ric below). They fall into a few categories: webmention related, RSS reader related, more conceptual issues, and hardware/software combinations.

  1. Getting Webmention to display the way I want, within the Sempress theme I’m using here. The creator of the theme, Matthias Pfefferle, may be present at the event. Specifically I want to get some proper quotes displayed underneath my postings, and also understand much better what webmention data is stored where, and how to manipulate it.
  2. Building a growing list of IndieWeb sites by harvesting successful webmentions from my server logs, and publishing that in a re-usable (micro-)format (so that you could slowly map the IndieWeb over time)
  3. Making it much easier for myself to blog from mobile, or mail to my blog, using the Micropub protocol, e.g. with the micropublish client.
  4. Diving into the TinyTinyRSS data structure to understand it better. First to be able to add tags to feeds (not articles), as per my wishlist for RSS reader functionality.
  5. Making basic visualisation possible on top of the TinyTinyRSS database, as a step towards a reading mode based on pattern detection
  6. Allowing better full-text search across TinyTinyRSS, to support the reading mode of searching material around specific questions I hold
  7. Adding machine translation to TinyTinyRSS, so I can diversify my reading, and compare an original to its translation on a post by post basis
  8. Visualising conversations across blogs, both for understanding the network dynamics involved and for discovery
  9. Digging up my old postings from 2003-2005 about my information strategies, and re-formulating them for networked agency and 2018
  10. Finding a way of displaying content (not just postings, but parts of postings) limited to a specific audience, using IndieAuth.
  11. Formulating my Networked Agency principles, along the lines of the IndieWeb principles, for ‘indietech’ and ‘indiemethods’
  12. Attempting to run FreedomBone on a Raspberry Pi, as it contains a range of tools, including GNU Social for social networking. (Don’t forget to bring a Raspberry Pi for it.)
  13. Automatically harvesting my Kindle highlights and notes, and storing them locally in a way I can re-use.

These are the options. Now I need to pick something that is actually doable with my limited coding skills, yet also challenges me to learn/do something new.

Previously I had tried to get GNU Social running on my own hosted domain as a way to interact with Mastodon. I did not get it to work, for reasons unclear to me: I could follow people on Mastodon, but would not receive their messages, nor would they see mine.

This morning I saw the message below in my Mastodon timeline.

It originates from Peter Rukavina’s own GNU Social install. So at least he got the ‘sending mentions’ part working. He is also able to receive my replies, as my responses show up underneath his original message. Including, it seems, ones whose visibility I limited.

Now I am curious to compare notes. Which version of GNU Social? Any tweaks? Does Peter receive my timeline? How do permissions propagate (I only let people follow me after I approve them)? And more. I notice that his URL structures differ from those in my GNU Social install, for instance.