Following my recent posting on Mastodon and its current lack of a long tail, I want to highlight a specific notion, which is why I am posting it here separately. This is the notion that tool usage having a long tail is a measure of distribution, and as such a proxy for networked agency. [A long tail here means that the bottom 80% of something makes up over 50% of a ‘market’. The 80% least sold books in the world make up more than 50% of total book sales. The 80% smallest Mastodon instances, on the other hand, account for less than 15% of all Mastodon users, so that is not a long tail].

To me being able to deploy and control your own tools (both technology and methods), as a small group of connected individuals, is a source of agency, of empowerment. I call this Networked Agency, as opposed to individual agency. Networked also means that running your own tool is useful in itself, and even more useful when connected to other instances of the same tool. It is useful for me to have this blog even if I am its only reader, but my blog is even more useful to me because it creates conversations with other bloggers, it creates relationships. That ‘more useful when connected’ is why distributed technology is important. It allows you to do your own thing while being connected to the wider world, but you’re not dependent on that wider world to be able to do your own thing.

Whether a technology or method supports a distributed mode is, in other words, an important feature to look for when deciding whether to use it. Another aspect is the threshold to adoption of such a tool. If it is too high, it is unlikely that people will use it, and the actual distribution will be very low, even if in theory the tool supports it. Looking at the distribution of usage of a tool is then a good measure of its success. Are more people using it individually or in small groups, or are more people using it in a centralised way? That is what a long tail describes: at least 50% of usage takes place in the 80% of smallest occurrences.
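
To make that check concrete, here is a minimal sketch (my own illustration, with made-up numbers) of testing whether a set of usage figures forms a long tail:

```python
def has_long_tail(sizes, bottom_share=0.8, usage_threshold=0.5):
    """True if the smallest `bottom_share` of occurrences account for
    more than `usage_threshold` of total usage."""
    ordered = sorted(sizes)                    # smallest occurrences first
    cutoff = int(len(ordered) * bottom_share)  # index separating bottom 80% from top 20%
    return sum(ordered[:cutoff]) / sum(ordered) > usage_threshold

# Made-up examples:
book_sales = [1] * 80 + [3] * 20           # bottom 80 titles sell 80 of 140 copies
print(has_long_tail(book_sales))           # True: a long tail
instance_sizes = [8] * 80 + [5000] * 20    # a few huge instances dominate
print(has_long_tail(instance_sizes))       # False: a power law without a long tail
```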

In June I spoke at State of the Net in Trieste, where I talked about Networked Agency. One of the issues raised there in response was about scale, as in “what you propose will never scale”. I interpreted that as a ‘centralist’ remark, and not a ‘distributed’ view, as it implied somebody specific would do the scaling. In response I wrote about the ‘invisible hand of networks‘:

“Every node in a network is a scaler, by doing something because it is of value to themselves in the moment, changes them, and by extension adding themselves to the growing number of nodes doing it. Some nodes may take a stronger interest in spreading something, convincing others to adopt something, but that’s about it. You might say the source of scaling is the invisible hand of networks.”

In part it is a pun on the ‘invisible hand of markets’, but it is also a bit of hand waving, as I didn’t actually have precise notions of how that would need to work at the time of writing. Thinking about the long tail that is missing in Mastodon, and thus about Mastodon not yet building the distributed social networking experience it is intended for, allows me to make the ‘invisible hand of networks’ a bit more visible, I think.

If we want to see distributed tools get more traction, that really should not come from a central entity doing the scaling. It would create counter-productive effects. Most of the Mastodon promotion comes from the first few moderators, who as a consequence now run large de-facto centralised services, where 77% of all participants are housed on 0.7% (25 of over 3400) of servers. In networks, smartness needs to be at the edges, goes the adage, and that means that promoting adoption needs to come from those edges, not the core, to extend the edges, to expand the frontier. In the case of Mastodon that means the outreach needs to come from the smallest instances towards their immediate environment.

The forming of a long tail as an adoption pattern is then a good way to see if broad distribution is being achieved.
Likely elements in promoting from the edge, which together form the ‘invisible hand of networks’ doing the scaling, are I suspect:

  • Show and tell: how one instance of a tool has value to you, and how connected instances have more value
  • Being able to explain core concepts (distribution, federation, agency) in contextually meaningful ways
  • Being able to explain how you can discover others using the same tool, that you might want to connect to
  • Lower thresholds of adoption (technically, financially, socially, intellectually)
  • Reach out to groups and people close to you (geographically, socially, intellectually), that you think would derive value from adoption. Your contextual knowledge is key to adoption.
  • Help those you reach out to set up their own tools, or if that is still too hard, ‘take them in’ and allow them the use of your own tools (so they at least can experience if it has value to them, building motivation to do it themselves)
  • Document and share all you do. In Bruce Sterling’s words: it’s not experimenting if you’re not publishing about it.

An adoption-inducing setting: Frank Meeuwsen explaining his steps in leaving online silos like Facebook, Twitter, and doing more on the open web. In our living room, during my wife’s birthday party.

[TL;DR: A long tail is needed for distributed technology to be sustainable, I think; otherwise it’s just centralisation and single points of failure in a different form. A long tail means the bottom 80% take over 50% of a market, and the top 20% under 50%. Mastodon currently has over 85% of its participants in the top 20% of instances, and it’s worse than that, as 77% of participants are on 0.7% of instances. Just 15% are on the bottom 80% of instances. There’s a power law distribution, but it’s not a long tail. What can Mastodon do to get there, and to sustainability?]

On 6 October 2016 Mastodon was launched, and its originator Eugen Rochko looks back in a blogpost on the journey of the past two years.

I joined on 7 April 2017, 6 months after its launch, at the Mastodon.cloud instance. I posted some messages for a month, then fell quiet for half a year. A few messages last March, and then I started using it more frequently last month, in the run-up to figuring out how to run Mastodon for myself (which for now means a hosted solution, but still aiming for running it from the home router). It’s now part of my daily information diet, but no guarantee yet it will last, although being certain I have ‘my half’ of the conversation on a domain I own helps a lot towards maintaining worthwhile exchanges.

Eugen’s blogpost is rightfully proud of what has been accomplished. It is not yet proof of the sustainability of federated solutions, though, as he suggests it is.

He shares a few interesting numbers about the usage of Mastodon. The median of the 3460 known instances is 8 users. In total there are 1.627.557 registered accounts. The largest instance has 415.941 members, and the top 3 together hold 52% of users, meaning that numbers 2 and 3 average 215.194 accounts each. The 25 largest instances hold 77%, or 1.253.219 members, meaning that numbers 4 to 25 average 18.495 users each. As the median is 8, the smallest 1730 instances have at most 8*1730 = 13.840 users between them. It also means that the instances ranked 26 to 1730 have at least 360.498 members, or an average of 211 each. This tells us there’s a Pareto-style power law distribution: the top 20% of instances hold at least 85% of users at the moment. That also means there is no long tail, just a stub that holds at most 15% of Mastodon users. For a long tail to exist, the smallest 80% of instances should account for over 50% of users, more than three times the current share.
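
For reference, the derived figures in the previous paragraph follow from the handful of published numbers; the sketch below is my own reconstruction of that arithmetic, not anything Eugen published:

```python
total_accounts = 1_627_557
instances      = 3_460
median_users   = 8
largest        = 415_941
top3_share, top25_share = 0.52, 0.77

top3  = total_accounts * top3_share            # ~846,330 accounts on the top 3
avg_2_and_3 = (top3 - largest) / 2             # ~215,194 each for numbers 2 and 3
top25 = total_accounts * top25_share           # ~1,253,219 accounts on the top 25
avg_4_to_25 = (top25 - top3) / 22              # ~18,495 each for numbers 4 to 25

smallest_1730_max = median_users * (instances // 2)   # at most 13,840 users below the median
middle = total_accounts - top25 - smallest_1730_max   # at least ~360,498 on instances 26-1730
avg_middle = middle / (1730 - 25)                     # ~211 each

# The 667 instances ranked 26-692 complete the top 20%; being the largest of the
# middle group they hold at least their pro-rata share of it.
top_20pct = top25 + (692 - 25) * avg_middle
print(top_20pct / total_accounts)              # ~0.86 -> bottom 80% holds at most ~14-15%
```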

As the purpose of Mastodon is distribution, where federation allows everyone to connect regardless of their instance (somewhat like e-mail), I think Mastodon can only be deemed sustainable if there is a true long tail. Meaning that while the number of users goes up, the number of instances should go up at a faster rate, so that over 50% of all Mastodon users end up on the 80% smallest, or even individual, instances. In the current numbers we should be most interested in the 50% of instances that now have 8 or fewer users, and find out what drives those instances, so we may get many, many more of them. We should also think about what a bigger-to-smaller-instances funnel for members could look like, not just leave it to chance. I think that the top 25 Mastodon instances, just 0.7% of the total, currently holding 77% of all users is very problematic from a sustainability perspective, because that level of concentration is completely at odds with the stated purpose of Mastodon: distribution.

Eugen Rochko in his anniversary posting points at a critical article in Mashable from April 2017, implying that that criticaster has now been proven wrong definitively. I disagree. While many of the ‘predictions’ in that article are indeed silly, it also contains a few hints as to where sustainability may be found. The criticaster doesn’t get federation (yet likely uses e-mail every day), and complains about discovery (yet is likely relieved that not all his personal e-mail addresses are to be found in Google). Yet if we can’t explain distribution and federation, and can’t or don’t communicate how discovery works in such a setting, then we won’t be able to make a long tail grow. For more people to adopt small or individual instances we need to bring the threshold for running your own instance way down, and then way down again. To the level of at most one click installing a script on any regular hosting service, and creating a first account.

Using open protocols, like ActivityPub which Mastodon supports, is key in getting more people out of walled gardens and silos, and on the open web. Tracking its adoption is a useful measure of success, but 2 years of existence is not a sign of sustainability at all. What Eugen Rochko has kicked off with Mastodon is valuable and very laudable, but we have barely started getting to where we need to be for it to stick.

Sebastiaan at IWC Nürnberg last weekend did some cool stuff with visualising the feeds he follows, as well as finding a way of surfacing stuff from outside his feeds because those in his feeds talk about it or like it. That is very exciting to me as it creates a peripheral view, and really puts your network to use as a filter. He follows up with a good posting on readers.

Towards the end of that posting there’s some discussion of how to combat feed overwhelm.
That, Sebastiaan, reminds me of what I wrote about my feedreading strategies in 2005 (take a look at the images there, they help in understanding the text that follows).

I think it is useful to think not just of what you yourself consume in terms of feeds, and how to optimise that, but also in terms of the feedback loops you need/want back to the authors of some of your feeds.

Your network is a filter, and a certain level of feedback is needed to be able to spot patterns that lift signals above the noise, the peripheral vision you described. Both individually and collectively. But too much feedback creates echo chambers. So the overall quality of your network and its feeds and interactions is part of the equation in thinking about feed overwhelm. It introduces the need for alternating and deliberate phases of divergence and convergence, and for being able to judge the diversity and quality of your network.

It’s in that regard very important to realise that there’s a key factor not present in your feeds that is enormously useful for filtering: your own personal knowledge about the author of a feed. If you can tag feeds with what you know of their authors (e.g. coder, Berlin, Drupal), and with how you perceive the social distance between you and them (from significant other to total stranger), you can do even more visualising by asking questions like “what are the topics that European front-end developers I know are excited about this week”, or by visualising what communities are talking about. Social distance is also a factor in dealing with overwhelm: I for instance read a handful of people important to me every day when they have posted, and others I don’t read if I don’t have time, and I therefore group my feeds by social distance.
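
As an illustration of what that could look like (a hypothetical sketch, not a feature of any existing reader I use), tagging each feed with author context and a social-distance bucket turns such questions into simple queries:

```python
from dataclasses import dataclass, field
from datetime import datetime, timedelta

@dataclass
class Feed:
    author: str
    distance: int                                 # Dunbar-style bucket: 12, 50, 150, 500, 999
    tags: set = field(default_factory=set)        # what I know about the author
    entries: list = field(default_factory=list)   # (published datetime, title) pairs

def ask(feeds, topic_tags, max_distance=150, days=7):
    """Recent entries by authors I actually know that match all topic tags."""
    since = datetime.now() - timedelta(days=days)
    return [(f.author, title)
            for f in feeds if f.distance <= max_distance and topic_tags <= f.tags
            for published, title in f.entries if published >= since]

# e.g. "what are European front-end developers I know excited about this week?"
# ask(my_feeds, {"front-end", "Europe"})
```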

Finally, overwhelm is more likely if you approach feeds as drinking from a tap. But again, you know things that are not present in your feeds: current interests you have, questions you have, things you’re working on. A listener hears the things that are close to them better. This points less to a river-of-news approach, and more to an active interrogation of feeds based on your personal ‘agenda’ at a time of your choosing.

Fear of missing out is not important, especially not when the feedback loops between authors that I mentioned above exist. If something is a signal of some sort, and not noise, it will bounce around your network-as-a-filter for a while, and is likely to still be there in some form when you next take a look. If it is important and you overlooked it, it will come up again when you look another time.

Also see my posting about my ideal feedreader, from a few months ago.

We’re in a time where whatever is presented to us as discourse on Facebook, Twitter or any of the other platforms out there may or may not come from humans, bots, or a person or group with a specific agenda, irrespective of what you say or how you respond. We’ve seen it at the political level, with outside influences on elections; we see it in things like Gamergate, and in critiques of the last Star Wars movie. It creates damage on a societal level, and it damages people individually. To quote Angela Watercutter, the author of the mentioned Star Wars article,

…it gets harder and harder to have an honest discussion […] when some of the speakers are just there to throw kerosene on a flame war. And when that happens, when it’s impossible to know which sentiments are real and what motivates the people sharing them, discourse crumbles. Every discussion […] could turn into a […] fight — if we let it.

Discourse disintegrates, I think, specifically when there’s no meaningful social context in which it takes place, nor social connections between the speakers in that discourse. The effect stems not just from the fact that you can’t or don’t really know who you’re conversing with, but I think more importantly from anyone on a general platform being able to bring themselves into the conversation, or worse, force themselves into it. Which is why you never should wade into newspaper comments, even though we all read them at times, because watching discourse crumble from the sidelines has a certain addictive quality. That this can happen is because participants themselves don’t control the setting of any conversation they are part of, and none of those conversations are limited to a specific (social) context.

Unlike in your living room, over drinks in a pub, or at a party with friends of friends of friends. There you know someone. Or if you don’t, you know them in that setting, you know their behaviour at that event thus far. All have skin in the game, as misbehaviour has immediate social consequences. Social connectedness is a necessary context for discourse, stemming either from personal connections, or from the setting of the place or event it takes place in. Online discourse often lacks both, discourse crumbles, entropy ensues. Without consequence for those causing the crumbling. Which makes it fascinating when missing social context is retroactively restored, outing the misbehaving parties, such as in the book I once bought by Tinkebell in which she matches the death threats she received against the senders’ very normal Facebook profiles.

Two elements are therefore needed, I find: one, determining who can be part of which discourse, and two, control over the context of that discourse. They are points 2 and 6 in my manifesto on networked agency.

  • Our platforms need to mimic human networks much more closely: our networks are never ‘all in one mix’ but a tapestry of overlapping and distinct groups and contexts. Yet centralised platforms put us all in the same space.
  • Our platforms also need to be ‘smaller’ than the group using it, meaning a group can deploy, alter, maintain, administrate a platform for their specific context. Of course you can still be a troll in such a setting, but you can no longer be one without a cost, as your peers can all act themselves and collectively.
  • This is unlike e.g. FB, where by design the cost of defending against trollish behaviour takes more effort than being a troll, and never carries a cost for the troll. There must, in short, be a finite social distance between speakers for discourse to be possible. Platforms that dilute that, or allow for infinite social distance, are where discourse can crumble.

    This points to federation (a platform within the control of a specific group, interconnected with other groups doing the same), and decentralisation (individuals running a platform for one, and interconnecting those). Doug Belshaw recently wrote in a post titled ‘Time to ignore and withdraw?’ about how he first saw individuals running their own Mastodon instance as quirky and weird, until he read a blogpost by Laura Kalbag in which she writes about why you should run Mastodon yourself if possible:

    Everything I post is under my control on my server. I can guarantee that my Mastodon instance won’t start profiling me, or posting ads, or inviting Nazis to tea, because I am the boss of my instance. I have access to all my content for all time, and only my web host or Internet Service Provider can block my access (as with any self-hosted site.) And all blocking and filtering rules are under my control—you can block and filter what you want as an individual on another person’s instance, but you have no say in who/what they block and filter for the whole instance.

    Similarly I recently wrote,

    The logical end point of the distributed web and federated services is running your own individual instance. Much as in the way I run my own blog, I want my own Mastodon instance.

    I also do see a place for federation, where a group of people from a single context run an instance of a platform. A group of neighbours, a sports team, a project team, some other association, but always settings where damaging behaviour carries a cost because social distance is finite and context defined, even if temporary or emergent.

    Previously I had tried to get GNU Social running on my own hosted domain as a way to interact with Mastodon. I did not get it to work, for reasons unclear to me: I could follow people on Mastodon, but would not receive their messages, nor would they see mine.

    This morning I saw the message below in my Mastodon timeline.

    It originates from Peter Rukavina’s own GNU Social install. So at least he got the ‘sending mentions’ part working. He is also able to receive my replies, as my responses show up underneath his original message. Including ones whose visibility I limited, it seems.

    Now I am curious to compare notes. Which version of GNU Social? Any tweaks? Does Peter receive my timeline? How do permissions propagate (I only let people follow me after I approve them)? And more. I notice that his URL structures are different from those in my GNU Social install for instance.

    After I moved my Mastodon account from a general server to a self-run instance last week, I was curious to see how many of the followers I had on the old account would make the move to my current Mastodon account. After all, the ‘cost of leaving’ is always a consideration when changing course in social media usage, in this case specifically the portability of your existing network. Last week I had 43 followers on the old account, and I now have 11 on my new account, so that is about 25%. Let’s see if it grows in the coming days. Likely some of the followers I had no longer use Mastodon. Another question is when I will reach the same number of followers again by engaging in new conversations.

    As I haven’t yet succeeded in getting Mastodon to run on a Raspberry Pi, nor in running a GNU Social instance that actually federates on my hosting package, I’ve opted for an intermediate solution to running my own Mastodon instance.

    Key in all this is satisfying three dimensions: control, flexibility and ease of use. My earlier attempts satisfy the control and flexibility dimensions, but as I have a hard time getting them to work, they do not satisfy the ease of use dimension yet.

    At the same time I did not want to keep using Mastodon on a generic server much longer, as it builds up a history there which with every conversation ups the cost of leaving.

    The logical end point of the distributed web and federated services is running your own individual instance. Much as in the way I run my own blog, I want my own Mastodon instance.

    Such an individual instance needs to be within my own scope of control. This means having it at a domain I own, and being able to move everything to a different server at will.

    There is a hoster, Masto.host run by Hugo Gameiro, who provides Mastodon hosting as a monthly subscription. As it allows me to use my own domain name, and provides me with admin privileges for the Mastodon instance, this is a workable solution. When I succeed in getting my own instance of Mastodon running on the Raspberry Pi, I can simply move the entire instance at Masto.host to it.

    Working with Hugo at Masto.host was straightforward. After I registered for the service, Hugo got in touch with me to ensure the DNS settings on my own domain were correct, and shortly afterwards everything was up and running.
    Frank Meeuwsen, who started using Masto.host last month, kindly wrote up a ‘moving your Mastodon account’ guide on his blog (in Dutch). I followed most of that, to ensure a smooth transition.

    Using Mastodon? Do follow me at https://m.tzyl.nl/@ton.

    Screenshots of my old Mastodon.cloud account, and my new one on my own domain. And the goodbye and hello world messages from both.

    In the past few days I tried a second experiment to run my own Mastodon instance. Both to actually get a result, but also to learn how easy or hard it is to do. The first round I tried running something on a hosted domain. This second round I tried to get something running on a Raspberry Pi.

    The Raspberry Pi is a 35 euro computer, making it very useful for stand-alone solutions or as a cheap hardware environment to learn things like programming.

    Installing Debian Linux on the Raspberry Pi

    I found this guide by Wim Vanderbauwhede, which describes installing both Mastodon and Pleroma on a Raspberry Pi 3. I ordered a Raspberry Pi 3 and received it earlier this week. Wim’s guide points to another guide on how to install Ruby on Rails and PostgreSQL on a Raspberry Pi. That link however was dead, and the website offline, but archive.org had stored several snapshots, which I saved to Evernote.

    Installing Ruby on Rails went fine using the guide, as did installing PostgreSQL. Then I returned to Wim’s guide, which now points to the Mastodon installation guide. This is where the process currently fails for me: I can’t add the Ubuntu repositories mentioned, nor the node.js one.

    So for now I’m stalled. I’ll try to get back to it later next week.

    Last week the 2nd annual Techfestival took place in Copenhagen. As part of it there was a 48-hour think tank of 150 people (the ‘Copenhagen 150’), looking to build the Copenhagen Catalogue as a follow-up to last year’s Copenhagen Letter, of which I am a signatory. Thomas, initiator of the Techfestival, had invited me to join the CPH150, but I had to decline the invitation because of previous commitments I could not reschedule. I’d have loved to contribute however, as the event’s, and even more the think tank’s, concerns are right at the heart of my own. My concept of networked agency and the way I think about how we should shape technology to empower people in different ways run parallel to how Thomas described the purpose of the CPH150 48-hour think tank at its start last week.

    For me the unit of agency is the individual plus a group of meaningful relationships in a specific context, a networked agency. The power to act towards meaningful results and change lies in that group, not in the individual. The technology and methods that such a group deploys need to be chosen deliberately, and those tools need to be fully within the scope of the group itself: to control, alter, extend, tinker with, maintain, share, etc. Such tools therefore need very low adoption thresholds. Tools also need to be useful on their own, and even better when federated with other instances of those tools, so that knowledge and information, learning and experimentation can flow freely, yet can still take place locally in the (temporary) absence of such wider (global) connections. Our current internet silos such as Facebook and Twitter clearly do not match this description. But most other technologies aren’t shaped along those lines either.

    As Heinz remarked earlier, musing about our unconference, effective practices cannot be separated from the relationships in which you live. I added that the tools (both technology and methods) likewise cannot be meaningfully separated from the practices. Just as in those relationships you cannot fully separate the hyperlocal, the local, the regional and the global, due to the many interdependencies and the complexity involved: what you do has wider impact, and what others do, and global issues, express themselves in your local context too.

    So the CPH150 think tank effort to create a list of principles that takes a human and her relationships as the starting point for thinking about how to design tools, and how to create structures, institutions and networks, fits right in with that.

    Our friend Lee Bryant has a good description of how he perceived the CPH150 think tank, and what he shared there. Read the whole thing.

    Meanwhile the results are up: 150 principles called the Copenhagen Catalogue, beautifully presented. You can become signatory to those principles you deem most valuable to stick to.

    At our birthday unconference STM18 last week, Frank gave a presentation (PDF) on running your own website and social media tools separate from commercial silos like Facebook, Twitter etc. Collected under the name IndieWeb (i.e. the independent web), this is basically what used to be the default before we welcomed the tech companies’ silos into town. The IndieWeb never went away of course; I’ve been blogging in this exact same space for 16 years now, and ran a personal website for just under a decade before that. For broader groups to take their data and their lives out of silos, however, easy options out and low-threshold replacement tools are required.

    One of the silos to replace is Twitter. There are various other tools around, like Mastodon. What they have in common is that they are not run by a single company: anyone can run a server, and those servers federate, i.e. all work together, so that if I am on server 128 and you are on server 512, our messages still arrive in the right spot.

    I’ve been looking at running a Mastodon instance, or something similar, myself for a while. Because yes, there are more Mastodon servers (I have accounts on mastodon.cloud and on mastodon.nl), but I know even less about who runs them and about their tech skills, attitudes or values than I know about Twitter. I’ve just exchanged a big silo for a smaller one. The obvious logical endpoint of thinking about multiple instances or servers is that instances should be individual, or based on existing groups that have some cohesion. More or less like e-mail, which is also a good analogy to keep in mind when trying to understand Mastodon account names.
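
Under the hood the e-mail analogy holds up quite well: an account name is user@instance, and Mastodon-compatible servers use WebFinger to find out where an account lives. A minimal sketch (my own illustration; the handle and the returned URL are just examples):

```python
import json
import urllib.parse
import urllib.request

def resolve_handle(handle):
    """Resolve a federated handle such as 'ton@m.tzyl.nl' to its profile URL via WebFinger."""
    user, domain = handle.lstrip("@").split("@")
    resource = urllib.parse.quote(f"acct:{user}@{domain}")
    url = f"https://{domain}/.well-known/webfinger?resource={resource}"
    with urllib.request.urlopen(url) as response:
        data = json.load(response)
    # The 'self' link points at the account's ActivityPub actor document on its home instance
    for link in data.get("links", []):
        if link.get("rel") == "self":
            return link.get("href")
    return None

# resolve_handle("ton@m.tzyl.nl")
```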

    Ideally, running a Mastodon instance would be something you do yourself, with at most your household members in it. Or maybe you run one for a specific social context. So how easy is it to run Mastodon myself?

    Not easy.

    I could deploy it on my own VPS. But maintaining a VPS is rather a lot of work. And I would need to find out if I run the right type of operating system and other packages to be able to do it. Not something for everyone, nor for me without setting aside some proper time.

    Or I could spin up a Mastodon instance at Amazon’s server parks. That seems relatively easy to do, requiring a manageable list of mouse clicks. It doesn’t really fit my criteria though, even if it looks like a relatively quick way to at least have my own instance running. It would take me out of Twitter’s software silo, but not out of Amazon’s hardware silo. Everything would still be centralised on a US server, likely right next to the ones Twitter is using. Meaning I’d have more control over my own data, but not be bringing my stuff ‘home’.

    Better already is something like Masto.host, run by a volunteer named Hugo Gameiro who’s based in Portugal. It provides ease of use in terms of running your own instance, which is good, but leaves open issues of control and flexibility.

    So I’d like a solution that can either run on a package with my local hosting provider, or run on cheap hardware like a Raspberry Pi connected to my home router. I’d prefer the latter, but for now I am looking to learn how easy the former is.

    Mastodon and similar tools like Pleroma require various system components my hosting provider isn’t providing, nor likely willing to provide. Like many other hosters they do have a library of scripts you can automatically install with all the right dependencies and settings. The ‘social media’ section doesn’t mention Mastodon or any other ‘modern’ variety, but it does list GNU Social and its predecessor StatusNet. GNU Social uses the same protocols as Mastodon, OStatus and ActivityPub, so it should be able to communicate with Mastodon.

    I installed it and created an account for myself (with myself as administrator). Then I tried to find ways to federate with Mastodon instances. The interface is rather dreadful, and none of the admin settings seemed to hint at anything beyond the GNU Social instance itself; no mention of anything like federation.

    The interface of GnuSocial

    However, in my profile a button labelled “+remote” popped up, and through that I can connect to other people on other instances, such as the people I am already connected to on Mastodon. I did that, and it nicely links to their profiles. But none of their messages show up in my stream. And even though it looks like I can send messages to them from my GNU Social instance, as I can do things like @someotheruser, they don’t seem to arrive. So if I am indeed sending something, there’s no one listening at the other end.

    I did connect to others externally

    And I can send messages to them, although they do not seem to arrive

    So that leaves a number of things to explore as next steps. Also, in conversation with Maarten on Mastodon I noticed that I need to express better what I’m after. Something for another posting. To be continued.

    Elmine says this about the difficulty of describing her feelings about having almost 70 guests, friends, family, clients, peers, neighbours, spend two days in our home. Where the youngest was 8 weeks, the oldest 80 years. Where the shortest trip made was from right next door, and the furthest from Canada and Indonesia, and the rest from somewhere in between:

    I try to find words to describe what happened the past few days, but everything I write down feels incomplete and abstract. How do you put into words how much it means to you that friends travel across the world to attend your birthday party? That you can celebrate a new year in life with friends you haven’t been able to meet for four years (or longer)? Whose lives have changed so drastically in those years, including my own, but still pick up where you left the conversation all those years before? How can I describe how much it means to me to be able to connect all those people Ton and I collected in our lives, bring them together in the same space and for all of them to hit it off? That they all openly exchanged life stories, inspired each other, geeked out together, built robots together?

    It was an experience beyond words. It was, yet again, an epic birthday party.

    It also extends to the interaction we had with those who could not attend, because the invitation and response also trigger conversations about how other people are doing and what is going on in their lives.

    I completely share Elmine’s sense of awe.

    Our event meant bringing together some 45 people. They all know at least one of us two, but mostly don’t know each other. Some type of introduction is therefore useful, but you don’t want to take much time out of the day itself for it, as often intro-rounds are dreary and meaningless exercises that sap energy and of which you don’t remember much immediately after. So we’ve aimed for our events to have a first activity that is also an intro-round, but serves a bigger purpose for the event.

    Previously we’ve done 1-on-1 intro conversations that also produced a hand drawn map of connections or of skills and experiences in the group, to be re-used to find the right people for subsequent sessions. We’ve done groups of 5 to 6 to create Personas, as the first step of the design process to make something yourself. This time we settled on an idea of Elmine, to do what can best be described as Anecdote Circles Lite. Anecdote circles are a process to elicit experiences and stories from a group as they reveal implicit knowledge and insights about a certain topic (PDF). You group people together and prompt them with one or more questions that ask about specific occasions that have strong feelings attached to it. Others listen and can write down what stands out for them in the anecdote shared.

    The starting point of the unconference theme ‘Smart Stuff That Matters’ was our move to Amersfoort last year. It means getting to know, finding your way in, and relating to a new house, a different neighbourhood, a different city. And doing that in the light of what you need to feel at home and supported in the new environment. But in a broader light you can use the same questions to take a fresh look at your own environment, and make it ‘smarter’ at making you feel at home and supported. Our opening exercise was shaped to nudge the participants along the same path.

    In my opening remarks, after we sang a birthday song for Elmine together, I sketched our vision for the event much as in the previous paragraph. Then I asked all participants to find 3 or 4 others they preferably did not know, and find a spot in the house or garden (inviting them to explore the house and garden on their own that way too, giving them permission to do so as it were). The questions to prompt conversation were “Think back to the last time you moved house and arrived in a new environment. What was most disappointing to you about your new place/life? What was pleasantly surprising to you about your new place/life?” With those questions and pen & paper everybody was off to their first conversations.


    The thoughts and observations resulting from the intro-round

    Judging by Peter’s description of it, it went well. It’s quoted here in full as it describes both the motivation for and the layeredness of the experience quite well. I take Peter’s words as proof the process worked as intended.

    The second highlight is an event that preceded Oliver’s talk, the “icebreaker” part of the day that led things off. I have always dreaded the “everybody introduce yourself” part of meetings, especially meetings of diverse people whose lives inevitably seem much more interesting than my own; this, thankfully, was dispensed with, and instead we were prompted to gather with people we didn’t yet know and to talk about our best and worst moves in life.

    What proceeded from this simple prompt was a rich discussion of what it’s like to live as an expat, how difficult it is to make friends as an adult, and the power of neighbourhood connections. Oliver and I were in a group with Heinz and Elja and Martyn, and we talked for almost an hour. I have no idea what any of the others in our group do for a living, but I know that Martyn mowed his lawn this week in preparation for a neighbourhood party, that Heinz lives in an apartment block where it’s hard to get to know his neighbours, and that Elja has lived in Hungary, the USA and Turkey, and has the most popular Dutch blog post on making friends.

    During the event Elja shared her adage that the best way to get to know people after moving to a new environment is to do something together (as opposed to just sitting down for coffee and conversation). It’s pleasantly recursive to see a statement like that emerge from a process designed to follow that very adage in the first place.

    I will transcribe all the post-its and post (some of) them later.

    Some images from activities-as-intro-rounds we used in previous editions:


    Persona creation / Using the hand drawn skills cards

    Drawing the Sociogram

    Drawing a map of connections, dubbed sociogram, between participants

    In the discussions during Smart Stuff That Matters last Friday, I mentioned a longtime demand I have of social media: the ability to have different levels of access, of presenting content, on my blog. Not in the shape of having accounts on my site with the corresponding overhead, but something more fluid, like the layers of an onion, corresponding to the social distance between me and a specific reader. Where I write an article that looks different to a random reader than it does to e.g. Peter or Frank. Maybe even marking up the content in a way that controls which specific parts of a posting are visible to whom. We mused whether IndieAuth might be useful here as a first step, as it at least spares me the maintenance of accounts.
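
To sketch what I mean (purely illustrative, nothing like this exists on my blog): each fragment of a posting could carry the widest audience allowed to see it, and rendering would filter by the reader's relationship to me, once something like IndieAuth has established who they are.

```python
# Hypothetical onion-layer markup: each fragment names the widest audience that may see it.
POST = [
    ("public",  "We hosted an unconference at our home last week."),
    ("friends", "Here are a few photos taken in the living room."),
    ("close",   "And some notes on how the week really went for us."),
]

LAYER = {"public": 0, "friends": 1, "close": 2}   # increasing social closeness

def render(post, reader_layer="public"):
    """Show only the fragments this reader's layer is allowed to see."""
    return "\n".join(text for layer, text in post if LAYER[layer] <= LAYER[reader_layer])

print(render(POST))            # a random visitor sees one paragraph
print(render(POST, "close"))   # a reader known as a close friend sees all three
```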

    “Do you have any diodes?” … is probably the most unlikely question I have ever been asked out of the blue at a birthday party. The answer, however, turned out to be yes, I did have two diodes. I didn’t think I did, but taking a look in the one box I suspected might have some electronic components in it proved me wrong.

    The diodes were needed to increase the strength of the scary noises an evil robot was emitting. This evil robot was being created just outside our front door, where the enormous Frysklab truck, containing a mobile FabLab, was completely filling the courtyard. Representing everything that is wrong and evil about some of the devices marketed as necessary for a ‘smart home’, the evil robot then got ritually smashed to pieces by Elmine, wielding a gigantic hammer named ‘The Unmaker’ that a colleague brought with him. That was the official closing act of our unconference “Smart Stuff That Matters”.

    Around all this our 40 or so guests, friends, family members, clients, colleagues, peers, were weaving a rich tapestry of conversations and deepening connections. Something that our friend Peter put into words extremely well. Elmine and I are in awe of the effort and time all who joined us have put into coming to our home and participating in our slightly peculiar way of celebrating birthdays. Birthday parties where evil robots, a hyperloop to send messages from the courtyard to the garden, mythical German bbq sausages, friendship, philosophy, web technology, new encounters and, yes, diodes are all key ingredients that help create a heady mix of fun, inspiration, connection, and lasting memories.

    Thank you all so much for making it so.


    Triggered by some of the previous postings on RSS, I started thinking about what my ideal set-up for RSS reading would be. Because maybe there’s a way to create that for myself.

    A description of how I approach my feeds, and what I would ideally like to be able to do, I already penned a decade ago, and it hasn’t really changed much.

    The basic outline is:

    • I think of feed subscriptions as subscribing to people. I don’t follow your blog, I follow and interact with you. I used to have a blogroll that reflected that, by showing the faces of people whose writing I read. Basically the web has always been my social network. In my feed reader every feed title is the name of the author, not the blog’s title.
      my blogroll in 2005, people’s faces, not site names
    • The feeds I subscribe to I group in folders by subjective social distance, roughly following Dunbar-style group sizes: the dozen closest to me, the 50, the 150, the 500 beyond that, and above that 999 for people I don’t have a direct connection with at all. So my wife’s blog feed is in folder a12, and if I’ve just come across your blog this week and we never met, your feed will be in e999. The Keep Track folder holds my own content feeds from various platforms.
      the folders in my current feedreader by social distance
    • There are three reading styles I’d like my reader to support, of which it only does one.
      • I read to see what is going on with people I know, by browsing through their writing from closer to further away, so from the a12 folder towards the e999 folder. This my reader supports, by way of allowing a folder structure for it
      • I read outside-in, looking at the general patterns in all the new postings of a day: what topics come up, what are people working on, what do they care about. This is not supported yet, other than scrolling through the whole thing. A quick overview of topics versus social distance would be useful here.
      • I read inside-out, where I have specific questions, ideas or topics on my mind and want to see if some of the people in my reader have been writing about it recently. This is not supported yet. A good way to search my feeds would be needed.
    • I would like to be able to tag feeds. So I can contextualise the author (coder, lives in Portugal, interested in privacy by design, works independently). This allows me to look at different groups of people across the social distance related folders. E.g. “what are the people I follow in Berlin up to this week, as I will be visiting in a few days?” “What are the current concerns in the IndieWeb community?” Ten years ago I visualised that as below
      Plotting contexts

      Social distances with community and multi-faceted contexts plotted on them

    • I would like to be able to pull in tags of postings and have full content search functionality. This would support my inside-out reading. “What is being said today in my feeds about that conference I didn’t go to?” “Any postings today on privacy by design?”
    • I think I’d like visual representations of which communities are currently most active, and for topics, like heat maps. Alerts on when the level of activity for a feed or a community or subsets of people changes would be nice too.
    • Actions should follow from the reader, such as saving an article, creating a to-do from it, bookmarking it, or sharing it in some channel. An ideal reader should support all those actions, or let me configure actions
    • [UPDATED OCTOBER 2018] From reading a posting by Peter Rukavina I realised I’d also like to have built-in machine translation.

    From the whole IndieWeb exploration of late, I realized that while no feedreader does all the above, it might be possible to build something myself. TinyTiny RSS seems a good starting point. It’s an open source tool you can run as your own instance. It comes with features such as filtering and auto-tagging that might fit my needs. It can be hosted on my own domain, and it has a database I then have back-end access to, to build features it doesn’t have itself (such as visualisations and specific sharing actions). It can also produce RSS feeds. It seems with TinyTiny RSS I could do all kinds of things to the RSS feeds I pull in on my server, and push the results out again as RSS feeds themselves. Those I could load into my regular reader, or republish etc.
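
The pull-filter-republish idea could look roughly like this (a minimal sketch using the feedparser library, with placeholder feed URLs and keywords; TinyTiny RSS would do the equivalent server-side with its own filters):

```python
import feedparser                        # pip install feedparser
from xml.sax.saxutils import escape

FEEDS = ["https://example.org/feed.xml", "https://example.net/rss"]   # placeholders
KEYWORDS = {"privacy", "indieweb"}

def matching_entries(feed_urls, keywords):
    """Pull feeds and keep entries whose title mentions any of the keywords."""
    for url in feed_urls:
        for entry in feedparser.parse(url).entries:
            if any(word in entry.title.lower() for word in keywords):
                yield entry

def as_rss(entries, title="Filtered feed"):
    """Republish a selection of entries as a minimal RSS 2.0 document."""
    items = "".join(f"<item><title>{escape(e.title)}</title>"
                    f"<link>{escape(e.link)}</link></item>" for e in entries)
    return (f'<?xml version="1.0"?><rss version="2.0"><channel>'
            f'<title>{escape(title)}</title>{items}</channel></rss>')

# print(as_rss(matching_entries(FEEDS, KEYWORDS)))
```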

    Now I need to find a bit of time to set it up and play with it.