Replied to Checking in on my social media fast by Ben Werdmüller
Three weeks ago, I decided to go dark on social media. ... It's one of the best things I've ever done. I thought I'd check in with a quick breakdown: what worked, and what didn't. Here we go.   Wh...

I recognise what Ben Werdmüller says about the withdrawal creating space to both read more long form and to write more myself. I also had the replacement dopamine cravings: looking up your blog’s statistics when the Facebook likes fall away. Indeed, as Ben suggests, I too removed the statistics from my website (by disabling JetPack; I never used Google Analytics anyway). Different from him, I never stopped using Twitter or LinkedIn, just cut back on Facebook, which I felt was the real time sink (also because neither Twitter nor LinkedIn were on my phone to begin with, and because I use Twitter very differently from how I used Facebook). Going completely ‘dark’ on social media is also about privilege, I feel. The crux is how conscious we are of our information strategies, how the tools we use support those strategies or not, and most importantly in the case of social media as a time sink: to what extent it’s the tools that shape our info diet, instead of the other way around.

Replied to Some quick quotes on #edu106 and the power of #IndieWeb #creativity #edtechchat #mb by Greg McVerry
....fun to figure out everything I wanted to do with my website,....gained a sense of voice...,...I’m so tired of all the endless perfection I see on social media......my relationship with technology changed....

After my initial posting on this yesterday, Greg shares a few more quotes from his students. It reminds me of the things both teachers and students said at the end of my 2008 project at Rotterdam University of Applied Sciences. There, a group of teachers explored how to use digital technology, blogs and the myriad of social web tools to both support their own learning and change their teaching. The sentiments expressed are similar, if you look at the quotes in the last two sections (change yourself, change your students) of my 2009 posting about it. What jumps out most for me is the sense of agency, the power that comes from discovering that agency.

As a long-time netizen it is easy to forget that for many people now online, their complete experience of the internet is within the web silos. I frequent silos too, but I’ve always kept a place well outside of them, for over two decades. When you’ve never ‘played outside’, building your own space beyond the silos can be an eye-opener. Greg McVerry pointed to the blog of one of his students, who described the experience of stepping outside the silos (emphasis mine):

The fact that I now have a place where I can do that, where I can publish my thoughts whenever I want in a place open for people to read and to not be afraid of doing so, is liberating. I’ve always wanted a space online to call my own. I’m so tired of all the endless perfection I see on social media. My space, “Life Chapter by Chapter” is real. It’s me, personified by a website. And though this post is not digitally enhanced in any way, I love it because it’s representative of the bottom line of what I’ve learned in EDU 106. I’m my own person on this site, I’m not defined by Instagram, Facebook, or Twitter. I can post what I want, when I want, how I want. It’s a beautiful thing.

That’s a beautiful thing, indeed. Maybe this is the bit that Frank Meeuwsen and I need to take as the key to the story when writing a book, as Elja challenged us today (in Dutch).

Alnwick Garden - Walled Garden
There’s a world outside the walled garden. (Photo by Gail Johnson, CC-BY-NC)

Some things I thought worth reading in the past days

  • A good read on how machine learning (ML) currently merely obfuscates human bias, by moving it into the training data and the code, to arrive at peace of mind through pretend objectivity. By claiming that it’s ‘the algorithm deciding’ you turn ML into a kind of digital alchemy. It introduced some fun terms to me, like fauxtomation and Potemkin AI: Plausible Disavowal – Why pretend that machines can be creative?
  • These new Google patents show how problematic the current smart home efforts are, including their precursors, the Alexa and Echo microphones in your house. They are stripping you of agency, not providing it. These particular ones also nudge you to treat your children much the way surveillance capitalism treats you: as a suspect to be watched, with relationships denuded of the subtle human capability to trust. Agency only comes from being in full control of your tools. Adding someone else’s tools (here not just Google’s but your health insurer’s, your landlord’s etc.) to your home doesn’t make it smart but a self-censorship promoting escape room. A fractal of the panopticon. We need to start designing more technology that is based on distributed use, not on a centralised controller: Google’s New Patents Aim to Make Your Home a Data Mine
  • An excellent article by the NYT about Facebook’s slide to the dark side. When the student dorm room excuse “we didn’t realise, we messed up, but we’ll fix it for the future” defence fails, and you weaponise your own data driven machine against its critics. Thus proving your critics right. Weaponising your own platform isn’t surprising but very sobering and telling. Will it be a tipping point in how the public views FB? Delay, Deny and Deflect: How Facebook’s Leaders Fought Through Crisis
  • Some takeaways from the article just mentioned, to keep top of mind when interacting with or talking about Facebook: FB knew very early on about being used to influence the US 2016 election and chose not to act. FB feared backlash from specific user groups and opted to unevenly enforce their terms of service / community guidelines. Cambridge Analytica is not an isolated abuse, but a concrete example of the wider issue. FB weaponised their own platform to oppose criticism: How Facebook Wrestled With Scandal: 6 Key Takeaways From The Times’s Investigation
  • There really is no plausible deniability for FB’s execs on their “in-house fake news shop”: Facebook’s Top Brass Say They Knew Nothing About Definers. Don’t Believe Them. So when you need to admit it, you fall back on the ‘we messed up, we’ll do better going forward’ tactic.
  • As Aral Balkan says, that’s the real issue at hand because “Cambridge Analytica and Facebook have the same business model. If Cambridge Analytica can sway elections and referenda with a relatively small subset of Facebook’s data, imagine what Facebook can and does do with the full set.”: We were warned about Cambridge Analytica. Why didn’t we listen?
  • [update] Apparently all the commotion is causing Zuckerberg to think FB is ‘at war’, with everyone it seems, which is problematic for a company that has as a mission to open up and connect the world, and which is based on a perception of trust. Also a bunker mentality probably doesn’t bode well for FB’s corporate culture and hence future: Facebook At War.

Frank writes about how, thirty years ago yesterday, on November 17th 1988, the Netherlands became the first country outside the USA to be connected to the NSF’s open internet (as opposed to the military-initiated ARPANET that academic institutions used until then). Two years earlier .nl had been created as the first ever country top level domain. This was the result of the work, and specifically the excellent personal connections to their US counterparts, of people at the CWI in Amsterdam, the national centre for mathematics and computer science. Because of those personal connections the Netherlands was connected very early on to the open internet, and it is still a major hub. Through that first connection Europe got connected as well, as the CWI was part of EUnet, the European network of academic institutions. A large chunk of European internet traffic still runs through the Netherlands as a consequence.

I went to university in the summer of 1988 and so had the opportunity to enjoy the fruits of the CWI’s work early on. From the start I became active in the student association Scintilla at my electronic engineering department at the University of Twente. Electronic engineering students had an advantage when it came to access to electronics and personal computers, and as a consequence we had very early connectivity. As a first-year student I was chairman of one of Scintilla’s many committees, and in that role I voted in late ’88 / early ’89 to spend 2,500 guilders (a huge sum in my mind then) on cables, plugs and three ethernet cards for the PCs we had in use. I remember how, on the 10th floor of the department building, other members very carefully connected the PCs to each other. It was the first LAN on campus not run by the University itself nor connected to the mainframe computing center. Soon after, that LAN was connected to the internet.

In my mind I’ve been online regularly since late 1989, through Scintilla’s network connections. I remember there was an argument with the faculty because we had started using a subdomain directly under the university’s domain, not under the faculty’s own subdomain. We couldn’t have done otherwise, because the faculty hadn’t even activated its subdomain yet. So we waited for them to get moving, under threat of losing funding if we didn’t comply. Most certainly I’ve been online on a daily basis since the moment I joined the Scintilla board in 1990, which by then had moved to the basement of the electronics department building. At first we shared one e-mail address, before running our own mail server. I used telnet a lot, and spent an entire summer (it must have been the summer of ’91, when I was a board member) chatting to two other students who had a summer job as sysadmins at the computer center of a Texas university. The prime perk of that job was that they could sit in air conditioning all summer, and play around with the internet connection. Usenet of course. Later Gopher menus, and then, 25 years ago, the web browser came along (which I at first didn’t see as a major change; after all, I already had all the connectivity I wanted).

So of those 30 years of open internet in the Netherlands, I’ve been online daily for 28 years for certain, and probably a year longer with every-now-and-then connectivity. First from the basement at university, then phoning into the university from home, then (from late ’96) with a fixed IP address through a private ISP (which meant I could run my own server, reachable whenever I phoned into the ISP), until the luxury we have now of a fibre optic cable into our house, delivering a 500Mbit/s two-way connection (we had a 1Gbit/s connection before the move last year, so we actually took a step ‘backwards’).

Having had daily internet access for 28 years, basically all of my adult life, has shaped both my professional and personal life tremendously. Professionally, because none of my past jobs nor my current work would have been possible, or even existed, without the internet. My very first paid job was setting up international data transmissions between an electronics provider, their factories, and the retail chains that sold their products. Personally it has been similar. Most of my every day exchanges are with people from all over the world, and the inspiring mix of people I may call friends, who for instance come to our birthday unconferences, I first met online. Nancy White’s husband and neighbours call them/us her ‘imaginary friends’. Many of our friends are from that ‘imaginary’ source, and over the years we have met at conferences, visited each other’s houses, and keep in regular touch. It never ceases to amaze.

To me the internet was always a network first, and technology second. The key affordance of the internet to me is not exchanging data or connecting computer systems, but connecting people. That the internet in its design principles is a distributed network, and rather closely resembles how human networks are shaped, is something we haven’t leveraged to its full potential yet, by far. Centralised services, like the current web silos, don’t embrace that fundamental aspect of the internet other than at the hardware level, so I tend to see them as growths more than an actualisation of the internet’s foremost affordance. We’ve yet to really embrace what human digital networks may achieve.

Because of that perspective, seeing the digital network as a human network, I am mightily pleased that the reason I have been able to be digitally connected for almost 30 years is first and foremost a human connection: the connection between Piet Beertema at CWI in Amsterdam and Rick Adams at the NSF in the USA, which resulted in the Netherlands coming online right when I started university. That human connection, between two people I’ve never met nor interacted with, essentially shaped the space in which my life is taking its course, which is a rather amazing thought.

This is good news from Flickr. Flickr is amending their changes, to ensure that Creative Commons licensed photos will not be deleted from free accounts that are over their limit. (via Jeremy Cherfas)

Flickr recently announced they would be deleting the oldest photos of free accounts that have more photos than the new limit of 1,000 images. This caused concern, as some of those free accounts might be old, disused accounts holding images with open licenses that are being used elsewhere. Flickr allows searching for images with open licenses, and makes it easy to embed those in your own online material. Removing old images might therefore break things, and many people called on Flickr to try and prevent that. And they are: Flickr is providing all public institutions and archives publishing photos to Flickr with a free Pro account, and will also not delete any Creative Commons licensed images that carried that license before 1 November 2018. (The cut-off date prevents you from keeping an unlimited free account by simply relicensing your photos, or uploading new photos with CC licenses only.)
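As I read the amended policy, the decision for any single photo on an over-the-limit free account comes down to a few checks. A minimal sketch of that reading (my interpretation, not Flickr’s actual rules or code; the field names are mine):

# Sketch of my reading of Flickr's amended policy, not Flickr's actual rules or code.
from datetime import date

CC_CUTOFF = date(2018, 11, 1)  # CC licenses applied before this date are safe
FREE_LIMIT = 1000              # new photo limit for free accounts

def at_risk_of_deletion(photo, account):
    """photo and account are illustrative dicts, e.g.
    photo   = {"cc_licensed": True, "cc_since": date(2014, 5, 1)}
    account = {"is_free": True, "is_public_institution": False, "photo_count": 2500}
    """
    if not account["is_free"] or account["is_public_institution"]:
        return False   # Pro accounts, institutions and archives keep everything
    if account["photo_count"] <= FREE_LIMIT:
        return False   # within the new free-account limit
    if photo["cc_licensed"] and photo["cc_since"] < CC_CUTOFF:
        return False   # openly licensed before 1 November 2018: not deleted
    return True        # otherwise the oldest photos over the limit may be removed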

Replied to Gab and the decentralized web by Ben Werdmüller
On one side, by creating a robust decentralized web, we could create a way for extremist movements to thrive. On another, by restricting hate speech, we could create overarching censorship that genuinely guts freedom of speech protections

I think this is a false dilemma, Ben.

I’d say that it would be great if those extremists would see using a distributed tool like Mastodon as the only remaining viable platform for them. It would not suppress their speech, but it would deny them the amplification they now enjoy by being very visible on mainstream platforms, which gives them the illusion that they are indeed mainstream. It will be much easier to convince, if at all needed, instance moderators not to federate with instances of those guys, reducing them ever more to their own bubble. They can spew hate amongst themselves for eternity, but without amplification it won’t thrive. I jotted down some thoughts on this earlier in “What does Gab’s demise mean for federation?”

From the recent posting on Mastodon and its current lack of a long tail, I want to highlight a specific notion, and that’s why I am posting it here separately. This is the notion that tool usage having a long tail is a measure of distribution, and as such a proxy for networked agency. [A long tail is defined as the bottom 80% of certain things making up over 50% of a ‘market’. The 80% least sold books in the world make up more than 50% of total book sales. The 80% smallest Mastodon instances, on the other hand, account for less than 15% of all Mastodon users, so it’s not a long tail.]

To me being able to deploy and control your own tools (both technology and methods), as a small group of connected individuals, is a source of agency, of empowerment. I call this Networked Agency, as opposed to individual agency. Networked also means that running your own tool is useful in itself, and even more useful when connected to other instances of the same tool. It is useful for me to have this blog even if I am its only reader, but my blog is even more useful to me because it creates conversations with other bloggers, it creates relationships. That ‘more useful when connected’ is why distributed technology is important. It allows you to do your own thing while being connected to the wider world, but you’re not dependent on that wider world to be able to do your own thing.

Whether a technology or method supports a distributed mode, in other words, is an important feature to look for when deciding to use it or not. Another aspect is the threshold to adoption of such a tool. If it is too high, it is unlikely that people will use it, and the actual distribution will be very low, even if in theory the tool supports it. Looking at the distribution of usage of a tool is then a good measure of its success. Are more people using it individually or in small groups, or are more people using it in a centralised way? That is what a long tail describes: at least 50% of usage takes place in the 80% of smallest occurrences.
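As a rough illustration of that measure, a minimal sketch of the long-tail test described above, assuming you have a list of per-instance (or per-group) user counts:

# Minimal sketch of the long-tail test described above: does the bottom 80% of
# instances (by size) together hold more than 50% of all users?
def has_long_tail(instance_sizes, tail_share=0.8, user_share=0.5):
    sizes = sorted(instance_sizes)             # smallest instances first
    tail_count = int(len(sizes) * tail_share)  # the smallest 80% of instances
    tail_users = sum(sizes[:tail_count])
    return tail_users / sum(sizes) > user_share

# A few huge instances dominating many tiny ones is a power law, not a long tail:
print(has_long_tail([400_000, 220_000, 210_000] + [8] * 1_700))  # False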

In June I spoke at State of the Net in Trieste, where I talked about Networked Agency. One of the issues raised there in response was about scale, as in “what you propose will never scale”. I interpreted that as a ‘centralist’ remark, and not a ‘distributed’ view, as it implied somebody specific would do the scaling. In response I wrote about the ‘invisible hand of networks‘:

“Every node in a network is a scaler, by doing something because it is of value to themselves in the moment, changes them, and by extension adding themselves to the growing number of nodes doing it. Some nodes may take a stronger interest in spreading something, convincing others to adopt something, but that’s about it. You might say the source of scaling is the invisible hand of networks.”

In part it is a pun on the ‘invisible hand of markets’, but it is also a bit of hand waving, as I didn’t actually have precise notions of how that would need to work at the time of writing. Thinking about the long tail that is missing in Mastodon, and thus about Mastodon not yet delivering the distributed social networking experience it is intended for, allows me to make the ‘invisible hand of networks’ a bit more visible, I think.

If we want to see distributed tools get more traction, that really should not come from a central entity doing the scaling. It will create counter-productive effects. Most of the Mastodon promotion comes from the first few moderators, who as a consequence now run large de facto centralised services, where 77% of all participants are housed on 0.7% (25 of over 3,400) of servers. ‘In networks, smartness needs to be at the edges’, goes the adage, and that means that promoting adoption needs to come from those edges, not the core: to extend the edges, to expand the frontier. In the case of Mastodon that means the outreach needs to come from the smallest instances, towards their immediate environment.

A long tail forming as an adoption pattern is then a good way to see whether broad distribution is being achieved.
Likely elements of promoting from the edge, which together form the ‘invisible hand of networks’ doing the scaling, are I suspect:

  • Show and tell: how one instance of a tool has value to you, and how connected instances have more value
  • Being able to explain core concepts (distribution, federation, agency) in contextually meaningful ways
  • Being able to explain how you can discover others using the same tool, that you might want to connect to
  • Lower thresholds of adoption (technically, financially, socially, intellectually)
  • Reach out to groups and people close to you (geographically, socially, intellectually), that you think would derive value from adoption. Your contextual knowledge is key to adoption.
  • Help those you reach out to set up their own tools, or if that is still too hard, ‘take them in’ and allow them the use of your own tools (so they at least can experience if it has value to them, building motivation to do it themselves)
  • Document and share all you do. In Bruce Sterling’s words: it’s not experimenting if you’re not publishing about it.

An adoption-inducing setting: Frank Meeuwsen explaining his steps in leaving online silos like Facebook, Twitter, and doing more on the open web. In our living room, during my wife’s birthday party.

[TL;DR: A long tail is needed for distributed technology to be sustainable I think, otherwise it’s just centralisation and single points of failure in a different form. A long tail means the bottom 80% take over 50% of a market, and the top 20% under 50%. Mastodon currently has over 85% of its participants in the top 20% of instances, and it’s worse than that as 77% of participants are in 0.7% of instances. Just 15% are in the bottom 80% of instances. There’s a power law distribution, but it’s not a long tail. What can Mastodon do to get there and to sustainability?]

On 6 October 2016 Mastodon was launched, and its originator Eugen Rochko looks back in a blogpost on the journey of the past two years.

I joined on 7 April 2017, 6 months after its launch, at the Mastodon.cloud instance. I posted some messages for a month, then fell quiet for half a year. A few messages last March, and then I started using it more frequently last month, in the run-up to figuring out how to run Mastodon for myself (which for now means a hosted solution, but I’m still aiming to run it from the home router). It’s now part of my daily information diet, but there’s no guarantee yet it will last, although being certain I have ‘my half’ of the conversation on a domain I own helps a lot towards maintaining worthwhile exchanges.

Eugen’s blogpost is rightfully proud of what has been accomplished. It’s not yet proof of the sustainability of federated solutions though, as he suggests it is.

He shares a few interesting numbers about the usage of Mastodon. The median of the 3,460 known instances is 8 users. In total there are 1,627,557 registered accounts. The largest instance has 415,941 members, while the top 3 together have 52% of users, meaning that numbers 2 and 3 average 215,194 accounts. The top 25 largest instances have 77% or 1,253,219 members, meaning that numbers 4-25 average 18,495 users. As the median is 8, the smallest 1,730 instances have at most 8*1730 = 13,840 users. It also means that the number 26 to number 1,730 instances have at least 360,498 members, or an average of 211. This tells us there’s a Pareto power law distribution: the top 20% of instances hold at least 85% of users at the moment. That also means there is no long tail, just a stub that holds at most 15% of Mastodon users. For a long tail to exist, the smallest 80% of instances should account for over 50% of users, or over three times more than the current number.
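For those who want to check that arithmetic, a small back-of-the-envelope calculation using only the aggregates Eugen published; the per-instance figures themselves aren’t public, so these are averages and bounds derived from those aggregates:

# Back-of-the-envelope check of the figures above, from the published aggregates only.
total_accounts = 1_627_557
instances = 3_460
largest = 415_941
top3_share, top25_share = 0.52, 0.77
median_size = 8

top3_users = round(total_accounts * top3_share)            # ~846,330 users
avg_no2_no3 = (top3_users - largest) // 2                   # ~215,194 each
top25_users = round(total_accounts * top25_share)           # ~1,253,219 users
avg_no4_to_no25 = round((top25_users - top3_users) / 22)    # ~18,495 each
smallest_half_max = median_size * (instances // 2)          # at most ~13,840 users
middle_users = total_accounts - top25_users - smallest_half_max  # at least ~360,498
avg_middle = middle_users // (instances // 2 - 25)          # ~211 each

print(avg_no2_no3, avg_no4_to_no25, smallest_half_max, middle_users, avg_middle)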

As the purpose of Mastodon is distribution, where federation allows everyone to connect regardless of their instance (sort of like e-mail), I think Mastodon can only be deemed sustainable if there is a true long tail. Meaning that, while the number of users goes up, the number of instances should go up at a faster rate, so that over 50% of all Mastodon users will be on the 80% smallest or even individual instances. In the current numbers we should be most interested in the 50% of instances that now have 8 or fewer users, and find out what drives those instances, so we may have many, many more of them. We should also think about what a bigger-to-smaller-instances funnel for members can look like, not just leave it to chance. I think that the top 25 Mastodon instances, which is just 0.7% of the total, currently having 77% of all users is very problematic from a sustainability perspective, because that level of concentration is completely at odds with the stated purpose of Mastodon: distribution.

Eugen Rochko in his anniversary posting points at a critical article from April 2017 in Mashable, implying that its author has been proven wrong definitively. I disagree. While many of the ‘predictions’ in that article are indeed silly, it also contains a few hints as to where sustainability may be found. The author doesn’t get federation (yet likely uses e-mail every day), and complains about discovery (yet is likely relieved that not all his personal e-mail addresses are to be found in Google). Yet if we can’t explain distribution and federation, and can’t or don’t communicate how discovery works in such a setting, then we won’t be able to make a long tail grow. For more people to adopt small or individual instances we need to bring the threshold for running your own instance way down, and then way down again. To the level of at most one click installing a script on any regular hosting service, and creating a first account.

Using open protocols, like ActivityPub which Mastodon supports, is key in getting more people out of walled gardens and silos, and on the open web. Tracking its adoption is a useful measure of success, but 2 years of existence is not a sign of sustainability at all. What Eugen Rochko has kicked off with Mastodon is valuable and very laudable, but we have barely started getting to where we need to be for it to stick.

Sebastiaan at IWC Nürnberg last weekend did some cool stuff visualising the feeds he follows, as well as finding a way to surface stuff from outside his feeds, because people in his feeds talk about it or like it. That is very exciting to me as it creates a peripheral view, and really puts your network to use as a filter. He follows up with a good posting on readers.

Towards the end of that posting there’s some discussion of ways to combat feed overwhelm.
That, Sebastiaan, reminds me of what I wrote about my feed reading strategies in 2005 (take a look at the images there, they help in understanding the text that follows).

I think it is useful to think not just of what you yourself consume in terms of feeds, and how to optimise that, but also in terms of the feedback loops you need/want back to the authors of some of your feeds.

Your network is a filter, and a certain level of feedback is needed to be able to spot patterns that lift signals above the noise, the peripheral vision you described. Both individually and collectively. But too much feedback creates echo-chambers. So the overall quality of your network / network’s feeds and interaction is part of the equation in thinking about feed overwhelm. It introduces needs for alternating and deliberate phases of divergence and convergence, and being able to judge diversity and quality of your network.

It’s in that regard very important to realise that there’s a key factor not present in your feeds that is enormously useful for filtering: your own personal knowledge about the author of a feed. If you can tag feeds with what you know of their authors (e.g. coder, Berlin, Drupal), and with how you perceive the social distance between you and them (from significant other to total stranger), you can do even more visualising, by asking questions like “what are the topics that European front-end developers I know are excited about this week”, or by visualising what communities are talking about. Social distance is also a factor in dealing with overwhelm: I for instance read a handful of people important to me every day when they have posted, while others I don’t read if I don’t have time, and I therefore group my feeds by social distance.
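To make that concrete, a rough sketch of what such feed metadata could look like and the kind of questions it lets you ask; the field names and example data are made up for illustration:

# Rough sketch of feeds annotated with what you know about their authors.
# Field names and example data are made up for illustration.
feeds = [
    {"url": "https://example.org/alice/feed", "tags": ["coder", "Berlin", "Drupal"],
     "distance": 1},   # 1 = significant other / close friend ... 5 = total stranger
    {"url": "https://example.org/bob/feed", "tags": ["front-end", "Lisbon"],
     "distance": 3},
    {"url": "https://example.org/carol/feed", "tags": ["front-end", "Berlin"],
     "distance": 5},
]

# Group by social distance: read the closest people daily, the rest when time permits.
daily = [f for f in feeds if f["distance"] <= 2]
when_time_permits = [f for f in feeds if f["distance"] > 2]

# Interrogate the list, e.g. front-end developers within a known social distance:
front_end_known = [f["url"] for f in feeds
                   if "front-end" in f["tags"] and f["distance"] <= 3]
print(len(daily), len(when_time_permits), front_end_known)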

Finally, overwhelm is more likely if you approach feeds as drinking from a tap. But again, you know things that are not present in your feeds: the current interests you have, questions you have, things you’re working on. A listener is more likely to hear the things that are close to them. This points less to a river-of-news approach, and more to an active interrogation of feeds based on your personal ‘agenda’, at a time of your choosing.

Fear of missing out is not important, especially not when the feedback loops between authors that I mentioned above exist. If something is a signal of some sort, and not noise, it will bounce around your network-as-a-filter for a while, and is likely to still be there in some form when you next take a look. If it is important and you overlooked it, it will come up again when you look another time.

Also see my posting about my ideal feedreader, from a few months ago.

We’re in a time where whatever is presented to us as discourse on Facebook, Twitter or any of the other platforms out there may or may not come from humans, bots, or someone or a group with a specific agenda, irrespective of what you say or respond. We’ve seen it at the political level, with outside influences on elections, and we see it in things like Gamergate and in critiques of the last Star Wars movie. It creates damage on a societal level, and it damages people individually. To quote Angela Watercutter, the author of the mentioned Star Wars article,

…it gets harder and harder to have an honest discussion […] when some of the speakers are just there to throw kerosene on a flame war. And when that happens, when it’s impossible to know which sentiments are real and what motivates the people sharing them, discourse crumbles. Every discussion […] could turn into a […] fight — if we let it.

Discourse disintegrates, I think, specifically when there’s no meaningful social context in which it takes place, nor social connections between speakers in that discourse. The effect stems not just from the fact that you can’t or don’t really know who you’re conversing with, but, I think more importantly, from anyone on a general platform being able to bring themselves into the conversation, or worse, force themselves into it. Which is why you should never wade into newspaper comments, even though we all read them at times, because watching discourse crumble from the sidelines has a certain addictive quality. That this can happen is because participants themselves don’t control the setting of any conversation they are part of, and none of those conversations are limited to a specific (social) context.

Unlike in your living room, over drinks in a pub, or at a party with friends of friends of friends. There you know someone. Or if you don’t, you know them in that setting: you know their behaviour at that event thus far. All have skin in the game, as misbehaviour has immediate social consequences. Social connectedness is a necessary context for discourse, either stemming from personal connections, or from the setting of the place or event it takes place in. Online discourse often lacks both, so discourse crumbles and entropy ensues, without consequence for those causing the crumbling. Which makes it fascinating when missing social context is retroactively restored, outing the misbehaving parties, as in the book I once bought by Tinkebell, in which she matches the death threats she received against the senders’ very normal Facebook profiles.

Two elements are therefore needed, I find: one, determining who can be part of which discourse, and two, control over the context of that discourse. They are points 2 and 6 in my manifesto on networked agency.

  • Our platforms need to mimic human networks much more closely: our networks are never ‘all in one mix’ but a tapestry of overlapping and distinct groups and contexts. Yet centralised platforms put us all in the same space.
  • Our platforms also need to be ‘smaller’ than the group using it, meaning a group can deploy, alter, maintain, administrate a platform for their specific context. Of course you can still be a troll in such a setting, but you can no longer be one without a cost, as your peers can all act themselves and collectively.
  • This is unlike e.g. FB, where by design the cost of defending against trollish behaviour takes more effort than being a troll, and never carries a cost for the troll. There must, in short, be a finite social distance between speakers for discourse to be possible. Platforms that dilute that, or allow for infinite social distance, are where discourse crumbles.

This points to federation (a platform within control of a specific group, interconnected with other groups doing the same), and decentralisation (individuals running a platform for one, and interconnecting them). Doug Belshaw recently wrote in a post titled ‘Time to ignore and withdraw?’ about how he first saw individuals running their own Mastodon instance as quirky and weird. Until he read a blogpost by Laura Kalbag in which she writes about why you should run Mastodon yourself if possible:

Everything I post is under my control on my server. I can guarantee that my Mastodon instance won’t start profiling me, or posting ads, or inviting Nazis to tea, because I am the boss of my instance. I have access to all my content for all time, and only my web host or Internet Service Provider can block my access (as with any self-hosted site.) And all blocking and filtering rules are under my control—you can block and filter what you want as an individual on another person’s instance, but you have no say in who/what they block and filter for the whole instance.

Similarly I recently wrote,

The logical end point of the distributed web and federated services is running your own individual instance. Much as in the way I run my own blog, I want my own Mastodon instance.

I also do see a place for federation, where a group of people from a single context run an instance of a platform. A group of neighbours, a sports team, a project team, some other association, but always settings where damaging behaviour carries a cost because social distance is finite and context defined, even if temporary or emergent.

Previously I had tried to get GNU Social running on my own hosted domain as a way to interact with Mastodon. I did not get it to work, for reasons unclear to me: I could follow people on Mastodon but would not receive their messages, nor would they see mine.

This morning I saw the message below in my Mastodon timeline.

It originates from Peter Rukavina’s own GNU Social install. So at least he got the ‘sending mentions’ part working. He is also able to receive my replies, as my responses show up underneath his original message. Including ones whose visibility I limited, it seems.

Now I am curious to compare notes. Which version of GNU Social? Any tweaks? Does Peter receive my timeline? How do permissions propagate (I only let people follow me after I approve them)? And more. I notice that his URL structures are different from those in my GNU Social install, for instance.

After I moved my Mastodon account from a general server to a self-run instance last week, I was curious to see how many of the followers I had on the old account would make the move to my current Mastodon account. After all, the ‘cost of leaving’ is always a consideration when changing course in social media usage, in this case specifically the portability of your existing network. Last week I had 43 followers on the old account, and I now have 11 on my new account, so that is about 25%. Let’s see if it grows in the coming days. Likely some of the followers I had no longer use Mastodon. So another question is when I will reach the same number of followers through engaging in new conversations.

As I didn’t succeed yet in getting Mastodon to run on a Raspberry Pi, nor in running a GNU Social instance that actually federates on my hosting package, I’ve opted for an intermediate solution to running my own Mastodon instance.

Key in all this is satisfying three dimensions: control, flexibility and ease of use. My earlier attempts satisfy the control and flexibility dimensions, but as I have a hard time getting them to work, they do not satisfy the ease of use dimension yet.

At the same time I did not want to keep using Mastodon on a generic server much longer, as it builds up a history there which, with every conversation, ups the cost of leaving.

The logical end point of the distributed web and federated services is running your own individual instance. Much as in the way I run my own blog, I want my own Mastodon instance.

Such an individual instance needs to be within my own scope of control. This means having it at a domain I own, and being able to move everything to a different server at will.

There is a hoster, Masto.host, run by Hugo Gameiro, who provides Mastodon hosting as a monthly subscription. As it allows me to use my own domain name, and provides me with admin privileges on the Mastodon instance, this is a workable solution. When I succeed in getting my own instance of Mastodon running on the Raspberry Pi, I can simply move the entire instance at Masto.host to it.

Working with Hugo at Masto.host was straightforward. After registering for the service, Hugo got in touch with me to ensure the DNS settings on my own domain were correct, and shortly afterwards everything was up and running.

Frank Meeuwsen, who started using Masto.host last month, kindly wrote up a ‘moving your Mastodon account’ guide on his blog (in Dutch). I followed most of that, to ensure a smooth transition.
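With the account now living on a domain I control, a quick way to check that it answers federation lookups is WebFinger, the discovery mechanism Mastodon uses to find accounts on other domains. A minimal sketch, assuming the instance is reachable over HTTPS, with my own account as the example:

# Minimal sketch: check that a Mastodon account on your own domain answers
# WebFinger lookups, the discovery mechanism other instances use to find you.
import json
import urllib.parse
import urllib.request

def webfinger(account):  # e.g. "ton@m.tzyl.nl"
    domain = account.split("@")[-1]
    query = urllib.parse.urlencode({"resource": "acct:" + account})
    url = "https://" + domain + "/.well-known/webfinger?" + query
    with urllib.request.urlopen(url) as response:
        return json.load(response)

print(webfinger("ton@m.tzyl.nl")["subject"])  # should echo acct:ton@m.tzyl.nl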

Using Mastodon? Do follow me at https://m.tzyl.nl/@ton.

Screenshots of my old Mastodon.cloud account, and my new one on my own domain. And the goodbye and hello world messages from both.

In the past few days I tried a second experiment in running my own Mastodon instance, both to actually get a result, and to learn how easy or hard it is to do. The first round I tried running something on a hosted domain. This second round I tried to get something running on a Raspberry Pi.

The Raspberry Pi is a 35 euro computer, making it very useful for stand-alone solutions or as a cheap hardware environment to learn things like programming.

Installing Debian Linux on the Raspberry Pi

I found this guide by Wim Vanderbauwhede, which describes installing both Mastodon and Pleroma on a Raspberry Pi 3. I ordered a Raspberry Pi 3 and received it earlier this week. Wim’s guide points to another guide on how to install Ruby on Rails and PostgreSQL on a Raspberry Pi. That link however was dead, and the website offline, but archive.org had stored several snapshots, which I saved to Evernote.

Installing Ruby on Rails went fine using the guide, as did installing PostgreSQL. Then I returned to Wim’s guide, which now points to the Mastodon installation guide. This is where the process currently fails for me: I can’t add the Ubuntu repositories mentioned there, nor node.js.

So for now I’m stalled. I’ll try to get back to it later next week.