We’re in a time where whatever is presented to us as discourse on Facebook, Twitter or any of the other platforms out there may come from humans, from bots, or from a person or group with a specific agenda, irrespective of what you say or how you respond. We’ve seen it at the political level, with outside influence on elections; we see it in things like Gamergate, and in critiques of the last Star Wars movie. It creates damage at a societal level, and it damages people individually. To quote Angela Watercutter, the author of the Star Wars article just mentioned,
…it gets harder and harder to have an honest discussion […] when some of the speakers are just there to throw kerosene on a flame war. And when that happens, when it’s impossible to know which sentiments are real and what motivates the people sharing them, discourse crumbles. Every discussion […] could turn into a […] fight — if we let it.
Discourse disintegrates, I think, specifically when there’s no meaningful social context in which it takes place, nor social connections between the speakers in that discourse. The effect stems not just from the fact that you can’t, or don’t, really know who you’re conversing with, but more importantly, I think, from the fact that anyone on a general platform can bring themselves into the conversation, or worse, force themselves into it. Which is why you should never wade into newspaper comments, even though we all read them at times, because watching discourse crumble from the sidelines has a certain addictive quality. This can happen because participants themselves don’t control the setting of any conversation they are part of, and because none of those conversations are limited to a specific (social) context.
Unlike in your living room, over drinks in a pub, or at a party with friends of friends of friends. There you know someone. Or if you don’t, you know them in that setting, you know their behaviour at that event thus far. All have skin in the game, as misbehaviour has immediate social consequences. Social connectedness is a necessary context for discourse, stemming either from personal connections or from the setting of the place or event it takes place in. Online discourse often lacks both, discourse crumbles, entropy ensues. Without consequence for those causing the crumbling. Which makes it fascinating when the missing social context is retroactively restored, outing the misbehaving parties, such as in the book by Tinkebell I once bought, in which she matches the death threats she received against the senders’ very normal Facebook profiles.
Two elements are therefore needed, I find: one in terms of determining who can be part of which discourse, and two in terms of control over the context of that discourse. They are points 2 and 6 in my manifesto on networked agency.
This is unlike e.g. FB, where by design defending against trollish behaviour takes more effort than being a troll, and never carries a cost for the troll. There must, in short, be a finite social distance between speakers for discourse to be possible. Platforms that dilute that, or allow for infinite social distance, are where discourse can crumble.
This points to federation (a platform within the control of a specific group, interconnected with other groups doing the same), and decentralisation (individuals running a platform for one, and interconnecting those). Doug Belshaw recently wrote in a post titled ‘Time to ignore and withdraw?’ about how he at first saw individuals running their own Mastodon instance as quirky and weird, until he read a blog post by Laura Kalbag in which she writes about why you should run Mastodon yourself if possible:
Everything I post is under my control on my server. I can guarantee that my Mastodon instance won’t start profiling me, or posting ads, or inviting Nazis to tea, because I am the boss of my instance. I have access to all my content for all time, and only my web host or Internet Service Provider can block my access (as with any self-hosted site.) And all blocking and filtering rules are under my control—you can block and filter what you want as an individual on another person’s instance, but you have no say in who/what they block and filter for the whole instance.
Similarly, I recently wrote,
The logical end point of the distributed web and federated services is running your own individual instance. Much as in the way I run my own blog, I want my own Mastodon instance.
I also see a place for federation, where a group of people from a single context run an instance of a platform: a group of neighbours, a sports team, a project team, some other association, but always settings where damaging behaviour carries a cost, because social distance is finite and the context is defined, even if temporary or emergent.
The Twitter-like platform Gab has been forced offline, as its payment providers, hosting provider and domain provider all told it its business was no longer welcome. The platform is home to people with extremist views who claim their freedom of speech is under threat. At issue, of course, is where that speech becomes a call for violence, such as by the Gab user who, driven by antisemitic hate, horribly murdered 11 people last weekend in Pittsburgh.
Will we see an uptick in the use of federated sites such as Mastodon when much more public platforms like Gab disappear?
This, I think, isn’t about extremists being ‘driven underground’, but about denying calls for violence, such as those made on Gab, a place in public discourse. An uptick in the use of federated sites would be a good development, as federation allows much smaller groups to get together around something, whatever it is. In reverse, that means no one else needs to be confronted with it either, if they don’t want to be. Within the federation of Mastodon sites I regularly come across instances listing other instances they do not connect to, and for what reasons. It puts the power of supporting welcome behaviour and pushing back on unwelcome behaviour in the hands of more people, meaning every person running a Mastodon instance (and you can have your own instance), rather than just Twitter or Facebook management.
Example of an instance denying federation with another instance
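As an aside, a minimal sketch of how you might look at such an instance-level blocklist yourself: recent Mastodon versions (4.x) can expose an instance’s list of blocked domains through a public API endpoint, but only if the admin has chosen to publish it. The snippet below assumes that is the case; the instance name mastodon.example is a placeholder, not a real server.

```python
# Minimal sketch: fetch an instance's publicly listed blocked (defederated) domains.
# Assumes a Mastodon 4.x instance whose admin has made the blocklist public;
# otherwise the endpoint responds with an error. "mastodon.example" is a placeholder.
import json
import urllib.request

INSTANCE = "mastodon.example"  # placeholder instance name

url = f"https://{INSTANCE}/api/v1/instance/domain_blocks"
with urllib.request.urlopen(url) as response:
    blocks = json.load(response)

for block in blocks:
    # Each entry lists the blocked domain, the severity (e.g. "silence" or "suspend"),
    # and optionally a public comment explaining the reason for the block.
    print(block["domain"], block["severity"], block.get("comment") or "")
```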
That sort of moderation can still be hard, even if the moderator-to-member ratio is already much better than on the main platforms. But that just points the way to the long tail of much smaller instances, even more individual ones. It becomes easier for individuals and small groups to shun small cells, echo chambers and rage bubbles, and to avoid accidentally ending up in them or being forcefully drawn into them while having other conversations, as can happen on Twitter. See my earlier posting on the disintegration of discourse. You can then do what networks do well: route around the stuff you perceive as damage or non-functional. It creates a stronger power symmetry and communication symmetry.

It also denies extremists a wider platform. Yes, they can still call for violence, which remains just as despicable. Yes, they can still blame Others for anything and be hateful towards them. But they will be doing it in their own back yard (or Mastodon instance), not in the park where you like to walk your dog or do your morning run (or Twitter). They will not have a podium bigger than warranted, and they will not have visibility beyond their own in-crowd. And they will have to deal with more pushback and reality whenever they step outside such a bubble, without the pleasant illusion that ‘everyone on Twitter agrees with me’.