How a confused, defensive social media giant steered itself into a disaster
Service announcement: I regularly lie to data gathering platforms like FB. So any message from FB telling you it’s my birthday today can be safely ignored. It’s not. They wanted to check my age when I created the account. They don’t need a day and month for that, and for that matter any year before 2000 will do. I lied to FB. You should too.
For those of you sending birthday wishes: thank you, I appreciate hearing from you. It’s good to know you
You’re right Chris. I disengaged from FB for much the same reasons in November 2017, and it has heavily influenced my blogging: both writing about the small stuff and the deeper content have increased by a lot. So welcome ‘back’ on the IndieWeb side, outside the silos. Like you I do still maintain a FB profile (but an empty one, with its 11-year history discarded), mostly because for some parts of my professional network FB is the internet, and they have no other reliable way, besides email, to reach me.
As you use WP, you may want to check out the IndieWeb plugins, especially the Webmention plugin, which lets you follow conversations distributed over multiple blogs, like in the olden days.
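For the curious, a minimal sketch of the first step of the W3C Webmention protocol that the plugin implements: endpoint discovery. Per the spec, a sender checks the target page’s HTTP Link header first, then falls back to a `link` or `a` element with `rel="webmention"` in the HTML. The regexes and URLs below are simplified placeholders for illustration, not a full conforming parser.

```python
# Simplified Webmention endpoint discovery (W3C Webmention, section on
# sender discovery). Illustrative sketch only; real parsers must handle
# more HTML/header variations than these regexes do.
import re
from urllib.parse import urljoin


def find_webmention_endpoint(link_header: str, html: str, base_url: str):
    """Return the webmention endpoint URL for a page, or None.

    Checks the HTTP Link header first, then the first <link>/<a>
    element with rel="webmention"; relative URLs are resolved
    against base_url.
    """
    m = re.search(r'<([^>]*)>\s*;\s*rel="?webmention"?', link_header)
    if m:
        return urljoin(base_url, m.group(1))
    m = re.search(
        r'<(?:link|a)\b[^>]*rel="webmention"[^>]*href="([^"]*)"', html)
    if m:
        return urljoin(base_url, m.group(1))
    return None
```

Once the endpoint is found, the sender simply POSTs a form-encoded body with `source` (the page linking) and `target` (the page linked to) to it; the receiver then verifies that the source really links to the target.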
Some things I thought worth reading in the past days
- A good read on how machine learning (ML) currently merely obfuscates human bias, by moving it into the training data and the coding, to arrive at peace of mind through pretend objectivity. By claiming that it’s ‘the algorithm deciding’, you turn ML into a kind of digital alchemy. It introduced some fun terms to me, like fauxtomation and Potemkin AI: Plausible Disavowal – Why pretend that machines can be creative?
- These new Google patents show how problematic the current smart home efforts are, including their precursors, the Alexa and Echo microphones in your house. They strip you of agency rather than providing it. These particular patents also nudge you to treat your children much the way surveillance capitalism treats you: as suspects to be watched, with relationships denuded of the subtle human capability to trust. Agency only comes from being in full control of your tools. Adding someone else’s tools (here not just Google’s, but your health insurer’s, your landlord’s etc.) to your home doesn’t make it smart, but a self-censorship-promoting escape room, a fractal of the panopticon. We need to start designing more technology based on distributed use, not on a centralised controller: Google’s New Patents Aim to Make Your Home a Data Mine
- An excellent article by the NYT about Facebook’s slide to the dark side: when the student dorm room excuse “we didn’t realise, we messed up, but we’ll fix it for the future” fails, and you weaponise your own data-driven machine against your critics, thus proving those critics right. Weaponising your own platform isn’t surprising, but it is very sobering and telling. Will it be a tipping point in how the public views FB? Delay, Deny and Deflect: How Facebook’s Leaders Fought Through Crisis
- Some takeaways from the article just mentioned that we should keep top of mind when interacting with or talking about Facebook: FB knew very early on that it was being used to influence the US 2016 election and chose not to act. FB feared backlash from specific user groups and opted to enforce its terms of service/community guidelines unevenly. Cambridge Analytica is not an isolated abuse, but a concrete example of the wider issue. FB weaponised its own platform to oppose criticism: How Facebook Wrestled With Scandal: 6 Key Takeaways From The Times’s Investigation
- There really is no plausible deniability for FB’s execs on their “in-house fake news shop”: Facebook’s Top Brass Say They Knew Nothing About Definers. Don’t Believe Them. So when you are forced to admit it, you fall back on the ‘we messed up, we’ll do better going forward’ tactic.
- As Aral Balkan says, that’s the real issue at hand because “Cambridge Analytica and Facebook have the same business model. If Cambridge Analytica can sway elections and referenda with a relatively small subset of Facebook’s data, imagine what Facebook can and does do with the full set.”: We were warned about Cambridge Analytica. Why didn’t we listen?
- [update] Apparently all the commotion is causing Zuckerberg to think FB is ‘at war’, with everyone it seems, which is problematic for a company whose mission is to open up and connect the world, and which is based on a perception of trust. A bunker mentality also probably doesn’t bode well for FB’s corporate culture and hence its future: Facebook At War.
We’re in a time where whatever is presented to us as discourse on Facebook, Twitter or any of the other platforms may come from humans, bots, or a person or group with a specific agenda, irrespective of what you say in response. We’ve seen it at the political level, with outside influences on elections; we see it in things like Gamergate, and in critiques of the last Star Wars movie. It creates damage on a societal level, and it damages people individually. To quote Angela Watercutter, the author of the mentioned Star Wars article,
…it gets harder and harder to have an honest discussion […] when some of the speakers are just there to throw kerosene on a flame war. And when that happens, when it’s impossible to know which sentiments are real and what motivates the people sharing them, discourse crumbles. Every discussion […] could turn into a […] fight — if we let it.
Discourse disintegrates, I think, specifically when there’s no meaningful social context in which it takes place, and no social connections between the speakers in that discourse. The effect stems not just from not really knowing who you’re conversing with, but, I think more importantly, from anyone on a general platform being able to insert themselves into the conversation, or worse, force themselves into it. Which is why you should never wade into newspaper comments, even though we all read them at times, because watching discourse crumble from the sidelines has a certain addictive quality. This can happen because participants don’t control the setting of any conversation they are part of, and none of those conversations are limited to a specific (social) context.
Unlike in your living room, over drinks in a pub, or at a party with friends of friends of friends. There you know someone. Or if you don’t, you know them in that setting: you know their behaviour at that event thus far. All have skin in the game, as misbehaviour has immediate social consequences. Social connectedness is a necessary context for discourse, stemming either from personal connections, or from the setting of the place/event it takes place in. Online discourse often lacks both, so discourse crumbles and entropy ensues, without consequence for those causing the crumbling. Which makes it fascinating when missing social context is retroactively restored, outing the misbehaving parties, such as in the book I once bought by Tinkebell, in which she matches the death threats she received against the senders’ very normal Facebook profiles.
Two elements are therefore needed, I find: one, determining who can be part of which discourse; and two, control over the context of that discourse. They are points 2 and 6 in my manifesto on networked agency.
This is unlike e.g. FB, where by design defending against trollish behaviour takes more effort than being a troll, and never carries a cost for the troll. There must, in short, be a finite social distance between speakers for discourse to be possible. Platforms that dilute that, or allow for infinite social distance, are where discourse crumbles.
This points to federation (a platform within the control of a specific group, interconnected with other groups doing the same) and decentralisation (individuals running a platform for one, and interconnecting those). Doug Belshaw recently wrote, in a post titled ‘Time to ignore and withdraw?’, about how he first saw individuals running their own Mastodon instance as quirky and weird, until he read a blog post by Laura Kalbag on why you should run Mastodon yourself if possible:
Everything I post is under my control on my server. I can guarantee that my Mastodon instance won’t start profiling me, or posting ads, or inviting Nazis to tea, because I am the boss of my instance. I have access to all my content for all time, and only my web host or Internet Service Provider can block my access (as with any self-hosted site.) And all blocking and filtering rules are under my control—you can block and filter what you want as an individual on another person’s instance, but you have no say in who/what they block and filter for the whole instance.
Similarly I recently wrote,
The logical end point of the distributed web and federated services is running your own individual instance. Much as in the way I run my own blog, I want my own Mastodon instance.
I also do see a place for federation, where a group of people from a single context run an instance of a platform. A group of neighbours, a sports team, a project team, some other association, but always settings where damaging behaviour carries a cost because social distance is finite and context defined, even if temporary or emergent.
Slate saw its traffic from Facebook drop by 87% in a year, after changes in how FB prioritises news and personal messages in your timeline. Talking Points Memo reflects on it, and in doing so formulates a few things I find of interest.
“Facebook is a highly unreliable company. We’ve seen this pattern repeat itself a number of times over the course of company’s history: its scale allows it to create whole industries around it depending on its latest plan or product or gambit. But again and again, with little warning it abandons and destroys those businesses.” … “Google operates very, very differently.” … “Yet TPM gets a mid-low 5-figure check from Google every month for the ads we run on TPM through their advertising services. We get nothing from Facebook.” … “Despite being one of the largest and most profitable companies in the world Facebook still has a lot of the personality of a college student run operation, with short attention spans, erratic course corrections and an almost total indifference to the externalities of its behavior.”
This first point, I think, is very much about networks and ecosystems: do you see others as part of your ecosystem, or merely as a temporary leg-up until you can ditch them or dump externalities on them?
The second point TPM makes is about visitors versus ‘true audience’.
“we are also seeing a shift from a digital media age of scale to one based on audience. As with most things in life, bigger is, all things being equal, better. But the size of a publication has no necessary connection to its profitability or viability.” Scale is a path to a monopoly that works for tech (like FB) but not for media, the author Josh Marshall says. “…the audience era is vastly better for us than the scale era.”
Audience, or ‘true audience’ as TPM has it, are the people who have a long-term connection to you, who return regularly to read articles. The ones you’re building a connection with, for whom TPM, or any newsy site, is an important node in their network. Scaling there isn’t about the numbers, although numbers still help, but about the quality of those numbers, and the quality of what flows through the connections between you and your readers. The invisible hand of networks, more than trying to get ever more eyeballs.
Scale thinking would make blogging like I do useless; network thinking makes it valuable, even if there are just three readers, myself included. It’s ‘small b’ blogging, as Tom Critchlow wrote a few months ago: “Small b blogging is learning to write and think with the network.” Or as I usually describe it: thinking out loud, and having distributed conversations around it. Big B blogging, Tom writes, in contrast “is written for large audiences. Too much content on the web is designed for scale” and pageviews, where individual bloggers mimic mass media companies, because that is the only example they encounter.