Bookmarked Mechanisms of Techno-Moral Change: A Taxonomy and Overview (by John Danaher and Henrik Skaug Sætra)

Via Stephen Downes. An overview of the mechanisms through which technological change brings about moral change. At first glance this seems to me a sort of detailing of Smits’ 2002 PhD thesis on Monster theory, which looks at how technological change can challenge cultural categories, here diving into the specific part where cultural categories are adapted to fit new technology in. The citations don’t mention Smits or the anthropological work of Mary Douglas it is connected to. It does cite references by Peter-Paul Verbeek and Marianne Boenink (all three from the PSTS department I studied at), so no wonder I sense a parallel here.

The first example in the table explaining the six identified mechanisms points towards such a parallel too: the 1970s redefinition of death as brain death, a redefinition of cultural concepts to assimilate technological change, was also used as an example in Smits’ work. The third example is a direct parallel to my 2008 post on empathy as a cultural category shifting because of digital infrastructure, and to how I talked in 2010 about hyperconnected individuals and the impact on empathy, in the context of the changes bringing forth MakerHouseholds.

Monster theory was meant as a tool to understand and diagnose discussions of new technology, wherein the assimilation part (both cultural categories and technology get adapted) is the pragmatic route (the mediation theory of Peter-Paul Verbeek is located there too), but it doesn’t as such provide ways to act or intervene. Does this taxonomy provide options to act?
Or is this another descriptive way to locate where moral effects might take place, and the various types of responses to Monsters still determine the potential moral effect?

The paper is directly available; I’ve added it to my Zotero library for further exploration.

Many people study the phenomenon of techno-moral change but, to some extent, the existing literature is fragmented and heterogeneous – lots of case studies and examples but not enough theoretical unity. The goal of this paper is to bring some order to existing discussions by proposing a taxonomy of mechanisms of techno-moral change. We argue that there are six primary mechanisms…

John Danaher

Bookmarked Disinformation and its effects on social capital networks (Google Doc) by Dave Troy

This document by US journalist Dave Troy positions resistance against disinformation not as a matter of fact-checking and technology but as one of reshaping social capital and cultural network topologies. I plan to read this; especially the premises part looks interesting. Some upfront associations are with Valdis Krebs’ work on the US democratic / conservative party divide, where he visualised it based on cultural artefacts, i.e. books people bought (2003-2008), to show spheres and overlaps, and with the Finnish work on increasing civic skills, which to me seems a mix of critical crap detection skills woven into a social/societal framework. Networks around a belief or a piece of disinformation for me also point back to what I mentioned earlier about generated (and thus fake) texts: how attempts to detect such fakes usually center on the artefact, not on the richer tapestry of information connections around it (see the last two bullet points and the final paragraph there). I called it provenance and entanglement as indicators of authenticity recently, entanglement being the multiple ways it is part of a wider network fabric. And there’s the more general notion of Connectivism, where learning and knowledge are situated in networks too.
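To make that notion of entanglement a bit more concrete, here’s a toy sketch. This is my own illustration, not anything from Troy’s document: the graph data, node names and scoring rule are all invented. The idea is simply that an artefact embedded in a web of replies, links and prior work scores higher than one dropped in from an unconnected account.

```python
# Toy illustration of judging an artefact by its network entanglement
# rather than by its content alone. All data here is invented.
import networkx as nx

G = nx.Graph()
# Edges: who links to / replies to / cites what.
G.add_edges_from([
    ("post:genuine", "blog:longrunning"),
    ("post:genuine", "reply:colleague"),
    ("post:genuine", "cites:2008-post"),
    ("blog:longrunning", "webring:peers"),
    ("post:dropped-fake", "account:new"),  # a fake tends to arrive unconnected
])

def entanglement(graph, artefact, radius=2):
    """Crude score: how much surrounding network fabric an artefact sits in."""
    neighbourhood = nx.ego_graph(graph, artefact, radius=radius)
    return neighbourhood.number_of_nodes() - 1  # connections within `radius` hops

print(entanglement(G, "post:genuine"))       # well entangled: 4
print(entanglement(G, "post:dropped-fake"))  # barely entangled: 1
```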

The related problems of disinformation, misinformation, and radicalization have been popularly misunderstood as technology or fact-checking problems, but this ignores the mechanism of action, which is the reconfiguration of social capital. By recasting these problems as one problem rooted in the reconfiguration of social capital and network topology, we can consider solutions that might maximize public health and favor democracy over fascism …

Dave Troy

Bookmarked Will A.I. Become the New McKinsey? by Ted Chiang in the New Yorker

Ted Chiang realises that corporations are best positioned to leverage the affordances of algorithmic applications, and that that is where the risk of ‘runaway AIs’ resides. I agree that they are best positioned, because corporations are AI’s non-digital twin, and have been recognised as such for a decade.

Brewster Kahle said (in 2014) that corporations should be seen as the first generation of AIs, and Charlie Stross reinforced it (in 2017) by dubbing corporations ‘Slow AI’, as corporations are context-blind, single-purpose algorithms, that single purpose being shareholder value. Jeremy Lent (in 2017) made the same point when he dubbed corporations ‘sociopaths with global reach’ and said that the fear of runaway AI was focusing on the wrong thing, because “humans have already created a force that is well on its way to devouring both humanity and the earth in just the way they fear. It’s called the Corporation“. Basically our AI overlords are already here: they likely employ you. Of course existing Slow AI is best positioned to adopt its faster young, digital algorithms. As such it can be seen as the first step on the feared iterative path of runaway AI.

The doomsday scenario is … A.I.-supercharged corporations destroying the environment and the working class in their pursuit of shareholder value.

Ted Chiang

I’ll repeat the image I used in my 2019 blogpost linked above:

Your Slow AI overlords looking down on you, photo Simone Brunozzi, CC-BY-SA

Bookmarked Inside the secret list of websites that make AI like ChatGPT sound smart (by Kevin Schaul, Szu Yu Chen and Nitasha Tiku in the Washington Post)

The Washington Post takes a closer look at Google’s C4 dataset, which comprises the content of 15 million websites and has been used to train various LLMs. Perhaps it is also the one used by OpenAI for e.g. ChatGPT, although it’s not known what OpenAI has been using as source material.

They include a search engine, which lets you submit a domain name and find out how many tokens it contributed to the dataset (a token is usually a word, or part of a word).

Obviously I looked at some of the domains I use. This blog is the 102,860th contributor to the dataset, with 200,000 tokens (a ten-thousandth of a percent of the total).
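A quick back-of-the-envelope check on those numbers (my own arithmetic, not from the article): if 200,000 tokens amounts to a ten-thousandth of a percent, the full dataset must hold on the order of 200 billion tokens, at least the right order of magnitude for a web-scale training corpus.

```python
# Implied size of the C4 dataset from this blog's share, per the numbers above.
tokens_from_this_blog = 200_000
share_of_total = (1 / 10_000) / 100  # a ten-thousandth of one percent, as a fraction

implied_total_tokens = tokens_from_this_blog / share_of_total
print(f"{implied_total_tokens:,.0f}")  # 200,000,000,000 -> roughly 200 billion tokens
```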


Screenshot of the Washington Post’s search tool, showing the result for this domain, zylstra.org.

Bookmarked Feedly launches strikebreaking as a service (by Molly White)

Molly White does a good write-up of the extremely odd and botched launch by Feedly of a service to keep tabs on protests that might impact your brand, assets or people. Apparently in that order, too. When E first mentioned this to me I was confused: what’s the link with a feed reader, after all? Feedly’s subsequent excuse that they ‘didn’t consider abuse of this service’ sounds rather hollow, as their communications around it present precisely that potential for abuse as the service being announced.

The question ‘how did Feedly end up here?’ kept revolving in my mind. Turns out the starting point is logged in my own blog:

Machines can have a big role in helping understand the information, so algorithms can be very useful, but for that they have to be transparent and the user has to feel in control. What’s missing today with the black-box algorithms is where they look over your shoulder, and don’t trust you to be able to tell what’s right.

Edwin Khodabakchian, cofounder and CEO of RSS reader Feedly, in Wired, March 2018

In a twisted way I can see the reflection of that quote in the service Feedly announced, specifically the first part, using algorithms to better understand information. The second part, the bit about transparency, avoiding black boxes, and putting users in control, seems to have gone missing in the past five years though. Especially the ‘not trusting people to tell what’s right’ grates: it seems to me Feedly users in the past days very much could tell what’s right, and Feedly hoped they wouldn’t.

I do agree with the 2018 quote, but ‘algorithmic interpretation as a service‘ isn’t what follows from it for me. That’s just a different way of commoditising your customers.
Algorithmic spotting of emergent patterns is relevant if I can define the context and network of people (and perhaps media sources) whose feeds I follow. For that I need to be in control of the algorithm, and need to be the one who defines which specific emergent patterns I am interested in. That is on my list for my ideal feed reader; a rough sketch of what I mean follows below. But this botched Feedly approach isn’t that.
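A minimal sketch of that kind of user-controlled pattern spotting, under my own assumptions: the feed URLs are placeholders, and the watchlist and threshold are invented for the example. The point is that both the network of sources and the definition of what counts as a pattern sit with the user, not with an opaque service.

```python
# Minimal sketch: surface terms that emerge across several feeds I chose to follow.
from collections import defaultdict

import feedparser  # widely used RSS/Atom parsing library

# The user defines the network: feeds of people and sources they follow.
FEEDS = [
    "https://example.com/feed.xml",  # placeholder URLs
    "https://example.org/rss",
]

# The user defines which patterns matter to them.
WATCH_TERMS = {"protest", "strike", "walkout"}

def spot_emergent_terms(feed_urls, watch_terms, min_feeds=2):
    """Report watched terms appearing in at least `min_feeds` distinct feeds."""
    seen_in = defaultdict(set)  # term -> set of feeds mentioning it
    for url in feed_urls:
        parsed = feedparser.parse(url)
        for entry in parsed.entries:
            text = (entry.get("title", "") + " " + entry.get("summary", "")).lower()
            for term in watch_terms:
                if term in text:
                    seen_in[term].add(url)
    # A term surfacing across several independently chosen feeds is an
    # emergent pattern within *my* network, not a global trending topic.
    return {t: sorted(urls) for t, urls in seen_in.items() if len(urls) >= min_feeds}

if __name__ == "__main__":
    for term, sources in spot_emergent_terms(FEEDS, WATCH_TERMS).items():
        print(f"{term}: seen in {len(sources)} feeds")
```

The design choice that matters is that both inputs are mine: which feeds constitute the context, and which emergent patterns count as signal.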

Bookmarked The push to AI is meant to devalue the open web so we will move to web3 for compensation (by Mita Williams)

Adding this interesting perspective from Mita Williams to my notes on the effects of generative AI. She positions generative AI as bypassing the open web entirely (abstracted away into the models the AIs run on). Sharing is thus disincentivised, as sharing no longer brings traffic or conversation if it is only used as model fodder. I’m not at all sure that is indeed the case, but ever since a 2016 Flickr images database was used for AI model training, such as in IBM’s 2019 facial recognition efforts, it has been a concern, leading to questions about whether existing (Creative Commons) licenses are still fit for purpose. Specifically, Williams pointing not only to the impact on individual creators but also to the impact at the level of the communities they form, are part of, and interact in, strikes me as worth thinking more about. The erosion of (open source, maker, collaborative etc.) community structures is a whole other level of potential societal damage.

Mita Williams suggests the described erosion is not an effect but an actual aim of tech companies, part of a bait and switch: a re-siloing, an enclosure of the commons, where once again seeing something in return for online sharing is the lure. In it, the open web may fall by the wayside and become even more niche than it already is.

…these new systems (Google’s Bard, the new Bing, ChatGPT) are designed to bypass creators work on the web entirely as users are presented extracted text with no source. As such, these systems disincentivize creators from sharing works on the internet as they will no longer receive traffic…

Those who are currently wrecking everything that we collectively built on the internet already have the answer waiting for us: web3.

…the decimation of the existing incentive models for internet creators and communities (as flawed as they are) is not a bug: it’s a feature.

Mita Williams