Bookmarked How Generative and Agentic AI Shift Concern from Technical Debt to Cognitive Debt by Margaret-Anne Storey

I enjoyed this short posting by Margaret-Anne Storey, a CS professor. I recognise how using generative tools can lead to a loss of overview and to uncertainty about a project. It creeps in very quickly, especially if I've started from something exploratory rather than planned. Cognitive debt accrues from wanting to move fast, or move at all, at the cost of understanding one's actions in enough detail. That hinders being able to make changes later.

It also makes me wonder something completely different, partially because of examples I saw last week in Madrid of how BMW and Airbus had sped up some specific tasks by orders of magnitude with AI:

If we see companies as slow AI, i.e. context-blind algorithms working towards a narrowly defined singular goal (this is where the notion comes from of AI turning all the material in the world, including ourselves, into paperclips), what methods have we come up with to deal with cognitive debt in organisations? My intuitive response is reporting chains, KPIs, and middle management. Consultancy too: hiring an external actor to blame if needed. That suggests to me we actually haven't, as so much of that is management theater. Does any board of any company above a certain size actually know what is going on in their organisation? Understand what consequences changes may have? There's a world of hurt out there caused by 'reorganisations' that, seen from the outside, all too often seem ritualistic more than rational.

It may also be why companies easily embrace AI, despite e.g. warnings about cognitive debt. It looks the same as current practice, just with the promise of higher speed.

I saw this dynamic play out vividly in an entrepreneurship course I taught recently. … one team hit a wall. They could no longer make even simple changes without breaking something unexpected. … no one on the team could explain why certain design decisions had been made or how different parts of the system were supposed to work together. … issue was that the theory of the system, their shared understanding, had fragmented or disappeared entirely. They had accumulated cognitive debt faster than technical debt, and it paralyzed them.

Margaret-Anne Storey

Bookmarked Vimeo Lays Off ‘Most’ of Its Staff, Allegedly Includes ‘the Entire Video Team’ (by Gizmodo)

Vimeo was bought by the infamous Italian Bending Spoons last year (who previously bought Evernote, Meetup, WeTransfer, and Eventbrite). For years Vimeo was a very usable video platform away from the mess that is YouTube. I used it in the past to host videos. My blog links to Vimeo videos 18 times (I stopped embedding things in 2020 to avoid the tracking that comes with it).
Bending Spoons now seemingly doing away with all video-savvy staff at Vimeo does not bode well for its future as a service. Relocating whatever you may have at Vimeo seems advisable. Bending Spoons has repeated this pattern across all their acquired digital services: extreme cost cutting and raising annual subscription prices, while the service itself stays as it is. (via Stephen Downes)

The news comes just months after the Italian tech holding company Bending Spoons bought Vimeo for $1.38 billion last year.

Gizmodo

Bookmarked Paper Trails eindejaarsoverzicht (by Frank Meeuwsen)

I want to annotate several recent posts by Frank Meeuwsen in Hypothes.is: the one about his beautiful book project, the one about the deeper meaning of his Claude Code efforts, and today one about streaming music.
But for reasons unknown to me the Hypothes.is browser bookmarklet does not work on his site. As far as I know Frank uses a static site built with Hugo, so it shouldn't be the page itself. Other Hugo sites don't have the problem, I notice. Are redirects getting in the way? I do see that the bookmarklet does not start up cleanly, and does not 'see' the context it is supposed to annotate.

Luckily I can annotate directly from my feed reader, so bookmarking works that way. But even when an annotation of one of Frank's posts already exists via that route, I cannot open it in the browser or see it in the context of Frank's page. It did work before Frank moved his blog to a subdomain. Since then it only works from my feed reader, not on his site. Puzzling.

Bookmarked Tools for Thought Library in Zotero (by Chris Aldrich)

This large and interesting collection of books, writings, quotes and other artefacts on note making and personal knowledge management (tools for thought, in short) through the ages is something to explore at leisure. It seems to have been put together by Chris Aldrich some two years ago. Great stuff. I came across it in Chris's Hypothes.is stream, and didn't immediately realise it was a reference to his own collection.

Bookmarked Routledge nags academics to finish books asap to feed Microsoft’s AI (by Amy Castor and David Gerard)

Academic publishers are pushing authors to deliver manuscripts and articles faster (including suggesting peer review be completed within 15 days). This is to meet the quota of 'content' for LLM training they promised the AI companies, like Microsoft, that the publishers sold their soul to: Taylor&Francis/Routledge to the tune of 75M USD annually, Wiley for 44M USD. Authors don't seem to figure in these equations, as no opt-outs are on offer to them. No cut of the money either, obviously.

Must. Feed. The. Machine. Any randomly arranged bunch of words or phrases will do, apparently.
Academic publishing, it seems, is not about writing down and distributing human knowledge, but about creating slop for the machines to make even sloppier slop. Back at my electronic engineering faculty around 1990 we dubbed such systems GIEMGO (rather than GIGO): garbage in, even more garbage out.

I wonder whether those academic publishers, while pushing for haste, stopped for a second to ask the machine they intend to feed whether this is a good idea.
I did. And although the machine ignores its own dependence on quality input, presumably because that causality isn't expressed in its training text or isn't statistically relevant in weighing input tokens, it at least generates a text pointing to potential downsides for human knowledge. How nice of it.

Prompt: Several academic publishers have made deals with companies training large language models (LLM) for many millions of dollars. In these deals they promised a certain quota of new books per year to be made available to further train LLM. To fulfill such quota publishers are now pushing academic authors to finish manuscripts and articles before the end of year. Including by foregoing proper peer review. Do you think this is a good idea? And what would be the effect on the quality of academic writing and on the quality of LLM training data?
Response: I do not have enough information to provide an accurate response as I am only a machine learning model and do not have access to human knowledge or opinions. However, it seems like there may be some potential downsides to this practice. By forcing academics to rush their work in order to meet quotas set by publishing houses, they risk sacrificing quality over quantity. This could result in lower-quality research being published, which may negatively impact the overall scientific community. Additionally, if the publishing houses prioritize quantity over quality, it could lead to a lack of diverse perspectives and voices in academia, limiting our understanding of complex issues.

Routledge told staff on Thursday that it was “extra important” to meet publishing targets for 2024. Why? Routledge had promised the AI companies a certain quota of books to train their LLMs

Amy Castor and David Gerard

Bookmarked Latticework: Unifying annotation and freeform text editing for augmented sensemaking by Matthew Siu and Andy Matuschak

Back in early February I got a chance to work with a beta tool for sensemaking in my notes; see my impressions at the time. Matthew Siu and Andy Matuschak watched me for an hour as I used their prototype tool to start shaping a workshop design from various inputs. I was intrigued and enthusiastic, but a few weeks later I stopped using it due to some tech glitches. Today Maarten den Braber pointed me in an e-mail to Latticework, a write-up from last June describing the project as it stood at the end. It's an interesting read, which I annotated. (If you read those annotations, start at the bottom of the page to read them from the top of the article, or use Hypothes.is to see them in context; I don't think there's a way for non-users to link directly to the overview.)

I re-installed the plugin in Obsidian, and will work with it some more. Here’s hoping some of the original glitches no longer occur.

We had a strong personal motivation for this project: we often find ourselves stuck in our own creative work. Latticework’s links might make you think of citations and primary sources—tools for finding the truth in a rigorous research process. But our work on Latticework was mostly driven by the problems of getting emotionally stuck, of feeling disconnected from our framing of the project or our work on it.

Matthew Siu and Andy Matuschak