On 22 and 23 March, roughly a month from now, the first European personal knowledge management (PKM) summit will take place in Utrecht, the Netherlands.
Over two days a varied line-up of PKM practitioners will present, show and discuss how they shape their personal learning and information strategies.

Personal knowledge management is enjoying a wave of renewed attention due to a new group of note-making tools that have emerged in the past few years (such as Roam, Logseq, Obsidian, Notion et al.). But personal knowledge management is much older. People generally notice things around them, and strive to make sense of the world they live in, whether on a highly practical level or a more abstract one. The urge behind PKM therefore is deeply human. The methods and availability of tools have changed over time, as has the perspective on what constitutes personal knowledge.

Over the two days a long list of well-known and less well-known practitioners of personal knowledge management is lined up. I had the pleasure of finding and approaching people to participate as speakers or workshop hosts, including experienced voices like Harold Jarche. Next to the invited speakers and hosts, there will be ample time on the schedule to do your own impromptu session, unconference style. The program will be shaped and finalised in the coming week or so.

The event is organised by the Dutch community ‘Digital Fitness’ and is a non-profit effort. There is space for at most 200 people, and tickets are still available, at 200 euros for the two-day event. The venue, Seats2Meet, is a short walk from Utrecht Central Station.

I hope to see you there!

Bookmarked Disinformation and its effects on social capital networks (Google Doc) by Dave Troy

This document by US journalist Dave Troy positions resistance against disinformation not as a matter of fact-checking and technology but as one of reshaping social capital and cultural network topologies. I plan to read this; the premises part in particular looks interesting. Some upfront associations: Valdis Krebs’ work on the US democratic/conservative party divide, which he visualised based on cultural artefacts, i.e. the books people bought (2003-2008), to show spheres and overlaps; and the Finnish work on increasing civic skills, which to me seems a mix of critical crap detection skills woven into a social/societal framework. Networks around a belief or a piece of disinformation for me also point back to what I mentioned earlier about generated (and thus fake) texts: how attempts to detect such fakes usually center on the artefact, not on the richer tapestry of information connections around it (last two bullet points and final paragraph). I recently called provenance and entanglement indicators of authenticity, entanglement being the multiple ways something is part of a wider network fabric. And there’s the more general notion of Connectivism, where learning and knowledge are situated in networks too.

The related problems of disinformation, misinformation, and radicalization have been popularly misunderstood as technology or fact-checking problems, but this ignores the mechanism of action, which is the reconfiguration of social capital. By recasting these problems as one problem rooted in the reconfiguration of social capital and network topology, we can consider solutions that might maximize public health and favor democracy over fascism …

Dave Troy

With the release of various interesting text generation tools, I’m starting an experiment this and next month.

I will be posting computer-generated text, prompted by my own current interests, to a separate blog and Mastodon account. For two months I will explore whether such generated texts create interaction with and between people, and how that feels.

There are several things that interest me.

I currently experience generated texts as often bland, as flat planes of text not hinting at any richness of experience on the part of the author behind them. The texts are fully self-contained; they don’t acknowledge a world outside themselves, let alone incorporate facets of that world. In a previous posting I dubbed this an absence of ‘proof of work’.

Looking at human agency and social media dynamics, asymmetries often take agency away. It is many orders of magnitude easier to (auto)post disinformation or troll than it is for individuals to guard and defend against it. Generated texts seem to introduce new asymmetries: it is much cheaper to generate reams of text and share them than it is, in terms of attention and reading, for an individual to determine whether they are actually engaging with someone who intentionally expressed meaning, or are confronted with output where only the prompt that created it held human intention.

If we interact with a generated text by ourselves, does that convey meaning or learning? If annotation is conversation, what does annotating generated texts mean to us? If multiple annotators interact with each other, does new meaning emerge, does meaning shift?

Can computer generated texts be useful or meaningful objects of sociality?

Right after I came up with this, my Mastodon timeline passed me this post by Jeff Jarvis, which seems to be a good example of things to explore:


I posted this imperfect answer from GPTchat and now folks are arguing with it.

Jeff Jarvis

My computer generated counterpart in this experiment is Artslyz Not (which is me and my name, having stepped through the looking glass). Artslyz Not has a blog, and a Mastodon account. Two computer generated images show us working together and posing together for an avatar.


The generated image of a person and a humanoid robot writing texts


The generated avatar image for the Mastodon account

Bookmarked Agency Made Me Do It by Mike Travers

This looks like an interesting site to explore and follow (though there is no feed). First in terms of the topic, agency. I’m very interested myself in the role of technology in agency, specifically networked agency, which is located in the same spot where a lot of our everyday complexity lives. Second in terms of set-up. Mike Travers left his old blog behind to create this new site, generated from his Logseq notes, which is “more like an open notebook project. Parts of it are essay-like but other parts are collections of rough notes or pointers to content that doesn’t exist yet. The two parts are somewhat intertwingled”. I’m interested in that intertwingling, to shape this space here differently in similar ways, although unlike Travers I would maintain the existing content. Something that shows the trees and the forest at the same time, as I said about it earlier.

Agency Made Me Do It, an evolving hypertext document which is trying to be some combination of personal wiki and replacement for my old blog. … I’ve been circling around the topic of agency for a few decades now. I wrote a dissertation on how metaphors of agency are baked into computers, programming languages, and the technical language engineers use to talk about them. … I’m using “agency” as kind of a magic word to open up the contested terrain where physical causality and the mental intersect. … We are all forced to be practitioners of agency, forced to construct ourselves as agents…

Mike Travers

Bookmarked Using GPT-3 to augment human intelligence: Learning through open-ended conversations with large language models by Henrik Olof Karlsson

Wow, this essay comes with a bunch of examples of using the GPT-3 language model in fascinating ways: have it stage a discussion between two famous innovators who duke it out over a fundamental question, run your ideas by an impersonation of Steve Jobs, or use it to first explore a domain that is new to you (while being aware that GPT-3 will likely confabulate a bunch of nonsense). Just wow.
Some immediate points:

  • Karlsson talks about prompt engineering, to make the model spit out what you want more closely. Prompt design is an important feature in large scale listening, to tap into a rich interpreted stream of narrated experiences. I can do prompt design to get people to share their experiences, and it would be fascinating to try that experience out on GPT-3.
  • He mentions Matt Webb’s 2020 post about prompting, quoting “it’s down to the human user to interview GPT-3”. This morning I started reading Luhmann’s Communicating with Slip Boxes with a view to annotation. Luhmann talks about the need for his notes collection to be thematically open-ended, and the factual status or not of information to be a result of the moment of communication. GPT-3 is trained on the internet, and it hallucinates. Now here we are communicating with it, interviewing it, to elicit new thoughts, ideas and perspectives, similar to what Luhmann evocatively describes as communication with his notes. That GPT-3 results can be totally bogus is much less relevant, as it’s the interaction that leads to new notions within yourself, and you’re not after using GPT-3’s output as fact or as a finished result.
  • Are all of us building notes collections, especially those mimicking Luhmann as if he were the originator of such systems of note taking, actually better off learning to prompt and interrogate GPT-3?
  • Karlsson writes about treating GPT-3 as an interface to the internet, which allows using it as a research assistant. In a much more specific way than he describes, this is what Elicit, the tool I just mentioned here, does, also based on GPT-3. You give Elicit your research question as a prompt and it will come up with relevant papers that may help answer it.

On first reading this is like opening a treasure trove, albeit a booby-trapped one. I need to go through this in much more detail and follow up on sources and associations.
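To make the prompt design idea above a bit more concrete, here is a minimal sketch of what an ‘interview’-style prompt might look like before it is sent to a model. Everything here is my own illustration, not Karlsson’s code: the function name, the persona and the wording of the template are all made up for the example.

```python
# Sketch: composing an "interview"-style prompt for a large language model.
# The template, function name and example values are illustrative only.

def interview_prompt(persona: str, topic: str, question: str) -> str:
    """Build a prompt that asks the model to answer in a given persona."""
    return (
        f"You are {persona}. Answer the interviewer's question about "
        f"{topic} in the first person, in two or three sentences.\n\n"
        f"Interviewer: {question}\n"
        f"{persona}:"
    )

prompt = interview_prompt(
    persona="an impersonation of Steve Jobs",
    topic="product design",
    question="What makes a product feel inevitable?",
)
print(prompt)
```

The point is that the human-authored part, the framing and the question, is where the intention lives; the model merely continues the text after the final colon. Varying the persona and the framing is exactly the kind of ‘interviewing’ Webb and Karlsson describe.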

Some people already do most of their learning by prompting GPT-3 to write custom-made essays about things they are trying to understand. I’ve talked to people who prompt GPT-3 to give them legal advice and diagnose their illnesses. I’ve talked to men who let their five-year-olds hang out with GPT-3, treating it as an eternally patient uncle, answering questions, while dad gets on with work.

Henrik Olof Karlsson

Bookmarked Elicit.org

A while ago I mentioned Research Rabbit here as a tool to find research papers based on the ones already in my collection (e.g. through syncing with Zotero). Last week I created an account at Elicit, a natural language processing tool that finds relevant papers for you based on a specific research question you give it to work with (although it can also take your own collection of papers as a starting point). My first attempt after creating an account yielded very interesting suggestions. I will certainly try this out more, as a tool to assist literature review.

I found Elicit because Maggie Appleton’s feed told me she’s joining the company, Ought, that created Elicit.

Elicit is a research assistant using language models like GPT-3 to automate parts of researchers’ workflows. Currently, the main workflow in Elicit is Literature Review. If you ask a question, Elicit will show relevant papers and summaries of key information about those papers in an easy-to-use table.

Elicit FAQ