"Innovations in personal knowledge management are few and far between" is a phrase that has circulated in my mind these past three weeks. Chris Aldrich expressed that notion in his online presentation at the PKM Summit, while taking us through an interesting timeline of practices related to personal knowledge management. Because his talk followed that timeline, it didn't highlight the key innovations as an overview in itself. I had arranged the session because I wanted to raise awareness that many practices we now associate with the 20th century or with digital origins are in fact much older. We tend to forget we're standing on many shoulders, taking a recent highly visible example as the original source and as our historic horizon. Increased historic awareness is, however, something different from stating there has been hardly any notable innovation in this space over the course of millennia. That statement leads to interesting questions: what are the current adjacent possible innovations, and which branches might be developed further?

It all starts, however, with a question I have for Chris: what are the innovations you were thinking of when you said that?

Below I list some of the things that I think are real innovations in the context of personal knowledge management, in roughly chronological order. This is a list off the top of my head and from my notes, plus some very brief searching on whether what I regard as the origin of a practice is actually a more recent incarnation. I have left out most things regarding oral traditions, as that is not the context of my own practices.

  • Narration, prehistory
  • Songlines, prehistory
  • Writing, ending prehistory
  • Annotation, classical antiquity
  • Loci method, memory palaces, classical antiquity
  • Argument analysis, classical antiquity
  • Tagging, classical antiquity
  • Concept mapping, 3rd century
  • Indexes, Middle Ages
  • Letterpress printing, Renaissance
  • Paper notebooks, Renaissance
  • Commonplace books, Renaissance
  • Singular snippets / slips, 16th century
  • Stammbuch/Album Amicorum, 16th century
  • Pre-printed notebooks, 19th century
  • Argument mapping, 19th century
  • Standard sized index cards, 19th century
  • Sociograms/social graphs, early 20th century
  • Linking, 20th century (predigital and digital)
  • Knowledge graphs, late 20th century (1980s)
  • Digital full text search, late 20th century

Chris, what would be your list of key innovations?


A pkm practitioner working on his notes. Erasmus as painted by Holbein, public domain image.

Last Wednesday I took part in a RADIO webinar on personal knowledge management (PKM). I talked about how I have long been doing PKM for myself. The webinar was titled 'Become a PKM champion'. I started by saying that you don't become a PKM champion relative to others, but relative to yourself. You help yourself learn faster and put previously formulated ideas and insights to use in your work. When you see the result of someone's PKM system that has existed for years, it is easy to think that building such a large construct for yourself is impossible, or costs too much time. The point, however, is that no extensive PKM system was ever built from a blueprint. It emerged from the prolonged application of small actions. Small actions that have value for you from the very first time you do them. A long walk is made up entirely of simple footsteps.

During the webinar we ran just a little short of time to make that point again at the end: starting small and easy is how you make PKM valuable for yourself from the very first moment. Those small things make you a PKM champion. Your PKM system grows by itself as long as it is useful for your learning and knowledge work.

I want to point out two things from the slightly longer list I mention below, with which I had wanted to end on Wednesday. These two things are, I think, of importance and use from the very start:

  1. When you save something, always note in your own words why you find what you saved interesting: your first association or thought about it, what surprises you or appeals to you in it. This makes you a curator of information rather than a collector or hoarder; it makes the difference between information abundance and information overload.
  2. When you look for information, always start in your own notes and saved material. (Moreover, you can search in your own words, because you have added your own annotation to everything thanks to the action above.)

The first small method helps your future self understand what you saw in the saved information and how it may be of use. The second ensures that you actually use what you have saved. As Lykle de Vries said during the webinar, it validates your effort to save things.
These are two very practical things you can put into practice immediately.
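The second small method needs nothing more than a folder of plain-text notes. As a minimal sketch, assuming your notes are Markdown files in a single folder (the function name and folder layout here are hypothetical, not tied to any particular tool):

```python
from pathlib import Path

def search_notes(vault: Path, query: str) -> list[tuple[str, str]]:
    """Case-insensitive search across all Markdown notes in a folder.

    Returns (filename, matching line) pairs, so you land in your own
    notes and annotations first, before searching the wider web.
    """
    hits = []
    for note in sorted(vault.rglob("*.md")):
        for line in note.read_text(encoding="utf-8").splitlines():
            if query.lower() in line.lower():
                hits.append((note.name, line.strip()))
    return hits
```

Because every saved item carries an annotation in your own words (the first method above), searching with your own phrasing tends to surface it.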

The slightly longer list of tips also contains some more abstract items, about attitude and reflecting on your ways of working:

  • See PKM as a source of autonomy, and knowledge work as your craft
  • Have a logical flow as a system, start small
  • Make access easy, in one place, with your surprisal as the annotation
  • Start everything by searching your notes, do your thinking in your notes
  • Let structure emerge, as a product of your thinking
  • Remove the right friction
  • Make sharing easy
  • Be kind to your messy self: it's a feature.

As an illustration, here are a few images of how I annotate things I save for myself. You see two variants: examples from my note-making tool (Obsidian), and examples from the online annotation tool Hypothes.is (which ultimately end up automatically in my note-making tool as well). You see how I formulate why I am saving something, and you also see links to earlier notes, keywords, or names of authors. All of these help with finding it again when I have a related question later. Note also how the description does not necessarily carry meaning for anyone but me. These are my interpretations of something, and that makes it easier for me to find it again. The P in PKM stands for personal, after all. Of course, that first annotation is in many cases not the final stage. Further processing follows, in which I lift the most interesting things out of the saved material and turn them into separate notes in my own words. But much of it also simply stays in this state until I encounter it again because of a question or need. That, apparently, is the right moment to process that information further.

I will describe the full story I told in the webinar in another blog post.

On 22 and 23 March, roughly in a month, the first European personal knowledge management (pkm) summit will take place in Utrecht, the Netherlands.
Over two days a varied line-up of PKM practitioners will present, show and discuss how they shape their personal learning and information strategies.

Personal knowledge management is enjoying a wave of renewed attention due to a new group of note-making tools that has emerged in the past few years (such as Roam, Logseq, Obsidian, Notion et al). But personal knowledge management is way older. People generally notice things around them and strive to make sense of the world they live in, whether on a highly practical level or a more abstract one. The urge behind PKM is therefore deeply human. The methods and availability of tools have changed over time, as has the perspective on what constitutes personal knowledge.

The line-up contains well-known and less well-known practitioners of personal knowledge management. I had the pleasure of finding and approaching people to participate as speakers or as workshop hosts, including experienced voices like Harold Jarche. Next to invited speakers and hosts, there will be ample time on the schedule to do your own impromptu session, unconference style. The program will be shaped and finalised in the coming week or so.

The event is organised by the Dutch community 'Digital Fitness' and is a non-profit effort. There is space for at most 200 people, and there are still tickets available. Tickets are 200 Euro for the two-day event. The venue, Seats2Meet, is a short walk from Utrecht Central Station.

I hope to see you there!

On the Obsidian forum I came across an intriguing post by Andy Matuschak. Matthew Siu and Andy have made an Obsidian plugin to help with sensemaking and they are looking for people with use cases to test it out.
I filled out the survey saying I had a large variety of notes about EU data law, digital ethics, and (data) governance, which I need to make sense of to guide public sector entities. They asked about online traces of me as well. Soon Matthew reached out and we decided on a time for a call.

And that is how I ended up working in Obsidian for an hour while Matthew and Andy watched my shared screen. Sort of like how I once watched Andy work through his notes after reading a book. They're on the US west coast, so with the nine-hour time difference it was 22:00-23:00 here, which, plus my cold, meant I wasn't as focused as earlier in the day. It also feels slightly odd to me to have people watch me while doing actual work.

Because that was what I did, doing some actual work. Using notes from several conversations earlier this week, plus EU legal texts and EU work plans, and notes from workshop output from over a year ago, I was working towards the right scope of a workshop to be held early March.

The plugin I tried out is called the Obsidian reference plugin.
It allows you to select something in one note and paste it into another. The pasted snippet links back to the source, is uneditable where you paste it, and is marked where you copied it. When you hover over it, it previews the snippet in its original context; when you click it, the source opens/focuses in another tab. It seems a simple thing, similar to block transclusion/references, yet it had some interesting effects:

  • Initially I saw myself using it to cut and paste things from different notes together into a new note. This is a bit like canvassing, but solely in text, and focused on snippets rather than full notes.
  • The snippets you paste aren't editable, and the idea is that you paraphrase them, rather than use them as-is like in block transclusion. I did a bit of that paraphrasing, but not a lot; it was more like gathering material. Perhaps that was because I was bringing together parts of my own (conversation) notes. I can see that when going through the annotations of a source text this can be a second step: thinking highlights and annotations through, remixing them, and coming up with some notions to keep (see the second to last bullet).
  • It was easy to bring in material from a range of notes, without it becoming hard to keep an overview of what came from where. This is I think key when comparing different inputs and existing own notes.
  • Once I was interacting with the collected material, my use of additional snippets from other notes shifted: I started to use them inline, as part of a sentence I was writing. This resembles how I currently use titles of my main notions, they’re sentences too that can be used inside another sentence, as a reference inside a flowing text rather than listed at the end. I often do this because it marks the exact spot where I think two notions link. This means using smaller snippets (part of a phrase), and it is possible because the reference to the source is kept, and its context immediately accessible through hovering over it.
  • Discussing this effect with Matthew and Andy I realised another use case for this plug-in is working with the content of the core of my conceptual notes (that I call Notions) inside project or work notes. Now that reference is only to Notions as a whole. Adding a snippet makes a qualitative difference I want to explore.
  • You can collapse the snippets you create, but I didn’t do that during the hour I let Matthew and Andy watch me work. I can imagine doing that if I’m working through a range of snippets to paraphrase or use. I can see this being useful when for instance collating results from in-depth interviews. For my World Bank data readiness assessments the report was based on snippets from some 70 (group) interviews. A lot of material that I would mine along the lines of ‘what was said about X across all these conversations’, or ‘what assumptions are voiced in these interviews regarding Y’.
  • I spent the hour working with notes mainly from conversations, which are often pseudo-verbatim with my associations and questions I had during the conversation mixed in. Reading old notes often allows me to be ‘transported’ back into the conversation, its setting etc in my memory. Being able to glance at a snippet’s context from conversation notes as I work with it, and getting transported back into a conversation, felt like a rich layer of potential meaning being made easily available.
  • What I created in the hour was something I otherwise likely wouldn't have. I was able to build, or at least start, a detailed line of argumentation for the scope of the March workshop I was working on, as well as a line of argumentation to use within that workshop to show participants why taking EU developments into account is key when working on a regional or local issue with data. All in a more explicit way: I think I might otherwise have come up with a 'result' rather than the path to that result. 'Thinking on paper', in short. Useful stuff.
  • Reflecting on all this afterwards before falling asleep, I realised that a key way to use this is connected to the video I linked to above in which Andy gathers his thoughts and notes after reading a book: reflecting on an article or book I just read. A key part of the work there is seeking out the friction with previous reading and Notions. Not just to work with the annotations from a book as-is, but also the messy juxtaposition and integration with earlier notes. Then bringing in snippets from here and there, paraphrasing them into some sort of synthesis (at least one in my mind) is valuable. Collapsing of snippets also plays a role here, as you work through multiple annotations and ‘confrontations’ in parallel, to temporarily remove them from consideration, or as a mark of them having been used ‘up’.
  • Once you delete a snippet, the marking at its source is also removed, so if a link to source is important enough to keep you need to do that purposefully, just as before.
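For comparison, the pasted snippets behave somewhat like Obsidian's built-in block references, where a `^block-id` at the end of a block marks the source and an embed pulls the block in read-only. The note names and block text below are made up for illustration:

```markdown
%% in source-note.md: a block marked with an id %%
The EU data strategy affects regional data governance. ^eu-scope

%% in workshop-note.md: an embed that pulls the block in read-only %%
![[source-note#^eu-scope]]
```

The plugin differs in that it also marks the spot you copied from, and removes that mark again when you delete the snippet.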

Bookmarked A quick survey of academics, teachers, and researchers blogging about note taking practices and zettelkasten-based methods by Chris Aldrich

Chris Aldrich provides a nice who’s who around studying note taking practices. There are some names in here that I will add to my feeds. Also will need to go through the reading list, with an eye on practices that may fit with my way of working. Perhaps one or two names are relevant for the upcoming PKM summit in March too.

Chris actively collects historical examples of people using index card systems or other note taking practices for their personal learning and writing. Such as his recent find of Martin Luther King’s index of notes. If you’re interested in this, his Hypothes.is profile is a good place to follow for more examples and finds.

I thought I’d put together a quick list focusing on academic use-cases from my own notes

Chris Aldrich

In reply to Creating a custom GPT to learn about my blog (and about myself) by Peter Rukavina

It's not surprising that GPT-4 doesn't work like a search engine and has a hard time surfacing factual statements from source texts. Like one of the commenters, I wonder what that means for the data analysis you also asked for. Perhaps those results too are merely plausible, but not actually analysed. Especially the day-of-the-week thing, as that wasn't in the data, and I wouldn't expect GPT to determine all the weekdays for posts in the process of answering your prompt.
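The day-of-the-week point is worth underlining: a weekday isn't stored in a post's metadata as such, but it is a trivial deterministic computation from the date, which is exactly the kind of step a language model may skip while still producing a plausible-sounding answer. A minimal sketch in Python (the dates are made up):

```python
from datetime import date

# Deriving the weekday from a post's date is a deterministic computation,
# not something to be recalled or guessed from the ingested text.
post_dates = [date(2023, 11, 6), date(2023, 11, 7), date(2023, 11, 12)]
weekdays = [d.strftime("%A") for d in post_dates]
# Monday, Tuesday, Sunday for the dates above
```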

I am interested in doing what you did, but with 25 years of notes and annotations, and with a different model that has fewer ethical issues attached. To have a chat about my interests and the links between things. Unlike the fact-based questions you asked the tool, that doesn't necessarily require the answers to be correct, just plausible enough to surface associations. Such associations might prompt my own thinking and my own searches while working with the same material.

It also makes me wonder whether what Wolfram Alpha is doing these days could play a role in your own use of GPT+, as they are all about interpreting questions and then giving the answer directly. There's a difference between things that face the general public, and things that are internal or even personal tools, like yours.

Have you asked it things based more on association yet? For example: "based on the posts ingested, what would be likely new interests for Peter to explore"? Can you use it to create new associations, to help you generate new ideas in line with your writing/interests/activities shown in the posts?

So my early experiments show me that as a data analysis copilot, a custom GPT is a very helpful guide… In terms of the GPT’s ability to “understand” me from my blog, though, I stand unimpressed.

Peter Rukavina