On the Obsidian forum I came across an intriguing post by Andy Matuschak. Matthew Siu and Andy have made an Obsidian plugin to help with sensemaking and they are looking for people with use cases to test it out.
I filled out the survey saying I had a large variety of notes about EU data law, digital ethics, and (data) governance, which I need to make sense of to guide public sector entities. They asked about online traces of me as well. Soon Matthew reached out and we decided on a time for a call.

And that is how I ended up working in Obsidian for an hour while Matthew and Andy were watching my shared screen. Sort of like how I once watched Andy work through his notes after reading a book. They’re on the US west coast, so with the nine-hour time difference it was 22:00-23:00 here, which, plus my cold, meant I wasn’t as focused as earlier in the day. It also feels slightly odd to me having people watch me do actual work.

Because that was what I did, doing some actual work. Using notes from several conversations earlier this week, plus EU legal texts and EU work plans, and notes from workshop output from over a year ago, I was working towards the right scope of a workshop to be held early March.

The plugin I tried out is called the Obsidian reference plugin.
It allows you to select something in one note and paste it into another. The pasted snippet links back to the source, is uneditable where you paste it, and is marked where you copied it from. When you hover over it, it previews the snippet in its original context; when you click it, the source opens/focuses in another tab. It seems a simple thing, similar to block transclusion/references, yet it still had some interesting effects:

  • Initially I saw myself using it to cut and paste things from different notes together in a new note. This is a bit like canvassing, but solely in text, and focused on snippets rather than full notes.
  • The snippets you paste aren’t editable, and the idea is that you paraphrase them, rather than use them as-is like in block transclusion. I did a bit of that paraphrasing, but not a lot; it was more like gathering material only. Perhaps that was because I was bringing together parts of my own (conversation) notes. I can see that when going through the annotations of a source text, this can be a second step: thinking highlights and annotations through, remixing them, and coming up with some notions to keep (see the second-to-last bullet).
  • It was easy to bring in material from a range of notes, without it becoming hard to keep an overview of what came from where. This is I think key when comparing different inputs and existing own notes.
  • Once I was interacting with the collected material, my use of additional snippets from other notes shifted: I started to use them inline, as part of a sentence I was writing. This resembles how I currently use titles of my main notions, they’re sentences too that can be used inside another sentence, as a reference inside a flowing text rather than listed at the end. I often do this because it marks the exact spot where I think two notions link. This means using smaller snippets (part of a phrase), and it is possible because the reference to the source is kept, and its context immediately accessible through hovering over it.
  • Discussing this effect with Matthew and Andy I realised another use case for this plug-in is working with the content of the core of my conceptual notes (that I call Notions) inside project or work notes. Now that reference is only to Notions as a whole. Adding a snippet makes a qualitative difference I want to explore.
  • You can collapse the snippets you create, but I didn’t do that during the hour I let Matthew and Andy watch me work. I can imagine doing that if I’m working through a range of snippets to paraphrase or use. I can see this being useful when for instance collating results from in-depth interviews. For my World Bank data readiness assessments the report was based on snippets from some 70 (group) interviews. A lot of material that I would mine along the lines of ‘what was said about X across all these conversations’, or ‘what assumptions are voiced in these interviews regarding Y’.
  • I spent the hour working with notes mainly from conversations, which are often pseudo-verbatim with my associations and questions I had during the conversation mixed in. Reading old notes often allows me to be ‘transported’ back into the conversation, its setting etc in my memory. Being able to glance at a snippet’s context from conversation notes as I work with it, and getting transported back into a conversation, felt like a rich layer of potential meaning being made easily available.
  • What I created in the hour was something I otherwise likely wouldn’t have. I was able to build, or at least start, a line of detailed argumentation for the scope of the March workshop I was working on, as well as a line of argumentation to be used within that workshop to show participants why taking EU developments into account is key when working on a regional or local issue with data. It was more explicit: I think I might otherwise have come up with a ‘result’ rather than the path to that result. ‘Thinking on paper’, in short. Useful stuff.
  • Reflecting on all this afterwards before falling asleep, I realised that a key way to use this is connected to the video I linked to above in which Andy gathers his thoughts and notes after reading a book: reflecting on an article or book I just read. A key part of the work there is seeking out the friction with previous reading and Notions. Not just to work with the annotations from a book as-is, but also the messy juxtaposition and integration with earlier notes. Then bringing in snippets from here and there, paraphrasing them into some sort of synthesis (at least one in my mind) is valuable. Collapsing of snippets also plays a role here, as you work through multiple annotations and ‘confrontations’ in parallel, to temporarily remove them from consideration, or as a mark of them having been used ‘up’.
  • Once you delete a snippet, the marking at its source is also removed, so if a link to source is important enough to keep you need to do that purposefully, just as before.

Bookmarked A quick survey of academics, teachers, and researchers blogging about note taking practices and zettelkasten-based methods by Chris Aldrich

Chris Aldrich provides a nice who’s who around studying note taking practices. There are some names in here that I will add to my feeds. I will also need to go through the reading list, with an eye on practices that may fit my way of working. Perhaps one or two names are relevant for the upcoming PKM summit in March too.

Chris actively collects historical examples of people using index card systems or other note taking practices for their personal learning and writing. Such as his recent find of Martin Luther King’s index of notes. If you’re interested in this, his Hypothes.is profile is a good place to follow for more examples and finds.

I thought I’d put together a quick list focusing on academic use-cases from my own notes

Chris Aldrich

In reply to Creating a custom GPT to learn about my blog (and about myself) by Peter Rukavina

It’s not surprising that GPT-4 doesn’t work like a search engine and has a hard time surfacing factual statements from source texts. Like one of the commenters I wonder what that means for the data analysis you also asked for. Perhaps those too are merely plausible, but not actually analysed. Especially the day of the week thing, as that wasn’t in the data, and I wouldn’t expect GPT to determine all weekdays for posts in the process of answering your prompt.

I am interested in doing what you did, but with 25 years of notes and annotations, and with a different model with fewer ethical issues attached. To have a chat about my interests and the links between things. Unlike the fact-based questions Peter asked the tool, that doesn’t necessarily need the answers to be correct, just plausible enough to surface associations. Such associations might prompt my own thinking and my own searches working with the same material.

It also makes me wonder whether what Wolfram Alpha is doing these days could play a role in your own use of GPT+, as they are all about interpreting questions and then giving the answer directly. There’s a difference between things that face the general public, and things that are internal or even personal tools, like yours.

Have you asked it things based more on association yet? Like “based on the posts ingested what would be likely new interests for Peter to explore” e.g.? Can you use it to create new associations, help you generate new ideas in line with your writing/interests/activities shown in the posts?

So my early experiments show me that as a data analysis copilot, a custom GPT is a very helpful guide… In terms of the GPT’s ability to “understand” me from my blog, though, I stand unimpressed.

Peter Rukavina

In three weeks, on 18 December, the PKM / Obsidian users meet-up takes place! The Digitale Fitheid community and the PKM group of the Dutch association of information professionals (KNVI) are the hosts. From 19:00 at S2M Utrecht we will dive into how we each do personal knowledge management, and how each of us does or does not yet implement that in Obsidian. Seeing and discussing each other’s ways of working is always enormously inspiring, and yields new ideas for tweaking your own PKM flow and sharpening your Obsidian tooling.

We want to take plenty of time to show each other things. We will therefore use at least two projection screens side by side, so that we can compare things and more people can show something in less time.

The guiding question throughout is “How do you do X in your PKM system, and how have you implemented that in your Obsidian set-up?”
Where X can be a topic such as:

  • how do you search and find things in your PKM system?
  • where do you draw the boundaries of your PKM system: does productivity/GTD belong in it, or only learning? Is your PKM flow generic, or aimed at specific themes?
  • which outputs / results do you work towards in your PKM (writing, presenting, creative ideas etc.)?

And around that there are interesting themes such as:

  • Which transitions from analogue to digital and vice versa are part of your approach? What role does analogue play in your learning?
  • Which visual elements play a role in your PKM system?

Finally, because Obsidian works on local files, another topic is:

  • How do you use those local files from other programs or workflows, outside of Obsidian?

We ask you to prepare something from your PKM system in Obsidian that you would like to show. At the start we will take stock of who wants to see or show particular examples. From that overview of topics we will get going.

You can register for free at Digitale Fitheid. See you on 18 December!

I realised I had an ical file of all my appointments from the period I used Google Calendar, from January 2008 when I started as an independent consultant until February 2020 when I switched to the calendar in my company’s NextCloud.
I never searched through that file, even though I sometimes wonder what I did at a certain moment in that period. After a nudge by Martijn Aslander, who wrote on a community platform we both frequent about backfilling his daily activities into Obsidian, for instance based on his photos of a day through the years in his archive, I thought to import that ical file and turn it into day logs listing my appointments for each date.

I tried to find some ready-made parsers or libraries I could use in PHP, but most of what I could find was aimed at importing a live calendar rather than an archived file, and none of them was aimed at outputting that file in a different format. Looking at the ical file I realised that making my own simple parser should be easy enough.

I wrote a small PHP script that reads the ical file line by line until it finds one that says BEGIN:VEVENT. It then reads the lines until it finds the closing line END:VEVENT, and interprets the information between those lines, lifting out the date, location, name, and description while ignoring any other information.
After each event it finds, it writes to a file ‘Daylog [date].md’ in a folder ./year/month (creating the file, or appending the event as a new line if the file exists). It uses the format I use for my current Day logs.
This repeats until it has processed all 4,714 events in my calendar from 2008 to 2020.
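The loop described above is simple enough to sketch. My script was in PHP, but for illustration here is a minimal Python sketch of the same logic; the function names and the exact path layout are my own choices, and for brevity it only handles the standard ical properties mentioned above (DTSTART, SUMMARY, LOCATION, DESCRIPTION) and ignores ical line folding, which a real archive may contain:

```python
from pathlib import Path

def parse_vevents(lines):
    """Collect the fields of each VEVENT block from the lines of an ical file."""
    events, current = [], None
    for raw in lines:
        line = raw.strip()
        if line == "BEGIN:VEVENT":
            current = {}                    # start collecting a new event
        elif line == "END:VEVENT":
            if current is not None:
                events.append(current)      # event complete, keep it
            current = None
        elif current is not None and ":" in line:
            key, value = line.split(":", 1)
            key = key.split(";", 1)[0]      # drop parameters like ;TZID=...
            if key in ("DTSTART", "SUMMARY", "LOCATION", "DESCRIPTION"):
                current[key] = value
    return events

def daylog_path(event, root="."):
    """Build the 'Daylog [date].md' path in a ./year/month folder."""
    date = event["DTSTART"][:8]             # the YYYYMMDD prefix of the timestamp
    return Path(root) / date[:4] / date[4:6] / f"Daylog {date}.md"
```

Writing the day logs is then a matter of looping over `parse_vevents(...)`, opening each `daylog_path(event)` in append mode (creating parent folders as needed), and writing one formatted line per event.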

A screenshot of all the folders with Daylogs created from 2008-2020

Screenshot of the newly created Daylog for 20 February 2008, based on the appointments I had that day, taken from the calendar archive. This one mentions a preparatory meeting for the open GovCamp I helped organise that year in June, which kicked off my work in open data since then.

Received this yesterday. It’s a 1987 collection of interviews with German sociologist Niklas Luhmann, on the occasion of his 60th birthday. I bought it after I saw Chris Aldrich sharing some annotations.

Started exploring. The context is important to take into account: 1980s Germany, after ’68 and before the Wall came down, with the contrast between Habermas and Luhmann, both being ‘famous’ in left and intellectual circles at the same time.