Last night was the sixth meet-up of Dutch-speaking Obsidian users. Like last December's edition, this meet-up took place under the banner of the Digitale Fitheid community and the KNVI (Koninklijke Nederlandse Vereniging van Informatieprofessionals, the Royal Dutch Association of Information Professionals).
Last time I was one of the facilitators; this time facilitation was in the hands of Martijn Aslander and Lykle de Vries. That gave me the opportunity to take part more substantively, which was pleasant.

The recipe was the same as what Marieke van Vliet and I improvised last time: at the start, attendees each suggested a topic, and then people took turns picking something from the list (but not their own topic). This way a diverse list of topics comes about, and you ensure that a broader group gets to speak.

Someone asked whether you can also run Obsidian from a USB stick: keep your vault on a stick, and then open it, including all plugins etc., on systems that have Obsidian installed. I suggested trying it on the spot, and the answer seems to be yes. It reminded me of the wiki-on-a-stick experiments I did long ago around the ‘patchwork portal’, where wikis with a small local web server were handed out on sticks in places that, back in 2005, had little or no internet connection.

I myself was curious whether, following the PKM Summit in March, people have started doing more with the visual techniques that Zsolt Viczián showed there with his Excalidraw plugin. In particular I was interested in whether people use files as text and as a visual element at the same time (explained here by Nicole van der Hoeven). Two participants showed some of what they do. I have set up a hotkey for switching between text and visual, but I didn't see them use one, which tells me they rarely make that switch. I will write about this in more detail myself, with two recent examples of how it was valuable to me and felt pleasant and frictionless.

Maarten den Braber was one of the attendees who showed how he automates certain parts of his workflow, from the same principle I use: don't do things that are unique to Obsidian; you must always be able to work with your plain text files as well. He showed the PDF++ plugin, which I should definitely investigate and compare with how I currently use Zotero.

Muhammed Kilic showed how he uses the same tags, links and indexes across multiple apps. He mentioned that I do this too in my hypothes.is annotations (including links to existing notes, tasks and tags, so that it is immediately in context in Obsidian), but showed that he also does this in Zotero. I don't do that in my annotations there, and when he showed it I wondered why not, actually. I do link from Obsidian to Zotero, but in my annotations in Zotero I weave in my notes and tags much less. Something to think about and try out.

Finally, I noticed that in a group conversation like this it is hard to more or less routinely also show what you are describing. You have to imagine what someone actually does, instead of seeing it. Do we feel vulnerable showing our tools and ways of working? The number of times ‘tell’ was supported by ‘show’ was therefore limited, which I find a pity. For a next time it would also be fun to see something of everyone's entire implementation, and ask questions about that, instead of talking about separate aspects.

Bookmarked Project Tailwind by Steven Johnson

Author Steven Johnson has been working with Google and developed a prototype called Tailwind. Tailwind, an ‘AI first notebook’, is intended to bring an LLM to your own source material, which you can then use to ask questions of the sources you give it. You point it to a set of resources in your Google Drive, and what Tailwind generates will be based on just those resources. It also shows you the specific source of the things it generates. Johnson explicitly places it in the Tools for Thought category. You can join a waiting list if you're in the USA, and a beta should be available in the summer. Is the USA limit intended to reduce the number of applicants, I wonder, or a sign that they're still figuring out things like GDPR for this tool? Tailwind is prototyped on the PaLM API, though, which is now generally available.

This, going by its description, gets to where it becomes much more interesting to use LLM and GPT tools: a localised (though not local, it lives in your Google footprint) tool, where the user defines the corpus of sources used, with traceable results. As the quote below suggests, a personal research assistant. Not just for my entire corpus of notes, as I describe in that linked blogpost, but also for a subset of notes on a single topic or project. I think more tools like these will be coming in the next months, some of which will likely be truly local and personal.

On the Tailwind team we’ve been referring to our general approach as source-grounded AI. Tailwind allows you to define a set of documents as trusted sources …, shaping all of the model’s interactions with you. … other types of sources as well, such as your research materials for a book or blog post. The idea here is to craft a role for the LLM that is … something closer to an efficient research assistant, helping you explore the information that matters most to you.

Steven Johnson

I’ve now added over 100 annotations using Hypothes.is (h.), almost all within the last month. This includes a few non-public ones. Two weeks ago I wrote down some early impressions, to which I’m now adding some additional observations.

  1. 100 annotations (in a month) don't seem like a lot to me, if h. is a regular tool in one's browsing habits. H. says they have 1 million users, who have made 40 million annotations to over 2 million articles (their API returns 2,187,262 results as I write this). H. has been in existence for a decade. These numbers average out to 40 annotations to 2 articles per user. This suggests to me that the mode is 1 annotation to 1 article by a user, and then silence. My 100 annotations spread over 30 articles, accumulated in a handful of weeks, are then already well above average, even though I am a new and beginning user. My introduction to h. was through Chris Aldrich, whose stream of annotations I follow daily with interest. He recently passed 10,000 annotations! That's 100 times as many as mine, and apparently also an outlier to the h. team itself: they sent him a congratulatory package. H.'s marketing director has 1,348 public annotations over almost 6 years, its founder 1,200 in a decade. Remi Kalir, co-author of the (well worth reading!) Annotation book, has 800 in six years. That does not seem much for what I would expect to be power users. My blogging friend Heinz has some 750 annotations in three years. Fellow IndieWeb netizen Maya some 1,800 in a year and a half. Those last two numbers, even if they differ by a factor of 5 or so in average annotations per month, feel like the regular range I'd expect for routine users.
  2. The book Annotation I mentioned makes much of social annotation, where distributed conversations arise beyond the core interaction of an annotator with an author through an original text. Such social annotation requires sharing. H. provides that sharing functionality and positions itself explicitly as a social tool ("Annotate the web, with anyone, anywhere", "Engage your students with social annotation"). The numbers above show that such social interaction around an annotated text will be very rare in the public-facing part of h.; in the closed (safer) surroundings of classroom use, interaction might be much more prominent. Users like me, or Heinz, Maya and Chris, whom I named/linked above, must then be motivated by something other than the social aspects of h. If and when such interaction does happen (as it tends to do if you mutually follow each other's annotations), it is a pleasant addition, not h.'s central benefit.
  3. What is odd to me is that when you do engage in social interaction on h., that interaction cannot be found through the web interface listing my annotations. Once I comment, it disappears out of sight, unless I remember what I reacted to and go back to that other user's annotation directly, to find my comment underneath. It does show up in the RSS feed of my annotations, and my Hypothes.is-to-Obsidian plugin also captures it through the API. Just not in the web interface.
  4. Despite the social nature of h., discovery is very difficult. Purposefully ‘finding the others’ is mostly impossible. This is an effect both of the web interface's functionality and, I suspect, of the relatively sparse network of users (see observation 1). There's no direct way of connecting with or searching for users. The social object is the annotation, and you can find others only through annotations you encounter. I've searched for tags and terms I am interested in, but those do not surface regular users easily. I've collated a list of a dozen currently active or somewhat active annotators, and half a dozen who used to be or are sporadically active. I also added annotations of my own blogposts to my blog, and I actively follow (through an RSS feed) any new annotation of my blogposts. If you use h., I'd be interested to hear about it.
  5. Annotations are the first step towards getting useful insights into my notes. This makes it a prerequisite to be able to capture annotations in my note making tool Obsidian; otherwise Hypothes.is is just another silo you're wasting time on. Luckily h. isn't meant as a silo and has an API. Using the API, the Hypothes.is-to-Obsidian plugin makes all my annotations available to me locally. However, what I do locally with those notes does not get reflected back to h., meaning I can't really work through annotations locally until I've annotated an entire article or paper in the browser; otherwise sync issues may occur. I also find that having only the individual annotations (including the annotated text) in one file, and not the full text (the stuff I didn't annotate), feels impractical at times, as it cuts away a lot of context. That context is easily retrievable by visiting the URL now, but maybe not over time (so I save web archive links too, as an annotation). I also grab a local markdown copy of full articles if they are of higher interest to me. Using h. in the browser creates another inbox in this regard (having to return to a thing to finish annotation or for context), and I obviously don't need more inboxes to keep track of.
  6. In response to not saving entire articles in my notes environment, I have started marking online articles I haven't annotated yet with at least a note that contains the motivation and first associations I normally save with a full article. This goes in the same spot as where I add a web archive link, as a page note. I've tried that in recent days and it seems to work well. That way I do have a general note in my local system that contains the motivation for looking at an article in more detail.
  7. The API also supports sending annotations and updates to h. from e.g. my local system. Would this be better for my workflow? Firefox and the h. add-on don't always work flawlessly: not all documents can be opened, or the form stops working until I restart Firefox. This too points in the direction of annotating locally and sending annotations to h. for sharing through the API. Is anyone already doing this? Built their own client, or using h. ‘headless’? Does anyone run their own h. instance locally? If I could send things through the API, that might also include the Kindle highlights I pull into my local system.
  8. In the same category of integrating h. into my pkm workflows falls the interaction between h. and Zotero, especially now that Zotero has its own storage of annotations of PDFs in my library. It might be of interest to be able to share those annotations, for a more complete overview of what I'm annotating. Either directly from Zotero, or by way of my notes in Obsidian (Zotero annotations end up there in the end).
  9. These first 100 annotations I made in the browser, using an add-on. Annotating in the browser takes some getting used to, as I normally try to get myself out of my browser more. I don't always fully realise I can return to an article to annotate it later. Any time the sense surfaces that I have to finish annotating an article, that is friction I can do without. Apart from that, it is a pleasant experience to annotate like this. And that pleasure is key to keep annotating. Being able to better integrate my h. use with Obsidian and Zotero would likely increase the pleasure of doing it.
  10. Another path of integration to think about is sharing annotated links from h. to my blog or the other way around. I blog links with a general annotation at times (example). These bloggable links I could grab from h. where I bookmark things in similar ways (example), usually to annotate further later on. I notice myself thinking I should do both, but unless I could do that simultaneously I won’t do such a thing twice.
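Several of the observations above (5 and 7 especially) revolve around the Hypothes.is API. As a minimal sketch of what the read side of that looks like: the `/api/search` endpoint, its `user` and `limit` parameters, and the Bearer token for private annotations are from h.'s public API; the `to_markdown` note layout below is my own assumption, not how the Hypothes.is-to-Obsidian plugin actually formats its notes.

```python
import json
import urllib.parse
import urllib.request

API = "https://api.hypothes.is/api"


def search_url(username: str, limit: int = 50) -> str:
    """Build a /api/search URL for one user's annotations."""
    params = urllib.parse.urlencode({
        "user": f"acct:{username}@hypothes.is",
        "limit": limit,
    })
    return f"{API}/search?{params}"


def fetch_annotations(username: str, token: str = "") -> list:
    """Fetch one page of annotations; an API token also returns private ones."""
    req = urllib.request.Request(search_url(username))
    if token:
        req.add_header("Authorization", f"Bearer {token}")
    with urllib.request.urlopen(req) as resp:
        return json.load(resp)["rows"]


def to_markdown(row: dict) -> str:
    """Render one annotation (a 'row' from /api/search) as a small markdown note."""
    # The quoted passage lives in a TextQuoteSelector inside the target.
    quotes = [
        sel["exact"]
        for target in row.get("target", [])
        for sel in target.get("selector", [])
        if sel.get("type") == "TextQuoteSelector"
    ]
    lines = [f"## Annotation of {row['uri']}"]
    lines += [f"> {q}" for q in quotes]
    if row.get("text"):
        lines.append(row["text"])  # my own comment on the quote
    if row.get("tags"):
        lines.append(" ".join(f"#{t}" for t in row["tags"]))
    return "\n".join(lines)
```

The write direction that observation 7 asks about exists in the same API: it accepts a POST to `/api/annotations` with a JSON body and the same Bearer token, so a local, ‘headless’ client that sends annotations to h. for sharing seems feasible in principle.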

Attempting to understand the ‘Noosphere’ and Subconscious tooling that Gordon Brander is developing results in several questions. Brander proposes a new ‘low level infrastructure’ (subconscious) for sharing stuff across the internet, which should result in us thinking together on a global scale (the noosphere).

I’ve followed the recent Render conference on ‘tools for thought’ where Gordon Brander presented Noosphere and Subconscious. In the wake of it I joined the Discord server around this topic, and read the ‘Noosphere Explainer’. Brander’s Render talk roughly follows that same document.

Brander says: the internet is already a tool for thought, so we should make it better at it. The tools at our disposal for dealing with this voluminous new information environment haven't reached their potential yet. Learning to think together at planetary scale is a needed ingredient for addressing global issues. There are many interesting tools out there, but they're all SaaS silos. They're silos because of the same-origin policy, which prevents cross-site/host/domain/port sourcing of material. Subconscious is meant to solve that by providing a ‘protocol for thoughts’.

This leaves me with a range of questions.

  • Subconscious is meant to get around the same-origin policy. SOP, however, seems to be a client-enforced (i.e. browser-enforced) thing, focused on (Java)scripts, and it otherwise e.g. ignores HTML. Apps are, or can be, viewers just like browsers are viewers. So why isn't the web suitable, with the app or a tweaked browser on top? Why a whole new ‘infrastructure’ over the internet? That sounds like it wants to solve a whole lot more than removing the same-origin bias towards silos. What are those additional things?
  • The intended target is to make the internet a better tool for thought. Such thoughts seem to be text-based, so what does Subconscious do, in contrast to currently shared text-based thoughts, that e.g. the web doesn't?
  • Assuming Subconscious does what it intends, how do we get from a ‘low level infrastructure’ to the stated overarching aim of thinking together globally? I see texts, which may or may not be expressed thoughts, being linked and shared like web resources; how do we get to ‘thinking together’ from there? The talk at Render paid tribute to that aim at the beginning but doesn't show how it would be done (and the invocation of the Xanadu project at the start might well be meaningful in that sense), not even in an ‘and then the magic happens to get to the finish line’ fashion. Is the magic supposed to be emergent, as I and others assumed the web and social software would be 20-30 years ago? Is it enough to merely have a ‘protocol for thoughts’? What about non-infrastructure decision and consensus building tools like Liquid Feedback or Audrey Tang's quadratic voting in vTaiwan? Those are geared to action, and seem more immediately useful for solving global issues, don't they?

I’ll be hanging out in the Discord server, you can too (invite link), and going through Gordon Brander’s earlier postings, to see if I can better understand what this is about.