I’ve created a small Alfred workflow to find a note name (folderpath/filename.md) in my Obsidian vault and put it on my clipboard as a markdown link ([[filename]]). It’s available on GitHub now. If you use both Alfred and Obsidian (or some other markdown tool), it may be of use.
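The heart of it is simple. Below is a minimal Python sketch of the same idea, assuming a hypothetical vault location and plain substring matching; the published workflow on GitHub may do this differently.

```python
# Minimal sketch of the idea behind the workflow (not the published code):
# search an Obsidian vault for a note by name and put "[[name]]" on the
# macOS clipboard. Vault location and matching strategy are assumptions.
import subprocess
import sys
from pathlib import Path

VAULT = Path.home() / "Obsidian" / "Vault"  # hypothetical vault location


def find_note(query: str) -> str | None:
    """Return the name (without .md) of the first note matching the query."""
    query = query.lower()
    for path in VAULT.rglob("*.md"):
        if query in path.stem.lower():
            return path.stem
    return None


if __name__ == "__main__":
    name = find_note(" ".join(sys.argv[1:]))
    if name:
        # pbcopy puts the wikilink on the clipboard, ready to paste into hypothes.is
        subprocess.run(["pbcopy"], text=True, input=f"[[{name}]]", check=True)
        print(f"Copied [[{name}]]")
```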

The use case is this: I annotate web articles in hypothes.is. Those annotations end up in my file system in markdown, where I use them in Obsidian. Therefore it is useful to me to link to existing Obsidian notes while I’m annotating, and not leave that until I encounter the annotation in Obsidian again (see an example). That way an annotation is already linked to some existing material, and not a lonely note in my collection of clippings. Having links in annotations also increases the likelihood I will encounter them again naturally as I interact with my notions.

Until now I did this by searching in Obsidian and then copy-pasting the note name into hypothes.is, adding the [[ ]] mark-up for an internal link.

This morning Frank mailed me to ask about my links in annotations. While I answered his mail I realised it would be easy to put the search and paste into an Alfred workflow, so that I don’t need to leave the browser/hypothes.is interface to add a link to an Obsidian note. Thank you for the nudge Frank!

John Caswell writes about the role of conversation, saying "conversation is an art form we’re mostly pretty rubbish at". New tools that employ LLMs, such as GPT-3, can only be used well by those who learn to prompt them effectively. Essentially we’re learning to have a conversation with LLMs so that their output is usable to the prompter. (As I’m writing this my feedreader updates to show a follow-up post about prompting by John.)

Last August I wrote about articles by Henrik Olaf Karlsson and Matt Webb that discuss prompting as a skill whose importance is newly increasing.

Prompting to get a certain type of output instrumentalises a conversation partner, which is fine when using LLMs, but not in conversations with people. In human conversation the prompting is less about ensuring output that is useful to the prompter and more about assisting the other to express themselves as well as they can (meaning usefulness will be a guaranteed side effect if you are interested in your conversational counterparts). In human conversation the other is another conscious actor in the same social system (the conversation) as you are.

John takes the need for us to learn to better prompt LLMs and asks whether we’ll also learn how to better prompt conversations with other people. That would be great. In many conversations the listener listens less to the content of what others say and more for the right moment to jump in with what they themselves want to say. Broadcast driven versus curiosity driven. Me and you, we all do this. Getting consciously better at avoiding that common pattern is a win for all.

In parallel Donald Clark wrote that the race to innovate services on top of LLMs is on, spurred by OpenAI’s public release of ChatGPT in November. The race is indeed on, although I wonder whether those getting in the race all have an actual sense of what they’re racing and what they’re racing towards. The generic use of LLMs currently in the eye of public discussion might, I think, be less promising than gearing them towards specific contexts. Back in August I mentioned Elicit, for instance, which helps you kick off a literature search based on a research question. Other niche applications are sure to be interesting too.

The generic models are definitely capable of hallucinating in ways that reinforce our tendency towards anthropomorphism (which needs little reinforcement as it is). Very very ELIZA. Even if on occasion it creeps you out when Bing’s implementation of GPT declares its love for you and starts suggesting you don’t really love your life partner.

I associated what Karlsson wrote with the way one can interact with one’s personal knowledge management system, much as Luhmann described his note cards as a communication partner. Luhmann talks about the value of being surprised by whatever person or system you’re communicating with. (The anthropomorphism kicks in if, based on that surprise, we then ascribe intention to the system we’re communicating with.)

Being good at prompting is relevant in my work where change in complex environments is often the focus. Getting better at prompting machines may lift all boats.

I wonder if, as part of the race that Donald Clark mentions, we will see LLMs applied as personal tools. Where I feed a more open LLM like BLOOM my blog archive and my notes, running it as a personal instance (for which the full BLOOM model is too big, I know), and then use it to have conversations with myself. Prompting that system to have exchanges about the things I previously wrote down in my own words, with results that phrase things in my own idiom and style. Now that would be very interesting to experiment with. What valuable results and insight progression would it yield? Can I have a salon with myself and my system, and/or with perhaps a few others and their systems? What pathways into the uncanny valley will it open up? For instance, is there a way to radicalise yourself (like social media can) through the feedback loops of association between your various notes, notions and follow-up questions/prompts?



An image generated with Stable Diffusion from the prompt “A group of fashionable people having a conversation over coffee in a salon, in the style of an oil on canvas painting”, public domain

David Libeau keeps a list of small Mastodon tools. One of them is a scheduler for Mastodon posts. I can already schedule posts for Mastodon inside this site, by scheduling them here in WordPress and autoposting to my Mastodon account. Libeau’s scheduler, however, is a tiny tool one can self-host. I installed it locally on a webserver on my laptop and tested its use.

It works by making API calls to my (own) Mastodon instance. In your Mastodon settings, go to Developers, add an application, and copy its API key. Use that API key to log in to your Mastodon instance in the scheduler’s interface. That’s all. It currently doesn’t support adding media to posts, nor posting to multiple Mastodon profiles. Because of the latter I have two copies on my laptop’s webserver, one for my personal account and one for my work account.
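Under the hood this amounts to a single call to Mastodon’s statuses API with a scheduled_at parameter. A minimal sketch (the instance URL and token are placeholders, and this is not Libeau’s code):

```python
# A sketch of the kind of API call such a scheduler makes: POST /api/v1/statuses
# with a scheduled_at timestamp. Instance URL and token are placeholders.
from datetime import datetime, timedelta, timezone

import requests

INSTANCE = "https://example.social"      # your (own) Mastodon instance
ACCESS_TOKEN = "YOUR-APPLICATION-TOKEN"  # from the application you created in Settings


def schedule_post(text: str, when: datetime) -> dict:
    """Schedule a plain text post; Mastodon requires the time to be in the future."""
    response = requests.post(
        f"{INSTANCE}/api/v1/statuses",
        headers={"Authorization": f"Bearer {ACCESS_TOKEN}"},
        data={"status": text, "scheduled_at": when.isoformat()},
        timeout=10,
    )
    response.raise_for_status()
    return response.json()  # a ScheduledStatus object, not a published Status


if __name__ == "__main__":
    tomorrow = datetime.now(timezone.utc) + timedelta(days=1)
    print(schedule_post("Scheduled from my own little tool", tomorrow))
```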


A screenshot of the scheduler’s interface, showing a scheduled post.


A screenshot of a post made with the scheduler, the same as shown in the image above.

In reply to highlight.js, an extension to highlight text on web pages by James G.

Nice project, James! I’m not sure I get the distinction you make between this and an annotation extension, as highlighting is annotation too and the pop-up box even calls the highlights annotations. One question: do you apply the W3C Web Annotation Data Model recommendation? That would make highlighting with this potentially interoperable with e.g. Hypothes.is, or allow interaction with the Hypothes.is API further down the line.

I don’t presently have plans to expand this into an annotation extension, as I believe that purpose is served by Hypothesis. For now, I see this extension as a useful way for me to save highlights, share specific pieces of information on my website, and enable other people to do the same.

James G.
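For reference, this is roughly what a highlight looks like in the W3C Web Annotation Data Model I asked about, shown here as a Python dict with a made-up URL and text:

```python
# A rough example of a highlight in the W3C Web Annotation Data Model
# (identifier, source URL and quoted text are made up), printed as JSON.
import json

highlight_annotation = {
    "@context": "http://www.w3.org/ns/anno.jsonld",
    "id": "https://example.org/annotations/1",   # hypothetical identifier
    "type": "Annotation",
    "motivation": "highlighting",
    "target": {
        "source": "https://example.org/article.html",
        "selector": {
            "type": "TextQuoteSelector",
            "exact": "the passage that was highlighted",
            "prefix": "text just before ",
            "suffix": " text just after",
        },
    },
}

print(json.dumps(highlight_annotation, indent=2))
```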

I added ChatGPT to Obsidian following these steps to install an experimental plugin. I also set up a pay-as-you-go account for OpenAI as I’ve used up my trial period for both DALL-E and GPT-3.

At first glance the GPT-3 plugin works fine, although what seems to be missing is the actual chat part, the back and forth where you build on a response with further prompts. The power of ChatGPT as a tool is in that iterative prompting, I think.
You can still iterate by prompting ChatGPT with the entirety of the previous exchange, but that is slightly cumbersome (it means you’d go into a note, delete the previous exchange, and then have the plugin re-add it plus the next generated part).
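That iteration boils down to sending the whole history along with each new prompt. A minimal sketch of the idea using OpenAI’s current Python SDK (the model name is an assumption, and this is not the plugin’s code):

```python
# A sketch of the iterative prompting the plugin lacks: keep the whole
# exchange and send it along with each new prompt. Uses the OpenAI Python
# SDK (>= 1.0); the model name is an assumption, not what the plugin uses.
from openai import OpenAI

client = OpenAI()  # expects OPENAI_API_KEY in the environment


def chat_loop() -> None:
    history = [{"role": "system", "content": "You are a helpful assistant."}]
    while True:
        prompt = input("you> ").strip()
        if not prompt:
            break
        history.append({"role": "user", "content": prompt})
        reply = client.chat.completions.create(
            model="gpt-3.5-turbo",   # assumption; any chat-capable model works
            messages=history,        # the entire previous exchange goes along
        )
        answer = reply.choices[0].message.content
        history.append({"role": "assistant", "content": answer})
        print(answer)


if __name__ == "__main__":
    chat_loop()
```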

I think it will be more fruitful to do the entire exchange with ChatGPT in the browser and manually grab the content if I want to use it in Obsidian. The MarkDownload browser extension capably grabs the entire exchange with ChatGPT and stores it as markdown in my Obsidian notes as well.

In reply to How can my posts integrate better with ActivityPub? by Chris Aldrich

I’m trying to add AP to my site here to be able to provide approved followers with streams of otherwise unlisted content on my site. E.g. travel plans like Dopplr did, or Swarm-style check-ins (normally posted to WP with micropub, e.g. here). Both those activities exist in ActivityStreams and thus in AP, and would be possible to follow with various existing AP clients. If more people do it, it might be useful to create a client surface that combines the various travel plan streams of others I follow and shows crossing paths etc.
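To illustrate, a Swarm-style check-in maps onto the ActivityStreams 2.0 “Arrive” activity roughly like this (the actor URL, place, coordinates and followers collection are made up), shown as a Python dict printed as JSON:

```python
# A rough sketch of a Swarm-style check-in as an ActivityStreams 2.0 "Arrive"
# activity (actor URL, place, coordinates and audience are made up).
import json

checkin = {
    "@context": "https://www.w3.org/ns/activitystreams",
    "type": "Arrive",
    "actor": "https://example.org/@ton",               # hypothetical actor URL
    "summary": "Checked in at the station",
    "location": {
        "type": "Place",
        "name": "Example Centraal",                    # made-up place
        "latitude": 52.15,
        "longitude": 5.37,
    },
    "to": ["https://example.org/followers/approved"],  # an approved-followers collection
}

print(json.dumps(checkin, indent=2))
```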