Bookmarked Using GPT-3 to augment human intelligence: Learning through open-ended conversations with large language models by Henrik Olof Karlsson

Wow, this essay comes with a bunch of examples of using the GPT-3 language model in fascinating ways. Have it stage a discussion in which two famous innovators duke it out over a fundamental question, run your ideas by an impersonation of Steve Jobs, use it to first explore a domain that is new to you (while being aware that GPT-3 will likely confabulate a bunch of nonsense). Just wow.
Some immediate points:

  • Karlsson talks about prompt engineering, making the model spit out more closely what you want. Prompt design is an important element in large scale listening too, to tap into a rich interpreted stream of narrated experiences. I do prompt design to get people to share their experiences, and it would be fascinating to try that same approach out on GPT-3.
  • He mentions Matt Webb’s 2020 post about prompting, quoting “it’s down to the human user to interview GPT-3“. This morning I started reading Luhmann’s Communicating with Slip Boxes with a view to annotating it. Luhmann talks about the need for his notes collection to be thematically open ended, and about the factual status of information being a result of the moment of communication. GPT-3 is trained on the internet, and it hallucinates. Now here we are communicating with it, interviewing it, to elicit new thoughts, ideas and perspectives, similar to what Luhmann evocatively describes as communication with his notes. That GPT-3’s results can be totally bogus is much less relevant, as it’s the interaction that leads to new notions within yourself; you’re not after using GPT-3’s output as fact or as a finished result.
  • Are all of us building notes collections, especially those of us mimicking Luhmann as if he were the originator of such note taking systems, actually better off learning to prompt and interrogate GPT-3?
  • Karlsson writes about treating GPT-3 as an interface to the internet, which allows using GPT-3 as a research assistant. In a much more specific way than he describes, this is what the tool Elicit, which I just mentioned here, does, also based on GPT-3. You give Elicit your research question as a prompt and it will come up with relevant papers that may help answer it.
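To make the prompt-design point above concrete, here is a minimal sketch of how such a staged discussion prompt could be composed. The participants, the wording, and the commented-out API call are my own assumptions for illustration, not taken from Karlsson’s essay:

```python
# Sketch: composing a 'staged discussion' prompt for a large language model.
def staged_discussion_prompt(person_a, person_b, question):
    """Build a prompt asking the model to stage a debate between two figures."""
    lines = [
        f"The following is a debate between {person_a} and {person_b}.",
        f"They fundamentally disagree on the question: {question}",
        "Each gives concrete arguments and responds to the other's points.",
        "",
        f"{person_a}:",  # leave the floor to the first debater
    ]
    return "\n".join(lines)

prompt = staged_discussion_prompt(
    "Steve Jobs", "Henry Ford",
    "Should products be designed for repairability?",
)
# The prompt would then be sent to the model, e.g. (API usage is an assumption):
# openai.Completion.create(engine="text-davinci-002", prompt=prompt, max_tokens=500)
```

The model then continues the text from the first debater’s opening, and you can steer the exchange by editing the prompt and resubmitting.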

On first reading this is like opening a treasure trove, albeit a boobytrapped one. Need to go through this in much more detail and follow up on sources and associations.

Some people already do most of their learning by prompting GPT-3 to write custom-made essays about things they are trying to understand. I’ve talked to people who prompt GPT-3 to give them legal advice and diagnose their illnesses. I’ve talked to men who let their five-year-olds hang out with GPT-3, treating it as an eternally patient uncle, answering questions, while dad gets on with work.

Henrik Olof Karlsson

Yesterday, musing about traversing my social graph through blogrolls, I suggested using OPML’s include type as a way of adding the blogrolls of the blogs I follow to my own blogroll. Ideally, using a spec compliant OPML reader, you’d be able to seamlessly navigate from my blogroll, through the blogroll of one of the blogs I follow, to the blogroll of someone they follow, and presumably back to me at some point.
It does require having an OPML version of such blogrolls available. Peter publishes his blogroll as OPML, as do I, allowing a first simple experiment: do includes get parsed correctly in some of the outliner tools I have?

Adding an include into my OPML file

This little experiment starts with adding, to the list of RSS feeds I follow, a reference to Peter’s own OPML file of feeds he follows. I already follow two of Peter’s RSS feeds (blogposts and favourites), which I have now placed in their own subfolder and to which I added an outline node of the include type, with the URL of Peter’s OPML file.
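For illustration, such an include node looks roughly like this (the feed names and URLs below are made-up placeholders; per the OPML 2.0 spec an include is an outline element with type="include" and a url attribute pointing at another OPML file):

```xml
<outline text="Peter's feeds">
  <outline text="Blogposts" type="rss" xmlUrl="https://example.com/peter/feed.xml" />
  <outline text="Favourites" type="rss" xmlUrl="https://example.com/peter/favourites.xml" />
  <outline text="Peter's subscriptions" type="include"
           url="https://example.com/peter/subscriptions.opml" />
</outline>
```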


Screenshot of my OPML file listing the RSS feeds I follow. Click to enlarge. On line 22 you see the line that includes Peter’s OPML file by mentioning its URL.

Trying three outliners

Cloud Outliner (which in the past I used to first create outlines that could then be sent to Evernote) does not parse OPML includes correctly upon import. It also doesn’t maintain any additional attributes of OPML outline nodes, keeping just the text attribute.


Screenshot of Cloud Outliner showing incorrect import of my OPML file. Click to enlarge.

Tinderbox, like Cloud Outliner, fails to load OPML includes as the spec prescribes. It does load some attributes (web url and description, next to the standard text attribute), but not others (such as the feed url, the crucial element in a list of RSS feeds). It looks like it only picks up attributes that map directly onto pre-existing default attributes within Tinderbox itself.


Screenshot of how Tinderbox imports my OPML file. It keeps some attributes but ignores most, and for includes just mentions the URL

Electric Drummer does correctly import the entire OPML outline. As Dave Winer is both the original creator of the OPML specification and, more recently, of the Electric Drummer app, this is as expected. Electric Drummer picks up all attributes in an imported OPML file. Upon import it also fetches the external OPML files listed as includes from their URLs and fully incorporates them into the imported outline.


Screenshot of Drummer, which incorporates the content of Peter’s OPML file I linked to in my OPML file. Click to enlarge.

Opening up options for tinkering

So at least there is one general outliner tool that can work with includes. It probably also means that Dave’s OPML package can do the same, which allows me to tinker with this at script level. One candidate for tinkering: where a blogger has a blogroll, just not in OPML, use the OPML package to convert the scraped HTML to OPML and include it locally. That would allow me to traverse sets of blogrolls and see the overlap, closed triangles, feedback loops etc. I could also extend my own published blogroll by referencing all the published blogrolls of the bloggers I follow. For you my blogroll would then support exploration and discovery one step further outwards in the network. In parallel I can do something similar for federated bookshelves (both in terms of books and in terms of lists of people whose booklists, and whose own lists of followed people, you can include).
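As a sketch of what such script-level tinkering could look like: this is not Dave’s OPML package, just a stand-alone toy using Python’s standard library that inlines include nodes. The fetch step is injected as a function, so it works from local files or tests without hitting the network:

```python
import xml.etree.ElementTree as ET

def resolve_includes(opml_text, fetch, depth=2):
    """Inline OPML 'include' nodes: each is replaced by the body of the
    outline it points to. fetch(url) returns OPML text; depth guards
    against include loops (blogroll A including B including A)."""
    root = ET.fromstring(opml_text)
    _expand(root.find("body"), fetch, depth)
    return ET.tostring(root, encoding="unicode")

def _expand(parent, fetch, depth):
    for node in list(parent):
        if node.get("type") == "include" and depth > 0:
            included = ET.fromstring(fetch(node.get("url"))).find("body")
            _expand(included, fetch, depth - 1)  # nested includes, one level less
            idx = list(parent).index(node)
            parent.remove(node)  # replace the include node with what it points to
            for offset, child in enumerate(list(included)):
                parent.insert(idx + offset, child)
        else:
            _expand(node, fetch, depth)

# Example: my blogroll including Peter's (both inline here for illustration).
peters = ('<opml version="2.0"><head/><body>'
          '<outline text="Some blog" type="rss" xmlUrl="https://example.com/feed.xml"/>'
          '</body></opml>')
mine = ('<opml version="2.0"><head/><body>'
        '<outline text="Peter" type="include" url="https://example.com/peter.opml"/>'
        '</body></opml>')
merged = resolve_includes(mine, lambda url: peters)
```

The same skeleton would take a scraped-HTML-to-OPML converter as just another source feeding the fetch function.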

Favorited dev Notes for Markdown in RSS by Dave Winer

As part of celebrating twenty years of RSS, Dave Winer adds the ability to incorporate Markdown in RSS feeds. Essentially this was always possible, but there was no way to tell an RSS reader that something was to be interpreted not as HTML but as Markdown. Doing this makes it possible to provide both HTML and Markdown in the same feed, useful if Markdown is e.g. the way you’ve written a posting and you want to be able to edit it again in Markdown, not in HTML.

After my hiatus I think this is worth an experiment, to see if I can generate an RSS feed directly from my markdown notes on my local system, just like I can already generate OPML feeds and blogposts or website pages from my notes. Chris Aldrich recently asked about using WordPress and Webmention as a way of publishing your own notes with the capability of linking them to other people’s notes. Could RSS play a role there too? Could I provide selected RSS feeds for specific topics directly from my notes? Or for specific people, for them to read along? Is there something here that can play a role in social sharing of annotations, such as Hypothes.is provides? I need to play with this thought. RSS is well understood and broadly used; providing not just HTML but also Markdown through it sounds like a step worth exploring.
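As a sketch of what a feed item generated from a markdown note might look like: the source:markdown element and its namespace URL below reflect my reading of Dave’s source namespace documentation, so verify against the spec before relying on them.

```python
import xml.etree.ElementTree as ET

SOURCE_NS = "http://source.scripting.com/"  # assumed namespace URL for source:*
ET.register_namespace("source", SOURCE_NS)

def rss_item(title, link, html_text, markdown_text):
    """Build an RSS <item> carrying the rendered HTML plus the Markdown original."""
    item = ET.Element("item")
    ET.SubElement(item, "title").text = title
    ET.SubElement(item, "link").text = link
    ET.SubElement(item, "description").text = html_text  # escaped HTML, as usual in RSS
    ET.SubElement(item, f"{{{SOURCE_NS}}}markdown").text = markdown_text
    return ET.tostring(item, encoding="unicode")

item_xml = rss_item(
    "A note on OPML includes",
    "https://example.com/2022/opml-includes",
    "<p>Some <em>emphasised</em> text.</p>",
    "Some *emphasised* text.",
)
```

A reader that knows the namespace can offer the Markdown for re-editing; any other reader simply ignores the extra element and shows the description.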

Web pages are predictably unreliable: there is no guarantee they remain online as they were when you dropped by to see them. They can change while their address remains the same, an address can redirect someplace else entirely, and entire websites can disappear, meaning there’s no response when attempting to visit an address.

This means that when I link to something here, there’s no guarantee at all that when you click such a link you actually get to see what I thought I linked to. And no, screenshots don’t help: fakes are easy to make, and we need to back up our words with hyperlinks.

The same is true for stuff I don’t link to here, but save to my archive. That’s why I don’t just save URLs but entire articles for future reference in my markdown notes. That still means the actual source might disappear without me having a way of proving that what I saved is what I saw. This is relevant not only to the content itself, but also for instance to licensing information. There are photos in this blog that were openly licensed when I used them, but no longer are, leaving me unable to prove I can still use the image because of the license at the time.

This makes an archiving service useful, like Archive.org. I can use that to store URLs I find interesting and I do that with some regularity. It is why I am a monthly donor to the Web Archive, I’d like it to remain a more robust reference point on the web.

Currently I have one way of adding web pages to an archive: using the Wayback Machine add-on in my browser. The same add-on helps me find previous versions of a page already archived, tweets about that page, and annotations made by others. Very useful during browsing.


The Web Archive browser add-on bookmarklet

Writing blogposts, saving webpages as markdown in my local notes, or starting to annotate a page in Hypothes.is are another matter, however. There I’d like to automate getting or creating an archive link.

In all cases it would need to be an archive link next to the original. If I link to something in a blogpost here I want to still send WebMentions to the linked site, and that requires the link to the original to be in my posting. Similarly for my notes, I want to have the original url as well, although it would be reconstructable from the archive link. For online social annotations in Hypothes.is, the original link is needed because that is how you find other people’s annotations alongside your own. The last one is probably easiest, by using the browser add-on manually and adding the result as a first annotation for instance.


An archive link as first Hypothes.is annotation
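As a sketch, such a first annotation could also be created through the Hypothes.is API rather than by hand; the payload below is a minimal page note. The field choices are my reading of the API docs, token handling is omitted, and note that the save URL redirects to the archived copy once the Wayback Machine has stored it:

```python
import json

WAYBACK_SAVE = "https://web.archive.org/save/"

def archive_note_payload(page_url):
    """A page-level Hypothes.is annotation pointing at an archived copy."""
    return {
        "uri": page_url,  # annotations on a page are looked up by this URL
        "text": f"Archived copy: {WAYBACK_SAVE}{page_url}",
        "tags": ["archive"],
    }

payload = json.dumps(archive_note_payload("https://example.com/article"))
# Would be posted as (token handling omitted):
# requests.post("https://api.hypothes.is/api/annotations", data=payload,
#               headers={"Authorization": "Bearer " + api_token})
```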

On the page in the IndieWeb wiki about using the Internet Archive there are some code snippets for use with the Archive’s API, as well as the basic save URL https://web.archive.org/save/urlhere. It also mentions bloggers who send the URLs they mention, or their own postings, or both to the Internet Archive (e.g. when sending a WebMention).
When posting to my blog from my local markdown notes I could potentially add a function to the markdown-to-html parser I use, where it detects external links, runs them through the Archive, and writes the html for both the direct and the archived link.
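A sketch of what that parser hook could look like. The save step is injected as a function; a real implementation could request https://web.archive.org/save/ plus the url and use the resulting snapshot address:

```python
import re

# Match markdown links with an absolute http(s) target: [text](url)
LINK = re.compile(r"\[([^\]]+)\]\((https?://[^)]+)\)")

def linkify_with_archive(markdown_text, save):
    """Turn [text](url) into an <a> pair: the original plus its archived copy.
    `save(url)` should archive the page and return the archive URL."""
    def repl(match):
        text, url = match.group(1), match.group(2)
        return (f'<a href="{url}">{text}</a> '
                f'(<a href="{save(url)}">archived</a>)')
    return LINK.sub(repl, markdown_text)

html = linkify_with_archive(
    "See [an example](https://example.com/post) for details.",
    lambda url: "https://web.archive.org/web/" + url,  # stand-in for a real save call
)
```

Keeping the original link first means WebMentions still go to the linked site, with the archive link as a fallback for readers.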

For saving web articles as local notes in markdown there are several options to explore:

  • When saving from the browser using the Markdownload add-on: first saving the page with the Archive add-on and copying the resulting archive url, then pasting that archive link in the dialog box.
  • Adding [Opslaan in Internet Archive](https://web.archive.org/save/{baseURI}) (‘opslaan’ is Dutch for ‘save’) to the Markdownload template, so I can directly save a URL from within the local note later, if wanted. I added this experimentally, to see if I would actually use it like this.
  • When saving to notes from my microsub feedreader I could add a function to the html-to-markdown parser I use there to run external links through the Archive and write the Archive link in markdown after the original link.

Chris Aldrich’s Hypothes.is feed pointed me to this resource on Personal Knowledge Graphs (which he co-authors/-ed). In it Martynas Jusevičius mentions, as an issue with markdown being used for personal knowledge graphs, its incompatibility with the RDF ecosystem, as there is no support for typed links in markdown. I disagree with the statement that that would turn markdown knowledge collections into a ‘walled garden’. There’s no garden there. It does hinder interoperability with more complex environments like RDF / the semantic web, and potential connections and interactions with other graphs and people.

A question, I think, is whether the burden of arranging interoperability lies with the least complex or with the most complex part. Probably the latter, which points away from plaintext and markdown, or at least towards a parser that adds the additional complexity. Markdown always needs a parser anyway; it’s not intended to be seen by anyone other than the author(s).

On the other hand, being able to characterise links sounds like at least a somewhat doable step in a markdown text environment. In terms of publishing such notes to my blog that should translate into microformats in html, but parsing it to xml is a similar step.

A quick search came up with this post ‘Semantic Markdown‘.
It shows well, I think, the issue of adding more complexity to markdown. As an authoring tool you don’t want to make writing more complicated, nor make reading back what you’ve written less easy on human eyes. This is why I e.g. generally avoid frontmatter in my markdown notes in Obsidian, as it reduces the ease of reading for myself. Inline data fields, used sparingly, are less disruptive to me.

The article also provides imo mostly unconvincing examples, like labeling Berlin as a geographic place, or an event name as an event, especially given the effort involved in marking them up as city or event. I’d like to form more meaningful triples around external links (a is a reply to b, a is part of b), and also between notes (a is a counterexample of b), where the information is in the type of relationship between two points. There’s some association here with one OPML file embedding other OPML outlines by pointing to them, and with the branching in a classic Zettelkasten or even code repos.
I don’t want to just add links to mundane things (this word refers to a city). I e.g. dislike the many Wikipedia links inside Wikipedia lemmas that point to the lemma of a thing in general, but not to its meaning within the context of the link placement, while the phrasing suggests the link will provide more context. To me that’s like linking every single word in a sentence to its dictionary page.
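As a toy illustration of such typed links: the inline syntax below is my own invention, not a standard, but it shows how little machinery it takes to get from a markdown note to (note, relation, target) triples.

```python
import re

# [text](target){rel=relationType} -- invented syntax for a typed link
TYPED_LINK = re.compile(r"\[([^\]]+)\]\(([^)]+)\)\{rel=([A-Za-z]+)\}")

def extract_triples(note_id, markdown_text):
    """Return (note, relation, target) triples for typed links in a note."""
    return [(note_id, rel, target)
            for _text, target, rel in TYPED_LINK.findall(markdown_text)]

triples = extract_triples(
    "note-42",
    "This note is [a counterexample](notes/claim-7.md){rel=counterexampleOf} "
    "of the earlier claim.",
)
```

A parser on the publishing side could then map such relations to microformats in html, or to RDF predicates, without burdening the writing itself much.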

In the comments underneath that Semantic Markdown post, similar things are mentioned, alongside links to a semantic markup language bridging xml and markdown, and another one describing the same.

(I’m not sure what this type of post is; I found myself writing several in parallel yesterday and today, triggered by reading streams of public Hypothes.is annotations. I’ve dubbed them Jottings. It’s an attempt at blogging something in the stage between just bookmarking a url with some motivation and something more well formed based on an actual experience or exploration. More grasping at first connections, less formed opinion, but also not just annotation, as it is somewhat removed from the source text and not anchored to it. More holding questions than providing insights or answers.)

The interwebs have been full of AI generated imagery. The AI model used is OpenAI’s Dall-E 2 (Wall-E & Dali). Images are created based on a textual prompt (e.g. Michelangelo’s David with sunglasses on the beach); natural language interpretation is then used to make a composite image. Some of the examples going ’round were quite impressive (see OpenAI’s site e.g., and the Kermit in [Movie Title Here] overview was much fun too).


One of the images resulting from my entering the prompt ‘Lego Movie with Kermit’ into Dall-E Mini. I consider this a Public Domain image, as it does not pass the ‘creativity involved’ threshold that generally presupposes a human creator for copyright to apply (meaning neither AI nor macaques qualify).

OpenAI hasn’t released the Dall-E algorithm for others to play with, but there is a Dall-E mini available, seemingly trained on a much smaller data set.

I played around with it a little bit. My experimentation leads to the conclusion that either Dall-E mini suffers from “stereotypes in, stereotypes out”, with its clear bias towards the Netherlands’ more basic icons of windmills (renewable energy ftw!) and tulip fields, or that whatever happens in the coming decades, we here in the Rhine delta won’t see much change.

Except for Thai flags, we’ll be waving those, apparently.

The past of Holland:

Holland now:

The innovation of Holland:

The future of Holland:

Four sets of images resulting from prompts I entered into the Dall-E mini algorithm. The prompts were ‘The past of Holland’, ‘Holland now’, ‘The innovation of Holland’ and ‘The future of Holland’. All result in windmills and tulip fields. Note in the bottom left of ‘The future of Holland’ that Thai flags will be waved. I consider these Public Domain images, as they do not pass the ‘creativity involved’ threshold that generally presupposes a human creator for copyright to apply. Their arrangement in this blog post does carry copyright though, and the Creative Commons license top-right applies to the arrangement. IANAL.