I presented at the 2022 edition of WordCamp Netherlands in Arnhem on turning all WordPress sites into fully IndieWeb-enabled sites. Meaning turning well over a third of the web into the open social web. Outside all the silos.

The slides are available for embedding and download in my self-hosted Slideshare replacement, and are shown below.

I have been blogging for a long time, and can tinker a bit with code (like a home cook). I want my site to be the center of how I read and write the web. Its purpose is to create conversations with others, who write in their own spaces on the web. The IndieWeb community supports that with a number of technical building blocks that allow me to do a set of pretty cool things. But all that IndieWeb offers comes with a high threshold for entry.

The key parts of IndieWeb to me, the parts that make interaction between websites possible, that allow any site to be an active part of many conversations, are much simpler though:

  • Microformats2 so that computers know how to interpret our blogposts,
  • some class declarations, so computers know why we link to some other web page,
  • and Webmention, the protocol that lets a web page know that another page is linking to it (a short sketch of what that involves follows below).
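
To make these building blocks concrete, here is a minimal sketch of sending a Webmention, assuming Python with the requests and beautifulsoup4 libraries; both URLs are placeholders. The receiving end, in turn, uses the Microformats2 markup and class declarations on the source page to work out why it was linked.

```python
# Minimal sketch of sending a Webmention, assuming the `requests` and
# `beautifulsoup4` libraries. Both URLs are placeholders.
import requests
from bs4 import BeautifulSoup

source = "https://example.org/my-post"     # my page, which links to the target
target = "https://example.com/their-post"  # the page I am linking to

# 1. Discover the target's Webmention endpoint: the HTTP Link header
#    first, then a <link>/<a rel="webmention"> element in the HTML.
resp = requests.get(target)
endpoint = None
if "webmention" in resp.links:
    endpoint = resp.links["webmention"]["url"]
else:
    tag = BeautifulSoup(resp.text, "html.parser").find(["link", "a"], rel="webmention")
    if tag and tag.get("href") is not None:
        endpoint = requests.compat.urljoin(target, tag["href"])

# 2. Notify the endpoint with a form-encoded POST naming source and target.
if endpoint:
    requests.post(endpoint, data={"source": source, "target": target})
```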

Making interaction between site authors possible, across sites, just by writing as they already do, is both the simplest to arrange and the most impactful. It’s not something site authors should have to deal with though; it should be part of the website’s engine. WordPress in my case, and an enormous number of other websites.
Ensuring that WordPress Themes and Gutenberg blocks support and correctly handle Microformats2 and these class declarations will therefore have a huge impact.

Over 40% of the open web would then, at a single stroke, be the open social web. No need for data-hungry silos, no place for algorithmic timelines designed to keep you hooked.

WordPress wants to be the Operating System for the Web. That OS is missing social features, and it’s not a big leap to add them with existing web protocols. No website owner would have to be a coder, be it home-cooking style or professional, to use those social features and create conversations. They would simply be there.

If you build WP Themes, if you create Gutenberg blocks, you’re invited to help make this happen.

(also posted to Indienews)

Back in March I had an enjoyable conversation in Utrecht with Martijn Aslander and Lykle de Vries as part of their podcast series Digitale Fitheid. Digitale Fitheid (‘digital fitness’) is a platform about, yes, exactly that: digital fitness for knowledge workers.

In the conversation we talked about personal knowledge management (PKM) and its long history, about dealing with digital tools, and about the power to shape those tools yourself. But also about my work, responsible data use, the European data strategy, Obsidian meet-ups, and ethics. At the start an AWACS even passed overhead with a great deal of noise.

A conversation of an hour that was over before I knew it. Afterwards you wonder: did I say anything coherent? Listening back now at publication, it turned out fine.

My conversation on the Digitale Fitheid podcast is now online. Do have a look at the other conversations as well; they are certainly worth your time.

In the past weeks I’ve been playing now and then in OpenAI’s GPT-3 playground. You give GPT-3 a prompt and it provides some generated text. You can use that output as a prompt again, or add something yourself and let GPT-3 continue.
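
For those who prefer scripting such experiments over using the playground, here is a minimal sketch, assuming the 2022-era OpenAI Python library and the text-davinci-002 completion model; the prompt, model choice and settings are illustrative assumptions, not what the playground necessarily used.

```python
# Minimal sketch of prompting GPT-3 outside the playground, assuming
# the 2022-era OpenAI Python library (pip install openai) and an API
# key in the OPENAI_API_KEY environment variable.
import os
import openai

openai.api_key = os.environ["OPENAI_API_KEY"]

completion = openai.Completion.create(
    model="text-davinci-002",  # one of the GPT-3 models behind the playground
    prompt="10 Thoughts After Making 100 Annotations in Hypothes.is\n\n",
    max_tokens=400,            # length of the generated continuation
    temperature=0.7,           # some variation between runs
)
# The generated text can be fed back in as part of the next prompt.
print(completion.choices[0].text)
```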

This morning, I read Liam Porr’s 2020 posting about a GPT-3-generated blog he kept for a few weeks. (I think keeping it up longer and positioning it as a personal blog would mean certain detection as generated or inauthentic. Unlike when Porr did his experiment, GPT-3 is now available to many, and more people would be suspicious of texts as potentially generated to begin with.)

For fun I prompted the GPT-3 playground with the title and first few lines of my last posting on my recent use of the Hypothes.is social annotation tool.

My post remade by GPT-3

Below is the result GPT-3 gave me. I pasted the block-quoted part from the original as the prompt; GPT-3 then generated the part shown in grey.

10 Thoughts After Making 100 Annotations in Hypothes.is

I’ve now added over 100 annotations using Hypothes.is (h.), almost all within the last month. This includes a few non-public ones. Two weeks ago I wrote down some early impressions, to which I’m now adding some additional observations.

1) 100 annotations (in a month) don’t seem like a lot to me, if h. is a regular tool in one’s browsing habit.

2) The act of annotation feels good, even empowering. It’s a way of taking control of the web, or at least of one’s web experience.

3) Annotations can be very short, even just a single word. This is both good (e.g. for quickly annotating a lot of text) and bad (e.g. when trying to make a more nuanced point).

4) The annotation interface could be improved. In particular, it would be nice to have an easy way to see all annotations on a page, without having to open each one individually.

5) Annotations can be a great way to start a conversation. I’ve had a few annotations that have generated discussion, which is gratifying.

6) Annotations can also be a great way to keep track of thoughts and ideas. I’ve found myself using h. as a kind of “digital scratchpad”, jotting down things that I want to remember or follow up on later.

7) The fact that annotations are public by default is both good and bad. On the one hand, it encourages people to be thoughtful and considerate in their annotations. On the other hand, it can be intimidating, and I’ve found myself self-censoring on occasion.

8) The fact that annotations are tied to a specific URL can be both good and bad. On the one hand, it ensures that annotations are always relevant to the context in which they were made. On the other hand, it can be frustrating when the context changes (e.g. an article is updated) and the annotation is no longer relevant.

9) The fact that annotations are stored centrally (on h.'s servers) is both good and bad. On the one hand, it makes it easy to access and share annotations. On the other hand, it raises privacy and security concerns.

10) Overall, I’m very happy with h. and find it to be a valuable tool. I’m looking forward to using it more in the future.

Some first reactions to that GPT-3 text

  • The title (‘10 Thoughts After…’), plus my including the first numbered element, produced the expected 10-point list. It’s a common enough marketing/personal-branding format for GPT-3 to pick up on. (Why did I use it yesterday? Because I was too lazy to categorise my observations, so I just listed and numbered them as I wrote them.)
  • Ignoring obvious things like the repetitiveness of the text, after reading you’re left feeling like you’ve read a bunch of words while remaining unsure whether you read anything of substance. The text suggests there’s substance though: the first-person perspective, the mention of specific aspects. Yet it never really follows up. It reads like marketing text, content-farm text. It mentions things but doesn’t answer questions, nor pose them. Not a surprise, but it does show how much empty stuff is already out there (on which GPT-3 was trained in the first place). I recognise it as low-info text, but not necessarily as generated text.
  • No links! No links, other than sporadic internal links, is the default in the media, I know. Yet hyperlinks are the strands the Web is made of. They allow pointing to side paths of relevance, to the history and context of which the posting itself is a result, to the conversation it is intended to be part of and situated in. Their absence, the pretense that the artefact is a stand-alone and self-contained thing, is a tell. It’s also a weakness in other online texts, or any text, as books and journals can equally be filled with links in the shape of footnotes, references and mentions in the text itself.
  • The text conveys no proof of work (to borrow a term) other than that the words have been written. No world behind the text, of which the text is a resulting expression. No examples that suggest or prove the author tried things out or looked things up. Compare that to the actual posting, which in point 1 talks about social connections around the topic, links to other h. user profiles as data points for comparison, and elsewhere points to examples of behaviour, lists of h. users found, created and shared, references other tools (Zotero, Obsidian) and a larger scope (PKM workflows) outside the topic at hand, and experimental changes to the very site it is published on. All of that tells of some exploration, of which the posting is the annotation. This also goes back to my earlier remark about using a 10-point list out of laziness instead of categorising things as I’ve done in other posts (see what I did there? No links, because lazy).

I think that’s the biggest thing I take from this: any text should at least hint at the rich tapestry of things it results from, if not directly discuss it or link to it. A tapestry made not just from other texts, but from other actions taken (things created, data collected, tools made or adapted) and from people (whose thoughts you build on, whose behaviour you observe and adopt, who you interact with outside of the given text). Whether a text is GPT-3 generated or not, that holds.

I’ve now added over 100 annotations using Hypothes.is (h.), almost all within the last month. This includes a few non-public ones. Two weeks ago I wrote down some early impressions, to which I’m now adding some additional observations.

  1. 100 annotations (in a month) don’t seem like a lot to me, if h. is a regular tool in one’s browsing habits. H. says it has 1 million users, who have made 40 million annotations to over 2 million articles (their API returns 2,187,262 results as I write this). H. has been in existence for a decade. These numbers average out to 40 annotations on 2 articles per user. This suggests to me that the mode is 1 annotation to 1 article by a user, and then silence. My 100 annotations, spread over 30 articles and accumulated in a handful of weeks, are then already well above average, even though I am a new user. My introduction to h. was through Chris Aldrich, whose stream of annotations I follow daily with interest. He recently passed 10,000 annotations! That’s 100 times as many as mine, and apparently an outlier even to the h. team: they sent him a congratulatory package. H.’s marketing director has 1,348 public annotations over almost 6 years, its founder 1,200 in a decade. Remi Kalir, co-author of the (well worth reading!) Annotation book, has 800 in six years. That does not seem like much for what I would expect to be power users. My blogging friend Heinz has some 750 annotations in three years. Fellow IndieWeb netizen Maya some 1,800 in a year and a half. Those last two numbers, even if they differ by a factor of 5 or so in average annotations per month, feel like the regular range for routine users.
  2. The book Annotation I mentioned makes much of social annotation, where distributed conversations arise beyond the core interaction of an annotator with an author through an original text. Such social annotation requires sharing. H. provides that sharing functionality and positions itself explicitly as a social tool ("Annotate the web, with anyone, anywhere", "Engage your students with social annotation"). The numbers above show that such social interaction around an annotated text will be very rare in the public-facing part of h.; in the closed (safer) surroundings of classroom use, interaction might be much more prominent. Users like me, or Heinz, Maya and Chris whom I named/linked above, must then be motivated by something other than the social aspects of h. If and when such interaction does happen (as it tends to when you mutually follow each other’s annotations), it is a pleasant addition, not h.’s central benefit.
  3. What is odd to me is that when you do engage in social interaction on h., that interaction cannot be found through the web interface listing my annotations. Once I comment, it disappears out of sight, unless I remember what I reacted to and go back to that other user’s annotation directly, to find my comment underneath. Comments do show up in the RSS feed of my annotations, and my Hypothes.is-to-Obsidian plugin also captures them through the API. Just not in the web interface.
  4. Despite the social nature of h., discovery is very difficult. Purposefully ‘finding the others’ is mostly impossible. This is an effect both of the web interface’s functionality and, I suspect, of the relatively sparse network of users (see observation 1). There’s no direct way of connecting with or searching for users. The social object is the annotation, so you can find others only through annotations you encounter. I’ve searched for tags and terms I am interested in, but those do not easily surface regular users. I’ve collated a list of a dozen currently or somewhat active annotators, and half a dozen who used to be or are sporadically active. I also added annotations of my own blogposts to my blog, and I actively follow (through an RSS feed) any new annotations of my blogposts. If you use h., I’d be interested to hear about it.
  5. Annotations are the first step towards getting useful insights into my notes. That makes it a prerequisite to be able to capture annotations in my note-making tool Obsidian; otherwise Hypothes.is is just another silo you’re wasting time on. Luckily h. isn’t meant as a silo and has an API. Using the API and the Hypothes.is-to-Obsidian plugin, all my annotations are available to me locally. However, what I do locally with those notes does not get reflected back to h., which means I can’t really work through annotations locally until I’ve finished annotating an entire article or paper in the browser; otherwise sync issues may occur. I also find that having only the individual annotations (each including the annotated text) in one file, and not the full text (the parts I didn’t annotate), feels impractical at times, as it cuts away a lot of context. That context is easily retrievable by visiting the URL now, but maybe not over time (so I save web archive links too, as an annotation). I also grab a local markdown copy of full articles if they are of higher interest to me. Using h. in the browser creates another inbox in this regard (having to return to a thing to finish annotating or for context), and I obviously don’t need more inboxes to keep track of.
  6. Since I don’t save entire articles in my notes environment, I have started marking online articles I haven’t annotated yet with at least a note containing the motivation and first associations I normally save with a full article. This goes in the same spot where I add a web archive link, as a page note. I’ve tried that in recent days and it seems to work well. That way I do have a general note in my local system containing the motivation for looking at an article in more detail.
  7. The API also supports sending annotations and updates to h. from e.g. my local system (see the sketch after this list). Would this be better for my workflow? Firefox and the h. add-on don’t always work flawlessly: not all docs can be opened, or the form stops working until I restart Firefox. This too points in the direction of annotating locally and sending annotations to h. for sharing through the API. Is anyone already doing this? Built their own client, or using h. ‘headless’? Is anyone running their own h. instance locally? If I could send things through the API, that might also include the Kindle highlights I pull into my local system.
  8. In the same category of integrating h. into my PKM workflows falls the interaction between h. and Zotero, especially now that Zotero has its own storage of annotations of PDFs in my library. It might be of interest to share those annotations too, for a more complete overview of what I’m annotating; either directly from Zotero, or by way of my notes in Obsidian (Zotero annotations end up there in the end).
  9. These first 100 annotations I made in the browser, using an add-on. Annotating in the browser takes some getting used to, as I usually try to get myself out of my browser more. I don’t always fully realise I can return to an article later to annotate it further. Whenever the sense that I have to finish annotating an article surfaces, that is friction I can do without. Apart from that, it is a pleasant experience to annotate like this. And that pleasure is key to keep annotating. Being able to better integrate my h. use with Obsidian and Zotero would likely increase the pleasure of doing it.
  10. Another path of integration to think about is sharing annotated links from h. to my blog, or the other way around. I blog links with a general annotation at times (example). These bloggable links I could grab from h., where I bookmark things in similar ways (example), usually to annotate further later on. I notice myself thinking I should do both, but unless I can do them simultaneously I won’t do the same thing twice.
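
As a possible starting point for points 1, 5 and 7, here is a minimal sketch of pulling one’s annotations out of h. through its documented REST API, assuming Python with the requests library; the account name and token value are placeholders.

```python
# Minimal sketch of fetching my Hypothes.is annotations via the API
# (https://api.hypothes.is/api), assuming the `requests` library.
import requests

API = "https://api.hypothes.is/api"
TOKEN = "YOUR-DEVELOPER-TOKEN"  # placeholder; only needed for non-public annotations
headers = {"Authorization": f"Bearer {TOKEN}"}

# Search my own annotations, newest first, 50 per request.
params = {"user": "acct:example@hypothes.is", "sort": "updated", "limit": 50}
result = requests.get(f"{API}/search", headers=headers, params=params).json()

print(result["total"], "annotations")
for row in result["rows"]:
    # Each row carries the annotated page's URI, my note text, and tags.
    print(row["uri"], "-", row.get("text", ""), row.get("tags", []))

# The same API accepts new annotations via POST /annotations, which is
# what a local-first, 'headless' workflow (point 7) would build on.
```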

Attempting to understand the ‘Noosphere’ and Subconscious tooling that Gordon Brander is developing results in several questions. Brander proposes a new ‘low level infrastructure’ (subconscious) for sharing stuff across the internet, which should result in us thinking together on a global scale (the noosphere).

I’ve followed the recent Render conference on ‘tools for thought’ where Gordon Brander presented Noosphere and Subconscious. In the wake of it I joined the Discord server around this topic, and read the ‘Noosphere Explainer’. Brander’s Render talk roughly follows that same document.

Brander says: the internet is already a tool for thought, so we should make it better at being one. The tools at our disposal for dealing with this voluminous new information environment haven’t reached their potential yet. Learning to think together at planetary scale is a needed ingredient for addressing global issues. There are many interesting tools out there, but they’re all SaaS silos. They’re silos because of the same-origin policy, which prevents cross-site/host/domain/port sourcing of material. Subconscious is meant to solve that by providing a ‘protocol for thoughts’.

This leaves me with a range of questions.

  • Subconscious is meant to get around the same-origin policy. SOP, however, seems to be a client-enforced (i.e. browser-enforced) thing, focused on (Java)scripts, and otherwise largely ignoring HTML; the sketch after this list illustrates that it lives in the client, not in the network. Apps are, or can be, viewers just like browsers are viewers. So why isn’t the web suitable, with an app or a tweaked browser on top? Why a whole new ‘infrastructure’ over the internet? That sounds like it wants to solve a whole lot more than just same-origin in order to remove the bias towards silos. What are those additional things?
  • The intended target is to make the internet a better tool for thought. Such thoughts seem to be text-based, so what does Subconscious do for sharing text-based thoughts that e.g. the web doesn’t already do?
  • Assuming Subconscious does what it intends, how do we get from a ‘low level infrastructure’ to the stated overarching aim of thinking together globally? I see texts, which may or may not be expressed thoughts, being linked and shared like web resources; how do we get to ‘thinking together’ from there? The talk at Render paid tribute to that aim at the beginning but doesn’t show how it would be achieved (the invocation of the Xanadu project at the start may well be meaningful in that sense), not even in an ‘and then the magic happens to get to the finish line’ fashion. Is the magic supposed to be emergent, as I and others assumed the web and social software would be 20-30 years ago? Is it enough to merely have a ‘protocol for thoughts’? What about non-infrastructure decision- and consensus-building tools like Liquid Feedback or Audrey Tang’s quadratic voting in vTaiwan? Those are geared to action, and seem more immediately useful for addressing global issues, don’t they?
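
To make the client-enforcement point in the first question concrete: a minimal sketch, under my reading of the argument, showing that the same-origin policy constrains in-browser scripts only; code running outside a browser combines material from different origins freely. Python with the requests library; both hosts are placeholders.

```python
# The same-origin policy is enforced by browsers, not by the network.
# Outside a browser, nothing stops combining documents from two origins.
import requests

page_a = requests.get("https://example.org/").text  # placeholder origin A
page_b = requests.get("https://example.com/").text  # placeholder origin B

# No same-origin restriction applies here: we can freely mix both documents.
combined = page_a[:200] + "\n---\n" + page_b[:200]
print(combined)
```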

I’ll be hanging out in the Discord server, you can too (invite link), and going through Gordon Brander’s earlier postings, to see if I can better understand what this is about.

Bookmarked Using GPT-3 to augment human intelligence: Learning through open-ended conversations with large language models by Henrik Olof Karlsson

Wow, this essay comes with a bunch of examples of using the GPT-3 language model in fascinating ways. Have it stage a discussion between two famous innovators duking it out over a fundamental question, run your ideas past an impersonation of Steve Jobs, use it for a first exploration of a domain that is new to you (while staying aware that GPT-3 will likely confabulate a bunch of nonsense). Just wow.
Some immediate points:

  • Karlsson talks about prompt engineering, to make the model’s output match what you want more closely. Prompt design is an important element in large-scale listening, to tap into a rich interpreted stream of narrated experiences. I can do prompt design to get people to share their experiences, and it would be fascinating to try that out on GPT-3.
  • He mentions Matt Webb’s 2020 post about prompting, quoting: “it’s down to the human user to interview GPT-3”. This morning I started reading Luhmann’s Communicating with Slip Boxes with a view to annotating it. Luhmann talks about the need for his notes collection to be thematically open-ended, and about the factual status (or not) of information being a result of the moment of communication. GPT-3 is trained on the internet, and it hallucinates. Now here we are communicating with it, interviewing it, to elicit new thoughts, ideas and perspectives, similar to what Luhmann evocatively describes as communication with his notes. That GPT-3’s results can be totally bogus is much less relevant, as it’s the interaction that leads to new notions within yourself; you’re not after using GPT-3’s output as fact or as a finished result.
  • Are all of us building notes collections, especially those mimicking Luhmann as if he were the originator of such note-taking systems, actually better off learning to prompt and interrogate GPT-3?
  • Karlsson writes about treating GPT-3 as an interface to the internet, which allows using GPT-3 as a research assistant. In a much more specific way than he describes, this is what Elicit, the tool I just mentioned here, does, also based on GPT-3. You give Elicit your research question as a prompt and it comes up with relevant papers that may help answer it.

On first reading this is like opening a treasure trove, albeit a booby-trapped one. I need to go through it in much more detail and follow up on sources and associations.

Some people already do most of their learning by prompting GPT-3 to write custom-made essays about things they are trying to understand. I’ve talked to people who prompt GPT-3 to give them legal advice and diagnose their illnesses. I’ve talked to men who let their five-year-olds hang out with GPT-3, treating it as an eternally patient uncle, answering questions, while dad gets on with work.

Henrik Olof Karlsson