Nicole van der Hoeven published one of her videos on using Obsidian, this one about the ExcaliBrain plugin. The plugin is made by Zsolt Viczián, the creator of the Excalidraw plugin, which brings easy visualisation to Obsidian. I use Excalidraw within Obsidian with some regularity (I’m mostly text oriented).

It’s not mentioned in the video, but the ExcaliBrain plugin is clearly inspired by The Brain software, both in the types of links between notes and in how it shows them (even the placement of the little circles where links attach). The name suggests so too, and the plugin author names The Brain as a source of inspiration in the GitHub repository. I used The Brain as my desktop interface from 1997 until 2004-ish, and this plugin seems to bring The Brain as a visualisation layer to my notes. That alone is enough to try it out.

The plugin can infer relationships between notes through existing links, much as Obsidian’s general graph view does, but presents them in a more navigable style. This, I hope, allows it to be used as a visual navigation interface to my notes, something the graph view does not meaningfully offer, and which The Brain so usefully did for me for a number of years.

You can also set explicit relationships by adding named links to your notes, for which it uses the inline data fields (yourfieldname::) that the DataView plugin makes possible. I already use that plugin so that’s not an extra step for me.
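As a concrete sketch (re-using the public transport example from further down): a note with explicit relations could look like this. The field names parent, child and friend are illustrative and need to match whatever names you configure in ExcaliBrain’s settings; the `name:: value` notation is Dataview’s inline field syntax.

```markdown
# Role of public transport

parent:: [[Urban mobility]]
child:: [[Urban rail systems]]
friend:: [[Urban public internet]]

The note text itself, with regular [[wiki links]] as usual.
```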
I disagree with Nicole van der Hoeven on her suggestion to comment out explicit relationships so that the plugin will visualise them but the note won’t show the links, except in edit mode.
The notes should always show all links I explicitly set, that’s the whole point of links.
Machine-inferred links are a different matter; they deserve a toggle, as they are suggestions made to me.
Links are my own and real work in my notes.

Setting explicit links (parent, child and friend, ExcaliBrain calls them) is similar to how I already create links. When I write a new note I aim to link to other notes in the way Soren Bjornstadt describes in a video tour of his Zettelkasten. I make three links, if possible, from a new note: one to a note at a higher level of abstraction, one to a more concrete note at a lower level of abstraction, and one to a related note at the same level. This creates ‘chains’ of four notes with a content-based implied order.

For example: a note on the role of public transport might link to urban mobility and the liveability of car-free city centers as higher abstraction concepts, to a note on urban rail systems or bus networks at a lower abstraction level, to the German 2022 summer reduced-fare scheme as an example, and to another communal public service like urban public internet as a same-level but different type of note.

I strongly dislike the parent-child-sibling(-friend) vocabulary ExcaliBrain introduces though, as it implies an order of creation. Parents exist first, children come from parents. For the way I described creating links in notes, this means that abstract concepts come first. That is not how it mostly works for me. Abstract notions are often created from, intuited from, less abstract ones. The scaffolding created by less abstract notes and concrete examples is what leads to them. Overarching concepts and insights emerge from linking lower-level items. Thankfully the terms you actually use to denote such connections between notes can be freely chosen in the plugin settings. That is a design choice by Zsolt Viczián I greatly appreciate.

Nicole van der Hoeven, in her run-through of ExcaliBrain, also talks about this implied hierarchy, and mentions a higher-level type of use: adding more semantics to links using the renaming options in the plugin settings. For instance to express lines of argumentation, and how material reflects on each other (e.g. Note A reinforces / contradicts Note B). This is the type of linking that Tinderbox allows you to do visually too, which I’ve used a lot. She says she hasn’t used it that way herself yet, but suggests it’s likely the most valuable use case. I think that rings true. It’s where linking becomes the work you have to do yourself again, as opposed to lazy or automatic linking between notes.

I very much want to experiment with the ExcaliBrain plugin.


A screenshot of the vicinity of a single note, after activating ExcaliBrain

Bookmarked AI Liability Directive (PDF) (by the European Commission)

This should be interesting to compare with the proposed AI Regulation. The AI Regulation specifies under which conditions an AI product or service, or the output of one, will or won’t be admitted to the EU market (literally a CE mark for AI), based on a risk assessment that includes risks to civic rights, equipment safety and critical infrastructure. That text is still very much under negotiation, but is a building block of the combined EU digital and data strategies. In parallel the EU is modernising its product liability rules, and now includes damage caused by AI-related products and services within that scope. Both are listed on the EC’s page concerning AI, so some integration is to be expected. Is the proposal already anticipating parts of the AI Regulation, or does it try to replicate some of it within this Directive? Is it fully aligned with the proposed AI Regulation, or are there surprising contrasts? As this proposal is a Directive (which needs to be translated into national law in each Member State), whereas the AI Regulation becomes law without such national translation, that too is a dynamic of interest, in the sense that this Directive builds on existing national legal instruments. There was a consultation on this Directive in late 2021. Which DG created this proposal, DG Just?

The problems this proposal aims to address, in particular legal uncertainty and legal fragmentation, hinder the development of the internal market and thus amount to significant obstacles to cross-border trade in AI-enabled products and services.

The proposal addresses obstacles stemming from the fact that businesses that want to produce, disseminate and operate AI-enabled products and services across borders are uncertain whether and how existing liability regimes apply to damage caused by AI. … In a cross-border context, the law applicable to a non-contractual liability arising out of a tort or delict is by default the law of the country in which the damage occurs. For these businesses, it is essential to know the relevant liability risks and to be able to insure themselves against them.

In addition, there are concrete signs that a number of Member States are considering unilateral legislative measures to address the specific challenges posed by AI with respect to liability. … Given the large divergence between Member States’ existing civil liability rules, it is likely that any national AI-specific measure on liability would follow existing different national approaches and therefore increase fragmentation. Therefore, adaptations of liability rules taken on a purely national basis would increase the barriers to the rollout of AI-enabled products and services across the internal market and contribute further to fragmentation.

European Commission

Back in 2014 the FabLab in Donetsk, Ukraine, was closed at gunpoint at the command of someone who was a user of the space until then, and my acquaintance K who ran the Donetsk FabLab had to flee. In the past months I have regularly wondered about Ukrainian makers and small production companies. I’m sure all have been struggling.

During the pandemic lockdowns I ordered online with independent book stores and artisans in several European cities, to contribute something to their continuity.

After the summer I started looking for Ukraine-based makers, to similarly order some useful things from them, but I found it hard to find such makers. Then an FT journalist posted a number of links to Ukrainian brands in a Twitter thread, which I looked through.

I’ve ordered a blanket printed with the street pattern of Kyiv (a nice open data touch) from woolkrafts, a glass artefact from Olga Turetska, and two copies of an English-language book on innovation in Ukraine from Osnovy Publishing. In a conversation with one of the staff at the publishing house, I was pointed to a newly launched online platform where you can find Ukrainian products. It’s called Made with Bravery. The list of makers selling through the platform is slowly growing.

My packages are making their way to me, and one has reached Dutch customs. If you order something too, do allow for a longish shipping period. There are other logistical issues way more important in Ukraine at the moment than delivery of commercial packages.

I presented at the 2022 Netherlands WordCamp in Arnhem on turning all WordPress sites into fully IndieWeb-enabled sites. Meaning turning well over a third of the web into the open social web, outside all the silos.

The slides are available in my self-hosted Slideshare replacement for embed and download, and shown below.

I have been blogging a long time, and can tinker a bit with code (like a home cook). I want my site to be the center of how I read and write the web. Its purpose is to create conversations with others, who write in their own spaces on the web. The IndieWeb community supports that with a number of technical building blocks that allow me to do a set of pretty cool things. But all that IndieWeb offers has a high threshold for entry.

The key parts of IndieWeb to me, the parts that make interaction between websites possible, that allow any site to be an active part of many conversations, are much simpler though:

  • Microformats2, so that computers know how to interpret our blogposts,
  • some class declarations, so computers know why we link to some other web page,
  • and WebMention, the protocol that lets a web page know that another page is linking to it (a minimal sketch of that notification step follows below).
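To illustrate the WebMention piece (the first two are just markup in your pages), here is a minimal, hypothetical sketch in Python of the protocol steps a sending site performs: discover the target’s webmention endpoint, then POST the source and target URLs to it. This is not the actual WordPress plugin code, and a complete implementation would also look for a rel="webmention" link inside the HTML, not only in the HTTP Link header.

```python
# Hypothetical sketch of sending a WebMention, using the requests library.
# Endpoint discovery here only checks the HTTP Link header; a full
# implementation would also parse the HTML for rel="webmention" links.
from typing import Optional

import requests


def discover_webmention_endpoint(target_url: str) -> Optional[str]:
    """Return the webmention endpoint the target advertises, if any."""
    response = requests.get(target_url, timeout=10)
    # requests exposes parsed Link headers as a dict keyed by rel value.
    link = response.links.get("webmention")
    return link["url"] if link else None


def send_webmention(source_url: str, target_url: str) -> bool:
    """Notify target_url that source_url links to it."""
    endpoint = discover_webmention_endpoint(target_url)
    if endpoint is None:
        return False  # the target does not accept webmentions
    result = requests.post(
        endpoint,
        data={"source": source_url, "target": target_url},
        timeout=10,
    )
    # The spec allows 200/201/202 depending on how the receiver processes it.
    return result.status_code in (200, 201, 202)


# Example (made-up URLs): my reply post notifies the post it replies to.
# send_webmention("https://example.org/my-reply", "https://example.com/their-post")
```

The receiving site then fetches the source page and uses the Microformats2 markup and class declarations there to decide what kind of interaction it is (a reply, a like, a plain mention) and how to display it.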

Making interaction possible between site authors, across sites, just by writing as they already do, is both the simplest to arrange and the most impactful. It’s not something that site authors should have to deal with though, it should be in your website’s engine. WordPress in my case, and an enormous amount of other websites.
Ensuring that WordPress Themes and Gutenberg blocks support and handle Microformats2 and these classes correctly will therefore have a huge impact.

Over 40% of the open web would then, with a single stroke, be the open social web. No need for data-hungry silos, no place for algorithmic timelines designed to keep you hooked.

WordPress wants to be the Operating System for the Web. That OS is missing social features, and it’s not a big leap to add them with existing web protocols. No website owner would have to be a coder, be it home cooking style or professional, to use those social features and create conversations. It would just be there.

If you build WP Themes, if you create Gutenberg blocks, you’re invited to help make this happen.

(also posted to Indienews)

Back in March I had an enjoyable conversation in Utrecht with Martijn Aslander and Lykle de Vries, as part of their podcast series Digitale Fitheid. Digitale Fitheid (‘digital fitness’) is a platform about, yes, exactly that: the digital fitness of knowledge workers.

In the conversation we talked about personal knowledge management (pkm) and its long history, about working with digital tools, and about the power to shape those tools yourself. But also about my work, responsible data use, the European data strategy, Obsidian meet-ups, and ethics. At the start an AWACS even passed overhead with a great deal of noise.

An hour-long conversation that was over before I knew it. Afterwards you wonder: did I actually say anything coherent? Listening back now that it has been published, it turned out fine.

My conversation in the Digitale Fitheid podcast is now online. Do also have a look at the other conversations, they are certainly worth your time.

In the past weeks I’ve been playing at times in OpenAI’s GPT-3 playground. You give GPT-3 a prompt and it will provide some generated text. You could use that as a prompt again, or add something yourself and let GPT-3 continue.
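For context, the Playground is a thin interface over the completions API. A rough, hypothetical sketch of the equivalent call with the openai Python package as it existed at the time might look like the following; the model name, parameters and key are placeholders, not necessarily what the Playground used for my session.

```python
# Hypothetical sketch of a GPT-3 completion request with the openai package
# (the pre-2023 Completion API). Model, parameters and key are placeholders.
import openai

openai.api_key = "YOUR_API_KEY"  # placeholder, set your own key

prompt = (
    "10 Thoughts After Making 100 Annotations in Hypothes.is\n\n"
    "I’ve now added over 100 annotations using Hypothes.is (h.), ..."
)

completion = openai.Completion.create(
    model="text-davinci-002",  # a GPT-3 model available in the Playground then
    prompt=prompt,
    max_tokens=500,            # how much generated text to ask for
    temperature=0.7,           # higher values give more varied output
)

generated_text = completion.choices[0].text
print(generated_text)

# To let GPT-3 continue from its own output, append generated_text to the
# prompt and send the request again.
```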

This morning, I read Liam Porr’s 2020 posting about a GPT-3 generated blog he kept for a few weeks. (I think keeping it up longer and positioning it like a personal blog would mean certain detection as generated or inauthentic. Unlike when Porr did his experiment, GPT-3 is now available to many, and more people would also be suspicious of texts as potentially generated to begin with.)

For fun I prompted the GPT-3 playground with the title and first few lines of my last posting on my recent use of the Hypothes.is social annotation tool.

My post remade by GPT-3

Below is the result GPT-3 gave me. After I pasted the block-quoted part from the original as the prompt, it generated the grey, code-formatted part.

10 Thoughts After Making 100 Annotations in Hypothes.is

I’ve now added over 100 annotations using Hypothes.is (h.), almost all within the last month. This includes a few non-public ones. Two weeks ago I wrote down some early impressions, to which I’m now adding some additional observations.

1) 100 annotations (in a month) don’t seem like a lot to me, if h. is a regular tool in one’s browsing habit.

2) The act of annotation feels good, even empowering. It’s a way of taking control of the web, or at least of one’s web experience.

3) Annotations can be very short, even just a single word. This is both good (e.g. for quickly annotating a lot of text) and bad (e.g. when trying to make a more nuanced point).

4) The annotation interface could be improved. In particular, it would be nice to have an easy way to see all annotations on a page, without having to open each one individually.

5) Annotations can be a great way to start a conversation. I’ve had a few annotations that have generated discussion, which is gratifying.

6) Annotations can also be a great way to keep track of thoughts and ideas. I’ve found myself using h. as a kind of “digital scratchpad”, jotting down things that I want to remember or follow up on later.

7) The fact that annotations are public by default is both good and bad. On the one hand, it encourages people to be thoughtful and considerate in their annotations. On the other hand, it can be intimidating, and I’ve found myself self-censoring on occasion.

8) The fact that annotations are tied to a specific URL can be both good and bad. On the one hand, it ensures that annotations are always relevant to the context in which they were made. On the other hand, it can be frustrating when the context changes (e.g. an article is updated) and the annotation is no longer relevant.

9) The fact that annotations are stored centrally (on h.'s servers) is both good and bad. On the one hand, it makes it easy to access and share annotations. On the other hand, it raises privacy and security concerns.

10) Overall, I’m very happy with h. and find it to be a valuable tool. I’m looking forward to using it more in the future.

Some first reactions to that GPT-3 text

  • The title (‘10 things about…’) and partly providing the first numbered element created the expected 10-point list. It’s a common enough marketing/personal-branding format for GPT-3 to pick up on. (Why did I use it yesterday? Because I was too lazy to categorise my observations, so I just listed and numbered them as I wrote them.)
  • Ignoring obvious things like the repetitiveness of the text, after reading it you’re left feeling like you’ve read a bunch of words, but you remain unsure whether you read anything of substance. The text suggests there’s substance though: the first-person perspective, the mention of specific aspects. Yet it doesn’t really follow up. It reads like marketing text, content-farmed text. It mentions things but doesn’t answer questions, nor pose them. Not a surprise, but it does show how much empty stuff is already out there (on which GPT-3 was trained in the first place). I recognise it as low-info text but not necessarily as generated text.
  • No links! No links, other than sporadic internal links, is the default in the media, I know. Yet hyperlinks are the strands the Web is made of. They allow pointing to side paths of relevance, to the history and context of which the posting itself is a result, to the conversation it is intended to be part of and situated in. Their absence, the pretence that the artefact is a stand-alone and self-contained thing, is a tell. It’s also a weakness in other online texts, or any text, as books and journals can be filled with links in the shape of footnotes, references and mentions in the text itself.
  • The text conveys no proof of work (to borrow a term) other than that the words have been written. No world behind the text, of which the text is a resulting expression. No examples that suggest or prove the author tried things out, looked things up. Compare that to the actual posting, which in point 1 talks about social connections around the topic, links to other h. user profiles as data points for comparison, and elsewhere points to examples of behaviour, lists of h. users found, created and shared, references other tools (Zotero, Obsidian) and a larger scope (PKM workflows) outside the topic at hand, and experimental changes in the site it is published on itself. That all tells of some exploration, of which the posting is the annotation. This also goes back to my earlier remark about using a 10-point list out of laziness, instead of categorising things as I’ve done in other posts (see what I did there? No links, cause lazy).

I think that’s the biggest thing that I take from this: any text should at least hint at the rich tapestry of things it is resulting from, if not directly discuss it or link to it. A tapestry not just made from other texts, but other actions taken (things created, data collected, tools made or adapted), and people (whose thoughts you build on, whose behaviour you observe and adopt, who you interact with outside of the given text). Whether it’s been GPT-3 generated or not, that holds.