My team at The Green Land and I are looking for a self-hosted alternative to event organisation tools like Meetup.com or Eventbrite, both for small-scale events that are part of projects, such as meet-ups of citizen scientists, and for ourselves, such as the small gatherings around AI ethics we organise with our professional peer network.

We don’t want to use Meetup.com or things like Eventbrite because we don’t want personal data to be handed over to US-based entities, nor do we want to require participants to hand over theirs just because they want to attend a local event. We also notice a strong hesitancy amongst participants when they need to create yet another account on yet another service just to let us know they will be joining us for something.

Nevertheless we do want an easy way to announce events, track registrations, and have a place to share material before, during and afterwards. And I know that events are hard in terms of discovery: although there is a plethora of events, for most participants as well as organisers they are incidental (years ago I came across a blogpost describing this Events Paradox well). Additionally, for us as professionals it is usually more logical to host our own events than to find one that fits our needs.
So we need a way to announce events where we can assure participants there’s no need to hand over personal information, and where material can be shared.

There seem to be two FOSS offerings in this space: Mobilizon by Framasoft, and GetTogether. Over the past weeks my colleague S and I tried to test Mobilizon.

Mobilizon is ActivityPub based, and there’s a Yunohost version which I installed on our VPS early last month. Mobilizon promises several strong points:

  • Fully self-hosted, and able to federate with other instances. There aren’t many visible instances out there, but one NGO we frequently encounter in our network does run its own instance.
  • You can maintain different profiles in your account, so that for different parts of your life you can subscribe to events without, e.g., your historical re-enactment events showing up amongst your professional events in a public profile.
  • People can register for an event without needing an account or profile (using e-mail confirmation).

Working with Mobilizon turned out to be less than ideal at a very basic level. Newly created accounts couldn’t log in. As an administrator I could not force password resets for those users. Not being able to do user admin (other than suspending accounts) seems to be a deliberate design choice.
I still had access through my Yunohost admin account, but after an update of the Mobilizon app yesterday that stopped working too. So now both instance admins were locked out. The existing documentation wasn’t much help in understanding what exactly was going on.

I also came across an announcement that Framasoft intends to shift development resources away from Mobilizon by the end of the year, and thus far there’s little momentum in the developer community to pick up where they intend to leave off.

For now I have uninstalled Mobilizon. I will reach out to the NGO mentioned above to hear about their experiences, and I will look at the other tool, although no Yunohost version of it exists.

I’m open to hearing about other alternatives that might be worth trying.

Bookmarked Project Tailwind by Steven Johnson

Author Steven Johnson has been working with Google and developed a prototype called Tailwind. Tailwind, an ‘AI-first notebook’, is intended to bring an LLM to your own source material, so you can ask questions of the sources you give it. You point it to a set of resources in your Google Drive, and what Tailwind generates will be based on just those resources. It also shows you the specific source of the things it generates. Johnson explicitly places it in the Tools for Thought category. You can join a waiting list if you’re in the USA, and a beta should be available in the summer. Is the USA limit intended to reduce the number of applicants, I wonder, or a sign that they’re still figuring out things like GDPR for this tool? Tailwind is prototyped on the PaLM API though, which is now generally available.

This, from its description, gets at where it becomes much more interesting to use LLM and GPT tools: a localised (though not local, it lives in your Google footprint) tool where the user defines the corpus of sources used, with traceable results. As the quote below suggests, a personal research assistant. Not just for my entire corpus of notes, as I describe in that linked blogpost, but also for a subset of notes on a single topic or project. I think more tools like these will be coming in the next months, some of which will likely be truly local and personal.

On the Tailwind team we’ve been referring to our general approach as source-grounded AI. Tailwind allows you to define a set of documents as trusted sources …, shaping all of the model’s interactions with you. … other types of sources as well, such as your research materials for a book or blog post. The idea here is to craft a role for the LLM that is … something closer to an efficient research assistant, helping you explore the information that matters most to you.

Steven Johnson
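That ‘source-grounded’ approach is simple enough to sketch: constrain the model to a set of supplied sources and ask it to cite them. A minimal illustration, using the OpenAI Python client purely as a stand-in (Tailwind itself is prototyped on PaLM, and the source texts here are made-up placeholders):

```python
# Minimal sketch of 'source-grounded' prompting: the model may only use the
# sources handed to it, and must cite the source id for every claim.
# The OpenAI client and the source texts are stand-in assumptions.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

sources = {
    "S1": "Notes on our citizen-science meet-ups in 2022...",
    "S2": "Draft essay on the AI ethics sessions with our peer network...",
}
source_block = "\n".join(f"[{k}] {v}" for k, v in sources.items())

response = client.chat.completions.create(
    model="gpt-4",
    messages=[
        {
            "role": "system",
            "content": (
                "Answer ONLY from the sources below. Cite the source id for "
                "every claim. If the sources don't contain the answer, say so.\n"
                + source_block
            ),
        },
        {"role": "user", "content": "What events did we organise?"},
    ],
)
print(response.choices[0].message.content)
```

The grounding here is only as strong as the model’s instruction-following, which is presumably why Tailwind bakes it in at a deeper level than a system prompt.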

I can now share an article directly from my feed reader to my Hypothes.is account, annotated with a few remarks.

One of the things I often do when feed reading is opening some articles in the browser, with the purpose of possibly saving them to Hypothes.is for (later) annotation. You know how it goes with open tabs in browsers: hundreds get opened and then neglected, until you give up and quit the entire session.

My annotation of things I read starts with saving the article to Hypothes.is and providing a single annotation for the entire page, which includes a web-archive link to the article and a brief motivation or some first thoughts about why I think it is of interest to me. Later I may go through the article in more detail and add more annotations, which end up in my notes. (I also do this outside of Hypothes.is, saving an entire article directly to my notes in markdown, when I don’t want to read the article in the browser.)

Until now this forced me to leave my feed reader to store an article in Hypothes.is. However, in my personal feed reader I already have the ability to post directly to my websites or to my personal notes collection in Obsidian.
Hypothes.is has an API which, much like the way I post to my sites from my feed reader, makes it possible to share to Hypothes.is directly from inside my feed reader. This way I can continue to read, while leaving breadcrumbs in Hypothes.is (which always also end up in the inbox of my notes).

The Hypothes.is API is documented and expects JSON payloads. Anyone can read public material through the API; to post you need an API key that is connected to your account (you can find it when logged in).

I use JSON payloads to post from my feed reader (and from inside my notes) to this site, so I copied and adapted that script to talk to the Hypothes.is API.
The result is an extremely basic and barebones script that can do only a single thing: post a page-wide annotation (so no highlights, no updates, etc.). For now this is enough, as it is precisely my usual starting point for annotation.

The script expects to receive four things: a URL, the title of the article, an array of tags, and my remarks. These are sent to the Hypothes.is API. In response I get back information about the annotation I just made (its ID etc.), but I disregard it.
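The shape of that call is simple. A minimal sketch in Python (my actual script differs, but the endpoint and payload fields follow the Hypothes.is API documentation):

```python
# Minimal sketch of posting a page-wide annotation (no highlight target) to
# Hypothes.is. The four inputs match what my script receives; the example
# values at the bottom are illustrative placeholders.
import requests

API_KEY = "..."  # personal API key from your Hypothes.is account settings

def page_note(url: str, title: str, tags: list[str], remarks: str) -> None:
    payload = {
        "uri": url,
        "document": {"title": [title]},
        "tags": tags,
        "text": remarks,
        "group": "__world__",  # post to the public group
    }
    resp = requests.post(
        "https://api.hypothes.is/api/annotations",
        json=payload,
        headers={"Authorization": f"Bearer {API_KEY}"},
    )
    resp.raise_for_status()  # the response (annotation ID etc.) is otherwise ignored

page_note("https://example.com/article", "An article", ["reading"], "Why this interests me...")
```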

To the webform I use in my feed reader I added an option to send the information to Hypothes.is, rather than to my websites through Micropub or my local notes through the filesystem. That option is what ensures the little script gets called with the right variables.

It now looks like this:

In my feed reader I have the usual form I use to post replies and bookmarks, now with an additional radio button to select ‘H.’ for Hypothes.is

Submitting the form above gets it posted to my Hypothes.is account

I have installed AutoGPT and started playing with it. AutoGPT is a locally installed and run piece of software (in a terminal window) that you can, in theory, give a goal to achieve and then let run until it achieves it. It’s experimental, so it is good advice to actually follow its steps along and approve the individual actions it suggests.
It interacts with different generative AI tools (through your own API keys) and can initiate different actions, including online searches as well as spawning new interactions with LLMs like GPT-4 and using the results in its ongoing process. It chains these prompts and interactions together to get to a result (‘prompt chaining’), as sketched below.
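The chaining itself is conceptually simple. A minimal hand-rolled illustration (not AutoGPT’s actual code, and assuming the OpenAI Python client):

```python
# Hand-rolled illustration of prompt chaining: each model call's output feeds
# the next call, moving step by step towards a stated goal.
from openai import OpenAI

client = OpenAI()

def ask(prompt: str) -> str:
    reply = client.chat.completions.create(
        model="gpt-4", messages=[{"role": "user", "content": prompt}]
    )
    return reply.choices[0].message.content

goal = "Write a short summary of the Hypothes.is annotation workflow."
plan = ask(f"Break this goal into numbered steps: {goal}")          # step 1: plan
draft = ask(f"Carry out these steps and produce a draft:\n{plan}")  # step 2: execute
final = ask(f"Critique and improve this draft:\n{draft}")           # step 3: refine
print(final)
```

AutoGPT adds tool use (searches, file writes, spawning sub-agents) on top of this basic loop.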

I had to tweak the script a little (it calls python and pip, but needs to call python3 and pip3 on my machine), but then it works.

Initially I have it set up with OpenAI’s API, as the online guides I found were using that. However, in the settings file I noticed I can also choose other LLMs, like the publicly available models on Hugging Face, as well as image-generating AIs.

I first attempted to let it write scripts to interact with the Hypothes.is API. It ended up in a loop about needing to read the API documentation but not finding it. At that point I did not yet provide my own interventions (such as supplying the link to the API documentation). When I did so later, it either couldn’t come up with next steps, or didn’t ingest the full API documentation (only the first few lines), which also led to empty next steps.

Then I tried a simpler thing: give me a list of all email addresses of the people in my company.
It did a Google search for my company’s website, and then looked at it. The site is in Dutch, which it didn’t notice, and it concluded there wasn’t a page listing our team. I then provided it with the link to the team page, and it did parse that correctly, ending up with a list of email addresses saved to file, while also neatly summarising what we do and what our expertise is.
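For comparison, the manual version of that task is only a few lines of scraping. A sketch, assuming a hypothetical team page where addresses appear as mailto: links:

```python
# Hand-rolled version of the task I gave AutoGPT: collect email addresses
# from a team page. The URL is a made-up placeholder.
import requests
from bs4 import BeautifulSoup

html = requests.get("https://example.org/team").text
soup = BeautifulSoup(html, "html.parser")

emails = {
    a["href"].removeprefix("mailto:")
    for a in soup.find_all("a", href=True)
    if a["href"].startswith("mailto:")
}
print("\n".join(sorted(emails)))
```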
While this second experiment concluded successfully, it did require my own intervention, and the set task was relatively simple (scrape something from this here webpage). So it was of limited usefulness, although it did take less time than doing it myself. It points to the need for a pretty clear picture of what one wants to achieve and how, so you can provide feedback and input at the right steps in the process.

As with other generative AI tools, the right prompting is key, and the burden of learning effective prompting lies with the human tool user; the tool itself does not provide any guidance in this.

I appreciate that it’s an early effort, but I can’t reproduce the enthusiastic results others claim. My first estimation is that the claims I’ve seen are based on hypothetical prompts, with enthusiasm about the plausible-looking outcomes. Try an actual issue where you know the desired result, and it easily falls flat. Similar to how ChatGPT provides plausible texts except when the prompter knows what good-quality output looks like for a given prompt.

It is tempting to play with this thing nevertheless, because of its positioning as a personal tool, as a potential step towards what I earlier dubbed narrow-band digital personal assistants. I will continue to explore, first by latching onto the APIs of generative AI models more open than OpenAI’s.

I installed the Stable Diffusion image generator locally on my Mac, using this straightforward instruction. To run the generator I start the tool in Terminal, and can then access its interface in the browser at a localhost address.
It uses the 2.1 model, which was over 5GB to download.
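The instruction I followed wraps all of this in its own interface, but the equivalent local set-up can be sketched in a few lines with Hugging Face’s diffusers library (an assumption on my part, not what the tool I installed actually uses):

```python
# Minimal sketch of running Stable Diffusion 2.1 locally via diffusers.
import torch
from diffusers import StableDiffusionPipeline

# Load the Stable Diffusion 2.1 weights (roughly the 5GB download mentioned above)
pipe = StableDiffusionPipeline.from_pretrained("stabilityai/stable-diffusion-2-1")
# Use the Apple Silicon GPU ("mps") when available, otherwise fall back to CPU
pipe = pipe.to("mps" if torch.backends.mps.is_available() else "cpu")

image = pipe("a watercolour of a rowing boat on a canal", num_inference_steps=30).images[0]
image.save("output.png")
```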

Running an image generator locally is slower than using Hugging Face’s online demo. As far as I can tell I need to restart the tool to run a fresh prompt; otherwise it treats the new prompt as an iteration on the previous one.

I feel completely illiterate w.r.t. the interface, so there’s a lot to learn before I will actually be capable of using this tool somewhat effectively.

Below the local interface with its various settings to learn about.

A screenshot of the local Stable Diffusion interface.

I have a little over 25 years’ worth of various notes and writings, and a little over 20 years of blogposts. A corpus that reflects my life, interests, attitude, thoughts, interactions and work over most of my adult life. Wouldn’t it be interesting to run that personal archive as my own chatbot, to specialise an LLM for my own use?

Generally I’ve been interested in using algorithms as personal or group tools for a number of years.

For algorithms to help, like any tool, they need to be ‘smaller’ than us, as I wrote in my networked agency manifesto. We need to be able to control its settings, tinker with it, deploy it and stop it as we see fit.
Me, April 2018, in Algorithms That Work For Me, Not Commoditise Me

Most if not all of our exposure to algorithms online, however, treats us as a means and manipulates our engagement. I see algorithms as potentially very valuable tools for working with lots of information, but not in their current common incarnations.

Going back to a less algorithmic way of dealing with information isn’t an option, nor something to desire I think. But we do need algorithms that really serve us, perform to our information needs. We need less algorithms that purport to aid us in dealing with the daily river of newsy stuff, but really commoditise us at the back-end.
Me, April 2018, in Algorithms That Work For Me, Not Commoditise Me

Some of the things I’d like my ideal RSS reader to be able to do are along such lines, e.g. signalling new patterns among the people I interact with, or outliers in their writings. Basically, signalling social eddies and shifts in my network’s online sharing.

LLMs are highly interesting in that regard too, as, in contrast to engagement-optimising social media algorithms, they are focused on large corpora of text and the generation thereof, not on emergent social behaviour around texts. Once trained on a large enough generic corpus, one could potentially tune an LLM with a specific corpus: specific to a certain niche topic, or to the interests of a single person, a small group of people, or a community of practice. Such as all of my own material: decades’ worth of writings, presentations, notes, e-mails etc. The mirror image of me as expressed in all my archived files.

Doing so with a personal corpus has, for me, a few prerequisites:

  • It would need to be a separate instance of whatever tech it uses. If possible self-hosted.
  • There should be no feedback to the underlying generic and publicly available model, there should be no bleed-over into other people’s interactions with that model.
  • The separate instance needs an off-switch under my control, where off means none of my inputs are available for use someplace else.

Running your own Stable Diffusion image generator set-up, as E currently does, complies with this, for instance.

Doing so with an LLM text generator would create a way of chatting with my own PKM material, ChatPKM: a way to interact with my Avatar (not just my blog, all my notes), differently than through search and links as I do now. It might adopt my personal style and phrasing in its outputs. When (not if) it hallucinates, it would be my own trip, so to speak. It would be clear which inputs are in play w.r.t. the specialisation, so verification and references should be easier to follow up on. It would be a personal prompting tool, to communicate with your own pet stochastic parrot. A minimal sketch of the retrieval half of such a set-up follows below.
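To make that concrete: a sketch of the retrieval half of a ChatPKM set-up, assuming a folder of markdown notes and a locally run sentence-transformers model (the path and model choice are placeholders):

```python
# Minimal sketch: embed my markdown notes locally and find the passages most
# relevant to a question. A self-hosted LLM would then answer using only the
# retrieved passages, satisfying the 'no bleed-over' prerequisite above.
from pathlib import Path
from sentence_transformers import SentenceTransformer, util

model = SentenceTransformer("all-MiniLM-L6-v2")  # runs fully locally after download

notes = {p: p.read_text(encoding="utf-8") for p in Path("~/notes").expanduser().glob("**/*.md")}
paths, texts = list(notes.keys()), list(notes.values())
embeddings = model.encode(texts, convert_to_tensor=True)

question = "What did I write about networked agency?"
hits = util.semantic_search(model.encode(question, convert_to_tensor=True), embeddings, top_k=5)[0]
for hit in hits:
    print(paths[hit["corpus_id"]], round(hit["score"], 2))
```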

Current attempts at chatbots in this style seem to focus on things like customer interaction: feed it your product manual and have it chat with customers who have questions about the product. A fancy version of ‘have you tried switching it off and back on?‘ These services allow you to input one or a handful of docs or sources, and then chat about their contents.
One of those is Chatbase, another is ChatThing by Pixelhop. The latter has the option of continuously adding source material to, presumably, the same chatbot(s), but more or less on a per-file and per-URL basis, and limited in the number of words per month. That’s not like starting out with half a GB of notes and writings in markdown covering several decades, let alone tens of GBs of e-mail interactions.

Pixelhop is currently working with Dave Winer, however, to do some of what I mention above: use Dave’s entire blog archives as input. Dave has been blogging since the mid 1990s, so there’s quite a lot of material there.
Checking out ChatThing suggests that it is built on OpenAI’s GPT-3.5 through its API, so it wouldn’t qualify per the prerequisites I mentioned. Yet purposely feeding it a specific online blog archive is less problematic than including my own notes, as all the source material involved is public anyway.
The resulting Scripting News bot is a fascinating experiment, the work on which you can follow on GitHub. (As part of that, Dave also shared a markdown version of his complete blog archives (33MB), which for fun I loaded into Obsidian to search through, and to compare with the chatbot’s generated outputs, such as the question Dave asked the bot about when he first wrote about the iPhone on his blog.)

Looking forward to more experiments by Dave and Pixelhop. Meanwhile I’ve joined Pixelhop’s Discord to follow their developments.