My grandfather Klaas Zijlstra (1905-1993) was a farmer and cattle raiser. He grew up in Fryslân and, it seems, always wanted to be a farmhand (his father was a housepainter). There was ambition too: after leaving school at 12 and moving out at 16, he sought out farmers to work for who had a reputation in cattle raising. In his early twenties he had a choice between two job offers, one to run a cattle farm in Argentina and one to run a cattle farm in Twente, in the eastern part of the Netherlands. His mother wanted to be able to visit him by train, so the Argentina offer was refused. He worked on the farm Stepelerveld near Haaksbergen, Twente, from its founding in 1928. It was meant as a model farm, and had mechanised milking from the start, for instance. The farm’s owner, Ebs van Heek, son of textile barons, shared my grandfather’s strong interest in cattle raising and in increasing milk production per cow. Before the farm was constructed in 1928 (it is now a national monument) work had already been underway to bring together and raise cattle for it on a nearby farm. I don’t know exactly when my grandfather was hired; he may already have had some role before the farm’s construction. Cattle was my grandfather’s passion. After the farm was sold in 1963 and my grandparents retired to the nearby village of Boekelo, there were photos of us grandchildren on the living room dresser right next to similarly framed photos of prize-winning cows. Central on the mantelpiece was a photo of a bull. It remained there for over 30 years.

It may have been the same bull he took a train trip with.

The farm had a locally famous bull named Adolf (this was the 1920s, so no stigma attached to that name yet). There was a cattle fair in The Hague, on the other side of the country. My grandfather walked the bull to the station and joined it inside a cattle car, hired for the purpose, for the train ride to The Hague. When he arrived he sent a postcard to the farm saying ‘gakz’, short for ‘goed aangekomen, Klaas Zijlstra‘, arrived well. Postage was based on the number of words, so this kept it to half a cent. Then he spent three days at the cattle fair on the Malieveld (the largest field in The Hague, used for fairs and demonstrations for some 400 years), where he shared straw with the bull to sleep on in the open air. The bull won first prize. He walked back to the station, boarded a cattle car again with the bull for the trip home, and showed up on foot with the bull and a victory cup at the farm.

In the story, the station was sometimes Haaksbergen (the nearest, about an hour’s walk from the farm), sometimes Hengelo station (a three hour walk). Although Haaksbergen connected to Hengelo, it was a different station from the one on the line towards The Hague, so it may have been easier to go to Hengelo, as they’d otherwise have needed two cattle cars, one for each line. Still, as the railroad company for the Haaksbergen-Hengelo connection was founded and owned by the same textile barons, to connect the factories, it may well have been Haaksbergen, or the also nearby Boekelo on the same line.

As a child I heard the story repeatedly but never really knew when it happened. Thanks to digitised archives I now have more details.

Earlier this week I came across a version of this story online, written by the farm owner’s daughter, who placed it in 1929. With a year to go on, I then searched the digitised newspaper archives for cattle fairs in The Hague, and found it was actually 1928.
In 1928 the Netherlands hosted the Olympics in Amsterdam, from 28 July to 12 August, the first edition to be called ‘the Summer Olympics’. The national cattle fair and exhibition took place just before, from 23 to 25 July, and was dubbed the ‘Olympic cattle fair’ in the press. It was a big event (I found 230 newspaper articles across the country about it for that week). It was opened by two government ministers giving speeches, and visited each day by members of the royal family, the queen mother and the prince consort, though not the queen herself. Prizes were awarded for many different categories of cows, horses, pigs and goats. A special mention in the press describes a new ‘contraption to measure the pulling strength of a horse’ being demonstrated. Amidst all that was my grandfather, two months before his 23rd birthday, with bull Adolf on a leash. And he won first prize.

That fact ended up in the papers, with a photo:

Klaas Zijlstra and the bull, Malieveld 25 July 1928, published in the Utrecht Daily on 27 July 1928, photographer and copyright unknown.

Look at that enormous and muscled beast, coming up to my grandfather’s shoulder. And then imagine traveling and sleeping next to it for five days!

Favorited AI Policy and Human.json by Claudine Chionh
Favorited Adding human.json to WordPress by Terence Eden

Claudine Chionh and Terence Eden both mention human.json, a data file that lists people and sites you know are written by humans, as opposed to generated by AI. A rekindling of FOAF?

In these days of needing to assume anything you encounter is machine generated unless proven to be human made, we continuously have to apply a Reverse Turing test: do I have enough indications to assume something was created by a human?

When I first wrote a Reverse Turing page I mentioned much the same things as Terence Eden does about vouching for other people to be human authors.

Not sure if having a machine readable file makes the right point here though, ironic as it is. Blogrolls and webrings come to mind too, because Long Live the Author.

One element I think we’d need to contemplate is not just listing people, but also providing URIs to some supporting evidence, exposing the depth of a connection. Having only met at a vouching party countersigning each other’s credentials, and two decades of in-person and online encounters with proof thereof, differ in depth and quality, and that may well impact how the Reverse Turing test turns out for others perusing your human.json file.
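As a sketch of what such an entry might look like, a vouched-for person could be paired with evidence URIs and a rough depth indicator. Note that all field names and values below are my own invention for illustration, not part of any proposed human.json specification:

```python
import json

# Hypothetical human.json entry. Every field name here is an
# illustrative assumption, not an actual human.json standard.
entry = {
    "name": "Example Person",
    "url": "https://example.org/",
    "vouched_by": "https://www.zylstra.org/blog/",
    # URIs pointing at supporting evidence for the connection's depth
    "evidence": [
        "https://example.org/2004/meetup-photos",
        "https://example.org/2015/conference-panel",
    ],
    "connection_depth": "two decades of in-person and online encounters",
}

def evidence_count(e: dict) -> int:
    """Return how many supporting-evidence URIs an entry lists."""
    return len(e.get("evidence", []))

# Serialise the entry as it might appear inside a human.json file
print(json.dumps(entry, indent=2))
print(evidence_count(entry))
```

A reader (human or machine) weighing the Reverse Turing test could then treat an entry with several independent evidence URIs differently from one with none.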

Favorited I used AI. It worked. I hated it. by Michael Taggart

An excellent post by Michael Taggart on how it felt to him to make a much needed bit of code with the help of Claude Code. The results worked, but he hated how it made him feel. He explores those opposing outcomes without trying to resolve the tension. Much in here that I recognise from my own experiences, as well as what I see others do and how they talk about it. Towards the end he talks about ‘the real monster’ here, and I think that is the right frame: we have created a technology monster once more, and Smits’ monster theory (2003) is a tool to bring to bear again. Where will we adapt the monster to our tastes? Where will we shift our cultural understanding of ourselves and the world to make room for the monster? Once we’re done embracing it until the bubble bursts, or rejecting it outright no matter what.

I hated writing software this way. Forget the output for a moment; the process was excruciating. Most of my time was spent reading proposed code changes and pressing the 1 key to accept the changes, which I almost always did. I was basically Homer’s drinking bird.

Michael Taggart

My website is now part of the web archive of the Dutch Royal Library. It took some experimenting to get it in there. Blogs will be blogs, and the number of links in mine choked the harvester, it seems.

Since 2007 the Royal Library has been archiving websites, and it now stores some 25.000 of them. My blog, even though it is one of the oldest still maintained in the Netherlands, was never part of that effort. Mostly because it’s not very visible as a Dutch blog, as it is mostly written in English and resides on a .org domain (when I registered zylstra.org, private persons could not yet register .nl domains, only companies could). At an Internet Archive event organised by the Royal Library in September last year I asked about archiving, and they told me how to suggest my website for it.

Late last January I received a message that my website would be included in their archives from now on.

What followed were several test runs with their harvester Heritrix, which is also used by the Internet Archive. I wondered how some of my website’s peculiarities would be dealt with by the harvester. Not every posting is listed on my site, for instance, although each does have a direct URL. The years’ worth of weekly notes are not listed on this site. Many postings are also never shown on the front page, and if you page through postings on the front page you will never encounter them. This is true for categories of posts like books, photos, and day to day topics. I discussed this with the web archivist, who ran some tests. My week notes seemed to be included, but the pagination of the day to day category stalled out at 180 pages, although there were more.

To my surprise they also ran into volume limits, apparently because of ‘bycatch’: things they archive from other sites because I reference or embed them. In the past few years I have stopped embedding things like photos, except for my slides, which are hosted on a separate domain I have registered. While it is normal for a site’s bycatch to be larger than the site itself, mine was very different from what they were used to.
First they limited bycatch to 20GB in a test and ran out of space; then they set it at 40GB and still ran out of space. Raising the limits further did not help. In the end they decided to harvest just what is on my zylstra.org domain and not include any bycatch at all. Which is completely fine by me, precisely because I’ve made the effort to bring all kinds of external content ‘home’ to this domain.

Nevertheless it did surprise me that bycatch turned out to be a problem, as they are using a tool the Internet Archive itself uses too. I asked for some examples of the bycatch. They told me it wasn’t even possible to dump a URL list of the bycatch into a spreadsheet, as it hit the maximum number of rows (around 65k, iirc). I did get some of the URLs that contributed bigger volumes of bycatch. To my surprise I did not recognise the links, except one.

One was obvious: 2800 attempts to harvest a page on live.staticflickr.com, as I link a lot to my Flickr-hosted images, although I no longer embed them but have local versions on this domain.
Others were not obvious to me at all: theguardian.tv, vp.nyt.com and various content delivery networks. I link to none of them on this site. I do link to The Guardian about 100 times, and to the NYT about 40 times, and I suppose that if the harvester follows those links, it will find additional material there that explains the bycatch more fully.

If that is the case, that it harvests everything I’ve linked to, then it is the long history of this blog that makes the harvester hit its limits.

There are some 20.000 external links in this blog’s articles, as far as I can quickly estimate based on a full content export I made this week.
It basically means that if the harvester attempts to harvest all those links and the resources they include, it adds a number of pages to the archive roughly equivalent to the current archive itself.
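A quick estimate like that can be made by scanning the export for outbound links. This is a minimal sketch, assuming a WordPress-style export where post content contains plain `href` attributes, and treating any link not pointing at the blog’s own domain as external (the sample string stands in for the actual export, which I don’t have):

```python
import re

def count_external_links(content: str, own_domain: str = "zylstra.org") -> int:
    """Count href targets in exported content that point outside own_domain."""
    hrefs = re.findall(r'href="(https?://[^"]+)"', content)
    return sum(1 for url in hrefs if own_domain not in url)

# Tiny sample standing in for a full content export
sample = (
    '<a href="https://www.theguardian.com/article">news</a> '
    '<a href="https://www.zylstra.org/blog/post">internal</a> '
    '<a href="https://www.nytimes.com/story">more news</a>'
)
print(count_external_links(sample))  # 2
```

A real export would also need deduplication and handling of relative links, but even this rough pass shows how such an estimate can come out of a single file scan.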

A weblog embraces what the world wide web is: a bunch of links to other websites. The name weblog says it. A web-log is a curation hub for web readers, pointing out other interesting stuff, not trying to keep you here too long. Over 23 years of blogging yielded some 20.000 links to other websites. Given enough time, a blog becomes the web itself in terms of linking, as much as it becomes its author’s avatar in terms of content.

From now on my site will be updated in the Royal Library’s archives every year on March 5th.


The facade of the Royal Library in The Hague, photo by Ferdi de Gier, license CC-BY-SA

At PKM Summit this weekend one thing that stood out was that many have started creating their own tools, and were using vibecoding to create them.

While the term ‘agency’ turned out to be unknown to almost all participants, that is of course what such tools create: the ability to do things, individually or as a group, in this case by creating your own tools to get there.
The power of finding new agency was felt and expressed by quite a few, and played a role in a good number of sessions too.

When I first encountered computers, in the early 1980s, creating your own stuff was the norm. It was almost the only option. Making the machine work for myself. Like software to keep my ham radio logs and print QSL cards.
These days I run a good many smaller and larger personal pieces of tooling on my laptop. Things like making it easy to search by date in my photos on Flickr, or posting to my website from my internal notes, or from within my feedreader.
Things that reduce friction, speed things up, reduce dependency on external systems.

Vibecoding, and especially the Claude Code style of vibe coding, is enabling people who weren’t able to do so before to create their own tools. A pool of latent needs they can now tap into on their own.

Some I know are really now learning how a computer works under the hood through their vibe coding. Testing the limits of their machines, finding out how fast local stuff can be. Discovering the power of APIs, the utility of cron jobs, and learning how to run their own VPS or local servers.
Others are creating little tools that work the way they want: an app to present books from their collection in that one specific way just so, a mobile app for public transport built on their own existing commute patterns and nothing else, apps pulling in data from several sources and presenting it in one interface that likely only makes sense to them.

Tools built by people realising they are pretty predictable to themselves, and that such highly localised and specifically contextualised predictability now lends itself to automation by the intended user themselves.
Tools, in short, where access to and control over data lie fully with the user, where applications are views on that data (and multiple apps use the same data), and interfaces are queries on the data. Along the lines of Ruben Verborgh’s 2017 article “Paradigm Shifts for the Decentralised Web“, but way more personal. The decoupling that is possible between data, applications and interfaces is even more powerful when you can do all three for yourself, and then mash them up in any which way you want.

Vibecoding is allowing people to jump the barriers to entry to that. And judging by the stories they share, it feels like pole vaulting over them, not just clearing the barriers. That energy then propels them on to do more.

Over the past months I’ve also heard regularly how people are cancelling paid subscriptions to various online services, and switching to their personal tools that fit their use case much more precisely.

There are many ethical, political, and societal issues with much of the gen AI world, and how models come about, and how corporate vendors exploit and leverage their power.
Yet, where these things are not just consumed but used locally as a leg-up to a different level of self-reliance, it looks quite different. Something is brewing, it feels like.
A shift, and I’d love to see more people explore and extend their own agency with such tools.

At the European PKM Summit the past two days, Frank Meeuwsen ran a continuous atelier where people could make their own ‘zines and lino cuts. A welcoming space to make something by hand at an event full of inspiring but abstract conversations and talks.

A simple ‘zine folded from an A4 paper provides six small pages, including the front and back. That forces you to be to the point.

I thought of a posting I wrote a little over a year ago, about how personal knowledge management is personal in three ways, and that you should generally take the P in PKM even more personally than you already do.
Three points to bring across sounded short enough to lend itself to a message in a ‘zine.


The P in PKM is 3 fold personal. (jouw = yours in Dutch)


First, it’s your personal system. You take it with you. It enables and anchors your personal autonomy, and allows you to own your own learning path.


Second, it’s your personal knowledge, building on your own curiosity and interests, with your associations, in your language. Your personal network of meaning.


Third, it’s your personal process. Your emergent structures, following your logic, stemming from your personal methods and workflows.


Personal KM is way more personal than you think. And still more.