We’re scouting for a replacement for our car (a 2006 Volvo V50). The challenge is finding one with similar luggage space. The V50 has a box-shaped boot, whereas other cars either have very sloping backs or lack depth in the boot. This holds true for newer Volvos as well. Cars have generally become bigger on the outside, but that extra size seems to have gone into padding and the passenger compartment, not into luggage space, which appears to have actually shrunk for compact models like the V50. The result is that our camping gear won’t fit in most cars, while it does fit in our V50.

E booked two test drives for this morning, both Toyotas (a 2019 Corolla and a 2018 RAV4). Part of the test was an attempt at loading our camping gear in front of our house. The Corolla failed due to its sloping back; the RAV4 passed the packing test as its back is boxier. The RAV4 is, however, hardly compact.

It reminds me of when we were looking for a car in 2004, and one of our key criteria was whether the cage E’s rabbit lived in would fit comfortably. That is how we ended up driving a Citroen Xsara Picasso, after trying a much wider range of models.

We didn’t have such rabbit-based criteria when we bought our current car in early 2013. After Y’s birth in 2016 and getting a bigger tent for camping, everything still fit once we added a rooftop box.

Whatever currently fits in our V50’s boot must, however, also fit in the replacement car (while we assume the rooftop box will remain a necessity for summer holiday travel). It turns out that is a difficult requirement for other cars to fulfill.

Thanks ChatGPT!
Commenting is open on this website, and that means being engaged in a permanent asymmetric battle against spam. Asymmetric in the sense that, as on any social media platform, it is multiple orders of magnitude easier to automatically create and send out spam, falsehoods and hate speech in extremely large volumes than it is for actual people to weed those out of their timelines and websites.
Most incoming spam filtering is automated away these days, but some of it, especially novel types, is always left for me to moderate myself, as the arms race continues.

A new entrant in the spam battle is AI-generated spam comments that have clearly been fed the content of the actual blog post being commented on. Like other spam, they stand out due to their blandness, what they link to, and the fact that the same things get submitted multiple times from different origins, but they build on the content itself. I guess I should feel flattered.

It is also logical, as both spam and AI-generated material rest on the exact same asymmetry. ‘Efficiency’ gains through AI-generated text are at best gains at the generation end of things (now see me generate oodles of text in seconds!), while increasing the effort needed at the receiving end to read it, see through the veil of plausibility, verify it and judge it inadequate.


Two examples of AI-generated spam comments using the content of the actual blog posts (here a recent week notes posting, and one about donating money for ebooks rather than spending it at Amazon). One commenter giving ‘undetectable AI’ as their name is a bit of a giveaway though.

All comments on this site are already subject to a Reverse Turing test: everything received is deemed generated until determined to be created by a person. Clearly this is no longer just a precaution born of tongue-in-cheek cleverness, but a must-have part of my toolkit for online interaction.

Bought an ebook through Rakuten (a Japanese company) for the first time just now.

Tiny Experiments by Anne-Laure Le Cunff has just been released.
I bought it at the Rakuten website, as it was the only non-US channel listed on the publisher’s page.

It was a few Euro cheaper there than on Bol.com, the Dutch platform that uses Rakuten’s Kobo ecosystem for their ebook sales. However, I could use my Bol.com credentials to pay at Rakuten, and the book showed up in my Bol.com library immediately despite not having been bought there. The file carries Adobe DRM. I downloaded it and added it to my Calibre library tool.

TIL: compare prices for ebooks between Rakuten and Bol, as they are interchangeable channels, and purchases end up in the same place.
I should probably keep a page here in this site listing the purchase options other than Amazon.

Yesterday my new colleague J. successfully defended their PhD thesis at Delft University. It was fun to attend; it’s been a while since I was last at a PhD defense. The location was the Senate Hall in the brutalist Aula building. Visiting the Aula as a teenager, while orienting myself on which university to attend, was a rather depressing experience, and one of the elements that made me decide against Delft.


The university logo woven into the carpet of the Senate hall where PhD defenses take place.

Also had a pleasant lunch in Delft with colleague P beforehand.

Watching the recorded session about the use of LLMs from the personal knowledge management course I am following this fall raised an interesting question.

Fellow participant H asked different models questions about a paper he uploaded (and also wrote, so he knows what’s in it). One question asked for a summary, the other was a highly targeted question about a specific fact in the paper.

He did so first in GPT4All, both with local and with online models (ChatGPT etc.). The local models were Llama and Phi.
There the local models summarised okay but failed the specific question. The online models, in contrast, did succeed at the targeted question.

He then did the same in LM Studio, and with the same local models got a different result. Both local models now performed well on both the summary and the targeted question.

So same LLM, same uploaded paper, but a marked difference in output between GPT4All and LM Studio. What would make the difference? The tokenizer that processed the uploaded paper? Other reasons?
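One plausible explanation (speculation on my part) is that the two apps differ not in the model weights but in how they chunk the uploaded paper and which snippets they retrieve into the model’s context, i.e. their document-processing pipeline. A minimal Python sketch, using a made-up toy “paper” and a naive keyword-overlap retriever (neither app necessarily works this way), shows how the chunking and retrieval step alone decides whether a specific fact ever reaches the model:

```python
# Sketch: how chunking + retrieval can make the *same* model answer or
# miss a targeted question. Toy document and naive retriever only;
# GPT4All and LM Studio each have their own (different) pipelines.

def chunk(text, size):
    """Split text into fixed-size word chunks (no overlap)."""
    words = text.split()
    return [" ".join(words[i:i + size]) for i in range(0, len(words), size)]

def retrieve(chunks, query, k=1):
    """Naive retriever: rank chunks by word overlap with the query."""
    q = set(query.lower().split())
    scored = sorted(chunks,
                    key=lambda c: len(q & set(c.lower().split())),
                    reverse=True)
    return scored[:k]

# Hypothetical 'paper': the specific fact is buried deep in filler text.
paper = ("This paper surveys note taking methods. " * 20
         + "The study sampled 412 participants in 2023. "
         + "We conclude that linking notes aids recall. " * 20)

query = "how many participants were sampled"

# With 10-word chunks, the top-ranked chunk isolates the fact,
# so the snippet handed to the model actually contains '412'.
top = retrieve(chunk(paper, 10), query)[0]
print("412" in top)  # → True
```

A pipeline that chunks, tokenizes or scores differently could just as easily hand the model only filler text, which would explain identical local models failing the targeted question in one app and succeeding in the other, while summaries (which tolerate sloppy retrieval) work in both.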