Last Friday I participated in a workshop by Frank Meeuwsen on using Claude Code. I’ve been reluctant to use Claude Code for the basic reason that it uses cloud-based models by default. This means that my inputs and any context I provide leave my machine, to be gobbled up by the data-foraging models. Nevertheless it was fun, and I improved on my existing personal feed reader (a presentation layer on top of FreshRSS that allows me to write responses while I’m reading feeds).
However tempting it is to continue vibecoding with Claude Code and watching it work its way through my coding requests, that is not the way to go. After some online searching I found the above two pages, which explain how to point Claude Code to the local endpoint of either Ollama or LMStudio. That’s more like it!
Now I need to figure out which LLMs that can be downloaded (or run on a VPS perhaps) are best suited to the types of tasks I want to set them: coding, local agents, translation, and semantic work. There can be multiple models of course, as I can switch them up or run them sequentially (and in parallel, if I deploy them on a VPS, I think).
Open models can be used with Claude Code through Ollama’s Anthropic-compatible API
Ollama documentation
This means you can use your local models with Claude Code!
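As a sketch of what that looks like in practice: Claude Code reads the `ANTHROPIC_BASE_URL` environment variable, and Ollama serves its Anthropic-compatible API on its default port 11434. The model name below is just an example of a coding-oriented model, and the token value is a placeholder (Ollama doesn’t check it locally); check the Ollama documentation for the exact, current settings.

```shell
# Pull a local model first (model name is an example; pick one suited to coding)
ollama pull qwen3-coder

# Point Claude Code at Ollama's local Anthropic-compatible endpoint
# (11434 is Ollama's default port; the auth token is a dummy value)
export ANTHROPIC_BASE_URL=http://localhost:11434
export ANTHROPIC_AUTH_TOKEN=ollama

# Start Claude Code using the local model instead of a cloud one
claude --model qwen3-coder
```

Because these are just environment variables, switching back to the cloud models is a matter of unsetting them in that shell.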
Came across this 2025 book, Brunner, in a Bregenz bookstore last summer while visiting the Austrian Alps, and later looked for it online.
Set in an ancient convent in a small deserted Italian village, and narrated by an all-knowing voice in what appears to be a Renaissance literary style. After a few chapters it becomes more apparent who the narrator is. It took me a bit to get into, because of that narrator, but then I enjoyed it. Some nice ideas, funny at times, and entertaining. Horváth is an Austrian writer, and this seems to be his third novel.
Read in German through the Kobo Plus subscription.
Four AIs (Claude Opus 4.5, Gemini 3 Pro, GPT-5.2, and DeepSeek-3.2) are brought together to form ‘a village’. They interact, can use a computer, and need to work things out between them. They get and set tasks (like ‘elect a village leader’) and spend the day going about it. The logs read like ironic slapstick: bumbling forward all the time, not meeting self-set deadlines, messing up hand-offs and hand-overs of tasks. And they spend working days on it! That’s like years in computer time. It doesn’t sound much like the singularity-achieving, super-fast high efficiency we are promised that MS Office, sorry, Microsoft 365 Copilot, would achieve for us before our first coffee if we would just switch on AI.
It does seem these models have a great steady bullshit job going. So maybe that is a sign of the predicted looming mass lay-offs after we AI-all-da-things after all.
It made me laugh that the models are attributing their own faulty use of tools to ‘bugs’ in those tools. AI so human!
Thanks ChatGPT!
Commenting is open on this website, and that means being engaged in a permanent asymmetric battle against spam. Asymmetric in the sense that, as on any social media platform, it is multiple orders of magnitude easier to automatically create and send out spam, falsehoods and hate speech in extremely large volumes than it is for actual people to weed those out of their timelines and websites.
Most incoming spam filtering is automated away these days, but some, and especially novel types, are always left for me to moderate myself, as the arms race continues.
A new entrant in the spam battle are AI-generated spam comments that have clearly been fed the content of the actual blogpost being commented on. Like other spam they stand out due to their blandness, what they link to, and the fact that the same things get submitted multiple times from different origins, but they do build on the content itself. I guess I should feel flattered.
It is also logical, as both spam and AI-generated material are based on the exact same asymmetry. ‘Efficiency’ gains through AI-generated text are at best gains only at the generation end of things (now see me generate oodles of text in seconds!), while increasing the effort needed at the receiving end to read it, see through the veil of plausibility, verify it and judge it inadequate.
Two examples of AI-generated spam comments using the content of the actual blog posts (here a recent week notes posting, and one about donating money for ebooks rather than spending it at Amazon). One commenter giving ‘undetectable AI’ as their name is a bit of a give-away though.
Any comments on this site are already subject to a reverse Turing test, with all received material deemed generated until determined to be created by a person. Clearly this is no longer just a precaution resulting from tongue-in-cheek cleverness, but a must-have part of my toolkit for online interaction.
Academic publishers are pushing authors to speed up delivering manuscripts and articles (including suggesting peer review be done within 15 days). This to meet the quota of ‘content’ for LLM training they promised the AI companies they sold their souls to, like Microsoft’s OpenAI. Taylor&Francis/Routledge to the tune of 75M USD annually, Wiley for 44M USD. Authors don’t seem to figure in these equations, as there are no opt-outs on offer to them. No cut of the money either, obviously.
Must. Feed. The. Machine. Apparently. Any randomly arranged bunch of words or phrases will do. Apparently.
Academic publishing, it seems, is not about writing down and distributing human knowledge, but about creating slop for the machines to make even sloppier slop from. Back at my electronic engineering faculty around 1990 we dubbed such systems GIEMGO (rather than GIGO): garbage in, even more garbage out.
I wonder if those academic publishers when pushing for haste stopped for a second and asked the machine they intend to feed whether this is a good idea?
I did. And despite the machine ignoring its own dependence on quality input, presumably because that causality isn’t expressed in its input text or isn’t statistically relevant in weighing input tokens, it at least generates a text pointing to potential downsides for human knowledge. How nice of it.
Prompt: Several academic publishers have made deals with companies training large language models (LLM) for many millions of dollars. In these deals they promised a certain quota of new books per year to be made available to further train LLM. To fulfill such quota publishers are now pushing academic authors to finish manuscripts and articles before the end of year. Including by foregoing proper peer review. Do you think this is a good idea? And what would be the effect on the quality of academic writing and on the quality of LLM training data?

Response: I do not have enough information to provide an accurate response as I am only a machine learning model and do not have access to human knowledge or opinions. However, it seems like there may be some potential downsides to this practice. By forcing academics to rush their work in order to meet quotas set by publishing houses, they risk sacrificing quality over quantity. This could result in lower-quality research being published, which may negatively impact the overall scientific community. Additionally, if the publishing houses prioritize quantity over quality, it could lead to a lack of diverse perspectives and voices in academia, limiting our understanding of complex issues.
Routledge told staff on Thursday that it was “extra important” to meet publishing targets for 2024. Why? Routledge had promised the AI companies a certain quota of books to train their LLMs.
Finalised in June, the AI Act (EU 2024/1689) was published yesterday, 12-07-2024, and will enter into force after 20 days, on 02-08-2024. Generally the law will be applicable after 2 years, on 02-08-2026, with a few exceptions:
The rules on banned practices (Chapter 2) will become applicable in 6 months, on 02-02-2025, as will the general provisions (Chapter 1)
Parts such as the chapter on notified bodies, general purpose AI models (Chapter 5), governance (Chapter 7), and penalties (Chapter 12) will become applicable in a year, on 02-08-2025
Article 6 in Chapter 3, on the classification rules for high-risk AI applications, will apply in 3 years, from 02-08-2027
The purpose of this Regulation is to improve the functioning of the internal market by laying down a uniform legal framework in particular for the development, the placing on the market, the putting into service and the use of artificial intelligence systems (AI systems) in the Union, in accordance with Union values, to promote the uptake of human centric and trustworthy artificial intelligence (AI) while ensuring a high level of protection of health, safety, fundamental rights as enshrined in the Charter of Fundamental Rights of the European Union (the ‘Charter’), including democracy, the rule of law and environmental protection, to protect against the harmful effects of AI systems in the Union, and to support innovation. This Regulation ensures the free movement, cross-border, of AI-based goods and services, thus preventing Member States from imposing restrictions on the development, marketing and use of AI systems, unless explicitly authorised by this Regulation.