Favorited I used AI. It worked. I hated it. by Michael Taggart

An excellent post by Michael Taggart on how it felt to make a much-needed bit of code with the help of Claude Code. The results worked, but he hated how it made him feel. He explores those opposing outcomes without trying to resolve the tension. There is much in here that I recognise from my own experiences, as well as from what I see others do and how they talk about it. Towards the end he talks about ‘the real monster’ here, and I think that is the right frame: we have created a technology monster once more, and Smits’ monster theory (2003) is a tool to bring to bear again. Where will we adapt the monster to our tastes? Where will we shift our cultural understanding of ourselves and the world to make room for the monster? That is, once we’re done either embracing it until the bubble bursts, or rejecting it outright no matter what.

I hated writing software this way. Forget the output for a moment; the process was excruciating. Most of my time was spent reading proposed code changes and pressing the 1 key to accept the changes, which I almost always did. I was basically Homer’s drinking bird.

Michael Taggart

Favorited Ollama Claude Code integration by Ollama
Favorited LM Studio Claude Code integration by LM Studio blog

Last Friday I participated in a workshop by Frank Meeuwsen on using Claude Code. I’ve been reluctant to use Claude Code for the basic reason that it uses cloud-run models by default. This means that my inputs and any context I provide leave my machine to be gobbled up by the data-foraging models. Nevertheless it was fun, and I improved on my existing personal feed reader (a presentation layer on top of FreshRSS that allows me to write responses while I’m reading feeds).

However tempting it is to continue vibecoding with Claude Code and watching it work its way through my coding requests, that is not the way to go. After some online searching I found the above two pages, which explain how to point Claude Code to the local endpoint of either Ollama or LM Studio. That’s more like it!
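As I understand it from those pages, redirecting Claude Code amounts to setting a few environment variables before launching it. A minimal sketch of that setup for Ollama might look like the below; the exact variable names and the model name (`qwen2.5-coder` here) are assumptions on my part, so check the linked documentation for your own setup:

```shell
# Sketch: point Claude Code at a local Ollama endpoint instead of Anthropic's cloud.
# Assumes Ollama is running locally and exposes its Anthropic-compatible API
# on the default port 11434. Variable and model names may differ in your setup.

export ANTHROPIC_BASE_URL="http://localhost:11434"  # local Ollama endpoint, not api.anthropic.com
export ANTHROPIC_AUTH_TOKEN="ollama"                # placeholder; a local endpoint needs no real key
export ANTHROPIC_MODEL="qwen2.5-coder"              # hypothetical locally pulled model

claude  # launch Claude Code; requests now stay on this machine
```

For LM Studio the idea is the same, just with that program’s local server address instead of Ollama’s.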

Now I need to figure out which LLMs that can be downloaded (or run on a VPS perhaps) are best suited to the tasks I want to set them: coding, local agents, translation, and semantic work. There can be multiple models of course, as I can switch them up or run them sequentially (and in parallel, I think, if I deploy them on a VPS).

Open models can be used with Claude Code through Ollama’s Anthropic-compatible API

Ollama documentation

This means you can use your local models with Claude Code!

LM Studio blog