Favorited Ollama Claude Code integration by Ollama
Favorited LM Studio Claude Code integration by LM Studio blog

Last Friday I participated in a workshop by Frank Meeuwsen on using Claude Code. I’ve been reluctant to use Claude Code for the basic reason that it uses cloud-based models by default. This means that my inputs and any context I provide leave my machine to be gobbled up by data-foraging models. Nevertheless it was fun: I improved my existing personal feed reader (a presentation layer on top of FreshRSS that allows me to write responses while I’m reading feeds).

However tempting it is to continue vibecoding with Claude Code and watching it work its way through my coding requests, that is not the way to go. After some online searching I found the two pages above, which explain how to point Claude Code at the local endpoint of either Ollama or LM Studio. That’s more like it!
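
To make that concrete, here is a small launcher sketch of what I’d try first, using the environment overrides Claude Code documents (ANTHROPIC_BASE_URL, ANTHROPIC_AUTH_TOKEN, ANTHROPIC_MODEL). The port is Ollama’s default (LM Studio’s local server defaults to 1234 instead), and the model name is just an illustration, not something either page prescribes:

```python
import os
import subprocess

# A sketch, not gospel: point Claude Code at a local endpoint via the
# environment variables it documents. Ollama serves on port 11434 by
# default; LM Studio's local server defaults to port 1234.
env = os.environ.copy()
env["ANTHROPIC_BASE_URL"] = "http://localhost:11434"
env["ANTHROPIC_AUTH_TOKEN"] = "ollama"  # placeholder; a local server needs no real key
env["ANTHROPIC_MODEL"] = "qwen3-coder"  # illustrative; use whatever model you pulled

# Launch Claude Code with the overridden environment.
subprocess.run(["claude"], env=env)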

Now I need to figure out which LLMs I can download (or perhaps run on a VPS) are best suited to the types of tasks I want to set them: coding, local agents, translation, and semantic work. There can be multiple models of course, as I can switch them up or run them sequentially (and in parallel if I deploy them on a VPS, I think).

Open models can be used with Claude Code through Ollama’s Anthropic-compatible API

Ollama documentation
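
A quick way to check that the local endpoint really speaks Anthropic’s Messages API is to call it with the regular anthropic Python SDK pointed at localhost. This is a minimal sketch under the same assumptions as above: Ollama on its default port 11434, and an illustrative model name:

```python
import anthropic

# Talk to Ollama's Anthropic-compatible API directly. The api_key is a
# placeholder: the SDK insists on a value, the local server ignores it.
client = anthropic.Anthropic(
    base_url="http://localhost:11434",
    api_key="ollama",
)

response = client.messages.create(
    model="qwen3-coder",  # illustrative; use a model you have pulled locally
    max_tokens=256,
    messages=[{"role": "user", "content": "Say hello from a local model."}],
)
print(response.content[0].text)
```

If that round-trips without touching the cloud, Claude Code pointed at the same base URL should behave the same way.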

This means you can use your local models with Claude Code!

LM Studio blog