Giving a second life to old hardware: From AI failure to Space Science

Introduction

In my previous article about LLM tool development, I tried to repurpose my 2011 Mac Mini to run local agents with Ollama. Unfortunately, I discovered too late that while the machine could load the models and use Ollama's chat feature, it couldn't execute my custom model inside the application. This was disappointing: I wanted a better excuse for keeping the computer running 24/7, and ultimately it was unfit for the task due to thermal constraints. So I started looking for something different to do with it, and that turned into quite a pleasant little journey. ...

February 11, 2026