Posts about LLM
- Distilled Reasoning on Strix Halo: Running a Claude-Trained Thinking Model Locally
- An LLM Clean Room Z80 Emulator: Building from Specifications, Not Source Code
- Partial LLM Loading: Running Models Too Big for VRAM
- Running vLLM in Docker with AMD ROCm and the Continue.dev CLI
- Persistent Conversation Context Over Stateless Messaging APIs
- A Bespoke LLM Code Scanner