So, following a blog post[1] from @webology and some online guides about #VSCode plugins, I set up #ollama with several models and connected it to the Continue plugin.
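(If you're reproducing this: a quick sanity check that the Ollama server is actually up and serving your pulled models might look like the sketch below. It assumes Ollama's default port, 11434, and uses its documented /api/tags endpoint.)

```python
# Quick sanity check: is the local Ollama server up, and which
# models has it pulled? Uses Ollama's REST API on its default port.
import requests

resp = requests.get("http://localhost:11434/api/tags", timeout=5)
resp.raise_for_status()

for model in resp.json().get("models", []):
    print(model["name"])
```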
My goal: see if local-laptop #llm code assistants are viable.
My results: staggeringly underwhelming, mostly in terms of speed. I tried gemma3, qwen2.5, and deepseek-r1; none of them generated completions fast enough to be a real help while coding.
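("Fast enough" is subjective, so if you want to put a number on it, something like this sketch works: it asks a model for a completion and computes tokens per second from the eval_count/eval_duration fields Ollama's /api/generate endpoint reports. Again assuming the default port, and that the model is already pulled.)

```python
# Rough tokens-per-second measurement against a local Ollama model.
# Ollama's /api/generate response includes eval_count (tokens generated)
# and eval_duration (in nanoseconds).
import requests

MODEL = "gemma3"  # swap in qwen2.5 or deepseek-r1 to compare

resp = requests.post(
    "http://localhost:11434/api/generate",
    json={
        "model": MODEL,
        "prompt": "Write a Python function that reverses a linked list.",
        "stream": False,
    },
    timeout=300,
)
resp.raise_for_status()
data = resp.json()

tokens_per_sec = data["eval_count"] / (data["eval_duration"] / 1e9)
print(f"{MODEL}: {data['eval_count']} tokens at {tokens_per_sec:.1f} tok/s")
```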