Run Your Own Private Coding Assistant with OpenCode on Leafcloud
OpenCode is an open-source alternative to Claude Code and Copilot that lets you connect to your own model provider. We've put together a Terraform setup that gets you running in minutes, complete with vLLM, Qwen3-Coder, and A100 GPU power.
If you’ve been keeping an eye on AI-assisted coding tools, you’ve probably noticed the buzz around OpenCode. With 100K+ GitHub stars, 700+ contributors, and over 2.5 million monthly developers, it’s become the go-to open-source alternative to Claude Code and Copilot.
Why Developers Actually Like It
OpenCode is 100% open source (MIT licensed) and completely provider-agnostic. You can point it at Claude, OpenAI, Google, or your own self-hosted models. Built by the team behind SST (Serverless Stack), it’s terminal-native with a proper TUI that doesn’t feel like an afterthought.
A few things that make it stand out:
- LSP support out of the box — it actually understands your codebase structure
- Two built-in agents — `build` for full development work, `plan` for read-only analysis and exploration
- Client/server architecture — run the backend on a remote machine, drive it from wherever
- Not locked to any provider — swap models without changing your workflow
That provider flexibility is what makes this interesting for us. It means you can run a private coding assistant on European infrastructure, keeping your code away from Big Tech’s servers entirely.
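To make that concrete, pointing OpenCode at a self-hosted endpoint is a matter of a small JSON config. The fragment below is a hedged sketch, not copied from the official docs: the provider key, the `@ai-sdk/openai-compatible` package name, the server address, and the model ID are all assumptions to double-check against the client setup guide linked at the end of this post.

```json
{
  "$schema": "https://opencode.ai/config.json",
  "provider": {
    "leafcloud-vllm": {
      "npm": "@ai-sdk/openai-compatible",
      "name": "Leafcloud vLLM",
      "options": {
        "baseURL": "http://YOUR_SERVER_IP:8000/v1"
      },
      "models": {
        "qwen3-coder-30b-awq": {
          "name": "Qwen3 Coder 30B (AWQ)"
        }
      }
    }
  }
}
```

The important idea is the `baseURL`: because the backend speaks an OpenAI-compatible protocol, OpenCode treats your own server like any other provider.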
The Setup
We’ve created a Terraform configuration that spins up everything you need: vLLM serving a quantized Qwen3-Coder model on an A100 GPU. Infrastructure-as-code means you can deploy, tear down, and redeploy without clicking through dashboards or trying to remember what you configured last time.
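In practice that workflow is just the standard Terraform CLI. A sketch, assuming you've cloned the public repo — the exact input variables (credentials, flavor names, and so on) are defined in the repo itself, so check its README rather than this outline:

```shell
git clone https://github.com/leafcloudhq/tf-leafcloud-opencode.git
cd tf-leafcloud-opencode

terraform init      # download the required providers
terraform plan      # preview the GPU server and networking to be created
terraform apply     # spin everything up

# Done for the day? Tear it down so the A100 stops billing:
terraform destroy
```

Because the whole deployment lives in state, `destroy` followed by a later `apply` gets you back to an identical setup.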
The whole thing runs on Leafcloud, so your AI coding buddy is also heating someone’s apartment while it helps you debug that regex.
What’s in the Box
The stack is straightforward:
- vLLM for efficient model serving
- Qwen3-Coder 30B AWQ as the brain (4-bit quantized, so it runs comfortably on a single A100)
- A100 GPU for the muscle
- Terraform to tie it all together
No vendor lock-in. No subscriptions to manage. Just your infrastructure, your models, your data.
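One nice side effect: vLLM exposes an OpenAI-compatible HTTP API, so the server isn't tied to OpenCode — any client that speaks that protocol can use it. A minimal Python sketch of what a request looks like (the server address and model ID are placeholders for your own deployment, not values from the repo):

```python
import json
import urllib.request


def build_chat_request(base_url: str, model: str, prompt: str) -> tuple[str, dict]:
    """Assemble an OpenAI-compatible chat completion request for a vLLM server."""
    url = f"{base_url.rstrip('/')}/v1/chat/completions"
    payload = {
        "model": model,
        "messages": [{"role": "user", "content": prompt}],
        "max_tokens": 256,
    }
    return url, payload


url, payload = build_chat_request(
    "http://YOUR_SERVER_IP:8000",   # placeholder: your vLLM endpoint
    "qwen3-coder-30b-awq",          # placeholder: the model ID vLLM serves
    "Explain this regex: ^\\d{4}-\\d{2}$",
)

# Sending it requires a running server, e.g.:
# req = urllib.request.Request(
#     url,
#     data=json.dumps(payload).encode(),
#     headers={"Content-Type": "application/json"},
# )
# body = json.loads(urllib.request.urlopen(req).read())
# print(body["choices"][0]["message"]["content"])
```

Swap the base URL and you've moved providers — which is the whole point of the provider-agnostic design.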
Get Started
- Public repo: github.com/leafcloudhq/tf-leafcloud-opencode
- Server setup docs: docs.leaf.cloud/en/latest/private-llm/team-opencode-vllm/
- Client setup guide: docs.leaf.cloud/en/latest/private-llm/opencode-client-setup/
Fork it, tweak it, make it yours. If you build something cool with it, let us know. Open source contributions are always welcome.
Hat Tip
Big thanks to the OpenCode team for building this in the open. The project has over 700 contributors now, and it shows. The tooling is solid, the docs are actually useful, and new releases ship regularly. This is what open source is supposed to look like.
Ready to Run Your Own Private Coding Assistant?
Sign up for Leafcloud and deploy in minutes, or schedule a call to talk through your setup. Your code stays in Europe, your models stay under your control, and your compute heats homes instead of warming the atmosphere.