Poolside Breaks Silence With Open-Weight Laguna XS.2 and M.1 Coding Models

Poolside, the well-funded but quiet AI coding lab, has finally shipped, breaking its three-year vow of silence with two open-weight models: Laguna XS.2 (1.4B parameters) and Laguna M.1 (8B). Both are aimed at coding tasks, both are released under a permissive license, and the benchmarks suggest the company has been doing real work behind closed doors.
Laguna M.1 outperforms DeepSeek Coder V2 Lite and matches Qwen 2.5 Coder 7B on HumanEval+ and SWE-bench Verified — the two metrics that actually correlate with developer experience. Laguna XS.2 at 1.4B is the more interesting release: it runs on a Raspberry Pi 5 and beats every open coding model under 3B parameters by a significant margin.
Why Poolside is releasing weights at all
Poolside raised $500M in 2024 specifically not to be another open-source AI lab. Eiso Kant, the founder, was emphatic that closed weights and proprietary fine-tuning data were the moat. So why ship Laguna with open weights now?
The answer is that the coding-AI market segmented faster than anyone expected. GitHub Copilot owns the top of the funnel. Cursor and Windsurf own the IDE workflow. Devin and Replit Agents own the autonomous-agent layer. There's no oxygen left for a vertically integrated proprietary coding model, so Poolside is shipping open weights to seed an ecosystem it can sell premium services around, the same playbook Mistral ran in 2024.
What's actually new about the Laguna architecture
The technical paper accompanying the release describes "step-conditioned training" — a fine-tuning approach where the model is taught to produce intermediate program states rather than just final code. The result is models that handle multi-file edits and refactors much better than equivalent-sized competitors trained on next-token prediction alone.
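The paper's "step-conditioned training" is only summarized above, but the core idea is a supervised target that interleaves intermediate program states with the final code rather than emitting the final code alone. Here is a minimal sketch of how such a training example might be serialized; every field name, delimiter token, and the helper itself are illustrative assumptions, not Poolside's actual format.

```python
# Hypothetical sketch of a "step-conditioned" training example. The target
# sequence interleaves intermediate program states with the final code,
# instead of containing the final code alone. All delimiters are assumptions.

def build_step_conditioned_example(prompt, steps, final_code):
    """Serialize a multi-step edit into one training target.

    steps: list of (description, program_state) pairs captured while a
           reference solution was written or executed.
    """
    parts = []
    for i, (description, state) in enumerate(steps, start=1):
        parts.append(f"<step {i}> {description}")
        parts.append(f"<state> {state}")
    parts.append("<final>")
    parts.append(final_code)
    return {"input": prompt, "target": "\n".join(parts)}

example = build_step_conditioned_example(
    prompt="Add input validation to parse_port(s)",
    steps=[
        ("convert string to int", "port = int(s)"),
        ("range-check the value", "0 < port < 65536"),
    ],
    final_code=(
        "def parse_port(s):\n"
        "    port = int(s)\n"
        "    if not 0 < port < 65536:\n"
        "        raise ValueError(s)\n"
        "    return port"
    ),
)
print(example["target"].splitlines()[0])  # → "<step 1> convert string to int"
```

The point of the format is that the loss is applied to the reasoning-like state annotations as well as the code, which plausibly explains the reported gains on multi-file edits over plain next-token training.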
The training data is also different. Poolside leaned heavily on synthetic execution traces (real code executed to capture actual runtime states) rather than a pure GitHub corpus. That's expensive to generate but produces models that hallucinate APIs far less often, and benchmark numbers on "function-calls-with-real-libraries" tasks back this up.
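The release doesn't describe how those execution traces are collected, but the general idea — pairing each source line with the runtime state observed when it ran — can be illustrated with Python's standard `sys.settrace` hook. This is a toy stand-in, not Poolside's pipeline; real training data would be generated at vastly larger scale.

```python
import sys

def trace_execution(fn, *args):
    """Run fn(*args) and record (relative_line, local_variables) per step.

    A minimal illustration of execution-trace data: source positions
    paired with the concrete runtime state seen at each line.
    """
    events = []

    def tracer(frame, event, arg):
        # Record 'line' events only for the function under trace.
        if event == "line" and frame.f_code is fn.__code__:
            rel_line = frame.f_lineno - fn.__code__.co_firstlineno
            events.append((rel_line, dict(frame.f_locals)))
        return tracer

    sys.settrace(tracer)
    try:
        result = fn(*args)
    finally:
        sys.settrace(None)  # always detach the tracer
    return result, events

def clamp(x, lo, hi):
    x = max(x, lo)
    x = min(x, hi)
    return x

result, events = trace_execution(clamp, 12, 0, 10)
print(result)              # → 10
print(events[-1][1]["x"])  # → 10 (state at the final traced line)
```

Each recorded event is a (line, locals) snapshot, so the trace shows `x` being clamped step by step — exactly the kind of "code plus runtime state" pairing that, per the article, reduces API hallucination relative to text-only corpora.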
Where Laguna fits in the open coding model stack
The open coding model space is crowded but messy. DeepSeek Coder V2 dominated 2024 but the recent V2.5 update underwhelmed. Qwen 2.5 Coder is reliable but bland. Llama 4 Coder hasn't materialized. Code Llama is dead. StarCoder 3 is rumored but unreleased.
Laguna lands at exactly the right time, with the right size variants, and with a permissive license. M.1 will likely become the default fine-tune base for self-hosted coding assistants in regulated environments — banks, defense, healthcare — where Copilot's data residency story doesn't fly. XS.2 unlocks a genuinely new use case: edge IDE assistants on developer laptops without internet.
My Take
Poolside spent three years and half a billion dollars building proprietary coding AI, and the public end product is two open-weight models that are good but not category-defining. That's not the strategy reversal it sounds like — they kept the actually-frontier models proprietary and shipped the ones that already had a free competitor. The real story is what's coming next: Poolside has clearly been building agentic capabilities in private, and Laguna is the gateway drug. Adopt the open weights, fine-tune them, then upgrade to Poolside's hosted agent stack when your team needs more. It's the AWS playbook for AI coding. I think it works.
FAQ
Where can I download Laguna? Hugging Face under the Poolside organization. Both XS.2 and M.1 are live with quantizations.
Is it really better than DeepSeek Coder V2 Lite? On most benchmarks, yes — particularly multi-file editing tasks. Pure single-function code completion is closer to a tie.
Will Poolside open-source larger models? Eiso Kant explicitly said no in the release post. Anything above 8B stays proprietary.
The Bottom Line
Poolside breaks its silence with a calculated open-source release that boxes in DeepSeek and Qwen while keeping its real moat invisible. The Laguna models are good. The strategy behind them is better.
Related Articles
- DeepMind David Silver No-Human-Data AI
- OpenAI's 5-Principle AGI Framework
- OpenAI Misses Growth Targets