Poolside Just Gave Builders a Free Open Source Model for Agentic Coding
Poolside just released Laguna XS.2, a free open-source model built for agentic coding that runs locally. For indie builders watching API costs climb, a capable local model changes the math on what you can afford to ship.

The Closed-Source Rally Finally Gets an American Response
Lately the AI release cycle has felt like a tennis match. Anthropic ships a pricey state-of-the-art model. OpenAI fires back a week later. Both are proprietary, hosted, and metered. Meanwhile, Chinese labs have been playing a different game. DeepSeek and Xiaomi keep releasing open, cheap, capable weights that enterprises can actually modify and run themselves. American builders have watched from the sidelines, wondering when a U.S. team would step up with a serious open-source contender. Today, that happened.
Your Laptop Just Became More Interesting
Poolside, a startup based in the U.S., released Laguna XS.2. It is a compact, high-performing large language model available under the MIT license. That means you can download it, fine-tune it, bake it into a commercial product, and never ask permission. Poolside built it specifically for agentic coding, the kind of multi-step workflow where a model writes code, runs it, reads the error, and tries again without a human in the loop.
Poolside designed Laguna XS.2 to run locally. For indie hackers and small teams, this is a bigger deal than benchmark scores. Every call to a cloud API is a line item on a bill that scales with usage. When your side project hits the front page of Hacker News, that beautiful spike in traffic can trigger an equally dramatic spike in your OpenAI tab. A local model caps your costs at electricity and hardware you already own. And because the model is built to punch above its size class, decent consumer hardware can handle it without summoning a server farm.
The performance claims matter, but the licensing matters more. MIT is the most permissive, enterprise-friendly license in common use. There are no commercial-use clauses, no copyleft obligations, no dual-licensing traps; you only have to keep the license notice in your source distribution. You can host it inside your Botflow app, wrap it into a desktop tool, or ship it on-device to mobile users. The weights are yours.
Agentic Coding Needs More Than a Chatbot
Most models on the market are optimized for conversation. They answer questions, draft emails, and summarize PDFs. Agentic coding is messier. It requires structured tool calling, persistence across long sessions, and tolerance for the noise of real software development. Poolside tuned Laguna XS.2 for exactly this, training it on the kind of tasks actual coding agents face: reading repositories, editing files, executing commands, and iterating until tests pass. This is not a generalist pretending to code. It is a specialist.
This distinction matters for vibe-coding platforms and builder tools. When you tell an AI to build a full-stack app, the weakest link is usually the model's ability to hold context across files and manage side effects. A model built for agentic workflows reduces the random failures that make vibe-coding feel like gambling. You still review everything, but you spend less time undoing hallucinated imports and broken syntax.
At Botflow, we ship open-source software because we believe your stack should not be held hostage by another company's pricing committee. Poolside's release fits that philosophy. A builder using Botflow can already generate full-stack web and mobile apps through natural language, ship to their own GitHub repo, and deploy anywhere. Adding a local model like Laguna XS.2 into that loop means the brain of your agent can live on hardware you control, talking to the reactive Convex backend that powers your app. You own the frontend, the backend, and now the reasoning layer.
The release also signals a shift in where value accrues in the AI stack. For the last two years, the assumption has been that frontier capability requires frontier budgets and closed doors. Open releases from DeepSeek, Xiaomi, and now Poolside are eroding that assumption. The gap between open weights and private APIs is narrowing faster than the incumbents want to admit. Smart builders are already designing for a hybrid future: cloud models for the heavy lifting, local models for the fast, private, cheap work that runs in a loop.
If you have not experimented with local models this year, Laguna XS.2 is a low-risk place to start. Download the weights from Hugging Face, run it through your preferred inference engine, and point your coding agent at it. Measure the latency, the quality, and the bill at the end of the month. You might find that the best model for your product is not the one with the biggest marketing budget. It is the one you can run yourself.
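Most local inference engines (vLLM, llama.cpp's server, Ollama) expose an OpenAI-compatible chat endpoint, so pointing an agent at a local model is mostly a matter of swapping the base URL. The sketch below builds such a request with only the standard library; the port, route, and the model id `laguna-xs-2` are assumptions for illustration, so check what your engine actually registers.

```python
import json

# Hypothetical local endpoint; adjust host/port to match your inference engine.
LOCAL_URL = "http://localhost:8000/v1/chat/completions"

def build_request(model: str, prompt: str) -> bytes:
    """Build an OpenAI-style chat-completion payload for a local server."""
    payload = {
        "model": model,  # assumed id; use whatever name your engine loaded
        "messages": [
            {"role": "system", "content": "You are a coding agent."},
            {"role": "user", "content": prompt},
        ],
        "temperature": 0.2,  # keep code edits close to deterministic
    }
    return json.dumps(payload).encode("utf-8")

body = build_request("laguna-xs-2", "Fix the failing test in utils.py")
print(json.loads(body)["model"])

# To actually send it (requires a running local server):
# import urllib.request
# req = urllib.request.Request(LOCAL_URL, data=body,
#                              headers={"Content-Type": "application/json"})
# with urllib.request.urlopen(req) as resp:
#     print(json.load(resp)["choices"][0]["message"]["content"])
```

Because the request shape matches the hosted APIs, you can A/B the same agent against a cloud model and the local weights and compare latency, quality, and cost directly.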