Vibe coding startup Windsurf launches in-house AI models

On Thursday, Windsurf, a startup that develops popular AI tools for software engineers, announced the launch of its first family of AI software engineering models, SWE-1. The startup said it trained the new family of AI models (SWE-1, SWE-1-lite, and SWE-1-mini) to be optimized for the "entire software engineering process," not just coding.

OpenAI has reportedly closed a $3 billion deal to acquire Windsurf, which may make the launch of in-house AI models something of a surprise. However, the release suggests Windsurf is trying to expand beyond developing applications to also developing the models that power them.

According to Windsurf, SWE-1 is its largest and most capable AI model, performing competitively with Claude 3.5 Sonnet, GPT-4.1, and Gemini 2.5 Pro on internal programming benchmarks. However, SWE-1 appears to fall short of frontier AI models, such as Claude 3.7 Sonnet, on software engineering tasks.

Windsurf said its SWE-1-lite and SWE-1-mini models will be available to all users on its platform, free or paid. Meanwhile, SWE-1 will be available only to paid users. Windsurf did not immediately announce pricing for SWE-1 but claimed it is cheaper to serve than Claude 3.5 Sonnet.

Windsurf is best known for tools that let software engineers write and edit code by talking with an AI chatbot, a practice known as "vibe coding." Other popular vibe coding startups include Cursor, the largest player in the space, as well as Lovable. Most of these startups, including Windsurf, have traditionally relied on AI models from OpenAI, Anthropic, and Google to power their applications.

In a video announcing the SWE models, comments from Windsurf's head of research, Nicholas Moy, highlight the company's latest efforts to differentiate its approach. "Today's frontier models are optimized for coding, and they've made tremendous progress over the last couple of years," said Moy. "But they're not enough for us... Coding is not software engineering."

Windsurf notes in a blog post that while other models are good at writing code, they struggle to work across the multiple surfaces that programmers routinely move between, such as terminals, IDEs, and the internet. The startup said SWE-1 was trained using a new data model and a "training recipe that encapsulates incomplete states, long-running tasks, and multiple surfaces."

The startup describes SWE-1 as its "initial proof of concept," suggesting that it may release more AI models in the future.