
what i learned building a lovable-like website builder from scratch

Valtteri Savonen

Humble beginnings

I started testing Lovable in late 2024, at least in the form it exists now. From the start it felt different. Prototyping became trivial, and more importantly, starting a project stopped being painful. You could get a POC in minutes, move it to Cursor, and immediately focus on real problems instead of boilerplate.

While playing with ideas one day, I wondered what would happen if I built a website builder. My background is heavily rooted in building all kinds of websites with Next.js, and around that time people were starting to realize you could ship high quality sites without clunky builders like Wix or WordPress. The initial idea was simple. Generate pure marketing sites like landing pages using AI.

Big players were focusing on full blown applications. I thought smaller scope, executed really well, could be a way into both money and impact. Focus on one use case and make it actually work. Development of Builddrr started in spring 2025.

Baby steps

Since I never learned system design or anything remotely useful at school, I went to my private tutor for help. Grok.

It was obvious this wasn’t a weekend project. If I wanted to go from zero to one, I needed a real plan. I started by asking how Lovable handled their infrastructure, especially given how limited AI models were at the time, UI wise at least.

During those discussions it became clear that this type of product is conceptually very simple. Creating a site boils down to four steps.

  1. The user describes their site.

  2. The AI response stream is parsed with a custom parser and changes are deployed to a running Firecracker VM as they happen.

  3. A URL is returned to the frontend and embedded in an iframe so the user can see the generated site within minutes.

  4. The user can prompt changes, or deploy the site to the cloud if they’re happy.

That’s it. No magic. Simple ideas are often the most deceptive ones though. Implementing this in reality was an absolute pain in the ass. A large part of that pain came from overthinking the problem, trying to explore every possible way to do things better. Every optimization felt like it could become a competitive edge.
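The four steps can be sketched end to end in a few lines. Everything here is a stand-in: the stream, the VM, and the URL are stubs standing in for the real model stream, Firecracker VM, and preview infrastructure.

```typescript
// A minimal sketch of the four-step loop. All names here are illustrative
// stubs, not the real implementation.

type FileChange = { path: string; content: string };

// Stub for step 1 → 2: the model stream, already parsed into file changes.
async function* streamModelResponse(prompt: string): AsyncGenerator<FileChange> {
  yield { path: "/app/page.tsx", content: `// site for: ${prompt}` };
}

// Stub for the running VM: files are written into it as they arrive,
// without any restart or redeploy.
class PreviewVm {
  files = new Map<string, string>();
  write(change: FileChange) {
    this.files.set(change.path, change.content);
  }
  url() {
    return "https://preview.example.dev"; // step 3: iframe src
  }
}

async function buildPreview(prompt: string): Promise<string> {
  const vm = new PreviewVm();
  for await (const change of streamModelResponse(prompt)) {
    vm.write(change); // step 2: deploy changes as the stream happens
  }
  return vm.url(); // step 3: hand the URL to the frontend
}
```

Step 4 is just running the same loop again with the user's follow-up prompt, or handing the final file set to the deploy pipeline.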

Reality hits

Turning this design into something real meant writing a custom AI output parser, building a full preview container system, implementing an AI chat interface, and creating a deployment pipeline. All of this had to be done while keeping costs extremely low and token usage under control.

Parser

The parser was actually one of the easier parts. The key is strict instructions. The model must output a very specific structure using well defined XML tags so files can be parsed, reconstructed, and uploaded to the correct locations.

From there it’s just streaming, catching response chunks, caching them, and assembling valid files on the fly. This part is surprisingly straightforward.

Here’s an example of what the AI would output and what the parser would try to catch.

### layout.tsx (Next.js app)
<builddrr-code>
<builddrr-write file="/app/layout.tsx">
import "./globals.css";
import type { Metadata } from "next";
import { Inter } from "next/font/google";

const inter = Inter({ subsets: ["latin"] });

export const metadata: Metadata = {
  title: "Cafe Aroma - Freshly Brewed Coffee",
  description: "Enjoy the best coffee in town with our freshly roasted beans and warm atmosphere.",
  openGraph: {
    title: "Cafe Aroma - Freshly Brewed Coffee",
    description: "Enjoy the best coffee in town with our freshly roasted beans and warm atmosphere.",
    url: "https://cafearoma.com",
    siteName: "Cafe Aroma",
    images: [{ url: "https://cafearoma.com/og-image.jpg", width: 1200, height: 630 }],
    locale: "en_US",
    type: "website",
  },
  twitter: {
    card: "summary_large_image",
    title: "Cafe Aroma - Freshly Brewed Coffee",
    description: "Enjoy the best coffee in town with our freshly roasted beans and warm atmosphere.",
    images: ["https://cafearoma.com/twitter-image.jpg"],
  },
};

export default function RootLayout({ children }: { children: React.ReactNode }) {
  return (
    <html lang="en">
      <body className={inter.className}>{children}</body>
    </html>
  );
}
</builddrr-write>
</builddrr-code>
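Catching those blocks from the stream can be sketched like this. The tag names match the example above; the buffering approach and the `push` interface are illustrative, not the production parser.

```typescript
// Minimal sketch of the streaming parser: buffer incoming chunks and emit a
// file whenever a complete <builddrr-write> block has arrived.

type ParsedFile = { path: string; content: string };

class BuilddrrParser {
  private buffer = "";

  // Feed one stream chunk; returns any files completed by this chunk.
  push(chunk: string): ParsedFile[] {
    this.buffer += chunk;
    const files: ParsedFile[] = [];
    const re = /<builddrr-write file="([^"]+)">([\s\S]*?)<\/builddrr-write>/;
    let match: RegExpExecArray | null;
    while ((match = re.exec(this.buffer)) !== null) {
      files.push({ path: match[1], content: match[2].trim() });
      // Drop the consumed block so the buffer stays small.
      this.buffer = this.buffer.slice(match.index + match[0].length);
    }
    return files;
  }
}
```

Each completed file can then be written straight into the preview VM while the rest of the response is still streaming in.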

Preview system

This was hell.

At the time, sandbox and container solutions were not designed for running AI generated code on demand and instantly returning a live URL for an iframe preview. Nothing fit the requirements out of the box.

The constraints were brutal. Cold starts needed to be under 250 milliseconds. Files had to be writable without restarting or redeploying anything. I initially tried Docker, which in hindsight is hilarious. That idea died fast.

Eventually I landed on Firecracker microVMs. Some providers offered them, but they were either insanely expensive or had unusable SDKs. Fly.io machines ended up being the best option at the time. They gave me enough control to build what I needed, so I built an entire service layer around them and got a working demo running.
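To give a feel for that service layer, here is the kind of helper it contained: building a machine-create request for a preview VM. The field names follow my reading of Fly's Machines API at the time, and the image name is made up, so treat the whole shape as an assumption rather than a reference.

```typescript
// Sketch of a create-machine payload for a preview VM. Field names are my
// assumption of the Fly Machines API shape; the image is hypothetical.

type MachineRequest = {
  name: string;
  config: {
    image: string;
    guest: { cpu_kind: string; cpus: number; memory_mb: number };
    services: { internal_port: number; protocol: string }[];
  };
};

function previewMachine(projectId: string): MachineRequest {
  return {
    name: `preview-${projectId}`,
    config: {
      image: "registry.fly.io/builddrr-preview:latest", // hypothetical image
      guest: { cpu_kind: "shared", cpus: 1, memory_mb: 256 }, // keep costs low
      services: [{ internal_port: 3000, protocol: "tcp" }], // Next.js server
    },
  };
}
```

The payload would then be POSTed to the Machines API, and the returned machine's address wired into the iframe preview.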

I honestly don’t remember why I eventually moved away from that solution, because it did work. What I do remember is discovering Vercel’s sandbox system and realizing it solved the problem almost perfectly. Their attention to detail showed. It was exactly what I needed, and it was the same infrastructure they were already using internally for v0 previews.

AI chat

Building a chat app is easy now. Shadcn UI gives you polished components for free. Vercel’s AI SDK lets you route nearly any model through a single endpoint. Tooling is genuinely good.

That doesn’t mean there weren’t issues. I was using Anthropic models, and their rate limits were killing me. Lovable style applications rely heavily on tool calls for actions like deploying previews or mutating state, and tools eat tokens the way a blue whale eats krill.

Keeping usage under control while still delivering fast feedback loops was a constant battle.
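One of the simpler levers here, sketched below: estimate tokens crudely (roughly four characters per token is a common rule of thumb) and drop the oldest messages until the chat history fits a budget, always keeping the system prompt. The types and numbers are illustrative, not what Builddrr actually shipped.

```typescript
// Sketch of token budgeting for chat history. The 4-chars-per-token estimate
// is a rough heuristic, not a real tokenizer.

type Message = { role: "system" | "user" | "assistant"; content: string };

const estimateTokens = (text: string) => Math.ceil(text.length / 4);

function trimHistory(messages: Message[], budget: number): Message[] {
  const system = messages.filter((m) => m.role === "system");
  const rest = messages.filter((m) => m.role !== "system");
  const used = () =>
    [...system, ...rest].reduce((n, m) => n + estimateTokens(m.content), 0);
  // Drop the oldest non-system messages until the history fits the budget.
  while (rest.length > 1 && used() > budget) rest.shift();
  return [...system, ...rest];
}
```

A real tokenizer gives tighter numbers, but even a crude cap like this stops tool-heavy conversations from snowballing past the rate limits.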

Deployment

Deployment was the easiest part. By the time I got there, I had no tears left anyway.

User projects were deployed to Cloudflare. Fair pricing, a solid Workers product, and a direct line to CF’s product manager I could complain to about documentation and design decisions. Most of those issues are gone now. I’d still recommend them without hesitation.

Conclusion

In the end, I had a working product. Something real. Something usable. Something that could, in theory, compete with the big players.

Or so I thought.

By that point I had already decided to pivot, and to do it fast. Building a product like this solo, with zero funding, is a bad idea. Vercel has hundreds of employees and effectively unlimited resources working on v0 and its surrounding ecosystem. Lovable raised $200 million and hired some of the best people in the industry.

I couldn’t compete with that, even with a narrow niche.

During fall 2025, while I was still working on this, new startups with identical ideas appeared every week. Most of them were backed by millions in VC money. That was the final signal.

The project wasn’t a commercial success, but it was still worth it. The bar is much higher when you build something people are actually supposed to use. Secure, scalable software with good UX is hard. It takes time. Ngl it hurts.

Here’s a link to the project