Why I Moved a Project Off Cloudflare Workers Before It Even Launched
I built Notary Atlas on Cloudflare Workers. Seemed like the right call at the time. Workers promised zero cold starts, global edge deployment, and D1 for a managed SQLite database. Everything a directory app needs. No server to maintain.
Except that's not how it played out.
The project hasn't launched yet. I'm still building it. And after 120 commits, 22 of them debugging a single feature, I decided to rip the entire Cloudflare stack out and start over on Bunny Magic Containers. Same app. Different runtime. Node.js, not edge.
Here's what happened.
The OG Image Saga
The whole thing started with Open Graph images. Every page on Notary Atlas needs a social sharing image. The standard approach is to generate them at build time or request time using Satori (a library that turns JSX into SVGs, which become PNGs).
Satori works fine in Node.js. Cloudflare Workers, not so much.
Workers run on V8 isolates, not full Node.js. No node:fs. No local file access. No native font loading. The way Satori loads fonts is fundamentally incompatible with how Workers handle files.
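For contrast, here's roughly what Satori font loading looks like in plain Node.js. The node:fs read on the first line is exactly the part a Workers isolate can't do. The font path and markup are made up for illustration.

```ts
import fs from 'node:fs';
import satori from 'satori';

// This single line is impossible on Workers: no filesystem
const fontData = fs.readFileSync('./fonts/Inter-Regular.ttf');

const svg = await satori(
  // Satori accepts a React-element-like object; JSX isn't required
  { type: 'div', props: { children: 'Notary Atlas', style: { fontSize: 64 } } },
  {
    width: 1200,
    height: 630,
    fonts: [{ name: 'Inter', data: fontData, weight: 400, style: 'normal' }],
  }
);
```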
My first attempt used @vercel/og, which wraps Satori. It worked locally. It failed in production. Fonts wouldn't load. Memory constraints kicked in. The image generation would hang or produce corrupted output.
I spent 8 commits trying to fix it. Different font loading strategies. Bundling fonts as base64. Trying alternative image libraries. Nothing worked reliably on the edge runtime.
Eventually I gave up. I removed dynamic OG generation entirely and went with static placeholder images. Then I rebuilt the OG generator as a completely separate Cloudflare Worker with its own deployment pipeline, its own D1 binding, its own rate limiter. Two deployments for one app. Two database connections. Two of everything, because the main app couldn't do what any standard Node.js server does out of the box.
That one feature was the straw that broke the camel's back. Not because it was the only problem, but because it was the most absurd.
The Other Friction Points
OG images were the loudest problem, but they weren't the only one.
LibSQL client incompatibility. The @libsql/client library couldn't be bundled into the edge worker. I had to use dynamic imports and conditional logic to keep it out of the worker bundle. Then I had to create a wrapper component that loaded the search feature client-side only, because the database driver didn't work with server-side rendering on Workers.
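The wrapper amounted to a few lines of next/dynamic with SSR turned off. This is a sketch; the inner component name is illustrative.

```tsx
// DirectorySearchWrapper.tsx: the client-only workaround
'use client';

import dynamic from 'next/dynamic';

// ssr: false keeps the libSQL-touching code out of the server bundle,
// so the Workers build never tries to include @libsql/client
const DirectorySearch = dynamic(() => import('./DirectorySearch'), {
  ssr: false,
});

export default function DirectorySearchWrapper() {
  return <DirectorySearch />;
}
```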
getCloudflareContext() everywhere. Every API route had to call getCloudflareContext() to reach the D1 binding and the other CF-specific bindings. That meant as any type casts scattered through the codebase, because the context types didn't match what Drizzle ORM expected. I had 4+ places where the database was cast to any just to make queries compile.
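A representative sketch of what that pattern looked like. The DB binding name and the schema import are assumptions, but the cast is the point.

```ts
// A representative API route on the old stack (names illustrative)
import { getCloudflareContext } from '@opennextjs/cloudflare';
import { drizzle } from 'drizzle-orm/d1';
import { notaries } from '@/db/schema'; // hypothetical schema module

export async function GET() {
  const { env } = getCloudflareContext();
  // The binding's type didn't match what Drizzle expected, hence the cast
  const db = drizzle((env as any).DB);
  const rows = await db.select().from(notaries);
  return Response.json(rows);
}
```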
ISR didn't work. I set up Incremental Static Regeneration, the standard Next.js caching strategy, and it served stale data on directory pages. I added revalidatePath() calls in webhooks, then self-fetch hacks to warm the cache. Eventually I ripped it all out. Every page hit D1 on every request. No caching at all.
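The webhook hack looked something like this; the revalidated paths are illustrative.

```ts
// app/api/webhooks/route.ts: a sketch of the cache-busting workaround
import { revalidatePath } from 'next/cache';

export async function POST(request: Request) {
  const payload = await request.json();

  // ... apply the update to the database ...

  // Manually invalidate every page that might be showing the stale record
  revalidatePath('/directory');
  revalidatePath(`/directory/${payload.state}`);

  return Response.json({ ok: true });
}
```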
The adapter layer. The entire app ran through @opennextjs/cloudflare, a compatibility shim that translates Next.js server features into Cloudflare Workers. It added build complexity, a .open-next/ directory full of generated artifacts, and its own config file. Another moving part between my code and production.
None of these were dealbreakers on their own. Together, they added up to a codebase full of workarounds and defensive coding. The app worked, but it was held together with duct tape.
The Decision to Move
What made the decision easy was timing. The project hasn't launched. There are no users. The database has almost no real records. This is the cheapest possible time to change infrastructure.
I'd already built NotaryStyle on Bunny Magic Containers. That project uses Bunny's container hosting with a managed libSQL database. Same database protocol as Cloudflare D1. Same ORM (Drizzle). Same everything on the data layer.
The difference is the runtime. Bunny containers run standard Node.js. Not an edge runtime with restrictions. Not a V8 isolate pretending to be a server. Actual Node.js, with node:fs, native font loading, node-cron, and every npm package working exactly as documented.
What the Migration Looked Like
The actual migration was straightforward. Phase 1 was removing everything Cloudflare-specific.
I deleted wrangler.toml. Removed @opennextjs/cloudflare and @cloudflare/workers-types from package dependencies. Deleted the open-next.config.ts adapter config. Removed all getCloudflareContext() calls and replaced them with process.env. Deleted the DirectorySearchWrapper.tsx component that existed only to work around the LibSQL bundling issue. Set output: 'standalone' in the Next.js config, the standard setting for Docker deployments.
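After all that deleting, the deployment-relevant config shrank to roughly this, assuming a TypeScript config file:

```ts
// next.config.ts: 'standalone' makes next build emit a self-contained
// server, which is what the Docker image runs
import type { NextConfig } from 'next';

const nextConfig: NextConfig = {
  output: 'standalone',
};

export default nextConfig;
```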
The database connection went from a multi-line conditional block (LibSQL for local dev, D1 for production, with CF context indirection) to three lines. Same @libsql/client, same connection string, same code in every environment.
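Something like this, give or take the env var names:

```ts
// db.ts: roughly the three lines the conditional block collapsed into
import { createClient } from '@libsql/client';
import { drizzle } from 'drizzle-orm/libsql';

const client = createClient({
  url: process.env.DATABASE_URL!,
  authToken: process.env.DATABASE_AUTH_TOKEN,
});

export const db = drizzle(client);
```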
Phase 2 was containerizing. Dockerfile with three stages (deps, build, runner), same pattern as NotaryStyle. A .dockerignore. An entrypoint.sh script for future cron jobs.
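A sketch of that Dockerfile shape. The Node version and package manager are assumptions; the standalone-output copy steps are the standard Next.js pattern.

```dockerfile
# Stage 1: install dependencies
FROM node:20-alpine AS deps
WORKDIR /app
COPY package.json package-lock.json ./
RUN npm ci

# Stage 2: build the app
FROM node:20-alpine AS build
WORKDIR /app
COPY --from=deps /app/node_modules ./node_modules
COPY . .
RUN npm run build

# Stage 3: minimal runtime image
FROM node:20-alpine AS runner
WORKDIR /app
ENV NODE_ENV=production
# output: 'standalone' puts a self-contained server here
COPY --from=build /app/.next/standalone ./
COPY --from=build /app/.next/static ./.next/static
COPY --from=build /app/public ./public
EXPOSE 3000
CMD ["node", "server.js"]
```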
Then I moved the OG image generator into the main app. One route handler at /api/og using @vercel/og. It worked on the first try. The separate OG worker became unnecessary. One deployment instead of two.
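The consolidated route is more or less the textbook @vercel/og usage. The query param handling and styling here are illustrative.

```tsx
// app/api/og/route.tsx: the consolidated OG endpoint
import { ImageResponse } from '@vercel/og';

export async function GET(request: Request) {
  const { searchParams } = new URL(request.url);
  const title = searchParams.get('title') ?? 'Notary Atlas';

  return new ImageResponse(
    (
      <div
        style={{
          display: 'flex',
          width: '100%',
          height: '100%',
          alignItems: 'center',
          justifyContent: 'center',
          fontSize: 64,
        }}
      >
        {title}
      </div>
    ),
    { width: 1200, height: 630 }
  );
}
```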
What Changed in Practice
| Before | After |
|---|---|
| getCloudflareContext() for DB access | process.env like any Node app |
| as any casts in 4+ places | Clean types throughout |
| Dynamic imports to prevent edge bundling | Standard imports everywhere |
| Separate OG image worker deployment | Single route in main app |
| OpenNext adapter + .open-next/ artifacts | Standard next build |
| wrangler.toml + CF-specific env vars | .env files |
| No cron support | node-cron available |
| No caching (ISR removed) | Can re-enable native ISR |
The Tradeoff
Bunny containers aren't perfect. Cold starts are a real thing. When a container sits idle, Bunny spins it down. The next request has to wait for it to start up. For a directory app that might not get constant traffic, this means occasional slow first requests.
Cloudflare Workers don't have this problem. They scale to zero with virtually no cold start penalty. That's genuinely impressive.
But for my use case, a directory app that isn't launched yet and will have modest traffic at launch, a cold start of a few hundred milliseconds is fine. The developer experience improvement was worth it.
The other consideration is that Bunny Database is still in public preview. The API could change. But it's built on libSQL, which is stable, so the risk feels small.
The Lesson
Edge runtimes are impressive technology. But they come with real constraints. If your app needs full Node.js features, file system access, native libraries, or anything beyond HTTP request handling, the edge runtime will fight you.
For a simple API or a middleware layer, Workers are great. I'd use them again for that. But for a Next.js app with database connections, image generation, and background jobs, standard Node.js on containers is just simpler.
Sometimes the cutting-edge option isn't the best option. Especially when the boring option works on the first try.
Building something that needs to actually work? Let's talk.