Beating the distance tax: why I picked Cloudflare R2 over S3 for my SaaS

At some point, every system hits physics.

You can optimize queries, tune your infrastructure, throw more compute at the problem. But the moment you depend on moving data across the world, you’re bounded by distance.

And distance is expensive.

Not just in time, but in complexity.

I ran into this building a global, multi-tenant SaaS for OTA updates. The problem showed up immediately, and it wasn’t subtle.

Uploads.

We tend to think storage is a solved problem. It isn’t.

Reads are solved. CDNs handle that. Everything feels fast.

Writes are where things break.

If a developer in Tokyo uploads a 200MB bundle to a bucket in us-east-1, that request is doing a full transcontinental trip. You’re dealing with TCP setup, packet loss, retries. Suddenly what should feel instant becomes slow and unpredictable.
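A rough back-of-envelope makes the tax concrete. The numbers here are illustrative assumptions, not measurements: roughly 160 ms round trip from Tokyo to us-east-1, three round trips of connection setup (TCP, TLS 1.3, the first HTTP exchange), and maybe 50 Mbit/s of effective long-haul throughput.

```typescript
// Back-of-envelope for a 200 MB upload from Tokyo to us-east-1.
// All constants are assumptions for illustration, not measurements.

const RTT_MS = 160;            // assumed Tokyo -> us-east-1 round trip
const SETUP_ROUND_TRIPS = 3;   // TCP handshake + TLS 1.3 + first HTTP exchange
const THROUGHPUT_MBIT_S = 50;  // assumed effective long-haul throughput
const BUNDLE_MB = 200;

function uploadEstimateMs(
  rttMs: number,
  roundTrips: number,
  sizeMb: number,
  mbitPerS: number
): number {
  const setupMs = rttMs * roundTrips;               // paid before the first byte
  const transferMs = (sizeMb * 8 / mbitPerS) * 1000; // raw transfer time
  return setupMs + transferMs;
}

console.log(uploadEstimateMs(RTT_MS, SETUP_ROUND_TRIPS, BUNDLE_MB, THROUGHPUT_MBIT_S));
// 32480 ms -- over half a minute, before counting packet loss or retries
```

Half a second gone before a single byte moves, half a minute for the transfer itself, and every retry after a dropped packet pays a similar toll.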

That’s the distance tax.

And once you see it, you can’t ignore it.


The wrong solutions are obvious

You have a few options.

You can introduce regional buckets and build replication. Now you’re maintaining consistency, dealing with edge cases, and writing infrastructure you never wanted to own.

Or you can use S3 Transfer Acceleration. Which works. But now your costs scale with geography in a way that’s hard to predict. That’s not something I’m willing to accept in a SaaS.

Neither option made sense.

What actually matters

The problem isn’t storage. The problem is where the write happens.

Once you reframe it that way, the solution becomes clearer.

Why R2 changed the equation

Cloudflare R2, with Local Uploads enabled, does something simple but fundamental: it removes geography from the write path.

Uploads terminate at the nearest PoP. Not where your bucket lives. Where your user is.

So a developer in Tokyo uploads locally. The system handles the rest:

  • Metadata is immediately available globally
  • The file is replicated asynchronously
  • Consistency is preserved

And most importantly: your system doesn’t need to care.
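To make that contract concrete, here is a toy model of what the application sees. This is not R2’s implementation, just a sketch: metadata commits the moment the local write completes, while the bytes replicate in the background without the caller waiting.

```typescript
// Toy model of the write path described above -- not R2's actual internals,
// just the contract the application observes.

type BundleMeta = { key: string; size: number; uploadedAt: string };

class ToyEdgeStore {
  private meta = new Map<string, BundleMeta>(); // visible immediately
  private replicated = new Set<string>();       // bytes copied in the background

  // The upload terminates "locally": metadata commits right away.
  upload(key: string, size: number): void {
    this.meta.set(key, { key, size, uploadedAt: new Date().toISOString() });
    // Replication is asynchronous; the caller never waits for it.
    queueMicrotask(() => this.replicated.add(key));
  }

  // Any region can already resolve the object's metadata.
  head(key: string): BundleMeta | undefined {
    return this.meta.get(key);
  }
}

const store = new ToyEdgeStore();
store.upload("release-bundles-prod/v1.2.3.zip", 200 * 1024 * 1024);
console.log(store.head("release-bundles-prod/v1.2.3.zip")?.size); // 209715200
```

The point of the sketch is what is missing: no replication logic, no region awareness, no consistency handling in the caller.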


What this means in practice

Before this, cross-region uploads were:

  • slow
  • inconsistent
  • sensitive to network conditions

After:

  • ~4x faster in real scenarios
  • stable
  • predictable

But the real gain wasn’t latency. It was removing an entire class of problems from the system.

The part I actually care about

I didn’t switch to R2 because it’s “better storage”. I switched because it let me not think about this problem anymore.

No regional orchestration. No replication logic. No custom edge routing. And no hidden cost model tied to geography.


Implementation

Nothing changed. That’s the point.

import { S3Client } from '@aws-sdk/client-s3';

// PostgresStorage is our own wrapper: metadata lives in Postgres, bytes in
// object storage. R2 credentials are picked up from the environment by the
// SDK; only the endpoint and region changed.
const storage = new PostgresStorage({
  client: pgPool, // existing pg.Pool
  s3: new S3Client({
    region: 'auto', // R2 uses 'auto'
    endpoint: process.env.R2_ENDPOINT
  }),
  bucket: 'release-bundles-prod'
});

Enable Local Uploads:

npx wrangler r2 bucket local-uploads enable release-bundles-prod

That’s it.

If a change requires you to rethink your entire system, it’s not a good abstraction. This didn’t.

The real takeaway

Most systems try to work around distance. R2 removes it from the critical path. That’s a very different approach.

Right now, a developer in Tokyo uploads a bundle with the same experience as someone sitting next to the primary region.

And for a platform like this, that’s not a performance improvement.

That’s table stakes.
