# Cloudflare Workers Pricing & Architecture Guide
## Architecture Fundamentals
### Data Center Network
- 330+ data centers globally receive client requests
- Access to `req.cf.colo` (330+ possible values identifying the specific data center)
- Durable Objects support 9 location hints: enam, wnam, sam, weur, eeur, me, afr, apac, oc
- When a datacenter receives a request, it can access a Durable Object located in the nearest data center that supports DOs
- Durable Objects stay in their assigned location permanently (no automatic migration)
- Latency depends on how far the data center handling the request is from the location the Durable Object was assigned to
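A minimal sketch of how these pieces fit together, assuming a hypothetical `PRICE_COLLECTOR` binding: the Worker logs `request.cf.colo` for the ingress data center, and passes a `locationHint` that only influences where the Durable Object is first created.
```typescript
interface Env {
  PRICE_COLLECTOR: DurableObjectNamespace; // assumed binding name
}

export default {
  async fetch(request, env) {
    // IATA-style code of the data center that received this request, e.g. "FRA"
    const colo = request.cf?.colo;
    console.log(`request entered via ${colo}`);

    // The hint only applies when the object is first created; afterwards it
    // stays in its assigned location permanently.
    const id = env.PRICE_COLLECTOR.idFromName("collector-weur");
    const stub = env.PRICE_COLLECTOR.get(id, { locationHint: "weur" });
    return stub.fetch(request);
  },
} satisfies ExportedHandler<Env>;
```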
### Durable Object Fundamentals
- Once created, a DO is located in the region closest to the initial request
- Throughput: Maximum 1000 requests/second per DO
- Runtime: Maximum 30 seconds of active runtime per request
- Memory: 128MB RAM allocation (fixed, regardless of actual usage)
- Communication: RPC method calls are the standard way to interact with DOs
- Eviction: Can be evicted immediately upon code deployment, or about 10 seconds after the last client disconnects
- In-Memory State: State can be stored in class members for fast reads, but it is lost on eviction (see the sketch below)
- D1 vs DO: DO with SQLite storage is preferred over D1; DO replicas can be managed manually
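A minimal sketch of the in-memory-state point above (class and method names are hypothetical): a class member gives fast reads, while durable storage backs it so the value can be rebuilt after eviction.
```typescript
import { DurableObject } from "cloudflare:workers";

export class QuoteCache extends DurableObject {
  // Fast in-memory copy; lost whenever the object is evicted
  // (deploys, or ~10 s after the last client disconnects).
  private latest?: string;

  async getQuote(): Promise<string> {
    if (this.latest === undefined) {
      // Rehydrate from durable storage after an eviction
      this.latest = (await this.ctx.storage.get<string>("latest")) ?? "n/a";
    }
    return this.latest;
  }

  async setQuote(value: string): Promise<void> {
    this.latest = value;
    await this.ctx.storage.put("latest", value); // persist so eviction is safe
  }
}
```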
## Durable Object Pricing
### Pricing Breakdown
| Resource | Cost |
|---|---|
| Requests | $0.15/million |
| Duration | $12.50/million GB-s |
| Rows scanned | $0.001/million rows |
| Rows written | $1.00/million rows |
| SQL Stored data | $0.20/GB-month |
### Important Notes
Duration Billing:
- Charges for active wall-clock time (when actually running code)
- Always bills for the full 128 MB allocation regardless of actual memory usage
- Calling `accept()` on a WebSocket incurs duration charges for the entire connection time
- Use the WebSocket Hibernation API to avoid duration charges after event handlers finish (see the sketch below)
- DO remains active for 10 seconds after the last client disconnects
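A minimal sketch of that Hibernation pattern (class name is hypothetical): accepting with `ctx.acceptWebSocket()` instead of `accept()` lets the runtime evict the object, and pause duration billing, between messages.
```typescript
import { DurableObject } from "cloudflare:workers";

export class HibernatingRoom extends DurableObject {
  async fetch(request: Request): Promise<Response> {
    const pair = new WebSocketPair();
    const [client, server] = Object.values(pair);
    // Hibernation API: unlike server.accept(), this allows eviction while
    // the connection is idle, so duration charges stop between events.
    this.ctx.acceptWebSocket(server);
    return new Response(null, { status: 101, webSocket: client });
  }

  // Called when a hibernated socket receives a message; the object is
  // only charged for the time this handler runs.
  async webSocketMessage(ws: WebSocket, message: string | ArrayBuffer) {
    ws.send(`echo: ${message}`);
  }

  async webSocketClose(ws: WebSocket, code: number) {
    ws.close(code, "closing");
  }
}
```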
Rows Written:
- Includes row writes and index writes
- `setAlarm()` counts as 1 write operation
SQL Storage Limits:
- Maximum columns per table: 100
- Maximum string, BLOB, or table row size: 2MB
- Maximum arguments per SQL function: 32
- Maximum characters (bytes) in LIKE or GLOB pattern: 50 bytes
### Cost Examples
#### 24/7 Active Durable Object
Continuously running DO (e.g., price collector with recurring alarms):
Duration Cost Calculation:
```
• Memory allocation: 128 MB = 0.128 GB
• Time period: 24 hours × 30 days × 3,600 seconds = 2,592,000 seconds
• GB-seconds: 0.128 GB × 2,592,000 s = 331,776 GB-seconds
• Cost: 331,776 / 1,000,000 × $12.50 = $4.15/month
```
Duration cost per 24/7 DO: $4.15/month
#### PriceCollector DO (9 Instances)
Real-world example from this project (9 location hints):
Duration Cost (assuming 0.2 s active time per second due to API fetch):
```
• 9 DOs × 24h × 3600s × 30 days × 0.2 × 0.128 GB × $12.50 / 1,000,000
• = 9 × 2,592,000 × 0.2 × 0.128 × $12.50 / 1,000,000
• = $7.46/month
```
Write Cost (1 row write for all assets every 10 seconds):
```
• 9 DOs × 24h × 3600s × 30 days / 10 × $1.00 / 1,000,000
• = 9 × 2,592,000 / 10 / 1,000,000
• = $2.33/month
```
Total estimated cost for PriceCollector: ~$9.79/month
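The same arithmetic as a small helper, so the estimate can be re-run with different assumptions (the 0.2 active fraction and 10 s write interval are this project's parameters, not platform constants):
```typescript
const GB_SECOND_RATE = 12.5 / 1_000_000; // $ per GB-second
const ROW_WRITE_RATE = 1.0 / 1_000_000;  // $ per row written
const MONTH_SECONDS = 24 * 3600 * 30;    // 2,592,000 s

// Duration: instances × seconds × fraction of time active × 128 MB allocation
function monthlyDurationCost(instances: number, activeFraction: number): number {
  return instances * MONTH_SECONDS * activeFraction * 0.128 * GB_SECOND_RATE;
}

// Writes: one batched row write per interval per instance
function monthlyWriteCost(instances: number, writeIntervalSeconds: number): number {
  return instances * (MONTH_SECONDS / writeIntervalSeconds) * ROW_WRITE_RATE;
}

console.log(monthlyDurationCost(9, 0.2).toFixed(2)); // "7.46"
console.log(monthlyWriteCost(9, 10).toFixed(2));     // "2.33"
```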
## KV (Key-Value Store) Pricing
### Pricing Breakdown
| Operation | Cost |
|---|---|
| Keys written | $5.00/million |
| Keys deleted | $5.00/million |
| List requests | $5.00/million |
| Keys read | $0.50/million |
| Stored data | $0.50/GB-month |
### Important Characteristics
- Read Throughput: Virtually unlimited
- Write Rate Limit: 1 write per second per key
- Consistency: Eventually consistent, with values cached for up to 60 seconds per data center
  - A write in data center A may take up to 60 seconds to become visible in other data centers
  - Stale data is possible for up to 60 seconds globally
- Not Suitable For: Mutable state that updates frequently (e.g., every second); see the sketch below
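A minimal sketch of a KV-appropriate access pattern (the `PRICES_KV` binding and key are hypothetical): read-heavy, slowly changing data, never per-second writes to the same key.
```typescript
interface Env {
  PRICES_KV: KVNamespace; // assumed binding name
}

export default {
  async fetch(request, env) {
    // Reads are cheap ($0.50/M) with effectively unlimited throughput,
    // but other data centers may serve a value up to ~60 s stale.
    const assets = await env.PRICES_KV.get("asset-list");
    if (assets !== null) return new Response(assets);

    // Writes cost $5.00/M and are limited to ~1 write/second per key,
    // so KV suits configuration-like data rather than live prices.
    const fallback = JSON.stringify(["BTC-USD", "ETH-USD"]);
    await env.PRICES_KV.put("asset-list", fallback, { expirationTtl: 3600 });
    return new Response(fallback);
  },
} satisfies ExportedHandler<Env>;
```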
## Queue Pricing
### Pricing Breakdown
| Resource | Cost |
|---|---|
| Operations | $0.40/million operations |
### Important Notes
Message Delivery: Typically requires 3 operations
- 1 write
- 1 read
- 1 delete
Operation Counting:
- 1 operation per 64 KB of data written, read, or deleted
- A 65 KB message incurs 2 operation charges
- A 127 KB message incurs 2 operation charges
- KB defined as 1,000 bytes
- Each message includes ~100 bytes of internal metadata
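A minimal sketch of that three-operation flow (the `PRICE_QUEUE` binding and message shape are hypothetical): the producer's `send()` is the write, and delivery plus acknowledgement in the consumer account for the read and delete.
```typescript
interface Env {
  PRICE_QUEUE: Queue<{ symbol: string; price: number }>; // assumed binding
}

export default {
  async fetch(request: Request, env: Env): Promise<Response> {
    // Producer: one write operation for a message under 64 KB
    await env.PRICE_QUEUE.send({ symbol: "BTC-USD", price: 0 });
    return new Response("queued");
  },

  async queue(batch: MessageBatch<{ symbol: string; price: number }>, env: Env) {
    // Consumer: each delivered message adds a read, and ack() a delete
    for (const msg of batch.messages) {
      console.log(msg.body.symbol, msg.body.price);
      msg.ack();
    }
  },
};
```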
## Container Pricing
### Pricing Breakdown
| Resource | Cost |
|---|---|
| Memory | $0.0000025/GiB-second |
| CPU | $0.000020/vCPU-second |
| Disk | $0.00000007/GB-second |
### Instance Types
| Name | Memory | CPU | Disk |
|---|---|---|---|
| dev | 256 MiB | 1/16 vCPU | 2 GB |
| basic | 1 GiB | 1/4 vCPU | 4 GB |
| standard | 4 GiB | 1/2 vCPU | 4 GB |
### Important Notes
Request Handling: Incoming requests are proxied through your Worker
State: Each container has its own Durable Object
Billing: You are billed for both Workers and Durable Objects usage
Disk Lifecycle: All disk is ephemeral
- When a container sleeps, the next start has a fresh disk from the container image
- No persistent disk storage between restarts
Instance Lifecycle:
- Cloudflare does not actively shut off containers after a specific time
- Without a `sleepAfter` setting, containers run indefinitely (unless the host restarts)
- Host restarts happen irregularly; there is no guarantee of instance uptime
- A manual `stop()` call or a new deployment will terminate instances
Graceful Shutdown:
- The instance receives a `SIGTERM` signal before shutdown
- A `SIGKILL` is sent after 15 minutes
- Perform cleanup within 15 minutes for a graceful shutdown (see the sketch below)
- The container reboots elsewhere shortly after termination
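A minimal sketch of what that handling looks like inside the containerized server (Node.js code baked into the image, not Worker code; the drain step is a placeholder):
```typescript
// server.ts inside the container image (hypothetical)
import { createServer } from "node:http";

const server = createServer((req, res) => {
  res.end("ok");
});
server.listen(4000);

process.on("SIGTERM", () => {
  // SIGTERM arrives first; SIGKILL follows after 15 minutes, so finish or
  // checkpoint any in-flight work within that window.
  console.log("SIGTERM received, draining connections...");
  server.close(() => process.exit(0));
});
```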
### Usage Example
index.ts:
```typescript
import { Container, getContainer } from "@cloudflare/containers";

export class MyContainer extends Container {
  defaultPort = 4000; // Port the container is listening on
  sleepAfter = "10m"; // Stop if no requests for 10 minutes
}

export default {
  async fetch(request, env) {
    // Clone so the body can still be forwarded after reading it here
    const { "session-id": sessionId } = await request.clone().json();
    // Get container instance for session ID
    const containerInstance = getContainer(env.MY_CONTAINER, sessionId);
    // Pass request to container on its default port
    return containerInstance.fetch(request);
  },
};
```
wrangler.toml:
```toml
[[containers]]
class_name = "MyContainer"
image = "./Dockerfile"
max_instances = 10
instance_type = "basic" # Optional, defaults to "dev"
image_vars = { FOO = "BAR" }

[[durable_objects.bindings]]
class_name = "MyContainer"
name = "MY_CONTAINER"

[[migrations]]
new_sqlite_classes = [ "MyContainer" ]
tag = "v1"
```
Note: You must define a Durable Object to communicate with your Container via Workers.
Optional - durable-object.ts:
```typescript
import { DurableObject } from "cloudflare:workers";

export class MyDurableObject extends DurableObject {
  constructor(ctx: DurableObjectState, env: Env) {
    super(ctx, env);
    // Start the attached container when the Durable Object is constructed
    this.ctx.container.start({
      env: { FOO: "bar" },
      enableInternet: false,
      entrypoint: ["node", "server.js"],
    });
  }

  async someWork() {
    // Talk to the container over a TCP port it exposes
    const port = this.ctx.container.getTcpPort(8080);
    const res = await port.fetch("http://container/awesomeAction", {
      method: "POST",
    });
    return res;
  }
}
```
## Example Worker Script
Basic Durable Object + Worker pattern:
```typescript
import { DurableObject } from "cloudflare:workers";

export interface Env {
  MY_DURABLE_OBJECT: DurableObjectNamespace<MyDurableObject>;
}

// Durable Object
export class MyDurableObject extends DurableObject {
  sql: SqlStorage;

  constructor(ctx: DurableObjectState, env: Env) {
    super(ctx, env);
    this.sql = ctx.storage.sql;
    this.sql.exec(`
      CREATE TABLE IF NOT EXISTS my_table (
        id INTEGER PRIMARY KEY,
        data TEXT
      );
    `);
  }

  async sayHello(): Promise<string> {
    return "Hello, World!";
  }
}

// Worker
export default {
  async fetch(request, env) {
    const id = env.MY_DURABLE_OBJECT.idFromName("foo");
    const stub = env.MY_DURABLE_OBJECT.get(id);
    const rpcResponse = await stub.sayHello();
    return new Response(rpcResponse);
  },
} satisfies ExportedHandler<Env>;
```
## Cost Optimization Tips
Minimize Duration:
- Use WebSocket Hibernation API for long-lived connections
- Batch operations to reduce active time
- Cache frequently accessed data in memory
Optimize Writes:
- Batch SQL writes when possible (e.g., write all price data every 10s instead of every 1s)
- Use transactions to group related writes
- Be mindful that `setAlarm()` counts as a write (see the batching sketch below)
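A minimal sketch of that batching pattern (class, method, and table names are hypothetical): only the latest value per symbol is kept in memory and flushed every 10 seconds, so rows written scale with the flush interval rather than with tick frequency; unflushed values are lost if the object is evicted.
```typescript
import { DurableObject } from "cloudflare:workers";

export class PriceBuffer extends DurableObject {
  // Latest tick per symbol; intermediate ticks are dropped so only one row
  // per symbol is written each flush, not one row per incoming update.
  private latest = new Map<string, { price: number; ts: number }>();

  async record(symbol: string, price: number): Promise<void> {
    this.latest.set(symbol, { price, ts: Date.now() });
    // Schedule a flush only if no alarm is already pending
    // (each setAlarm() itself counts as one row write).
    if ((await this.ctx.storage.getAlarm()) === null) {
      await this.ctx.storage.setAlarm(Date.now() + 10_000);
    }
  }

  async alarm(): Promise<void> {
    if (this.latest.size === 0) return;
    // Single multi-row INSERT; assumes a `prices` table created elsewhere.
    // For large batches, chunk the insert to stay under bound-parameter limits.
    const rows = [...this.latest.entries()];
    const placeholders = rows.map(() => "(?, ?, ?)").join(", ");
    const args = rows.flatMap(([symbol, v]) => [symbol, v.price, v.ts]);
    this.ctx.storage.sql.exec(
      `INSERT INTO prices (symbol, price, ts) VALUES ${placeholders}`,
      ...args
    );
    this.latest.clear();
  }
}
```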
Storage Strategy:
- Optimize the storage format for time-series data (this project uses a binary shard format)
- Archive old data to cheaper storage after 7+ days
- Consider data retention policies (30-day retention in this project)
Request Optimization:
- Cache recent data in memory (60s cache in this project)
- Use RPC calls instead of HTTP requests between DOs
- Implement health checks to avoid unnecessary DO activations