
Planning: Lead processing with Cloudflare Queues

Status: Future / optional. Not implemented. Use this document when planning a queue-based lead pipeline.

References: Cloudflare Queues, Delivery guarantees, Dead Letter Queues, Batching and retries.


Goal

Decouple “accept a lead” from “persist and forward” so that:

  1. Leads are never lost when D1, Make.com, or Analytics Engine are temporarily unavailable.
  2. Failures are visible in one place (Dead Letter Queue) for alerting and replay.
  3. Request path stays fast — the form handler only validates and enqueues, then returns.
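A minimal sketch of the request-path producer under these assumptions: the queue binding is named LEAD_QUEUE and the lead body carries at least email and project fields (illustrative names, not the project's actual schema):

```typescript
// Request path: validate, enqueue, return. Nothing here touches D1,
// Make.com, or Analytics Engine — that all moves to the consumer.

interface Env {
  LEAD_QUEUE: { send(body: unknown): Promise<void> };
}

// Pure validation step, kept separate so it can be tested without the
// Workers runtime. The required fields are assumptions for this sketch.
export function validateLead(body: Record<string, unknown>): string | null {
  if (typeof body.email !== "string" || !body.email.includes("@")) {
    return "invalid email";
  }
  if (typeof body.project !== "string" || body.project.length === 0) {
    return "missing project";
  }
  return null; // valid
}

export default {
  async fetch(request: Request, env: Env): Promise<Response> {
    const body = (await request.json()) as Record<string, unknown>;
    const error = validateLead(body);
    if (error) return new Response(error, { status: 400 });

    // Enqueue and return immediately; persistence and forwarding happen
    // asynchronously in the consumer Worker.
    await env.LEAD_QUEUE.send({ lead: body, receivedAt: Date.now() });
    return new Response("ok", { status: 200 });
  },
};
```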

High-level flow

Form submit
    │
    ▼
┌─────────────────────────────────────────────────────────────┐
│  Request path (existing Worker)                             │
│  • Parse & validate body                                    │
│  • Enrich (geo, scoring, suspicion, duplicate check* etc.)  │
│  • Publish message to Cloudflare Queue (lead payload + meta)│
│  • Return 200 / redirect immediately                        │
└─────────────────────────────────────────────────────────────┘
    │
    ▼
┌─────────────────────────────────────────────────────────────┐
│  Queue: e.g. lead-submissions                               │
│  (guaranteed delivery, retries, optional batching)          │
└─────────────────────────────────────────────────────────────┘
    │
    ▼
┌─────────────────────────────────────────────────────────────┐
│  Consumer Worker (queue subscriber)                         │
│  • For each message (or batch):                             │
│    - D1: duplicate check, insert Leads / LeadSubmissions    │
│    - Make.com: send webhook (with retry)                    │
│    - Analytics Engine: write data point                     │
│  • Ack only after success (or after chosen “success” def.)  │
│  • On repeated failure → message goes to Dead Letter Queue  │
└─────────────────────────────────────────────────────────────┘
    │
    ▼ (on max retries)
┌─────────────────────────────────────────────────────────────┐
│  Dead Letter Queue (DLQ)                                    │
│  • Holds failed messages for inspection and replay          │
│  • Alert when messages land here                            │
└─────────────────────────────────────────────────────────────┘

* Duplicate check could stay in the request path (e.g. KV or a quick D1 read) to avoid enqueueing obvious duplicates, or move entirely into the consumer.
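The consumer box above can be sketched with Cloudflare's queue() handler shape (messages carrying body, ack(), and retry()). The message fields and the persist/forward/record helpers are illustrative stand-ins for the real D1, Make.com, and Analytics Engine code:

```typescript
type Lead = Record<string, unknown>;

interface Deps {
  persist: (lead: Lead) => Promise<void>; // D1: duplicate check + insert
  forward: (lead: Lead) => Promise<void>; // Make.com webhook
  record: (lead: Lead) => Promise<void>;  // Analytics Engine data point
}

interface QueueMsg {
  body: { lead: Lead };
  ack(): void;   // remove the message from the queue
  retry(): void; // redeliver; after max retries the message goes to the DLQ
}

// Factory so the handler can be built from env bindings in a real Worker
// (and from stubs in tests).
export function makeQueueHandler(deps: Deps) {
  return {
    async queue(batch: { messages: QueueMsg[] }): Promise<void> {
      for (const message of batch.messages) {
        try {
          await deps.persist(message.body.lead);
          await deps.forward(message.body.lead);
          await deps.record(message.body.lead);
          message.ack(); // ack only after all three steps succeeded
        } catch {
          message.retry(); // any failure → queue retry, eventually DLQ
        }
      }
    },
  };
}
```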


Benefits

| Benefit | Reason |
| --- | --- |
| Leads never blocked by storage | Request path only enqueues. D1/Make.com/Analytics Engine run in the consumer; if they are down, the queue retries and eventually the message goes to the DLQ. |
| Clear failure visibility | Failed leads end up in the DLQ instead of ad-hoc KV keys; you can count, inspect, and replay. |
| Guaranteed delivery | Queues provide delivery guarantees and retries; no lead is dropped when the consumer temporarily fails. |
| Backpressure and batching | You can batch and tune concurrency so a burst of form submits does not overload D1 or Make.com. |
| Separation of concerns | “Accept lead” vs “persist and forward” are separate; you can change or add consumers without touching the form handler. |

Design choices (to decide when implementing)

1. When to acknowledge (ack) a message

  • Option A: Consumer acks only after D1 + Make.com (and optionally Analytics Engine) all succeed. Any failure triggers queue retries and eventually DLQ.
  • Option B: Consumer acks after D1 and fires Make.com in the background (e.g. waitUntil). Queue retries only on D1 failure; Make.com failures are handled by the existing webhook dead-letter (KV) or a separate DLQ.
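Option B can be sketched as follows; persistToD1, sendWebhook, and storeDeadLetter are hypothetical helpers standing in for the existing D1, Make.com, and KV dead-letter code, and ctx is the queue handler's ExecutionContext:

```typescript
type Lead = Record<string, unknown>;

// Option B: ack as soon as D1 succeeds, fire the webhook in the
// background via waitUntil. Queue retries are then driven only by D1.
export async function handleOptionB(
  message: { body: { lead: Lead }; ack(): void; retry(): void },
  ctx: { waitUntil(p: Promise<unknown>): void },
  deps: {
    persistToD1: (lead: Lead) => Promise<void>;
    sendWebhook: (lead: Lead) => Promise<void>;
    storeDeadLetter: (lead: Lead, err: unknown) => Promise<void>;
  }
): Promise<void> {
  try {
    await deps.persistToD1(message.body.lead);
  } catch {
    message.retry(); // queue retries only on D1 failure
    return;
  }
  message.ack(); // lead is durable in D1; the queue is done with it
  // Webhook failures no longer trigger queue retries; route them to the
  // existing KV dead-letter (or a separate DLQ) instead.
  ctx.waitUntil(
    deps.sendWebhook(message.body.lead).catch((err) =>
      deps.storeDeadLetter(message.body.lead, err)
    )
  );
}
```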

2. Duplicate detection

  • In request path: Optional quick check (e.g. KV “recently seen” or a single D1 read) to avoid enqueueing obvious duplicates; reduces queue volume.
  • In consumer: Full duplicate logic (project + visitor_id / email / phone) as today; duplicate messages result in update or no-op and ack.
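The request-path variant of the check could look like this; the key scheme (lead_seen:<project>:<email>) and the ten-minute TTL are assumptions for the sketch, not the project's real values:

```typescript
// Minimal interface over Workers KV, so the logic is testable with an
// in-memory stub.
interface KVLike {
  get(key: string): Promise<string | null>;
  put(key: string, value: string, opts?: { expirationTtl?: number }): Promise<void>;
}

export function dedupeKey(project: string, email: string): string {
  return `lead_seen:${project}:${email.trim().toLowerCase()}`;
}

// Returns true if this lead was seen recently (skip enqueueing);
// otherwise records it and returns false.
export async function seenRecently(kv: KVLike, project: string, email: string): Promise<boolean> {
  const key = dedupeKey(project, email);
  if ((await kv.get(key)) !== null) return true;
  await kv.put(key, "1", { expirationTtl: 600 }); // assumed 10-minute window
  return false;
}
```

This only filters obvious duplicates; the consumer's full duplicate logic remains the source of truth.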

3. Ordering

  • Queues do not guarantee strict ordering. For lead capture, “last write wins” in D1 is usually enough. If ordering per user/project is required, handle it in the consumer (e.g. by lead key and timestamps).
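The "last write wins" rule reduces to one pure predicate in the consumer; the epoch-millisecond timestamps are an assumption about what the messages carry:

```typescript
// Apply an update only if the incoming submission is at least as new as
// what D1 already holds; otherwise it is a stale out-of-order delivery
// and the consumer acks without writing.
export function shouldApply(existingTs: number | null, incomingTs: number): boolean {
  return existingTs === null || incomingTs >= existingTs;
}
```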

4. Message payload

  • Store in each message: full lead payload (as sent to Make.com today), request metadata (timestamp, request id, referer), and any flags (e.g. is_test_lead, bot signal). Keep message size within Queue limits.
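One possible message shape, mirroring the items above; the field names, and the 128 KB default in the size guard, are assumptions to verify against current Queue limits:

```typescript
export interface LeadQueueMessage {
  lead: Record<string, unknown>; // full payload, as sent to Make.com today
  meta: {
    timestamp: number;           // epoch ms at submit time
    requestId: string;
    referer: string | null;
  };
  flags: {
    isTestLead: boolean;
    botSignal: boolean;
  };
}

// Rough size guard before sending: serialize and measure in bytes.
export function withinSizeLimit(msg: LeadQueueMessage, limitBytes = 128 * 1024): boolean {
  return new TextEncoder().encode(JSON.stringify(msg)).length <= limitBytes;
}
```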

5. Wrangler configuration

  • Producer: Form handler Worker has a Queue binding (e.g. LEAD_QUEUE) and calls env.LEAD_QUEUE.send(...).
  • Consumer: Same or another Worker is configured as a consumer for that queue (event-driven or pull).
  • DLQ: Configure a Dead Letter Queue for the lead queue so failed messages are redirected after max retries.
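The three bullets above map onto a wrangler configuration roughly like this; the queue names, binding name, and retry/batch numbers are illustrative, not decided values:

```toml
# Producer: the form-handler Worker gets a Queue binding.
[[queues.producers]]
queue = "lead-submissions"
binding = "LEAD_QUEUE"

# Consumer: the same (or another) Worker subscribes to the queue.
[[queues.consumers]]
queue = "lead-submissions"
max_batch_size = 10
max_retries = 5
dead_letter_queue = "lead-submissions-dlq"
```

The DLQ itself is just another queue (created via wrangler or the dashboard), so replay can be a small consumer or script reading from it.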

Comparison with current approach

| Aspect | Current (try/catch + KV) | With Queues |
| --- | --- | --- |
| D1 down | Still send to Make.com; log `lead.persistence_failed`; store summary in KV (`lead_fail_d1:*`). | Message retried by queue; then DLQ. Consumer can retry later or replay from DLQ. |
| Make.com down | Webhook retries + KV dead letter (webhook payload). | Consumer can retry whole message (D1 + webhook); or ack after D1 and handle webhook failure separately. |
| Failure visibility | Log events + KV keys with custom prefix. | Single place: DLQ. Count, inspect, replay without custom KV schema. |
| Request latency | Depends on D1 (and optional Analytics Engine write). | Lower: only validate + enqueue. |

When to consider implementing

  • You want one canonical place (DLQ) for failed leads and replay, instead of KV keys and log search.
  • You need stronger delivery guarantees and retries managed by the platform.
  • You want to offload all persistence and webhook work from the request path to improve latency and resilience under load.