# Lead Stats & Marketing Campaign Optimization — Future Plan & Phases
Status: Planned (start next)
Priority: Important
Source: lead-stats-marketing-campaigns.md (goals, data model, Analytics Engine strategy, AI use cases)
This doc divides the work into phases so progress can be tracked. Each phase has clear deliverables and success criteria.
## Objective
Deliver a single source of lead statistics (time-bucketed volume, breakdown by type and by dimension) so marketing can:
- See when leads arrived (30/60 min or day-wise) and how many by type (actual, follow-up, bot, test, VPN, suspicious, hot).
- Slice stats project-wise, country-wise, city-wise, source-wise (and referer, CTA, UTM, engagement).
- Use the data for AI-assisted team, campaign, and focus-area optimization (see planning doc §7).
All reporting must run from Analytics Engine so D1 is not loaded by analytics queries.
## Phase 1: Analytics Engine schema & complete event stream
Goal: Every “received” event (primary, follow-up, test, blocked, bot) is written to the lead_submissions dataset with dimensions needed for project/country/city/source breakdown. No D1 queries needed for stats.
### 1.1 Add event_type and write test + blocked/bot events
- [x] Extend `LeadSubmissionDataPoint` and `writeLeadSubmissionToAnalyticsEngine` with a new dimension `event_type` (e.g. blob20): values `primary | follow_up | test | blocked | bot`.
- [x] Normal path: set `event_type` to `primary` or `follow_up` from the existing `isPrimaryLead`; keep the existing blob count (add blob20 to the array).
- [x] Test-lead path: after writing to `TestingLeads`, call the Analytics Engine writer once with `event_type = 'test'` and minimal fields (project, source, referer from the request; rest empty/default).
- [x] Blocked/bot path: when rejecting (blocked IP or honeypot), after writing to `BlockedIPLeads`, call the Analytics Engine writer once with `event_type = 'blocked'` or `'bot'` and minimal fields.
- [x] Update `docs/analytics/analytics-engine.md` with blob20 = event_type.
Success criteria: New test or blocked/bot submissions produce a row in Analytics Engine with correct event_type; existing primary/follow-up writes include event_type. One query can count by event_type.
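To make the blob layout concrete, here is a minimal sketch of appending `event_type` as the final blob. This is illustrative only: `LeadSubmissionDataPoint` exists in the codebase, but the trimmed field set and the `toBlobs` helper shown here are assumptions, not the real implementation.

```typescript
// Illustrative sketch: event_type rides in the last blob slot (blob20 in
// the full schema). Only the values of event_type come from the plan above;
// everything else is a placeholder.
type LeadEventType = "primary" | "follow_up" | "test" | "blocked" | "bot";

interface LeadPoint {
  project: string;
  source: string;
  referer: string;
  eventType: LeadEventType;
}

// Blob order must stay consistent with docs/analytics/analytics-engine.md;
// blobs 4–19 are elided in this sketch.
function toBlobs(p: LeadPoint): string[] {
  return [p.project, p.source, p.referer, p.eventType];
}
```

Keeping `event_type` at the end of the array means existing blob indices in queries and docs stay valid.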
### 1.2 Add city (and optionally region) to Analytics Engine
- [x] Add `userCity` and optionally `userRegion` to `LeadSubmissionDataPoint`; extend the blob array (e.g. blob21 = city, blob22 = region). Ensure the order is consistent with the schema doc.
- [x] In `src/lead.ts`, pass `payload.user_city` and `payload.user_region` into the analytics point when calling `writeLeadSubmissionToAnalyticsEngine`.
- [x] Update `docs/analytics/analytics-engine.md` with blob21 (city), blob22 (region if added).
Success criteria: Accepted lead submissions in Analytics Engine have non-empty city (and region) when CF geo provides them; city-wise/region-wise grouping works in SQL.
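A sketch of the geo extraction, assuming the standard `city` / `region` fields on Cloudflare's `request.cf` object (the helper name and defaulting behaviour are illustrative):

```typescript
// Hypothetical helper: pull city/region from the CF geo object, falling
// back to "" so the blob array never receives undefined.
interface CfGeo {
  city?: string;
  region?: string;
  country?: string;
}

function geoDimensions(cf: CfGeo | undefined): { userCity: string; userRegion: string } {
  return {
    userCity: cf?.city ?? "",
    userRegion: cf?.region ?? "",
  };
}
```

Empty strings (rather than a sentinel like "unknown") keep SQL grouping simple: rows without geo data collapse into one bucket that can be filtered out.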
### 1.3 Documentation and verification
- [x] Document the final Analytics Engine schema (all blobs/doubles and their meanings) in `docs/analytics/analytics-engine.md`.
- [ ] Verify with a few test submissions (primary, follow-up, test, blocked) that data appears in the dataset with the correct event_type and dimensions.
## Phase 2: Lead stats API
Goal: API that returns time-bucketed counts and breakdown by type and by dimension, reading only from Analytics Engine.
### 2.1 Lead-stats query service and endpoints
- [x] Add a lead-stats service (e.g. in `src/tracking/services/` or `src/lib/`) that builds Analytics Engine SQL for: time bucket (30 min / 60 min / day), filters (date range, project, source, country, city), and groupings (by type, by dimension).
- [x] Expose GET endpoint(s), e.g. `GET /analytics/lead-stats` or `GET /analytics/engine/lead-stats`, with query params: `days`, `bucket` (30min | 60min | day), `project`, `source`, `country`, `city`, `group_by` (type | project | source | country | city | none).
- [x] Response shape: time buckets with `total`, `primary`, `follow_up`, `bot`, `test`, `blocked`, and optionally `vpn_count`, `suspicious_count`, `hot_count` (from doubles/blob15); or grouped by dimension (e.g. per project, per country) with the same counts.
- [x] Auth: reuse existing analytics API key auth (admin or marketing as per current rules).
- [x] Optional: `GET /analytics/lead-stats/summary` returning last-7-days KPIs (total, actual %, hot %, top N projects, top N countries) for AI or dashboard overview.
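One possible typing of the response, mirroring the counts listed above. The interface and field names here are illustrative, not the final API contract:

```typescript
// Hypothetical response shape for GET /analytics/lead-stats.
interface LeadStatsBucket {
  bucketStart: string; // ISO timestamp of the bucket start
  total: number;
  primary: number;
  follow_up: number;
  bot: number;
  test: number;
  blocked: number;
  vpn_count?: number;
  suspicious_count?: number;
  hot_count?: number;
}

interface LeadStatsResponse {
  bucket: "30min" | "60min" | "day";
  group_by?: "type" | "project" | "source" | "country" | "city";
  buckets: LeadStatsBucket[];
}

// Example payload (values invented for illustration).
const example: LeadStatsResponse = {
  bucket: "60min",
  buckets: [
    { bucketStart: "2025-01-01T09:00:00Z", total: 12, primary: 8, follow_up: 2, bot: 1, test: 1, blocked: 0 },
  ],
};
```

Making optional counts (`vpn_count` etc.) explicitly optional in the type keeps the same shape usable for both the flat and the grouped responses.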
### 2.2 Timezone and bucket behaviour
- [x] Decide and document the reporting timezone (e.g. IST for “day” and hour buckets). Implement in SQL or in post-processing (e.g. convert `timestamp` to IST before grouping).
- [x] Support 30-min and 60-min buckets via Analytics Engine SQL (e.g. floor the timestamp to 30 or 60 minutes).
Success criteria: Calling the API with a date range and optional filters returns JSON with correct buckets and counts; no D1 is queried for this API.
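A sketch of the bucket SQL, assuming the Analytics Engine SQL API's ClickHouse-style `toStartOfInterval` and the `_sample_interval` column for sampling-aware counts; both should be verified against the current SQL API docs, and the dataset name is taken from Phase 1:

```typescript
// Hypothetical SQL builder for the lead-stats service. For IST "day"
// buckets, the timestamp could be shifted by +5:30 before flooring, or
// converted in post-processing as noted in 2.2.
type Bucket = "30min" | "60min" | "day";

function bucketExpr(bucket: Bucket): string {
  switch (bucket) {
    case "30min": return "toStartOfInterval(timestamp, INTERVAL '30' MINUTE)";
    case "60min": return "toStartOfInterval(timestamp, INTERVAL '1' HOUR)";
    case "day":   return "toStartOfInterval(timestamp, INTERVAL '1' DAY)";
  }
}

function leadStatsSql(bucket: Bucket, days: number): string {
  // SUM(_sample_interval) rather than COUNT() compensates for AE sampling.
  return `SELECT ${bucketExpr(bucket)} AS bucket, blob20 AS event_type, SUM(_sample_interval) AS count
FROM lead_submissions
WHERE timestamp > NOW() - INTERVAL '${days}' DAY
GROUP BY bucket, event_type
ORDER BY bucket`;
}
```

Building the expression in code (rather than accepting raw SQL from the client) keeps the endpoint's query surface limited to the whitelisted buckets and filters.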
## Phase 3: Dashboard and reporting UI
Goal: Marketing and admins can view and export lead stats from the dashboard.
### 3.1 Lead stats page or section
- [x] Add a dashboard page (e.g. “Lead stats” or “Campaign stats”) with: date range picker, bucket selector (30 min / 60 min / day), dimension filters (project, source, country, city).
- [x] Display: volume-over-time chart, breakdown by type (actual, follow-up, bot, test, VPN, suspicious, hot), and breakdown by dimension (e.g. table or chart by project, by country, by source, by city).
- [x] Reuse existing dashboard API base URL and auth (store API key / session as for other analytics pages).
### 3.2 Export for AI or external use
- [x] Add export (CSV or JSON) of the current view (filtered and grouped stats) so users can paste into AI chat or use in external tools (see planning doc §7.5 Option A).
Success criteria: Users can select range and filters, see charts/tables, and export data without leaving the dashboard. Done: Stacked area chart by lead type and “Trend by lead type” (five small area charts) with theme colors and custom tooltip.
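The CSV side of the export is a small pure function; a minimal sketch (column names come from whatever view is on screen, and the escaping shown covers quotes, commas, and newlines):

```typescript
// Hypothetical client-side CSV export of the current filtered view.
function toCsv(rows: Array<Record<string, string | number>>): string {
  if (rows.length === 0) return "";
  const headers = Object.keys(rows[0]);
  const escape = (v: string | number): string => {
    const s = String(v);
    // Quote fields containing delimiters and double any embedded quotes.
    return /[",\n]/.test(s) ? `"${s.replace(/"/g, '""')}"` : s;
  };
  return [
    headers.join(","),
    ...rows.map((r) => headers.map((h) => escape(r[h] ?? "")).join(",")),
  ].join("\n");
}
```

The JSON variant of the export can simply serialize the API response as-is, since the response shape is already documented for AI consumers in Phase 4.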
## Phase 4: AI-ready exposure and optional automation
Goal: Stable, documented API shape for AI/agents; optional weekly digest or MCP/agent integration.
### 4.1 Documentation for AI consumers
- [ ] Document the lead-stats API response shape (field names, types, and meaning) in `docs/api/` so prompt authors or agents can reliably use “by project”, “actual_lead_count”, etc.
- [ ] Optional: add a small “summary” response example and suggested prompts (e.g. “What should we focus on this week?”) in the planning or API doc.
### 4.2 Optional: scheduled digest or agent integration
- [ ] Optional: scheduled job (e.g. weekly) that fetches lead-stats summary (or full stats), sends to an LLM or template, and posts a short “optimization brief” to Slack/email or dashboard. Design only or implement per product decision.
- [ ] Optional: MCP tool or agent that calls the lead-stats API and answers natural-language questions. Design only or implement per product decision.
Success criteria: API is documented for AI use; optional automation is scoped and either implemented or explicitly deferred.
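If the weekly digest is implemented, the deterministic core is just summary-to-text formatting; a design sketch (the summary shape and function name are hypothetical, and fetching/posting are left to the cron handler):

```typescript
// Hypothetical formatter for the weekly "optimization brief". A scheduled
// Worker would fetch the lead-stats summary, pass it through this (or an
// LLM), and post the result to Slack/email.
interface WeeklySummary {
  total: number;
  actualPct: number;
  hotPct: number;
  topProjects: string[];
}

function optimizationBrief(s: WeeklySummary): string {
  return [
    `Leads this week: ${s.total} (${s.actualPct}% actual, ${s.hotPct}% hot).`,
    `Top projects: ${s.topProjects.join(", ")}.`,
  ].join(" ");
}
```

Keeping the formatter separate from the fetch/post plumbing lets the same summary feed both a template-based brief and an LLM prompt.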
## Progress tracking
| Phase | Description | Status |
|---|---|---|
| 1 | Analytics Engine schema & events | Done |
| 2 | Lead stats API | Done |
| 3 | Dashboard & reporting UI | Done |
| 4 | AI-ready exposure / automation | Not started |
Update the Status column as work completes (e.g. “In progress”, “Done”).
## References
- Lead stats & marketing campaigns (planning) — goals, data model, AE strategy, dimensions, AI use cases
- Analytics Engine schema — current and extended schema
- ROADMAP.md — Phase 11: Lead Stats & Marketing Campaign Optimization