Senior CS operator. Most recent book: $10.5M ARR enterprise portfolio
at 147% NRR, with 100% logo retention and
$934K of upsell pipeline generated in two quarters.
Across 10+ years I've built Customer Success programs from scratch,
run $25M ARR books at 120%+ NRR, and led customer save plays recognized
by industry panels, most recently with the
2025 Creative Customer Success Leader Award.
What's different about the past 18 months: I've built an AI-augmented operating
model for Customer Success that returns hours of capacity per week to
high-value customer engagement. Patterns I built for myself have been picked up
across the broader Customer Success organization. The result is a working
template for what AI-native Customer Success looks like in practice. Operator
judgment in front, agentic infrastructure underneath.
Open to bringing this operating model to the next team.
10+ Years CS Leadership
$33M+ ARR Managed Across Career
147% Current Net Retention
1,200+ Customers Served
The thesis
Reactive Customer Success vs proactive Customer Success.
Most CS organizations operate reactively: an inbox plus a Salesforce login.
The teams hitting 130%+ NRR aren't doing more of the same. They're operating
with a fundamentally different model. Here's what changes.
Reactive baseline
Where most CS functions live today.
Proactive operating model
The work I do, scaled across teams I've led.
Inbox-driven engagement. Reply when prompted; manage what's loud, miss what's quiet.
Scheduled cadence and signal-driven outreach. Quiet accounts surface as engagement gaps before they become churn risks.
QBR data extracted manually the day before each meeting. The whole prep day burns on spreadsheets.
Always-on telemetry feeding executive-ready charts. QBR cycle compressed 75%. Prep time goes to narrative and strategy, not data wrangling.
Single CSM thread to the customer. One relationship, one perspective, one point of failure.
Account pod model (CSM + AE + SE + RAM). Twelve-stakeholder customer mapped to four-function vendor pod, single shared roadmap.
Renewal scramble in the last 90 days. Sponsor change in month nine derails the conversation.
Renewal posture from day one. Stakeholder map maintained; sponsor changes caught early; expansion conversation always on the table.
CSM workflow ad hoc per person. Knowledge stays in heads; one departure resets the team.
Codified workflows as reusable systems. Account memory, voice profile, governance rules. New CSMs onboard onto an operating standard, not into a fog.
AI used as a side tool. ChatGPT for drafts. Tools added with no measurement.
AI as production infrastructure. Cost-tiered model routing. Value reported quarterly. Capacity returned to high-leverage human work.
Selected work
What I've built and what it produced
Five systems I've built and run in production against an enterprise portfolio.
Each started as a recurring problem in my own CSM workflow. Each produces
measurable capacity, consistency, or coverage that wouldn't exist otherwise,
and travels with me to the next team.
The Customer Success operating surface I run on
Hours per day of context-switching eliminated. Faster response to at-risk
signals. Better preparation for executive customer conversations.
A single web surface that aggregates everything a senior CSM needs to make
daily decisions: today's customer meetings with prep brief, escalations
needing my attention, customer signals from the past 24 hours, my prioritized
daily plan, and recap drafts after meetings.
Before this existed, the same context lived across 8 different tools and tabs.
The cost of switching between them was the dominant tax on the role.
Compressing that into one surface is the single biggest productivity unlock
I've shipped.
Stack
Cloudflare Workers · D1 · Pages
· Custom sync-daemon
· Daily production use
A library of repeatable Customer Success workflows
The same job, done in a fraction of the time, every time. Consistent customer
experience regardless of CSM workload. Best-practice execution baked into the
workflow itself, not relying on memory.
Fifty-plus self-contained workflows that handle the recurring jobs CS
practitioners do manually: account research before customer meetings, follow-up
emails after meetings, churn-risk scoring, escalation tracking, QBR data
preparation, executive briefing decks, customer reactivation outreach.
CS practitioners spend an enormous share of their time on knowable, repeatable
work. Capturing that work as reusable systems returns capacity to the part of
the job AI can't do: the human relationship work that actually drives renewal
and expansion.
Stack
OpenCode + Anthropic Claude ·
Model Context Protocol (MCP) ·
Versioned, evaluated, self-improving over time
Patterns adopted across the broader CS organization
Tools I built for myself, picked up by peers across the global CS team
where I work. The same time-saving and consistency benefits I get, scaled.
Multiple workflows from my personal library (a customer usage analytics
skill, a QBR report generator, a repository bootstrap tool, a doc-drift
audit skill) sanitized and contributed to my employer's central Customer
Success tooling library. Reviewed via the same merge-request process
used by engineering teams.
Internal AI tooling at scale is valuable when one practitioner's improvements
compound across an organization, not when each person reinvents the wheel.
Showing I can lead this kind of upstream contribution while still hitting
the portfolio number is the differentiated profile.
Scope
Maintainer-level access on the shared library
· Active peer collaboration on a unified hub
An operating standard for AI in customer-facing work
AI assistance that's reliable enough to run against real customer accounts
without erosion of trust, voice, or accuracy. Mistakes that would have
damaged customer relationships, codified as guardrails before they happen.
A written-down, version-controlled set of rules and protocols defining how
AI systems handle customer-facing work safely. Covers voice and tone
discipline, when to escalate to a human, what to never automate, how to
handle customer data, how to attribute work honestly, and how to surface
uncertainty rather than fake confidence.
The biggest risk in AI-augmented customer-facing work isn't model capability.
It's drift between AI behavior and the standards a senior operator would apply.
This standard closes that gap operationally, not philosophically. It's the
kind of governance work most organizations will need within the next 12 months
and almost none have written down today.
Format
~3,000 lines of git-managed governance
· Reviewed and tightened on a fortnightly autonomous
audit cadence
An always-on layer that surfaces what matters
Important customer signals get caught early. The morning plan reflects current
reality, not yesterday's. Nothing important falls through the cracks because
it was buried in a notification stream.
Forty-plus scheduled jobs running before, during, and after my working day to
surface relevant signals: meeting prep before customer calls, engagement scans
across the book of business, escalation triage, end-of-day wrap-ups,
fortnightly audits of what I might have missed.
CSM workflows are fundamentally driven by external signal. Operating reactively
means missing signals. This layer makes the signals visible before I have to
ask, which is what separates senior IC operating from junior IC reaction.
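The engagement-gap scan inside that layer reduces to a date comparison. A minimal sketch, assuming a simple account → last-contact mapping; the 21-day threshold and account names are illustrative, not the production job:

```python
from datetime import date, timedelta

STALE_AFTER = timedelta(days=21)  # threshold is illustrative; tune per segment

def engagement_gaps(accounts, today):
    """Flag accounts with no meaningful contact inside the staleness window,
    quietest first."""
    gaps = [(name, (today - last_contact).days)
            for name, last_contact in accounts.items()
            if today - last_contact >= STALE_AFTER]
    return sorted(gaps, key=lambda g: -g[1])

book = {"Polaris Financial": date(2026, 4, 14),
        "Nimbus Software": date(2026, 4, 16),
        "Acme Industries": date(2026, 5, 6)}
engagement_gaps(book, date(2026, 5, 8))
# → [('Polaris Financial', 24), ('Nimbus Software', 22)]
```

The point is that "quiet account" is a computable predicate, not a judgment call; judgment only enters when deciding what the re-engagement outreach says.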
Customer Success is easy.
People are hard.
Software is just the part nobody trips over.
Joshua Vogel · The CS PRESS · 2026
Flagship evidence
What the work actually looks like
Three artefacts from real anonymized customer engagements, rendered through
my qbr-charts skill. Customer profile is a synthetic analog of an enterprise
financial services account in my book. Data shapes, peak values, and visual
treatment are unchanged from production.
QBR opener · Customer Overview / Business Priorities
How I open every executive QBR. Before any usage data, before any
product roadmap. The slide that says I understand what your business
is trying to do.
WAF · 13-month request volume by rule type
Real chart shape from the QBR generator. Anonymized data; production
render pipeline.
Bot Management · automated vs human traffic
100% stacked classification over time. Same generator, same render
pipeline, different chart type.
Five systems that produce the work above. Each started as a recurring problem
in my own CSM workflow. Each is in daily production use. All examples use
synthetic customer data; actual customer information is never shown publicly.
Helios Energy renewal kickoff at 14:30. $1.2M annual contract; sponsor moved last quarter, new contact has 2 weeks of context. Brief draft prepped. Risk score: 6/10.
Acme Industries QBR at 09:30. Multi-product expansion ready to discuss. Bot Management adoption up 40% QoQ; ready to position Application Security upsell.
Globex Logistics weekly sync at 11:00. Recap from last week shows 3 unresolved action items. Following up on integration timeline.
Engagement gaps · 3 accounts
Polaris Financial: 24 days since last meaningful contact. Last 3 emails one-way (mine). Suggest re-engagement outreach.
Nimbus Software: 22 days quiet. Renewal in 73 days. Worth a check-in.
Atlas Retail: 21 days quiet. No meetings booked. Possible sponsor change; verify on LinkedIn.
Hot escalations · 2 unresolved
INC-1142 (Globex): 5 days unresolved. Ticket sitting on Engineering. Last update 48h ago. Worth nudging in QBR.
INC-1156 (Acme): 2 days unresolved. New escalation; product team aware. No action needed yet.
CS Skills Library · 54 production skills
The library I built one workflow at a time
Each skill is a self-contained, versioned, evaluated workflow that handles
a recurring CSM job. Highlighted skills have been contributed upstream
to the broader CS organization. Click any skill to see
what it does and an example of what it produces.
43 of 54 shown. Each skill is git-versioned and follows a maturity
progression: draft → tested → trial → crystallized.
Daily orchestration Featured · Contributed upstream
csm-morning-brief
Runs: Every weekday at 9:45 AM AEST
Prioritized daily briefing of meetings, escalations, customer signals, and engagement gaps. Pulls from calendar, email, escalation systems, and account memory. Surfaces what changed in the past 24 hours and prioritizes the day before I open my laptop.
Example output
FRIDAY 2026-05-08 09:45 AEST · 7 customer signals · 3 meetings · 2 escalations
TOP OF MIND TODAY
• Helios Energy renewal kickoff at 14:30. $1.2M annual; sponsor moved last quarter, new contact has 2 weeks of context. Brief draft prepped. Risk score: 6/10.
• Acme Industries QBR at 09:30. Bot Management adoption up 40% QoQ; ready to position Application Security upsell.
ENGAGEMENT GAPS · 3 accounts
• Polaris Financial: 24 days quiet. Re-engagement outreach.
• Nimbus Software: 22 days quiet. Renewal in 73 days.
Daily orchestration
csm-midday
Runs: Every weekday at 12:00 PM
Mid-day checkpoint. Reconciles what got done against the morning plan, scans for new inbound signals, re-prioritizes the afternoon, and surfaces anything overdue.
Example output
MIDDAY · 12:00 PM · 4 of 7 brief items addressed
BLOCKED
• Helios renewal: waiting on legal review (raised at 10:15)
• Acme expansion: SE availability for technical scoping
NEW INBOUND
• Globex CTO replied to Wednesday brief: positive
• 1 new escalation: INC-1162
Daily orchestration
csm-eod-wrap
Runs: Every weekday at 5:00 PM
End-of-day wrap. Reviews the meetings I had, drafts any missing follow-up emails, extracts action items, and checks on-site attendance for the next 7 days.
Example output
EOD WRAP · 17:00 AEST · 3 meetings · 2 follow-ups drafted
MEETINGS RECAPPED
• Acme QBR (09:30): expansion conversation landed. Drafted follow-up to CFO
• Helios renewal (14:30): risk reduced; sponsor introduced to AE
ACTION ITEMS CARRIED FORWARD
• Get SE on Acme call next Tuesday
• Draft executive briefing memo for Helios board (Tuesday)
Daily orchestration Featured · Contributed upstream
csm-meeting-prep
Runs: Pre-meeting trigger or scheduled 24h ahead
Pre-meeting briefing. Reads the calendar 24h ahead, pulls prior conversation history, recent customer signals, account memory, escalation status, and produces a one-pager calibrated to the meeting type (kickoff, QBR, executive briefing, follow-up).
Example output
MEETING PREP · Helios Energy renewal kickoff · 14:30
WHO
New sponsor: Sarah Chen (CTO, joined Q1). Previous: Mark Tao (departed Feb).
CONTEXT
· Renewal due 90 days. $1.2M ACV.
· Escalation INC-1142 unresolved 5 days; raise gently.
· Last QBR (Nov): 8/10 satisfaction, expansion conversation deferred.
MEETING OBJECTIVES
1. Establish relationship with Sarah
2. Confirm renewal trajectory
3. Surface INC-1142 if not raised first
Daily orchestration Featured · Contributed upstream
csm-meeting-recap
Runs: Auto-fires 20 minutes after each customer-facing meeting
Post-meeting recap generator. Reads the meeting transcript, extracts discussed topics, action items, and key decisions, filters out internal-only content, and produces a customer-ready recap with markdown, email draft, and PDF-ready data.
Example output
MEETING RECAP · Acme Industries Q4 QBR · 09:30 AEST
WHAT WE DISCUSSED
· Bot Management adoption (40% QoQ growth)
· Application Security upsell positioning
· Q1 roadmap alignment
ACTION ITEMS
· Josh: send Bot Management adoption summary by Tuesday
· Sarah Chen: bring Head of Security into next conversation
DRAFT EMAIL READY (in Gmail Drafts)
Customer engagement Featured · Contributed upstream
csm-customer-followup
Runs: Triggered by a meeting transcript or direct request
Drafts customer follow-up emails from meeting transcripts. Cross-references prior conversations, verifies live system state, applies my voice profile, surfaces any factual claims for verification, and delivers as a Gmail draft. Voice-aligned, not AI-tell-laden.
Example output
TO: sarah.chen@acme.example
SUBJECT: Following up · Q4 QBR
Hi Sarah,
Thanks for the time today. Three things from our conversation worth tracking:
· Bot Management has come a long way since Q3 (40% QoQ adoption). I'll send the snapshot by Tuesday.
· On Application Security — happy to scope a deeper conversation with our SE team next week if useful.
· INC-1142 — pinging product engineering tomorrow for a status update; you'll hear from me by EOD.
Let me know if I missed anything.
Best,
Josh
Customer engagement Featured · Contributed upstream
csm-account-research
Runs: Pre-engagement, ad-hoc
Multi-source account research. Pulls public news, recent product announcements, leadership changes, industry context, and combines with internal account history (prior QBRs, escalations, support tickets) to produce a one-pager that gets a CSM up to speed on a customer in 5 minutes.
Example output
ACME INDUSTRIES · ACCOUNT BRIEF
WHAT THEY DO
National specialist provider of financial services solutions to mid-market and enterprise customers.
RECENT NEWS
· Q3 earnings beat (revenue +12% YoY)
· New CTO appointed Feb 2026 (Sarah Chen, ex-Stripe)
· Announced AI-platform initiative; mentioned cloud spend in earnings call
INTERNAL HISTORY
· Customer 22 months · 17 products · Premium B support
· Last 3 QBRs: positive trajectory, expansion deferred twice
· 0 unresolved escalations as of today
Customer engagement
csm-churn-risk
Runs: Daily scan + ad-hoc
Churn-risk scoring across the portfolio. Combines engagement velocity, escalation density, sponsor tenure, sentiment signals, and renewal proximity into a numeric score. Generates save plays for accounts above threshold.
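The signal blend that skill describes can be sketched as a simple weighted sum. Everything here (weights, thresholds, the 0-10 scale, the sample values) is illustrative, not the production model:

```python
from dataclasses import dataclass

@dataclass
class AccountSignals:
    days_since_contact: int     # engagement velocity proxy
    open_escalations: int       # escalation density
    sponsor_tenure_months: int  # a new sponsor raises risk
    renewal_days_out: int       # renewal proximity
    sentiment: float            # -1.0 (negative) .. 1.0 (positive)

def churn_risk_score(s: AccountSignals) -> int:
    """Blend signals into a 0-10 risk score (higher = riskier)."""
    score = 0.0
    score += min(s.days_since_contact / 30, 1.0) * 3      # up to 3 pts for silence
    score += min(s.open_escalations / 3, 1.0) * 2         # up to 2 pts for escalations
    score += 2.0 if s.sponsor_tenure_months < 6 else 0.0  # new-sponsor penalty
    score += 1.0 if s.renewal_days_out < 90 else 0.0      # inside the renewal window
    score += (1.0 - (s.sentiment + 1) / 2) * 2            # up to 2 pts for bad sentiment
    return round(score)

# A Helios-style account: sponsor changed recently, renewal approaching.
helios = AccountSignals(days_since_contact=15, open_escalations=1,
                        sponsor_tenure_months=2, renewal_days_out=89,
                        sentiment=0.0)
churn_risk_score(helios)  # → 6
```

A deterministic score like this is what makes the save-play trigger auditable: above threshold, generate the play; below, keep watching.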
Drafts customer-facing emails for non-meeting scenarios: onboarding welcome, check-in, renewal nudge, escalation response, re-engagement. Different from meeting follow-up; this is for outreach without a transcript driver.
Customer engagement
csm-jargon-translate
Runs: Triggered by an internal escalation ticket
Translates internal escalation tickets (engineering language, product naming, internal status) into customer-friendly status updates that a CSM can send without paraphrasing or risking misrepresentation.
Reporting Featured · Contributed upstream
qbr-report-generator
Runs: Pre-QBR, on demand
End-to-end QBR generation. Synthesizes telemetry data, business context, and stakeholder-specific framing. Renders charts via headless Chrome, assembles slides via deck templates, exports PDF and PPTX. The same skill that produced the WAF and Bot Management charts above.
Example output
QBR GENERATED · Acme Industries · Q4 2026
14 slides · 11 charts · PDF + PPTX exported
INCLUDED CHARTS
· Total request volume (CDN)
· Firewall activity by rule type
· Bot Management classification
· DNS query patterns
· DLP event volume
NEXT STEPS
· Review and personalize narrative slides
· Send to Sarah Chen 24h before meeting
Generates customer-facing platform-update briefs filtered to that customer's contracted services. Translates raw changelog entries into "what this means for you" framing.
Reporting
cs-value-report
Runs: Monthly
Measures and reports the CS function's value contribution: capacity returned, hours saved, retention impact, expansion driven. Used in management one-on-ones and team-level reviews.
Account intelligence
csm-account-tracker
Runs: Always-on
Per-account task aggregator. Pulls items from the personal tracker, internal tickets, follow-up commitments, and the morning brief. Filters by account, buckets into a workflow model (Today / Planned / Blocked / Snoozed / Nurture / Internal), and surfaces overdue items.
Account intelligence
csm-account-enrichment
Runs: Daily
Batch-enriches the customer portfolio with engagement signals: last contact, email volume, sentiment, meeting frequency. Pushes structured context to the dashboard for the Accounts view.
Account intelligence
csm-account-intelligence-hub
Runs: Daily
Reads existing account memory and signal sources, merges them into a unified intelligence object per account, generates a sentiment narrative (Key Wins, Risks, Action Items), and writes it to the dashboard.
Account intelligence
csm-portfolio-data-sync
Runs: Daily
Pulls the customer portfolio from internal CRM systems in a single API call, enriches each account with per-account data (segment, industry, risk indicators, propensity), writes per-account state to the dashboard.
Account intelligence
csm-business-reviews-live
Runs: Daily
Scans the calendar for customer-facing meetings in the quarter, classifies meeting types, normalizes company names, and pushes enriched JSON to the dashboard for the Business Reviews view.
Account intelligence
csm-upcoming-meetings
Runs: Daily
Scans the calendar for customer-facing meetings in the next 14 days, filters out internal events, classifies meeting types, suggests relevant skills for prep, and pushes structured JSON to the dashboard.
Escalations
csm-jira-tracker-hub
Runs: Daily
Searches Jira broadly across escalation projects, checks for stalled support tickets, analyzes ticket status, and writes structured cards with deep-links for the Escalations dashboard.
Escalations
csm-policy-check
Runs: On demand
Self-fact-check skill. Answers questions like "did I cross a line?" or "what's the policy on X?" by mapping the situation to canonical references, scrubbing relevant channels for precedent, and producing a structured report. Refuses to soft-pedal.
Example output
POLICY CHECK · Question: Can I share QBR slides via my personal Drive?
FOUR LAYERS REVIEWED
· Data classification: Customer data (Tier 2)
· Sharing surface: External link
· Identity: Personal account
· Lifecycle: 30+ day retention
VERDICT: NO
Recommended path: share via shared drive with explicit Cloudflare-domain restriction.
Reference: [internal policy URL]
Escalations
csm-internal-comms
Runs: Ad-hoc
Generates executive summaries, product-feedback escalations, account handoff documents, and cross-functional updates. Translates raw account context into stakeholder-specific framing.
Escalations
csm-task-command-center
Runs: Always-on
Aggregates tasks from multiple sources (personal tracker, calendar action items, brief items, escalations, account intelligence) into a unified, deduplicated, prioritized task view. Pushes to the dashboard.
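Deduplicated aggregation like this is mostly a keyed merge. A minimal sketch under assumed task shapes; the dict keys, account names, and lower-number-is-more-urgent convention are illustrative:

```python
from itertools import chain

def aggregate_tasks(*sources):
    """Merge task lists from several sources, dedupe by (account, title),
    and keep the most urgent copy (lower number = higher priority)."""
    merged = {}
    for task in chain(*sources):
        key = (task["account"], task["title"].strip().lower())
        if key not in merged or task["priority"] < merged[key]["priority"]:
            merged[key] = task
    return sorted(merged.values(), key=lambda t: t["priority"])

tracker = [{"account": "Acme", "title": "Send adoption summary", "priority": 2}]
brief = [{"account": "Acme", "title": "send adoption summary", "priority": 1},
         {"account": "Helios", "title": "Draft exec memo", "priority": 3}]
tasks = aggregate_tasks(tracker, brief)
# The duplicated Acme task survives once, at its highest priority (1).
```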
Operational
csm-tracker
Runs: Always-on
Manages the persistent CSM action item tracker. Bridged to the dashboard via sync-daemon. Action items, deadlines, snoozes, completion history.
Operational
csm-deploy
Runs: On demand
Safe deployment mechanics for personal infrastructure. Enforces pre-publish folder audit, access-before-deploy sequencing, and internal-vs-public file separation.
Operational
csm-doc-audit
Runs: On demand
Audits a repository's documentation for drift between the description, README, code, declared paths, file references, and the project registry. Reports Red/Yellow/Green per layer with file:line evidence.
Operational
csm-repo-bootstrap
Runs: On demand
Scaffolds a new repository with industry-best-practice documentation: README, LICENSE, CHANGELOG, AGENTS.md, .gitignore, MR templates, and optional catalog file.
Operational
csm-scrub
Runs: On demand
PII scrub. Scans a directory or repo for customer names, customer-affiliated zones, IDs, case numbers, and other identifiers that should be redacted before publishing.
Operational
csm-self-healing
Runs: Continuous
Recursive self-healing framework for skill improvement. Scores executions, detects failures, auto-repairs skills, and tracks maturity progression from draft through crystallization.
Operational
csm-skill-maintenance
Runs: Continuous
Standards, quality gates, and periodic health audit for the entire skill library. Governs how new skills are created, audits existing skills (scores, staleness, dependencies, token budgets, dead weight, command wiring, eval coverage).
Operational
csm-session-handoff
Runs: On demand
Appends a session entry to the project log and prints a copy-paste prompt for the next session. The "/handoff" pattern that makes long-running work survive across sessions.
Operational
csm-session-review
Runs: Weekly
Reviews session data to generate work logs, achievement summaries, value assessments, and time-savings estimates across sessions.
Operational
csm-tooling-watch
Runs: Weekly (Monday)
Weekly tooling-discovery scan. Scans relevant docs, changelogs, releases, plugin ecosystems, and package registries. Outputs a Monday-morning digest of what changed in the tools I depend on.
Operational
csm-audit-runner
Runs: Fortnightly
Orchestrates the fortnightly asks-vs-state audit. Extracts user asks from the session database, runs per-target sub-agent sweeps, reconciles against current state, triages findings into Tier 1/2/3, auto-fixes safe items.
Executive views
csm-exec-portfolio-hub
Runs: Weekly (Friday)
Generates compact per-account summary cards (ARR, health score, segment, renewal date, weekly narrative, watch items) for all accounts in the portfolio. Pushes to the Accounts dashboard tab.
Executive views
csm-exec-this-week-hub
Runs: Weekly (Friday)
Identifies 1-3 notable account events for the week (renewals, upsells, churn risk, escalation resolutions) and generates exec summaries plus a portfolio rollup. Pushes to the This Week dashboard tab.
Executive views
csm-exec-manager-1on1-hub
Runs: Weekly (Thursday)
Generates a structured 7-section prep brief for the weekly manager 1:1 meeting. Researches calendar, prior 1:1 notes, recent context, escalations, portfolio signals, and prep tips.
Executive views
csm-meeting-prep-auto
Runs: Daily (9:30 AM)
Automated meeting-prep orchestrator. Reads the calendar for upcoming customer meetings, pulls account context, runs the appropriate prep skill per meeting type, verifies all claims against sources, pushes verified prep output to the dashboard.
Slack
csm-slack-digest
Runs: On demand (paste-driven)
Ingests pasted Slack conversation context and structures it into account memory for use by meeting-prep, morning-brief, and EOD-wrap skills.
Slack
csm-slack-ingest
Runs: On demand (programmatic)
Programmatic Slack thread ingestion via a hardened wrapper. Reads customer Slack Connect threads through a Keychain-backed user OAuth token and writes structured account memory entries.
Engineering hygiene
pre-push-review
Runs: Pre-push
Mechanical pre-push review for non-trivial MRs. Runs an adversarial-input sweep, parallel-implementation symmetry audit, project-AGENTS.md compliance check, dead-code sweep, and engineering-codex compliance before git push.
Engineering hygiene
codex
Runs: On demand
Engineering codex skill covering Rust, TypeScript, Python, Go, Workers, reliability, AI, and governance standards. Use for any service development, code review, or architecture decisions.
QBR pipeline · End-to-end deck automation · Daily production use
The biggest single capacity unlock I've shipped
What used to take 4-6 hours of manual chart work per QBR now compresses
by 75%. Across a 44-account enterprise portfolio that's 150-180 hours
returned per quarter, or roughly four full working weeks of senior CSM
time redirected from spreadsheet wrangling to strategic preparation,
executive relationship work, and team coaching.
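The capacity arithmetic above checks out directly. Portfolio size, per-QBR hours, and the compression rate are from the text; the 38-hour working week is an assumed constant:

```python
accounts = 44               # enterprise portfolio size (from the text)
hours_per_qbr = (4, 6)      # manual chart work per QBR, low/high (from the text)
compression = 0.75          # share of that work eliminated (from the text)

saved_low = accounts * hours_per_qbr[0] * compression    # 132 h/quarter
saved_high = accounts * hours_per_qbr[1] * compression   # 198 h/quarter

# Midpoint ~165 h/quarter brackets the quoted 150-180-hour range;
# at an assumed 38-hour week that is a bit over four working weeks.
weeks = (saved_low + saved_high) / 2 / 38  # ≈ 4.3
```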
Most CSMs lose the day before a QBR to data extraction, copy-paste,
chart formatting, and slide assembly. The QBR pipeline turns that day
into a 30-minute run: telemetry pulled, charts rendered, slides
assembled, deck exported as PDF and PPTX. The opener slide ("Business
Priorities") proves I understand the business before showing any usage
data. The chart slides translate that data into executive-ready visual
framing. The CSM's job becomes the only thing the customer actually
values: framing the story, refining the narrative, anticipating the
executive question.
The dollar math is real. At a fully-loaded senior-CSM cost, the time
recovered is six-figure annual capacity per CSM, and it compounds across
a team. But the bigger impact is qualitative. CSMs who aren't burning
the day before a QBR show up to the meeting prepared to
think, not just present. That's the difference between a
tactical reporting cadence and a strategic partnership.
The three rendered artefacts (Business Priorities opener, WAF chart,
Bot Management chart) live above in the flagship section so the
evidence is findable in the first 60 seconds, not buried in this
section's detail.
Stack
Node.js + Puppeteer (headless Chrome) ·
Custom HTML/CSS chart templates ·
JSON-driven data model ·
Synthesized from telemetry + public filings + internal research ·
14-slide deck output, PDF + PPTX exported
Agentic Codex · ~3,000 lines of governance
How AI handles customer-facing work, written down
A version-controlled rulebook that AI assistants read at every customer-facing
touchpoint. Excerpts from the table of contents:
01
CRM is the source of truth for the account team. No inferences. Helping doesn't transfer ownership.
02
Customer-facing email output format. Plain text in fenced code blocks; structured by ALL CAPS section headers and bullet items; no markdown rendering tricks.
03
Date-day verification, non-negotiable. Every date in customer output must be machine-verified against `cal`, never inferred.
04
Time-of-day verification. Run `date` before stating elapsed time, current time, or remaining time. Hallucinated time is the most common LLM failure mode.
05
Tool-first context retrieval. When the answer exists in a tool, grab it. Don't ask the user for what an MCP can answer in seconds.
06
Declaration discipline. Never claim "done" without verifying user-level success, not just file-state success.
07
Reproduce-first debugging. Before opening any source file in response to a UI bug, reproduce the symptom with DevTools open.
08
Sub-agent delegation gates. 5 mechanical gates fire BEFORE inline tool calls; main session reserved for judgment-heavy work.
⋯
12 more rules covering security, attribution, voice and tone, customer-artefact destination, sensitive file handling, and process-state-vs-file-state debugging.
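Rules 03 and 04 are the cheapest to enforce mechanically. The codex itself verifies against `cal`; this is a minimal stdlib sketch of the same check, with the sample date taken from the morning-brief output above:

```python
from datetime import date

def verify_date_day(iso_date: str, claimed_day: str) -> bool:
    """True only if the claimed weekday matches the real calendar."""
    actual = date.fromisoformat(iso_date).strftime("%A")
    return actual.lower() == claimed_day.lower()

# The morning-brief header "FRIDAY 2026-05-08" passes this check:
verify_date_day("2026-05-08", "Friday")  # → True
```

One function call replaces the single most common LLM failure mode in customer-facing output: a confidently wrong weekday.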
System architecture
How the pieces fit together
User → Signal collection → Agentic orchestration → Cloudflare infrastructure → Operating surface
Career snapshot
Where I've worked
2025 — Now
Cloudflare · Customer Success Manager (Enterprise) · Sydney, AU
$10.5M ARR @ 147% NRR · 44 enterprise accounts · $934K Q3-Q4 pipeline · 100% logo retention · QBR cycle compressed 75% via AI tooling I built
2023 — 2025
aboutGOLF · Director of Customer Success and Support · US (Remote)
Built CS from scratch · 1,200 accounts · 98% retention · 44% churn reactivation ($400K) · $2.5M upsell ARR · 1-10 health scoring system in Salesforce · Cart-to-Curb e-commerce automation
2019 — 2023
WithYouWithMe · Head of Enterprise Account Management · Sydney, AU
$25M ARR @ 120% NRR · Promoted twice in 3.5 years · Grew Accenture from $1.3M → $5M ARR in 90 days · Led an 8-person CSM team across global expansion (UK + Canada launches)
2012 — 2019
U.S. Navy · IT Infrastructure Project Manager · Naples, Italy
140+ infrastructure projects across EMEA · Navy Achievement Medal · DISA Facility Control Office of the Year (2017) · The technical foundation underneath every CS role since
The numbers behind the roles. What I built scaled to thousands of accounts.
What I led scaled to dozens of teammates. Both axes matter for senior CS work.
Three roles. Three different stages, segments, and product categories. NRR consistently above the SaaS benchmark of ~110%. Most recent reading: top-decile.
Customer-facing scale
What 1:Many digital success looks like in practice. The cohort numbers behind
the headline retention metrics.
1,200 Customer accounts
Active book at aboutGOLF (residential, SMB, mid-market) running on a
single CS function with four people.
98% Logo retention
Year-over-year subscription retention across the full 1,200-account
cohort. Two points below stretch target on commercial churn.
44% Churn reactivation
$400K of churned ARR recovered through cross-functional Happy Path
playbooks and re-onboarding programs. Industry-panel award winner.
67% CSM capacity scale
From 150 to 250+ accounts per CSM via a "1:Many" digital success
model: monthly Open Office Hours, Town Halls, automated touchpoints.
Leadership scope & mentorship
The team-leading half of the job. Senior CS isn't a single-thread
contributor role; it's a force multiplier across the people you work with.
12 Direct team led
CS + Support team at aboutGOLF when the two functions merged under
one Director. CSAT up 10%, response times down 83%.
8 Enterprise CSM team
Built and ran the enterprise CSM team at WithYouWithMe across UK,
Canada, and Australia expansion phases.
100+ Mentees & coachees
Practitioners across career stages worked through the TORCHED
mentorship framework I created. Plus Catalyst Growth Coaching.
2 Promotions in 3.5 yrs
Started as IC enterprise CSM at WithYouWithMe; left as Head of
Enterprise Account Management. Promotion arc that maps directly
to senior IC + leadership readiness.
Voice and recognition
Writing, speaking, awards
2025 Creative Customer Success Leader Award ·
Customer Success Collective. Selected by an industry panel of CS thought leaders
for the playbook used to reactivate 44% of churned accounts and drive 98%
retention across 1,200 customers.
Author · The CS PRESS · Substack newsletter
on practical CS leadership: building from scratch, AI-augmented Customer
Success, and the operator-engineer hybrid role.
Guest expert · The Customer Success Podcast with Irit Eizips ·
Featured episodes on reactivating churned accounts (44% recovery, 98%
retention across 1,200 accounts) and supporting major SaaS product launches
in mid-sized organizations.
TORCHED mentorship framework ·
A coaching framework I developed and applied with 100+ Customer Success
practitioners across career stages. Coach for the Catalyst Growth Coaching
program (2023–2024).
7,900+ LinkedIn followers ·
Active community around the realities of building Customer Success in
under-resourced environments. Posts regularly reach 500-2,500 impressions.
What's next
Where I'm pointing this work
The pattern I've built is portable. The next chapter is bringing this
operating model to a team that wants AI-native CS at scale, not as a
science experiment. Three problems I want to work on. First, the
IC-to-leader transition for AI-fluent CSMs: what does career growth look
like when the scope changes from accounts to systems? Second, the metrics
gap between AI-augmented capacity and traditional CS reporting: what do
we measure when 75% of QBR prep time has been compressed? Third, the
cross-functional shape of post-AI CS: where do the lines move between
CS, SE, and FDE when tools are agentic? Open to conversations on any of
these.
Get in touch
Talk to me
I'm actively exploring next-chapter opportunities in senior
Customer Success leadership, AI-native CS, and roles that bridge customer
relationship work with AI infrastructure. The fastest path to a conversation
is a direct email.
For AI-engineering hybrid roles, the
AI & Automation Portfolio Companion ↗
goes deeper on the daily systems, the operating model, and concrete
outcomes (151 dashboard commits, 54 skills, 42 scheduled automations).
One page, evidence-heavy.
Sydney-based with Australian Permanent Residency. Open to senior IC and leadership
conversations in Customer Success and AI-native CS / Solutions Engineering globally,
on-site in Sydney, hybrid, or fully remote.