CASE STUDY 02
11 MIN READ · QUANATA · HIROAD

Churn prediction in production: a 0.93 AUC model that ships

Built a churn risk model in Einstein Model Builder, trained on behavioral signals from Data Cloud. Operationalized through Marketing Cloud journeys with a retention playbook attached to each risk tier. The model doesn't just predict; it triggers action.

Einstein Model Builder · Data Cloud · Marketing Cloud
AUC: 0.93
PLATFORM: Einstein
STATUS: In production

The Problem

At Quanata, we had a churn problem we couldn't see.

Customers would leave, and we'd find out when they didn't renew. By then, it was too late. The decision to leave had been made weeks or months earlier—we just didn't know it.

We had data. Lots of it. App engagement, driving scores, support tickets, billing history, quote activity. But it lived in silos, and nobody had connected it to churn in a predictive way.

The question: could we identify customers likely to churn *before* they made the decision, early enough to intervene?

The Constraint

Most churn models I'd seen in enterprise settings shared a common failure mode: they predicted but didn't act.

Data science team builds a model. Model lives in a notebook. Score gets exported to a CSV. Marketing team gets the CSV a week later. By the time anyone acts, the moment has passed.

I needed a model that:

  • Predicted with enough accuracy to be actionable
  • Updated scores in real-time as behavior changed
  • Automatically triggered retention journeys
  • Didn't require a data science team to maintain
  • Worked within our existing Salesforce + Marketing Cloud stack

The Approach

Data Foundation (Data Cloud)

First, I had to unify the signals. Data Cloud became the foundation:

  • App engagement events (opens, session duration, feature usage)
  • Driving score trends (improving, stable, declining)
  • Support ticket history (volume, resolution time, sentiment)
  • Billing events (payment failures, discount usage)
  • Quote activity (shopping behavior indicating they're considering alternatives)

Each signal got normalized into a unified customer profile with a 12-month lookback.
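The unification itself happens declaratively in Data Cloud, but the folding logic is easy to picture in code. Below is a minimal sketch of building one unified profile from raw signal events with a 12-month lookback; the event shape and field names (`ts`, `type`, `app_open`, etc.) are illustrative assumptions, not Data Cloud's actual schema.

```python
from datetime import datetime, timedelta

LOOKBACK = timedelta(days=365)  # 12-month lookback window

def build_profile(customer_id, events, now):
    """Fold raw signal events into one unified profile dict,
    keeping only events inside the lookback window."""
    profile = {
        "customer_id": customer_id,
        "app_opens": 0,
        "support_tickets": 0,
        "payment_failures": 0,
        "quotes": 0,
    }
    for e in events:
        if now - e["ts"] > LOOKBACK:
            continue  # older than the 12-month window: ignore
        kind = e["type"]
        if kind == "app_open":
            profile["app_opens"] += 1
        elif kind == "support_ticket":
            profile["support_tickets"] += 1
        elif kind == "payment_failure":
            profile["payment_failures"] += 1
        elif kind == "quote":
            profile["quotes"] += 1
    return profile
```

The point of the sketch: every signal lands on one record per customer, so downstream feature engineering never has to join silos again.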

Feature Engineering

The raw signals weren't enough. I engineered features that captured *change*:

  • App engagement trend (last 30 days vs. prior 90 days)
  • Driving score trajectory (improving, stable, declining)
  • Support ticket velocity (accelerating, stable, decelerating)
  • Engagement gap (days since last meaningful interaction)

The insight: it's not the absolute values that predict churn, it's the *trajectory*. A customer with moderate engagement who's declining is higher risk than a customer with low engagement who's stable.

Model Training (Einstein Model Builder)

Einstein Model Builder let me train directly on our Data Cloud unified profiles. Key decisions:

  • Outcome: Did the customer churn within 90 days of the observation date?
  • Training window: 18 months of historical data
  • Holdout: 20% for validation
  • Features: 23 engineered features from the unified profile

The model achieved 0.93 AUC on the holdout set. More importantly, the precision-recall tradeoff at our chosen threshold gave us:

  • High-risk tier (top 10%): 78% actually churned
  • Medium-risk tier (next 20%): 34% actually churned
  • Low-risk tier (bottom 70%): 8% actually churned
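Training and scoring happened inside Einstein Model Builder, but the tier evaluation is simple enough to sketch: rank the holdout customers by score, cut at the top 10% and next 20%, and measure the actual churn rate inside each tier. The function below is an assumed reconstruction of that check, not Einstein's internals.

```python
def tier_churn_rates(scores, churned, top_frac=0.10, mid_frac=0.20):
    """Rank customers by model score, split into high/medium/low risk
    tiers by percentile, and return the observed churn rate per tier."""
    ranked = sorted(zip(scores, churned), key=lambda pair: -pair[0])
    n = len(ranked)
    hi_cut = int(n * top_frac)
    mid_cut = int(n * (top_frac + mid_frac))
    tiers = {
        "high": ranked[:hi_cut],          # top 10% of scores
        "medium": ranked[hi_cut:mid_cut], # next 20%
        "low": ranked[mid_cut:],          # bottom 70%
    }
    return {
        name: (sum(c for _, c in tier) / len(tier) if tier else 0.0)
        for name, tier in tiers.items()
    }
```

On our holdout this kind of check is what produced the 78% / 34% / 8% figures above: the per-tier churn rate is exactly the precision of the "will churn" call at that tier's threshold.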
Operationalization

Here's where most models die. Not this one.

Einstein scores update automatically as behavior changes. Those scores write back to the unified profile in Data Cloud. Data Cloud syncs to Marketing Cloud.

I built three journey branches:

  • **High Risk**: Immediate intervention. Personal outreach from retention team + high-value offer
  • **Medium Risk**: Proactive engagement. Educational content + loyalty program highlight + survey
  • **Low Risk**: Standard nurture. No additional intervention

The journeys run continuously. No manual exports. No weekly batch processes. When a customer's risk tier changes, their journey changes within hours.
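The routing lives in Marketing Cloud journey entry logic, but the score-to-journey mapping is small enough to sketch. The threshold values and journey names here are hypothetical placeholders; the real cutoffs came from the precision-recall tradeoff on the holdout set.

```python
# Illustrative thresholds; the production cutoffs were tuned on holdout data.
HIGH_CUT = 0.70
MEDIUM_CUT = 0.40

# Hypothetical journey identifiers, one per risk tier.
JOURNEY = {
    "high": "retention_outreach",      # personal outreach + high-value offer
    "medium": "proactive_engagement",  # education + loyalty + survey
    "low": "standard_nurture",         # no additional intervention
}

def risk_tier(score):
    """Bucket a churn risk score into a tier."""
    if score >= HIGH_CUT:
        return "high"
    if score >= MEDIUM_CUT:
        return "medium"
    return "low"

def route(customer_id, score):
    """Map a fresh risk score straight to a journey branch:
    no exports, no batch step in between."""
    return customer_id, JOURNEY[risk_tier(score)]
```

Because this mapping is evaluated on every score update, a customer whose score crosses a cutoff moves branches automatically, which is the whole point of tying the tiers to journeys rather than to a weekly CSV.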

The Results

0.93 AUC

The model performs exceptionally well at distinguishing churners from non-churners. But AUC alone doesn't matter—what matters is whether we can act on it.

78% Precision at High-Risk Tier

When we flag someone as high-risk, we're right 78% of the time. That's high enough to justify intensive intervention without wasting resources on false positives.

Real-Time Scoring

Scores update as behavior changes. A customer who suddenly stops opening the app sees their risk score increase within 24 hours, not the next monthly batch.

Automated Action

Zero manual intervention required to move customers into retention journeys. The system watches, scores, and acts.

What Made It Work

Behavioral features over static attributes

Demographics barely moved the needle. What predicted churn was *behavior change*—declining engagement, increasing support tickets, shopping signals.

Tight feedback loop

Because scores update in real-time and journeys trigger automatically, we could measure intervention effectiveness and iterate quickly.

Tiered response

Not every at-risk customer needs the same intervention. Tiering let us allocate resources appropriately—intensive outreach for high-risk, lighter touch for medium.

What I'd Do Differently

More sophisticated journey logic

The current journeys are effective but relatively simple. With more time, I'd build branching logic that adapts based on intervention response.

Churn reason classification

The model predicts *if* someone will churn, not *why*. A secondary model classifying likely churn reason would enable more targeted interventions.

A/B testing infrastructure

We measured overall retention improvement but didn't have clean A/B tests of specific interventions. Building that infrastructure would accelerate learning.

The Lesson

The hard part of predictive modeling isn't the model. It's the operationalization.

A model that lives in a notebook is a science project. A model that triggers automated action is a system. The difference is everything.

We didn't build the most sophisticated churn model in the world. We built one that ships.