TALK
23 min read · December 2025

Building AI that ships

Dreamforce 2025 · Why most enterprise AI never leaves the notebook.

AI · Dreamforce · Production

This is an adapted transcript from my Dreamforce 2025 talk on getting AI from proof-of-concept to production in enterprise marketing environments.

The Notebook Graveyard

Raise your hand if your data science team has built an AI model that never made it to production.

Every hand should be up.

I've seen this pattern at three different companies now. Data science builds something impressive in a Jupyter notebook. They demo it to leadership. Everyone gets excited. Then... nothing. The model sits there. No one can figure out how to actually use it.

This is the notebook graveyard, and it's where most enterprise AI goes to die.

Why This Happens

The gap between "model that works in a notebook" and "model that's improving business outcomes" is massive. And it's not a technical gap — it's an organizational one.

Problem 1: No Clear Use Case

Data science builds what's interesting. Business needs what's useful. These aren't always the same thing.

Problem 2: No Integration Path

A model is useless if it can't connect to the systems where decisions happen. If your churn model can't trigger an action in your CRM, it's just a science project.

Problem 3: No Feedback Loop

Models degrade over time. Without monitoring and retraining, your "AI" becomes an expensive random number generator.

What Actually Works

Let me tell you about a churn prediction model we shipped last year. 0.93 AUC — which is good, but not the point. The point is it's in production, it's being used, and it's moving metrics.

Here's how we did it:

Start with the Action

We didn't start with "let's build a churn model." We started with "what would we do differently if we knew a customer was likely to churn?"

The answer: trigger a retention campaign, route to a specialized team, offer a specific incentive.

Only after we knew the action did we build the model. The model exists to enable the action, not the other way around.
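
Concretely, that score-to-action mapping can be as simple as a thresholded router. Everything below is illustrative, not our production config — the thresholds, action names, and incentive are placeholders:

```python
def route_churn_action(score: float) -> dict:
    """Map a churn-risk score in [0, 1] to a concrete business action.

    Thresholds and action names are illustrative placeholders.
    """
    if score >= 0.8:
        # Highest risk: escalate to a human retention specialist
        return {"action": "route_to_retention_team", "priority": "high"}
    if score >= 0.5:
        # Moderate risk: automated campaign plus an incentive
        return {"action": "trigger_retention_campaign",
                "incentive": "10_percent_discount"}
    # Low risk: no intervention
    return {"action": "none"}
```

The point of writing this down first is that it forces the question "what would we actually do?" before any modeling starts.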

Build for Integration

From day one, we designed for Salesforce integration. The model outputs a score, the score updates a field, the field triggers a flow. No manual intervention required.

This sounds obvious, but most AI projects don't think about integration until the model is "done." By then, it's too late — you've built something that doesn't fit the systems you have.
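
A minimal sketch of the "score updates a field" step, assuming a hypothetical custom field `Churn_Risk_Score__c` on the Contact object (the field name, API version, and instance URL are all placeholders, not our actual setup). It builds the Salesforce REST PATCH call without sending it; once the field updates, a Flow can react to the change:

```python
import json

API_VERSION = "v59.0"  # placeholder API version

def build_score_update(instance_url: str, contact_id: str, score: float):
    """Build the REST call that writes a churn score onto a Contact record.

    Returns (method, url, body) for a PATCH request.
    'Churn_Risk_Score__c' is a hypothetical custom field.
    """
    url = (f"{instance_url}/services/data/{API_VERSION}"
           f"/sobjects/Contact/{contact_id}")
    body = json.dumps({"Churn_Risk_Score__c": round(score, 3)})
    return "PATCH", url, body

# The actual send would be e.g. requests.patch(url, data=body, headers=...)
method, url, body = build_score_update(
    "https://example.my.salesforce.com", "003PLACEHOLDERID", 0.87)
```

Keeping the request-building separate from the sending also makes this trivially testable without a live org.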

Create the Feedback Loop

Every prediction gets tracked. Did the customer actually churn? How did our intervention perform? This data feeds back into retraining.

Six months in, our model is better than launch day — because it's learning from real outcomes, not just historical data.
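
The loop itself is disciplined bookkeeping: store every prediction, join the outcome when it arrives, and score the model on what actually happened. A stripped-down, in-memory sketch (a real version would write to a warehouse):

```python
predictions = {}  # customer_id -> predicted churn probability
outcomes = {}     # customer_id -> actual result: churned (1) or retained (0)

def log_prediction(customer_id: str, score: float) -> None:
    predictions[customer_id] = score

def log_outcome(customer_id: str, churned: bool) -> None:
    outcomes[customer_id] = int(churned)

def realized_auc():
    """AUC over resolved predictions: the probability that a random
    churner scored higher than a random non-churner (ties count half)."""
    pairs = [(predictions[c], outcomes[c]) for c in outcomes if c in predictions]
    pos = [s for s, y in pairs if y == 1]
    neg = [s for s, y in pairs if y == 0]
    if not pos or not neg:
        return None  # not enough resolved outcomes yet
    wins = sum((p > n) + 0.5 * (p == n) for p in pos for n in neg)
    return wins / (len(pos) * len(neg))

# Example: two resolved predictions, perfectly separated
log_prediction("cust_a", 0.9)
log_prediction("cust_b", 0.2)
log_outcome("cust_a", churned=True)
log_outcome("cust_b", churned=False)
```

The same joined prediction/outcome table is what feeds retraining — no extra data collection needed.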

The Playbook

If you want to ship AI that actually works:

  1. Start with the business action, not the model
  2. Design for integration from day one
  3. Build monitoring before you build the model
  4. Plan for retraining — models are not "done"
  5. Measure business outcomes, not just model metrics
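
Item 3 is the one teams skip. One concrete form "monitoring first" can take is comparing the live score distribution against the training baseline with a population stability index (PSI); the 0.2 alert threshold is a common rule of thumb, not something from our deployment:

```python
import math

def psi(baseline: list, live: list, bins: int = 10) -> float:
    """Population stability index between two score samples in [0, 1].

    Bins both distributions and sums (live - base) * ln(live / base)
    per bin; higher values mean more drift. 0.2 is a common alert level.
    """
    edges = [i / bins for i in range(bins + 1)]

    def frac(sample, lo, hi):
        n = sum(lo <= s < hi or (hi == 1.0 and s == 1.0) for s in sample)
        return max(n / len(sample), 1e-6)  # floor avoids log(0)

    total = 0.0
    for lo, hi in zip(edges, edges[1:]):
        b, l = frac(baseline, lo, hi), frac(live, lo, hi)
        total += (l - b) * math.log(l / b)
    return total

baseline_scores = [0.1, 0.2, 0.3, 0.4, 0.5, 0.6, 0.7, 0.8, 0.9, 0.95]
stable = psi(baseline_scores, baseline_scores)   # identical -> no drift
drifted = psi([0.1] * 10, [0.9] * 10)            # shifted -> large PSI
```

Because this only needs scores, not labels, it can fire an alert months before enough churn outcomes accumulate to recompute AUC.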

The companies winning with AI aren't the ones with the best data scientists. They're the ones who've figured out how to connect models to actions, actions to outcomes, and outcomes back to models.

That's the loop that matters. Everything else is just notebooks.