iOS 14 broke client-side tracking. Match rates collapsed industry-wide. I built a server-side conversion pipeline from Salesforce to Meta's Graph API, with five conversion events, deterministic SHA-256 event IDs for deduplication, and a custom audit log. Match rate went from 3.0 to 6.1. No PII left the org.
In 2021, Apple's iOS 14 update put the IDFA behind an App Tracking Transparency prompt, and most users declined. Then Safari and Firefox started blocking third-party cookies by default. Then Chrome announced its own deprecation timeline. In two years, the entire client-side tracking infrastructure that ad platforms had been built on for a decade quietly stopped working.
For a paid media program, this showed up as match rate collapse. Match rate is the percentage of your conversion events that Meta can actually tie back to a user it recognizes. Pre-iOS 14, well-instrumented programs typically saw match rates in the 6 to 9 range on Meta's 10-point Event Match Quality scale. Post-iOS 14, the same programs were watching match rates fall into the 2 to 4 range. That's not a minor degradation. That's your attribution model going blind in one eye.
At HiRoad, we were running paid social to drive insurance quotes. Our match rate was sitting at 3.0. That meant 70 percent of our conversions were invisible to Meta's optimization algorithms. Which meant Meta couldn't learn from our data. Which meant the algorithms couldn't find the people most likely to convert. Which meant our cost per acquisition drifted in the wrong direction quarter after quarter.
We needed a fix. And the fix wasn't going to come from the browser.
Before I get into the architecture, you need to know the constraint that shaped every decision: no PII was going to leave the org. Not hashed. Not encrypted. Not "we promise we sanitized it." None.
This was a deliberate choice, not a regulatory accident. HiRoad is an insurance brand inside a State Farm company, and the data governance bar in this corner of the industry is high. Even hashed PII carries operational risk: hash collisions, irreversibility audits, vendor key rotation, the question of whether SHA-256 will be considered adequate by regulators five years from now. The simpler answer to all of that is to design a system where PII never enters the equation at all.
This is not how most CAPI integrations get built. The standard playbook is to hash email and phone at the source and send them as match keys, because they produce the highest match rates. We were doing something harder: getting CAPI working with no email, no phone, no name, no address, no PII of any kind, and still getting match rates above 6.
This is the part of the case study that the rest of the architecture is in service of.
Meta released the Conversions API (CAPI) specifically to solve the iOS 14 problem. The idea is simple: instead of relying on a pixel firing in the user's browser, send conversion events directly from your server to Meta's server. Browser tracking goes through ad blockers, cookie restrictions, and iOS privacy frameworks. Server-to-server traffic does not.
The catch is that server-side tracking is a systems integration project, not a marketing project. There's no plug-and-play option that works at enterprise scale. You either pay a vendor to do it (with all the data residency and contract questions that come with that), use a CDP middleware layer (which most insurance orgs don't have fully implemented yet), or build it yourself directly inside the system that already holds the data.
For us, that system was Salesforce. Every quote, every lead, every sale lived there. Building the integration directly in Salesforce meant we never had to ship data to a third party, never had to negotiate with a vendor about what they could and couldn't store, and never had to worry about an intermediate system mishandling sensitive fields. The data stayed inside our boundary, and only the signals we explicitly approved ever crossed it.
I built it directly in Apex.
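For readers who haven't seen a Conversions API call, here's a minimal Python sketch of the payload shape the Apex class assembles. The function name and structure are illustrative, not our actual code; what's real is the Graph API `/{pixel_id}/events` route and the `data` array Meta expects:

```python
import time

GRAPH_API_VERSION = "v18.0"  # the version our payloads targeted at the time

def build_capi_event(pixel_id, event_name, event_id, user_data):
    """Assemble a single Conversions API event in the shape Meta expects.

    user_data carries only the non-PII match keys we approved.
    No email, phone, name, or address fields ever appear here.
    """
    return {
        "endpoint": f"https://graph.facebook.com/{GRAPH_API_VERSION}/{pixel_id}/events",
        "body": {
            "data": [
                {
                    "event_name": event_name,             # e.g. "Lead", "Purchase"
                    "event_time": int(time.time()),       # Unix seconds, server clock
                    "action_source": "system_generated",  # server-side, not a browser event
                    "event_id": event_id,                 # shared with the pixel for dedup
                    "user_data": user_data,               # match keys only
                }
            ]
        },
    }
```

The access token is deliberately absent: in the real system it lives in a Named Credential, never in the request-building code.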
The architecture is straightforward when you look at it from a distance and gets interesting fast when you zoom in.
**The trigger layer.** Conversion events fire when specific records change state in Salesforce. A Lead is created. An Opportunity moves to Quote stage. A Policy reaches Closed Won. A Renewal is processed. A Cancellation is logged. Five conversion events total, each tied to a specific business outcome we wanted Meta to optimize for.
**The processing layer.** When one of those state changes happens, a Flow invokes an Apex HTTP Callout class. The class is responsible for assembling the CAPI payload, generating the match keys we were willing to send, building the event metadata, and logging the response. This separation matters. Flows are great at orchestration but terrible at security-sensitive operations. Apex is built for callouts and exception handling. Use each for what it's good at.
**The match-key layer.** This is the part that's genuinely different from a standard CAPI implementation. Without PII, Meta has to identify a user through other signals. The ones we sent were:

- **fbc** — Meta's click ID cookie, captured cleanly at landing
- **fbp** — Meta's browser ID cookie, propagated consistently through the funnel
- **external_id** — a stable internal identifier, non-PII, consistent across sessions
That's the entire user-identity payload. No email. No phone. No name. No address. Nothing that could be used to identify a person if the data ever leaked or got subpoenaed in a context it shouldn't have.
The trade-off, on paper, is that match rates are lower without email and phone. The reality, in our case, was different. By being disciplined about which signals we sent and how cleanly we captured them, especially fbc and fbp, we got to 6.1. More on that below.
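"Clean fbc capture at landing" is worth making concrete. The fbc value is derived from the `fbclid` parameter Meta appends to ad-click URLs, in the format `fb.{subdomainIndex}.{creationTimeMs}.{fbclid}`. A sketch of the capture step, assuming a hypothetical landing-page handler:

```python
import time

def fbc_from_fbclid(fbclid, subdomain_index=1):
    """Build an fbc match key from the fbclid query parameter.

    Format per Meta's parameter spec: fb.{subdomainIndex}.{creationTimeMs}.{fbclid}.
    Capturing this at landing and persisting it with the lead record is what
    lets server-side events carry fbc later in the funnel.
    """
    creation_time_ms = int(time.time() * 1000)
    return f"fb.{subdomain_index}.{creation_time_ms}.{fbclid}"
```

The discipline is in the persistence, not the string formatting: if fbc isn't stored with the record at creation, there's nothing for the server-side event to send weeks later.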
**The authentication layer.** Access tokens are sensitive. They should never live in code. We used Named Credentials, which is Salesforce's secure way of storing endpoint URLs and tokens. Named Credentials handle rotation, encryption, and access auditing. They also keep the token out of any Apex source someone might pull from version control.
**The audit layer.** Every CAPI call writes a record to a custom object: Meta_API_Log__c. The log captures the timestamp, the conversion event type, the source record ID, the response status code, and an error message if one came back. This sounds boring. It saved us at least twice. When Meta updated their API to v18 and one of our payloads broke, the logs told us exactly which event type was failing and why. Without the logs, we'd have been guessing.
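The log object's shape, sketched as a Python dataclass. The real thing is a Salesforce custom object, and the field list below is exactly what the paragraph above describes, nothing more:

```python
from dataclasses import dataclass
from datetime import datetime, timezone

@dataclass
class CapiLogEntry:
    """Mirror of the Meta_API_Log__c custom object: one row per callout."""
    logged_at: datetime       # when the callout completed
    event_type: str           # which of the five conversion events fired
    source_record_id: str     # the Salesforce record that triggered it
    status_code: int          # HTTP status from Meta's response
    error_message: str = ""   # populated only on failure

def log_capi_call(event_type, source_record_id, status_code, error_message=""):
    """Write a log entry for every callout, success or failure."""
    return CapiLogEntry(
        logged_at=datetime.now(timezone.utc),
        event_type=event_type,
        source_record_id=source_record_id,
        status_code=status_code,
        error_message=error_message,
    )
```

Logging successes as well as failures is the point: "which event type broke after the v18 migration" is only answerable if you can see what was succeeding before.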
The implementation took about three weeks of focused work plus a slower tail of testing, Meta-side configuration, and stakeholder alignment. Once it was live, we monitored match rates daily.
**Match rate, before:** 3.0
**Match rate, after:** 6.1
**Lift:** 2.0×, sustained
For context, Meta's own published guidance is that match rates above 6.0 are considered "good." We weren't just patching a leak. We were getting to a place where Meta's optimization algorithms had enough signal to actually work.
The result that surprised me, honestly, was that we got there without PII. The conventional wisdom in paid media circles is that you need hashed email and phone to hit match rates above 5. We didn't. The combination of clean fbc capture at landing, consistent fbp propagation through the funnel, and stable external IDs across sessions gave Meta enough to match on. The lesson I took from this: PII isn't always required to do this well, if you're disciplined about everything else.
Within the first 60 days, our cost per acquisition on paid social started moving in the right direction. I'm not going to attribute every dollar of that to CAPI because that would be the kind of overclaim I don't believe in. There were other things changing at the same time. But every paid media operator I've talked to who's been through a CAPI implementation tells the same story: when match rate doubles, the algorithm starts finding people you couldn't reach before. The CAPI work made the rest of our optimization possible.
Every server-side implementation has the same failure modes. Here's where this one almost broke.
**1. Pressure to add PII back in.** Halfway through the build, more than one well-meaning stakeholder asked whether match rates would be higher if we sent hashed email. Yes, probably. Were we going to? No. This is the kind of constraint you have to hold the line on, because every exception is the start of a slope you don't want to be on. The way I held the line was to make the no-PII commitment a written architectural principle, signed off by data governance before any code got written. Once it was in the doc, "what if we just added one field" stopped being a casual question.
**2. Scope creep around LTV fields.** Halfway through the build, someone asked if we could send customer lifetime value as a custom parameter on the Purchase event so Meta could optimize for high-value customers, not just any converter. Good idea, real value, total scope expansion. We added it, but only after agreeing that LTV would come from a calculated, bucketed field on the Account object (not a real-time policy lookup) and that the field itself would be a tier indicator like "high / mid / low," not an exact dollar amount. Bucketing matters because exact LTV is approaching PII territory in some interpretations. Lesson: scope changes that look small can blow up integration testing and compliance review. Bound them aggressively or push them to phase two.
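The bucketing logic itself is tiny. A sketch with hypothetical tier thresholds — ours were set with finance, and I'm not publishing them:

```python
def ltv_tier(ltv_dollars, high_threshold=5000, mid_threshold=1500):
    """Collapse an exact lifetime value into a coarse tier.

    Only the tier string crosses the wire to Meta; the dollar figure
    never leaves Salesforce. Thresholds here are illustrative.
    """
    if ltv_dollars >= high_threshold:
        return "high"
    if ltv_dollars >= mid_threshold:
        return "mid"
    return "low"
```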
**3. The deduplication question.** Meta wants you to send the same event from both the browser pixel and CAPI. They use an event_id field to deduplicate. If your event_id doesn't match between the two sources, Meta counts the event twice and your numbers inflate. We had to coordinate the pixel team and the Salesforce team to use the same hashing strategy for event_id generation. This took longer than it should have because the pixel team didn't think it was their problem.
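The fix was a deterministic event_id both sides could compute independently from the same stable inputs. A sketch of the idea — the exact inputs we hashed are simplified here:

```python
import hashlib

def make_event_id(record_id, event_name):
    """Derive a deterministic event_id from stable inputs.

    The browser pixel and the Apex callout compute the same hash from
    the same inputs, so Meta's deduplication sees one event, not two.
    """
    raw = f"{record_id}:{event_name}".lower()
    return hashlib.sha256(raw.encode("utf-8")).hexdigest()
```

The property that matters is determinism, not secrecy: any input either side can't reproduce exactly (a timestamp, a session ID) breaks the match and double-counts the event.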
**4. Sandbox-to-production token confusion.** Named Credentials make token management easier, but they don't solve the problem of two different Meta Business Manager accounts, one for testing and one for production. We had one painful afternoon when test events were going to the production Meta pixel because someone, possibly me, configured the sandbox endpoint with the production token. Test events. In production. We caught it inside an hour, and because we weren't sending PII the blast radius was effectively zero. That was the first time the no-PII constraint paid for itself. It taught me never to trust environment configuration that isn't enforced by automation.
**5. The non-tech stakeholder review.** Our paid media team didn't understand CAPI conceptually until I made them a diagram showing browser pixel versus server pixel side by side, plus a second diagram showing which fields we were and were not sending. Until they understood both, they couldn't help with the QA, couldn't validate the conversion definitions, and couldn't explain to leadership why the project mattered. Lesson: technical work needs a translation layer. Build the diagram before you build the integration.
Two things, in retrospect.
**Decouple the conversion definitions from the API code.** Our conversion event mappings are baked into Apex. If someone in marketing decides to add a sixth event, that's a code change, a deployment, a sandbox refresh, and a Meta-side config update. If I were building it again, I'd put the event mappings in custom metadata records, so non-developers could add new events through a UI without touching the integration code. This is the kind of thing you don't notice until the third or fourth time someone asks to add an event.
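What that decoupling looks like in miniature: a lookup table from Salesforce state change to Meta event name, loaded from configuration instead of hard-coded branches. In Salesforce this would be Custom Metadata Type records; the Meta event names on the right are illustrative, not necessarily the ones we shipped:

```python
# In production this table would live in Custom Metadata records,
# editable through the Salesforce UI without a code deployment.
EVENT_MAPPINGS = {
    ("Lead", "Created"): "Lead",
    ("Opportunity", "Quote"): "InitiateCheckout",
    ("Opportunity", "Closed Won"): "Purchase",
    ("Policy", "Renewed"): "Subscribe",
    ("Policy", "Cancelled"): "PolicyCancelled",  # custom event name, illustrative
}

def meta_event_for(object_name, state):
    """Resolve a Salesforce state change to a Meta event name, or None."""
    return EVENT_MAPPINGS.get((object_name, state))
```

Adding a sixth event then becomes a data change, not a deployment, which is the whole point.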
**Build a CAPI dashboard inside Salesforce.** The Meta_API_Log__c object has all the data needed for a real dashboard: events per day by type, success rates, error rates, average latency. We never built the dashboard. Instead, when leadership asked "is CAPI working?" we ran one-off reports. A dashboard would have saved that time and given the paid media team self-service visibility. Always build the dashboard.
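If you do build the dashboard, the underlying aggregation is small. A Python sketch of the per-event success-rate rollup — the real report would be SOQL over Meta_API_Log__c, and the row shape here is illustrative:

```python
from collections import Counter

def success_rate_by_event(log_rows):
    """Compute per-event-type success rates from audit log rows.

    Each row is a dict with at least 'event_type' and 'status_code',
    mirroring fields on the Meta_API_Log__c custom object.
    """
    totals, successes = Counter(), Counter()
    for row in log_rows:
        totals[row["event_type"]] += 1
        if 200 <= row["status_code"] < 300:
            successes[row["event_type"]] += 1
    return {etype: successes[etype] / totals[etype] for etype in totals}
```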
This project is on my portfolio because it represents the kind of work I actually want to be doing. It's not glamorous. There's no AI demo, no flashy UI, no headline-grabbing announcement. It's plumbing. Server-side plumbing, built under a strict data governance constraint, that still managed to double the effective signal of a paid media program.
The no-PII constraint is the part of this project I'm most proud of. It would have been easier to ship a faster, higher-match-rate version by sending hashed email and phone. We didn't. We took the harder design path because it was the right one for an insurance brand inside a regulated parent company. The system is still running. Other people maintain it now. It pays for itself every quarter in better-targeted media spend. And it does it without ever putting a single piece of customer PII on the wire to Meta.
That's the bar for what shows up on this site. Every project here ships, scales, and survives me leaving the room. And in this case, survives a privacy review too.
---
Want to talk about CAPI, server-side integrations, or just compare notes on your own match rate horror stories? You can reach me [here](#contact) or grab a slot on my [Friday office hours](https://calendar.app.google/ySdikz5efXFEcEBv7).