by Margarita Savytska
Platform migration is sold as a clean slate. New system, better capabilities, fresh start. And the migration itself usually goes fine – records transfer, integrations reconnect, the new platform goes live on schedule.
The problems start a few weeks later, when someone notices that conversion rates have improved for no obvious reason, or that a suppressed segment is receiving campaigns, or that sales is drowning in “qualified” leads that don’t feel qualified at all. Nothing is technically broken. The dashboards look healthy. But the data has changed meaning – and every automation in the new platform is now acting on that changed meaning with complete confidence.
We see this pattern consistently across migrations, and the failure mode is almost never a hard error. It’s data that looks right, passes every validation check, and is structurally wrong in ways that take weeks or months to surface.
Lifecycle stages that rewrite history without triggering an alert
This one shows up regularly in HubSpot-to-Marketo migrations. Lifecycle stages get mapped literally – MQL to MQL, SQL to SQL – because the labels match. But the logic underneath doesn’t.
In HubSpot, the lifecycle stage is often manually updated or loosely tied to form submissions and list membership. In Marketo, it’s typically driven by smart campaigns and triggers that actively reclassify leads based on behaviour and scoring.
What happens after migration: every lead arrives in Marketo with its “correct” stage. Then the Marketo lifecycle campaigns start processing them against incomplete trigger logic. Within hours, a significant chunk of the database gets reclassified. SQLs get pushed back to MQL. Leads that were sales-ready get recycled into nurture.
The worst part is that the dashboards actually look better. The funnel appears tidier. Conversion rates between stages improve – because leads were moved backward, not because anything real changed. The “wait, something’s off” moment usually arrives when someone compares month-over-month pipeline and realises the numbers don’t connect to actual deals.
The fix: Before go-live, disable lifecycle campaigns for 48 hours after the data lands. Compare stage distribution before and after enabling them. If more than a small percentage of records reclassify immediately, the trigger logic needs adjusting before anything else runs on top of it.
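The before/after comparison can be sketched as two exports keyed by record ID. This is a minimal illustration, not tied to either platform's API; the field values and the 5% threshold are assumptions you would tune to your own database:

```python
from collections import Counter

def stage_reclassification_rate(before, after):
    """Fraction of records whose lifecycle stage changed after
    lifecycle campaigns were re-enabled. `before` and `after` map
    record ID -> stage, exported at each point in time."""
    if not before:
        return 0.0
    changed = sum(1 for rid, stage in before.items()
                  if after.get(rid) != stage)
    return changed / len(before)

def stage_distribution(stages):
    """Count records per stage for a side-by-side comparison."""
    return Counter(stages.values())

# Illustrative data: two SQLs pushed back to MQL right after go-live.
before = {"1": "SQL", "2": "SQL", "3": "MQL", "4": "Customer"}
after  = {"1": "MQL", "2": "MQL", "3": "MQL", "4": "Customer"}

rate = stage_reclassification_rate(before, after)
if rate > 0.05:  # the cutoff is a judgment call, not a standard
    print(f"{rate:.0%} of records reclassified - review trigger logic")
```

In practice the two dictionaries would come from CSV exports or API pulls taken immediately after the data lands and again 48 hours after the lifecycle campaigns are switched on.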
Consent fields that technically migrate and legally break
Consent is almost never one-to-one between platforms, and this is where the most dangerous post-migration problems hide.
In HubSpot, consent might be structured around subscription types with lawful bases stored in a specific format. In Marketo, consent is often flattened into fields like “Opt-in,” “Marketing Suspended,” or custom booleans. During migration, the values transfer and the field names look right. Nothing errors.
But the logic behind the values has changed. We’ve seen cases where people who unsubscribed from a single email type in HubSpot ended up marked as fully opted-out in Marketo – and the opposite, where contacts who had opted out of specific communications were migrated as broadly opted-in.
Emails keep sending. Lists keep populating. Segmentation keeps running. The issue surfaces weeks later as an unexpected drop in engagement, or when someone spots that a segment marked as suppressed is actively receiving campaigns. The data didn’t fail loudly. It just changed meaning during the move, and nobody caught it because the fields were populated and the formats were valid.
When AI features start making suppression and personalization decisions based on that consent data, the problem scales. Every automated decision about who to contact and who to exclude is now grounded in consent logic that doesn’t reflect what the contact actually agreed to.
The fix: After migration, pull everyone marked as opted-in in the new platform and cross-reference against the source system’s consent records. Any mismatch beyond a few percent means the mapping broke something. Do the same for opted-out and suppressed contacts. This takes a day, not a week – and it’s the single highest-value check you can run post-migration.
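The cross-reference itself is simple once both exports are normalised into a shared vocabulary. A minimal sketch, assuming you have already mapped each system's consent fields onto the same status labels (the labels and IDs here are illustrative):

```python
def consent_mismatches(source, target):
    """Cross-reference consent status between the source system and
    the migrated platform. Both arguments map contact ID -> status,
    using a shared vocabulary each export was normalised into first
    (e.g. "opted_in", "opted_out", "suppressed")."""
    shared = source.keys() & target.keys()
    return {cid: (source[cid], target[cid])
            for cid in shared
            if source[cid] != target[cid]}

# Illustrative data: contact "a" was opted out at the source but
# arrived opted in - exactly the silent flip described above.
source = {"a": "opted_out", "b": "opted_in", "c": "suppressed"}
target = {"a": "opted_in",  "b": "opted_in", "c": "suppressed"}

mismatches = consent_mismatches(source, target)
mismatch_rate = len(mismatches) / len(source)
```

Run it three times: once over contacts marked opted-in in the new platform, once over opted-out, once over suppressed. Any rate beyond a few percent means the mapping broke something.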
Lead scoring that “works” but measures the wrong thing
This pattern is harder to spot and probably the most common. In HubSpot, scoring tends to be cumulative and relatively simple. In Marketo, scoring is typically tied to multiple campaigns, triggers, and decay logic. During migration, the score field gets copied and scoring campaigns get rebuilt – or partially rebuilt.
What goes wrong: historical scores don’t align with the new scoring logic. Trigger-based campaigns start firing on old activities that shouldn’t qualify. Some scores inflate rapidly while others stagnate. We’ve seen migrations where half the database sat above the MQL threshold within days of go-live – flooding sales with “qualified” leads that weren’t actually qualified under any meaningful definition.
Again, nothing is technically broken. The system is doing exactly what it was configured to do. The problem is that it’s no longer measuring the same thing the old system measured, but everyone’s treating the numbers as though they are.
The moment of realisation almost always comes from sales, not marketing: “Something changed – these leads don’t feel the same.”
The fix: Freeze scoring campaigns for the first week post-migration. Pull the score distribution from the old system’s last quarter and compare it against the new one. If the shape is dramatically different – scores clustering where they didn’t before, or the MQL threshold catching twice as many leads – recalibrate before enabling any automation that acts on scores.
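One quick shape check is the share of leads sitting at or above the MQL threshold in each system. A minimal sketch with made-up scores and an illustrative threshold; the "twice as many" cutoff mirrors the heuristic above, not an industry standard:

```python
def share_above_threshold(scores, threshold):
    """Fraction of leads at or above the MQL threshold."""
    if not scores:
        return 0.0
    return sum(1 for s in scores if s >= threshold) / len(scores)

old_scores = [10, 25, 40, 45, 70, 85]   # last quarter, old platform
new_scores = [60, 65, 70, 80, 90, 95]   # post-migration, new platform
MQL_THRESHOLD = 50  # illustrative value

old_share = share_above_threshold(old_scores, MQL_THRESHOLD)
new_share = share_above_threshold(new_scores, MQL_THRESHOLD)

# If the new system qualifies twice as many leads, the scoring
# campaigns are measuring something different - recalibrate before
# any automation acts on the scores.
if new_share > 2 * old_share:
    print(f"MQL share jumped from {old_share:.0%} to {new_share:.0%}")
```

Comparing full histograms (score buckets per decile) catches subtler drift than the threshold check alone, but this single number is often enough to spot the half-the-database-above-MQL failure described above.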
The question that should follow every migration
Every one of these patterns shares a root cause: the data was validated on structure, not on meaning. The fields were present, the values were formatted correctly, the records transferred successfully. Nobody checked whether the data still meant what it used to mean in the new environment.
That’s the audit most migration projects skip – not because the team is careless, but because the project plan declares success when the platform goes live, not when someone has verified that the data feeding every automated decision still reflects reality.
And now AI is layering on top of this. Every AI feature activated in a post-migration environment is making decisions based on whatever version of truth made it across. It’s scoring leads on logic nobody recalibrated. It’s suppressing contacts based on consent it didn’t verify. It’s personalizing based on preference data that changed meaning during the move.
The data passed every check. It’s still lying. The only question is how long it takes for someone to notice – and how many automated decisions happen in the meantime.
Margarita Savytska is a Marketing Executive at Sojourn Solutions, a Marketing Operations consultancy working with Enterprise clients across the UK, Europe and North America.