Why the Audit Comes Before the Agent
The most reliable predictor of a successful agent deployment is not the quality of the agent. It is the quality of what the agent is deploying into.
TL;DR: Deploying agents into a messy GTM stack produces messy results. This audit checklist helps you assess data quality, process documentation, system connectivity, and team readiness — so you know exactly what to fix before your first agent goes live.
A RevOps agent evaluating leads against a clean CRM, documented routing rules, and a validated ICP produces outputs your sales team trusts within weeks. The same agent evaluating leads against a CRM with 20% duplicate rates, routing logic that exists only in someone's head, and an ICP that three people define differently produces outputs nobody trusts — and that erosion of trust is almost impossible to recover from without a complete redeployment.
That is why the audit comes before the contract is signed, before the implementation starts, and certainly before anything goes live. It takes two to four hours to complete and can save you four to six months of struggling with a troubled deployment.
The audit covers four independent dimensions. A team can score well on one and poorly on another — the audit identifies which dimension is the binding constraint so you know exactly where to invest preparation time before the clock starts.
Data Quality: The Binding Constraint
Data quality is the dimension that fails most deployments. Agents make decisions based on the data they can see. If the data is wrong, the decisions are wrong — at volume, continuously, until someone notices.
The most important metric to check first is the duplicate contact rate. Run a deduplication analysis in HubSpot — either using HubSpot's native deduplication tool or by exporting contacts and matching on email domain and name combinations. Calculate what percentage of your contact base has a probable duplicate record.
Above 15% duplicate rate is a blocking issue. Agents making routing and scoring decisions on duplicated contacts produce inconsistent outputs by definition — the same person exists twice in the CRM with different activity histories, different field values, and potentially different lifecycle stages. Below 15% is workable. Below 8% is good. You don't need perfect data before deployment, but you need data clean enough that the agent's decisions will be recognisably correct to the people reviewing them.
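If you take the export route, the domain-and-name matching can be sketched in a few lines of Python. This is a deliberately crude illustration, not HubSpot's matching logic: the field names and the exact-match key are assumptions, and production deduplication usually needs fuzzier comparison.

```python
from collections import Counter

def duplicate_rate(contacts):
    """Estimate the share of contacts with a probable duplicate.

    Each contact is a dict with 'email' and 'name'. Matching on
    email domain plus normalised name is a crude stand-in for a
    proper deduplication tool.
    """
    def key(contact):
        domain = contact["email"].split("@")[-1].lower()
        name = contact["name"].strip().lower()
        return (domain, name)

    counts = Counter(key(c) for c in contacts)
    # Count every contact that belongs to a group of two or more.
    dupes = sum(n for n in counts.values() if n > 1)
    return dupes / len(contacts) if contacts else 0.0

contacts = [
    {"email": "ana@acme.com", "name": "Ana Silva"},
    {"email": "a.silva@acme.com", "name": "Ana Silva"},  # probable duplicate
    {"email": "bo@globex.io", "name": "Bo Chen"},
    {"email": "cy@initech.co", "name": "Cy Park"},
]
print(f"{duplicate_rate(contacts):.0%}")  # -> 50%
```

Two of the four sample contacts share a domain-plus-name key, so the sketch reports a 50% rate; on a real export you would run this against the full contact file.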
The second check is the core field completion rate. Filter your HubSpot contact list by contacts missing company name, job title, lifecycle stage, lead source, and country. Calculate the completion percentage for each. Below 55% completion on any of these fields is a material problem — the agent cannot make reliable ICP or routing decisions if nearly half of its records are missing the signals it needs.
Third is CRM history depth. Check what percentage of your contacts were created more than 12 months ago. If fewer than 40% of contacts have 12+ months of history, your CRM may lack the historical patterns needed to validate scoring model decisions. This matters less for the first deployment than for ongoing optimisation — agents improve faster with richer historical data.
Fourth is email deliverability health. In your HubSpot email health report, check bounce rate and unsubscribe rate across the last three months. A bounce rate above 5% is a red flag — a lifecycle agent triggering email sequences against a list with high bounce rates will damage the domain's reputation rapidly. This is the data quality problem with the fastest and most visible negative consequence.
Fifth is lead source attribution. Filter contacts by "original source is unknown." If more than 25% of your contact base has no source attribution, the agent's ability to weight signals by source quality is limited. Source data is one of the highest-signal inputs for both routing and scoring decisions.
The thresholds for each metric:

- Duplicate rate below 8% is green, 8–15% is amber, above 15% is red and blocking.
- Core field completion above 75% is green, 55–75% is amber, below 55% is red on any single field.
- Email bounce rate below 2% is green, 2–5% is amber, above 5% is red.
- Lead source attribution above 90% is green, 75–90% is amber, below 75% is red.
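For teams that want to automate the scoring, the four thresholds translate directly into a small rating helper. A sketch: the cut-offs are the ones listed above, all inputs are percentages, and the field completion input should be the lowest completion rate across the core fields.

```python
def rate_data_quality(duplicate_rate, field_completion,
                      bounce_rate, source_attribution):
    """Map the four data quality metrics to green/amber/red."""
    def band(value, green, red, higher_is_better):
        # Boundary values fall into amber, matching the ranges above.
        if higher_is_better:
            if value > green:
                return "green"
            if value < red:
                return "red"
        else:
            if value < green:
                return "green"
            if value > red:
                return "red"
        return "amber"

    return {
        "duplicate_rate": band(duplicate_rate, 8, 15, higher_is_better=False),
        "field_completion": band(field_completion, 75, 55, higher_is_better=True),
        "bounce_rate": band(bounce_rate, 2, 5, higher_is_better=False),
        "source_attribution": band(source_attribution, 90, 75, higher_is_better=True),
    }

# Example: duplicate rate and field completion come back amber, the rest green.
print(rate_data_quality(12, 68, 1.4, 93))
```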
Process Documentation: The Blueprint the Agent Executes
Process documentation is the second most common binding constraint, and the one most teams underestimate.
The test is simple. Could a new RevOps hire, using only your written documentation and without asking anyone, correctly execute your lead routing logic, your MQL qualification process, and your data governance rules in their first week?
If the answer is no — if the documentation doesn't exist, or exists partially, or exists but doesn't match how things are actually done — the agent will be configured against someone's best reconstruction of the process, and that reconstruction will be wrong in ways that only surface after the agent has been making wrong decisions for two weeks.
The five process areas to audit are lead routing logic, ICP definition, lifecycle stage definitions, data governance rules, and reporting structure.
For lead routing, check whether a document exists specifying which leads go to which reps or queues, what the criteria are for each routing rule, what happens when a rep is at capacity, and what the SLA is for each lead tier.
A green rating means a full document exists that the sales team would recognise as accurate. Amber means partial documentation with some informal or tribal knowledge elements. Red means routing logic exists primarily in people's heads.
For ICP definition, check whether a written document specifies firmographic criteria — company size, industry, geography, funding stage — and behavioural criteria, and whether that document reflects at least six specific criteria that sales and marketing leadership agree on. A vague ICP like "mid-market SaaS companies" is not an ICP definition — it's a category. The agent needs specific thresholds it can evaluate.
For lifecycle stage definitions, check whether the entry criteria for each stage are written down and agreed. Not just the HubSpot property configuration — the actual business definition of what qualifies a contact for MQL, SQL, Opportunity, and Customer. If different people give different answers to "what makes something an MQL," the stage definitions are not sufficiently documented.
For data governance, check whether a written standard exists covering which fields are required at each lifecycle stage, what values are accepted in key fields, and who is responsible for data quality maintenance. Field governance is the single most important thing you can do to prevent data quality from degrading after the agent starts running.
For the reporting structure, check whether your reporting cadence, metrics, data sources, and ownership are documented. An agent-generated report should produce outputs that match what stakeholders expect. If the reporting structure is informal, the agent's outputs will be assessed against implicit expectations that nobody has made explicit.
System Connectivity: The Infrastructure for Multi-System Action
Agentic AI produces its most significant value when agents can act across multiple systems simultaneously. That requires reliable connectivity to each system before deployment, not as a configuration task during it.
The most important connectivity check is HubSpot API access. Confirm your HubSpot subscription includes full API access — Marketing Hub Professional tier and above. HubSpot Starter plans have limited API access that is insufficient for full agent connectivity. If your plan requires an upgrade, that needs to happen before the implementation timeline begins.
For the BI Agent, check Databox connectivity. Is a Databox account active? Is HubSpot connected and returning clean data? Are the key data sources that will feed the agent's reporting already integrated? A Databox account that exists in principle but hasn't been properly connected to your HubSpot data is not a ready integration — it's a setup task that will delay week four if it hasn't been done.
For the Lifecycle Agent — if that's in scope for the first deployment — check Customer.io integration status. Is the Customer.io HubSpot integration active? Is the API key available? Are existing lifecycle sequences documented clearly enough for the agent to understand what it's orchestrating?
For agent notifications, confirm a Slack workspace is available and that the bot integration has been approved by your IT or security team if approval is required. This check takes five minutes and prevents a common go-live surprise where the Slack notification layer doesn't fire because a bot approval request was never submitted.
For orchestration, confirm n8n access — either through the ARISE GTM hosted environment or a self-hosted instance. This is the layer that allows complex multi-step workflow orchestration across connected systems, and it needs to be provisioned before configuration work begins.
The connectivity audit should produce a simple status for each integration: active and configured, available but not yet configured, or not yet available. Any integration in the third category that is required for the initial deployment needs to be moved to the second category before week one begins.
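That tri-state status is simple enough to track programmatically. A minimal sketch, assuming a required-integration set that matches the scope of your deployment (the names here are illustrative, not a fixed list):

```python
# Integrations assumed to be required for the initial deployment.
REQUIRED = {"hubspot_api", "databox", "slack", "n8n"}

def blocking_integrations(status):
    """Return the required integrations still in the third category.

    status maps each integration name to one of 'active',
    'available', or 'unavailable'. Anything required and
    'unavailable' must be provisioned before week one.
    """
    return sorted(
        name for name, state in status.items()
        if name in REQUIRED and state == "unavailable"
    )

status = {
    "hubspot_api": "active",
    "databox": "unavailable",
    "slack": "available",
    "n8n": "unavailable",
    "customer_io": "unavailable",  # not required for this scope
}
print(blocking_integrations(status))  # -> ['databox', 'n8n']
```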
Team Readiness: The Human Layer That Makes It Work
Technical readiness is necessary but not sufficient. The most common deployment failures in organisations that have good data quality and solid process documentation are human failures — nobody owns the governance, the sales team wasn't briefed, the RevOps team doesn't know which of their current tasks the agent will handle.
The governance owner check is binary. Is there a named person, with allocated time in their week, who is briefed on the governance role? Not "we'll figure out who does that as we go." A specific person with a specific time allocation. This person is the difference between a system that calibrates successfully over 90 days and one that produces inconsistent outputs that nobody investigates.
The sales team briefing check is also binary. Does the sales team know what is changing and why, before it changes? Have they been told what the agent will do, what will look different in their lead queue, and who to contact if they see something unexpected? If the answer is no, go-live creates confusion that takes weeks to resolve.
The RevOps team alignment check has more nuance. Does the RevOps team understand which of their current tasks the agent will handle and what they will own instead?
If the agent is taking over lead routing and data hygiene, the RevOps team needs to know what their new responsibilities look like — not vaguely, but specifically. What are they doing with the hours they're getting back? If that question isn't answered before deployment, the hours don't actually get reclaimed because old habits fill them.
The escalation protocol check is the one most commonly missed. Is there a written protocol covering what triggers a human review, who receives the escalation, and what the expected response time is?
Without this, agent exceptions sit in a queue with no defined resolution path. The agent flags an unusual decision for human review. Nobody has been told they're responsible for reviewing it. It sits. The deal it affects ages out. The agent gets blamed for a failure that was actually a governance gap.
How to Score Your Readiness
Score each of the 14 checks across the four dimensions as green, amber, or red.
Eleven to fourteen greens: deployment-ready. Proceed to the GTM Blueprint and agent configuration.
Eight to ten greens with no reds: near-ready. Address amber items in parallel with the initial configuration in weeks one and two.
Five to seven greens: foundation needed. Resolve red items before starting configuration. Address amber items in the first two weeks of the project.
Below five greens: not yet ready. A six to twelve-week foundation programme is the right first investment before any deployment begins.
The threshold that matters most: any single red in the data quality dimension — duplicate rate above 15%, core field completion below 55%, bounce rate above 5% — should delay the RevOps Agent deployment regardless of how the rest of the audit scores. These specific problems produce visible, immediate output quality failures that will erode team trust before the agent has had a chance to demonstrate its value.
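The scoring bands above reduce to a small decision helper. A sketch, with the data quality override applied first; the verdict wording is mine, not a fixed taxonomy.

```python
def readiness(greens, reds, data_quality_red=False):
    """Translate audit results into a readiness verdict.

    greens and reds are counts across the 14 checks. data_quality_red
    flags any red in the data quality dimension, which blocks the
    RevOps Agent deployment regardless of the overall score.
    """
    if data_quality_red:
        return "blocked: resolve data quality reds before deploying"
    if greens >= 11:
        return "deployment-ready"
    if greens >= 8 and reds == 0:
        return "near-ready: fix ambers during weeks one and two"
    if greens >= 5:
        return "foundation needed: resolve reds before configuration"
    return "not yet ready: run a 6-12 week foundation programme first"

print(readiness(9, 0))  # -> near-ready: fix ambers during weeks one and two
```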
What to Fix First and in What Order
If you have more items to address than you can tackle simultaneously, prioritise in this order.
ICP definition first. The single highest-leverage fix across all four dimensions. Every agent decision — routing, scoring, lifecycle management, expansion identification — is better when the ICP is clearly defined. If one thing from this audit gets fixed before anything else, it should be this.
CRM deduplication second. Duplicate records produce inconsistent outputs that are visible and credible to the people evaluating agent quality. A 20% duplicate rate means that approximately one in five routing decisions is made on incomplete or contradictory data. This is fixable in a focused sprint.
Lead routing documentation third. The agent cannot route correctly without written rules. This is documentation work, not technical work. It takes a focused week with the right people in the room to produce documentation that is complete enough for agent configuration.
HubSpot API access fourth. This is a technical requirement that can take time to provision through procurement or IT approval processes. Start this process early — before it becomes the thing blocking week three configuration.
Core field completion fifth. This is a hygiene project that improves incrementally over time. Begin the process early, but don't wait for it to reach a green rating before starting the rest of the deployment. An amber field completion rate is sufficient to proceed if the other red items have been resolved.
Frequently Asked Questions
How long does this audit take to complete?
For a RevOps Manager or Marketing Ops lead with full HubSpot access, the audit takes two to four hours to complete across all four dimensions. The data quality checks take the longest — approximately 90 minutes to run the reports and interpret the results. Process documentation and team readiness checks are faster if documentation already exists and slower if it doesn't.
What is the most commonly missed audit item?
The escalation protocol in the team readiness dimension is consistently underdeveloped. Teams focus heavily on data and process readiness but don't define what happens when the agent produces a decision it can't resolve autonomously. The result is that exceptions have no resolution path, agent-generated issues sit unreviewed, and the governance owner learns about problems when the sales team complains rather than when the system flags them.
Can I start deployment before all amber items are resolved?
Yes, with careful sequencing. Amber items in the system connectivity dimension can be resolved in parallel with initial configuration work. Amber items in the data quality dimension can be partially mitigated by configuring the agent with more conservative thresholds until the data improves.
Amber items in the process documentation dimension should be resolved before go-live — deploying an agent against partially documented processes produces unpredictable outputs. Red items in any dimension should be resolved before configuration begins.
What if our processes are partially documented but not fully?
Start the deployment with the processes that are fully documented and defer the others. A RevOps Agent handling only inbound lead routing — where the rules are clear and written — is more valuable than a RevOps Agent attempting to handle all RevOps processes, half of which are underdocumented. Deploy into clarity. Expand into the areas where documentation catches up.
How does the readiness audit relate to the GTM Blueprint?
The GTM Blueprint includes a facilitated version of this audit as part of its architecture phase. If you're uncertain about your readiness scores in any dimension — particularly data quality or process documentation — the Blueprint is the right starting point. It surfaces the gaps, quantifies them, and produces a sequenced plan for resolving them before configuration begins.
Not sure where your stack sits? Our GTM Blueprint includes a facilitated readiness audit, delivered as part of the architecture phase.
Book a Blueprint Conversation →
Published by Paul Sullivan, March 2026. Paul Sullivan is the founder of ARISE GTM, a HubSpot Platinum Partner specialising in agentic AI for B2B SaaS revenue teams, and author of Go-To-Market Uncovered (Wiley, 2025).