AI Cold Call Script Optimizer with Codex: Data-Driven Call Scripts [2026]

· 7 min read

Your cold call scripts were written months ago based on intuition. Meanwhile, your reps have made thousands of calls. The data exists to make those scripts dramatically better—if you can extract it.

OpenAI Codex (GPT-5.3) can analyze call recordings at scale, identify what actually works, and generate optimized scripts backed by real data.

Cold call script optimization workflow

The Cold Call Script Problem

Most cold call scripts fail for predictable reasons:

  • Written by managers, not practitioners: Based on what "should" work, not what does
  • Never updated: Same script for 6-12 months despite market changes
  • One-size-fits-all: No variation by industry, persona, or time of day
  • Measured on the wrong things: Teams track script compliance instead of outcomes

Here's the data from analyzing 50,000+ B2B cold calls:

| Script Element | Top 10% Reps | Bottom 50% Reps |
|---|---|---|
| Opener length | 12-15 seconds | 25-40 seconds |
| First question timing | Within 20 sec | After 45 sec |
| Prospect talk time | 65%+ | <35% |
| Objection handling | Direct response | Deflection/pivot |
| Meeting request | Specific time | "Sometime this week" |

The best reps are doing something different. Codex helps you figure out what.

The Call Script Optimization Pipeline

Step 1: Aggregate Call Data

You need a corpus of calls to analyze. Sources include:

  • Gong/Chorus recordings: Full transcripts + metadata
  • Dialer recordings: Kixie, Orum, Nooks, etc.
  • CRM call logs: Outcomes, duration, disposition
  • Calendar data: Meetings booked from calls

Minimum viable corpus: 500+ calls per script variant you want to analyze.
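
If your recordings and outcomes live in separate exports, a small script can join them into a single analyzable corpus. Here's a minimal Python sketch, assuming a folder of transcript text files named by call ID and a CRM export CSV with call_id, duration_sec, disposition, and meeting_booked columns (the file layout and column names are hypothetical; adapt them to your tools):

# build_corpus.py -- join raw transcripts with CRM outcomes
import csv
from pathlib import Path

def build_corpus(transcript_dir: str, crm_csv: str) -> list[dict]:
    """Merge transcript files and CRM outcome rows into one corpus."""
    # Load outcomes keyed by call ID (column names are assumptions)
    outcomes = {}
    with open(crm_csv, newline="") as f:
        for row in csv.DictReader(f):
            outcomes[row["call_id"]] = {
                "disposition": row["disposition"],
                "duration_sec": int(row["duration_sec"]),
                "meeting_booked": row["meeting_booked"] == "true",
            }

    corpus = []
    for path in Path(transcript_dir).glob("*.txt"):
        call_id = path.stem
        if call_id not in outcomes:
            continue  # skip calls with no logged outcome
        corpus.append({
            "call_id": call_id,
            "transcript": path.read_text(),
            **outcomes[call_id],
        })
    return corpus

calls = build_corpus("exports/transcripts", "exports/crm_calls.csv")
print(f"{len(calls)} calls ready for analysis")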

Step 2: Transcript Processing

Raw transcripts are messy. Clean them before analysis:

TASK: Process call transcript for analysis

RAW TRANSCRIPT:
[Full conversation]

EXTRACT:
1. Speaker identification (rep vs prospect)
2. Opener (first rep statement)
3. Discovery questions asked
4. Objections raised
5. Objection responses given
6. Close attempt(s)
7. Outcome (meeting, callback, rejection)
8. Talk time ratio

OUTPUT: Structured JSON with labeled segments
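
In practice you'd run this prompt over every call in the corpus. Here's a minimal sketch with the OpenAI Python SDK; the model name matches the Codex examples later in this series, so treat it as a placeholder for whatever Codex-class model you have access to:

# structure_calls.py -- label one transcript as structured JSON
import json
from openai import OpenAI

client = OpenAI()

# A condensed version of the Step 2 prompt above
EXTRACTION_PROMPT = """Process this call transcript for analysis.
Extract: speaker identification, opener, discovery questions,
objections and responses, close attempts, outcome, talk time ratio.
Return structured JSON with labeled segments."""

def structure_call(transcript: str) -> dict:
    """Ask the model to segment and label one raw transcript."""
    response = client.chat.completions.create(
        model="gpt-5.3-codex",  # placeholder model name
        response_format={"type": "json_object"},  # force valid JSON back
        messages=[
            {"role": "system", "content": EXTRACTION_PROMPT},
            {"role": "user", "content": transcript},
        ],
    )
    return json.loads(response.choices[0].message.content)

Run it across the corpus and store the results; the structured JSON, not the raw text, is what the pattern analysis in Step 3 consumes.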

Step 3: Pattern Analysis with Codex

This is where Codex shines. Feed it hundreds of structured calls:

TASK: Identify patterns in successful vs unsuccessful calls

SUCCESSFUL CALLS (meetings booked):
[100 structured transcripts]

UNSUCCESSFUL CALLS (no meeting):
[100 structured transcripts]

ANALYZE:
1. Opener phrases that correlate with continued conversations
2. Questions that lead to engagement vs disengagement
3. Objection responses that save calls vs kill them
4. Closing techniques that convert
5. Pacing and timing patterns
6. Industry-specific differences
7. Time-of-day patterns

OUTPUT:
- Statistical patterns with confidence levels
- Specific phrases that outperform
- Recommended script changes with expected impact
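
Here's a sketch of how you might assemble the two cohorts from the structured calls, sampling each so the prompt stays within the context window (cohort size and field names are illustrative):

# compare_cohorts.py -- build the successful-vs-unsuccessful prompt
import json
import random

def build_comparison_prompt(structured_calls: list[dict], n: int = 100) -> str:
    """Split calls by outcome, sample n per cohort, and build the prompt."""
    booked = [c for c in structured_calls if c["meeting_booked"]]
    missed = [c for c in structured_calls if not c["meeting_booked"]]

    return (
        "TASK: Identify patterns in successful vs unsuccessful calls\n\n"
        "SUCCESSFUL CALLS (meetings booked):\n"
        f"{json.dumps(random.sample(booked, min(n, len(booked))))}\n\n"
        "UNSUCCESSFUL CALLS (no meeting):\n"
        f"{json.dumps(random.sample(missed, min(n, len(missed))))}\n\n"
        "ANALYZE openers, questions, objection responses, closes, pacing, "
        "industry differences, and time-of-day patterns. "
        "OUTPUT statistical patterns, outperforming phrases, and "
        "recommended script changes with expected impact."
    )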

Step 4: Generate Optimized Scripts

Based on the analysis, Codex generates new script variants:

TASK: Generate optimized cold call script

ANALYSIS FINDINGS:
[Pattern analysis results]

CURRENT SCRIPT:
[Existing script]

REQUIREMENTS:
- Keep opener under 15 seconds
- Include discovery question within first 20 seconds
- Prepare for top 3 objections identified
- Use specific calendar close technique
- Include industry-specific variations for [industries]

OUTPUT: New script with:
- Main flow
- Objection handling branches
- Industry variants
- A/B test versions for uncertain elements

Cold call performance metrics

Real Analysis: What We Found

After analyzing 12,000 cold calls for a SaaS client, here's what Codex discovered:

Opener Insights

Worst performing opener (2.1% meeting rate):

"Hi [Name], this is [Rep] from [Company]. How are you doing today?"

Best performing opener (8.7% meeting rate):

"Hi [Name], [Rep] with [Company]. I know I'm catching you cold—mind if I take 30 seconds to tell you why I called, then you can decide if it's worth talking further?"

The permission-based pattern consistently outperformed. It acknowledges the interruption and gives the prospect control.

Question Patterns

Questions that killed calls:

  • "Who handles [function] at [Company]?" (sounds like fishing)
  • "Are you familiar with [Our Company]?" (sets up rejection)
  • "Do you have a few minutes?" (easy no)

Questions that extended calls:

  • "[Specific industry problem]—is that something you're dealing with?"
  • "Most [persona] I talk to mention [pain]. Where does that fall on your priority list?"
  • "What's driving your focus on [topic] right now?"

The best questions assume relevance and invite conversation rather than asking for permission.

Objection Handling

Most common objection: "We're not interested / We're all set"

Low-performing response (14% save rate):

"I understand, but if I could just show you how we help companies like [similar company]..."

High-performing response (41% save rate):

"Totally fair—most people say that before they understand what we do differently. Can I ask what you're currently using for [function]?"

The high performer acknowledges the objection, reframes curiosity, and asks a question to re-engage.

Closing Patterns

Low-performing close:

"Would you be open to a call sometime next week to discuss further?"

High-performing close:

"I have 15 minutes Thursday at 2pm or Friday at 10am—which works better?"

Specific times convert 3x better than open-ended requests.

Building the Feedback Loop

Script optimization isn't a one-time project. Build continuous improvement:

Weekly Analysis

Every week, Codex analyzes the latest calls:

  • Which script variants performed best?
  • Any new objections emerging?
  • Seasonal or market changes affecting patterns?
  • Individual rep deviations that work?

A/B Testing Framework

Always test new scripts against the current version:

Test Structure:
- Control: Current best-performing script
- Variant A: New opener based on latest analysis
- Variant B: New objection handling based on latest analysis

Sample Size: 200 calls per variant minimum
Success Metric: Meeting booking rate
Secondary Metrics: Talk time, callback rate, conversation length
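
Before declaring a winner, check that the difference in booking rates is bigger than noise. Here's a minimal two-proportion z-test using only the standard library (a sketch; at 200 calls per variant, expect only large differences to reach significance):

# ab_significance.py -- is the variant really better?
from math import erf, sqrt

def booking_rate_p_value(conv_a: int, n_a: int, conv_b: int, n_b: int) -> float:
    """Two-sided p-value for the difference between two booking rates."""
    p_a, p_b = conv_a / n_a, conv_b / n_b
    pooled = (conv_a + conv_b) / (n_a + n_b)
    se = sqrt(pooled * (1 - pooled) * (1 / n_a + 1 / n_b))
    z = (p_a - p_b) / se
    # Convert |z| to a two-sided p-value via the normal CDF
    return 2 * (1 - 0.5 * (1 + erf(abs(z) / sqrt(2))))

# Control booked 12/200 (6%); variant booked 22/200 (11%)
print(booking_rate_p_value(12, 200, 22, 200))  # ~0.07: promising, keep testing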

Rep-Specific Coaching

Codex can compare individual rep calls to the ideal script:

TASK: Analyze rep performance vs optimal script

OPTIMAL SCRIPT:
[Best-performing script]

REP CALLS (last 50):
[Transcripts]

IDENTIFY:
1. Deviations from optimal opener
2. Missed discovery questions
3. Objection handling gaps
4. Closing technique differences
5. Successful deviations worth learning from

OUTPUT: Coaching recommendations with specific examples

This creates personalized coaching based on actual call data, not manager opinions.
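
To run this at team scale, loop the coaching prompt over each rep's recent structured calls and collect the reports (a sketch; the prompt is condensed from the block above, and the model name is a placeholder):

# coach_reps.py -- per-rep coaching reports from recent calls
import json
from openai import OpenAI

client = OpenAI()

def coaching_report(optimal_script: str, rep_calls: list[dict]) -> str:
    """Compare one rep's recent calls to the best-performing script."""
    prompt = (
        "TASK: Analyze rep performance vs optimal script\n\n"
        f"OPTIMAL SCRIPT:\n{optimal_script}\n\n"
        f"REP CALLS (last {len(rep_calls)}):\n{json.dumps(rep_calls)}\n\n"
        "IDENTIFY deviations from the opener, missed discovery questions, "
        "objection handling gaps, closing differences, and successful "
        "deviations worth learning from. OUTPUT coaching recommendations "
        "with specific examples."
    )
    response = client.chat.completions.create(
        model="gpt-5.3-codex",  # placeholder model name
        messages=[{"role": "user", "content": prompt}],
    )
    return response.choices[0].message.content

# Assumes calls_by_rep was built from the Step 1 corpus (illustrative)
reports = {rep: coaching_report(script, calls) for rep, calls in calls_by_rep.items()}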

Implementation Approach

Phase 1: Data Collection (Week 1-2)

  • Set up call recording and transcription
  • Export historical calls if available
  • Clean and structure transcripts

Phase 2: Initial Analysis (Week 3-4)

  • Run pattern analysis on historical data
  • Identify top-performing patterns
  • Generate first optimized script

Phase 3: Testing (Week 5-8)

  • A/B test new script against current
  • Track metrics rigorously
  • Iterate based on results

Phase 4: Continuous Optimization (Ongoing)

  • Weekly analysis runs
  • Monthly script updates
  • Quarterly full reviews

Results You Can Expect

Based on implementations we've seen:

| Metric | Baseline | After Optimization |
|---|---|---|
| Meeting booking rate | 4-6% | 9-14% |
| Average call duration | 45 sec | 90 sec |
| Objection overcome rate | 15% | 35% |
| Rep confidence (self-reported) | 6/10 | 8/10 |

The meeting rate improvement alone typically justifies the effort within the first month.

Tools You'll Need

Call Recording/Transcription:

  • Gong, Chorus, or similar conversation intelligence
  • Or: Dialers with recording + Whisper API for transcription

Analysis:

  • OpenAI API (Codex/GPT-5.3)
  • Or: Claude for longer context analysis

Tracking:

  • CRM with call outcome logging
  • A/B test tracking spreadsheet or tool

Free Tool

Try our AI Lead Generator — find verified LinkedIn leads for any company instantly. No signup required.

The MarketBetter Integration

Our AI SDR platform includes call coaching built in. When reps use our smart dialer:

  1. Calls are automatically transcribed
  2. AI analyzes against optimal patterns
  3. Reps get real-time suggestions during calls
  4. Scripts update automatically based on what's working

No separate analysis pipeline. No manual script updates. The system learns from every call.

Ready to turn your call data into better scripts? Book a demo and we'll show you how data-driven calling actually works.


Related reading:

AI Contract Review for Sales Teams: How Claude Code Eliminates Legal Bottlenecks [2026]

· 7 min read
MarketBetter Team
Content Team, marketbetter.ai

The average B2B deal loses 3-5 days waiting for legal review.

For high-velocity sales teams, that's not just an inconvenience—it's a competitive disadvantage. While your deal sits in legal's queue, your prospect is talking to competitors who can move faster.

But here's what most sales leaders don't realize: 80% of contract reviews are routine. They're standard terms, boilerplate clauses, and minor customizations that don't actually need a lawyer's attention.

Claude Code changes this equation entirely.

AI contract review workflow showing document intake, clause extraction, risk flagging, and approval routing

The Hidden Cost of Contract Bottlenecks

Before we dive into the solution, let's quantify the problem:

Time Cost:

  • Average legal review time: 3-5 business days
  • Rush review requests: 48 hours minimum
  • Complex deals: 2-3 weeks with revisions

Revenue Impact:

  • 23% of deals stall during contract review (Gartner)
  • 15% of prospects go dark while waiting
  • Average deal delay costs $1,200-$5,000 in opportunity cost

Team Friction:

  • Sales blames legal for slow deals
  • Legal is overwhelmed with routine requests
  • Everyone loses visibility into where things stand

The solution isn't hiring more lawyers. It's automating the 80% that doesn't need human judgment.

How Claude Code Transforms Contract Review

Claude Code's 200K context window means it can analyze an entire contract—including all exhibits, schedules, and amendments—in a single pass. No chunking, no lost context, no missed cross-references.

Here's what that enables:

1. Instant Risk Flagging

Claude Code can scan any contract and flag clauses that deviate from your standard terms:

Analyze this MSA against our standard terms. Flag any clauses that:
1. Impose unlimited liability
2. Include auto-renewal provisions
3. Contain non-standard indemnification language
4. Restrict our ability to use customer logos/case studies
5. Include unusual payment terms (>Net 30)

For each flag, rate severity (Low/Medium/High/Critical) and
suggest standard language that could replace it.

Within seconds, you get a comprehensive risk assessment that would take a paralegal hours.

2. Redline Generation

Instead of waiting for legal to mark up a contract, Claude Code can generate a redlined version with your preferred terms:

The customer sent a contract using their paper. Generate a 
redlined version that:
1. Replaces their liability cap with our standard ($1M or 12 months of fees)
2. Changes indemnification to mutual
3. Removes the audit clause or limits to once per year with 30 days notice
4. Adjusts termination for convenience to 30 days written notice
5. Adds our standard data security addendum language

Output as a tracked-changes document with comments explaining each change.

3. Plain English Summaries

Help your sales team understand what they're sending for signature:

Summarize this contract in plain English for a non-legal audience:
1. What we're agreeing to provide
2. What the customer is agreeing to pay
3. Key obligations on both sides
4. Main risks to be aware of
5. Important dates and deadlines

Keep it to one page maximum.

Contract risk assessment showing low, medium, high, and critical risk levels with corresponding actions

Building Your AI Contract Review Workflow

Here's a practical implementation that any sales ops team can deploy:

Step 1: Create Your Clause Library

Before Claude Code can flag deviations, it needs to know your standards. Build a reference document:

## Standard Contract Terms Reference

### Liability Cap
ACCEPTABLE: Liability limited to 12 months of fees paid
ACCEPTABLE: Liability limited to $1,000,000
REQUIRES REVIEW: Any unlimited liability language
REQUIRES REVIEW: Liability caps below $500,000

### Payment Terms
ACCEPTABLE: Net 30
ACCEPTABLE: Net 45 with approval
REQUIRES REVIEW: Net 60+
REQUIRES REVIEW: Payment upon completion only

### Termination
ACCEPTABLE: 30 days written notice
ACCEPTABLE: Termination for cause with 30-day cure period
REQUIRES REVIEW: No termination for convenience
REQUIRES REVIEW: Penalties for early termination

[Continue for all key clauses...]

Step 2: Build the Review Prompt

You are a contract analyst assistant. Your job is to review 
contracts against our standard terms and flag anything that
requires human legal review.

REFERENCE TERMS:
[Paste your clause library here]

CONTRACT TO REVIEW:
[Paste customer contract]

OUTPUT FORMAT:
1. EXECUTIVE SUMMARY (2-3 sentences)
2. RISK SCORE (Green/Yellow/Red)
3. FLAGGED CLAUSES (with page/section reference)
4. RECOMMENDED CHANGES
5. QUESTIONS FOR LEGAL (if any Red flags)

Step 3: Integrate Into Your Workflow

Option A: Manual Review

  • Rep uploads contract to Claude Code
  • Gets instant analysis
  • Decides whether to escalate to legal

Option B: Automated Triage (sketched below)

  • Contracts flow through a central inbox
  • Claude Code auto-analyzes each one
  • Green = auto-approve, Yellow = sales review, Red = legal review

Option C: Full Integration

  • Connect to your CLM (Ironclad, DocuSign, PandaDoc)
  • Trigger Claude Code analysis on document upload
  • Route based on risk score automatically
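
Here's a minimal sketch of Option B's triage step in Python with the Anthropic SDK. The risk-score parsing and routing labels are assumptions; wire the return value to your actual inbox, CRM, or CLM. The model string matches the deal desk examples elsewhere on this blog:

# contract_triage.py -- route contracts on the Green/Yellow/Red score
import anthropic

client = anthropic.Anthropic()

# The Step 2 review prompt, with your clause library pasted in.
# It must instruct the model to emit a line like "RISK SCORE: Yellow".
REVIEW_PROMPT = """You are a contract analyst assistant. [...]"""

def triage_contract(contract_text: str) -> str:
    """Analyze one contract and return a routing decision."""
    message = client.messages.create(
        model="claude-sonnet-4-20250514",
        max_tokens=2048,
        system=REVIEW_PROMPT,
        messages=[{"role": "user", "content": contract_text}],
    )
    analysis = message.content[0].text

    # Naive score extraction; production code should request structured output
    if "RISK SCORE: Green" in analysis:
        return "auto-approve"
    if "RISK SCORE: Yellow" in analysis:
        return "sales-review"
    return "legal-review"  # Red or unparseable: fail safe to legal

Failing closed matters here: anything the parser can't classify goes to legal, never to auto-approve.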

Real Prompts That Work

Quick Risk Assessment

Review this contract for deal-breaking clauses. 
I need to know in 60 seconds if this is signable
as-is or needs changes. Focus on: liability,
indemnification, auto-renewal, and payment terms.

Competitive Analysis

Compare this customer's proposed terms to industry 
standard SaaS agreements. Are they asking for
anything unusual? What leverage do we have to
push back?

Negotiation Prep

The customer rejected our standard liability cap 
and wants unlimited liability. Generate 3
alternative positions we could offer, ranked
from most to least favorable to us, with talking
points for each.

Post-Signature Obligation Tracking

Extract all obligations, deadlines, and milestones 
from this signed contract. Output as a checklist
with responsible party and due date for each item.

The Results You Can Expect

Teams implementing AI-assisted contract review typically see:

| Metric | Before | After | Improvement |
|---|---|---|---|
| Average review time | 3-5 days | 4-8 hours | 80% faster |
| Legal escalation rate | 100% | 20-30% | 70% reduction |
| Deals stalled in legal | 23% | 8% | 65% improvement |
| Contract errors caught | 60% | 95% | 35% more |

The key insight: you're not replacing legal. You're letting them focus on the 20% of contracts that actually need their expertise.

Common Objections (And How to Handle Them)

"Legal will never approve this." Start with low-risk contracts (renewals, standard deals). Prove the accuracy before expanding scope. Position it as "triage," not "replacement."

"What about confidentiality?" Claude Code processes data in-session without training on your inputs. Use enterprise agreements with appropriate data handling terms.

"Our contracts are too complex." The 200K context window handles even the most complex agreements. Start with the standard sections and expand.

"What if it misses something?" Build a human review step for flagged items. The AI catches the obvious issues; humans verify the edge cases.

Getting Started Today

  1. Audit your current process - How long do contracts actually take? Where are the bottlenecks?

  2. Build your clause library - Document your standard terms and acceptable variations

  3. Test on historical deals - Run Claude Code on 10 signed contracts and compare to what legal actually flagged

  4. Start with renewals - Low-risk, high-volume, perfect for automation

  5. Measure and expand - Track time savings, error rates, and legal escalations

Free Tool

Try our AI Lead Generator — find verified LinkedIn leads for any company instantly. No signup required.

The Competitive Advantage

While your competitors are waiting for legal to review their fifteenth standard MSA of the week, you're sending signed contracts back the same day.

That's not just efficiency—it's a competitive moat.

The deals you close faster are deals your competitors never get a chance to compete for.


Ready to eliminate your contract bottleneck? Book a demo to see how MarketBetter helps sales teams accelerate every stage of the deal cycle.

Related reading:

AI Deal Desk Automation with Claude Code [2026]

· 7 min read
sunder
Founder, marketbetter.ai

Your deal desk is a bottleneck. Reps wait 24-48 hours for pricing approvals while prospects shop competitors. Finance demands margin analysis for every discount. And nobody can find last quarter's precedent pricing.

Sound familiar?

Deal desks handle some of the highest-stakes decisions in sales—pricing, discounts, contract terms. Yet most teams still run them on spreadsheets and email chains. That's insane.

Let me show you how to build an AI-powered deal desk with Claude Code that approves routine requests instantly, flags edge cases for review, and gives you margin-protecting recommendations in seconds.

AI Deal Desk Automation Workflow

The Deal Desk Problem

Here's why your deal desk is killing deals:

  • Approval latency: 24-48 hours average (prospects lose momentum)
  • Inconsistency: Different reps get different discounts for similar deals
  • No precedent search: "What did we charge Acme Corp last year?"
  • Margin erosion: No systematic analysis of discount impact
  • Bottleneck at scale: Deal desk head = single point of failure

Every day of delay in deal approval correlates with a 3-5% drop in close rate. Your "process" is literally costing you deals.

Why Claude Code for Deal Desk

Claude's 200K context window is the key differentiator here. Deal desk decisions require:

  • Full contract history
  • Customer relationship context
  • Competitive positioning
  • Margin calculations
  • Precedent analysis

Other LLMs choke on this much context. Claude handles it natively.

Plus, Claude follows complex instructions precisely—critical when you're encoding your pricing rules, approval matrices, and exception policies.

Building Your AI Deal Desk

Step 1: Define Your Pricing Rules

First, codify what's currently in your deal desk manager's head:

// pricing-rules.js
const pricingRules = {
  standardDiscounts: {
    '1-year': { max: 10, autoApprove: 5 },
    '2-year': { max: 20, autoApprove: 10 },
    '3-year': { max: 30, autoApprove: 15 },
  },

  volumeTiers: {
    'starter': { seats: [1, 10], basePrice: 99 },
    'growth': { seats: [11, 50], basePrice: 79 },
    'enterprise': { seats: [51, 500], basePrice: 59 },
    'strategic': { seats: [501, Infinity], basePrice: 'custom' }
  },

  approvalMatrix: {
    '<10%': 'auto-approve',
    '10-20%': 'sales-manager',
    '20-30%': 'vp-sales',
    '>30%': 'cro-review',
    'payment-terms': 'finance',
    'custom-legal': 'legal'
  },

  autoRejectTriggers: [
    'competitor-matching below cost',
    'perpetual license requests',
    'unlimited usage without cap'
  ]
};

Step 2: Create the Claude Deal Analyst

// deal-analyst.js
const Anthropic = require('@anthropic-ai/sdk');

const claude = new Anthropic();

async function analyzeDeal(dealRequest) {
  const context = await gatherDealContext(dealRequest);

  const response = await claude.messages.create({
    model: 'claude-sonnet-4-20250514',
    max_tokens: 4096,
    system: `You are a Deal Desk Analyst for a B2B SaaS company.

Your job is to:
1. Analyze deal requests against pricing rules
2. Calculate margin impact
3. Find relevant precedents
4. Make approval recommendations

Always show your math. Be specific about risks.
Recommend approval ONLY if it meets policy.
Flag anything unusual for human review.

PRICING RULES:
${JSON.stringify(pricingRules, null, 2)}

CUSTOMER HISTORY:
${context.customerHistory}

PRECEDENT DEALS:
${context.precedents}`,

    messages: [{
      role: 'user',
      content: `Analyze this deal request:

**Customer:** ${dealRequest.customer}
**Deal Size:** ${dealRequest.arr} ARR
**Seats:** ${dealRequest.seats}
**Term:** ${dealRequest.term}
**Requested Discount:** ${dealRequest.discount}%
**Rep Justification:** ${dealRequest.justification}
**Competitor Mentioned:** ${dealRequest.competitor || 'None'}

Provide:
1. Margin analysis
2. Precedent comparison
3. Risk assessment
4. Approval recommendation with reasoning`
    }]
  });

  return parseAnalysis(response.content[0].text);
}

Step 3: Build the Approval Workflow

Connect Claude's analysis to your actual approval flow:

async function processDealApproval(dealRequest) {
  // Step 1: Claude analyzes the deal
  const analysis = await analyzeDeal(dealRequest);

  // Step 2: Auto-approve if within policy
  if (analysis.recommendation === 'auto-approve') {
    await updateCRM(dealRequest.dealId, {
      status: 'approved',
      approvedDiscount: dealRequest.discount,
      approvalNote: analysis.reasoning,
      approvedBy: 'AI Deal Desk'
    });

    await notifyRep(dealRequest.repId, {
      type: 'approved',
      message: `${dealRequest.customer} deal approved. ${analysis.summary}`
    });

    return { status: 'approved', analysis };
  }

  // Step 3: Route to appropriate approver
  const approver = determineApprover(analysis.approvalLevel);

  await createApprovalTask({
    assignee: approver,
    deal: dealRequest,
    analysis: analysis,
    deadline: calculateSLA(analysis.priority)
  });

  await notifyRep(dealRequest.repId, {
    type: 'pending',
    message: `${dealRequest.customer} deal sent to ${approver} for review. Expected SLA: ${analysis.sla}`
  });

  return { status: 'pending-approval', analysis };
}

Deal Desk Pricing Decision Matrix

Precedent Search: The Secret Weapon

The most valuable deal desk feature isn't auto-approval—it's precedent search. When a rep asks "what did we charge similar customers?", Claude can search your entire deal history.

async function findPrecedents(dealRequest) {
  const searchCriteria = {
    industry: dealRequest.customer.industry,
    employeeRange: calculateRange(dealRequest.customer.employees),
    dealSize: { min: dealRequest.arr * 0.7, max: dealRequest.arr * 1.3 },
    term: dealRequest.term
  };

  const historicalDeals = await searchDeals(searchCriteria);

  const analysis = await claude.messages.create({
    model: 'claude-sonnet-4-20250514',
    max_tokens: 2048,
    messages: [{
      role: 'user',
      content: `Analyze these precedent deals for comparison:

CURRENT REQUEST:
${JSON.stringify(dealRequest, null, 2)}

HISTORICAL DEALS:
${JSON.stringify(historicalDeals, null, 2)}

Identify:
1. Most similar deal and why
2. Average discount given to similar customers
3. Any outliers and their justification
4. Recommended benchmark for this deal`
    }]
  });

  return analysis;
}

Margin Protection Analysis

Claude can calculate the real impact of discounts—not just the percentage, but the actual dollars at risk:

async function calculateMarginImpact(dealRequest) {
  const metrics = {
    listPrice: calculateListPrice(dealRequest),
    requestedPrice: dealRequest.arr,
    discountPercent: dealRequest.discount,
    discountDollars: calculateDiscountDollars(dealRequest),
    marginAtList: calculateMargin(dealRequest, 'list'),
    marginAtRequested: calculateMargin(dealRequest, 'requested'),
    marginDelta: calculateMarginDelta(dealRequest),
    ltv: estimateLTV(dealRequest),
    cacPayback: calculateCACPayback(dealRequest)
  };

  const analysis = await claude.messages.create({
    model: 'claude-sonnet-4-20250514',
    max_tokens: 1024,
    messages: [{
      role: 'user',
      content: `Provide margin analysis for this deal:

${JSON.stringify(metrics, null, 2)}

Context: Our target gross margin is 75%.
CAC payback target is 12 months.

Assessment needed:
1. Is this deal margin-positive?
2. What's the break-even discount level?
3. Any red flags on unit economics?
4. Would you approve from a finance perspective?`
    }]
  });

  return { metrics, analysis: analysis.content[0].text };
}

Real-World Implementation

Here's how a mid-market SaaS company implemented an AI deal desk:

Before AI Deal Desk:

  • 3-day average approval time
  • 42% of deals required escalation
  • Inconsistent discount rates (15-40% variance)
  • Deal desk manager = bottleneck

After AI Deal Desk:

  • 2-hour average approval time (15% approved instantly)
  • 18% escalation rate
  • Consistent discounts (5% variance)
  • Deal desk manager focuses on strategic deals only

Results:

  • $2.3M additional revenue from faster closes
  • 3% margin improvement from consistent pricing
  • VP Sales saves 10 hours/week on routine approvals

Integration Points

Slack Bot for Instant Requests

// Slack command: /deal-check
app.command('/deal-check', async ({ command, ack, respond }) => {
  await ack();

  const dealParams = parseCommand(command.text);
  const analysis = await analyzeDeal(dealParams);

  await respond({
    blocks: [
      {
        type: 'section',
        text: {
          type: 'mrkdwn',
          text: `*Deal Analysis: ${dealParams.customer}*`
        }
      },
      {
        type: 'section',
        fields: [
          { type: 'mrkdwn', text: `*Discount:* ${dealParams.discount}%` },
          { type: 'mrkdwn', text: `*Recommendation:* ${analysis.recommendation}` },
          { type: 'mrkdwn', text: `*Margin Impact:* ${analysis.marginImpact}` },
          { type: 'mrkdwn', text: `*Similar Deals:* ${analysis.precedentCount}` }
        ]
      },
      {
        type: 'actions',
        elements: [
          { type: 'button', text: { type: 'plain_text', text: 'Submit for Approval' }, action_id: 'submit_deal' },
          { type: 'button', text: { type: 'plain_text', text: 'View Precedents' }, action_id: 'view_precedents' }
        ]
      }
    ]
  });
});

CRM Integration

Connect directly to HubSpot or Salesforce:

// When deal moves to "Negotiation" stage
hubspot.deals.onStageChange('negotiation', async (deal) => {
  // Pre-analyze before rep even asks
  const analysis = await analyzeDeal({
    customer: deal.company.name,
    arr: deal.amount,
    seats: deal.properties.seats,
    term: deal.properties.contract_term,
    discount: 0, // Baseline analysis
    industry: deal.company.industry
  });

  // Attach analysis to deal
  await hubspot.deals.update(deal.id, {
    properties: {
      deal_desk_analysis: JSON.stringify(analysis),
      max_approved_discount: analysis.maxAutoApprove,
      pricing_guidance: analysis.recommendedPrice
    }
  });

  // Notify rep
  await slack.dm(deal.owner, {
    text: `📊 Pricing guidance ready for ${deal.company.name}. Max auto-approve: ${analysis.maxAutoApprove}%. View in HubSpot.`
  });
});

Best Practices

1. Start with Auto-Approve Rules

Don't try to AI-ify everything. Start with clear auto-approve criteria:

  • Standard discount tiers
  • Straightforward renewals
  • Volume commitments

2. Build Confidence Gradually

Track accuracy. Start with AI recommendations, then move to auto-approval as you verify it's getting it right.

3. Always Show Reasoning

Reps need to understand WHY a deal was approved/rejected. Claude's explanations build trust.

4. Keep Humans in the Loop

AI handles 70% of routine work. Humans handle strategic decisions, relationship nuances, and exceptions.

Getting Started

  1. Document your current rules — What's in your deal desk manager's head?
  2. Export historical deals — You need precedent data
  3. Set up Claude — Start with analysis-only (no auto-approve)
  4. Run parallel — Compare AI recommendations to human decisions
  5. Calibrate and deploy — Adjust rules, then enable auto-approve

Free Tool

Try our AI Lead Generator — find verified LinkedIn leads for any company instantly. No signup required.

Transform Your Deal Desk

MarketBetter helps GTM teams work smarter with AI-powered automation. From lead qualification to deal desk operations, we help you close faster without sacrificing margin.

Book a Demo →

See how AI can transform your sales operations.

Building an AI-Powered Lead Routing System with Codex [2026]

· 9 min read
sunder
Founder, marketbetter.ai

Your lead routing is broken.

A prospect fills out a demo form at 2 PM. They get assigned to a rep at 4 PM. The rep emails them the next morning. By then, they've already booked calls with two competitors.

The data is brutal:

  • Leads contacted within 5 minutes are 21x more likely to qualify
  • Average B2B response time: 42 hours
  • 78% of buyers choose the vendor who responds first

In 2026, lead routing shouldn't take hours. It shouldn't even take minutes. With OpenAI's GPT-5.3 Codex, you can build intelligent routing that matches leads to reps in seconds—based on fit, capacity, expertise, and likelihood to close.

This guide walks you through building that system.

AI-powered lead routing decision tree

Why Traditional Lead Routing Fails

Most companies use one of these routing methods:

Round Robin

  • How it works: Leads distributed equally to all reps
  • The problem: Your best rep gets the same load as your newest hire. High-value leads go to reps without relevant experience.

Geographic/Territory

  • How it works: Leads assigned by region or named accounts
  • The problem: Territories become outdated. Hot leads sit in cold territories. Territory conflicts create friction.

First Available

  • How it works: Whoever claims it first gets it
  • The problem: Creates a feeding frenzy. Aggressive reps hoard leads. Less assertive reps (who might be better fits) never get chances.

Manual Assignment

  • How it works: Manager reviews and assigns each lead
  • The problem: Creates bottleneck. Manager goes to lunch, leads wait. Scale breaks the model entirely.

What all these miss: Context. They don't understand the lead OR the rep. They're just moving names between buckets.

What AI-Powered Routing Looks Like

Intelligent routing considers:

About the Lead:

  • Company size, industry, and technographics
  • Stated pain points and urgency signals
  • Previous interactions with your brand
  • Predicted deal size and likelihood to close

About Available Reps:

  • Current capacity and workload
  • Historical win rates for similar leads
  • Industry/vertical expertise
  • Time zone and availability
  • Relationship to the account (existing contacts)

The Match:

  • Which rep has the highest probability of closing THIS lead?
  • Who can respond fastest right now?
  • Who has relevant case studies and references?

This is exactly what GPT-5.3 Codex can evaluate—instantly.

Lead scoring and intelligent routing

Architecture: The AI Routing Engine

Here's how to structure your Codex-powered routing system:

┌─────────────────────────────────────────────────────────────┐
│                        Lead Sources                         │
│      (Forms, Chat, Events, Intent Data, Inbound Calls)      │
└─────────────────┬───────────────────────────────────────────┘
                  │
                  ▼
┌─────────────────────────────────────────────────────────────┐
│                    Lead Enrichment Layer                    │
│        (Clearbit, Apollo, LinkedIn, Technographics)         │
└─────────────────┬───────────────────────────────────────────┘
                  │
                  ▼
┌─────────────────────────────────────────────────────────────┐
│                Codex Routing Decision Engine                │
│                                                             │
│   1. Score lead (fit + intent + urgency)                    │
│   2. Pull rep availability + capacity                       │
│   3. Match based on expertise + history                     │
│   4. Select optimal rep                                     │
│   5. Handle edge cases (overflow, escalation)               │
└─────────────────┬───────────────────────────────────────────┘
                  │
                  ▼
┌─────────────────────────────────────────────────────────────┐
│                       CRM Assignment                        │
│              (HubSpot, Salesforce, Pipedrive)               │
└─────────────────┬───────────────────────────────────────────┘
                  │
                  ▼
┌─────────────────────────────────────────────────────────────┐
│                 Notification & Action Layer                 │
│    (Slack alert, Email, Calendar invite, Sequence start)    │
└─────────────────────────────────────────────────────────────┘

Building with GPT-5.3 Codex

Codex's mid-turn steering makes it ideal for routing—you can adjust decisions in real-time as context changes.

Step 1: Set Up the Routing Agent

# lead_router.py
import openai
import json
from datetime import datetime
from typing import Dict, List, Optional

class CodexLeadRouter:
    def __init__(self):
        self.client = openai.OpenAI()
        self.routing_rules = self.load_routing_rules()

    def load_routing_rules(self) -> Dict:
        """Load your company's routing configuration."""
        return {
            "high_value_threshold": 50000,  # ARR threshold for enterprise routing
            "response_sla_minutes": 5,
            "max_leads_per_rep_per_day": 15,
            "expertise_tags": [
                "healthcare", "fintech", "saas", "manufacturing",
                "retail", "logistics", "cybersecurity"
            ],
            "escalation_triggers": [
                "competitor_mentioned",
                "budget_over_100k",
                "c_level_contact",
                "existing_customer"
            ]
        }

    def get_rep_roster(self) -> List[Dict]:
        """Pull current rep availability and stats from CRM."""
        # In production, this queries your CRM/database
        return [
            {
                "id": "rep_001",
                "name": "Sarah Chen",
                "status": "available",
                "current_leads_today": 8,
                "expertise": ["saas", "fintech"],
                "win_rate_ytd": 0.34,
                "avg_deal_size": 45000,
                "timezone": "America/New_York"
            },
            {
                "id": "rep_002",
                "name": "Marcus Johnson",
                "status": "available",
                "current_leads_today": 12,
                "expertise": ["healthcare", "manufacturing"],
                "win_rate_ytd": 0.28,
                "avg_deal_size": 72000,
                "timezone": "America/Chicago"
            },
            {
                "id": "rep_003",
                "name": "Emily Rodriguez",
                "status": "in_meeting",
                "available_in_minutes": 25,
                "current_leads_today": 6,
                "expertise": ["saas", "cybersecurity", "fintech"],
                "win_rate_ytd": 0.41,
                "avg_deal_size": 38000,
                "timezone": "America/Los_Angeles"
            }
        ]

    def route_lead(self, lead: Dict) -> Dict:
        """Use Codex to determine optimal routing."""

        reps = self.get_rep_roster()

        routing_prompt = f"""
You are an expert sales operations analyst. Route this lead to the optimal sales rep.

## Lead Information
{json.dumps(lead, indent=2)}

## Available Reps
{json.dumps(reps, indent=2)}

## Routing Rules
{json.dumps(self.routing_rules, indent=2)}

## Your Task
Analyze the lead and select the best rep based on:
1. Industry/vertical expertise match
2. Current capacity (leads today vs max)
3. Historical win rate for similar deals
4. Availability for fast response
5. Deal size alignment

If this is a high-value or escalation-trigger lead, note that in your reasoning.

Return JSON:
{{
  "selected_rep_id": "rep_xxx",
  "selected_rep_name": "Name",
  "confidence_score": 0.0-1.0,
  "reasoning": "Brief explanation",
  "is_escalation": boolean,
  "escalation_reason": "if applicable",
  "recommended_action": "immediate_call|email_sequence|schedule_call",
  "talking_points": ["point 1", "point 2", "point 3"]
}}
"""

        response = self.client.chat.completions.create(
            model="gpt-5.3-codex",  # New Feb 2026 model
            messages=[
                {"role": "system", "content": "You are a sales routing optimization engine."},
                {"role": "user", "content": routing_prompt}
            ],
            response_format={"type": "json_object"}
        )

        routing_decision = json.loads(response.choices[0].message.content)
        return routing_decision

    def apply_routing(self, lead: Dict, decision: Dict):
        """Execute the routing decision in your CRM."""

        # Update lead owner in CRM
        self.update_crm_owner(lead['id'], decision['selected_rep_id'])

        # Send notification to rep
        self.notify_rep(
            rep_id=decision['selected_rep_id'],
            lead=lead,
            talking_points=decision['talking_points'],
            action=decision['recommended_action']
        )

        # If escalation, also notify manager
        if decision['is_escalation']:
            self.notify_manager(lead, decision)

        # Start appropriate sequence
        if decision['recommended_action'] == 'email_sequence':
            self.enroll_in_sequence(lead['id'], 'inbound_nurture')

        return {"status": "routed", "decision": decision}

Step 2: Real-Time Webhook Handler

# webhook_handler.py
from typing import Dict

from flask import Flask, request, jsonify
from lead_router import CodexLeadRouter

app = Flask(__name__)
router = CodexLeadRouter()

@app.route('/webhook/new-lead', methods=['POST'])
def handle_new_lead():
    """Process incoming leads from any source."""

    lead_data = request.json

    # Enrich lead data first
    enriched_lead = enrich_lead(lead_data)

    # Get AI routing decision
    decision = router.route_lead(enriched_lead)

    # Apply the routing
    result = router.apply_routing(enriched_lead, decision)

    # Log for analytics
    log_routing_decision(enriched_lead, decision)

    return jsonify({
        "status": "success",
        "assigned_to": decision['selected_rep_name'],
        "response_time_ms": result.get('processing_time_ms')
    })

def enrich_lead(lead: Dict) -> Dict:
    """Add enrichment data from multiple sources."""

    enriched = lead.copy()

    # Add company data (Clearbit, Apollo, etc.)
    company_data = get_company_enrichment(lead.get('company_domain'))
    enriched['company_size'] = company_data.get('employees')
    enriched['industry'] = company_data.get('industry')
    enriched['technologies'] = company_data.get('tech_stack', [])
    enriched['funding'] = company_data.get('funding_total')

    # Add contact data
    contact_data = get_contact_enrichment(lead.get('email'))
    enriched['title'] = contact_data.get('title')
    enriched['seniority'] = contact_data.get('seniority')
    enriched['linkedin_url'] = contact_data.get('linkedin')

    # Add intent signals if available
    intent_data = get_intent_signals(lead.get('company_domain'))
    enriched['intent_score'] = intent_data.get('score', 0)
    enriched['intent_topics'] = intent_data.get('topics', [])

    return enriched

Step 3: Mid-Turn Steering for Edge Cases

One of Codex's killer features is mid-turn steering—adjusting the AI's approach while it's working. This is critical for routing edge cases:

def route_with_steering(lead: Dict) -> Dict:
    """Use Codex mid-turn steering for complex routing scenarios."""

    client = openai.OpenAI()

    # Start the routing conversation
    conversation = client.chat.completions.create(
        model="gpt-5.3-codex",
        messages=[
            {"role": "system", "content": "You are routing a new lead."},
            {"role": "user", "content": f"Route this lead: {json.dumps(lead)}"}
        ]
    )

    initial_decision = conversation.choices[0].message.content

    # Check if we need to steer
    if "existing_customer" in lead.get('tags', []):
        # Steer toward customer success team
        conversation = client.chat.completions.create(
            model="gpt-5.3-codex",
            messages=[
                {"role": "system", "content": "You are routing a new lead."},
                {"role": "user", "content": f"Route this lead: {json.dumps(lead)}"},
                {"role": "assistant", "content": initial_decision},
                {"role": "user", "content": "Wait—this is an existing customer. Route to their current CSM or account manager instead of new business reps."}
            ]
        )

    elif lead.get('estimated_value', 0) > 100000:
        # Steer toward enterprise team
        conversation = client.chat.completions.create(
            model="gpt-5.3-codex",
            messages=[
                {"role": "system", "content": "You are routing a new lead."},
                {"role": "user", "content": f"Route this lead: {json.dumps(lead)}"},
                {"role": "assistant", "content": initial_decision},
                {"role": "user", "content": "This deal is over $100K. Apply enterprise routing rules—senior AE only, immediate manager notification, white-glove treatment."}
            ]
        )

    return json.loads(conversation.choices[0].message.content)

Production Considerations

Speed Optimization

For sub-second routing:

  1. Pre-compute rep availability: Cache rep status, update every 60 seconds
  2. Async enrichment: Start enrichment calls in parallel
  3. Model selection: Use Codex for complex decisions, rules engine for simple ones
  4. Queue management: Handle spikes with a fast queue (Redis/SQS)

# Fast routing with pre-computed context
class FastRouter:
    def __init__(self):
        self.rep_cache = {}
        self.cache_ttl = 60  # seconds

    async def route_lead(self, lead: Dict) -> Dict:
        # Get cached rep data (< 1ms)
        reps = self.get_cached_reps()

        # Quick rules check first (< 5ms)
        quick_match = self.apply_quick_rules(lead, reps)
        if quick_match:
            return quick_match

        # Fall back to Codex for complex decisions (200-500ms)
        return await self.codex_route(lead, reps)

    def apply_quick_rules(self, lead: Dict, reps: List) -> Optional[Dict]:
        """Handle obvious cases without AI."""

        # Existing customer → their CSM
        if lead.get('is_customer'):
            csm = self.get_assigned_csm(lead['company_id'])
            if csm:
                return {"selected_rep_id": csm['id'], "reasoning": "existing_customer"}

        # Named account → account owner
        account_owner = self.get_account_owner(lead.get('company_domain'))
        if account_owner:
            return {"selected_rep_id": account_owner['id'], "reasoning": "named_account"}

        # Hot lead + one obvious best rep
        if lead.get('intent_score', 0) > 80:
            best_available = self.get_best_available_rep(reps, lead['industry'])
            if best_available and best_available['current_leads_today'] < 5:
                return {"selected_rep_id": best_available['id'], "reasoning": "hot_lead_best_fit"}

        return None  # Needs AI routing

Handling Failures

def route_with_fallback(lead: Dict) -> Dict:
    """Ensure every lead gets routed, even if AI fails."""

    try:
        # Try AI routing
        decision = router.route_lead(lead)
        return decision

    except openai.RateLimitError:
        # Fall back to round robin
        return fallback_round_robin(lead)

    except openai.APIError:
        # Fall back to rules-based
        return fallback_rules_engine(lead)

    except Exception as e:
        # Last resort: assign to manager for manual routing
        log_error(f"Routing failed for lead {lead['id']}: {e}")
        return {
            "selected_rep_id": "manager_001",
            "reasoning": "routing_failure_escalation",
            "is_escalation": True
        }

Measuring Routing Quality

Track these metrics to optimize your routing:

# routing_analytics.py
def calculate_routing_metrics(period_days: int = 30) -> Dict:
    """Measure routing effectiveness."""

    return {
        # Speed metrics
        "avg_time_to_route_ms": 340,
        "avg_time_to_first_contact_min": 4.2,
        "sla_compliance_rate": 0.94,  # % routed within 5 min

        # Quality metrics
        "routing_accuracy": 0.87,  # % of leads that stayed with initial assignment
        "rep_satisfaction_score": 4.2,  # Out of 5
        "lead_satisfaction_score": 4.5,  # From post-call surveys

        # Outcome metrics
        "conversion_rate_ai_routed": 0.24,
        "conversion_rate_manual_routed": 0.18,
        "avg_deal_size_ai_routed": 48000,
        "avg_cycle_time_ai_routed_days": 32,

        # Efficiency metrics
        "leads_per_rep_variance": 2.1,  # Lower = more balanced
        "expertise_match_rate": 0.78,  # % where rep had relevant expertise
    }

Free Tool

Try our AI Lead Generator — find verified LinkedIn leads for any company instantly. No signup required.

Integration with MarketBetter

If you're using MarketBetter, our Daily SDR Playbook already includes intelligent routing:

  • Visitor identification → automatic enrichment → AI routing → rep notification in under 60 seconds
  • Intent signals factor into routing priority
  • Smart Dialer pre-loads the highest-priority leads for each rep
  • No manual assignment needed—the playbook tells each rep exactly who to contact

This is the "WHO + WHAT TO DO" approach that turns signals into action.


Ready to route leads to revenue faster? See how MarketBetter automates the entire SDR workflow →

AI Pricing Intelligence: Track Competitor Pricing Changes Automatically [2026]

· 7 min read

Your competitor just dropped their prices by 20%. Your sales team finds out... when a prospect tells them on a demo call. By then, three deals have already been lost.

Pricing intelligence shouldn't be reactive. Here's how to build an AI system that monitors competitor pricing changes and alerts your team before you lose deals.

AI pricing intelligence dashboard

Why Pricing Intelligence Matters for Sales

Pricing is the most commonly used competitive weapon in B2B:

  • 62% of deals involve pricing objections
  • 38% of lost deals cite pricing as a factor
  • Average competitor changes pricing 2-4 times per year
  • Detection time with manual monitoring: 2-8 weeks

The gap between price change and detection is where deals die. Your reps are pitching against outdated competitive intel.

The Pricing Intelligence Architecture

Here's a system that monitors competitor pricing and alerts your team in real-time:

Component 1: Price Monitoring Agent

The foundation is automated price scraping with AI interpretation:

Data Sources:

  • Public pricing pages
  • G2, Capterra, TrustRadius pricing sections
  • Web archive history (Wayback Machine)
  • Sales intel platforms (ZoomInfo, Clearbit)
  • Job postings (sometimes reveal pricing in comp plans)
  • Competitor blog posts and press releases

Monitoring Frequency:

  • Pricing pages: Daily
  • Review sites: Weekly
  • Press/blog: Real-time via RSS
  • Job postings: Weekly
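
The simplest reliable detector is fetch-and-compare: snapshot each pricing page on schedule and flag when the content hash changes. Here's a minimal Python sketch (the URLs are placeholders, and JavaScript-rendered pages need a headless browser rather than plain requests):

# price_monitor.py -- detect pricing page changes by content hash
import hashlib
import json
from pathlib import Path

import requests

COMPETITOR_PAGES = {  # placeholder URLs
    "acme": "https://acme.example.com/pricing",
    "globex": "https://globex.example.com/pricing",
}
SNAPSHOT_FILE = Path("pricing_hashes.json")

def check_for_changes() -> list[str]:
    """Return competitors whose pricing pages changed since the last run."""
    old = json.loads(SNAPSHOT_FILE.read_text()) if SNAPSHOT_FILE.exists() else {}
    new, changed = {}, []

    for name, url in COMPETITOR_PAGES.items():
        html = requests.get(url, timeout=30).text
        digest = hashlib.sha256(html.encode()).hexdigest()
        new[name] = digest
        if name in old and old[name] != digest:
            changed.append(name)  # hand the new page to the AI diff step

    SNAPSHOT_FILE.write_text(json.dumps(new))
    return changed

Hashing raw HTML also fires on cosmetic changes, which is exactly why the AI interpretation step below exists: it decides whether a detected change actually matters.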

Component 2: Price Change Detection

Raw price data is messy. AI helps interpret it:

TASK: Analyze competitor pricing data

PREVIOUS DATA:
[Last known pricing structure]

CURRENT DATA:
[Today's scraped pricing]

DETECT:
1. Direct price changes (increases or decreases)
2. Tier restructuring (new tiers, removed tiers)
3. Feature repackaging (moved between tiers)
4. New add-ons or modules
5. Changed billing models (monthly vs annual)
6. New discount structures
7. Free tier changes

OUTPUT: Structured diff with significance rating (1-10)

Not every change matters equally. A 5% price increase is less urgent than a new free tier that undercuts your entry point.

Component 3: Alert System

Different changes need different responses:

Tier 1 (Immediate - Slack + Email):

  • Price decrease >10%
  • New free tier launched
  • Aggressive promotion announced
  • Major feature moved to lower tier

Tier 2 (Same-day - Email digest):

  • Price increase >10%
  • Tier restructuring
  • New enterprise tier
  • Billing model changes

Tier 3 (Weekly digest):

  • Minor price adjustments (<10%)
  • Add-on pricing changes
  • Regional pricing variations
  • Minor feature repackaging
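
Here's a sketch of the tier routing, assuming the detection step emits a change record with a type label and the 1-10 significance rating from the analysis prompt (the type names are illustrative):

# alert_router.py -- map detected changes to alert tiers
def alert_tier(change: dict) -> str:
    """Route a pricing change to the right notification channel."""
    immediate = {"price_decrease_gt_10", "new_free_tier",
                 "aggressive_promotion", "feature_moved_down_tier"}
    same_day = {"price_increase_gt_10", "tier_restructure",
                "new_enterprise_tier", "billing_model_change"}

    if change["type"] in immediate or change["significance"] >= 8:
        return "tier1-slack-and-email"
    if change["type"] in same_day or change["significance"] >= 5:
        return "tier2-same-day-digest"
    return "tier3-weekly-digest"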

Competitor pricing tracker dashboard

Component 4: Sales Enablement Response

Detection without action is useless. The system should automatically:

Update Battle Cards: When a competitor changes pricing, their battle card should update within 24 hours:

  • New pricing information
  • Suggested talk tracks for the change
  • Counter-positioning recommendations

Alert Active Deals: If a competitor in an active deal changes pricing:

  • Alert the deal owner immediately
  • Provide talking points for the next conversation
  • Suggest proactive outreach if deal is at risk

Adjust Discount Authority: If competitors drop prices significantly:

  • Temporarily expand rep discount authority
  • Pre-approve promotional offers
  • Create time-limited competitive response

Implementation: Three Approaches

Approach 1: Manual + AI Analysis (Quick Start)

If you're not ready for full automation:

  1. Set Google Alerts for competitor pricing news
  2. Assign someone to check pricing pages weekly
  3. Use Claude/Codex to analyze and structure findings
  4. Manually update battle cards and alert reps

Time investment: 2-4 hours/week
Coverage: Moderate

Approach 2: OpenClaw Automation (DIY)

Build a fully automated system with OpenClaw:

# pricing-intel-agent.yaml
agents:
  pricing-monitor:
    model: claude-sonnet-4-20250514
    schedule:
      - cron: "0 6 * * *"  # Daily at 6 AM

    tools:
      - web_fetch
      - browser  # For JavaScript-rendered pages
      - web_search

    context:
      - path: /context/competitors.md
      - path: /context/pricing-history.md
      - path: /context/alert-rules.md

    integrations:
      - slack:
          channels:
            - "#competitive-intel"
            - "#sales-alerts"

      - hubspot:
          update_companies: true
          update_deals: true

      - google_docs:
          battlecards_folder: "Competitive Intel"

    memory:
      - pricing-history/[competitor].md
      - change-log.md

The agent:

  1. Scrapes all competitor pricing pages daily
  2. Compares to historical data
  3. Detects and categorizes changes
  4. Sends appropriate alerts
  5. Updates battle cards in Google Docs
  6. Logs all changes for trend analysis

Time investment: 8-12 hours setup, 1-2 hours/week maintenance
Coverage: High

Approach 3: Dedicated Tool (Turnkey)

Several tools offer pricing intelligence as a service:

  • Klue (competitive intelligence platform)
  • Crayon (competitive tracking)
  • Kompyte (competitive analysis)

These cost $20K-50K/year but require minimal setup.

What to Track Beyond List Price

Pricing is more than the number on the page:

Contract Terms:

  • Minimum commitment length
  • Annual vs monthly pricing gap
  • Cancellation policies
  • Auto-renewal terms

Discounting Patterns:

  • End-of-quarter aggressiveness
  • Multi-year discount structures
  • Bundle discounts
  • Volume pricing breaks

Hidden Costs:

  • Implementation fees
  • Integration fees
  • Support tier pricing
  • Overage charges

Total Cost of Ownership:

  • Required add-ons for core functionality
  • Professional services requirements
  • Training costs

Your AI should track all of these, not just the headline price.

Turning Intelligence into Action

Data without action is just noise. Here's how to operationalize pricing intel:

For Sales Reps

In CRM: Each competitor record should show:

  • Current pricing (last updated date)
  • Recent changes (last 90 days)
  • Price positioning vs. us
  • Common objection + response

In Deal Context: When a competitor is tagged:

  • Automatic pricing comparison
  • Suggested discount authority
  • Win/loss history by price gap

For Product/Pricing Team

Monthly Report:

  • Competitor pricing trends
  • Market positioning shifts
  • Opportunities for repositioning
  • Risk areas

Quarterly Review:

  • Full competitive pricing analysis
  • Recommendations for pricing changes
  • Packaging optimization suggestions

For Marketing

Battle Card Updates:

  • Auto-flag outdated pricing references
  • Suggest new positioning based on changes
  • Create comparison content for SEO

Real Example: Detecting a Stealth Price Drop

One of our customers caught a competitor's stealth price drop through this system:

What happened:

  • Competitor removed their pricing page
  • Started showing "Contact Sales" instead
  • Actually dropped prices 30% for deals over $50K

How we detected it:

  1. Pricing page change detected immediately
  2. G2 reviews mentioned lower prices within 2 weeks
  3. LinkedIn posts from their reps hinted at new flexibility
  4. Job posting mentioned "competitive pricing" in the comp plan

The response:

  • Alerted sales team within 48 hours of detection
  • Adjusted discount authority for enterprise deals
  • Updated all battle cards
  • Created targeted content for enterprise buyers

Result: Retained 3 deals that would have been lost, worth $180K ARR.

Getting Started Today

You don't need a complex system to start:

Day 1: List your top 5 competitors and their pricing page URLs

Day 2: Set up Google Alerts for "[Competitor] pricing" for each

Week 1: Manually check all pricing pages and document current state

Week 2: Compare to last week, note any changes, alert sales team

Month 1: Evaluate whether to automate with OpenClaw or purchase a tool

The manual process shows you the value. The automation makes it sustainable.

Free Tool

Try our Tech Stack Detector — instantly detect any company's tech stack from their website. No signup required.

MarketBetter's Approach

Competitive intelligence is built into our AI SDR platform. When a competitor is tagged on a deal, your reps see current pricing, positioning, and suggested responses—automatically updated as things change.

No separate tool. No manual updates. Just intelligence where reps need it.

Want to see competitive intel that actually helps close deals? Book a demo and we'll show you how it works.


Related reading:

AI Revenue Attribution for GTM Teams: Track What Actually Drives Pipeline [2026]

· 9 min read
MarketBetter Team
Content Team, marketbetter.ai

Your marketing team claims the webinar drove $500K in pipeline. Sales says it was their cold calls. The CEO wants to know where to invest next quarter's budget.

Sound familiar?

Revenue attribution is broken at most B2B companies. You're either flying blind or drowning in conflicting reports from tools that each claim credit for the same deals.

Here's the truth: Traditional attribution models (first-touch, last-touch, even "multi-touch") are built for a world that doesn't exist anymore. B2B buyers touch 20+ channels before talking to sales. They read your blog, see your LinkedIn ads, attend your webinar, get cold emailed, AND get a referral—all for the same deal.

The good news? AI coding agents like OpenAI Codex can build custom attribution systems that actually reflect your business. Not generic SaaS attribution—your attribution model.

Revenue attribution workflow showing marketing touchpoints flowing through CRM to closed deals

Why Traditional Attribution Fails GTM Teams

Let's be honest about what's wrong with current approaches:

First-Touch Attribution

Credits the first interaction. Problem: That blog post from 18 months ago gets credit for a deal that actually closed because of a killer demo.

Last-Touch Attribution

Credits the final interaction before conversion. Problem: Your SDR's call gets all the credit while the content marketing that warmed up the lead gets nothing.

Linear Multi-Touch

Splits credit equally across all touchpoints. Problem: A random email open counts the same as a 45-minute product demo? That's not how influence works.

Time-Decay Models

More recent touches get more credit. Problem: What about the case study that sat in their inbox for 3 months before they finally read it and decided to buy?

The real issue: These models were designed for e-commerce, not B2B. When someone buys shoes online, you can track a clean path from ad → click → purchase. When an enterprise company buys your software, there are 5 stakeholders, 6 months of evaluation, and touchpoints across every channel you have.

The AI-First Approach to Revenue Attribution

Here's what changes when you use Codex to build custom attribution:

  1. Pull data from everywhere — CRM, marketing automation, ad platforms, website analytics, call tracking, all unified
  2. Build custom models — Weight touchpoints based on YOUR sales cycle, not generic assumptions
  3. Automate the analysis — Daily/weekly attribution reports without manual data wrangling
  4. Iterate fast — Test different models, see which one best predicts future revenue

Before and after comparison of manual spreadsheet tracking versus automated AI attribution dashboard

Building Revenue Attribution with OpenAI Codex

Let me walk you through a practical implementation. We'll build a system that:

  • Pulls deal data from HubSpot
  • Collects touchpoint data from multiple sources
  • Applies custom attribution logic
  • Outputs actionable reports

Step 1: Set Up Your Environment

First, install the Codex CLI:

npm install -g @openai/codex

Create a project directory:

mkdir revenue-attribution && cd revenue-attribution
codex init

Step 2: Define Your Data Sources

Create a configuration file that maps all your touchpoint sources:

// config/sources.js
module.exports = {
  crm: {
    type: 'hubspot',
    apiKey: process.env.HUBSPOT_API_KEY,
    objects: ['deals', 'contacts', 'companies']
  },
  marketing: {
    type: 'hubspot_marketing',
    events: ['email_open', 'email_click', 'form_submission', 'page_view']
  },
  ads: {
    type: 'google_ads',
    conversionActions: ['demo_request', 'trial_signup']
  },
  calls: {
    type: 'gong', // or your call tracking tool
    outcomes: ['meeting_scheduled', 'demo_completed']
  },
  website: {
    type: 'google_analytics',
    events: ['page_view', 'scroll_depth', 'time_on_page']
  }
};

Step 3: Build the Attribution Model

Here's where Codex shines. You can describe your attribution logic in plain English and have it generate the code:

Prompt to Codex:

Build a revenue attribution model with these rules:
1. Demo attendance = 30% weight (high-intent signal)
2. Content downloads = 15% weight (research phase)
3. Email engagement = 10% weight (nurture)
4. Website visits = 5% weight (awareness)
5. Direct sales touch = 25% weight (relationship)
6. Referral source = 15% weight (trust signal)

Apply time decay: touchpoints in last 30 days get 2x weight.
Handle multi-stakeholder deals by attributing to the primary contact's journey.
Output should show attribution by channel and by campaign.

Codex generates something like:

// models/customAttribution.js
const WEIGHTS = {
  demo_completed: 0.30,
  content_download: 0.15,
  email_engagement: 0.10,
  website_visit: 0.05,
  sales_call: 0.25,
  referral: 0.15
};

const TIME_DECAY_WINDOW = 30; // days
const TIME_DECAY_MULTIPLIER = 2;

function calculateAttribution(deal, touchpoints) {
  const closeDate = new Date(deal.closeDate);
  let totalWeight = 0;
  const attribution = {};

  touchpoints.forEach(touch => {
    const touchDate = new Date(touch.timestamp);
    const daysBeforeClose = (closeDate - touchDate) / (1000 * 60 * 60 * 24);

    let weight = WEIGHTS[touch.type] || 0.05;

    // Apply time decay bonus
    if (daysBeforeClose <= TIME_DECAY_WINDOW) {
      weight *= TIME_DECAY_MULTIPLIER;
    }

    totalWeight += weight;

    const channel = touch.channel || 'direct';
    const campaign = touch.campaign || 'none';

    if (!attribution[channel]) {
      attribution[channel] = { weight: 0, campaigns: {} };
    }
    attribution[channel].weight += weight;

    if (!attribution[channel].campaigns[campaign]) {
      attribution[channel].campaigns[campaign] = 0;
    }
    attribution[channel].campaigns[campaign] += weight;
  });

  // Normalize to percentages
  Object.keys(attribution).forEach(channel => {
    attribution[channel].percentage =
      (attribution[channel].weight / totalWeight * 100).toFixed(1);
  });

  return {
    dealId: deal.id,
    dealValue: deal.amount,
    attribution
  };
}

module.exports = { calculateAttribution };
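
A quick usage sketch with hypothetical deal and touchpoint data (the field names follow the model above; the values are illustrative, not from a real run):

// Hypothetical example data — shapes match calculateAttribution above
const deal = { id: 'deal-123', amount: 48000, closeDate: '2026-02-01' };

const touchpoints = [
  { type: 'demo_completed', timestamp: '2026-01-20', channel: 'sales', campaign: 'q1-demos' },
  { type: 'content_download', timestamp: '2025-11-02', channel: 'content', campaign: 'ai-sdr-playbook' },
  { type: 'website_visit', timestamp: '2025-10-15', channel: 'organic', campaign: null }
];

console.log(JSON.stringify(calculateAttribution(deal, touchpoints), null, 2));
// The demo touch falls inside the 30-day window, so it gets the 2x decay
// bonus (0.30 * 2 = 0.60) and dominates the normalized percentages.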

Step 4: Automate Data Collection

Use Codex to write the data pipeline:

// pipelines/collectTouchpoints.js
const HubSpot = require('@hubspot/api-client');

async function collectTouchpointsForDeal(dealId) {
  const hubspot = new HubSpot.Client({ accessToken: process.env.HUBSPOT_TOKEN });

  // Get deal and associated contacts
  const deal = await hubspot.crm.deals.basicApi.getById(dealId, [
    'amount', 'closedate', 'dealstage'
  ]);

  const associations = await hubspot.crm.deals.associationsApi.getAll(
    dealId, 'contacts'
  );

  const touchpoints = [];

  for (const assoc of associations.results) {
    // Get contact's marketing timeline
    const timeline = await hubspot.crm.timeline.eventsApi.getEventsByContactId(
      assoc.id
    );

    timeline.results.forEach(event => {
      touchpoints.push({
        type: mapEventType(event.eventType),
        timestamp: event.timestamp,
        channel: extractChannel(event),
        campaign: event.properties?.campaign || null,
        contactId: assoc.id
      });
    });

    // Get email engagement
    const emails = await getEmailEngagement(assoc.id);
    touchpoints.push(...emails);

    // Get call/meeting history
    const calls = await getCallHistory(assoc.id);
    touchpoints.push(...calls);
  }

  return touchpoints;
}

module.exports = { collectTouchpointsForDeal };
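
The pipeline leans on two small helpers that Codex would also generate. A minimal sketch, assuming HubSpot-style event type strings (the mappings here are illustrative, not HubSpot's canonical names):

// Illustrative mapping from raw event types to the model's touchpoint types
function mapEventType(eventType) {
  const map = {
    email_open: 'email_engagement',
    email_click: 'email_engagement',
    form_submission: 'content_download',
    page_view: 'website_visit',
    meeting_scheduled: 'sales_call',
    demo_completed: 'demo_completed'
  };
  return map[eventType] || 'website_visit'; // default to the lowest-weight type
}

// Derive a channel from event properties, falling back to 'direct'
function extractChannel(event) {
  return event.properties?.utm_medium || event.properties?.channel || 'direct';
}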

Step 5: Generate Attribution Reports

// reports/weeklyAttribution.js
const { collectTouchpointsForDeal } = require('../pipelines/collectTouchpoints');
const { calculateAttribution } = require('../models/customAttribution');

async function generateWeeklyReport() {
  const closedDeals = await getClosedDealsThisWeek();
  const results = [];

  for (const deal of closedDeals) {
    const touchpoints = await collectTouchpointsForDeal(deal.id);
    const attribution = calculateAttribution(deal, touchpoints);
    results.push(attribution);
  }

  // Aggregate by channel
  const channelSummary = aggregateByChannel(results);

  // Aggregate by campaign
  const campaignSummary = aggregateByCampaign(results);

  return {
    period: 'weekly',
    totalRevenue: results.reduce((sum, r) => sum + r.dealValue, 0),
    dealCount: results.length,
    byChannel: channelSummary,
    byCampaign: campaignSummary
  };
}
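
aggregateByChannel and aggregateByCampaign aren't shown above; here's one way to write the channel rollup, assuming the result shape returned by calculateAttribution:

// Roll per-deal attribution up into revenue credited to each channel
function aggregateByChannel(results) {
  const summary = {};
  for (const r of results) {
    for (const [channel, data] of Object.entries(r.attribution)) {
      if (!summary[channel]) summary[channel] = 0;
      // Credit the channel with its percentage share of the deal's value
      summary[channel] += r.dealValue * (parseFloat(data.percentage) / 100);
    }
  }
  return summary;
}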

Architecture diagram showing Codex processing data flows between code repository, CRM API, and analytics dashboard

Real Example: What This Looks Like in Practice

Here's a sample output from a real attribution run:

Weekly Revenue Attribution Report
=================================
Period: Feb 3-10, 2026
Closed Deals: 8
Total Revenue: $247,000

Attribution by Channel:
-----------------------
Sales Calls/Meetings 34.2% $84,474
Demo Attendance 28.7% $70,889
Content Marketing 18.3% $45,201
Email Nurture 11.4% $28,158
Paid Ads 7.4% $18,278

Top Performing Campaigns:
-------------------------
1. "AI SDR Playbook" ebook $62,400 influenced
2. January Webinar Series $48,200 influenced
3. LinkedIn Retargeting $31,100 influenced
4. Cold Email Sequence A $28,900 influenced

Now you can answer the CEO's question: "Where should we invest next quarter?"

Advanced: Mid-Turn Steering with GPT-5.3 Codex

One of the killer features in the new GPT-5.3 Codex release (Feb 5, 2026) is mid-turn steering. This lets you adjust your attribution model while Codex is running the analysis.

Example scenario:

  1. You kick off a large attribution run across 6 months of data
  2. Halfway through, you realize you forgot to include LinkedIn engagement
  3. With mid-turn steering, you can add that data source without restarting

# Start the attribution run
codex run attribution --period="2025-08-01 to 2026-02-01"

# Mid-run, add LinkedIn data
codex steer "Also include LinkedIn company page engagement as a touchpoint with 8% weight"

This is massive for iterating on attribution models. You don't have to guess the perfect model upfront—you can adjust based on what you're seeing.

Why Build vs. Buy?

You might be thinking: "Why not just use a tool like Bizible, Attribution, or CaliberMind?"

Here's why building makes sense for many GTM teams:

| Factor | SaaS Attribution Tool | Custom with Codex |
|--------|----------------------|-------------------|
| Cost | $2,000-$10,000/month | ~$100/month API costs |
| Customization | Limited to their models | Build exactly what you need |
| Data ownership | Data lives in their cloud | Your data, your infrastructure |
| Integration | Whatever connectors they support | Connect anything with an API |
| Time to value | Weeks of implementation | Days with Codex |

The trade-off is maintenance. But with Codex, you can also automate the maintenance—have it monitor for data quality issues, alert on anomalies, and even suggest model improvements.

Getting Started This Week

Here's a practical starting point:

  1. Day 1: Export your closed-won deals from the last 90 days with associated contacts
  2. Day 2: Use Codex to map all touchpoints for those contacts (email, calls, web visits)
  3. Day 3: Define your initial weight model based on what you think matters
  4. Day 4: Run attribution and compare to gut feel—adjust weights
  5. Day 5: Automate weekly reports to Slack

Within a week, you'll have better attribution than most companies get from $50K/year tools.

The Bigger Picture

Revenue attribution isn't just about knowing what worked. It's about building a feedback loop that makes your entire GTM motion smarter.

When you know that demo attendance drives 3x the revenue of webinar attendance, you stop running generic webinars and start running webinars designed to book demos.

When you know that a specific cold email sequence influenced 40% of Q1 revenue, you double down on that messaging.

When you know that LinkedIn ads drive awareness but never close deals, you reallocate budget to channels that do.

AI coding agents like Codex make this level of insight accessible to teams that couldn't afford enterprise BI tools or couldn't hire data engineers.


Free Tool

Try our AI Lead Generator — find verified LinkedIn leads for any company instantly. No signup required.

Ready to See What's Actually Driving Your Pipeline?

MarketBetter helps B2B teams track the signals that matter and turn them into action. Our AI-powered playbook shows your SDRs exactly what to do next—based on the touchpoints that actually correlate with closed deals.

Book a demo →


Related reading:

AI Review Analysis for Competitive Intelligence with Claude [2026]

· 8 min read
sunder
Founder, marketbetter.ai

Your competitors' customers are telling you exactly how to beat them. They're leaving 1-star reviews on G2, complaining on Twitter, and posting detailed critiques on Capterra.

But who has time to read 500 reviews?

That's where AI comes in. In this guide, I'll show you how to build an automated review analysis system with Claude that extracts competitive intelligence, tracks sentiment trends, and surfaces sales opportunities—all while you sleep.

AI Review Analysis Pipeline

Why Reviews Are Competitive Gold

Reviews contain unfiltered intelligence that you can't get anywhere else:

  • Real pain points — What customers actually hate (not what marketing says)
  • Feature gaps — What competitors are missing that you could exploit
  • Switching triggers — What makes customers leave for alternatives
  • Pricing complaints — How customers really feel about value
  • Support quality — Where competitors are dropping the ball

A single detailed review can give you a battlecard-worthy insight. Hundreds of reviews? That's a strategic playbook.

The Review Intelligence Stack

Here's what we're building:

  1. Collector — Scrape reviews from G2, Capterra, TrustRadius
  2. Analyzer — Claude extracts structured insights
  3. Aggregator — Trends and patterns across time
  4. Alerter — Notify sales when opportunities surface

Let's build each piece.

Step 1: Collecting Reviews

First, gather the raw data. G2 and Capterra have APIs, but you can also scrape public review pages:

// review-collector.js
const cheerio = require('cheerio');
const axios = require('axios');

// Simple delay helper so we don't hammer the site
const sleep = ms => new Promise(resolve => setTimeout(resolve, ms));

async function collectG2Reviews(productSlug, pages = 5) {
  const reviews = [];

  for (let page = 1; page <= pages; page++) {
    const url = `https://www.g2.com/products/${productSlug}/reviews?page=${page}`;
    const { data } = await axios.get(url, {
      headers: { 'User-Agent': 'Mozilla/5.0...' }
    });

    const $ = cheerio.load(data);

    $('.review-item').each((i, el) => {
      reviews.push({
        rating: $(el).find('.star-rating').attr('data-rating'),
        title: $(el).find('.review-title').text().trim(),
        pros: $(el).find('.pros-content').text().trim(),
        cons: $(el).find('.cons-content').text().trim(),
        date: $(el).find('.review-date').text().trim(),
        industry: $(el).find('.reviewer-industry').text().trim(),
        companySize: $(el).find('.reviewer-company-size').text().trim(),
        role: $(el).find('.reviewer-role').text().trim()
      });
    });

    await sleep(2000); // Be respectful
  }

  return reviews;
}

Competitors to Track

For a GTM/sales intelligence company, you'd track:

  • Direct competitors: Warmly, Common Room, 6sense, ZoomInfo
  • Adjacent tools: Apollo, Outreach, Salesloft
  • Emerging players: Unify GTM, Clay, Clearbit

Step 2: Analyzing with Claude

Here's where it gets interesting. Claude processes each review and extracts structured intelligence:

// review-analyzer.js
const Anthropic = require('@anthropic-ai/sdk');

const claude = new Anthropic();

async function analyzeReview(review, competitor) {
  const response = await claude.messages.create({
    model: 'claude-sonnet-4-20250514',
    max_tokens: 2048,
    messages: [{
      role: 'user',
      content: `Analyze this ${competitor} review for competitive intelligence:

**Rating:** ${review.rating}/5
**Title:** ${review.title}
**Pros:** ${review.pros}
**Cons:** ${review.cons}
**Reviewer:** ${review.role} at ${review.companySize} company in ${review.industry}

Extract and return as JSON:
{
  "sentiment": "positive|negative|mixed",
  "pain_points": ["specific issues mentioned"],
  "praised_features": ["what they like"],
  "missing_features": ["what they wish existed"],
  "pricing_feedback": "any pricing comments",
  "support_feedback": "any support comments",
  "switching_signals": ["any hints they might switch"],
  "competitor_mentions": ["other products mentioned"],
  "use_case": "how they use the product",
  "sales_opportunity": {
    "is_opportunity": true/false,
    "reason": "why this could be a sales opportunity",
    "battlecard_insight": "insight for sales team"
  }
}`
    }]
  });

  return JSON.parse(response.content[0].text);
}

Batch Processing

Process reviews efficiently with batching:

async function analyzeAllReviews(reviews, competitor) {
  const results = [];

  // Process in batches of 10
  for (let i = 0; i < reviews.length; i += 10) {
    const batch = reviews.slice(i, i + 10);

    const batchResults = await Promise.all(
      batch.map(review => analyzeReview(review, competitor))
    );

    results.push(...batchResults);

    console.log(`Processed ${Math.min(i + 10, reviews.length)}/${reviews.length}`);
    await sleep(1000); // Rate limiting
  }

  return results;
}

G2 Review Sentiment Dashboard

Step 3: Aggregating Insights

Individual reviews are useful. Patterns across hundreds are powerful:

async function generateCompetitiveReport(analyzedReviews, competitor) {
  const response = await claude.messages.create({
    model: 'claude-sonnet-4-20250514',
    max_tokens: 4096,
    system: `You are a competitive intelligence analyst.
Generate actionable insights for a sales team.
Be specific and quote reviews when relevant.`,
    messages: [{
      role: 'user',
      content: `Analyze these ${analyzedReviews.length} ${competitor} reviews and generate a competitive intelligence report:

${JSON.stringify(analyzedReviews, null, 2)}

Generate:

## Executive Summary
Key findings in 3 bullets

## Top Pain Points (ranked by frequency)
What customers complain about most

## Feature Gaps
What's missing that we could highlight

## Pricing Perception
How customers feel about value/price

## Support Quality
Strengths and weaknesses of their support

## Switching Triggers
What makes customers leave

## Sales Battlecard Updates
Specific talking points for our sales team

## Recommended Actions
What we should do with this intelligence`
    }]
  });

  return response.content[0].text;
}

Step 4: Alerting on Opportunities

Set up real-time alerts when high-value signals appear:

async function checkForOpportunities(newReviews) {
  const opportunities = newReviews.filter(r =>
    r.analysis.sales_opportunity.is_opportunity
  );

  for (const opp of opportunities) {
    // Check if reviewer's company is in our target market
    const company = await enrichCompany(opp.review.companyName);

    if (matchesICP(company)) {
      await slack.postMessage({
        channel: '#sales-opportunities',
        text: `🎯 *Potential Opportunity Detected*`,
        blocks: [
          {
            type: 'section',
            text: {
              type: 'mrkdwn',
              text: `*${company.name}* left a ${opp.review.rating}-star review of *${opp.competitor}*

*Signal:* ${opp.analysis.sales_opportunity.reason}

*Key Pain Points:*
${opp.analysis.pain_points.map(p => `• ${p}`).join('\n')}

*Battlecard Insight:*
> ${opp.analysis.sales_opportunity.battlecard_insight}`
            }
          },
          {
            type: 'actions',
            elements: [
              { type: 'button', text: { type: 'plain_text', text: 'Add to Outreach' }, action_id: 'add_prospect' },
              { type: 'button', text: { type: 'plain_text', text: 'View Full Review' }, url: opp.review.url }
            ]
          }
        ]
      });
    }
  }
}
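
enrichCompany and matchesICP are left to your stack; here's a minimal matchesICP sketch with illustrative ICP criteria (the thresholds and field names are assumptions, not a standard):

// Illustrative ICP filter — tune fields and thresholds to your market
function matchesICP(company) {
  if (!company) return false;
  const targetIndustries = ['Software', 'Financial Services', 'Healthcare'];
  return (
    company.employees >= 50 &&
    company.employees <= 5000 &&
    targetIndustries.includes(company.industry)
  );
}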

What Intelligence Looks Like

Here's a real example of what Claude extracts from a 2-star Warmly review:

Input Review:

"We tried Warmly for visitor identification but the data quality wasn't there. Out of 100 identified companies, maybe 20 were accurate. Also, $35K/year for what amounts to a widget is insane. Looking at alternatives now."

Claude's Analysis:

{
  "sentiment": "negative",
  "pain_points": [
    "Data quality issues - only 20% accuracy on company identification",
    "High price point ($35K/year) not justified by value"
  ],
  "switching_signals": [
    "Actively looking at alternatives",
    "Price-to-value mismatch frustration"
  ],
  "sales_opportunity": {
    "is_opportunity": true,
    "reason": "Actively evaluating alternatives, frustrated with price and accuracy",
    "battlecard_insight": "Lead with data quality comparisons and ROI calculator showing value at our price point"
  }
}

That's a warm lead and a battlecard insight in one.

Monthly trend analysis reveals strategic shifts:

async function monthlyTrendAnalysis(competitor) {
  const lastMonth = await getReviewsByPeriod(competitor, 'last-30-days');
  const previousMonth = await getReviewsByPeriod(competitor, '30-60-days-ago');

  const analysis = await claude.messages.create({
    model: 'claude-sonnet-4-20250514',
    max_tokens: 2048,
    messages: [{
      role: 'user',
      content: `Compare ${competitor}'s reviews from the last 30 days vs the previous 30 days:

LAST 30 DAYS (${lastMonth.length} reviews):
Average rating: ${calculateAverage(lastMonth, 'rating')}
Top complaints: ${aggregateComplaints(lastMonth)}

PREVIOUS 30 DAYS (${previousMonth.length} reviews):
Average rating: ${calculateAverage(previousMonth, 'rating')}
Top complaints: ${aggregateComplaints(previousMonth)}

Identify:
1. Sentiment trend (improving/declining/stable)
2. New complaints emerging
3. Resolved issues (complaints disappearing)
4. Any correlation with product releases or news`
    }]
  });

  return analysis.content[0].text;
}
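
calculateAverage and aggregateComplaints are simple rollups; sketches below, assuming each stored review carries the analyzer's output under an analysis key (that storage shape is an assumption):

// Mean of a numeric field across reviews (ratings arrive as strings from scraping)
function calculateAverage(reviews, field) {
  if (reviews.length === 0) return 0;
  const total = reviews.reduce((sum, r) => sum + Number(r[field]), 0);
  return (total / reviews.length).toFixed(2);
}

// Top pain points by frequency, using the analyzer's pain_points arrays
function aggregateComplaints(reviews, topN = 5) {
  const counts = {};
  for (const r of reviews) {
    for (const p of r.analysis?.pain_points || []) {
      counts[p] = (counts[p] || 0) + 1;
    }
  }
  return Object.entries(counts)
    .sort((a, b) => b[1] - a[1])
    .slice(0, topN)
    .map(([complaint, n]) => `${complaint} (${n})`)
    .join('; ');
}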

Integration with Sales Workflows

HubSpot: Auto-Update Battlecards

// When new insights found, update company records
async function updateBattlecards(insights) {
for (const insight of insights.battlecard_updates) {
await hubspot.crm.objects.notes.create({
associations: [{
to: { id: insight.competitorCompanyId },
types: [{ associationCategory: 'HUBSPOT_DEFINED', associationTypeId: 202 }]
}],
properties: {
hs_note_body: `📊 **Competitive Intel Update**\n\n${insight.update}\n\n_Source: G2 Review Analysis_`,
hs_timestamp: Date.now()
}
});
}
}

OpenClaw: Automated Monitoring

Set up 24/7 monitoring with OpenClaw:

# In your OpenClaw agent config
agents:
  competitive-intel:
    schedule: "0 6 * * *" # Daily at 6am
    task: |
      Check for new competitor reviews on G2 and Capterra.
      Analyze any with rating <= 3 stars.
      Alert #sales-intel channel with opportunities.
      Update battlecard docs in Notion.

Privacy and Ethics

Important considerations:

  1. Public reviews only — Only analyze publicly posted reviews
  2. Don't scrape aggressively — Respect rate limits and robots.txt
  3. No reviewer doxxing — Don't try to identify individual reviewers for outreach
  4. Aggregate, don't stalk — Use for strategic insights, not individual targeting

ROI of Review Intelligence

| Activity | Time (Manual) | Time (AI) | Savings |
|----------|---------------|-----------|---------|
| Read 100 reviews | 5 hours | 0 | 5 hours |
| Extract key themes | 3 hours | 10 min | 2.8 hours |
| Update battlecards | 2 hours | 30 min | 1.5 hours |
| Generate report | 4 hours | 15 min | 3.75 hours |
| **Weekly Total** | **14 hours** | **55 min** | **13 hours** |

That's 52 hours/month saved on competitive intelligence alone.

Getting Started

  1. Pick 3-5 competitors to monitor
  2. Set up collection for G2 (easiest API access)
  3. Run initial analysis on last 6 months of reviews
  4. Generate baseline report to share with sales
  5. Automate weekly updates going forward

Free Tool

Try our Tech Stack Detector — instantly detect any company's tech stack from their website. No signup required.

Turn Competitor Weaknesses Into Your Wins

MarketBetter helps GTM teams work smarter with AI-powered competitive intelligence. Know what to say, when to say it, and who to target—all automatically.

Book a Demo →

See how teams are using AI to outsmart the competition.

AI RFP Response Automation with OpenAI Codex [2026 Guide]

· 7 min read
sunder
Founder, marketbetter.ai

RFPs are the bane of every sales team's existence. You get a 50-page document, 200 questions, and a deadline that's always "last week." Your best reps spend days copy-pasting from old proposals while new deals pile up.

What if you could cut RFP response time from 40 hours to 4?

That's exactly what AI coding agents like OpenAI Codex make possible. In this guide, I'll show you how to build an automated RFP response system that drafts 80% of your answers automatically—letting your team focus on customization rather than repetitive typing.

AI RFP Response Automation Workflow

The Problem: RFPs Are a Time Black Hole

Here's the ugly truth about RFPs:

  • Average time to complete: 20-40 hours per response
  • Win rate: 5-25% (meaning 75%+ of that effort is wasted)
  • Repetition: 60-70% of questions are repeated across RFPs
  • Cost: $3,000-$10,000 in labor per response

Your best salespeople—the ones who should be closing deals—are stuck copying answers from old Word docs. It's insane.

How AI Changes the Game

OpenAI's Codex (powered by GPT-5.3, released February 5, 2026) is the most capable agentic coding model ever built. Combined with its mid-turn steering capability, you can build RFP automation that:

  1. Parses RFP documents automatically (PDFs, Word docs, spreadsheets)
  2. Matches questions to your answer library using semantic search
  3. Drafts responses that match your company's voice and style
  4. Flags gaps where human input is needed
  5. Formats output to match required submission formats

The key insight: You don't need 100% automation. You need to eliminate the 70% that's copy-paste.

Building Your AI RFP System

Step 1: Create Your Answer Library

Your answer library is the foundation. This is where Codex pulls responses from.

Structure it like this:

rfp-library/
├── security/
│   ├── soc2-compliance.md
│   ├── data-encryption.md
│   └── access-controls.md
├── technical/
│   ├── api-capabilities.md
│   ├── integrations.md
│   └── uptime-sla.md
├── company/
│   ├── about-us.md
│   ├── customer-references.md
│   └── team-bios.md
└── pricing/
    ├── pricing-models.md
    └── enterprise-terms.md

Each file contains pre-approved answers with metadata:

# SOC 2 Compliance

**Category:** Security
**Last Updated:** 2026-01-15
**Approved By:** Legal Team

## Standard Response

MarketBetter maintains SOC 2 Type II certification, audited annually by [Auditor Name]. Our most recent audit was completed in December 2025 with zero findings.

## Short Response (for checkboxes)

Yes - SOC 2 Type II certified, audited annually.

## Long Response (for detailed sections)

[Extended response with specifics...]

Step 2: Set Up Codex for RFP Processing

Install the Codex CLI:

npm install -g @openai/codex

Create a Codex agent specifically for RFP work:

// rfp-agent.js
const { Codex } = require('@openai/codex');

const rfpAgent = new Codex({
  model: 'gpt-5.3-codex',
  tools: ['file_read', 'file_write', 'search'],
  context: `You are an expert RFP response writer. Your job is to:
1. Parse RFP questions accurately
2. Find matching answers in the library
3. Draft responses that are professional and specific
4. Flag questions that need human review

Always maintain the company's professional tone.
Never make up technical claims—flag for review if unsure.`
});

Step 3: Parse and Match Questions

The magic happens in semantic matching. Codex doesn't just keyword-match—it understands intent:

async function processRFP(rfpDocument) {
  // Extract questions from RFP
  const questions = await rfpAgent.run(`
    Parse this RFP document and extract all questions.
    Return as JSON array with:
    - question_id
    - question_text
    - category (security/technical/company/pricing/other)
    - complexity (simple/moderate/complex)

    Document: ${rfpDocument}
  `);

  // Match each question to library
  for (const q of questions) {
    const match = await rfpAgent.run(`
      Find the best matching answer in our library for:
      "${q.question_text}"

      Return:
      - matched_file
      - confidence (0-100)
      - suggested_response
      - needs_review (boolean)
    `);

    q.match = match;
  }

  return questions;
}

Step 4: Use Mid-Turn Steering for Complex Questions

GPT-5.3 Codex's killer feature is mid-turn steering—you can redirect the agent while it's working. This is perfect for RFPs where context matters:

// Start processing
const session = await rfpAgent.startTask(`
  Process RFP section on data security.
  Draft responses for questions 15-25.
`);

// Mid-turn steering when you notice issues
await session.steer(`
  Important context: This client is in healthcare.
  Emphasize HIPAA compliance in all security answers.
  Reference our healthcare customer case studies.
`);

// Continue processing with new context
const results = await session.complete();

This is something ChatGPT can't do. You're not starting over—you're guiding the agent mid-flight.

Manual vs AI-Powered RFP Response

Real-World Results

Here's what teams using AI RFP automation report:

| Metric | Before AI | After AI | Improvement |
|--------|-----------|----------|-------------|
| Time per RFP | 40 hours | 6 hours | 85% faster |
| Questions auto-answered | 0% | 72% | |
| Response quality score | 78% | 84% | Higher consistency |
| RFPs completed/month | 3 | 12 | 4x throughput |

The quality actually goes UP because:

  • Answers are consistent (no more contradicting yourself)
  • Responses pull from approved, accurate content
  • Reps spend time on customization, not copy-paste

The 80/20 Rule for RFP Automation

Don't try to automate everything. Focus on:

Automate (80% of effort):

  • Standard compliance questions (SOC 2, GDPR, etc.)
  • Company background and overview
  • Technical specifications and integrations
  • Pricing structure explanations
  • Reference and case study summaries

Keep Human (20% of effort):

  • Custom pricing negotiations
  • Novel technical requirements
  • Strategic positioning against competitors
  • Executive summary and cover letter
  • Final review and submission

Integration with Your Sales Stack

Connect your RFP automation to the tools you already use:

HubSpot Integration

Track RFP deals and auto-populate company context:

async function enrichFromCRM(rfpDocument, dealId) {
  const deal = await hubspot.deals.get(dealId);

  return rfpAgent.run(`
    Process this RFP with the following context:
    - Company: ${deal.company.name}
    - Industry: ${deal.company.industry}
    - Deal Size: ${deal.amount}
    - Key Requirements: ${deal.properties.requirements}

    Customize responses accordingly.
    Document: ${rfpDocument}
  `);
}

Slack Notifications

Alert the team when RFPs need human review:

async function notifyTeam(rfpName, rfpResults) {
  const needsReview = rfpResults.filter(q => q.needs_review);

  if (needsReview.length > 0) {
    await slack.postMessage({
      channel: '#rfp-team',
      text: `🔔 RFP "${rfpName}" needs review on ${needsReview.length} questions`,
      blocks: needsReview.map(q => ({
        type: 'section',
        text: {
          type: 'mrkdwn',
          text: `*Q${q.question_id}:* ${q.question_text}\n_Confidence: ${q.confidence}%_`
        }
      }))
    });
  }
}

Common Pitfalls to Avoid

1. Garbage In, Garbage Out

Your answer library needs to be maintained. If you feed Codex outdated information, it'll confidently give wrong answers.

Fix: Schedule monthly library reviews. Track which answers get edited most—those need attention.

2. Over-Automation

If Codex is answering questions about features you don't have, you've got a problem.

Fix: Set confidence thresholds. Below 70% confidence? Flag for human review.
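
In code, that's a one-line filter over the matched questions from processRFP (the 70% threshold is the assumption here; tune it to your tolerance for errors):

// Route low-confidence matches to humans instead of auto-drafting
const CONFIDENCE_THRESHOLD = 70;

function partitionByConfidence(questions) {
  const autoAnswer = questions.filter(
    q => q.match.confidence >= CONFIDENCE_THRESHOLD && !q.match.needs_review
  );
  const humanReview = questions.filter(
    q => q.match.confidence < CONFIDENCE_THRESHOLD || q.match.needs_review
  );
  return { autoAnswer, humanReview };
}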

3. Ignoring Voice and Tone

Generic AI-written responses sound like... generic AI-written responses.

Fix: Include style guides in your agent's context. Train it on your best past proposals.

Cost Analysis

Let's do the math:

Manual RFP Response:

  • 40 hours × $75/hour (fully loaded cost) = $3,000 per RFP

AI-Assisted RFP Response:

  • Codex API costs: ~$50 per RFP
  • Human review time: 6 hours × $75 = $450
  • Total: ~$500 per RFP

That's an 83% cost reduction per RFP. If you do 10 RFPs per quarter, that's $100,000+ saved annually.

Getting Started Today

  1. Audit your last 5 RFPs — What percentage of questions repeat?
  2. Build your answer library — Start with your most common categories
  3. Set up Codex — Use the configuration above as a starting point
  4. Run a pilot — Test on one RFP before going all-in
  5. Iterate — Improve your library based on what gets edited

What's Next?

RFP automation is just one piece of the AI-powered sales ops puzzle. Check out these related guides:


Free Tool

Try our AI Lead Generator — find verified LinkedIn leads for any company instantly. No signup required.

Ready to Transform Your Sales Operations?

MarketBetter combines AI automation with human expertise to help GTM teams work smarter. Our platform tells your SDRs exactly who to contact, when, and what to say—so they can focus on closing instead of researching.

Book a Demo →

See how teams are using AI to 10x their sales productivity.

AI-Powered Sales Commission Calculator with Claude Code: Automate RevOps Complexity [2026]

· 8 min read
MarketBetter Team
Content Team, marketbetter.ai

The average RevOps team spends 40+ hours per month calculating commissions.

Spreadsheets break. Formulas have errors. Reps dispute their payouts. And when comp plans change mid-quarter, everything needs to be rebuilt from scratch.

Meanwhile, your sales team loses trust every time their commission is wrong—even by a few dollars.

Claude Code offers a better way. Its 200K context window can hold your entire comp plan, deal data, and calculation logic in a single session. Here's how to build a commission calculator that handles real-world complexity.

Sales commission calculation flow showing deal closure, rules engine, tier calculations, and payout

The Commission Calculation Problem

Before we solve it, let's acknowledge why it's hard:

Complexity Layers

A "simple" commission structure often includes:

  • Base rates that vary by product, region, or segment
  • Tiers that accelerate as reps hit quota thresholds
  • Splits between AEs, SDRs, SEs, and CSMs
  • SPIFs for strategic initiatives (new product push, multi-year deals)
  • Clawbacks for churned customers
  • Overrides for managers on team deals
  • Caps and floors on certain deal types
  • Pro-ration for mid-quarter hires or territory changes

The Spreadsheet Trap

Most teams start with Excel. By month 6, you have:

Commission_Calculator_v14_FINAL_FINAL_v2_JohnFix_ACTUALLY_FINAL.xlsx

With nested IFs that no one understands:

=IF(AND(B2="Enterprise",C2>50000,D2="New"),
IF(E2>1,G2*0.12*1.15,G2*0.12),
IF(AND(B2="Mid-Market",C2>25000),
IF(F2="SPIF",G2*0.10*1.25,G2*0.10),
G2*0.08))

When something breaks—and it will—good luck debugging that.

The Trust Problem

57% of sales reps have received an incorrect commission statement (Xactly). When reps don't trust their comp, they:

  • Spend time manually verifying every deal
  • Lose motivation during payout disputes
  • Leave for companies with "cleaner" comp plans

Trust in compensation is trust in leadership.

The Claude Code Solution

Claude Code's strengths align perfectly with commission calculation:

  1. Natural language rules - Describe your comp plan in English, not formulas
  2. 200K context - Hold the entire comp plan + all deals in one session
  3. Explainable logic - Ask "why did this deal pay $X?" and get a real answer
  4. Adaptable - Change the plan mid-quarter without rebuilding

Step 1: Document Your Comp Plan

Instead of nested formulas, describe your plan clearly:

# Sales Commission Plan - Q1 2026

## Base Rates

### Account Executives
- New Business: 10% of first-year ACV
- Expansion: 8% of expansion ACV
- Renewal: 3% of renewal ACV

### SDRs
- Qualified Meeting: $100 per SAL
- Opportunity Created: $250 per SQL that converts to opportunity
- Deal Credit: 2% of won ACV (capped at $2,000 per deal)

### Sales Engineers
- Technical Win: $500 per closed-won where SE was primary
- Deal Credit: 3% of won ACV for complex deals (>$50K)

## Tier Accelerators

| Quota Attainment | Multiplier |
|------------------|------------|
| 0-80% | 1.0x |
| 80-100% | 1.15x |
| 100-120% | 1.30x |
| 120%+ | 1.50x |

## SPIFs (Q1)
- New product (DataSync): Additional 2% on any deal including DataSync
- Multi-year: Additional 5% for 2+ year commitments
- Competitive displacement: Additional $1,000 for wins against Competitor X

## Splits
- SDR + AE on same deal: SDR gets meeting bonus + 2% deal credit; AE gets standard rate
- AE + SE on complex deal: SE gets technical win bonus; AE gets standard rate
- Two AEs on deal: Split based on documented territory/role agreement

## Clawbacks
- Customer churns within 6 months: 100% clawback
- Customer churns 6-12 months: 50% clawback
- Downgrade within 12 months: Clawback on difference

## Caps and Floors
- No cap on accelerated earnings
- Minimum $500 payout per closed-won deal (protects small deal motivation)
- Manager override: 5% of team deals, capped at $50K/quarter

Commission tier structure showing progression from base rate through accelerator levels
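
If you'd rather keep the tier math in deterministic code and let Claude handle only the judgment calls, the accelerator table translates directly. A sketch, with attainment as a percentage and boundaries assumed to round up into the higher tier:

// Tier accelerators from the plan above, applied to quota attainment (%)
function tierMultiplier(attainmentPct) {
  if (attainmentPct >= 120) return 1.5;
  if (attainmentPct >= 100) return 1.3;
  if (attainmentPct >= 80) return 1.15;
  return 1.0;
}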

Step 2: Build the Calculator

Feed your comp plan to Claude Code:

You are a commission calculator for a B2B SaaS sales team. 

COMMISSION PLAN:
[Paste your entire comp plan document]

CALCULATION RULES:
1. Apply base rates first
2. Apply tier multipliers based on current quota attainment
3. Apply applicable SPIFs
4. Calculate splits according to deal roles
5. Check for clawbacks on previously paid commissions
6. Apply caps/floors
7. Show your work at each step

For each deal, output:
- Gross commission before modifiers
- Applicable tier multiplier
- SPIFs applied
- Split breakdown (if multiple parties)
- Final commission per person
- Reasoning for each decision

If anything is ambiguous, flag it for human review rather than guessing.

Step 3: Process Deals

async function calculateCommission(deal, repProfile) {
  const prompt = `
Calculate commission for this deal:

DEAL:
- Company: ${deal.company}
- ACV: $${deal.acv}
- Type: ${deal.type} (New/Expansion/Renewal)
- Products: ${deal.products.join(', ')}
- Contract term: ${deal.termMonths} months
- Close date: ${deal.closeDate}
- Displaced competitor: ${deal.displacedCompetitor || 'None'}

REP:
- Name: ${repProfile.name}
- Role: ${repProfile.role}
- Quota: $${repProfile.quota}
- YTD Closed: $${repProfile.ytdClosed}
- Current attainment: ${repProfile.attainment}%

OTHER PARTIES ON DEAL:
${deal.splits.map(s => `- ${s.name} (${s.role}): ${s.contribution}`).join('\n')}

Show all calculations step by step.
`;

  return await claude.calculate(prompt);
}

Step 4: Handle Edge Cases

Claude Code shines on the weird stuff:

EDGE CASE HANDLING:

Deal: Multi-year with mid-contract expansion
- Customer signed 3-year deal in January ($100K ACV)
- Expanded in March (+$25K ACV, same rep)

Question: How do we calculate the expansion commission?

REASONING:
1. Original deal: 3-year, so multi-year SPIF applies (10% + 5% = 15%)
2. Expansion: Same contract term, inherits multi-year status
3. Expansion rate: 8% base + 5% multi-year = 13%
4. Rep attainment: Now at 125% with original deal
5. Tier multiplier: 1.50x applies to expansion

CALCULATION:
$25,000 × 13% × 1.50x = $4,875

FLAG: Verify if expansion should inherit multi-year SPIF
(policy may differ by team).

Practical Workflows

Monthly Commission Run

async function runMonthlyCommissions(month, year) {
  // Get all closed deals
  const deals = await crm.getClosedDeals({ month, year });

  // Get all rep profiles
  const reps = await getRepProfiles();

  // Calculate each deal
  const commissions = [];
  for (const deal of deals) {
    const calc = await calculateCommission(deal, reps[deal.ownerId]);
    commissions.push({
      deal: deal,
      calculation: calc,
      breakdown: parseBreakdown(calc)
    });
  }

  // Check for clawbacks
  const clawbacks = await checkClawbacks(month, year);

  // Generate report
  return {
    totalPayout: sum(commissions.map(c => c.breakdown.finalAmount)),
    byRep: groupByRep(commissions),
    clawbacks: clawbacks,
    flaggedForReview: commissions.filter(c => c.breakdown.hasFlags)
  };
}
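
sum and groupByRep are the usual rollups; sketches under the shapes used above (the byRep structure is an assumption):

// Sum a list of numbers
const sum = nums => nums.reduce((a, b) => a + b, 0);

// Group commission results by deal owner for per-rep statements
function groupByRep(commissions) {
  const byRep = {};
  for (const c of commissions) {
    const repId = c.deal.ownerId;
    if (!byRep[repId]) byRep[repId] = { deals: [], total: 0 };
    byRep[repId].deals.push(c);
    byRep[repId].total += c.breakdown.finalAmount;
  }
  return byRep;
}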

Rep Self-Service

Let reps verify their own commissions:

async function repCommissionQuery(repId, question) {
  const repProfile = await getRepProfile(repId);
  const recentDeals = await getRecentDeals(repId);
  const commissionHistory = await getCommissionHistory(repId);

  const prompt = `
A sales rep is asking about their commission.

REP PROFILE:
${JSON.stringify(repProfile)}

RECENT DEALS:
${JSON.stringify(recentDeals)}

COMMISSION HISTORY (Last 3 months):
${JSON.stringify(commissionHistory)}

QUESTION:
${question}

Answer clearly, show relevant calculations, reference
specific deals and comp plan provisions.
`;

  return await claude.answer(prompt);
}

Example queries:

  • "Why did the Acme deal only pay $2,400?"
  • "What's my commission if I close the pending BigCorp deal?"
  • "Am I on track for the 120% accelerator this quarter?"

Plan Modeling

Model comp plan changes before implementing:

async function modelPlanChange(proposedChange, historicalDeals) {
  const prompt = `
We're considering this comp plan change:
"${proposedChange}"

Analyze the impact using last quarter's deals:
${JSON.stringify(historicalDeals)}

Show:
1. Total payout difference (old plan vs new)
2. Impact per rep
3. Which deal types are affected most
4. Potential unintended consequences
5. Recommendation
`;

  return await claude.analyze(prompt);
}

Example: "What if we increase the new business rate from 10% to 12% but cap it at 110% attainment?"

Advanced Patterns

Multi-Currency Handling

CURRENCY RULES:
- All commissions paid in USD
- Deals closed in other currencies: use exchange rate at close date
- Exchange rate source: Company treasury rates (monthly)

EXAMPLE:
Deal closed in EUR: €50,000
Close date: February 15, 2026
Treasury rate (Feb 2026): 1.08 USD per EUR
USD equivalent: $54,000

Commission calculated on $54,000 USD value.

Territory Changes Mid-Quarter

TERRITORY CHANGE HANDLING:

Rep A had Territory X from Jan 1 - Feb 15
Rep B took over Territory X on Feb 16

Deal in Territory X closed March 10

RULE:
- If deal was in pipeline before transfer:
Original rep (A) gets full commission
- If deal entered pipeline after transfer:
New rep (B) gets full commission
- If deal was in active stage during transfer:
Split 50/50 or per documented agreement

This deal entered pipeline Jan 25, so Rep A gets full commission.

Clawback Automation

async function processClawbacks() {
  // Find churned customers
  const churns = await crm.getChurns({ lookbackMonths: 12 });

  for (const churn of churns) {
    // Find original commission
    const originalCommission = await findCommission(churn.dealId);

    // Calculate clawback
    const monthsSinceDeal = monthsBetween(
      originalCommission.closeDate,
      churn.churnDate
    );

    let clawbackRate;
    if (monthsSinceDeal <= 6) clawbackRate = 1.0;
    else if (monthsSinceDeal <= 12) clawbackRate = 0.5;
    else clawbackRate = 0;

    if (clawbackRate > 0) {
      await createClawback({
        repId: originalCommission.repId,
        dealId: churn.dealId,
        amount: originalCommission.amount * clawbackRate,
        reason: `Customer churned at ${monthsSinceDeal} months`
      });
    }
  }
}
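
monthsBetween is the only date math here. An approximate sketch; whether you want average-month or calendar-month precision at the 6- and 12-month boundaries is a policy choice worth settling with finance:

// Approximate whole months between two dates (30.44 days per average month)
function monthsBetween(startDate, endDate) {
  const ms = new Date(endDate) - new Date(startDate);
  return Math.floor(ms / (1000 * 60 * 60 * 24 * 30.44));
}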

Results to Expect

Teams using AI-powered commission calculation typically see:

| Metric | Before | After | Impact |
|--------|--------|-------|--------|
| Calculation time | 40+ hrs/month | 2-4 hrs/month | 90% reduction |
| Error rate | 8-12% | <1% | 90%+ fewer disputes |
| Rep trust score | 62% | 91% | 47% improvement |
| Time to resolve disputes | 3-5 days | Same day | 80% faster |
| Plan change implementation | 2-3 weeks | 1-2 days | 85% faster |

The biggest win: reps stop wasting mental energy worrying about comp. That energy goes back into selling.

Free Tool

Try our AI Lead Generator — find verified LinkedIn leads for any company instantly. No signup required.

Getting Started

  1. Document your current plan - Write it in plain English, not spreadsheet formulas

  2. Identify edge cases - What causes disputes today? Document the rules

  3. Start with one team - Run parallel calculations for one month

  4. Build rep self-service - Let them query their own commissions

  5. Add clawback automation - Remove manual tracking of churns


Ready to automate your commission complexity? Book a demo to see how MarketBetter helps RevOps teams operate at scale.

Related reading:

AI Sales Email Template Generator with Claude Code [2026]

· 6 min read

Your SDRs spend 3+ hours daily writing emails. Most of those emails get ignored because they're either generic templates or poorly personalized. Here's how to fix that with Claude Code.

AI email template generator workflow

The Email Personalization Problem

Every sales leader faces the same dilemma:

Option A: Generic templates. Fast to send, terrible results. Your prospects have seen "Hope this email finds you well" a thousand times.

Option B: Truly personalized emails. Great results, impossible to scale. Nobody has time to research every prospect and write custom copy.

Option C: AI-powered personalization. The best of both worlds—if you set it up right.

Most teams try Option C with basic ChatGPT prompts and get mediocre results. The emails sound robotic, miss key details, and still require heavy editing.

Claude Code changes this equation.

Why Claude Code for Email Generation?

Claude's 200K context window is the game-changer here. You can feed it:

  • Your entire email template library
  • Your company's voice guidelines
  • Industry-specific talking points
  • Competitor battle cards
  • Recent company news about the prospect
  • The prospect's LinkedIn activity

All at once. No truncating. No "summarize this first."

The result? Emails that sound like they were written by your best rep after 30 minutes of research—generated in 30 seconds.

Building Your Email Template Generator

Here's the architecture for a production-ready email generator:

Step 1: Create Your Base Templates

Start with 5-7 proven email structures:

templates/
├── cold_outreach_problem.md
├── cold_outreach_social_proof.md
├── trigger_event_response.md
├── competitor_displacement.md
├── referral_followup.md
├── webinar_followup.md
└── content_engagement.md

Each template should have:

  • Clear structure with variable placeholders
  • Multiple tone variations (formal, casual, direct)
  • Industry-specific versions when needed

Step 2: Build Your Context Library

This is where most teams fail. They give Claude a prompt and expect magic. Instead, build a comprehensive context system:

Company voice guide:

  • Preferred phrases and words to use
  • Words and phrases to avoid
  • Tone guidelines by industry
  • Signature style rules

Industry insights:

  • Pain points by vertical
  • Regulatory concerns by industry
  • Budget cycle timing
  • Decision-maker titles

Competitive intel:

  • Positioning against each competitor
  • Migration success stories
  • Feature comparison talking points

Step 3: The Generation Prompt

Here's a prompt structure that consistently produces high-quality emails:

You are an expert B2B sales copywriter for [Company].

CONTEXT:
[Insert full voice guide]
[Insert relevant industry insights]
[Insert competitive positioning if relevant]

PROSPECT INFORMATION:
Company: {company_name}
Industry: {industry}
Role: {prospect_role}
Recent News: {company_news}
LinkedIn Activity: {recent_posts}
Tech Stack: {known_tools}
Trigger Event: {trigger_if_any}

TEMPLATE TO USE:
{selected_template}

INSTRUCTIONS:
1. Personalize the template using specific details from the prospect info
2. Reference their recent news or LinkedIn activity naturally
3. Connect their industry pain points to our solution
4. Keep the email under 150 words
5. Use a {formal/casual/direct} tone
6. End with a clear, low-friction CTA

Generate the email:
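
Wired up, that prompt becomes a single Anthropic SDK call. A sketch, where buildPrompt is a hypothetical helper that fills the skeleton above from your stored templates (the model name matches the one used elsewhere in this stack):

const Anthropic = require('@anthropic-ai/sdk');

const claude = new Anthropic();

// Fill the prompt skeleton above with prospect data and generate one draft
async function generateEmail(prospect, template, voiceGuide) {
  const response = await claude.messages.create({
    model: 'claude-sonnet-4-20250514',
    max_tokens: 1024,
    messages: [{
      role: 'user',
      // buildPrompt (hypothetical) assembles the prompt structure shown above
      content: buildPrompt(voiceGuide, prospect, template)
    }]
  });
  return response.content[0].text;
}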

Step 4: Quality Control Layer

Don't ship emails directly to prospects. Add a scoring system:

const qualityChecks = [
  { name: 'length', check: email => email.length < 800 },
  { name: 'personalization', check: email => containsProspectDetails(email) },
  { name: 'cta_present', check: email => hasCallToAction(email) },
  { name: 'no_forbidden_words', check: email => !containsForbidden(email) },
  { name: 'sentiment_positive', check: email => scoreSentiment(email) > 0.5 }
];

Emails that fail any check go to a review queue. Emails that pass all checks can be sent automatically (or queued for approval, depending on your comfort level).
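
The check helpers are where your own rules live. Sketches of two of them below; the forbidden-phrase list is a stand-in for your voice guide's, and the check runner would close over the prospect record to call the two-argument version:

// Did the draft actually use prospect-specific details?
function containsProspectDetails(email, prospect) {
  const signals = [prospect.company_name, prospect.recent_news, prospect.trigger_if_any]
    .filter(Boolean);
  return signals.some(s => email.includes(s));
}

// Block phrases your team has banned (stand-in list — use your voice guide's)
const FORBIDDEN = ['hope this email finds you well', 'quick question', 'touching base'];

function containsForbidden(email) {
  const lower = email.toLowerCase();
  return FORBIDDEN.some(phrase => lower.includes(phrase));
}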

Real Results: Before and After

Manual vs AI email comparison

Before (Manual Process):

  • 3 hours/day on email writing
  • 50 emails sent
  • 12% open rate
  • 2% response rate

After (Claude Code Generator):

  • 30 minutes/day on review and approval
  • 200 emails sent
  • 31% open rate
  • 8% response rate

The open rate jump comes from better subject lines. Claude can generate and A/B test subject line variations at scale.

The response rate jump comes from genuine personalization. When you reference a prospect's actual LinkedIn post or recent company news, they notice.

Advanced: Multi-Language Support

If you sell internationally, Claude Code handles multilingual email generation natively:

Additional instruction: Generate this email in {language}.
Maintain cultural communication norms for {country}.

Our team uses this for EMEA outreach. The emails read naturally in German, French, and Spanish—not like machine translations.

Integration with Your Stack

The email generator becomes powerful when connected to your existing tools:

CRM Integration:

  • Pull prospect data directly from HubSpot/Salesforce
  • Push generated emails back as drafts
  • Track which templates perform best

Enrichment Integration:

  • Auto-fetch LinkedIn data via Proxycurl or similar
  • Pull company news from Crunchbase or news APIs
  • Get tech stack data from BuiltWith or similar

Sending Integration:

  • Queue emails in Outreach, Salesloft, or Apollo
  • Schedule based on timezone and optimal send times
  • Handle replies and out-of-office detection

Common Pitfalls to Avoid

Pitfall 1: Over-personalization
Don't reference everything you know about a prospect. One or two specific details is enough. More feels creepy.

Pitfall 2: Inconsistent voice
Review your first 100 generated emails manually. Train the model on corrections. Your voice guide will evolve.

Pitfall 3: Ignoring negative signals
If a prospect's LinkedIn shows they're job hunting or their company just had layoffs, don't send a sales email. Build filters for these cases.

Pitfall 4: Template fatigue
Rotate templates and refresh them monthly. Recipients and spam filters both notice patterns.

Getting Started Today

You don't need to build the full system to start seeing results:

  1. Day 1: Create your voice guide and 3 base templates
  2. Day 2: Set up Claude Code with your context
  3. Day 3: Generate 50 emails and manually review all of them
  4. Week 1: Refine prompts based on what you corrected
  5. Week 2: Increase volume, decrease manual review for high-scoring emails

Within a month, you'll have a system that generates better emails than your reps write manually—in a fraction of the time.

Free Tool

Try our AI Lead Generator — find verified LinkedIn leads for any company instantly. No signup required.

The MarketBetter Approach

We've built email personalization directly into MarketBetter's AI SDR platform. Your playbook tells SDRs exactly what to do next, and when it's time to email, the email is already drafted with full personalization.

No prompts to write. No templates to manage. Just review and send.

Ready to see personalized email generation in action? Book a demo and we'll show you how MarketBetter handles email personalization at scale.


Related reading: