How we think about AI SDRs

What an AI SDR Is For

A fully loaded human SDR in North America costs roughly $88,000 to $131,000 a year. Salary, commission, benefits, manager time, the tool stack they ride on. Set that next to what an AI SDR can deliver when it's built and run correctly:

| Metric | Human SDR | AI SDR |
| --- | --- | --- |
| Annual cost, fully loaded | $88K to $131K | $27K to $92K |
| Daily outbound volume | 50 to 150 touches | 1,000+ touches |
| Ramp time | 60 to 90 days | 1 to 2 weeks |
| Cost per lead (MarketsandMarkets) | ~$262 | ~$39 |
| Cold email reply rate | 3 to 7% | 3 to 7% |
| LinkedIn reply rate | 16 to 21% | 16 to 21% |

The headline is parity. Built right, an AI SDR matches the reply rates of a good human BDR on both cold email and LinkedIn. The leverage shows up everywhere else: cost per lead, ramp time, the number of well-targeted touches a team can run in a quarter without burning the list.

The typical AI SDR product in market delivers abysmal reply rates, burned sending domains, and prospect lists you can't approach again for six months. Spam 2.0: faster, cheaper, just as easy to ignore.

The receipts are public. 11x has been reported to churn 70 to 80% of its customers and is in disputes with named logos who say they never bought the product. Artisan was banned from LinkedIn for roughly two weeks around the end of 2025; one buyer documented 1,400 emails sent through it with zero responses, another reported 20,000 messages and 3,000 LinkedIn requests producing zero meetings. Regie.ai reviewers describe the messaging as bland, flag AI hallucinations as a brand risk, and report churn in the same neighborhood. As a category, managed AI SDR contracts cancel at roughly 50 to 70% inside ninety days. That's the floor on what a volume-first AI SDR delivers, and it's the part of the market we're not trying to be in.

The failure is structural. A volume-first model breaks in any market with a finite buyer pool, and every industry we work in at Quantonica has one. A few thousand active carbon credit buyers in one market. A few hundred industrial decarbonization leads in another. A small number of regulated operators in a third. Burn that list once with sloppy outreach and you don't get another swing. Reputation in a small market travels. The volume-as-product approach treats the prospect list as a renewable resource.

So the way we build an AI SDR inverts the default.

Intelligence over volume. The fewest touches needed to hit the pipeline target, sent to the people for whom the message is most likely to land. The AI is doing the part of the work where intelligence compounds. The BDR is doing the part where intelligence is born. Sustainability is the niche we know best, but the rule is the same in every market we serve: small TAM, long memory, replies matter more than sends.

That principle only works if the system can tell you which segment is replying and which message variant is working. So we built around an orchestration layer that runs parallel targeted experiments and surfaces the outcomes fast enough for a human to aim the next campaign before anyone gets burned.
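The core of that orchestration layer is unglamorous: roll up every outbound event by segment and message variant, and put the best-performing cell in front of the human first. A minimal sketch of that aggregation, assuming events are simple records with `segment`, `variant`, and `replied` fields (the field names and shape are illustrative, not our actual schema):

```python
from collections import defaultdict

def summarize_experiments(events):
    """Aggregate outbound events into per-(segment, variant) reply rates.

    Each event is a dict like {"segment": ..., "variant": ..., "replied": bool}.
    Returns (segment, variant, sends, replies, reply_rate) tuples,
    sorted so the best-performing cell comes first.
    """
    counts = defaultdict(lambda: [0, 0])  # (segment, variant) -> [sends, replies]
    for e in events:
        key = (e["segment"], e["variant"])
        counts[key][0] += 1
        counts[key][1] += int(e["replied"])
    rows = [
        (seg, var, sends, replies, replies / sends)
        for (seg, var), (sends, replies) in counts.items()
    ]
    return sorted(rows, key=lambda r: r[4], reverse=True)
```

The point of surfacing this per cell, rather than as one blended reply rate, is that a blended number hides exactly the signal a BDR needs: which segment to double down on and which variant to retire.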

Two hard boundaries hold the model together.

The first is the reply boundary. The AI handles outbound. The moment a prospect replies, a human takes over. We don't run AI-to-prospect conversations on the inbound side. Generic AI replies miss the nuance a good BDR catches in the first three sentences, and the goodwill cost of a wrong reply in a small market is permanent.

The second is the voice boundary. For phone outreach, the AI selects the calls worth making and dials them. The BDR talks. We don't automate the conversation itself. The technology isn't mature enough and a synthetic call that goes wrong gets remembered, told, and retold.

With those guardrails in place, the BDR's day stops being the dial-grind. Their time goes to the work that compounds. Research a specific account deeply enough to write something the buyer hasn't seen ten times this quarter. Show up at a CDR conference prepared, with intel on who's there, what they're working on, which retirements they've signed, which project releases are imminent. Make ten well-researched calls a day instead of fifty cold ones. Build a LinkedIn presence that compounds week after week. Pull internal account signals into the next campaign brief. Bring the AI a sharper hypothesis for the next experiment cycle.

Prospecting is a search problem. Each experiment cycle produces a campaign more targeted than the last. Reply rates climb. The numbers in the table are what you get once that loop has been running in a market for a few cycles. They aren't what you get on day one, and they aren't what you get if you treat the AI as a volume amplifier.
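Treating each (segment, variant) pair as an arm in a search problem, the cycle-over-cycle improvement can be sketched as a simple explore/exploit policy. This is a toy illustration of the idea, not our production selection logic; the smoothing and the `explore` rate are arbitrary assumptions for the sketch:

```python
import random

def next_campaign(history, arms, explore=0.2, rng=random):
    """Pick the (segment, variant) arm for the next campaign cycle.

    history maps arm -> (sends, replies). With probability `explore`
    we try a random arm so the system keeps learning; otherwise we
    exploit the arm with the best smoothed reply rate (Laplace +1/+2
    smoothing avoids division by zero and softens tiny samples).
    """
    if rng.random() < explore:
        return rng.choice(arms)

    def smoothed(arm):
        sends, replies = history.get(arm, (0, 0))
        return (replies + 1) / (sends + 2)

    return max(arms, key=smoothed)
```

The design choice worth noting is the nonzero `explore` rate: pure exploitation in a small market locks the campaign onto whatever worked first, while a small exploration budget is what keeps producing the sharper hypotheses the BDR feeds back into the next cycle.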

A common misread of all this is that we're trying to replace BDR teams. We're not. The contest worth winning is between a BDR team without intelligence underneath and a BDR team with it. We're building software that handles the prospecting tax, surfaces real-time signal from parallel experiments, and respects the boundaries where humans need to be present.

So when a buyer asks how we compare to Apollo, Sales Navigator, or just pointing ChatGPT at their list, the honest answer is that we're not the same kind of solution. Apollo and Sales Navigator are databases. ChatGPT and Claude are reasoning engines with no outbound stack underneath them. What we build is the layer that ties data, reasoning, and orchestration together so a BDR team can scale. The right comparison was never AI versus human. The real question is what your team does once the system is running underneath them.

The teams that win are the ones whose BDRs spend their hours at conferences with intel in hand, on the phone with the ten accounts worth calling today, building a LinkedIn presence buyers remember, and shaping the next campaign from what the last one taught them. The AI keeps the replies coming in the background. That's the game.