Support in 2030: the customer won't ask you - their agent will
December 31, 2025 · 6 min read
If my phone is overheating, I don't want to bounce between a forum thread, a generic chatbot answer, and a support portal that asks me to "clear cache and restart." I want my assistant to say: "I can run diagnostics, check warranty, and book a repair - want me to do it?"
Here's my bet for the back half of the decade: support becomes agent-to-agent by default. Humans don't disappear. But the first interaction stops being human-to-human - because customers start inside a general-purpose assistant, and that assistant becomes the new front door.
1) The front door is moving off your website - whether you like it or not
Customers are already treating third-party assistants as the "first line." Gartner predicts that by 2027, 40% of customer service issues will be fully resolved by unofficial third-party GenAI tools.
And they go further: by 2028, 70% of customer service journeys will begin - and be resolved - inside conversational, third-party assistants built into mobile devices.
That's not a new channel. That's a new operating system for support.
The uncomfortable part: if the assistant gives the wrong answer, the customer can churn without you ever seeing the interaction. Gartner explicitly calls out the risk that customers could defect because of incorrect third-party answers "without the company ever knowing."
So the strategic question isn't "how do we improve our chatbot?" It's:
Do you want to be the source of truth… or the last resort?
2) "Ticket volume down" is a trap - interaction volume goes up, work gets atomized
The lazy take is "AI will reduce tickets." My take is more specific:
- Total interactions increase because asking becomes nearly free.
- Human-handled work decreases because much of that volume gets auto-resolved or auto-routed.
Zendesk's Benchmark data says 83% of CX leaders expect customer self-service interactions to increase 5× thanks to AI.
So yes - your ticket queue might shrink. But your support surface area expands: more self-serve attempts, more micro-questions, more proactive nudges, more "my agent contacted your agent" moments.
This is why I think the future isn't just deflection. It's orchestration: verified answers, permissions, routing, and escalation across a much larger set of "support-like" interactions than we used to count.
And it's also why "omnichannel" gets rewritten. Gartner predicts that by 2028, 30% of Fortune 500 companies will offer service only through a single AI-enabled channel spanning text, image, and sound.
I read that less as "one channel," and more as: one continuous thread, multiple modalities, minimal resets.
3) The winning stack becomes "agent-ready," not "ticket-ready"
Most support systems were built for a simple flow:
human reports issue → human agent triages → human resolves
Agent-first flips the primary user. In an agent-to-agent world, your "user" is often software. That changes what the stack needs to expose.
In my opinion, agent-ready support has three primitives:
1) Verifiable truth. Not "a help center article." A knowledge system that's current, structured, versioned - and can be cited.
2) Authenticated actions. The assistant shouldn't just explain steps; it should execute them: reset access, update an address, run diagnostics, trigger a refund, schedule a repair.
3) Deterministic escalation. When automation fails, escalation can't be "submit a ticket and wait." It needs to be a clean handoff to the right specialist, carrying context, identity, logs, and the steps already attempted.
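To make the third primitive concrete, here's a minimal sketch of what a context-rich handoff payload might look like. Every field and value is an illustrative assumption, not any vendor's actual schema:

```python
from dataclasses import dataclass, field, asdict

@dataclass
class EscalationHandoff:
    """Illustrative payload an assistant could send when automation fails.

    All field names here are invented for this sketch, not a real schema.
    """
    customer_id: str      # verified identity, not a free-text name
    issue_summary: str    # what the customer (or their agent) asked for
    attempted_steps: list[str] = field(default_factory=list)  # what automation already tried
    diagnostics: dict = field(default_factory=dict)           # logs and readings gathered so far
    route_to: str = "unassigned"  # deterministic target queue or specialist

handoff = EscalationHandoff(
    customer_id="cust_4821",
    issue_summary="Phone overheating after OS update",
    attempted_steps=["ran thermal diagnostics", "checked warranty status"],
    diagnostics={"battery_temp_c": 47.5, "warranty": "active"},
    route_to="hardware-repairs",
)

print(asdict(handoff)["route_to"])
```

The point of the shape: the specialist receiving this starts from the automation's last known state instead of asking the customer to repeat everything.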
This is also where general-purpose assistants stop being "chatbots" and start being routers.
OpenAI has explicitly moved in this direction: connectors are now "apps" in ChatGPT, with a unified "app directory" experience.
And the mechanism is not hypothetical. OpenAI's docs state that GPT Actions let ChatGPT interact with external applications via REST APIs, converting natural language into the JSON schema required for an API call.
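A toy version of that contract, with an entirely hypothetical `book_repair` endpoint: the action is described by a JSON Schema, and the model's job is to turn the user's sentence into a body that conforms to it.

```python
import json

# Hypothetical parameter schema for a repair-booking endpoint - the kind of
# JSON Schema an action definition carries. Endpoint and fields are invented.
BOOK_REPAIR_SCHEMA = {
    "type": "object",
    "properties": {
        "device_id": {"type": "string"},
        "issue": {"type": "string"},
        "preferred_date": {"type": "string", "format": "date"},
    },
    "required": ["device_id", "issue"],
}

# What the model effectively produces from "my phone is overheating,
# book me a repair on Friday": a JSON body matching the schema above.
call_body = {
    "device_id": "dev_991",
    "issue": "overheating",
    "preferred_date": "2026-01-09",
}

# Minimal check that the body carries the schema's required fields.
missing = [k for k in BOOK_REPAIR_SCHEMA["required"] if k not in call_body]
assert not missing, f"missing required fields: {missing}"
print(json.dumps(call_body, indent=2))
```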
Meanwhile, the integration layer is standardizing. Anthropic introduced MCP (Model Context Protocol) as an open standard for secure, two-way connections between data sources and AI tools.
OpenAI's own release notes show MCP connectors rolling into ChatGPT (including partner-built connectors reviewed by OpenAI), and also describe custom MCP connectors for company knowledge.
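Under the hood, MCP messages are JSON-RPC 2.0: a client invokes a server-side tool with a `tools/call` request. A sketch of that wire message for a hypothetical `check_warranty` tool (the tool name and arguments are invented for illustration):

```python
import json

# An MCP tools/call request is a JSON-RPC 2.0 message. The tool name and
# its arguments ("check_warranty", "serial") are assumptions for this sketch.
request = {
    "jsonrpc": "2.0",
    "id": 1,
    "method": "tools/call",
    "params": {
        "name": "check_warranty",
        "arguments": {"serial": "SN-12345"},
    },
}

# Round-trip through the wire format to show it's plain JSON.
wire = json.dumps(request)
decoded = json.loads(wire)
print(decoded["method"])
```

The interesting part isn't the syntax - it's that any assistant speaking this protocol can call your support tools without a bespoke integration.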
Put that together and the direction is obvious:
Support becomes something assistants can call - not something customers must navigate.
4) Pricing follows the new unit of work: digital labor → outcomes
This is the part I feel most strongly about: pricing will drift away from feature checklists and toward "pay for work done" and "pay for outcome achieved."
Feature-based pricing made sense when the scarce resource was:
- seats (human access), and
- workflow features (routing, SLAs, macros)
In an agent-first world, the scarce resource becomes digital labor: tasks that used to require humans.
So pricing starts to map to the "replacement unit":
- atomic activities (classification, summarization, escalation, workflow steps)
- full outcomes (resolution, refund issued, replacement shipped, appointment booked)
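A back-of-envelope comparison of the two replacement units over a month of traffic makes the pricing difference tangible. All rates and volumes below are illustrative assumptions, not any vendor's published model:

```python
# Two ways to bill the same month of support traffic. Numbers are invented.
PER_ACTIVITY_RATE = 0.05   # $ per atomic step (classify, summarize, route, ...)
PER_OUTCOME_RATE = 0.99    # $ per fully resolved issue (illustrative)

interactions = 10_000
steps_per_interaction = 4   # avg atomic activities each interaction triggers
resolution_rate = 0.55      # share of interactions resolved autonomously

# Per-activity billing charges for every step, resolved or not.
activity_bill = interactions * steps_per_interaction * PER_ACTIVITY_RATE

# Per-outcome billing charges only when the issue is actually resolved.
outcome_bill = interactions * resolution_rate * PER_OUTCOME_RATE

print(f"per-activity: ${activity_bill:,.2f}")
print(f"per-outcome:  ${outcome_bill:,.2f}")
```

Which unit is cheaper depends entirely on the resolution rate - which is exactly why vendors confident in their automation push toward outcomes.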
We're already seeing real market movement:
- Intercom's Fin is explicitly priced at $0.99 per resolution ("pay when Fin achieves the outcome").
- Zendesk introduced outcome-based pricing tied to issues resolved autonomously by AI agents.
- Salesforce lists Agentforce at $2 per conversation, and also introduced Flex Credits to "pay for the exact actions Agentforce performs."
My prediction: the market settles into hybrids:
- seat pricing for humans,
- usage pricing for automation,
- outcome pricing where the vendor is confident they can measure value cleanly.
But outcome-based pricing has a built-in knife fight: what counts as a "resolution"? Intercom's own documentation frames its definition clearly, but definitions will vary across vendors - and enterprises will demand auditability once "resolved" becomes a billable event.
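If "resolved" is a billable event, every billed resolution should carry the definition it was billed under and the evidence behind it. A minimal sketch of such an audit record - the schema is hypothetical, invented for this example:

```python
import hashlib
import json
from datetime import datetime, timezone

def resolution_event(conversation_id: str, definition: str, evidence: list[str]) -> dict:
    """Build an auditable 'billable resolution' record (hypothetical schema)."""
    event = {
        "conversation_id": conversation_id,
        "resolved_under": definition,   # which contractual definition applied
        "evidence": evidence,           # transcript refs, customer confirmation, etc.
        "billed_at": datetime.now(timezone.utc).isoformat(),
    }
    # A content hash lets vendor and customer verify the record wasn't
    # altered after the fact during a billing dispute.
    payload = json.dumps(
        {k: event[k] for k in ("conversation_id", "resolved_under", "evidence")},
        sort_keys=True,
    )
    event["checksum"] = hashlib.sha256(payload.encode()).hexdigest()
    return event

ev = resolution_event(
    "conv_77",
    "customer confirmed or no follow-up within 24h",
    ["msg_901", "msg_944"],
)
print(ev["resolved_under"])
```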
5) Reality check: a lot of "agentic" projects will face-plant
I'm bullish, but I'm not naive.
Gartner predicts over 40% of agentic AI projects will be canceled by the end of 2027 due to escalating costs, unclear business value, or inadequate risk controls.
So I'm not arguing "agents will magically solve support." I'm arguing something narrower and more practical:
- the interface is shifting to third-party assistants, and
- the winning companies will be the ones that expose boring, reliable, tool-connected support with great escalation.
The playbook I'd run (if I owned support for the next five years)
- Make knowledge machine-consumable (structured, current, citeable).
- Expose actions, not just answers (real workflows behind authentication).
- Treat escalation like a product (context-rich, deterministic handoff).
- Measure outcomes (resolution, time-to-resolution, error cost - not vanity deflection).
- Procure like you're buying digital labor (per action / per conversation / per outcome).
If you do that, support stops being "tickets + humans." It becomes infrastructure: humans, models, tools, and policies cooperating to reduce customer effort - even when the "customer" is an agent.
And that's the real competition in 2026–2030: not "who has the best chatbot," but who turns support into a trustworthy, callable layer that other agents can actually use.