The Death of the Copilot

Key Takeaways

Gartner predicts over half of enterprises will stop paying for copilots by 2028 — shifting to platforms that deliver workflow results, not suggestions. Vendors that bolt on AI face up to 80% margin compression by 2030.

The copilot model has a trust problem. GitHub will auto-opt-in user data for AI training starting April 24, 2026. Every time your team uses assistive AI, they’re potentially sharing proprietary code and business logic with a third-party training pipeline.

My take: The copilot was a stepping stone, not the destination. If your copilot investment can’t show business outcomes and can’t guarantee data privacy, it’s time to rethink — not double down.

The Prediction That Should Worry Every SaaS Vendor

On April 2, 2026, Gartner dropped a prediction that should worry every software company selling AI as a feature: by 2028, over half of enterprises will stop paying for assistive AI — copilots, smart advisors, AI-enhanced search — and shift to platforms that commit to workflow results.

Think about that. The AI most vendors are selling today? Enterprises are going to stop paying for it.

The follow-up is worse: by 2030, software companies that bolt AI onto legacy apps instead of redesigning for agentic execution will face margin compression of up to 80%.

Not a gradual shift. A structural break.

The Trust Problem Nobody Talks About

The ROI numbers are brutal: Gartner surveyed 782 I&O leaders and found that only 28% of AI use cases actually meet ROI expectations, and 20% fail outright. But beyond the ROI question, something else is going on. Enterprises are feeding proprietary data into AI systems they don’t control.

Most AI vendors retain the right to train on your data by default. GitHub just made this painfully clear: starting April 24, 2026, Copilot uses your interaction data (prompts, code, outputs) to train its models unless you manually opt out. You’re in by default. Millions of developers are affected.

This isn’t just GitHub. It’s the whole copilot model: every time your team uses assistive AI, they’re potentially feeding proprietary code, customer data, and business logic into someone else’s training pipeline. Gartner predicts 50% of organizations will adopt zero-trust data governance by 2028 specifically because of this.

I’ve watched this play out in real time.

From Assistance to Execution

Here’s what Gartner actually means: it’s not about whether your product has AI. It’s about whether that AI has “delegated authority to trigger actions across enterprise systems within policy, governance, and identity constraints.”

Put simply:

| Assistive AI (Copilot) | Execution AI (Agentic) |
| --- | --- |
| Suggests a response | Sends the response with trust references |
| Drafts a report | Generates, reviews, and distributes the report with a human in the loop |
| Recommends an action | Takes the action within predefined policy guardrails and observability |
| Helps you work faster | Does the work; you supervise |
| Measured by adoption | Measured by outcomes |

Your role changes from “doing the work with AI help” to “supervising AI that does the work.” Gartner calls this the “Agent Steward” — you set the policies and governance, the AI executes within those boundaries.
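In code terms, the Agent Steward pattern is roughly “policy in, execution out.” Here’s a minimal sketch — every name and policy field is hypothetical, invented purely for illustration: the agent proposes an action, a policy layer decides whether it runs autonomously or escalates to a human, and every decision lands in an audit log.

```python
from datetime import datetime, timezone

# Hypothetical policy table -- names and fields are invented for illustration.
# The human steward owns this table; the agent cannot edit it.
POLICY = {
    "send_customer_reply": {"autonomous": True},
    "issue_refund":        {"autonomous": True, "max_refund_usd": 100},
    "delete_account":      {"autonomous": False},  # always needs a human
}

AUDIT_LOG = []  # every decision lands here so the steward can review it

def execute(action: str, params: dict) -> str:
    """Run an agent-proposed action only if policy allows it; else escalate."""
    rule = POLICY.get(action)
    decision = "escalated"  # default: unknown or restricted actions go to a human
    if rule and rule.get("autonomous"):
        limit = rule.get("max_refund_usd")
        if limit is None or params.get("amount_usd", 0) <= limit:
            decision = "executed"
    AUDIT_LOG.append({
        "time": datetime.now(timezone.utc).isoformat(),
        "action": action,
        "params": params,
        "decision": decision,
    })
    return decision

print(execute("issue_refund", {"amount_usd": 40}))    # within limit -> executed
print(execute("issue_refund", {"amount_usd": 5000}))  # over limit -> escalated
print(execute("delete_account", {"user": "u123"}))    # never autonomous -> escalated
```

The point is the shape, not the toy logic: the human sets `POLICY` and reviews `AUDIT_LOG`; the agent executes only inside those boundaries.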

If you read last week’s piece on measuring AI ROI, this is where it clicks. Copilots are stuck on Layer 1 (efficiency — “are we saving time?”). Execution platforms play on Layer 2 and 3 — revenue and strategic impact.

Navigating the shift from assistive to execution AI

Why Most Vendors Aren’t Ready

Look at the numbers. Microsoft’s Copilot for M365 — the biggest assistive AI bet in the enterprise — has 15 million paid seats after two years. That’s 3.3% of 450 million addressable users. Teams hit 300 million in under three years. Without structured change management, enterprise Copilot deployments show 30-50% active usage at six months. Half the licenses sitting idle.

This isn’t a Microsoft problem. It’s what happens when you bolt AI onto existing workflows and hope people use it.

The winners will embed agent orchestration into systems of record, build policy-aware APIs, and handle identity and audit at the control plane. Everyone else who just slapped a copilot on top? Up to 80% margin compression by 2030.

If you’re evaluating vendors, stop asking “does this have AI?” Ask: “can this execute workflows on its own, within our governance framework?”

What This Means for Your AI Strategy

If you’re a CTO or CDO reading this, three implications:

Rethink your copilot investment. Plateaued usage plus no business outcomes is a product-market fit problem, not an adoption problem. A copilot is good for a quick POC. It’s not the destination.

Evaluate vendors on execution authority, not AI features. Can the platform act on your behalf? Does it respect your identity model? Can you audit what it did and why? Here’s the test: if the answer to every question is “it suggests, you click” — you’re paying for assistive, not execution. That distinction will matter a lot more when Gartner’s prediction hits and your competitors are running autonomous workflows while you’re still clicking “accept suggestion.”

Prepare for the Agent Steward role. Different work, not less work. Your people will supervise AI execution, set policies, handle exceptions, make the judgment calls AI can’t. Skill shift, not headcount reduction.
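The “execution authority” test from implication two can be made concrete as a tiny scorecard sketch. The field names below are hypothetical — not any real product’s API — they just show how the three questions separate assistive from execution platforms.

```python
# Hypothetical capability record for a vendor evaluation -- field names are
# illustrative only, not a real product's API.
def classify(vendor: dict) -> str:
    """Three-question test: can it act, within our identity model,
    with an auditable trail? Anything less is assistive AI."""
    if (vendor.get("executes_actions")
            and vendor.get("uses_our_identity_model")
            and vendor.get("audit_trail")):
        return "execution"
    return "assistive"

copilot_style = {"executes_actions": False, "uses_our_identity_model": True, "audit_trail": True}
agentic_style = {"executes_actions": True, "uses_our_identity_model": True, "audit_trail": True}

print(classify(copilot_style))  # "it suggests, you click" -> assistive
print(classify(agentic_style))  # delegated, governed, auditable -> execution
```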

The copilot got us comfortable with AI at work. That was its job. But Gartner is saying the next step is here — from AI that assists to AI that executes.

The companies that thrive won’t be the ones with the most AI features. They’ll be the ones that redesigned their workflows around delegated and integrated execution — with humans steering, not typing.


Read more: Gartner — Enterprises to Abandon Assistive AI by 2028 (Apr 2, 2026), Gartner — Only 28% of AI Projects Deliver ROI (Apr 7, 2026), Gartner — Zero-Trust Data Governance (Jan 2026), Heise — GitHub Copilot Training Data Policy