March 5, 2026
When Agents Do the Work, Who Captures the Value?

The agentic AI transition doesn't destroy enterprise software value. It redistributes it.

This is the third post in a three-part series examining the advent of agentic AI and the impact agents could have on the SaaS industry. In the first post, I laid out a simple framework for thinking about SaaS moats — data and system of record, workflow and process complexity, and user interface and engagement. I argued that not all of them are equally durable in an agentic AI world.

In the second post, I looked at ten of the largest SaaS companies through that lens. The observation that kept surfacing was this: the companies with the most defensible positions tended to be the ones whose value lives underneath the interface — in the data layer, in the governance layer, in the infrastructure that agents must run on. The ones on the thinnest ice were the ones whose stickiness has historically come from the experience of using the product rather than the irreplaceability of what lives inside it.

In this final post, I want to zoom out further. Because the question of which individual SaaS companies survive the agentic transition is really just one part of a bigger question: in a world where AI agents are doing more and more of the actual work of enterprise operations, where does the value pool ultimately collect? Who ends up capturing the most — and why?

I don't think there are clean answers to this yet. But I do think there are some useful ways to think about it. Here's how I'm approaching it.

The Infrastructure Floor: Hyperscalers

Let's start with the most structurally obvious beneficiaries: Microsoft Azure, Amazon Web Services, and Google Cloud. Every AI agent that gets deployed in an enterprise runs on compute somewhere. Every vector database that stores embeddings lives somewhere. Every LLM inference call gets routed through an API that someone is hosting. That someone, in the vast majority of enterprise deployments, is one of these three.

What makes this position interesting isn't just that they provide the compute. It's that they are increasingly providing the full stack — the models, the orchestration frameworks, the agent-building environments, and the monitoring and governance tooling. Microsoft's Copilot Studio, Google's Agentspace, and Amazon's Bedrock are all, in different ways, attempts to become the platform on which enterprise agent workloads are built and managed. If they succeed, the economic relationship between enterprises and their SaaS vendors gets mediated through a hyperscaler layer that captures a meaningful portion of the value.

That said, I think it's worth being careful about assuming hyperscaler dominance is inevitable. Enterprises have learned hard lessons about hyperscaler lock-in. Multi-cloud strategies are real. And there are categories of enterprise data — financial records, HR data, regulatory filings — where enterprises may be quite reluctant to have everything run through a single cloud provider's AI stack. The infrastructure floor is real. Whether it becomes the entire building is a different question.

A thought on this: The hyperscalers have the most durable structural position in the agentic value chain — but how much of the application-layer value they will eventually capture remains a genuinely open question.

The Intelligence Layer: LLM Providers

The large language model providers — OpenAI, Anthropic, Google DeepMind, Meta AI, Mistral — sit at the heart of the agentic AI conversation. The models they build are the reasoning engines that agents run on. Without a capable LLM underneath, there is no meaningful agentic behavior. That gives them an extremely central role in the architecture.

But centrality in the architecture doesn't automatically translate into durable economic capture. One of the most interesting dynamics playing out right now is model commoditization — the pace at which open-source models (Llama, Mistral, Falcon) are closing the capability gap with proprietary frontier models. If enterprise-grade reasoning becomes available at near-zero marginal cost through open-source models fine-tuned on proprietary data, the economic position of closed LLM providers becomes much less certain. The intelligence layer may turn out to be infrastructure rather than a sustainable moat in itself.

The more compelling long-term play for LLM providers may actually be in the agentic orchestration layer — the frameworks, tools, and protocols through which agents are built, communicate, and operate safely. Anthropic's Model Context Protocol, OpenAI's Agents SDK, and Google's Agent2Agent (A2A) protocol are all early bets on becoming the connective tissue of the agent economy. Whether any of them achieve the kind of network effects that would make them sticky is worth watching closely.

A thought on this: The intelligence layer is critical but may face commoditization pressure over time. The more interesting value capture question for LLM providers may be in orchestration standards and protocols rather than raw model inference.

The New Challengers: Vertical AI Native Companies

This is the category I find most intellectually interesting to think about — and perhaps the one that gets the least attention in the mainstream conversation about AI and enterprise software. Vertical AI native companies are startups and scale-ups that are building purpose-built AI products for specific enterprise workflows, with no legacy architecture to defend and no installed base to protect. They can design for an agentic world from first principles.

In legal, Ironclad and Harvey are rebuilding contract review and legal research as agent-native workflows. In finance, Ramp and Brex are building AI-first expense and spend management. In healthcare operations, Abridge and Nabla are reimagining clinical documentation. In coding and software development, the GitHub Copilot competitors are multiplying rapidly. In each case, the value proposition is essentially: we can do what the legacy SaaS application does, but faster, cheaper, and natively integrated with AI reasoning — without the interface overhead that nobody needs anymore.

The risk for these companies is the same risk that every enterprise software challenger faces: getting to scale before the incumbents wake up and bundle a good-enough version of what you're doing into the existing contract. Salesforce, ServiceNow, and Workday are all making significant acquisitions and investments in this direction. But the incumbents have a real disadvantage too: their architectures were designed around human users, and rewriting them from the ground up is genuinely hard to do while simultaneously running a large, complex business. The vertical AI natives have a real window. How long it stays open is the question.

A thought on this: Vertical AI native companies may represent the most underappreciated threat to incumbent SaaS — not because they have bigger balance sheets, but because they have no legacy to protect and can think from first principles about what enterprise workflows should look like when humans are no longer in the loop.

The Quiet Winners: Data Aggregators and Infrastructure

If I had to identify the category most likely to quietly and durably capture value in the agentic era, it might actually be the companies that aggregate, clean, govern, and provide access to enterprise data at scale. Not the applications that use the data. Not the models that reason over the data. The infrastructure that makes the data accessible, trustworthy, and auditable in the first place.

Snowflake, Databricks, dbt Labs, and their peers in the modern data stack sit in a fascinating position. Every AI agent that needs to query enterprise data needs a reliable, governed data layer to read from and write to. The messier and more distributed enterprise data is, the more valuable a clean, well-governed data infrastructure becomes. And enterprise data is, almost universally, messy and distributed. The data cleanup and governance work required before most enterprises can meaningfully deploy agentic AI is enormous — and it creates a long runway of value creation for the companies best positioned to do it.

What's also interesting is the role that data infrastructure companies are beginning to play in the AI model ecosystem itself. Snowflake's partnerships with both OpenAI and Anthropic aren't just about giving customers access to models inside a data environment. They're about becoming the place where enterprise AI runs — where models have governed access to the data they need, within security and compliance perimeters that regulated enterprises require. That's a different and potentially more durable value proposition than being a fast query engine.

A thought on this: Data infrastructure may be the most structurally underrated category in the agentic value conversation. Agents need data more than they need interfaces — and the companies that make enterprise data reliable, governed, and accessible may end up capturing more long-term value than many of the applications that sit on top of them.

The Often Forgotten Piece: The Human Expertise Layer

One of the things I keep coming back to as I think through this transition is how easy it is to talk about AI agents replacing human work — and how much more complicated the reality is likely to be. Yes, agents can execute routine workflows. Yes, they can query data, draft communications, update records, and close loops on repetitive tasks. But the domains where enterprise software creates the most value are also typically the domains with the most regulatory complexity, the most ambiguity, and the highest cost of error.

Intuit's model — 13,000 human experts working alongside AI, handling the edge cases and providing the accountability layer — strikes me as a thoughtful template for how this plays out in regulated domains. Not AI replacing humans, but AI handling the high-volume routine work while humans focus on the genuinely complex, consequential, and judgment-dependent cases. Companies and platforms that can facilitate that hybrid model well may actually have a more defensible position than those trying to build fully autonomous end-to-end agentic workflows, particularly in areas like tax, legal, healthcare, and financial advice.

There is also a quieter point worth making: the organizations that will be most successful in the agentic transition are probably the ones with the deepest domain expertise embedded in their people and processes — not just the ones with the best AI technology. AI amplifies what already exists. If what already exists is deep expertise and clear process, AI makes that organization dramatically more capable. If what exists is confusion and ambiguity, AI will scale the confusion. That's worth sitting with.

A thought on this: The human expertise layer isn't going away — it's being repositioned. The most interesting enterprise AI deployments aren't the fully autonomous ones, but the ones that combine agent speed and scale with human judgment in the moments that matter most.

Now What? 

I started this series with a simple question: If AI agents can do much of the work that enterprise software was built to organize, how does that impact the value proposition of SaaS? Three posts in, I think the answer to that question is changing, at different speeds for different companies and different categories.

What I keep coming back to is that the agentic transition doesn't destroy value in enterprise software so much as it redistributes it. The value that lived in the interface — in the thoughtful design of how humans navigate complex work — moves down the stack, into the data, the governance, the orchestration, and the domain expertise that makes agents actually useful rather than just technically impressive.

The companies that seem best positioned for this redistribution are the ones that always had their real value in those deeper layers, even if the interface was what customers interacted with most visibly. The ones with the hardest road ahead are the ones where the interface genuinely was the product — and where there isn't an obvious, deep, irreplaceable asset underneath it.

I want to be clear that these are observations from someone who is genuinely curious about this space — not verdicts from someone who has all the answers. I've worked in this industry for a while and I'm fascinated by the pace of change, especially over the last 18 months. I've learned that the confident predictions almost always miss something important, and the right response to a transition this large and this fast is to stay curious, stay humble, and ask better questions.

Rotimi Olumide

Thought leader, speaker, multifaceted business leader with a successful track record that combines consumer & product marketing, strategic business planning, creative design and product management experience.
