Why a framework, not a list

There are roughly 200 vendors selling agentic AI features for marketing in 2026. A list of which ones to deploy goes stale before it is published. A vendor is also not the right unit of analysis for the decision — the same vendor may have one feature worth deploying and three features worth ignoring.

The decision framework below applies to capabilities, not vendors. Each capability gets evaluated against three filters. The filters are independent — a capability can pass one and fail another — so each filter's answer informs the decision differently.

The framework describes how we make these recommendations to clients. It is not a proprietary method. Use it; modify it; ignore it if you have a better one.

Filter one: is this category still defensible?

Some AI capability categories are commoditizing. The model behind them is becoming a utility — every major foundation-model provider can produce equivalent quality, and the differentiation between vendors is on UI and workflow integration rather than on the AI itself.

For commoditizing categories, the right move is usually to buy whichever vendor offers the best workflow fit and minimal switching cost, and to plan for the line item to shrink over time. Do not build in commoditizing categories. The capability your engineering team builds in 2026 will be matched by every off-the-shelf product in 2027, and your build investment will be a sunk cost.

For defensible categories, the math is different. A defensible category is one where the value comes from data, integration depth, or workflow specificity that a generic vendor cannot match. In defensible categories, building can make sense — not because you are competing with the foundation-model providers, but because the vendor product cannot reach the depth your specific data and workflow require.

In 2026, our read is that copy generation, image generation, and basic chat are commoditizing; workflow-specific orchestration, retrieval over proprietary data, and decision agents tied to brand-specific KPIs are defensible.

Filter two: does your data substrate support it?

AI features perform on the data they have access to. Most enterprise data substrates are not in shape to support production AI in the categories that matter.

The question to ask before evaluating any capability: does this capability require data we have, in the shape it needs?

Six capability categories vs. three filters. Each capability is evaluated for defensibility, substrate fit, and operational readiness.

Category                  Defensible  Substrate  Operate  Verdict
Copy generation           N           Y          Y        BUY
Image generation          N           Y          Y        BUY
Chat & customer service   ~           N          N        WAIT
Decision agents (KPI)     Y           N          N        INVEST FIRST
Workflow orchestration    Y           ~          ~        BUILD
Personalization agents    Y           N          N        INVEST FIRST

Y = yes · N = no · ~ = partial. "Invest first" means: the capability is valuable, but build the data and ops substrate before deploying.
Fig 1. The three filters mapped against six common 2026 capability categories. Each row shows whether the category is defensible, whether typical enterprise data substrates support it, and whether typical enterprise teams can operate it. "Yes/yes/yes" means buy now. Any "no" means either wait or invest in the missing prerequisite first.
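The decision rule the figure encodes can be sketched as a small function. This is an illustrative sketch of the framework's logic, not a vendor tool; the `Answer` enum and `verdict` function are hypothetical names introduced here.

```python
from enum import Enum

class Answer(Enum):
    YES = "Y"
    NO = "N"
    PARTIAL = "~"

def verdict(defensible: Answer, substrate: Answer, operate: Answer) -> str:
    """Map the three filter answers to a recommendation, per Fig 1."""
    # A hard "no" on substrate or operations gates deployment: a defensible
    # capability is worth investing in first; a commoditizing one is not.
    if Answer.NO in (substrate, operate):
        return "INVEST FIRST" if defensible == Answer.YES else "WAIT"
    # Substrate and operations at least partially ready: build only where
    # the category is defensible; otherwise buy the best workflow fit.
    if defensible == Answer.YES:
        return "BUILD"
    return "BUY"
```

Applied to the six rows in the figure, this reproduces the same verdicts — for example, copy generation (`N, Y, Y`) yields `"BUY"` and decision agents (`Y, N, N`) yield `"INVEST FIRST"`.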

For some capabilities the answer is yes. Copy generation needs brand voice samples and product information — most brands have these. For others the answer is no. Decision agents tied to KPI optimization need event-level customer behavior data with reliable identity resolution across channels. Most brands do not have this in clean form.

If the data substrate does not support a capability, deploying the capability is going to produce demos that pass and production that fails. The agent will work on the curated demo dataset and underperform on the real one. The right move is to invest in the substrate first — even if it means waiting 12-18 months on the capability — and then deploy on a substrate that supports it.

This is the most common reason we recommend "wait" to clients in 2026. The capability is defensible and valuable. The substrate is not ready. Deploying now would burn trust and produce a failure that makes the next deployment harder.

Filter three: can your team operate it?

The third filter is the one buyers most often skip. Most agentic capabilities require specific operational muscle: monitoring, authority envelopes, rollback paths, evaluation cadence. We have written separately about what those requirements look like. The summary: deploying an agentic capability without the operational substrate produces incidents.

The right question is not "can our team learn to operate this?" but "is our team going to operate this in steady state, six months after deployment, when the launch attention has worn off?" Teams that say "we will figure out operations after launch" almost never do. The features get deployed, drift slowly out of alignment, and produce one quiet failure per quarter until someone notices.

For the capabilities our clients ask about, the operational-readiness assessment usually surfaces the gating constraint. Many brands cannot operate decision agents in production because they do not have a monitoring substrate. Many can operate copy-generation features because the consequences of a bad output are small and human-reversible.

If the team cannot operate a capability today, the right move is one of three: invest in the operational substrate, narrow the deployment to a lower-stakes surface where less operating muscle is needed, or wait.

Applying the framework to six common categories

1. Copy generation. Defensible: no, commoditizing. Substrate: usually yes. Operational: usually yes. Buy. Pick the workflow that fits your authoring tool, plan for the line item to shrink.

2. Image generation. Defensible: no, commoditizing. Substrate: yes. Operational: yes. Buy. Same pattern as copy. Brand consistency is a meaningful workflow concern but not a build justification.

3. Chat and customer service agents. Defensible: partial. The model is commoditizing; the retrieval over proprietary support data is defensible. Substrate: most brands need work here. Operational: requires monitoring substrate most brands have not built. Wait or invest in substrate first. Pilots are fine; full deployment is premature for most.

4. Decision agents tied to KPIs. Defensible: yes. Substrate: most brands not ready. Operational: most brands not ready. Wait, invest in substrate. This is the highest-value category for those who get it right and the highest-cost failure mode for those who deploy too early.

5. Workflow orchestration. Defensible: yes, when tied to brand-specific workflows. Substrate: depends. Operational: depends. Build for the defensible workflows, buy for the generic ones. Most brands have a few high-value workflows worth investing in and many low-value ones better served by off-the-shelf.

6. Personalization agents. Defensible: yes. Substrate: typically not ready (identity resolution, behavioral data). Operational: requires monitoring. Wait, invest in substrate. This is the category where vendor demos most overpromise relative to deployment reality. We have written a mid-flight working note from a personalization rebuild engagement that walks through what the substrate work actually looks like.

The capability is defensible. The substrate is not ready. This is the most common recommendation our team makes in 2026.

What the framework changes about the buying conversation

Using this framework changes the buying conversation in three ways.

First, it slows down deployment on defensible capabilities. The instinct to "ship something agentic this quarter" tends to fade when the substrate work becomes visible. This is uncomfortable for marketing leadership that has been promised an AI launch. It is right anyway.

Second, it speeds up deployment on commoditizing capabilities. The instinct to "wait until the market settles" tends to fade when the analysis shows the category is settling into utility pricing. Buy now, plan for cheaper substitutes later.

Third, it changes the build-versus-buy conversation from "what would our engineering team build" to "what would our engineering team build that the vendor cannot match." Most defensible capabilities have a vendor-buyable version that handles 70% of the workflow. The build question is whether the remaining 30% justifies a build investment. For most brands the answer is "build the 30% as an extension of the vendor, do not rebuild the 70%."

Our team is happy to walk this framework against your specific 2026 roadmap. Reach out if you are sequencing AI capability deployments and want a second perspective.