Why the assessment exists
Most digital consulting engagements start with discovery: a four-to-six-week phase in which the team interviews stakeholders, inventories the current state, and produces a deck of observations. The deck is usually long, usually polished, and usually does not change what the engagement actually does in the months that follow.
Our team replaced this with the Digital Health Assessment in 2024, after watching too many discovery decks produce no calibration value. The Assessment is different in three structural ways. First, it is measurement-driven rather than interview-driven. Stakeholder interviews surface opinions; the assessment reads the brand's digital posture from outside, the way a sophisticated buyer would. Second, it resolves to scores. Vague statements like "the personalization is immature" become specific scores against specific signals. Third, it is calibrated to be honest. Most maturity assessments come back at 4.7 out of 5; ours typically come back in the 30s and 40s on a 0-100 scale. That is the calibration we want.
Every CXOntology engagement starts with the Assessment. It is offered pro bono alongside managed-services contracts because the engagement structure that comes out of an honest read is materially better than the structure that comes out of a flattering one.
The five pillars
The Assessment reads five pillars. Each is scored 0-100; the composite is weighted by what matters most for the brand's stated strategic priorities, not taken as a simple mean (a short sketch of the weighting follows the pillar list).
Experience. How the brand performs on its own digital surfaces. Page-speed at p75, accessibility compliance, mobile-first behaviors, conversion-funnel hygiene, content quality from a user-task perspective. Roughly 14 signals.
Findability. How reliably the brand surfaces in the moments customers search. Organic search posture, structured-data hygiene, content depth on long-tail intent, paid-search relevance, AI-search readiness. Roughly 12 signals.
Operations. The engineering muscle behind delivery. Deployment cadence and reliability, integration substrate quality, observability coverage, vendor management posture, time-to-publish for new content. Roughly 16 signals.
AI Maturity. Genuine capability rather than vendor checkbox. Data substrate readiness, model evaluation discipline, agentic feature governance, AI-touching workflow documentation. Roughly 10 signals.
Trust. Security, privacy, governance, brand integrity. Identity-management hygiene, consent and privacy compliance, content-governance review processes, brand-voice consistency across channels. Roughly 11 signals.
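To make the weighting concrete, here is a minimal Python sketch of the priority-weighted composite. The pillar names come from the list above; the scores and weights are hypothetical, chosen only to show how the weighting differs from a simple mean.

```python
def composite_score(pillar_scores: dict[str, float],
                    weights: dict[str, float]) -> float:
    """Priority-weighted composite of 0-100 pillar scores.

    Weights are normalized to sum to 1, so the composite stays on
    the same 0-100 scale as the pillars.
    """
    total = sum(weights.values())
    return sum(pillar_scores[p] * w / total for p, w in weights.items())

# Hypothetical brand whose stated priority is AI-led findability, so
# AI Maturity and Findability carry more weight than a mean would give.
scores = {"Experience": 48, "Findability": 36, "Operations": 52,
          "AI Maturity": 22, "Trust": 61}
weights = {"Experience": 0.20, "Findability": 0.25, "Operations": 0.15,
           "AI Maturity": 0.30, "Trust": 0.10}

print(round(composite_score(scores, weights), 1))    # 39.1, weighted
print(round(sum(scores.values()) / len(scores), 1))  # 43.8, simple mean
```

The weak pillars the brand cares most about pull the weighted composite below the simple mean, which is exactly the signal a priority-weighted score is meant to send.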
How the signals get scored
Each signal has a calibrated scoring rubric: what "70 out of 100" looks like for this specific signal, at this brand size, in this industry. The rubrics are maintained by our research team and updated quarterly as best-practice benchmarks shift.
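One plausible way to represent such a rubric is as calibrated score bands per signal, keyed by industry and brand-size tier. The structure and every threshold below are hypothetical illustrations, not our actual benchmarks; the quarterly updates would move these numbers as best practice shifts.

```python
from dataclasses import dataclass

@dataclass
class Band:
    """One rubric band: a measurement at or below `threshold` earns
    at least `score` (lower is better for this signal)."""
    threshold: float  # e.g. p75 page-load time in ms
    score: int        # 0-100 rubric score

# Hypothetical rubric for one signal at one calibration point
# (industry, brand-size tier). Bands are ordered best-first.
P75_PAGE_SPEED = {
    ("retail", "mid-market"): [
        Band(1800, 90),  # what "90 out of 100" looks like here
        Band(2500, 70),  # what "70 out of 100" looks like here
        Band(4000, 40),
        Band(6000, 20),
    ],
}

def score_signal(measured: float, bands: list[Band]) -> int:
    """Score a measurement against the best band it clears."""
    for band in bands:
        if measured <= band.threshold:
            return band.score
    return 0
```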
Most signals are scored from observable data rather than from interviews. We measure the brand's page-speed at p75 directly. We pull the brand's structured-data hygiene from public sources. We instrument the brand's deployment cadence by reading the public deploy markers. The signals that require internal data — primarily the Operations and AI Maturity pillars — are scored from documented evidence rather than from claims.
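To make "measured directly" concrete: p75 page speed is just the third quartile of observed load times. A minimal sketch, with hypothetical sample data standing in for real field collection:

```python
import statistics

def p75(samples_ms: list[float]) -> float:
    """75th percentile (third quartile) of observed load times."""
    return statistics.quantiles(samples_ms, n=4)[2]

# Hypothetical field samples (ms); in practice these would come from
# real-user monitoring or a public field-data source.
samples = [1200, 1450, 1700, 2100, 2300, 2600, 3100, 4800]
print(p75(samples))  # 2975.0
```

Feeding that 2,975 ms into the hypothetical rubric above would score this signal 40, well short of the 70 band, and the report would show both the measurement and the gap.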
This matters because the scoring is reproducible. Another team running the assessment on the same brand at the same time would produce results within a few points on each pillar. Most maturity assessments in the industry are not reproducible at this level, which is part of why they tend to inflate.
What the deliverable looks like
The Assessment runs 30 days. At the end, the brand receives a written report, typically 35-50 pages, plus a debrief session with the senior sponsor.
The report has four sections. One: the scored composite, with the pillar-by-pillar breakdown and the calibration context (what a 40 means for a brand of this size in this industry). Two: the signal-by-signal detail, with the observable data behind each score and the specific gap from benchmark. Three: the prioritized findings, the 6-10 highest-impact gaps with the rationale for their ordering. Four: a proposed remediation roadmap, sketching what the next 6-12 months would look like if the brand chose to close the prioritized gaps.
The debrief is a working session, not a presentation. We walk the senior sponsor through the report in real time and update findings based on context the sponsor brings that the external assessment could not see. About one in three sessions surfaces something material that revises the priority order. The fourth section — the roadmap — is then refined post-debrief based on what the conversation surfaced.
When the finding contradicts the expectation
A meaningful percentage of assessments produce findings that contradict the executive sponsor's prior expectation. The most common pattern is the one we wrote about in our Q1 2026 research note: AI Maturity scores significantly lower than the sponsor expected, often by a factor of 2-3. Findability gaps are similarly underestimated.
These are the most consequential moments in the debrief. The sponsor has been told by their vendor partners that the brand is AI-ready. The assessment says otherwise. The temptation is to soften the finding, to find a way to make the conversation more comfortable. We try hard not to.
The discipline we hold is that the assessment is honest first, useful second, and flattering never. The sponsor who needs to hear that their AI readiness is overstated will not benefit from a softened version of the finding. They will benefit from the specific evidence, the specific gap from benchmark, and the specific path to close the gap. The debrief is where this honest conversation happens, and it is the most valuable single hour of the engagement.
Why we offer it pro bono with managed services
The Assessment is offered at no cost alongside any managed-services engagement. The structural reason is that the engagement that comes out of an honest read is much better than the engagement that comes out of a vague one.
Without the assessment, the engagement starts on the client's framing of the problem, which is usually a hypothesis about what is broken rather than a measured read. The first three months get spent confirming, contradicting, or revising the framing. With the assessment, the first three months get spent executing against a calibrated baseline.
The cost to us of running the assessment is real — typically 80-120 hours of senior team time over the 30 days. The value it returns is larger, because the managed services engagement that follows is structured against findings the team and client both believe. Engagements built on shared understanding compound; engagements built on contested framing drift.
If you want to see your brand read against this framework, the Digital Health Assessment is the entry point to every CXOntology engagement.