The pattern we kept seeing
Before starting CXOntology, the team here spent the better part of two decades inside larger digital consultancies. The pattern that kept showing up was uncomfortable enough to name plainly: most engagements were determined by what happened in the first four weeks. Not influenced. Determined. The engagements that produced durable outcomes had a specific kind of opening month. The ones that drifted into scope-creep, late delivery, and unhappy clients had a different opening month.
The difference was not the discovery deck. Every consultancy produces a discovery deck. The difference was whether the team and the client actually used the first month to make decisions: what does success look like, who decides what, what is in scope versus aspirational, how will we know when we are off-track, who escalates to whom and when.
When those decisions get made in week three, the engagement is calibrated. When they get deferred to "we will figure that out as we go," the engagement is losing ground by month four. By month nine, it is rebuilding the foundation it should have built in month one.
Why the first month gets underused
The structural problem is that the first month is also when the consultancy is under the most pressure to demonstrate value. Senior client stakeholders want to see deliverables. The PMO wants Gantt-chart green. The vendor partner wants to see engineering tickets closed.
So the first month gets filled with activity that looks like progress: stakeholder interviews, current-state documentation, capability inventory, vendor reference calls. All of these are useful artifacts. None of them is a decision. The engagement leaves month one with a stack of documents and no shared answer to "what does success look like for this work" beyond the contract's stated scope.
The contract's stated scope is almost never enough. Every digital engagement of meaningful size has at least three implicit success criteria that were never written into the SOW because nobody knew to write them. Surfacing those is the actual work of month one. It does not show up as a deliverable. It shows up as a quieter, more confident team in months two through twelve.
What we do differently
When we started this firm, we decided the first month of every engagement would be unbilled. The decision was structural, not promotional. Removing the billing pressure removes the pressure to produce deliverable-looking artifacts and creates space to do the decision-making work that actually calibrates the engagement.
The first month with us looks like this. Week one: pair the senior pod leader with the client's engagement sponsor for what we call a calibration sprint, three working sessions covering goals, scope, decision rights, and the risk register. Week two: shadow the client team's existing operating cadence and read the current state from inside, not from interview transcripts. Week three: draft a 30-page diagnostic and walk through it with the senior sponsor in a single working session, not as a deck presentation. Week four: lock the operating model, covering who is on the engagement, what cadence we run, and what success looks like in measurable terms.
At the end of week four, we and the client know whether the engagement should continue. About one in seven of these does not continue past month one. Sometimes the right answer is that the client does not need us. Sometimes the right answer is that we cannot help in the way they need. Either of those is a better outcome than continuing into a contract that does not have the foundation to succeed.
What it costs us, and why we still do it
Doing the first month unbilled is expensive. On a typical mid-sized engagement, the first month is roughly 8% of the total contract value. For an engagement that does not continue, we absorb that cost. We do it anyway because the alternative, billing for a month that does not calibrate the engagement properly, costs us more in the long run through scope creep, unhappy clients, and eroded references.
The economic argument: an engagement that fails in month nine does more damage to a consultancy's reputation than the revenue from any single engagement can offset. Front-loading the calibration work trades a small known cost (our first-month investment) against a larger unknown one (a misaligned year-long engagement). The math works in our favor over a portfolio of engagements, even though it does not work on every individual one.
The cultural argument is the one that matters more, though. The team here came from places where the first month was sold and underused, and we left because we did not want to keep doing that to clients. Doing the first month at no cost is not a marketing tactic. It is what we wished our previous employers had done. Now we get to.