Approach

The framework, written down.

Microsoft modern work, agentic AI, and the modern endpoint share a delivery problem: emerging technology stacks reach enterprise scale faster than the methods to deploy them safely. The framework below is what holds when the technology under consideration is new enough that vendor playbooks are incomplete or wrong.

Four phases, ten named steps. Each phase produces an artifact the next phase depends on. Skipping ahead is how engagements that look successful in the first quarter unwind in the second.

Phase 1

Foundations

The first two to four weeks of every engagement. The artifact at the end of these weeks is the gate to everything else — and the moment where it becomes clear, to both sides, whether the engagement is well-scoped or whether the original brief needs a reset.

1

Gather without assumptions

Discovery is the first decision. Most engagements arrive with a vendor recommendation, an internal champion's opinion, or a hypothesis about what the tooling should be — those are inputs, not constraints. The first work is documenting the actual environment: tenant configuration, identity posture, group-policy and Conditional Access state, license entitlements, the cohort map, the data the workflows will touch, the regulatory surface in scope. Nothing is assumed.
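One way to keep the discovery honest is to treat the inventory as a structured artifact whose empty fields stay visible until facts fill them. The schema below is a hypothetical sketch that mirrors the list above; the field names are illustrative, not a standard.

```python
from dataclasses import dataclass, field

# Hypothetical discovery inventory. Field names mirror the artifacts listed
# above; they are illustrative, not a prescribed schema.
@dataclass
class DiscoveryInventory:
    tenant_config: dict = field(default_factory=dict)
    identity_posture: str = ""
    conditional_access_policies: list = field(default_factory=list)
    license_entitlements: list = field(default_factory=list)
    cohort_map: dict = field(default_factory=dict)
    data_in_scope: list = field(default_factory=list)
    regulatory_surface: list = field(default_factory=list)

    def gaps(self) -> list:
        # Anything still empty is an assumption waiting to be replaced by fact.
        return [name for name, value in vars(self).items() if not value]
```

Running `gaps()` at the end of each discovery week makes "nothing is assumed" checkable rather than aspirational.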

2

Decision points and political climate

Every project has a decision tree shaped by people, not just technology. Review where decisions actually get made — the budget approvals, the security sign-offs, the change-management gates — and document the political climate around each one. A decision made without acknowledgment of who has authority over it ships once and gets reversed in a quarter. The artifact from this step is a written map of the decisions ahead and which stakeholders carry weight on each.

3

Project health, scope, timeline

If the engagement is in flight when Protime arrives, assess where it actually is — not where the project plan says it is. Who is involved in delivery, what scope means to each role, and the timeline as believed by sponsors versus operators. Most rescue engagements need this step done before any technical work; the gap between official status and operational status is where the risks have been hiding.

Phase 2

Definition

What gets built, and what doesn't. These determinations cannot be deferred past Phase 2 — they bound everything Phases 3 and 4 produce.

4

MVP criteria with PMO and sponsors

Determinations on the minimum viable shape go to the PMO and project sponsors directly: which test use cases mark success, what feature parity with the prior or vendor-default state is required, which break-glass and rollback scenarios have to exist, what risk profile is being signed off on. These are sponsor-level decisions; landing them in writing this early is what makes mid-engagement scope-shifts negotiable rather than unilateral.

5

Architecture in scope, and what's deliberately not

Architect across all areas in scope, then deliberately pull the adjacent areas that aren't yet into scope. Adjacent surfaces are where blockers actually land: the integration that wasn't called out in the original brief, the dependency on a system the Phase 1 discovery surfaced as undocumented, the policy boundary nobody was tracking. Identifying these as in-scope before the build begins prevents the late-engagement scope-shift that tanks delivery confidence.

6

Tools and features posture

Every technology component is classified: approved by the organization, exploratory and pending approval, or deprecated and being removed. Build decisions sit on top of this classification. Building against an exploratory tool requires explicit risk acknowledgment with the sponsors; building against an unapproved tool isn't done until the approval path is named.
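The classification and the build gate it implies reduce to a small lookup. This is a minimal sketch; the component names in the register are illustrative placeholders, not a recommended list.

```python
from enum import Enum

class Posture(Enum):
    APPROVED = "approved"
    EXPLORATORY = "exploratory"
    DEPRECATED = "deprecated"

# Hypothetical component register; entries are illustrative, not from the source.
register = {
    "Intune": Posture.APPROVED,
    "Azure AI Foundry": Posture.EXPLORATORY,
    "legacy-gpo-tool": Posture.DEPRECATED,
}

def build_gate(component: str) -> str:
    """Map a component's posture to the build decision it permits."""
    posture = register.get(component)
    if posture is Posture.APPROVED:
        return "build"
    if posture is Posture.EXPLORATORY:
        return "build only with sponsor-signed risk acknowledgment"
    if posture is Posture.DEPRECATED:
        return "do not build; plan removal"
    # Unclassified means unapproved: no build until the approval path is named.
    return "blocked until an approval path is named"
```

The point of encoding the gate is that an unclassified component defaults to blocked, rather than to someone's optimism.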

Phase 3

When AI is in scope

An AI-specific track inside the engagement. The questions here are different from the rest of the modernization stack — they involve data flow boundaries, vendor lock-in posture, and the static-versus-agentic decision that determines what kind of system is actually being built.

7

Service boundaries and AI policies

If the engagement involves AI — which more of them now do — the first AI-specific question is the organization's posture. Service boundaries: what data classes can flow to which providers, where the regulated boundary is, what audit logging requirements are non-negotiable. Existing policies for AI usage: what's approved for general use, what requires per-case review, what is explicitly prohibited. Capacity to absorb new tools and design patterns: is there a governance committee that can move at the engagement's pace, or does the work need to wait for an existing review cycle.
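A service-boundary posture of this kind reduces to a mapping from data class to permitted providers. The classes and provider labels below are assumptions chosen for illustration, not any organization's actual policy.

```python
# Illustrative service-boundary policy: which data classes may flow to which
# provider categories. Class names and categories are assumptions.
boundaries = {
    "public": {"frontier-api", "hosted-slm", "local"},
    "internal": {"hosted-slm", "local"},
    "regulated": {"local"},  # regulated data never leaves the tenant
}

def allowed(data_class: str, provider: str) -> bool:
    # Unknown data classes flow nowhere until classified.
    return provider in boundaries.get(data_class, set())
```

As with the tool register, the default for anything unclassified is denial; the review cycle widens the map, not the exceptions.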

7a

Platform, model, cost, safety

The platform decision (Anthropic, OpenAI, Vertex, Bedrock, Azure AI Foundry) is rarely separable from the model decision (frontier API, hosted SLM, local open-source weights). Each combination carries cost barriers and safety parameters that have to be modeled before commit. Local SLM and open-source-weight use cases especially — Phi-4, Qwen, Mistral on edge or in-tenant — need safety parameters established explicitly, because the vendor-side guardrails of frontier APIs are absent.
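The modeling itself can be as simple as a comparison structure over platform-model combinations. The rates below are placeholders, not vendor quotes, and the field names are illustrative.

```python
from dataclasses import dataclass

@dataclass
class Option:
    platform: str             # e.g. "Azure AI Foundry" — labels are illustrative
    model_class: str          # "frontier-api", "hosted-slm", or "local-weights"
    usd_per_1m_tokens: float  # assumed blended rate, not a vendor quote
    vendor_guardrails: bool   # frontier APIs ship with provider-side safety

def monthly_cost(opt: Option, tokens_per_month: float) -> float:
    # Cost barrier per combination, modeled before commit.
    return tokens_per_month / 1_000_000 * opt.usd_per_1m_tokens

def needs_local_safety_work(opt: Option) -> bool:
    # Local and open-weight deployments must establish their own safety parameters.
    return not opt.vendor_guardrails
```

Even a sketch this small forces the two axes (cost, safety ownership) onto the same page as the platform preference that usually arrives first.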

7b

Build — static or agentic

Within the established parameters, the foundational build decision is static or agentic. Static workflows use AI as a utility: drafting, summarization, structured extraction, predictable cost, deterministic interfaces. Agentic workflows give the system tool-use authority, multi-step reasoning, and decision latitude. The two have materially different build approaches, testing cycles, governance surfaces, and cost trajectories. Choosing wrong here is the single largest source of mid-engagement rebuilds in enterprise AI work.
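The structural difference shows up clearly in code. This is a minimal sketch with a stubbed model call, not a real provider API: the static shape is one deterministic call with a fixed interface; the agentic shape is a loop in which the system selects tools across steps.

```python
# `call_model` is a stand-in stub for illustration, not a provider API.
def call_model(prompt: str) -> str:
    return f"summary of: {prompt}"

def static_workflow(document: str) -> str:
    # One model call, fixed interface, predictable cost.
    return call_model(document)

def agentic_workflow(goal: str, tools: dict, max_steps: int = 5) -> list:
    # The system chooses tools over multiple steps; the choice here is
    # hard-coded as a stand-in for the model's decision latitude.
    trace = []
    for step in range(max_steps):
        tool_name = "search" if step == 0 else "write"
        result = tools[tool_name](goal)
        trace.append((tool_name, result))
        if tool_name == "write":
            break
    return trace
```

Everything that differs between the two builds — testing cycles, governance surface, cost trajectory — falls out of whether that loop and its tool authority exist.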

Phase 4

Delivery discipline

How the engagement actually ships. Three operating principles run continuously across every release: surface misalignment immediately, transfer skills as a deliverable, and run in measured rollouts with a written plan B.

8

Fail fast on the unsupported path

When something doesn't align inside the tech stack — an integration that doesn't exist, a feature that's documented but not GA in the relevant cloud, a vendor capability that turns out to be a roadmap item — the discipline is to surface it the day it's discovered and identify whether a better-supported path to enablement exists. Most engagement post-mortems trace back to a misalignment that was visible in week two but not addressed until week ten.

9

Skills alignment within the function group

Emerging-technology engagements are also organizational engagements. The core function group's skills have to align with what's being deployed, or the deployment lives only as long as the consultant is in the building. Identify the skill gaps deliberately, design the alignment work into the engagement (training, mentoring through the cutover, paired work), and treat skills transfer as a deliverable, not a side effect.

10

Crawl, walk, run — with plan B

Slow release to SME groups validates the build before broader rollout. The pattern isn't milestone gating — it's continuous validation. Each release widens the user base, surfaces operational signal, and either confirms or invalidates the prior decisions. When a release surfaces a misalignment, the engagement returns to fail-fast (step 8). Every phase carries an explicit plan B — the alternative path documented in writing before the primary path is committed to. Engagements without a written plan B are the engagements that stall when the primary path breaks.
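The pattern above can be sketched as a simple gate: each release widens the cohort only after validation, and a failed validation routes to the documented plan B rather than stalling. Cohort names and callbacks here are illustrative.

```python
def staged_rollout(cohorts, validate, plan_b):
    """Widen the user base cohort by cohort; fall back to plan B on misalignment.

    `cohorts`, `validate`, and `plan_b` are caller-supplied stand-ins:
    cohorts might be ["SME pilot", "early adopters", "broad"], validate
    checks operational signal, and plan_b is the written alternative path.
    """
    shipped = []
    for cohort in cohorts:
        if not validate(cohort):
            # Misalignment surfaced: return to fail-fast via the documented plan B.
            return plan_b(cohort, shipped)
        shipped.append(cohort)
    return shipped
```

The key property is that `plan_b` exists as an argument before the loop runs, mirroring the discipline that the alternative path is written down before the primary path is committed to.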

Where this lands

The framework’s value isn’t in any individual step. It’s in the discipline of running them in sequence, with each phase’s artifact gating the next. Skipped phases compound — the engagements that go sideways in month four almost always trace back to something that was supposed to land in Phase 1 or Phase 2 and didn’t. The work is the sequence, not the substance of any single step.

When Protime engages, Phase 1 is the first two to four weeks regardless of how the engagement was scoped. The artifact produced in those weeks gates everything that follows, and it is where both sides learn whether the brief holds or needs a reset. That conversation, held early and in writing, is the difference between an engagement that ships and one that doesn’t.

Tell us what you’re working on