AI in our toolkit. Judgment in our chair.
We're a firm that uses AI in delivery. That choice is deliberate, governed, and transparent. This page tells you exactly how.
WHERE WE STAND
Our position on AI in professional services.
AI is the most consequential shift in professional services since the spreadsheet. It's also the most misunderstood. Most firms are doing one of two things — bragging about being “AI-first” without saying what that means, or staying quiet about AI altogether and hoping no one asks.
Neither serves clients well.
We chose a third path. We use AI deliberately in our own delivery, we govern it strictly, and we're explicit about where it shows up and where it doesn't. That practitioner experience is also what we bring to the AI conversations our clients are now having about their own businesses. We've felt the friction of deploying AI in regulated work, the trade-offs between different tools, and the discipline required to keep humans in charge of consequential decisions. When we advise on your AI strategy, that experience is what we bring to the table — not theory, but practice.
This page documents how we work. It will be updated when our tooling or practices change, and the change history will be visible here.
THREE PRINCIPLES
Three principles guide our work.
Humans decide. AI accelerates.
Senior practitioners hold every meaningful judgment in our work. AI is the productivity layer beneath that judgment, never above it. We use AI to read more documents, model more scenarios, draft cleaner analysis, and free our practitioners for the work that actually requires them. The decisions, the recommendations, and the accountability stay with the person.
Your data stays yours.
We use enterprise AI tools configured for zero data retention. Your information is never used to train external models, never shared across our other engagements, and never retained beyond your project. We sign NDAs by default and BAAs, DPAs, or industry-specific data-handling addenda when your work calls for them.
Transparency by default.
We're explicit about where AI is used in our work and where it isn't. Every engagement letter names the delivery mode and the tooling involved. You'll never wonder whether the analysis came from a person or a prompt — because we'll tell you, in writing, at the start.
DELIVERY MODES
Two ways to work with us. The choice is yours.
Every engagement is delivered in one of two modes. You pick the mode at the start, the choice is documented in the engagement letter, and modes can be switched mid-engagement when the work calls for it.
Mode 1 — Default
AI-Amplified Delivery
Who it's for: Most engagements. Clients who want maximum velocity without losing senior judgment or data control.
- Senior practitioners use enterprise AI tools internally throughout the engagement
- AI handles document review, scenario modeling, first-draft analysis, code analysis, research synthesis
- Senior practitioners review every output before it leaves our shop
- Faster turnaround compared to fully human delivery — speed-up varies by engagement type
Pricing: Standard productized engagement pricing.
Mode 2 — On Request
AI-Free Delivery
Who it's for: Clients with regulatory caution, internal AI policies, sensitive data classifications, or personal preference for human-only delivery.
- Same senior practitioners as Mode 1
- Named AI tooling disabled in the practitioner's working environment for the duration of the engagement
- Documentation of the systems used to produce the work, on request
Pricing: Premium pricing reflects the operational cost of delivering without AI tooling. Specifics confirmed at engagement scoping.
Modes can be switched in either direction, at any point, with notice. Clients sometimes start in Mode 2 and move to Mode 1 once they see how we handle AI governance. Others start in Mode 1 and switch to Mode 2 for a specific phase — typically regulatory submissions, audit support, or sensitive transactions.
WHERE AI SHOWS UP
What AI does in our work.
AI accelerates the parts of professional services work where speed, scale, and pattern-recognition matter more than judgment. Specifically:
Document review and synthesis
Reading through contracts, financial statements, audit work papers, regulatory filings, board materials, technical documentation, and case files faster than a human can — and surfacing the relevant signal for practitioner review.
Scenario modeling
Running more financial scenarios, sensitivity analyses, valuation models, and forecast variants than would be feasible by hand — so the practitioner has a wider range of options to evaluate.
First-draft analysis
Producing initial drafts of memos, summaries, comparison tables, and structured analyses that the practitioner then refines, corrects, and signs off on.
Code analysis
For our technology engagements — reading codebases, identifying technical debt, scanning for security patterns, evaluating architectural choices, and benchmarking against best practices.
Research synthesis
Aggregating and synthesizing market research, regulatory updates, competitive intelligence, and industry data so the practitioner spends time on interpretation, not collection.
Workflow automation
For our managed delivery work — automating repetitive operational tasks within carefully scoped boundaries, with human oversight on all consequential decisions.
WHERE AI NEVER SHOWS UP
What AI doesn't do in our work.
The line between AI-assisted and human-only is drawn deliberately and held firmly.
Final decisions
No deliverable leaves our firm without senior practitioner review and sign-off. AI output is input to the practitioner's judgment, never a replacement for it.
Audit conclusions and regulatory submissions
When we support audit work or prepare materials for regulatory filings, the analysis and conclusions are human-generated and human-defended.
High-stakes recommendations
Strategic advice, transaction recommendations, M&A judgments, and investment-grade analyses are produced by senior practitioners. AI may inform the underlying research; it never carries the recommendation.
Sensitive data classification
Decisions about how to handle, classify, or protect sensitive client information are human-made. AI tooling is configured to honor those decisions, never to set them.
Client communication
Direct client communication — strategic conversations, sensitive feedback, judgment calls in real time — is human. AI may help us prepare; it does not replace us.
Anything we wouldn't sign our name to
This is the underlying principle. If a practitioner wouldn't put their name on it personally, AI doesn't produce it. Period.
WHAT WE USE
What we use, and how it's configured.
We use enterprise-grade AI tooling exclusively. Consumer tools — public-facing chatbots, free-tier AI services, unmanaged plug-ins — have no place in our delivery work.
Our standard tooling stack:
- Microsoft Copilot for Microsoft 365 — configured for commercial data protection with no training on client data
- Claude Enterprise (Anthropic) — configured under our approved enterprise data controls; no client data used for model training
- Azure OpenAI Service — for engagement-specific AI workflows, deployed with client-approved access, tenancy, and network controls
- Specialized AI tools for code analysis, document review, and finance work — each evaluated against our data-handling standards before adoption
What we don't use in client delivery:
- Free or consumer-tier AI tools
- AI tools that train on user inputs
- AI tools that share data across organizations or customers
- AI tools without enterprise-grade data residency and retention controls
Tooling decisions are reviewed continuously. As the AI landscape evolves, our tooling will too. When it changes, this page updates.
DATA GOVERNANCE
How your information is handled.
Three commitments cover the data side of our AI use.
Zero data retention by default
Our enterprise AI tools are configured so that client information processed during an engagement is not retained by the AI provider after the session. Where a tool retains data by default, we disable retention. Where retention can't be disabled, we don't use the tool for client work.
No training on client data
None of our enterprise AI tools use client conversations, documents, or analyses to train external models. This is a contractual commitment with our AI vendors, not just a configuration setting.
Data residency on request
Data residency is supported on a per-tool basis (Azure OpenAI, Microsoft 365, Claude Enterprise where available). Residency is scoped at engagement start; constraints may affect tool selection and pricing.
NDAs, BAAs, DPAs as standard
We sign whatever data-handling agreements your work requires. NDAs are standard at first contact. BAAs and DPAs are signed before any sensitive data exchange.
Audit trail on request
For any engagement, we can produce a record of which AI tools were used, how they were configured, and what data passed through them. Useful for client compliance files and regulatory documentation.
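For illustration only, here is a sketch of the kind of information one audit-trail entry can capture. The field names and values are hypothetical examples, not a committed schema — the actual format is agreed at engagement scoping.

```python
from dataclasses import dataclass, asdict
import json

@dataclass
class AIToolUsageRecord:
    """One entry in an engagement's AI audit trail (illustrative only)."""
    engagement_id: str      # hypothetical identifier, e.g. "ENG-0000"
    tool: str               # which AI tool was used
    configuration: dict     # retention, training, and residency settings
    data_categories: list   # classes of data processed — never the data itself

# Example entry — all values are placeholders for illustration
record = AIToolUsageRecord(
    engagement_id="ENG-0000",
    tool="Azure OpenAI Service",
    configuration={
        "retention": "disabled",
        "training_on_inputs": False,
        "residency": "EU",
    },
    data_categories=["financial statements", "board materials"],
)

print(json.dumps(asdict(record), indent=2))
```

The point of a record like this is that it describes configuration and data categories, not content: a compliance file can show what tooling touched the work without reproducing any client data.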
YOUR RIGHTS
What you can ask of us.
Across every engagement, you have the following rights:
- Mode selection — choose AI-Amplified or AI-Free at any point
- Tooling exclusion — request that specific AI tools not be used on your engagement
- Data residency — specify geographic constraints on where your data is processed
- Audit trail — request documentation of AI use on your engagement at any time
- Opt-out of AI advisory — engage us purely for traditional services without any AI strategy component
- Mid-engagement changes — adjust the AI posture as the engagement evolves
If your team has internal AI policies, regulatory constraints, or specific compliance requirements, we'll work with your legal, security, or compliance teams to align our delivery posture. If full alignment isn't achievable, we'll flag the conflict at scoping so you can decide whether to proceed.
KEEPING THIS CURRENT
When this page changes.
The AI landscape moves quickly. New tools emerge, regulations evolve, and best practices shift. This page will be updated when our tooling, governance, or delivery posture changes — and the change history will be visible here.
When meaningful changes happen, active clients will be notified directly. New engagements will operate under whatever version of this page is current at the time of contracting.
- Last updated: Launch version
- Change history: Maintained from launch. Substantive changes will be recorded with date and summary.
Want to talk through how this fits your engagement?
Every client conversation includes a discussion of which delivery mode fits the work, what tooling is involved, and what data governance applies. The AI posture isn't a separate sales conversation — it's part of how we scope every engagement.
Or explore our services to see what we deliver.