ai for law firms · production

AI for law firms, shipped
— not pitched.

We're the `ai for law firms` development partner you hire to ship production AI on your real stack — Relativity, Everlaw, iManage, NetDocuments, Clio, Ironclad — not the vendor selling you a deck. Contract review pipelines on your firm's clause library. E-discovery TAR + GenAI re-ranking on the responsive doc set. Matter intake + conflict-check + engagement-letter automation. Privilege-aware deployment (Bedrock + customer-managed keys, retention=0 on rings 1 and 2), citation-chain logging per ABA Op. 512, partner-in-loop sign-off on every leaf. First workflow live in 6–8 weeks behind a partner-in-loop flag.

matter lifecycle · ai-fit ribbon
high AI fit · low AI fit · ⚠ UPL / human-only

  • Intake · matter intake + conflict pre-screen
  • Conflicts · conflicts check + entity resolution
  • Engagement · engagement-letter automation
  • Discovery · TAR + GenAI re-ranking
  • Motion · brief-research assist
  • ⚠ Trial · advising the client + advocacy (human-only)
  • Resolution · settlement draft + summary
  • Matter Close · file retention + KM extraction

Each phase maps to a workflow, model pick, and cost band below; the ⚠ phase marks the UPL line.

6–8 wk · first legal AI workflow live behind a partner-in-loop flag
ZDR · zero-data-retention tenancy on every privileged workflow we ship
$300–$2K · monthly model + infra cost band per shipped workflow
$3K · legal AI audit-to-roadmap before any build starts
why a legal ai partner, not a vendor pitch

What changed.
And why a `legal ai company` ships differently now.

`Ai in legal` has cycled three times in a decade. This one is different because per-decision economics finally work — `legal ai agents` and `agentic legal ai` aren't strategic narratives anymore, they're $0.001–$2.00 per-inference plumbing that pays back inside a quarter on the right workflow. Three things a `legal ai company` should be honest about before scoping your first build.

From law-firm software to model-per-task workflows

Yesterday's legal stack was Relativity for review, iManage for DMS, a contract-lifecycle tool, and a research subscription. Today, the same matter pulls model-per-task: Haiku 4.5 batch-classifies the responsive doc set, Sonnet 4.6 re-ranks the top-N, GPT-5.4-mini extracts the structured clause-comparison, and a partner signs off. The buy-vs-build question has shifted — Harvey, CoCounsel, Lexis+ AI, Westlaw Precision, and Clio Duo are the benchmark products you'll compare us against, and they're the right answer for some firms. We're the right answer when you want the workflow shaped to your matter intake, your KM corpus, your privilege ring 2 — not a product team's roadmap.

Privilege is a content classification, not a vendor badge

Privilege doesn't live in a SOC 2 attestation; it lives in the deployment topology. Anthropic Claude on AWS Bedrock with customer-managed keys, Azure OpenAI with retention disabled, or on-prem fine-tunes for ring-1 work-product — each is the right answer for a different ring on the privilege map in §6. As a `legal ai company` we ship the deployment pattern that matches the content classification, not a one-size-fits-all stack. FRE 502 inadvertent-waiver risk is non-trivial when prompts route through a shared-fleet API with vendor-side logging; we design around that constraint from day one.
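For illustration, a minimal sketch of how that ring-to-deployment routing can be expressed in code. The ring labels, endpoint names, and retention values are hypothetical placeholders, not our production topology:

```python
from dataclasses import dataclass
from enum import IntEnum

class Ring(IntEnum):
    """Privilege rings from §6: lower number = more sensitive."""
    WORK_PRODUCT = 1    # attorney work-product · client communications
    MATTER_DOCS = 2     # privileged matter documents
    PUBLIC_RECORD = 3   # case law · regulatory filings
    FIRM_MARKETING = 4  # KM articles · public web

@dataclass(frozen=True)
class Deployment:
    endpoint: str        # where inference runs (illustrative labels)
    retention_days: int  # vendor-side prompt retention (placeholder values)
    customer_keys: bool  # customer-managed encryption keys

# Illustrative mapping; the real topology is fixed per firm in the audit.
DEPLOYMENTS = {
    Ring.WORK_PRODUCT:   Deployment("on-prem-or-zdr", 0, True),
    Ring.MATTER_DOCS:    Deployment("bedrock-claude-cmk", 0, True),
    Ring.PUBLIC_RECORD:  Deployment("commercial-api-citation-logged", 30, False),
    Ring.FIRM_MARKETING: Deployment("any-commercial-api", 30, False),
}

def route(ring: Ring) -> Deployment:
    # Fail closed: anything unclassified gets the most restrictive pattern.
    return DEPLOYMENTS.get(ring, DEPLOYMENTS[Ring.WORK_PRODUCT])

assert route(Ring.MATTER_DOCS).retention_days == 0
```

The design choice worth noting: the router keys on content classification, never on which team or tool submitted the request.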

The UPL line is the design constraint

Every shipped workflow lives on one side of the unauthorized-practice-of-law line. AI does drafting, summarization, research, classification, and intake — never client-facing legal advice. ABA Formal Opinion 512 (July 2024, on generative AI), the California State Bar Practical Guidance (Nov 2023), and the New York State Bar Task Force Report (Apr 2024) name the supervision, competence, and confidentiality duties; they don't soften the UPL line. A partner reviews and signs every leaf node we ship. That constraint — visualized in §1 as the ⚠ marker at the Trial / advising-client phase — is the shape of the engagement.

ai for law firms, by P&L line

Six AI workflows that move firm P&L.
Ranked in the audit, not the slide deck.

These are the six `ai legal software` workflows that consistently pay back in the audits we run. Not every firm needs all six — most have a high-ROI candidate in three of them. The audit ranks yours so you don't have to guess which to fund first. Buyer reality: `ai contract management` is the highest-CPC keyword in this cluster ($103) for a reason — every dollar saved on contract-cycle time compounds across a year of in-house deal flow. `Ai contract review` ($74) and `ai legal assistant` ($51) sit on the same list.

AI contract review + clause analysis ($74 CPC)

`Ai contract review` and `ai contract analysis` workflows that read NDAs, MSAs, SOWs, and vendor contracts; surface non-standard clauses against your firm's clause library; draft redlines aligned to your house playbook; and queue the result for an attorney to confirm or override. Sonnet 4.6 wins on clause-level reasoning; Ironclad is the benchmark CLM product if you want a packaged tool — we ship the pipeline when your playbook needs to come from your own KM corpus and live in your DMS (iManage or NetDocuments).

AI contract management workflows ($103 CPC says everything)

`Ai contract management` is the single highest-CPC keyword in the legal cluster at $103 per click — because every dollar saved on contract-cycle time compounds across a year of in-house deal flow. We ship the pipeline: intake form → entity extraction → clause-by-clause review against playbook → redline draft → counterparty-comparison summary → routing to deal counsel for sign-off. In-house teams typically see 40–55% cycle-time reduction on NDAs and routine MSAs in the first 90 days. Complex deals still route to senior counsel — same intake form, full draft pre-attached.
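A skeletal sketch of that pipeline shape, with hypothetical stage functions standing in for the model calls. The point is the hard stop before counsel sign-off, not the calls themselves:

```python
from dataclasses import dataclass, field

@dataclass
class Contract:
    text: str
    parties: list = field(default_factory=list)
    findings: list = field(default_factory=list)  # clause-level deviations from playbook
    redline: str = ""
    status: str = "intake"

# Hypothetical stages; in a real build each wraps a model call
# (extraction on a mini-class model, clause reasoning on a Sonnet-class model).
def extract_entities(c: Contract) -> Contract:
    c.parties = ["<counterparty from model extraction>"]
    return c

def review_against_playbook(c: Contract, playbook: dict) -> Contract:
    c.findings = [{"clause": k, "deviation": "<model finding>"} for k in playbook]
    return c

def draft_redline(c: Contract) -> Contract:
    c.redline = "<house-language substitutions + risk-tier annotations>"
    return c

def run_pipeline(c: Contract, playbook: dict) -> Contract:
    c = extract_entities(c)
    c = review_against_playbook(c, playbook)
    c = draft_redline(c)
    c.status = "awaiting_counsel_signoff"  # hard stop: the pipeline never sends
    return c
```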

E-discovery AI — TAR + GenAI re-ranking

`Ediscovery ai` and `ai ediscovery` workflows that layer GenAI re-ranking on top of TAR (Technology-Assisted Review) for the responsive doc set, plus a privilege-log triage pass that flags inconsistent privilege calls across reviewers. Relativity and Everlaw are the platforms we ship into; the re-ranker runs on the candidate set their TAR engine surfaces. A reviewer signs off on the responsive set before production — never autonomous review. See §5 for the quadrant economics; this is where AI dollars hit hardest.
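A minimal sketch of the boundary-band idea: only documents TAR scores in its uncertain band get the GenAI pass. The band edges, scoring callables, and cutoff are illustrative assumptions:

```python
# Band edges are illustrative; see the boundary-band discussion in §5 and the FAQ.
BAND = (0.55, 0.75)

def rerank_boundary(doc_ids, tar_score, genai_score, cutoff=0.5):
    """tar_score / genai_score: callables returning a 0..1 responsiveness
    score (hypothetical hooks into TAR and a Sonnet-class re-ranker).
    Docs outside the band keep their TAR disposition; docs inside it get
    a GenAI read. A reviewer still signs off before production."""
    not_responsive, candidate_responsive = [], []
    for doc in doc_ids:
        score = tar_score(doc)
        if BAND[0] <= score <= BAND[1]:
            score = genai_score(doc)  # the expensive pass, boundary band only
        bucket = candidate_responsive if score >= cutoff else not_responsive
        bucket.append((doc, score))
    return not_responsive, candidate_responsive
```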

AI legal research + brief-research assist

`Ai legal research` workflows that retrieve case law from your firm's research corpus (Lexis+ AI + Westlaw Precision are the benchmark research products; we integrate, we don't replace), draft a brief outline with cited authorities, and surface prior-art arguments from the firm's own prior briefs. Citation chain is auditable per ABA Op. 512 — every retrieval source logged, every cite verifiable. Associate writes the brief; AI surfaces the cases and the analogs. Hallucinated citations are the headline failure mode in this category; ours fail closed when a citation can't be verified against the source.
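A sketch of that fail-closed check, assuming a hypothetical retrieval-log shape: any citation that can't be traced to a logged source blocks the draft instead of shipping silently:

```python
def check_citations(draft_cites, retrieval_log):
    """Fail closed: a citation survives only if it traces back to a logged
    retrieval source. retrieval_log is an assumed shape, a list of
    {"cite": ..., "source_uri": ...} entries written at retrieval time."""
    logged = {entry["cite"] for entry in retrieval_log}
    unverified = [c for c in draft_cites if c not in logged]
    if unverified:
        # Block the draft and surface the list to the associate.
        raise ValueError(f"unverified citations: {unverified}")
    return draft_cites
```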

AI paralegal + matter intake automation ($28 CPC)

`Ai paralegal` workflows that handle the high-volume, low-judgment tasks paralegals spend hours on: matter intake form processing, conflict-check pre-screen against the firm's client + adverse-party index, engagement-letter generation from your firm's templates, and deposition-summary drafting. A paralegal signs off on every conflict hit and every engagement letter; AI never sends. ROI compounds faster at small firms (one paralegal-hour returned per matter, dozens of matters a week) than at AmLaw 100 firms, where the same labor pool is already specialized.

AI deposition summary + transcript analysis

`Ai deposition summary` workflows that summarize transcripts by topic, by witness, and by issue; flag inconsistencies across deponents; and draft cross-examination outlines for litigators to review. Sonnet 4.6 + retrieval over the matter's deposition corpus. Litigator signs off on the outline before any trial-prep use; the AI doesn't produce live cross-examination questions, only the analytic skeleton an attorney builds from. The pattern complements (doesn't replace) the deposition-summary work your e-discovery team already runs.

Don't see your legal workflow?

The highest-ROI legal AI workflow on your team is often one we haven't listed. Bring it to the 2-week audit — we'll rank it against the rest and tell you if it ships.

Tell us yours
discovery economics, by volume × relevance

Where TAR+GenAI re-ranking actually moves $/doc
and where it's overkill.

E-discovery isn't a single workflow — it's four. Document volume and relevance density are independent axes, and the right AI stack is different in each quadrant. We map your matter into one of these four cells in the audit, then pick the model. Per-quadrant cost ranges are typical for mid-to-large matters with prompt caching warm — verify on your set before locking a vendor.

Quadrants: best AI ROI · QA assist · ML, not GenAI · human-only
where AI is allowed to read

Four privilege zones
and the AI deployment pattern for each.

Privilege isn't a vendor checkbox — it's a content classification, and AI deployment changes ring by ring. The center holds attorney work-product; the outermost ring holds firm marketing. Content moves INWARD as it accumulates client identifiers, never outward. We map your matter content into these four rings during the audit, then pick the deployment pattern. FRE 502 implications noted per ring — inadvertent-waiver risk is non-trivial at rings 1 and 2.

Ring 1 · Attorney work-product · client communications
Ring 2 · Matter documents · privileged
Ring 3 · Public-record · case law · regulatory filings
Ring 4 · Firm marketing · KM articles · public web
model picks per legal workflow

The model matrix.
Per workflow, not per vendor.

Same `ai legal software` stack runs four model picks. Sonnet 4.6 wins where clause-level reasoning or cite-grounded synthesis matters (contract review, privilege-log triage, brief research, deposition cross-deponent analysis). Haiku 4.5 is the volume-classifier for matter intake + e-discovery batch passes and is the cost-tier swap when contract volume spikes. GPT-5.4-mini is the structured-output specialist for entity extraction + intake-form processing. GPT-5.4 sits on long-form reasoning (complex MSAs, multi-deponent inconsistency surfacing). Verify on your own usage before locking a pick — vendor prices and capabilities shift quarterly.

Models compared: Claude Sonnet 4.6 (Anthropic · quality tier) · Claude Haiku 4.5 (Anthropic · cheap, fast) · GPT-5.4-mini (OpenAI · structured output) · GPT-5.4 (OpenAI · long reasoning)

AI contract review · clause analysis. Clause-by-clause comparison against house playbook. Attorney reviews — never autonomous send.
  • Claude Sonnet 4.6: default · strongest clause-level reasoning
  • Claude Haiku 4.5: workable on NDAs; drifts on MSAs
  • GPT-5.4-mini: strong on structured clause extraction
  • GPT-5.4: tied · pick on stack preference

E-discovery — GenAI re-rank on responsive set. Re-ranks the top-N from TAR. Reviewer signs off on responsive set before production.
  • Claude Sonnet 4.6: default · nuance on the boundary band
  • Claude Haiku 4.5: batch classifier under the re-ranker
  • GPT-5.4-mini: stronger on structured tagging than re-rank
  • GPT-5.4: cost-prohibitive at doc volume

Privilege-log triage. Flags inconsistent calls across reviewers; drafts log entries. Reviewer signs every entry.
  • Claude Sonnet 4.6: default · privilege-rationale reasoning
  • Claude Haiku 4.5: fine for surface dedupe; drifts on calls
  • GPT-5.4-mini: workable on entry formatting only
  • GPT-5.4: strong tie · pick on stack preference

AI legal research · brief-research assist. Retrieval over case-law corpus + brief outline. Citation chain auditable per ABA Op. 512.
  • Claude Sonnet 4.6: default · cite-grounded synthesis
  • Claude Haiku 4.5: cite-hallucination risk on long reasoning
  • GPT-5.4-mini: fine on structured citation extraction
  • GPT-5.4: tied · long-form reasoning strength

AI paralegal — matter intake + conflicts. Form processing + entity resolution + conflict pre-screen. Paralegal signs every conflict hit.
  • Claude Sonnet 4.6: overkill on most intake; reserve for complex matters
  • Claude Haiku 4.5: default · entity resolution at volume
  • GPT-5.4-mini: best structured-output adherence
  • GPT-5.4: cost-prohibitive at intake volume

AI deposition summary · transcript analysis. Topic / witness / issue summarization. Litigator reviews; never trial-ready output.
  • Claude Sonnet 4.6: default · narrative + cross-deponent analysis
  • Claude Haiku 4.5: workable on single-deponent dedupe
  • GPT-5.4-mini: fine on issue-tag extraction
  • GPT-5.4: tied · multi-deponent inconsistency surfacing

Privileged content — ring 1 / ring 2 deployment. Which model the routing layer flips to when content is ring 1 or ring 2 (see §6).
  • Claude Sonnet 4.6: Bedrock + KMS · default ring 1/2 pick
  • Claude Haiku 4.5: Bedrock + KMS · the cost-tier surge swap
  • GPT-5.4-mini: Azure OpenAI retention=0 · the OpenAI-stack ring 2 pick
  • GPT-5.4: reserve for ring 3+ on cost grounds

Cost figures are typical per-decision spend with prompt caching warm and standard legal context sizes (clause excerpt + playbook snippet, deposition section + cross-witness summary). Run your own benchmark before locking a model pick; vendor prices, retention terms, and model capabilities shift quarterly.
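If you want to run that benchmark yourself, the harness can be as small as this sketch. The model callables, test cases, and grading function are all yours to supply; nothing here is vendor-specific:

```python
import statistics
import time

def benchmark(models, cases, grade):
    """models: {name: callable(prompt) -> str}; cases: [(prompt, expected)];
    grade: callable(output, expected) -> float in 0..1. All supplied by you,
    so the result measures your workflow, not a vendor leaderboard."""
    results = {}
    for name, call in models.items():
        scores, latencies = [], []
        for prompt, expected in cases:
            t0 = time.perf_counter()
            out = call(prompt)
            latencies.append(time.perf_counter() - t0)
            scores.append(grade(out, expected))
        results[name] = {
            "mean_score": statistics.mean(scores),
            "p50_latency_s": statistics.median(latencies),
        }
    return results
```

Grade against attorney-labeled examples from your own matters; a dozen per workflow is usually enough to separate the model picks above.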

upl-aware ai — when it's the wrong answer

Three places we'll tell you no.
Honest scoping > pretty deck.

Most `legal ai software` pitch decks have an AI answer for every problem in the firm. A production legal AI partner should refuse three of them. If your scope touches any of these, we'll say so in the audit — and we won't bill phase 2 to find out. The three duties named across ABA Formal Opinion 512 (July 2024), California State Bar Practical Guidance (Nov 2023), and the New York State Bar Task Force Report (Apr 2024) — supervision, competence, confidentiality — are not compliance checkboxes; they're the difference between a workflow that ships and one that gets pulled in week 9.

Client-facing legal advice — the UPL line

We won't ship AI that gives legal advice to clients without an attorney in the loop. AI does drafting, summarization, research, classification, and intake — never client-facing legal advice. That sentence is verbatim our scoping rule. ABA Formal Opinion 512 (July 2024) on GenAI use names the supervision, competence, and confidentiality duties; the California State Bar Practical Guidance (Nov 2023) and the New York State Bar Task Force Report (Apr 2024) reinforce the same line under their respective rules of professional conduct. The unauthorized-practice-of-law statutes don't soften because the advice came from a model. Every leaf we ship has a partner / attorney sign-off in the loop — full stop.

Autonomous filing, signing, or sending without attorney review

We won't ship AI that files a motion, signs an engagement letter, sends an opposing-counsel email, or submits a discovery response without an attorney reviewing the output first. Even on "routine" work — a stipulated extension, a boilerplate engagement letter, an NDA at standard terms — the consequence of an error is borne by an attorney under the rules of professional conduct, not by the vendor or the model. We design the pipeline so the model produces, the attorney reviews, the attorney sends. If a workflow's ROI depends on removing the attorney from the send-step, the workflow doesn't ship.

Privilege-blurring deployment — shared-fleet prompts with vendor logging

We won't ship a workflow that routes privileged content through a shared-fleet LLM API with vendor-side prompt retention. FRE 502(b) inadvertent-waiver doctrine looks at "reasonable steps to prevent disclosure" — and pushing ring-1 work-product or ring-2 matter docs through a consumer or default-commercial API tier is not a defensible posture. Anthropic ZDR / Azure OpenAI with retention disabled / Bedrock with customer-managed KMS keys are the patterns we ship. The honest constraint: privilege-aware deployment costs more per inference than a consumer-API workflow, and we'll say so in the audit before the build.

the kind of engagement we ship

Three capability patterns.
Hypothetical — illustrative of how we ship, not anonymized real clients.

Patterns below are hypothetical illustrations of how we ship for the three buyer shapes we engage with most often — mid-market firms, AmLaw e-discovery practices, and in-house counsel teams. Numbers are modeled from comparable engagement scopes, not specific client metrics. Real references shared under NDA once we know what you're building. Stacks shown are the ones the engagement would actually run on; yours will look similar but not identical.

Mid-market firm · 40–80 attorneys

Pattern

Matter intake + conflict-check + engagement-letter automation

Problem

Paralegals spending 35–50 minutes per new matter on intake form processing, conflict pre-screen against the firm's client + adverse-party index, and engagement-letter generation from templates. Mid-size firms typically open 200–600 new matters a month — the paralegal time compounds, and conflict checks slip on Friday-afternoon intakes when staff are stretched thin.

Approach

Inbound matter intake form (via firm website + email-to-case parsing) → entity-resolution + party-name normalization → conflict pre-screen against the firm's client/adverse-party index → engagement-letter pre-draft from the firm's templates scoped to the matter type → routing to the responsible partner with the conflict report + draft letter pre-attached. Paralegal reviews every conflict hit and every engagement letter; AI never sends. Privilege deployment: ring 2 — Anthropic Bedrock with customer-managed KMS keys, retention=0, audit-log of every inference for the firm's compliance lead.
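The conflict pre-screen step reduces to fuzzy name matching against the firm's index. A minimal sketch using Python's standard library, with the threshold as an illustrative assumption; see the stack line below for where it runs:

```python
from difflib import SequenceMatcher

def conflict_prescreen(new_parties, index, threshold=0.85):
    """Fuzzy-match incoming party names against the firm's client /
    adverse-party index. Every hit goes to a paralegal; the screen
    widens the net, it never clears a conflict on its own."""
    hits = []
    for party in new_parties:
        for known in index:
            ratio = SequenceMatcher(None, party.lower(), known.lower()).ratio()
            if ratio >= threshold:
                hits.append({"incoming": party, "existing": known,
                             "score": round(ratio, 2)})
    return hits
```

In production the matcher also runs on normalized entity forms (punctuation and corporate-suffix stripping) so near-variants of the same counterparty surface as one hit.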

Stack: Haiku 4.5 · Sonnet 4.6 · Bedrock + KMS · Clio + iManage · audit log
Outcome
≈ 38 minutes of paralegal time returned per new matter (modeled)
AmLaw e-discovery practice

Pattern

TAR + GenAI re-ranking + privilege-log triage on responsive doc set

Problem

E-discovery practice running on Relativity / Everlaw, baseline TAR cost of $0.30–$1.20/doc on the responsive doc set, reviewer time spent on boundary cases that TAR's classifier ranks 0.55–0.75 (the "maybe-responsive" band). Privilege-log review separately taking 35% of reviewer time on the most sensitive matters, with inconsistent privilege calls across reviewers showing up in post-production challenges.

Approach

GenAI re-ranker (Sonnet 4.6) on the top-N candidates from TAR — re-ranks the boundary band with nuance the classifier misses (sarcasm, code-named projects, implied context). Privilege-log triage runs alongside: flags inconsistent privilege calls across reviewers, drafts log entries from underlying email/doc context, surfaces docs where the privilege rationale doesn't match the metadata. Reviewer signs off on responsive set + privilege log before production. Privilege deployment: ring 1 + ring 2 — on-prem Claude or Azure OpenAI customer-managed-keys for the privileged subset; ring 3 retrieval over public case law on standard Anthropic / OpenAI commercial tier with citation logging.
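The inconsistency flag is a grouping check, not a model call. A minimal sketch assuming a hypothetical row shape from the review platform's export:

```python
from collections import defaultdict

def inconsistent_privilege_calls(review_rows):
    """review_rows: dicts with 'family_id' (email thread / doc family),
    'doc_id', 'reviewer', 'privileged' (bool) — an assumed export shape.
    Flags families where reviewers disagree; a human resolves every flag."""
    by_family = defaultdict(list)
    for row in review_rows:
        by_family[row["family_id"]].append(row)
    flags = []
    for family, rows in by_family.items():
        calls = {r["privileged"] for r in rows}
        if len(calls) > 1:  # mixed True/False within one family
            flags.append({"family_id": family,
                          "docs": [r["doc_id"] for r in rows]})
    return flags
```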

Stack: Haiku 4.5 · Sonnet 4.6 · Relativity / Everlaw · Bedrock + KMS · citation-chain log
Outcome
≈ 42% reviewer time returned on the responsive boundary band (modeled)
In-house counsel team · ~12 lawyers

Pattern

Contract review pipeline — NDAs, MSAs, SOWs · Sonnet 4.6 clause analysis

Problem

In-house team reviewing 40–80 inbound contracts a week (vendor NDAs, MSAs, SOWs, DPAs), no Ironclad budget, contract-cycle time averaging 6–9 business days on routine NDAs. Counsel spending hours on clause-by-clause comparison against the company's house playbook — work that's high-volume, repetitive, and the same clauses fail in the same ways across counterparties.

Approach

Inbound contract via shared email → entity extraction + counterparty identification → clause-by-clause comparison against the company's house playbook (loaded from the in-house team's own clause library, not a vendor's) → redline draft generated with house-language substitutions + risk-tier annotations per clause → routing to the responsible counsel with the redline + counterparty-summary pre-attached. Counsel reviews and sends; AI never returns redlines to the counterparty autonomously. Privilege deployment: ring 2 — Sonnet 4.6 on Bedrock with customer-managed keys, retention=0, audit-log retained inside the company's tenant.

Stack: Sonnet 4.6 · Bedrock + KMS · Microsoft 365 + SharePoint · audit log · house-playbook RAG
Outcome
≈ 50% contract-cycle-time reduction on routine NDAs / MSAs (modeled)
Read the full case study
how we ship legal AI in 6–8 weeks

Four stages.
With a kill point at week 6.

Every legal AI engagement we run uses the same loop: audit, pilot, ship, scale. The pilot has an explicit walk-away point at week 6 — if the metric won't move, we stop before production hardening and you don't pay phase 2. No retainer trap, no scope-creep into year-long implementations.

  1. Weeks 1–2

    Legal AI audit

    Two-week shadow with the responsible partner / GC, the practice-management lead, the e-discovery lead, and (where relevant) the firm's CISO. We rank candidate workflows by attorney/paralegal hours returned × time-to-ship × privilege-ring exposure, list the per-workflow cost band each will run at, and call out the ones that won't pay back so you don't fund them. Privilege-ring mapping (per §6) signed off by the firm's compliance lead before any build.

    90-day legal AI roadmap, ranked, with cost bands + privilege-ring assignment per workflow
  2. Weeks 3–6

    Pilot — one workflow, partner-in-loop

    We build the single highest-ROI candidate against your real stack (Relativity / Everlaw / iManage / NetDocuments / Clio / Ironclad — we integrate, we don't replace). Live behind a partner / attorney sign-off flag; citation-chain logging per ABA Op. 512 validated end-to-end; FRE 502 audit-log review with your compliance lead. Walk-away point at week 6 if the metric won't move.

    One workflow live behind a partner-in-loop flag with eval data + citation-chain audit log
    Walk-away point
  3. Weeks 7–8

    Ship to production

    Production hardening: Langfuse traces, retry + fallback policies, privilege-ring routing config (ring 1 vs ring 2 vs ring 3 deployment per content class), eval suite gated in CI, audit-log retention aligned to your firm's privilege-doc retention. Walk-through with the responsible partner + e-discovery lead + compliance.

    Production workflow + privilege-ring routing config + audit-log retention plan
  4. Ongoing

    Scale to next workflow

    Most `legal ai company` engagements run 3–5 workflows by month 6. Same eval harness, same citation-chain log, same cost-reporting cadence. Compounding learning across contract review → e-discovery → matter intake → research assist → KM Q&A. Privilege-ring mapping reviewed before each new workflow ships.

    3–5 legal AI workflows live by month 6, all under the same privilege-aware deployment topology
engagement models

Three ways to engage.
Hire us at the tier that fits where you are.

Most `ai for law firms` clients start with the 2-week audit, hire us to ship one workflow on a pilot, then move to monthly for the next three to five. Cost-per-decision reported monthly on every shipped workflow — no per-decision number, no engagement.

1–2 weeks

Legal AI audit

Find which AI workflows pay back on your firm's stack — before you commit a budget.

$3K fixed
  • Operator shadow with partner / GC / e-discovery / compliance
  • Workflow scoring: attorney/paralegal hours × time-to-ship × privilege-ring risk
  • Per-workflow cost band ($300–$2,000/mo)
  • 90-day legal AI roadmap with privilege-ring assignment per workflow
  • Honest list of workflows that won't pay back yet
Book the legal AI audit
Most teams start here
4–8 weeks

Pilot to production

Hire us to ship one legal AI workflow end-to-end, privilege-aware, partner-in-loop.

$10–25K fixed price
  • Build, integrate, deploy on Relativity / Everlaw / iManage / NetDocuments / Clio / Ironclad
  • Privilege-ring routing config tested pre-launch
  • Citation-chain logging (per ABA Op. 512) validated end-to-end
  • Eval suite, Langfuse traces, FRE 502-conscious audit log
  • Walk-away point at week 6 — no phase 2 if the metric won't move
Hire us for the pilot
Monthly

Continuous legal AI team

Embedded legal AI engineers shipping the next workflow on your roadmap.

from $5K per month
  • PM + AI engineer + legal-ops analyst, embedded
  • Per-workflow monthly cost-of-ownership report
  • Privilege-ring review before each new workflow ships
  • Cancel any time — no annual contract
Talk to a legal AI engineer
Privilege-ring routing on every workflow · Zero-retention model tenancy on rings 1 + 2 · Citation-chain logging per ABA Op. 512 · No annual contract
frequently asked — legal ai

Questions law firms ask first.
Real answers, no hedging.

What does "AI for law firms" actually mean — what do you build?

An `ai for law firms` engagement with us ships production AI workflows on your firm's stack — not slide decks, not pilots that die at month 4. The day-to-day: scope which workflow moves a P&L line (most often contract review, e-discovery re-ranking, matter intake + conflict-check automation, legal research assist, deposition summary, or KM Q&A), assign each workflow to a privilege ring (per §6), build the integration against your Relativity / Everlaw / iManage / NetDocuments / Clio / Ironclad stack, pick the right model per workflow (Sonnet 4.6 for clause reasoning + cite-grounded research, Haiku 4.5 for high-volume classification, GPT-5.4-mini for structured extraction), deploy in a privilege-aware topology (Anthropic Bedrock with customer-managed keys + retention=0 on rings 1 and 2, standard commercial tier on ring 3 with citation logging), ship behind a partner / attorney sign-off flag, then operate the workflow long enough to prove cost-of-ownership before scaling. We don't sell a product — Harvey, CoCounsel, Lexis+ AI, Westlaw Precision, Clio Duo, and Ironclad are the benchmark products you should compare us against. We're the right answer when the workflow needs to be shaped to your matter intake, your KM corpus, and your privilege topology rather than a product team's roadmap.

Are you a Harvey / CoCounsel / Lexis+ reseller? Do you replace them?

Neither — we're a development partner, not a reseller, and we integrate with the legal AI products you already run rather than replace them. Harvey is strong for AmLaw 100 generalist legal work; CoCounsel (Thomson Reuters) is strong for research + drafting integrated with Westlaw Precision; Lexis+ AI is the LexisNexis-stack equivalent; Clio Duo is the practice-management-embedded option for solo + small firms; Ironclad is the leading CLM. Each is the right answer for some firms. We build when the firm needs a workflow that those products don't ship — your firm's own clause library as the playbook, your firm's KM corpus as the retrieval source, your firm's privilege-ring topology rather than a vendor's default, your firm's DMS (iManage / NetDocuments) as the source-of-truth, or a workflow surface (matter intake from a specific channel, a custom e-discovery re-rank against a specific Relativity field schema) the product roadmap doesn't cover. We'll say in the audit if a packaged product is the better answer for your firm; we've recommended Harvey + Ironclad to firms whose scope wasn't worth a custom build.

How do you handle privilege and FRE 502 inadvertent-waiver risk?

Privilege is a content classification, not a vendor checkbox — and we ship four deployment patterns mapped to four privilege rings (see §6). Ring 1 (attorney work-product · client communications): on-prem or zero-data-retention tenancy only, no third-party logs, citation chain auditable. Ring 2 (matter documents · privileged): vendor BYOK encryption with Anthropic Claude on AWS Bedrock + customer-managed KMS keys + retention=0, or Azure OpenAI with retention disabled. Ring 3 (public-record · case law · regulatory filings): standard commercial APIs OK on Anthropic / OpenAI tiers, with citation logging required per ABA Op. 512. Ring 4 (firm marketing · KM articles · public web): any model, any vendor, normal commercial terms. The FRE 502(b) inadvertent-waiver doctrine looks at whether the firm took reasonable steps to prevent disclosure — routing ring 1 or ring 2 content through a shared-fleet API with vendor-side prompt retention is not a defensible posture, and we won't ship a workflow with that topology. The honest constraint: privilege-aware deployment costs more per inference than a consumer-API workflow. We'll quantify it in the audit before any build commits.

What about UPL — the unauthorized practice of law line?

Every workflow we ship sits on one side of the UPL line, and the rule we use to draw it is verbatim: AI does drafting, summarization, research, classification, and intake — never client-facing legal advice. ABA Formal Opinion 512 (July 2024) on GenAI use names supervision, competence, and confidentiality as the operative obligations under Model Rules 1.1, 1.6, and 5.3. The California State Bar Practical Guidance (Nov 2023) covers the same ground under California's professional-conduct rules, and the New York State Bar Task Force Report (Apr 2024) addresses the practitioner-facing duties under New York's rules. None of these soften the UPL statutes — they reinforce that AI-assisted work is the lawyer's work product, with the lawyer accountable for competence and supervision. In practice this means: a partner or responsible attorney reviews and signs every output that has client-facing consequence, the model never sends to a client or files autonomously, and the firm's supervision policy is documented in the workflow's audit log. We won't ship a workflow whose ROI depends on removing the attorney from the sign-off step.

Can you integrate with Relativity, Everlaw, iManage, NetDocuments, Clio, or Ironclad?

Yes — all six, and the integration patterns differ enough to flag upfront. Relativity: kCura API + Relativity One stored fields are the workhorses; GenAI re-rank typically runs on the candidate set surfaced by Relativity's TAR engine, with the re-ranker's confidence score written back to a custom field. Everlaw: more open API surface but smaller schema flexibility — re-rank workflows ship faster, custom field schemas need more upfront mapping. iManage: cloud + on-prem variants matter; cloud (iManage Cloud) ships in 2–4 weeks for ring-2 workflows, on-prem (Work 10 Server) adds 2–4 weeks for VPN + storage-bucket setup. NetDocuments: similar shape, with the ndOffice add-in often the right surface for the attorney-facing UI. Clio: Clio Manage API + Clio Duo coexistence is the most common scope — we build adjacent to Clio Duo rather than replacing it. Ironclad: strong API for clause-library extraction; common scope is enriching Ironclad's CLM with your firm's own playbook + a custom redline model rather than replacing Ironclad. We do not certify against every vendor's full schema; we'll show you the integration scope in the audit before the pilot.

What does AI contract review actually cost to run per contract?

An `ai contract review` workflow runs in two cost buckets. Model layer on Sonnet 4.6 with prompt caching warm: roughly $0.12–$0.45 per NDA (3–8K tokens), $0.30–$1.20 per MSA (8–20K tokens), $0.40–$2.00 per SOW (10–25K tokens) depending on length + playbook size. Infrastructure layer (DMS integration, audit-log writes, privilege-ring routing, eval-suite hits): typically $0.08–$0.25 per contract. Total: roughly $0.20–$2.25 per contract depending on type and depth of clause-library comparison. For an in-house team reviewing 200 contracts a month (mix of NDAs + MSAs), model + infra typically sits in the $300–$900/month range; counsel time returned is the economic story, not the per-contract spend. Honest cost ceiling is engineering: getting the firm's house playbook loaded right, the redline format aligned to the firm's house style, and the counsel-facing UI to a place counsel actually trusts. That work is the engagement, not the per-contract token spend.

How does AI fit into e-discovery — does it replace TAR?

No — AI sits on top of TAR, not instead of it. TAR (Technology-Assisted Review, predictive coding) is well-suited to the bulk relevance pass on a high-volume responsive doc set — see §5 quadrant Q2 — and its $0.30–$1.20/doc baseline is hard to beat at scale. GenAI's job is the boundary band: the docs TAR ranks 0.55–0.75, where the classifier is uncertain and where the nuance (sarcasm, code-named projects, context implied across an email thread) is exactly what a transformer model handles better than a logistic-regression classifier. We re-rank that band, add $0.05–$0.20/doc on top of the TAR baseline, and typically see 40–55% reviewer-time reduction on the boundary cases that drive most of the actual review spend. Privilege-log triage runs on the same set — flagging inconsistent privilege calls across reviewers, drafting log entries from the underlying content, surfacing rationale mismatches before production. Relativity and Everlaw are the platforms we ship into; Reveal is workable but less common in our engagements. The wrong answer is GenAI on the full corpus — see §5 quadrant Q3 (junk / dedupe / de-NISTing), where pure ML wins on cost.

How much does a legal AI project cost and how long does it take?

Three tiers, pricing-locked across the cluster. (1) Legal AI audit: $3K fixed, 1–2 weeks. We shadow partner / GC / e-discovery / compliance, score candidate workflows by attorney + paralegal hours returned × time-to-ship × privilege-ring exposure, deliver a 90-day roadmap with per-workflow cost bands + privilege-ring assignment + an honest "these won't pay back yet" list. (2) Pilot to production: $10–25K fixed, 4–8 weeks. One workflow shipped end-to-end on your stack, privilege-ring-aware, citation-chain-logged, partner-in-loop, with a walk-away point at week 6 — if the metric won't move, we stop before production hardening and you don't pay phase 2. (3) Continuous legal AI team: from $5K/month, no annual contract. Embedded PM + AI engineer + legal-ops analyst shipping the next workflow on your roadmap, with per-workflow monthly cost-of-ownership reporting and privilege-ring review before each new workflow ships. Most `legal ai company` engagements we run start with the audit, ship the first workflow on the pilot, then move to monthly for workflows two through five. Cost-per-decision reported monthly on every shipped workflow — no per-decision number, no engagement.

Ready to ship

Stop running another vendor pilot that dies at month 4.
Hire a legal AI development partner that ships.

Book a free 30-minute legal AI audit. We'll identify two or three high-ROI candidates from your firm's stack, map each to a privilege ring, give you a per-workflow cost band, and tell you which ones won't pay back yet. No deck, no obligation to build.

30 min, async or live Privilege-ring scoping included You leave with a written roadmap