Decision log: zero write tools for the agent
- Rejected: write back to the chart directly.
- Why: chart writes are a clinician privilege. The agent surfaces evidence, the clinician owns the action.
A US-based regional health system needed a pre-triage layer that could clear low-acuity self-care, queue everything else for a clinician with evidence attached, and page on-call instantly when a red-flag symptom set fired. We built it on Claude Sonnet 4.6, pgvector, and FHIR R4 — eval-first, BAA-scoped, with a kill point at week 7 that we used.
Wait times during evening surges were routing wrong-acuity patients to the ER. Clinicians wanted help, not a chatbot.
The client is a US-based regional health system — six hospitals, around forty ambulatory clinics, and a 24/7 nurse triage line that handles roughly 9,500 inbound patient messages per week across portal, SMS, and post-visit follow-up. Like most regional systems, they sit in the awkward middle of the market: too small to staff a 60-seat triage centre during evening peaks, too large for the nurse manager to keep eyes on every queue.
The presenting problem in the discovery shadow was specific. Pre-triage queue wait was averaging 38–62 minutes at peak, with a fat tail past two hours. The clinical lead was confident that 50–70% of inbound messages mapped to documented low-acuity self-care pathways — sore throat with no red flags, medication-refill questions, post-op wound checks that were healing fine. Patients waited anyway because there was no triage layer in front of the nurse. Worse, the nurse line was occasionally routing borderline patients home when an ER visit was indicated — not often, but enough that the medical director had named it the binding constraint on the whole program.
They had looked at generic patient-facing chatbots and turned every one of them down. The objections were operator-grade: no autonomous routing on acuity, no PHI leaving the BAA perimeter, no advice generated without grounding in their own clinical-pathway corpus, no metric that wasn't measurable on a frozen eval set. The conversation we walked into was not "should we ship AI" — it was "show us how a triage agent could fail, and tell us how you'd catch it before a patient gets hurt."
That framing decided the engagement. We refused to scope this as a chatbot build. The deliverable was a structured-output triage agent with a clinician override on every non-trivial routing decision, evidence chunks attached to every claim, and a frozen eval set that gated every release. The rest of the page is what we shipped.
Every patient message runs the same six-stage pipeline. The agent is allowed to clear, queue, or escalate — nothing else. Forced-JSON output with cited evidence is the only legal shape of a decision.
The architecture below is the production shape, not a marketing diagram. FHIR R4 pulls a scoped slice of the patient's chart (Patient + Encounter + recent Observation resources, scoped via a JWT issued on behalf of the on-call clinician). The PHI redaction pass strips identifiers using a regex pre-pass plus a clinical NER model fine-tuned on the i2b2 corpus; a reversible token map sits in BAA-scoped Postgres so the reasoning trace stays auditable without ever shipping raw PHI to the model. A sketch of the token map is below.
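A minimal sketch of that redact-and-rehydrate pair, assuming a hypothetical span detector (the regex pre-pass plus NER) that returns character spans; names and signatures here are illustrative, not the production module:

```ts
// phi/redact.ts: illustrative sketch of the reversible token map.
// Assumes an upstream detector returns non-overlapping PHI spans.
import { randomBytes } from "crypto";

interface PhiSpan { start: number; end: number; kind: string } // e.g. "NAME", "MRN"

export function redact(text: string, spans: PhiSpan[]) {
  const tokenMap = new Map<string, string>(); // token -> original PHI; persisted in BAA-scoped Postgres
  let out = "";
  let cursor = 0;
  for (const s of [...spans].sort((a, b) => a.start - b.start)) {
    const token = `[${s.kind}_${randomBytes(4).toString("hex")}]`;
    tokenMap.set(token, text.slice(s.start, s.end));
    out += text.slice(cursor, s.start) + token; // PHI replaced by an opaque token
    cursor = s.end;
  }
  out += text.slice(cursor);
  return { redacted: out, tokenMap }; // the model only ever sees `redacted`
}

// Inverse of redact(): rehydrate a trace for the audit-log viewer.
export function rehydrate(redacted: string, tokenMap: Map<string, string>) {
  let text = redacted;
  for (const [token, phi] of tokenMap) text = text.split(token).join(phi);
  return text;
}
```

The map never leaves the BAA perimeter; auditors rehydrate inside the VPC, and the model request carries only tokenized text.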
Retrieval is hybrid. pgvector 0.7 sits on the embedding side; a BM25 index built from Postgres `tsvector` sits on the lexical side. We fuse them with reciprocal-rank fusion, take the top-40, and rerank with BAAI's bge-reranker-large self-hosted on a single g5.xlarge inside the customer VPC. Every chunk in the corpus carries a pathway-id, so when the model cites a chunk we can trace the citation back to the source pathway document the clinical team actually maintains.
Retrieval params are tuned, not defaulted. Chunks are 480 tokens with 80-token overlap, anchored on sentence boundaries to keep clinical claims intact. Embeddings come from voyage-3-large at 1,024 dimensions, chosen because Voyage offered a BAA at the same price tier as the cheaper voyage-3-lite — and the lite variant dropped recall@5 by four points on our eval set. The BM25 lane uses Postgres tsvector with English stemming over the same chunks. Fusion is RRF with k=60 (the paper default), top-40 from each lane, deduplicated by chunk id, reranked, top-12 to the model.
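Under those parameters, the fusion step itself is small. A sketch, with `vectorLane` and `bm25Lane` standing in for the pgvector and tsvector queries (each returning chunk ids ranked best-first); the scoring is the standard RRF formula, score = Σ 1/(k + rank) with k = 60:

```ts
// retrieval/fuse.ts: reciprocal-rank fusion over the two lanes (illustrative).
const RRF_K = 60; // paper default

export function rrfFuse(vectorLane: string[], bm25Lane: string[], topN = 40): string[] {
  const scores = new Map<string, number>();
  for (const lane of [vectorLane, bm25Lane]) {
    lane.forEach((chunkId, rank) => {
      // rank is 0-based, so rank + 1 is the 1-based position in the lane
      scores.set(chunkId, (scores.get(chunkId) ?? 0) + 1 / (RRF_K + rank + 1));
    });
  }
  // Dedup by chunk id is implicit: one score entry per chunk across both lanes.
  return [...scores.entries()]
    .sort((a, b) => b[1] - a[1])
    .slice(0, topN)
    .map(([chunkId]) => chunkId);
}
// The fused top-40 goes to bge-reranker-large; the reranker's top-12 goes to the model.
```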
The decision step is Claude Sonnet 4.6 with `response_format: json_schema`. The model has three read-only tools and zero write tools — it cannot write to the chart, it cannot escalate, it cannot send a message. All it produces is a JSON object: routing decision, acuity band, cited evidence-chunk ids, and a structured rationale. Every claim in the rationale has to point to a chunk id or the schema validator rejects the output and the request retries with a stricter prompt.
Guardrails live as TypeScript code checked into the same repo as the agent. The policy layer enforces a two-eye rule on anything routed to a clinician queue; refuses to act on chart slices flagged with active pregnancy without obstetric context or a pediatric encounter under three years (both routed straight to a human); and audit-logs, per decision, the evidence chunks, model version, redaction map, and any clinician override. A compressed sketch of its shape is below.
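This is illustrative shape only, not the production module; `AgentDecision` and `ChartSlice` are stand-ins for the forced-JSON decision (schema shown further down) and the scoped FHIR pull:

```ts
// policy/gate.ts: sketch of the policy layer's decision surface.
interface AgentDecision { routing: "clear" | "queue" | "escalate"; confidence: number }
interface ChartSlice { ageYears: number; pregnancyWithoutObContext: boolean }

type Verdict =
  | { kind: "pass"; requiresHumanSignoff: boolean } // still audit-logged either way
  | { kind: "route_to_human"; reason: string };     // agent is not allowed to decide here

export function policyGate(decision: AgentDecision, chart: ChartSlice): Verdict {
  // Hard refusal conditions: the agent never decides these cases.
  if (chart.ageYears < 3) return { kind: "route_to_human", reason: "pediatric_under_3y" };
  if (chart.pregnancyWithoutObContext)
    return { kind: "route_to_human", reason: "pregnancy_without_ob_context" };

  // Two-eye rule: anything headed for a clinician queue needs a second
  // (human) sign-off; only a `clear` completes without one.
  return { kind: "pass", requiresHumanSignoff: decision.routing !== "clear" };
}
```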
The reason this shape works is the same reason it was scoped this way at week 1: every component has a separately measurable contract. Retrieval is measurable in top-k recall on the eval set. The reranker is measurable in top-1 precision on the held-out slice. The model decision is measurable as labelled acuity-band correctness. The calibration head, which maps the model's raw confidence to a calibrated probability, is measurable in ECE. The guardrails are measurable in policy-rejection rate vs. clinician-override rate. When something regresses, the per-component metric tells us which stage to look at — not a single end-to-end number that hides which subsystem broke.
We also use Langfuse for per-decision tracing in the customer VPC. Every production decision retains its retrieval candidates, reranker scores, raw model output, parsed JSON, policy-check result, and the final routing — searchable by clinician override status. That trace store is what the clinical lead reviews weekly. It is also what we used to find the week-7 calibration bug; we will get to that in the timeline section below.
Every patient message enters at the top. It either clears to a self-care pathway, lands in a clinician queue with structured evidence attached, or escalates stat. Hover any stage to see its tool inventory and latency budget.
latency budgets above are p50/p95 on the production traffic mix · end-to-end p95 ≈ 3.1 s, inside the 3.5 s SLO
Everything in the build is a thing your security team can write a question about. Nothing in the build is `our proprietary AI`. Vendor swap-out cost is bounded because the eval set, prompts, and policies are all checked into the customer's repo — not ours.
The numbers below are from the current production cut. Latency is measured at the agent boundary; cost math uses Anthropic's published Sonnet 4.6 pricing as of May 2026; eval composition is the frozen 412-item set the CI gates on.
Most clinical-AI case studies stop at the architecture diagram. Ours doesn't, because our buyers don't. The two people who decide whether to sign — the clinical informatics lead and the head of security — open a case study and look for specific things: per-stage latency with p95 not just p50, a token-cost line that ties to the model vendor's published price card, a frozen eval set with category-level thresholds, and an honest accounting of what runs where for BAA scope. Vendors who don't show this either don't have it or are hiding it. The section below is the version of our pilot that maps directly to those questions. Every number is reproducible from a Langfuse trace, a Postgres `EXPLAIN ANALYZE`, or a published vendor price page.
| stage | p50 (ms) | p95 (ms) | tooling |
|---|---|---|---|
| FHIR resource pull | 92 | 140 | Epic on-FHIR + athenahealth APIs · cached Patient + scoped Encounter |
| PHI redaction | 78 | 120 | Regex pre-pass + i2b2-fine-tuned clinical NER (DistilBERT base) |
| Hybrid retrieval | 112 | 180 | pgvector cosine top-40 ∥ Postgres tsvector BM25 top-40 → RRF k=60 |
| Cross-encoder rerank | 240 | 340 | BAAI/bge-reranker-large · g5.xlarge in customer VPC · top-12 |
| Claude Sonnet 4.6 decision | 1740 | 2180 | Anthropic API · response_format json_schema · ~3,400 in / ~480 out tokens |
| Policy + 2-eye validation | 14 | 22 | TypeScript runtime · Zod schema · audit-log write |
| Total (end-to-end) | 2280 | 3098 | agent boundary — excludes clinician-side queue render |
p50/p95 from 30-day rolling window over n ≈ 41,200 production decisions. SLO is p95 ≤ 3,500 ms; current burn ≈ 88%.
The retrieval lane is where most of the per-stage tuning effort went. The corpus is ≈ 1,400 pathway pages, chunked as described above (480 tokens, 80-token overlap, sentence boundaries): short enough that the reranker score is meaningful, long enough that a clinical claim doesn't get cut in half. We tried voyage-3-lite before settling on voyage-3-large; recall@5 dropped four points on the eval, and the 35% embeddings cost saving wasn't worth shipping a measurably worse retriever. With the fusion recipe from the architecture section (we looked for a better RRF k than the paper-default 60 on the held-out slice and found none), eval-set recall@5 after fusion + rerank is 0.91. Recall@1 is 0.78, high enough that the model's first cited chunk is almost always load-bearing.
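Recall@k on that slice is computed the obvious way; a sketch, assuming each eval item labels exactly one gold pathway chunk (an illustrative simplification):

```ts
// eval/recall.ts: recall@k over the frozen retrieval-eval items.
interface RetrievalEvalItem { goldChunkId: string; retrievedIds: string[] } // post fusion + rerank

export function recallAtK(items: RetrievalEvalItem[], k: number): number {
  const hits = items.filter(i => i.retrievedIds.slice(0, k).includes(i.goldChunkId)).length;
  return hits / items.length;
}
// recallAtK(items, 5) -> 0.91 on the current cut; recallAtK(items, 1) -> 0.78
```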
```ts
// triage/schema/decision.ts
// Forced-JSON decision schema. Validated client-side too; if the
// model produces something that doesn't parse, we retry once with
// a stricter system prompt, then fail closed (queue for clinician).
import { z } from "zod";

export const TriageDecision = z.object({
  routing: z.enum([
    "clear",    // safe for documented self-care; no clinician needed
    "queue",    // route to nurse queue with this agent's reasoning attached
    "escalate", // page on-call clinician now (stat criteria)
  ]),
  acuity_band: z.enum(["1-self-care", "2-routine", "3-same-day", "4-urgent", "5-stat"]),
  confidence: z.number().min(0).max(1),
  rationale: z
    .array(
      z.object({
        claim: z.string().min(40).max(420),
        evidence_id: z.string().regex(/^chunk_[a-f0-9]{12}$/),
        pathway_id: z.string(),
      })
    )
    .min(1)
    .max(8),
  refused: z.boolean().describe(
    "True if the agent decided it cannot decide — pediatric < 3y, " +
    "active pregnancy without OB context, or any rationale failed to ground."
  ),
});

export type TriageDecision = z.infer<typeof TriageDecision>;
```
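The retry-then-fail-closed behaviour described in the header comment looks roughly like this; `callModel` is a hypothetical stand-in for the Anthropic request, not a real SDK signature:

```ts
// triage/decide.ts: parse, retry once with the stricter prompt, fail closed.
import { TriageDecision } from "./schema/decision";

declare function callModel(opts: { strict: boolean }): Promise<unknown>;

export async function decide(): Promise<
  { ok: true; decision: TriageDecision } | { ok: false; reason: "fail_closed_queue" }
> {
  for (const strict of [false, true]) {
    const raw = await callModel({ strict }); // strict=true swaps in the stricter system prompt
    const parsed = TriageDecision.safeParse(raw);
    if (parsed.success) return { ok: true, decision: parsed.data };
  }
  // Fail closed: an unparseable decision is queued for a clinician
  // with no agent rationale attached.
  return { ok: false, reason: "fail_closed_queue" };
}
```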
| line item | $ / decision | $ / month (≈ 41k decisions) | note |
|---|---|---|---|
| Claude Sonnet 4.6 — input tokens | $0.0102 | $418 | 3,400 tokens × $3.00 / 1M |
| Claude Sonnet 4.6 — output tokens | $0.0072 | $294 | 480 tokens × $15.00 / 1M |
| voyage-3-large embeddings (avg query) | $0.0004 | $16 | ≈ 3,300 tokens × $0.12 / 1M |
| pgvector + RDS db.m6i.large | — | $284 | BAA-scoped Postgres; embeddings + tsvector |
| g5.xlarge reranker (24/7) | — | $378 | BAAI bge-reranker-large self-host |
| Cloudflare Workers (BAA-eligible) | — | $128 | edge + audit log shipping |
| Langfuse self-hosted (t3.medium) | — | $67 | trace store; 90-day hot / 7-yr cold |
| All-in monthly | ≈ $0.039 | ≈ $1,585 | vs. ≈ $7,900 / mo to add one triage nurse |
Token costs use Anthropic's public Sonnet 4.6 pricing as of May 2026 — $3 / 1M input, $15 / 1M output. Infra costs are AWS US-east-2 list price; client paid less under EDP. Payback period from go-live (including the 9-week build at $185k) was ≈ 6.2 months.
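The token rows reproduce directly from those unit prices; a quick check using the figures above:

```ts
// cost/perDecision.ts: reproduce the token-cost rows from the published unit prices.
const SONNET_IN = 3.0 / 1e6;   // $ per input token (May 2026 price card)
const SONNET_OUT = 15.0 / 1e6; // $ per output token
const VOYAGE = 0.12 / 1e6;     // $ per embedding token

const perDecision =
  3_400 * SONNET_IN +  // ≈ $0.0102
  480 * SONNET_OUT +   // ≈ $0.0072
  3_300 * VOYAGE;      // ≈ $0.0004

const modelSpendPerMonth = perDecision * 41_000; // ≈ $730; fixed infra makes up the rest of ≈ $1,585
```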
| category | items | what it checks | ci-gate threshold |
|---|---|---|---|
| Acuity-decision golds | 80 | labelled routing + correct acuity band on real (de-identified) encounters | ≥ 0.90 precision @ 1% FPR |
| PHI redaction | 60 | spans of PHI correctly redacted; reversible-token map intact | ≥ 0.99 token recall |
| Retrieval recall | 120 | correct pathway chunk in top-5 after RRF + rerank | ≥ 0.90 recall@5 |
| Groundedness | 100 | every rationale claim points to a retrieved chunk id that supports it | ≥ 0.93 groundedness |
| Refusal / adversarial | 52 | pediatric < 3y, active pregnancy w/o OB, jailbreak attempts, OOD cases | 100% refusal on listed must-refuse |
Eval set is frozen — items only added, never edited. The clinical lead signs off on any addition. CI fails the release if any category drops more than 1 point from the prior cut; the release engineer can override with a signed CHANGELOG entry.
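The 1-point gate is mechanical; a sketch of the per-category check CI runs (names illustrative):

```ts
// ci/evalGate.ts: fail the release if any category regresses more than 1 point.
interface CategoryScore { category: string; score: number } // scores in [0, 1]

export function evalGate(prior: CategoryScore[], current: CategoryScore[]): string[] {
  const prev = new Map(prior.map(c => [c.category, c.score]));
  return current
    .filter(c => {
      const before = prev.get(c.category);
      // A drop of more than 0.01 (one point) from the prior cut red-lights the release.
      return before !== undefined && before - c.score > 0.01;
    })
    .map(c => c.category);
}
// A non-empty return fails CI; a signed CHANGELOG entry is the only override.
```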
Production ops cadence is also part of the build, not an afterthought. The clinical lead and our on-call engineer hold a weekly override-review meeting where every queued case in which the agent's recommendation differed from the nurse's gets opened — drift that looks systematic (more than three of the same pattern in a week) becomes a JIRA ticket against the eval set and a candidate fine-tune slice. Langfuse trace retention is 90 days hot in the customer VPC plus seven years cold in BAA-scoped S3, matching their HIPAA retention policy. Our on-call rotation runs two engineers a week against a 99.5% pipeline-availability SLO and the 95th-percentile-under-3.5s decision SLO. The security team pulls an audit-log sample every month — model version, retrieval candidates, redaction map, policy-check verdict, clinician override. Nothing in this section is published anywhere else by anyone shipping clinical agents. That's the bar.
Five stages, milestone-billed. The week-7 shadow run found a calibration bug on borderline-acuity cases that would have hurt patients in production. We halted cutover, re-fit the calibration head, re-ran the eval, and only then promoted to primary. The honest version of `9 weeks` includes the week we sat on our hands.
Two weeks shadowing the nurse triage line. 412 frozen eval items written by the clinical lead from real (de-identified) past encounters. Each item carries a labelled correct routing decision and the clinical reasoning behind it. We wrote the harness; clinicians wrote the answers.
Ingested the existing clinical-pathway document set (≈ 1,400 chunked pages) into pgvector 0.7 inside the customer VPC. Built the BM25 sidecar over the same chunks. Reciprocal-rank fusion tuned on a held-out eval slice; cross-encoder rerank added when top-1 recall plateaued.
LangGraph 0.2.x agent with three read-only tools. Zero write tools by design. Forced-JSON decision via Anthropic's response_format. Policy-as-code in TypeScript shipping next to the agent — every routing decision is gated and audit-logged before it touches a clinician queue.
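The tool inventory is where `zero write tools` is enforced. A framework-agnostic sketch of that surface (the production agent registers the equivalents with LangGraph; names and shapes here are illustrative):

```ts
// agent/tools.ts: the whole tool surface. Three read-only tools, zero write tools.
type ReadOnlyTool = {
  name: string;
  description: string;
  run: (input: Record<string, unknown>) => Promise<unknown>; // read path only
};

export const tools: ReadOnlyTool[] = [
  {
    name: "get_chart_slice",
    description: "Scoped FHIR pull: Patient + Encounter + recent Observations",
    run: async (_input) => ({ /* read-only FHIR call */ }),
  },
  {
    name: "search_pathways",
    description: "Hybrid retrieval (pgvector + BM25 + rerank) over the pathway corpus",
    run: async (_input) => ([ /* top-12 chunks with pathway ids */ ]),
  },
  {
    name: "get_pathway_doc",
    description: "Fetch a full pathway document by pathway_id",
    run: async (_input) => ({ /* Postgres read */ }),
  },
];
// Escalation paging, patient messaging, and chart writes do not exist as tools;
// those actions belong to the policy layer and to clinicians.
```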
Two weeks of silent shadow against the live nurse triage line. On day 4, the clinical lead flagged a calibration drift on borderline-acuity cases: the model was confident on cases where the correct answer was 'queue for clinician', not 'clear'. We halted cutover, re-fit the calibration head on a fresh slice, and re-ran the eval. The honest version of `shipped on time` includes this step.
Promoted to primary triage with the nurse line in active-standby. Four clinician training sessions on the override flow and the audit-log viewer. PagerDuty wired to the stat-escalation lane. Old nurse line stays on for 30 days post-cutover by policy — every diff between agent + human is logged for review.
The eval set is frozen. Every model change, prompt change, retrieval change, and policy change re-runs the full 412. Nothing ships if any metric red-lights against its target. Numbers below are from the current production cut and the frozen eval slice; live shadow-traffic numbers are within ±2% across all rows over the last 30 days.
Sample size for the production wait-time number is n=14,200 patient encounters across the two-week shadow window; the 38–62% reduction range is the 95% confidence interval, not a point estimate. ECE is expected calibration error on the labelled 412-item set. P95 latency is end-to-end from FHIR pull to JSON decision, measured at the agent boundary (excludes clinician-side queue render). Refusal rate is the share of inputs where the agent legally cannot decide and routes straight to a clinician — by design, not by failure.
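ECE here is the standard binned formulation: bucket decisions by reported confidence, then take the size-weighted average gap between each bucket's mean confidence and its empirical accuracy. A sketch:

```ts
// eval/ece.ts: expected calibration error over the labelled eval set.
interface ScoredDecision { confidence: number; correct: boolean }

export function ece(decisions: ScoredDecision[], bins = 10): number {
  const buckets: ScoredDecision[][] = Array.from({ length: bins }, () => []);
  for (const d of decisions) {
    const b = Math.min(bins - 1, Math.floor(d.confidence * bins));
    buckets[b].push(d);
  }
  let total = 0;
  for (const bucket of buckets) {
    if (bucket.length === 0) continue;
    const meanConf = bucket.reduce((s, d) => s + d.confidence, 0) / bucket.length;
    const accuracy = bucket.filter(d => d.correct).length / bucket.length;
    total += (bucket.length / decisions.length) * Math.abs(meanConf - accuracy);
  }
  return total; // lower is better; the week-7 bug surfaced as a spike on the borderline-acuity slice
}
```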
Each link below covers a pillar that fed into this build — or that a similar build on your stack would draw from.
01 · The healthcare pillar — BAA-scoped delivery, PHI redaction, clinician-in-loop posture across triage, ambient scribe, and prior-auth. Read more
02 · The agent pillar — ReAct, plan-and-execute, hierarchical multi-agent recipes. Same eval-first loop used on this triage build. Read more
03 · Sonnet 4.6 + Haiku 4.5 integration patterns. Forced JSON, Constitutional-AI posture, BAA-eligible deployment options. Read more
04 · Six AI case studies — RAG, agents, voice, and chatbots. Same operator detail across every page. Read more
05 · $3K fixed-fee audit. We map the workflow, scope the eval, and tell you whether it's case-study-shaped. Read more
06 · Policy-as-code, audit-log scaffolding, BAA + DPA templates. The plumbing that made this pilot pass a security review. Read more

Book a $3K fixed-fee audit. We'll review the workflow, scope the eval set, recommend a model + retrieval recipe, project token + run-cost, and tell you honestly whether it's case-study-shaped. We'll also tell you if it isn't — about one audit in five ends with `buy the platform, here's the SOW for integration.`