ai in education · production

AI in education,
shipped on your LMS — not on a slide deck.

We're the `ai in education` development company you hire to build production AI on your Canvas, Blackboard, Moodle, Schoology, Google Classroom, or PowerSchool stack — AI tutoring that scaffolds worked examples inside the assignment view, AI essay grading that drafts a rubric-coded score the teacher overrides, AI quiz and lesson-plan generators that respect your standards alignment, special-ed assist inside the IEP team's loop, and LMS-integrated academic advisors. FERPA + COPPA + state-privacy attestations signed before any pilot; cohort-level bias audit on every grading workflow; WCAG 2.1 AA on every UI. First workflow live in 6–8 weeks behind a teacher-in-loop flag.

student journey · ai touchpoints
01 · K–5 elementary · ages 5–10
02 · 6–12 secondary · ages 11–18
03 · Undergrad · ages 18–22
04 · Workforce / L&D · ages 22+

6–8 wk
first AI-in-education workflow live behind a teacher-in-loop flag
LTI 1.3
Canvas · Blackboard · Moodle · Schoology · Classroom · PowerSchool
$200–$1.6K
monthly model + infra cost band per shipped workflow
$3K
AI-in-education audit-to-roadmap before any build starts
why an ai-in-education partner, not a vendor pitch deck

What changed.
And why operator-grade AI in classrooms matters now.

`Ai in education` and `artificial intelligence in education` had three product cycles in a decade — the LMS cycle, the analytics cycle, the adaptive-learning cycle. This one is different because the unit economics finally work per-workflow — `education ai` isn't a strategic narrative anymore, it's $0.008–$0.10 per-decision plumbing that returns teacher time inside a term on the right workflow. Three things an `ai for education` company should be honest about before you scope your first build.

From edtech products to in-classroom AI workflows

Yesterday's edtech category was a tool catalog — pick a SaaS, train the staff, hope adoption climbs. Today's `ai in education` stack is workflow-level — an AI tutor that lives inside the LMS assignment view, an essay grader that drafts a rubric-coded score the teacher overrides, a lesson-plan generator that respects the district's standards alignment. The unit isn't a product; it's a workflow that returns instructor time week-over-week.

Models picked per assessment, not per vendor

An `ai tutor` that scaffolds worked examples is a Sonnet 4.6 quality call (a wrong-headed scaffold sets a student back a week). High-volume `ai quiz generator` items are GPT-5.4-mini cost calls. An `ai academic advisor` answering degree-audit questions runs on Haiku 4.5 and hands off the moment a financial-aid or graduation-risk phrase appears. The same LMS integration runs all three with different model picks per route.
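A minimal sketch of what that routing layer looks like in practice. The workflow names, trigger phrases, and model ID strings are illustrative placeholders, not a production config:

```python
# Per-workflow model routing: quality tier for tutoring, cost tier for quiz
# items, fast tier for advising, with a hard handoff gate on the advisor route.
ROUTES = {
    "tutor_scaffold":   {"model": "claude-sonnet-4.6"},   # quality call
    "quiz_generator":   {"model": "gpt-5.4-mini"},        # cost call
    "academic_advisor": {"model": "claude-haiku-4.5"},    # volume call
}

# Hypothetical trigger list; production would use a classifier, not substrings.
HANDOFF_PHRASES = ("financial aid", "graduation risk")

def pick_route(workflow: str, message: str) -> dict:
    """Return the model config for a workflow, or a human-handoff marker."""
    if workflow == "academic_advisor" and any(
        phrase in message.lower() for phrase in HANDOFF_PHRASES
    ):
        return {"model": None, "handoff": "human_advisor_queue"}
    return ROUTES[workflow]
```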

The buyer is the integrity policy, not the procurement officer

Districts and universities don't sign off on `ai for education` because the demo looked good — they sign off because the AI-augment / AI-assist / AI-block boundary is mapped against their academic-integrity policy before anything ships. We map that boundary in §5 (Bloom × AI-band rubric) and refuse to ship anything in a credential-bearing AI-BLOCK cell — full stop.

ai in education, by instructor hours returned

Six AI workflows we ship for K-12, higher-ed, and L&D.
Ranked in the audit, not the slide deck.

These are the six `ai in classrooms` workflows that consistently return instructor time in the audits we run. Not every institution needs all six; most teams have a high-ROI candidate in three of them. The audit ranks yours so you don't have to guess which to fund first. The buyer reality: the highest-CPC keywords in this cluster are `ai tutoring` ($5.11), `claude for education` ($38.32 peak), and `openai education` ($78.56 peak). Institutional buyers are spending real money on these, not on `ai detector` features.

AI tutoring · 1:1 worked-example scaffolds inside the LMS

`Ai tutoring` ($5.11 CPC, 8,100 vol) is the highest-buyer-intent workflow in this cluster. The pattern that ships: a subject-specific tutor that lives inside the LMS assignment view, scaffolds worked examples at an instructor-set difficulty band, refuses to solve the assessed problem outright, and writes a weekly mastery report to the teacher. Sonnet 4.6 for quality; Haiku 4.5 for multi-section / district-scale volume. The wrong way to ship this is autonomous answer delivery; the right way is Socratic scaffolds with teacher visibility on every session.
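A sketch of that guardrail posture, assuming a prompt-level scaffold plus a teacher-facing aggregate; the difficulty-band variable and report fields are assumptions for illustration, not a shipped spec:

```python
# Prompt-level guardrails for the Socratic tutor; every session is logged
# so the teacher sees what students asked and how the tutor responded.
TUTOR_SYSTEM_PROMPT = """\
You are a subject tutor embedded in the LMS assignment view.
- Scaffold worked examples on similar-but-not-identical problems.
- Never produce the final answer to the assessed problem.
- Stay inside the instructor-set difficulty band: {band}.
- Ask one Socratic question per turn before revealing the next step.
"""

def weekly_mastery_report(sessions: list[dict]) -> dict:
    """Aggregate a student's session logs into the teacher-facing report."""
    return {
        "sessions": len(sessions),
        "hints_requested": sum(s["hints"] for s in sessions),
        "stuck_topics": sorted({t for s in sessions for t in s["stuck_topics"]}),
    }
```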

AI essay grading · rubric-anchored, teacher overrides every mark

`Ai grading` (5,400 vol) and `ai essay grading` ship as workflows that score student work against the rubric and produce per-criterion rationale a teacher can override. The honest scoping: AI does not give the final grade; the teacher does, after reviewing the AI's draft. We run a cohort-level bias audit before deployment (does the AI score the cohort disproportionately by demographic flag?) and ship the audit log to the teacher's dashboard. `Ai for grading papers` shipped without that audit is a lawsuit.
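A deliberately crude sketch of the cohort-level check. A production audit would use a proper statistical test and institution-approved demographic categories, so treat the one-sigma threshold here as a placeholder:

```python
from statistics import mean, stdev

def bias_audit(scored: list[dict], flag_key: str = "demographic_flag") -> list[str]:
    """Flag any subgroup whose mean rubric score sits more than one cohort
    standard deviation from the cohort mean.
    scored: [{"score": 3.5, "demographic_flag": "group_a"}, ...]"""
    cohort = [row["score"] for row in scored]
    mu, sigma = mean(cohort), stdev(cohort)
    flagged = []
    for group in {row[flag_key] for row in scored}:
        group_scores = [row["score"] for row in scored if row[flag_key] == group]
        if abs(mean(group_scores) - mu) > sigma:   # crude skew heuristic
            flagged.append(group)
    return sorted(flagged)
```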

AI quiz generator · standards-aligned, teacher-approved

`Ai quiz generator` workflows that read the lesson plan and the standards alignment (Common Core, NGSS, state framework), produce items at varied Bloom levels, and tag each item for reading level + accessibility (alt-text on images, screen-reader-friendly format). Teacher reviews and approves before the quiz reaches students — formative only by default; summative quizzes require additional review. Runs on GPT-5.4-mini for cost efficiency at district scale.
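One way to make "teacher approves before publish" concrete is a structured item schema whose default status blocks publication. Field names below are assumptions, not the shipped contract:

```python
from pydantic import BaseModel

class QuizItem(BaseModel):
    stem: str
    choices: list[str]
    answer_index: int
    bloom_level: int                      # 1 (Remember) through 6 (Create)
    standard_code: str                    # e.g. a Common Core or NGSS identifier
    reading_level: str
    image_alt_text: str | None = None     # accessibility tag, required whenever an image ships
    status: str = "pending_teacher_review"   # never "published" at generation time
```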

AI lesson plan generator · curriculum-aligned drafts

`Ai lesson plan generator` workflows that draft a lesson plan against the standards, the textbook chapter, the student-cohort reading level, and the teacher's preferred pedagogical pattern (5E, gradual-release, inquiry-based). Teacher edits the draft; the AI never publishes autonomously. Returns 30–45 minutes per lesson on the planning side — and the time saved goes back into student-facing work, which is the only ROI metric that matters in K-12.

AI for special education · IDEA-scoped assist inside the IEP team

`Ai for special education` workflows that sit inside the IEP team's loop — reading-comprehension scaffolds tuned to the student's reading level, dyslexia-aware spelling support, AAC vocabulary suggestions, social-story generation for SEL goals. Hard constraints: the AI never sets the IEP goal, never logs progress data autonomously, and every output is accessible per Section 508 + WCAG 2.1 AA. IDEA compliance is the prerequisite, not the afterthought.

AI academic advisor · degree-audit Q&A with human-handoff

`Ai academic advisor` workflows for higher-ed — answers degree-audit questions, routes prerequisite conflicts, flags drop/add deadlines, summarizes the registrar's policy in plain language. Hard handoff to a human advisor on graduation-risk flags, mental-health phrases, financial-aid edge cases, and Title IX adjacent topics. Runs 24/7 on Haiku 4.5; complex edge cases escalate to a queued human-advisor ticket. Reduces routine ticket volume 40–55% in the audits we've run.
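A sketch of the hard-handoff gate, with the AI's draft attached to the escalation ticket. The phrase lists are illustrative; a production gate would pair keyword checks with a model-based classifier:

```python
ESCALATION_PHRASES = {
    "financial_aid":   ["fafsa", "aid eligibility"],
    "graduation_risk": ["credits short", "won't graduate"],
    "wellbeing":       ["overwhelmed", "hopeless"],
}

def route_ticket(question: str, ai_draft: str) -> dict:
    """Auto-answer routine tickets; escalate flagged ones with the AI draft
    attached so the human advisor doesn't start from zero."""
    text = question.lower()
    for reason, phrases in ESCALATION_PHRASES.items():
        if any(phrase in text for phrase in phrases):
            return {"queue": "human_advisor", "reason": reason, "ai_draft": ai_draft}
    return {"queue": "auto_answer", "ai_draft": ai_draft}
```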

Don't see your education workflow?

The highest-ROI AI workflow on your campus is usually one we haven't listed. Bring it to the 2-week audit — we'll rank it against the Bloom × AI-band rubric and tell you if it ships.

Tell us yours
the learning-outcome rubric

Where AI belongs
across Bloom's, by assessment type.

Education has a two-axis question, not a binary one. Bloom's cognitive level on Y, AI-use band on X. Eighteen cells, eighteen different answers. We map every assessment your team runs against this rubric before deciding which workflows to ship, and which to refuse. The cells that say BLOCK are the credential-bearing ones; everything to the left is where AI returns teacher time. A minimal lookup sketch follows the cell-by-cell detail below.

bloom × ai-band
Bands: AI-AUGMENT (AI runs in-loop) · AI-ASSIST (teacher reviews) · AI-BLOCK (proctored · no AI)
L6 Create · design · generate · invent
  • AI-AUGMENT Brainstorm scaffold (early draft) Sonnet 4.6

    AI proposes idea seeds for student to react to — not a finished work. The student selects, rejects, extends. Output is recorded but not graded; the graded artifact is the final piece students produce later.

  • AI-ASSIST Rubric-anchored draft critique Sonnet 4.6

    AI critiques the student's draft against the rubric and suggests revisions. Teacher reviews the AI's critique before it reaches the student. Final mark on the resubmitted draft stays with the teacher.

  • AI-BLOCK Supervised lab notebook / capstone defense

    Credential-bearing original-work assessments. AI is blocked at the workstation level; sessions are proctored; lockdown browser enforced. Capstone defenses are oral or in-person.

L5 Evaluate · critique · justify · defend
  • AI-AUGMENT Counterargument generator (debate prep) Sonnet 4.6

    AI generates opposing-view counterarguments for students to refute. Builds evaluation skills by giving students more positions to assess than a teacher can produce alone.

  • AI-ASSIST Rubric-scored persuasive essay Sonnet 4.6

    AI scores against rubric criteria, teacher reviews + overrides every grade. Cohort-level bias audit runs before deployment. Students receive the rubric-coded feedback, never raw AI scores.

  • AI-BLOCK Bar / licensure / board exam

    High-stakes credentialing. AI access is blocked at the testing center; lockdown browser; proctor in room or via webcam. Re-take policy applies per credentialing body.

L4 Analyze · compare · contrast · diagnose
  • AI-AUGMENT Data-exploration co-pilot (statistics) Sonnet 4.6 + code-interpreter

    Student loads a dataset and asks the AI to surface descriptive statistics + initial visualizations. Student interprets and writes up; AI does not interpret causally.

  • AI-ASSIST Source-credibility critique Haiku 4.5

    AI surfaces source-credibility signals (publication date, publisher type, citation count). Student writes the analysis; teacher reviews the AI flags before grading.

  • AI-BLOCK Proctored differential diagnosis (clinical)

    Clinical or technical analysis where the AI's pattern-match would defeat the assessment's purpose. Proctored, no-AI environment; lockdown enforced.

L3 Apply · use · solve · implement
  • AI-AUGMENT Worked-example tutor (math, programming) Sonnet 4.6

    Step-by-step Socratic scaffolding to a similar-but-not-identical problem. Never solves the assessed problem outright; instructor-set difficulty band; weekly mastery report goes to the teacher.

  • AI-ASSIST Code-review draft (intro CS) Haiku 4.5

    AI reviews student code for correctness and idiom; teacher checks the AI review and gives final feedback. Student sees a combined rubric-coded comment, not raw AI output.

  • AI-BLOCK Practicum / clinical / skills exam

    In-person skills demonstration (nursing skills lab, lab-bench practical, language oral exam). AI is irrelevant by design — the student performs the skill.

L2 Understand · explain · summarize · classify
  • AI-AUGMENT Concept-explanation tutor Haiku 4.5

    Student asks the AI to re-explain a concept at a chosen reading level. Tutor adapts to the student's stated misunderstanding; no grading involved.

  • AI-ASSIST Short-answer quiz with explanation GPT-5.4-mini

    AI grades against an answer key + explanation rubric. Teacher reviews flagged borderline answers (confidence < 0.85). Final grade with the teacher.

  • AI-BLOCK Closed-book comprehension test

    Foundational-knowledge proctored test where access to AI would defeat the purpose. Standardized-test policy applies; accommodations preserved.

L1 Remember · recall · list · identify
  • AI-AUGMENT Flashcard / spaced-repetition coach Haiku 4.5

    AI surfaces the right card at the right interval based on student recall history. Teacher gets aggregate retention metrics; never individual-card content.

  • AI-ASSIST AI quiz generator (formative) GPT-5.4-mini

    AI generates quiz items from the lesson plan; teacher reviews and approves before deployment. Quiz is for formative feedback, not the gradebook.

  • AI-BLOCK Spelling bee / recitation

    Performance-based recall events. AI access is meaningless and blocked by event format (live event, no devices).
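In scoping code, the rubric above reduces to a small lookup plus one refusal rule. A minimal sketch, with only a few cells shown:

```python
# (bloom_level, band) -> example assessment, mirroring the rubric above.
RUBRIC_CELLS = {
    (6, "AI-AUGMENT"): "brainstorm scaffold (early draft)",
    (3, "AI-AUGMENT"): "worked-example tutor",
    (5, "AI-BLOCK"):   "bar / licensure / board exam",
    # ... remaining cells as in the rubric
}

def can_ship(band: str, credential_bearing: bool) -> bool:
    """Refuse any credential-bearing AI-BLOCK cell, full stop."""
    return not (band == "AI-BLOCK" and credential_bearing)
```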

lti 1.3 launch + grade-passback

How AI plugs into the LMS
as a protocol conversation, not a vendor list.

Education buyers don't ask which LMS we support; they ask how an AI tutor or grader exchanges messages with the LMS without breaking the gradebook, the roster, or FERPA. Below is the actual LTI 1.3 sequence we ship against, with the six messages numbered, the auth on each one, and the per-vendor gotcha that bites integrators in week 3. A token-validation sketch follows the sequence.

protocol sequence · 6 messages
Actors: Faculty (instructor) · LMS (Canvas · Blackboard · Moodle · Schoology · Classroom · PowerSchool) · LTI Tool (AI tutor service) · Outcomes Service (AGS · gradebook) · Student (learner)
  1. LMS → LTI Tool · OIDC third-party login initiation
  2. LTI Tool → LMS · authentication request; the LMS returns the signed id_token (JWT), verified against the platform JWKS
  3. LTI Tool → Student · tutor UI rendered inside the assignment view
  4. Student → LTI Tool · scaffold session, multi-turn, logged for teacher visibility
  5. LTI Tool → Outcomes Service · AGS score POST, authorized by an OAuth2 client-credentials token
  6. LMS → Faculty · gradebook and mastery-report update
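A hedged sketch of the tool-side validation at message 2, using PyJWT's JWKS client. The issuer, client ID, and JWKS URL are deployment-specific values from the platform registration:

```python
import jwt  # PyJWT

def validate_launch(id_token: str, jwks_url: str, issuer: str, client_id: str) -> dict:
    # Fetch the signing key dynamically; never cache past TTL (the Canvas
    # gotcha: keysets rotate).
    signing_key = jwt.PyJWKClient(jwks_url).get_signing_key_from_jwt(id_token)
    claims = jwt.decode(
        id_token,
        signing_key.key,
        algorithms=["RS256"],
        audience=client_id,
        issuer=issuer,
    )
    # LTI 1.3 claims use reverse-DNS keys; a resource-link launch looks like this.
    message_type = claims["https://purl.imsglobal.org/spec/lti/claim/message_type"]
    if message_type != "LtiResourceLinkRequest":
        raise ValueError(f"unexpected LTI message type: {message_type}")
    return claims
```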
model picks per education workflow

The model matrix.
Per assessment, not per vendor.

The same `ai in education` stack runs four model picks. Sonnet 4.6 wins where pedagogical reasoning or rubric critique matters (tutoring, essay grading, lesson plans, special-ed). Haiku 4.5 wins on district-scale Q&A and advising. GPT-5.4-mini is the structured-output specialist for quiz generation. GPT-5.4 sits on long-form reasoning (capstone feedback, IEP synthesis). Cost-per-decision bands are roughly current as of writing; verify on your own usage before locking a pick.

Columns: Claude Sonnet 4.6 (Anthropic · quality tier) · Claude Haiku 4.5 (Anthropic · cheap, fast) · GPT-5.4-mini (OpenAI · structured output) · GPT-5.4 (OpenAI · long reasoning)
AI tutoring (1:1 worked-example scaffolds) Subject-specific Socratic scaffolds. Pedagogy quality matters — a bad scaffold sets a student back a week.
Claude Sonnet 4.6 Default · best pedagogical reasoning
Claude Haiku 4.5 District-scale · 7× cheaper
GPT-5.4-mini Workable; weaker on Socratic rationale
GPT-5.4 Tied — pick on stack preference
AI essay grading (rubric-anchored, teacher reviews) Per-criterion rationale + bias-audited cohort scoring. Teacher overrides every mark.
Claude Sonnet 4.6 Default · best narrative critique
Claude Haiku 4.5 Workable on short responses, drifts on long
GPT-5.4-mini Strong on structured-rubric output
GPT-5.4 Best on long-form essay analysis
AI quiz generator (formative items, standards-aligned) Item generation at varied Bloom levels with accessibility tagging. Teacher reviews before publish.
Claude Sonnet 4.6 Strong; cost prohibitive at district volume
Claude Haiku 4.5 Tied on bulk generation
GPT-5.4-mini Default · best structured-item output
GPT-5.4 Cost vs. uplift doesn't break even
AI lesson plan generator Standards-aligned draft against textbook + cohort reading level + preferred pedagogy.
Claude Sonnet 4.6 Default · best curriculum coherence
Claude Haiku 4.5 Workable for boilerplate; drifts on pedagogy
GPT-5.4-mini Strong on structured slots, weaker on flow
GPT-5.4 Tied — pick on long-context strength
AI for special education (IDEA-scoped assist) Reading scaffolds, AAC vocab, social-story generation. Inside the IEP team's loop.
Claude Sonnet 4.6 Default · best accommodation phrasing
Claude Haiku 4.5 Reserve for low-stakes scaffolds
GPT-5.4-mini Strong structured output; tone limited
GPT-5.4 Tied — long-context for IEP synthesis
AI academic advisor (higher-ed degree-audit Q&A) Routine advising routes; hard handoff on graduation-risk / mental-health / financial-aid phrases.
Claude Sonnet 4.6 Best handoff rationale on edge cases
Claude Haiku 4.5 Default · 24/7 high-volume Q&A
GPT-5.4-mini Strong on structured degree-audit output
GPT-5.4 Overkill at this volume
K-12 vs higher-ed swap (COPPA / FERPA gate) Which model the routing layer flips to when the user-role claim is under-13 (COPPA scope).
Claude Sonnet 4.6 Default for under-13 (quality-on-safety)
Claude Haiku 4.5 Allowed under-13 with stricter prompt guards
GPT-5.4-mini OpenAI Education tier · COPPA-attestation
GPT-5.4 Reserve for over-13 routes

Cost bands quoted on this page are typical per-decision spend with prompt caching warm and standard education context sizes (rubric + standards alignment + student-work excerpt, not a full term portfolio). Run your own benchmark before locking a model pick; vendor prices, COPPA-attestation terms, and model capabilities shift quarterly.

ai in education — when it's the wrong answer

Four places we'll tell you no.
Honest scoping > pretty deck.

Most `ai for education` pitch decks have an AI answer for every problem. Most production school systems and universities should refuse four of them. If your team is scoping any of these, we'll say so in the audit — and we won't bill phase 2 to find out. FERPA, COPPA, IDEA, and academic-integrity zones are not compliance checkboxes; they're the difference between a workflow that ships and one that gets pulled by general counsel in week 9.

Autonomous grading of credential-bearing work

We won't ship `ai grading` that issues a final mark on a credential-bearing assessment without a teacher in the loop. Final grades stay with the teacher — full stop. The AI we build is rubric-anchored assist: it scores against the criteria, produces per-criterion rationale, and queues the draft for teacher override. A cohort-level bias audit runs before deployment. Anything labeled `automated essay scoring` that goes straight to the gradebook without a human in the path is a Title VI / IDEA / state-accreditation problem waiting to happen.

K-12 under-13 AI without COPPA + state-privacy attestation

COPPA scope applies to under-13 users — verifiable parental consent, minimum-necessary data, no behavioral advertising on the AI pipeline. NY Ed Law 2-d, California SOPIPA, and Illinois SOPPA add district-side contract requirements: data inventory, breach-notification SLAs, deletion-on-request paths, no secondary use of student data. We will not ship a K-12 workflow until the COPPA + state-privacy attestation is signed and the data-minimization audit has run. `Schoolai`, `magicschool ai`, and `khanmigo` get attention for a reason — but they got the privacy posture nailed before scale, and most pilots that skip this step die at procurement.

Academic-integrity zones (summative exams, capstone defenses)

AI is BLOCKED on summative credential-bearing assessments: standardized state exams, capstone defenses, bar / licensure / board exams. We don't ship `ai for college` workflows that operate inside those zones — and our `ai detection` posture is honest: AI-text detectors have false-positive rates that disproportionately flag non-native English writers and neurodivergent students. We refuse to ship an `ai detection` pipeline as the sole evidence in an academic-integrity hearing. Use proctored zones + assessment redesign, not detector-as-judge.

Special-education AI without IDEA + Section 508 + WCAG 2.1 AA

`Ai for special education` ships only inside the IEP team's loop. AI does not set goals, does not log progress autonomously, and every output is accessible: screen-reader-conformant, keyboard-navigable, captions on every video, alt-text on every image, color-blind-safe palettes by default. Section 508 + WCAG 2.1 AA are non-negotiable; IDEA's least-restrictive-environment principle frames the entire workflow. If your district's accessibility audit isn't on file, the pilot doesn't ship.

ai-in-education capability patterns

Three capability patterns.
Hypothetical — illustrative shapes, defensible specifics.

Cases below are hypothetical capability patterns illustrating the shape of an `ai in education` engagement at three institution scales — a K-12 district, a mid-size university, and an adult-ed / L&D program. Stack shown is what we would ship at that scale; metrics are the ones our audits target before any pilot ships. Named references shared under NDA once we know what you're building. We do not claim these as completed client engagements.

Regional K-12 district · 18,000 students · hypothetical pattern

AI tutoring inside Canvas — Sonnet 4.6 + LTI 1.3 launch

Problem

Middle-school math classes seeing 22-point achievement gaps between the top and bottom quartiles. After-school tutoring is funded but capacity-constrained; only 12% of qualifying students attend a session in any given month. District wants a 1:1 scaffold available inside Canvas every time a student opens a homework assignment.

Approach

AI tutor launched via LTI 1.3 from the Canvas assignment view; Sonnet 4.6 with instructor-set difficulty band; Socratic scaffolds only (refuses to solve the assessed problem outright); weekly mastery report to the math teacher. FERPA + COPPA attestations signed; state-privacy contract (NY Ed Law 2-d equivalent) executed. Hard handoff on mental-health phrases and graduation-risk flags.

Sonnet 4.6 · Canvas LTI 1.3 · Haiku 4.5 (volume routes) · FERPA audit log · FastAPI sidecar
Outcome
≈ 42% tutor-attached attempts per assignment in the pilot cohort
Mid-size university · ~26,000 students · hypothetical pattern

AI essay grading + bias-audited rubric — first-year writing program

Problem

First-year writing program grading 5,400 essays per term across 78 adjuncts. Inter-rater reliability uneven; grading turnaround stretching to 14+ days; adjunct burnout climbing. Program wants AI-drafted rubric-coded scores the adjunct overrides — turnaround target 72 hours, fairness audit on every cohort.

Approach

Sonnet 4.6 drafts the rubric-coded score with per-criterion rationale; adjunct reviews and overrides — final grade with the adjunct. Cohort-level bias audit runs at the end of each grading cycle (scores by demographic flag); audit log to the program director. AI does not write to the gradebook autonomously; LMS write-back gated on the adjunct's override action.

Sonnet 4.6 · Blackboard LTI 1.3 + AGS · pgvector rubric exemplars · Cohort bias-audit job · FastAPI
Outcome
≈ 64% adjunct grading time returned per essay in pilot
Adult-ed L&D / workforce training · ~8,500 active learners · hypothetical pattern

AI academic advisor + L&D micro-coach — Haiku 4.5 + hard-handoff queue

Problem

Adult-ed workforce-training program getting 6,800 advising tickets per quarter: drop deadlines, prereq routing, certificate-track timing, financial-aid eligibility. Two human advisors manage the queue; response time stretches to 5+ business days. Roughly 80% of tickets are routine; the other 20% are complex or need escalation.

Approach

AI advisor (Haiku 4.5) answers routine ticket types 24/7 with full degree-audit context, hands off to the human advisor on financial-aid edge cases, graduation-risk flags, mental-health phrases, and Title IX adjacent topics. Audit log retained 7 years per FERPA records-retention policy. Conversation handoff includes the AI's draft response so the human advisor doesn't start from zero.

Haiku 4.5 · Schoology + Classroom LTI 1.3 · pgvector advising-policy index · FERPA audit log · Twilio (SMS handoff)
Outcome
≈ 49% routine advising ticket time returned per quarter
how we ship ai in education in 6–8 weeks

Four stages.
With a kill point at week 6.

Every `ai in education` engagement we run uses the same loop: audit, pilot, ship, scale. The pilot has an explicit walk-away point at week 6 — if the metric won't move, we stop before production hardening and you don't pay phase 2. No retainer trap, no scope-creep into year-long implementations.

  1. Weeks 1–2

    AI-in-education audit

    Two-week shadow with academic-affairs, IT, instructional-design, and (where relevant) special-ed coordinators. We map your assessment surface against the Bloom × AI-band rubric in §5, list candidate `ai in education` workflows, rank by instructor hours returned × time-to-ship × policy risk, and call out which ones won't pay back yet. FERPA + COPPA + state-privacy + IDEA + accessibility audits scoped before any code.

    90-day AI-in-education roadmap with ranked workflow candidates
  2. Weeks 3–6

    Pilot — one workflow, teacher-in-loop

    We build the single highest-ROI candidate against your real Canvas / Blackboard / Moodle / Schoology / Classroom / PowerSchool stack via LTI 1.3. Live behind a teacher sign-off flag; baseline vs. assisted runs measured; FERPA audit-log + COPPA attestation + state-privacy contract validated end-to-end. Bias-audit job tested before any production grading or routing.

    One workflow live in the LMS with eval data + bias-audit results
    Walk-away point
  3. Weeks 7–8

    Ship to production

    Production hardening: LMS grade-passback via AGS, retry + fallback policies, FERPA-records retention runbook, accessibility regression suite gated in CI, integrity-policy walk-through with academic affairs. The workflow goes live with teachers and instructional designers in the loop — not as an internal demo.

    Production workflow + accessibility runbook + integrity-policy review
  4. Ongoing

    Scale to next workflow

    Most `ai in education` engagements run 3–5 workflows by month 6: AI tutor → AI essay grading → AI quiz/lesson generator → academic advisor. Same eval harness, same FERPA audit log, same accessibility regression suite, same cost-per-decision reporting. Compounding learning across the rubric.

    3–5 AI-in-education workflows live by month 6
engagement models

Three ways to engage.
Hire us at the tier that fits where you are.

Most `ai for college` and `ai for k-12` clients start with the 2-week audit, hire us to ship one workflow on a pilot, then move to monthly for the next three to five. Cost-per-decision reported monthly on every shipped workflow — no per-decision number, no engagement.

1–2 weeks

AI-in-education audit

Find which AI workflows pay back on your LMS + assessment surface — before you commit a budget.

$3K fixed
  • Operator shadow with academic affairs · IT · instructional design
  • Bloom × AI-band rubric mapped against your assessments
  • Per-workflow cost band ($200–$1,600/mo)
  • 90-day AI-in-education roadmap with ranked candidates
  • FERPA + COPPA + IDEA + state-privacy posture review
Book the AI-in-education audit
Most teams start here
4–8 weeks

Pilot to production

Hire us to ship one AI workflow end-to-end on your LMS, teacher-in-loop, bias-audited.

$10–25K fixed price
  • Build, integrate, deploy via LTI 1.3 on your LMS
  • Cohort-level bias audit + accessibility regression suite
  • FERPA audit log + state-privacy contract validated end-to-end
  • Eval suite, Langfuse traces, AGS grade-passback runbook
  • Walk-away point — if the metric won't move, no phase 2
Hire us for the pilot
Monthly

Continuous AI-in-education team

Embedded AI-in-education engineers shipping the next workflow on your roadmap.

from $5K per month
  • PM + AI engineer + instructional-design analyst, embedded
  • Per-workflow monthly cost-of-ownership report
  • Per-cohort bias-audit cadence + accessibility review
  • Cancel any time — no annual contract
Talk to an AI-in-education engineer
FERPA + COPPA attestation before any pilot · Cohort-level bias audit on every grading workflow · WCAG 2.1 AA + Section 508 on every UI surface · No annual contract
frequently asked — ai in education

Questions academic-affairs teams ask first.
Real answers, no hedging.

What does an AI-in-education development company actually do?

An `ai in education` development company like ours ships production AI workflows on your LMS + assessment surface — not slide decks, not pilots that die at procurement. The day-to-day work: scope which workflow returns instructor time without crossing the academic-integrity boundary (most often AI tutoring, AI essay grading with teacher review, AI quiz or lesson-plan generators, special-ed assist, or an LMS-integrated academic advisor), get the FERPA + COPPA + state-privacy attestations on file, build the integration against your Canvas / Blackboard / Moodle / Schoology / Google Classroom / PowerSchool stack via LTI 1.3, pick the right model per workflow (Sonnet 4.6 for tutoring quality and essay critique, Haiku 4.5 for high-volume Q&A, GPT-5.4-mini for structured quiz generation), bake in cohort-level bias auditing and WCAG 2.1 AA accessibility, ship behind a teacher sign-off flag, then operate the workflow long enough to prove cost-of-ownership before scaling. We do not sell a product — we ship one workflow at a time and report cost-per-decision monthly. First workflow live in 6–8 weeks; full pilot $10–25K fixed.

Is your AI FERPA + COPPA compliant? What about state student-privacy laws?

Yes — the operator-grade specifics. (1) FERPA: minimum-necessary data, education-records retention per institution policy, audit log on every inference (request ID, model, retrieved-context references, response, teacher who reviewed). (2) COPPA: for K-12 under-13 users, verifiable parental consent on file before the workflow activates, no behavioral advertising on the AI pipeline, minimum-necessary data collection, deletion-on-request paths. (3) State student-privacy laws: NY Ed Law 2-d (data inventory, breach-notification SLA, deletion paths, no secondary use), California SOPIPA (no targeted advertising, no profile-building for non-educational purposes), Illinois SOPPA (district-side contract with explicit data-use limitations). We sign your district's data-privacy contract before any pilot — and we will turn down the engagement if the contract conflicts with our infrastructure providers' BAA-equivalent terms. (4) IDEA: special-ed workflows ship only inside the IEP team's loop; AI does not set goals or log progress autonomously. What we do not claim: SOC 2 Type II or ISO 27001 certification as a product. If your procurement team needs SOC 2 as a hard gate, we'll say so up front.
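As a shape, the per-inference audit record described in (1) might look like this. Field names are illustrative; retention follows the institution's records policy:

```python
from dataclasses import dataclass, field
from datetime import datetime, timezone

@dataclass
class InferenceAuditRecord:
    request_id: str
    model: str
    retrieved_context_refs: list[str]   # pointers to education records, not copies
    response_digest: str                # hash of the response body, minimum-necessary
    reviewed_by: str | None             # teacher identity, null until sign-off
    logged_at: str = field(default_factory=lambda: datetime.now(timezone.utc).isoformat())
```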

How does AI tutoring work without replacing the teacher?

`Ai tutoring` ships as 1:1 worked-example scaffolds inside the LMS assignment view — never as autonomous answer delivery. The pattern: a subject-specific tutor (Sonnet 4.6 for quality, Haiku 4.5 for district-scale volume) reads the assignment context via LTI 1.3 launch, applies the instructor-set difficulty band, runs Socratic scaffolds (next-step hint, worked example on a similar-but-not-identical problem, conceptual reframe), and writes a weekly mastery report to the teacher's dashboard. Hard guardrails baked in: the AI refuses to solve the assessed problem outright, refuses to write the essay for the student, refuses to leave the subject scope, and hard-handoffs on mental-health phrases or graduation-risk flags. Teacher visibility on every session is the design point — `ai tutor for students` only works if the teacher knows what their students asked the AI and how the AI responded. We've seen 40%+ tutor-attached attempt rates inside Canvas / Blackboard in the audits we've run.

Can AI grade essays fairly? What about bias?

Yes — with a teacher in the loop and a cohort-level bias audit on every grading cycle. The pattern: Sonnet 4.6 drafts a rubric-coded score with per-criterion rationale, the teacher reviews and overrides, the final mark is the teacher's. The bias audit runs at cohort close: scores grouped by demographic flag (where the institution has consented to that analysis) compared against the cohort distribution; any statistically significant skew triggers a rubric-recalibration cycle before the next term. AI text detectors are a different problem — we will not ship `ai detection` as the sole evidence in an academic-integrity hearing because the false-positive rates disproportionately flag non-native English writers and neurodivergent students. The honest answer on `ai for grading papers` is: AI saves the teacher 50–65% of grading time on rubric-anchored work; the teacher saves the AI from shipping a bias-skewed score; and the audit log is the joint contract between the two.

Can you integrate AI with Canvas, Blackboard, Moodle, Schoology, Google Classroom, or PowerSchool?

Yes — all six, via LTI 1.3 (and Google Classroom's native Classroom API where LTI doesn't reach). Quick recap of the per-LMS failure modes (covered in §6 above). Canvas: keysets rotate — fetch JWKS dynamically, never cache past TTL; AGS lineitems must be pre-created via Deep Linking 2.0 or admin-side. Blackboard Ultra: role claims sometimes return in legacy format; normalize before reading; score scale defaults to 0-100. Moodle: `context_type` is course vs CourseSection — handle both; grade scale is configurable per gradebook item. Schoology: rate-limit at 50 req/min on free tiers; AGS support is limited compared to Canvas. Google Classroom: no LTI NRPS or AGS — use the Classroom API directly with `coursework.studentSubmissions.patch`. PowerSchool: iframe sandbox blocks WebSockets in some configs — ship a long-poll fallback. `Canvas ai` and `khanmigo`-style integrations are not magic — they're 90% LTI 1.3 plumbing (OIDC + JWT + AGS) and 10% model choice. Picking the right model is the easy part; the LTI integration layer is the engagement.
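To make the Classroom path concrete: since Classroom has no AGS, grade write-back goes through the Classroom API directly. A sketch using google-api-python-client, writing a draftGrade so the teacher still promotes the final mark in the Classroom UI; the IDs and credentials are placeholders:

```python
from googleapiclient.discovery import build

def write_draft_grade(creds, course_id: str, coursework_id: str,
                      submission_id: str, grade: float) -> dict:
    service = build("classroom", "v1", credentials=creds)
    # draftGrade only: the teacher reviews and assigns the final grade in Classroom.
    return service.courses().courseWork().studentSubmissions().patch(
        courseId=course_id,
        courseWorkId=coursework_id,
        id=submission_id,
        updateMask="draftGrade",
        body={"draftGrade": grade},
    ).execute()
```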

How much does an AI tutor or AI grader cost to run per student per term?

Two cost buckets, transparent. Model layer: an `ai tutor` on Sonnet 4.6 with prompt caching warm runs roughly $0.008–$0.018 per session (a session = a multi-turn conversation around one assignment); a typical middle-school math student opens 30–60 tutor sessions per term, so model cost lands at $0.24–$1.08 per student per term. An `ai essay grading` workflow runs $0.04–$0.10 per essay on Sonnet 4.6; a first-year writing student submits 6–10 essays per term, so model cost lands at $0.24–$1.00 per student per term. Infrastructure layer (LTI launch, FERPA audit log, AGS write-back, bias-audit job): typically $0.01–$0.04 per session. On a 1,000-student pilot for either workflow, model + infra sits in the $200–$1,200/month range. The honest cost ceiling is the engineering: getting the LTI 1.3 integration right, the FERPA audit log right, the cohort bias audit running on schedule, the teacher-in-loop UI to a place teachers actually trust. That work is the engagement, not the per-decision spend.
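The arithmetic above as a function, using only the ranges quoted in this answer; inputs are band endpoints, not measured billing data:

```python
def per_student_term_cost(model_cost_per_unit: float, units_per_term: int,
                          infra_per_unit: float = 0.02) -> float:
    """Model + infra cost per student per term; a unit is a tutor session or an essay."""
    return units_per_term * (model_cost_per_unit + infra_per_unit)

# Tutor at $0.012/session mid-band, 45 sessions/term -> about $1.44 with infra.
print(round(per_student_term_cost(0.012, 45), 2))
```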

What is OpenAI's stance on education, and how does Anthropic's Claude for Education compare?

Both vendors have explicit education positioning, and the $38.32 CPC peak on `claude for education` and $78.56 CPC peak on `openai education` tell you institutional buyers are real and active. OpenAI Education / ChatGPT Edu: institutional admin console, SSO with district IDs, COPPA-attestation paths for K-12 deployments, model fine-tuning on district curriculum where licensed, and (importantly) an enterprise tier that's actually data-policy-aligned for student data — consumer ChatGPT is not student-data-safe by default. Claude for Education (Anthropic): institutional access via the Anthropic API on enterprise terms, strong long-context handling for curriculum-aligned use cases, Constitutional-AI safety frame that aligns with academic-integrity zones, and (per Anthropic's published education-partner work with Northeastern, LSE, Champlain, and others) a clear posture on student-data minimization. Our position as a model-agnostic `ai in education` company: we pick the model per workflow. Sonnet 4.6 wins where pedagogical reasoning or rubric critique matters; GPT-5.4-mini wins on high-volume structured quiz generation; Haiku 4.5 is the district-scale tutoring routing default; GPT-5.4 is the long-context default for IEP-synthesis or capstone-feedback work.

When should we NOT use AI in education?

Four places we'll say no — covered in §8 above and worth repeating. (1) Autonomous grading of credential-bearing work — final grades stay with the teacher; bias audits run on every cohort. (2) K-12 under-13 workflows without COPPA + state-privacy attestation on file — NY Ed Law 2-d, California SOPIPA, Illinois SOPPA, and equivalents in your state govern the data-use contract, and we won't ship until it's signed. (3) Academic-integrity zones — summative state exams, bar / licensure / board exams, capstone defenses; AI is BLOCKED, proctored, lockdown-browser-enforced. We also refuse to ship `ai detection` as the sole evidence in an integrity hearing because detector false-positive rates disproportionately flag non-native English writers and neurodivergent students. (4) Special-ed workflows that bypass IDEA or Section 508 / WCAG 2.1 AA — the AI sits inside the IEP team's loop, never sets goals, never logs progress autonomously, and every output is accessible. Beyond those four: any workflow where the academic-integrity policy hasn't been mapped against the Bloom × AI-band rubric is a workflow we'll refuse until that mapping is done.

Will AI replace teachers? What about professors?

No — augment, not replace. The AI we build for K-12 districts, higher-ed institutions, and adult-ed / workforce-training programs is teacher-in-loop or instructional-designer-in-loop on every consequential decision. Tutor sessions write a weekly mastery report to the teacher; essay grades draft with per-criterion rationale a teacher overrides; quiz items generate but a teacher approves before publish; lesson plans draft but a teacher edits before delivery; special-ed scaffolds suggest but the IEP team decides. Time saved (30–45 min per lesson plan, 50–65% of essay grading time, 40–55% of routine advising ticket time) goes back into student-facing work — the only ROI metric that matters in education. The realistic claim an `ai in education` development company should make: teachers keep doing the pedagogy, the assessment, the relational work; the AI handles the drafts, the lookups, the scaffolds, and the routine routing. If a vendor is selling `replace your teachers with AI` energy, that's the deck to pass on — `ai in classrooms` works because it returns teacher hours, not because it removes teachers.

How much does an AI-in-education project cost and how long does it take?

Three tiers, pricing-locked across the cluster. (1) AI-in-education audit: $3K fixed, 1–2 weeks. We shadow academic-affairs + IT + instructional-design + (where relevant) special-ed coordinators, map your assessment surface against the Bloom × AI-band rubric, rank candidate workflows by instructor hours returned × time-to-ship × policy risk, and deliver a 90-day roadmap with per-workflow cost bands and an honest "these won't pay back yet" list. FERPA + COPPA + state-privacy + IDEA + accessibility posture reviewed before code. (2) Pilot to production: $10–25K fixed, 4–8 weeks. One workflow shipped end-to-end via LTI 1.3 on your LMS, teacher-in-loop, bias-audited, with FERPA audit-log + accessibility regression suite tested and a walk-away point at week 6 — if the metric won't move, we stop before production hardening and you don't pay phase 2. (3) Continuous AI-in-education team: from $5K/month, no annual contract. Embedded PM + AI engineer + instructional-design analyst shipping the next workflow on your roadmap, with per-cohort bias audits and cost-of-ownership reports each month. Most `ai in education` engagements we run start with the audit, ship the first workflow on the pilot, and move to monthly for workflows two through five. Cost-per-decision reported monthly on every shipped workflow — no per-decision number, no engagement.

Ready to ship

Stop running another edtech pilot that dies at procurement.
Hire an AI-in-education company that ships.

Book a free 30-minute AI-in-education audit. We'll identify two or three high-ROI candidates from your LMS + assessment surface, give you a per-workflow cost band, and tell you which ones won't pay back yet. No deck, no obligation to build.

30 min, async or live · FERPA + COPPA attestation paths available · You leave with a written roadmap