AI for HR, done right — without the EEOC, AEDT, or ADA risk.
We're the `ai hr company` you hire to ship production AI on your real stack — Workday, BambooHR, SuccessFactors, UKG, ADP, Rippling, Gusto on the HRIS side; Greenhouse, Lever, Ashby, iCIMS on the ATS side. `Ai recruiting platform` sourcing + screening with NYC LL144 bias-audit harnesses, `ai interview scheduling` agents, `ai onboarding` workflows, `people analytics software` pipelines, and `ai hr services` policy Q&A — every candidate-facing surface with an ADA accessible alternative one click away. First workflow live in 4–6 weeks behind a recruiter / HRBP sign-off flag.
4–6 wk
first HR AI workflow live behind a recruiter / people-ops sign-off flag
6 regs
NYC LL144 · IL AIVA · CA AB 2930 · EEOC Title VII · ADA · GDPR/state-PII — guardrailed on every workflow
$200–$1.5K
monthly model + infra cost band per shipped HR AI workflow
$3K
HR AI audit-to-roadmap before any build starts
three hr buyer pools, one pillar
Who hires us. And the AI-for-HR shape they buy.
`Ai in hr` ($45.03 CPC) and `ai for human resources` aren't a single market — they're three. TA leaders shipping an `ai recruiting platform` ($97.39 CPC) and `ai sourcing tool` ($83.72 CPC). People-ops leaders shipping `ai onboarding` ($51.99 CPC) and internal-mobility workflows. People-analytics leaders shipping `people analytics software` ($95.35 CPC) and `hr ai software` ($84.05 CPC) decision pipelines. Three pools, one pillar — three things an `ai hr company` should be honest about before scoping your first build.
01
TA leaders: from ATS to model-per-task pipelines
If you run talent acquisition on Greenhouse, Lever, Ashby, iCIMS or SmartRecruiters, you've already lived through ATS automation. The next layer is model-per-task: an `ai sourcing tool` ($83.72 CPC) drafts personalized outreach against your ICP, an `ai recruiting platform` ($97.39 CPC) screens resumes against a job-specific rubric, an `ai interview scheduling` agent ($79.49 CPC) coordinates panels across timezones, and a partner-in-loop signs every advance. We're the dev partner you hire when your TA stack needs the workflow shaped to your hiring funnel, your level bands, your DEI guardrails — not a vendor's roadmap. Workday Recruiter AI, Eightfold, HireVue, Paradox / Olivia are the benchmark products you'll compare us against; they're right for some teams. We're right when the workflow is custom.
02
People Ops: onboarding, mobility, and the lifecycle loop
Once someone's hired, the lifecycle compounds: `ai onboarding` ($51.99 CPC) workflows that run the first-30-day checklist (doc collection, system provisioning, manager 1:1 scheduling, training assignments), internal-mobility matching against open reqs, and engagement signal extraction from 1:1 notes + survey data. The right HR information system stack — Workday HCM, BambooHR, SuccessFactors, UKG Pro, ADP Workforce Now, Rippling, Gusto — is the substrate; AI workflows sit on top, never replace it. We integrate; we don't rebuild the system of record. New hires sign every doc; HRBPs drive every action.
03
People Analytics: from dashboard to decision
`People analytics software` is the highest-CPC keyword in the cluster at $95.35 — because every dollar moved on retention or attrition compounds annually. The shift: yesterday's people analytics was a Workday Prism / Visier dashboard; today, it's an `hr ai software` ($84.05 CPC) workflow that pulls HRIS + survey + 1:1 notes + Slack signal, surfaces attrition-risk per cohort, highlights skills gaps, drafts retention plays, and routes to the responsible HRBP. The radar in §6 shows the shape of one signal (skills gap by role). We're the dev partner you hire to wire the rest — and to be honest about which signals are noise.
ai for hr, by P&L line
Six AI workflows that move HR P&L. Ranked in the audit, not the slide deck.
These are the six `hr ai software` workflows that consistently pay back in the audits we run. Not every team needs all six — most have a high-ROI candidate in three of them. The audit ranks yours so you don't have to guess which to fund first. Buyer reality: `ai hr services` ($101.36 CPC), `ai recruiting platform` ($97.39 CPC), and `people analytics software` ($95.35 CPC) are the three highest-CPC keywords in the cluster — and they sit on this list for a reason.
AI sourcing tool + outbound outreach agent ($83.72 CPC)
`Ai sourcing tool` workflows that ingest the req + ICP, search LinkedIn / GitHub / public talent pools, and draft personalized outreach for each candidate. Sonnet 4.6 wins on personalization quality; Haiku 4.5 runs the volume-classifier upfront. Recruiter reviews and sends — AI never sends autonomously. We ship into Greenhouse, Lever, Ashby, iCIMS, SmartRecruiters and the candidate-CRM layer (Gem, Beamery). TA leaders typically see 3–5× response-rate lift on top-of-funnel within 60 days; honest constraint is that the gain plateaus once your ICP is well-defined — the AI compounds where the human judgment is repetitive, not where it's strategic.
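The sign-off gate described above can be sketched as follows. This is an illustrative shape, not our actual pipeline: names like `OutreachDraft` and `draft_outreach` are hypothetical, and the real queue lives in your ATS, not a dataclass.

```python
from dataclasses import dataclass

@dataclass
class OutreachDraft:
    candidate: str
    body: str
    approved: bool = False  # a recruiter flips this; the AI never flips it

def draft_outreach(req_title: str, candidates: list[str]) -> list[OutreachDraft]:
    # Stand-in for the model call that personalizes outreach per candidate.
    return [OutreachDraft(c, f"Hi {c}, your background fits our {req_title} role.")
            for c in candidates]

def send_approved(drafts: list[OutreachDraft]) -> list[str]:
    # Only recruiter-approved drafts ever leave the queue.
    return [d.candidate for d in drafts if d.approved]

drafts = draft_outreach("Staff SWE", ["Ada", "Grace"])
drafts[0].approved = True  # recruiter reviewed and approved exactly one
```

The design point is the default: `approved=False` means the autonomous path is the blocked path, and sending requires an explicit human action.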
AI recruiting platform — screening + rubric scoring ($97.39 CPC)
`Ai recruiting platform` is the single highest-CPC keyword in the cluster at $97.39 per click — because every hour returned to a recruiter on screening compounds across a hiring plan. We ship the pipeline: resume + intake-call transcript run against a job-specific rubric, AI suggests advance / hold / decline with rationale, recruiter signs every decision, NYC LL144 bias-audit logged per candidate. The benchmark `ai recruiting software` products (Eightfold, Phenom, Beamery, Findem, HireEZ) are right for some teams; we ship when the rubric needs to come from your level bands and your loss-pattern history, not a vendor's default.
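A minimal sketch of the rubric-scoring shape we describe above. The criteria, weights, and thresholds here are invented for illustration; in a real engagement they come from your level bands, and the recruiter signs the decision regardless of the score.

```python
def score_against_rubric(evidence: dict[str, bool], weights: dict[str, float]) -> float:
    # Weighted share of rubric criteria the resume + transcript actually evidence.
    return sum(w for k, w in weights.items() if evidence.get(k)) / sum(weights.values())

def suggest(score: float, advance_at: float = 0.7, hold_at: float = 0.4) -> str:
    # The AI suggests with a score and rationale; a recruiter signs every decision.
    if score >= advance_at:
        return "advance"
    return "hold" if score >= hold_at else "decline"

weights = {"python": 2.0, "distributed_systems": 2.0, "mentorship": 1.0}
score = score_against_rubric({"python": True, "distributed_systems": True}, weights)
```

Keeping the suggestion as a three-way label (`advance` / `hold` / `decline`) rather than a binary also keeps a human-reviewed middle band, which is where most of the judgment lives.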
AI interview scheduling + transcription agent ($79.49 CPC)
`Ai interview scheduling` workflows that coordinate panel calendars across timezones, draft the interview kit per stage, run a transcription on each loop (Sonnet 4.6 + Whisper or AssemblyAI), and produce a transcript-grounded scorecard summary post-loop. Interviewers sign every scorecard — AI never advances autonomously. Paradox / Olivia and Goodtime are the benchmark scheduling products; we ship when the workflow has to touch your ATS field schema, your custom interview-kit structure, and your panel-balance DEI rules.
AI onboarding agent — first-30-day workflow ($51.99 CPC)
`Ai onboarding` workflows that run the first-30-day checklist: doc collection (I-9, tax, benefits enrollment), system provisioning across Workday HCM + Okta + Slack + GitHub + Salesforce, manager 1:1 scheduling, training assignments, and a status digest to people-ops. New hires sign every doc; AI never finalizes. ROI compounds on high-volume hiring (50+ hires/quarter) more than on small executive cohorts — we'll say so in the audit. Common stack: BambooHR / Rippling / Gusto for SMB; Workday / SuccessFactors / UKG for enterprise.
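The checklist-plus-digest shape can be sketched like this; the step names and the escalation threshold are illustrative, not a client configuration.

```python
CHECKLIST = ["i9", "benefits_enrollment", "okta", "slack", "manager_1on1", "training"]

def status_digest(done: set[str]) -> dict:
    # The digest people-ops sees: outstanding steps, plus an escalation flag
    # when a hire is falling behind (the >3 threshold is illustrative).
    outstanding = [step for step in CHECKLIST if step not in done]
    return {"outstanding": outstanding, "escalate": len(outstanding) > 3}

digest = status_digest({"i9", "okta", "slack", "manager_1on1", "training"})
```

The agent chases the outstanding items; the coordinator only sees the digest and the escalations, which is where the 90–120 minutes per hire go away.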
People analytics software workflows ($95.35 CPC)
`People analytics software` is the highest-CPC keyword for a reason — retention math compounds. We ship the pipeline: HRIS + survey + 1:1 notes + Slack signal → attrition-risk model per cohort → skills-gap radar (§6) per role → drafted retention plays → routing to the responsible HRBP. Visier and Workday Prism are the benchmark dashboards; we build when you need the analytics shaped to your headcount-planning rhythm, your engagement-survey schema, and your manager-effectiveness signal — not a vendor's defaults. `Workforce analytics software` ($28.79 CPC) is the adjacent search; same workflow shape.
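One way to picture the routing step at the end of that pipeline. The cohort names, risk scores, and threshold below are invented for illustration; the real risk model is the hard part, and the HRBP signs every action.

```python
def route_retention_plays(cohort_risk: dict[str, float], hrbp: dict[str, str],
                          threshold: float = 0.6) -> dict[str, str]:
    # At-risk cohorts get a drafted retention play routed to the responsible
    # HRBP; the HRBP signs every action, so nothing here executes autonomously.
    return {cohort: hrbp[cohort]
            for cohort, risk in cohort_risk.items() if risk >= threshold}

routed = route_retention_plays({"sf-gtm": 0.72, "nyc-eng": 0.31},
                               {"sf-gtm": "hrbp-west", "nyc-eng": "hrbp-east"})
```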
AI HR services — intake, ER triage, policy Q&A ($101.36 CPC)
`Ai hr services` workflows that handle the high-volume, low-judgment requests an HR-services team fields: policy questions (PTO accrual, parental leave, benefits eligibility), intake routing for ER cases (with hard-escalation rules for harassment / discrimination — never autonomous), expense-policy clarifications, and benefits-enrollment Q&A. RAG over your firm's policy corpus + HRIS context; HRBP reviews and routes; AI never closes ER cases. Common stack adjacency: ServiceNow HR Service Delivery, Workday Help, Rippling. Worth the build when ticket volume is 200+/month and policy-corpus updates lag behind questions.
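The hard-escalation rule is the load-bearing part of that workflow, so here is its shape in miniature (the tag names are hypothetical):

```python
HARD_ESCALATE = {"harassment", "discrimination", "retaliation"}

def triage(ticket_tags: set[str]) -> str:
    # ER-sensitive tickets bypass the AI entirely and go straight to a human
    # case owner; everything else goes to policy Q&A, where an HRBP still
    # reviews the routing and the AI never closes a case.
    if ticket_tags & HARD_ESCALATE:
        return "human_er_case_owner"
    return "ai_policy_qa"
```

Note the check runs on the full tag set: a routine PTO question that also carries a harassment tag escalates, it does not get the routine path.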
Eight stages, three quarterly turns.
The lifecycle recurs — your AI touchpoints compound across it.
Talent lifecycle is not a one-shot pipeline; it recurs every quarter as reqs reopen, sourcing cycles refresh, and onboarding cohorts compound. The 3D helix below shows the same eight stages — req → sourced → screened → interview → offer → hire → onboard → engage — appearing three times across three quarterly turns. AI touchpoints (`ai sourcing tool`, `ai interview scheduling`, `ai onboarding`, `people analytics software`) bolt onto specific stages without changing the geometry. Tap any peg for the workflow + model + cost detail.
talent lifecycle · 3 quarterly turns · ai touchpoints
JD rewrite ↳ Req opened · agent sourcer ↳ Sourced · screen eval ↳ Screened · interview scheduler ↳ Interview · onboarding QA bot ↳ Onboard · people analytics ↳ Engage
(interactive helix: each AI touchpoint recurs across three quarterly turns · tap any peg for the stage + AI touchpoint + workflow detail)
skills gap · radar
Where AI closes the role-gap. Pick a role; see the shape of the gap.
`People analytics software` ($95.35 CPC) shouldn't stop at engagement scores. The highest-ROI signal is the skills-gap shape between the role you're hiring for and the team you actually have — and where AI workflows close that shape without a new headcount. Three role examples below; the same radar runs on yours from your HRIS + 1:1 + survey data.
(interactive radar: Role-target vs Current team vs AI-augmented · tap an axis to see the AI workflow that closes the gap)
hris + ats stack · ai-workflow fit
The HRIS + ATS matrix. Per workflow, per stack.
Same `hr ai software` workflows ship across four common stack shapes. Workday HCM + Recruiter sits on the enterprise side with the cleanest event surface for onboarding + the strongest people-analytics adjacencies (Workday Prism + Skills Cloud). Greenhouse + BambooHR is the mid-market workhorse for TA + SMB HRIS. Lever / Ashby + Rippling is the growth-stage default — fast to integrate, modern UI surface. iCIMS + SuccessFactors is the enterprise-ATS-plus-HCM combination where field-schema mapping is the upfront work. We integrate; we don't replace. Named vendors below are the benchmark stacks our engagements run against — no partnership framing, just editorial scope.
Stacks compared: Workday HCM + Recruiter (Enterprise · HRIS + ATS) · Greenhouse + BambooHR (Mid-market · ATS + HRIS) · Lever / Ashby + Rippling (Growth · ATS + HRIS-EOR) · iCIMS + SuccessFactors (Enterprise · ATS + HCM)
AI sourcing tool · outbound outreach agent: ingest req + ICP, draft personalized outreach. Recruiter sends — AI never sends.
Workday HCM + Recruiter: Recruiter integrates · Connections API solid
Greenhouse + BambooHR: Default · Harvest API mature; Gem/Beamery adjacent
Lever / Ashby + Rippling: Strong API · CRM-style workflow native
Vendor fit notes reflect typical engagement scope as of 2026; APIs, field schemas, and product capabilities shift quarterly. Run your own integration-scope review in the audit before committing to a stack. We do not sell or resell any of the products named.
regulatory exposure · ledger
The HR-AI regulatory ledger. Six regulations we ship guardrails around.
Every HR AI workflow we ship touches at least three of the six rows below. Status dot tells you which posture we hold — green = guardrail in place, amber = active monitoring (regulation is evolving fast), red = we refuse the use-case. Tap a row to see the representative eval / log / consent-line pattern we ship.
Status legend: green = guardrail in place · amber = actively monitoring (evolving regulation) · red = we refuse the use-case
status · regulation · what it means for an HR AI workflow · our guardrail (with artifact)
NYC Local Law 144 (LL144)
Any automated employment decision tool (AEDT) used to screen, score, or rank candidates for jobs in NYC must pass an independent bias audit yearly, with results published, and candidates must be notified before the AEDT is used on their application.
Per-tool bias-audit harness wired into CI; candidate-notice email template lives next to the screening agent; audit results stored in the firm's tenant and date-stamped.
representative pattern · not literal client code
# AEDT bias-audit harness (representative pattern, not literal client code)
audit_input = read_resumes("./screening_set.jsonl") # n ≥ 500
group_keys = ["sex", "race", "intersection(sex,race)"]
scores = aedt_model.score(audit_input)
impact_ratio = bias_metrics.impact_ratio(scores, group_keys)
log({"tool": "ai-screen-v3", "impact_ratio": impact_ratio, "date": today()})
assert all(impact_ratio[g] >= 0.80 for g in group_keys), "LL144 fails"
Illinois AI Video Interview Act (AIVA)
If you use AI to analyze video interviews of candidates for positions based in Illinois, you must notify the candidate, get consent, explain how the AI works in plain language, and (if AI is the sole basis for an in-person interview decision) report demographic data annually.
Consent flow gates every video-interview AI workflow; plain-language disclosure template embedded in the candidate portal; sole-basis branch routes to human-reviewer override before any in-person decision.
representative pattern · not literal client code
# Illinois AIVA consent gate (representative pattern)
if candidate.location == "IL" and workflow == "video_interview_ai":
    require_consent(form="aiva_disclosure_v2")
    if not candidate.has_consented:
        route_to_human_only_flow()
    if decision.basis == "ai_sole":
        require_human_reviewer_signoff(decision)
        add_to_annual_demographic_report(decision)
California AB 2930
Pending / evolving California legislation targeting automated decision tools that materially affect employment, housing, credit, etc. Imposes impact-assessment, notice-to-affected-individuals, and right-to-explanation obligations. Scope and effective date have shifted in successive sessions — actively monitored.
Impact-assessment template generated automatically before any CA-resident-touching workflow ships; notice-and-explanation surface lives on the candidate dashboard; pre-deployment review with your CA employment counsel.
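A representative pattern for this row, in the same hedged spirit as the others: the helper names below are hypothetical, not literal client code, and the statute's final scope is still moving.

```python
def ab2930_ship_gate(workflow: str, touches_ca_residents: bool,
                     assessments_on_file: set[str], counsel_signed: set[str]) -> bool:
    # CA-resident-touching workflows need an impact assessment on file and
    # counsel sign-off before deployment; the gate fails closed otherwise.
    if not touches_ca_residents:
        return True
    return workflow in assessments_on_file and workflow in counsel_signed
```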
EEOC Title VII (disparate impact)
Federal employment discrimination law: a screening tool that disproportionately excludes a protected group (the four-fifths rule is the EEOC's rule-of-thumb adverse-impact trigger) can create disparate-impact liability even without intent. AI screening is squarely in scope per 2023 EEOC technical assistance.
Adverse-impact ratio computed pre-launch and re-computed monthly; four-fifths rule alert wired to people-ops + counsel; per-tool documentation of validation methodology retained.
representative pattern · not literal client code
# Title VII adverse-impact monitor (representative pattern)
monthly_rolling = compute_selection_rates(window="30d")
for group in protected_groups:
    ratio = monthly_rolling[group] / monthly_rolling["majority"]
    if ratio < 0.80:
        page("people-ops + counsel", group, ratio)
        halt_tool_pending_review()
ADA (Americans with Disabilities Act)
AI hiring tools cannot disadvantage qualified individuals with disabilities — including via screening mechanisms (timed assessments, video interviews, voice-tone scoring) that effectively exclude. Employers must offer accommodations and an accessible path even when AI runs the front door.
Every candidate-facing AI workflow ships with an accessible alternative path (human reviewer / extended-time / non-video) plus an accommodation-request entry-point one click away; logged for ADA recordkeeping.
representative pattern · not literal client code
# ADA accessible-alternative gate (representative pattern)
if surface in ("video_interview_ai", "timed_assessment"):
    show_link("Request an accommodation", href="/accommodations/")
    if request.received:
        route_to_human_reviewer(extended_time=True, format=request.format)
        log_accommodation(request)  # ADA recordkeeping
GDPR / US state PII laws
Candidate and employee data is personal data under GDPR (Art. 22 also restricts solely-automated decisions producing legal/significant effects) and personal information under US state PII laws — purpose-limited processing, data-minimization, right to access/delete, and (for high-risk processing) a DPIA before deployment.
Per-workflow DPIA produced before deployment; prompts and outputs PII-minimized via pre-prompt scrubbing; candidate / employee right-to-access endpoint wired into the workflow's audit log; data-retention windows enforced.
representative pattern · not literal client code
# DPIA + PII-minimization (representative pattern)
prompt = strip_pii(input_text, level="strict")  # regex + NER
audit_log.write({"workflow": workflow, "scrubbed_count": scrubbed_count, "ts": now()})
if jurisdiction in ("EU", "EEA"):
    assert dpia.exists(workflow), "GDPR Art. 35 DPIA required"
if decision.is_solely_automated and decision.effects_significant:
    route_to_human_reviewer()  # GDPR Art. 22
ai for hr — when it's the wrong answer
Three places we'll tell you no. Honest scoping > pretty deck.
Most `hr ai software` pitch decks have an AI answer for every problem. A production HR AI partner should refuse three of them. If your scope touches any of these, we'll say so in the audit — and we won't bill phase 2 to find out. The duties named across EEOC Title VII, ADA, NYC LL144, Illinois AIVA, and California AB 2930 are not compliance checkboxes; they're the difference between a workflow that ships and one that gets pulled in week 9.
Autonomous hire/fire decisions — the legal line
We won't ship AI that finalizes a hire / no-hire decision, ends an employment relationship, or denies an accommodation request without a human in the loop. Even on "routine" decisions — declining a candidate whose resume scores below a rubric threshold, advancing a candidate the model is confident on — the consequence of an error is borne by the employer under EEOC Title VII disparate-impact, ADA accommodation, and a stack of state laws. AI suggests with rationale; the recruiter / hiring manager / HRBP signs. If a workflow's ROI depends on removing the human from the decision-step, the workflow doesn't ship.
Sole-basis screening in NYC, Illinois, and California
We won't ship an automated employment decision tool (AEDT) used as the sole basis for screening, hiring, or promotion decisions in jurisdictions that ban or constrain it. NYC Local Law 144 requires a yearly bias audit + candidate notice for AEDTs; Illinois' AI Video Interview Act requires consent + disclosure + (for sole-basis use) demographic reporting; California's AB 2930 is evolving (see §8 ledger). The workflow surface in those jurisdictions always routes to a human reviewer when the decision is significant — non-negotiable.
Sentiment / personality / video-tone scoring as a hiring signal
We won't ship facial-expression scoring, voice-tone affect detection, or personality-from-language models as a hiring signal. The research base does not support that these signals predict job performance reliably, and the EEOC + ADA + state AEDT statutes have started naming them explicitly as adverse-impact risks. We'll surface what a transcript says (skills mentioned, experience claimed), not what a model believes a tone implies. If a vendor pitch leans on "AI personality fit" or "AI culture match," that's the workflow we'll refuse in scoping.
the kind of engagement we ship
Three capability patterns. Hypothetical — illustrative of how we ship, not real anonymized clients.
Patterns below are hypothetical illustrations of how we ship for the three buyer shapes we engage with most often — mid-market TA teams, scale-up people-ops orgs, and regulated-jurisdiction TA+DEI teams. Numbers are modeled from comparable engagement scopes, not specific client metrics. Real references shared under NDA once we know what you're building. Stacks shown are the ones the engagement would actually run on; yours will look similar but not identical.
Pattern · Mid-market TA team · 50–80 reqs/quarter
Problem
TA team running on Greenhouse, ~60 active reqs/quarter, sourcers spending 18–25 hours/week on top-of-funnel outreach with response rates in the 6–10% band on cold InMail. Screening lag of 3–5 days at peak hiring; loss-of-interest drop-off compounding. The team had piloted an off-the-shelf `ai recruiting software` product but couldn't shape the screening rubric to their level bands or wire it cleanly into Greenhouse custom fields.
Approach
Sourcing agent ingests the req + ICP + competitor-poaching graph → drafts personalized outreach per candidate (Sonnet 4.6 for personalization, Haiku 4.5 for volume classification) → sourcer reviews + sends from their inbox (AI never sends). Screening eval: resume + intake-call transcript run against a job-specific rubric loaded from the firm's level bands → suggested advance / hold / decline with rationale → recruiter signs every decision → NYC LL144 bias-audit harness logs per-candidate scores + monthly impact-ratio computed. Greenhouse custom-field write-back is the integration surface.
≈ 3.2× outbound response rate · ≈ 12 hrs/sourcer/wk returned (modeled)
Pattern · People-ops team · 200+ hires/quarter
AI onboarding agent across Workday HCM + Okta + Slack · 30-day-checklist automation
Problem
People-ops team supporting an aggressive hiring plan (200+ hires/quarter across go-to-market + engineering), running on Workday HCM + Okta + Slack + Salesforce. Manual first-30-day checklist (doc collection, system provisioning, manager 1:1 setup, training assignments) consuming 90–120 minutes per hire across HR coordinator + IT + manager. Onboarding NPS sliding because the friction was upstream of the new hire's actual work.
Approach
Onboarding agent triggered on Workday hire event → runs the first-30-day checklist: doc-collection chase (I-9, benefits, tax) with reminder cadence + state-specific compliance routing, Okta provisioning + Slack channel adds + Salesforce / GitHub seat creation, manager 1:1 scheduling against their calendar, training-assignment routing per role family. New hire signs every doc; HR coordinator sees a status digest + escalations only. ADA accessible-alternative path embedded for any required assessment.
≈ 65 min/hire people-ops + IT-coordinator time returned per new hire (modeled)
Pattern · TA + DEI team · regulated jurisdiction (NYC / IL / CA)
AEDT bias-audit harness + Illinois AIVA consent flow · regulatory deployment as the workflow
Problem
TA + DEI lead at a mid-size firm hiring across NYC, Chicago, and SF — three jurisdictions where AEDT-style screening tools are explicitly regulated (NYC LL144 yearly bias audit + candidate notice, IL AIVA consent + disclosure, CA AB 2930 evolving). Existing screening workflow was vendor-stack and they couldn't get a defensible bias-audit artifact or a candidate-notice flow that their employment counsel would sign off on.
Approach
Bias-audit harness wired into CI: every screening model version runs a per-tool impact-ratio computation against representative test sets (n≥500), grouped by sex/race/intersection — fails closed on <0.80 four-fifths threshold and pages people-ops + counsel. Candidate-notice email template lives next to the screening agent (NYC LL144) + Illinois AIVA consent gate routes video-interview AI to a human-only flow if consent absent + ADA accessible alternative one click away on every candidate-facing surface. Pre-deployment DPIA template per AB 2930 evolving scope.
0 audit findings across NYC + IL deployments · counsel-signed (modeled)
how we ship hr ai in 4–6 weeks
Four stages. With a kill point at week 6.
Every HR AI engagement we run uses the same loop: audit, pilot, ship, scale. The pilot has an explicit walk-away point at week 6 — if the metric won't move, we stop before production hardening and you don't pay phase 2. No retainer trap, no scope-creep into year-long implementations.
01
Weeks 1–2
HR AI audit
Two-week shadow with the responsible CHRO / VP People / TA lead / People-Ops lead / DEI lead + (where relevant) employment counsel. We rank candidate workflows by recruiter/HRBP hours returned × time-to-ship × regulatory-exposure (per §8 ledger), list the per-workflow cost band each will run at, and call out the ones that won't pay back so you don't fund them. Regulatory mapping (NYC LL144 · IL AIVA · CA AB 2930 · EEOC · ADA · GDPR/PII) signed off by your employment counsel before any build.
90-day HR AI roadmap, ranked, with cost bands + regulatory-exposure assignment per workflow
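The audit's ranking heuristic reads, in spirit, like the sketch below. The scoring function and every number in it are invented for illustration; the real ranking is operator judgment applied to these same inputs.

```python
def audit_rank(workflows: list[dict]) -> list[str]:
    # Hours returned push a workflow up the roadmap; weeks-to-ship and
    # regulatory exposure discount it. One illustrative weighting only.
    def score(w: dict) -> float:
        return w["hours_returned_per_wk"] / (w["weeks_to_ship"] * w["reg_exposure"])
    return [w["name"] for w in sorted(workflows, key=score, reverse=True)]

ranked = audit_rank([
    {"name": "screening", "hours_returned_per_wk": 12, "weeks_to_ship": 4, "reg_exposure": 3},
    {"name": "onboarding", "hours_returned_per_wk": 10, "weeks_to_ship": 3, "reg_exposure": 1},
])
```

In this toy example onboarding outranks screening even with fewer hours returned, because it ships faster and carries less regulatory surface, which is exactly the trade-off the audit makes explicit.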
02
Weeks 3–6
Pilot — one workflow, partner-in-loop
We build the single highest-ROI candidate against your real stack (Workday / BambooHR / SuccessFactors / UKG / ADP / Rippling / Gusto on the HRIS side; Greenhouse / Lever / Ashby / iCIMS on the ATS side — we integrate, we don't replace). Live behind a recruiter / HRBP sign-off flag; bias-audit + consent + ADA alt-path validated end-to-end. Walk-away point at week 6 if the metric won't move.
One HR AI workflow live behind a sign-off flag with eval data + regulatory audit log
Walk-away point
03
Weeks 7–8
Ship to production
Production hardening: Langfuse traces, retry + fallback policies, regulatory routing config (jurisdiction-aware deployment surface), eval suite gated in CI, audit-log retention aligned to your firm's recordkeeping policy, monthly impact-ratio cron. Walk-through with HR leadership + employment counsel + IT security.
Production workflow + regulatory routing config + audit-log retention plan
04
Ongoing
Scale to next workflow
Most `ai hr company` engagements run 3–5 workflows by month 6. Same eval harness, same audit log, same cost-reporting cadence. Compounding learning across sourcing → screening → interview → onboarding → people-analytics. Regulatory ledger reviewed before each new workflow ships — exposure changes per surface.
3–5 HR AI workflows live by month 6, all under the same regulatory-aware deployment topology
engagement models
Three ways to engage. Hire us at the tier that fits where you are.
Most `ai for hr` clients start with the 2-week audit, hire us to ship one workflow on a pilot, then move to monthly for the next three to five. Cost-per-decision reported monthly on every shipped workflow — no per-decision number, no engagement.
1–2 weeks
HR AI audit
Find which AI workflows pay back on your firm's HR stack — before you commit a budget.
$3K fixed
Operator shadow with CHRO / VP People / TA / People-Ops / DEI
Bias-audit harness on every candidate-touching workflow
Recruiter / HRBP sign-off on every advancing decision
ADA accessible alternative on every candidate-facing surface
No annual contract
frequently asked — ai for hr
Questions HR leaders ask first. Real answers, no hedging.
What does "AI for HR" actually mean — what do you build?
An `ai for hr` engagement with us ships production AI workflows on your firm's HR stack — not slide decks, not pilots that die at month 4. The day-to-day: scope which workflow moves a P&L line (most often AI sourcing, AI screening / `ai recruiting platform`, `ai interview scheduling`, `ai onboarding`, people-analytics / `people analytics software`, or `ai hr services` / policy Q&A), assess each workflow against the §8 regulatory ledger (NYC LL144, IL AIVA, CA AB 2930, EEOC Title VII, ADA, GDPR/state-PII), build the integration against your HRIS + ATS (Workday HCM + Recruiter, BambooHR, SuccessFactors, UKG Pro, ADP Workforce Now, Rippling, Gusto on the HRIS side; Greenhouse, Lever, Ashby, iCIMS, SmartRecruiters on the ATS side), pick the right model per workflow (Sonnet 4.6 for personalization + screening rationale, Haiku 4.5 for volume classification + intake, GPT-5.4-mini for structured extraction), ship behind a recruiter / HRBP sign-off flag, then operate the workflow long enough to prove cost-of-ownership before scaling. We don't sell a product — Eightfold, Phenom, Beamery, Findem, HireEZ, Paradox/Olivia, Visier, Lattice, Culture Amp, ServiceNow HRSD are the benchmark products you should compare us against. We're the right answer when the workflow needs to be shaped to your level bands, your loss-pattern history, your engagement-survey schema, and your jurisdictional surface — not a product team's roadmap.
Are you an Eightfold / Phenom / Workday Recruiter AI reseller? Do you replace them?
Neither — we're a development partner, not a reseller, and we integrate with the HR AI products you already run rather than replace them. Eightfold is strong on talent-matching + internal mobility at enterprise scale; Phenom is the talent-experience-platform leader; Beamery and Findem are strong on sourcing intelligence; HireEZ is the workhorse sourcing tool for many TA teams; Paradox / Olivia owns conversational scheduling; Visier and Lattice + Culture Amp + 15Five own the engagement-analytics layer. Each is right for some teams. We build when a workflow needs your level-band rubric (not a vendor's), your engagement-survey schema (not a default), your jurisdictional routing (NYC + IL + CA simultaneously), your custom ATS-field schema, or your specific integration surface (Workday HCM event → Okta + Slack + GitHub + Salesforce provisioning, for example). We'll say in the audit if a packaged product is the better answer — we've recommended Eightfold + Workday Recruiter AI to teams whose scope wasn't worth a custom build.
How do you handle NYC LL144, Illinois AIVA, California AB 2930, and EEOC Title VII?
Every candidate-touching workflow we ship has the relevant guardrails wired in pre-launch — see §8 for the ledger. NYC LL144: per-tool bias-audit harness runs every model version against representative test sets, computes impact-ratio across sex / race / intersection groups against the EEOC four-fifths rule, fails closed at <0.80 and pages people-ops + counsel; per-candidate audit-log + candidate-notice email template wired into the screening surface. Illinois AIVA: video-interview AI workflows gate on consent + plain-language disclosure; sole-basis decisions route to a human reviewer + add to annual demographic reporting. California AB 2930: pre-deployment impact-assessment template per workflow + notice / explanation surface on the candidate dashboard + counsel sign-off before deployment (status: actively monitoring — scope and effective date have shifted in successive sessions). EEOC Title VII disparate-impact: monthly rolling four-fifths-rule monitor on selection rates, alerts wired to people-ops + counsel. ADA accommodation: accessible alternative one click away on every AI surface, accommodation-request entry-point logged for recordkeeping. We do not give legal advice — your employment counsel approves the deployment, every time.
Does your `ai recruiting platform` work with Workday Recruiter, Greenhouse, Lever, Ashby, or iCIMS?
Yes — all five, and the integration pattern differs enough to flag upfront. Workday Recruiter: Connections API + custom-field write-back is the workhorse; Skills Cloud is the strong adjacency for skills-gap signal. Greenhouse: Harvest API is mature + Scorecards are an ideal surface for rubric scoring; Gem / Beamery / Findem commonly adjacent. Lever: strong CRM-style API; Feedback forms map cleanly to rubric output; Ashby's similar with a tighter UX surface for modern TA teams. iCIMS: configurable but custom-field schema needs upfront mapping; SuccessFactors handoff adds steps when the HRIS is downstream. SmartRecruiters: workable, less common in our engagements. We do not certify against every vendor's full schema; the integration scope is sized in the 2-week audit before the pilot ships. The point of the build is the workflow, not the vendor — same `hr ai software` workflow runs across the five ATSes with different field-mapping work.
What does AI onboarding cost to run per hire, and how does it integrate with our HRIS?
An `ai onboarding` workflow runs in two cost buckets. Model layer on Haiku 4.5 (volume) + Sonnet 4.6 (the few high-judgment branches like state-specific compliance routing or accommodation handling): roughly $0.40–$1.50 per hire across the first-30-day workflow. Infrastructure layer (HRIS event listening, Okta + Slack + Salesforce provisioning calls, audit-log writes, ADA alt-path surfaces): typically $0.20–$0.60 per hire. Total: roughly $0.60–$2.10 per hire. For a team hiring 50 people / quarter, model + infra typically sits in the $100–$350/month range, most of it the fixed event-listening, monitoring, and audit-log infrastructure rather than per-hire model calls; the economic story is people-ops + IT-coordinator time returned (modeled 60–90 minutes/hire on the engagements we'd build), not the model spend. HRIS integration depth varies: Workday HCM event-driven hooks are clean and the Onboarding module supports trigger-and-wait flows; BambooHR + Rippling + Gusto have simpler event surfaces ideal for SMB / growth teams; SuccessFactors Onboarding 2.0 supports event-driven flows with more configuration. Salesforce + Okta + Slack + GitHub seat provisioning ships off whatever the HRIS event surface gives us.
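The per-hire arithmetic above, as a check. The band endpoints are the figures quoted in this answer; this computes only the variable per-hire spend, assuming (as the monthly band implies) that fixed infrastructure carries the rest:

```python
# Back-of-envelope for the two cost buckets quoted above. Fixed monthly
# infrastructure (event listeners, monitoring, audit-log storage) is
# deliberately excluded: this is variable per-hire spend only.
MODEL_PER_HIRE = (0.40, 1.50)   # Haiku volume + Sonnet high-judgment branches
INFRA_PER_HIRE = (0.20, 0.60)   # HRIS events, provisioning calls, audit logs, ADA paths

def per_hire_band():
    """Low and high per-hire totals: the sum of the two buckets."""
    return (MODEL_PER_HIRE[0] + INFRA_PER_HIRE[0],
            MODEL_PER_HIRE[1] + INFRA_PER_HIRE[1])

def monthly_variable(hires_per_quarter: int):
    """Variable (per-hire) spend per month; fixed baseline not included."""
    low, high = per_hire_band()
    monthly_hires = hires_per_quarter / 3
    return monthly_hires * low, monthly_hires * high
```

At 50 hires / quarter the variable spend is only about $10–$35/month, which is why the fixed infrastructure baseline, not model calls, dominates the quoted monthly band.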
How does AI fit into people analytics — does it replace Visier or Workday Prism?
No — `people analytics software` AI sits on top of the dashboard layer, not instead of it. Visier and Workday Prism (and Lattice / Culture Amp / 15Five on engagement) are well-suited to the dashboard pass: workforce composition, headcount math, engagement trends, manager effectiveness. The AI layer's job is the *decision* — what's the attrition risk on the SF go-to-market cohort over the next two quarters given the Q4 engagement-survey shift + the manager-1:1-notes signal + the mid-quarter promotion velocity? What's the skills-gap shape on Customer Success between the target competencies and the current team (see §6 radar) and which AI workflow closes which gap without a new headcount? We ship the pipeline: HRIS + survey + 1:1 notes + Slack signal → attrition-risk model per cohort → drafted retention plays → routed to the responsible HRBP. HRBP signs every action; AI never closes a loop autonomously. Worth the build when you have >300 employees and the analytics question is operational, not strategic — the dashboard answers strategic questions, the AI workflow answers operational ones.
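The pipeline shape above can be sketched end-to-end. Everything here is a hypothetical placeholder: the signal names, the toy weighted blend standing in for the real per-cohort model, and the 0.4 routing threshold:

```python
# Sketch of signals -> per-cohort risk -> drafted play routed to the HRBP.
# The weights and threshold are illustrative, not a trained model; note the
# pipeline only drafts and routes, it never closes a loop autonomously.
from dataclasses import dataclass
from typing import Optional

@dataclass
class CohortSignals:
    cohort: str
    engagement_delta: float   # survey shift quarter-over-quarter, e.g. -0.8
    notes_risk: float         # 0-1 risk signal mined from 1:1 notes
    promo_velocity: float     # promotions per head per quarter

def attrition_risk(s: CohortSignals) -> float:
    """Toy weighted blend standing in for the real per-cohort model."""
    score = 0.5 * s.notes_risk - 0.3 * s.engagement_delta - 0.2 * s.promo_velocity
    return max(0.0, min(1.0, score))

def route(s: CohortSignals, hrbp: str, threshold: float = 0.4) -> Optional[dict]:
    """Draft a retention play and assign it to the responsible HRBP."""
    risk = attrition_risk(s)
    if risk < threshold:
        return None
    return {"cohort": s.cohort, "risk": round(risk, 2),
            "draft_play": f"Retention review for {s.cohort}",
            "assigned_to": hrbp}
```

A negative engagement shift plus elevated 1:1-notes risk pushes a cohort over the threshold; the output is a drafted play on the HRBP's desk, never an autonomous action.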
What about candidate experience — won't AI feel cold or biased?
Candidate experience is the single highest-stakes design constraint in `ai recruiting`. Three patterns keep it warm and defensible. First, AI suggests, recruiters communicate — the human is always the candidate's interface for any consequential message (advance, decline, offer); the AI drafts, the recruiter edits and sends. Second, an accessible alternative path is always available, one click away (per ADA + reasonable-accommodation duties), and we measure parity of outcome between the AI-mediated and human-only paths monthly. Third, the bias-audit harness runs against the rubric continuously — if the four-fifths rule fails for any protected group, the tool halts pending people-ops + counsel review. The honest constraint: cold candidate experience comes from removing humans from the wrong steps, not from using AI. The candidate gets faster responses (sourcing outreach + scheduling), more thoughtful prep (interview kits drafted from their resume + role + level), and a recruiter who has time for the conversation that matters because the screening busywork is gone. We measure NPS by workflow and report monthly; if a workflow reads cold in the data, we'd rather pull it than keep it live.
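The monthly parity-of-outcome check in the second pattern is mechanically simple. A sketch, assuming a hypothetical 0.80 parity floor (the text commits to measuring parity, not to this specific floor), with the halt hook left as a return value rather than a real page:

```python
# Sketch of the monthly parity check between the AI-mediated path and the
# human-only (accessible-alternative) path. The 0.80 floor is an assumed
# illustration; the real harness pages people-ops + counsel on failure.
def outcome_parity(ai_advanced: int, ai_total: int,
                   human_advanced: int, human_total: int) -> float:
    """Ratio of the lower path's advance rate to the higher path's."""
    ai_rate = ai_advanced / ai_total
    human_rate = human_advanced / human_total
    return min(ai_rate, human_rate) / max(ai_rate, human_rate)

def check_parity(ai_advanced: int, ai_total: int,
                 human_advanced: int, human_total: int,
                 floor: float = 0.80) -> bool:
    """True = parity holds; False = halt the AI-mediated path pending review."""
    return outcome_parity(ai_advanced, ai_total, human_advanced, human_total) >= floor
```

If 30 of 100 AI-path candidates advance against 50 of 100 on the human-only path, parity is 0.60 and the AI-mediated surface halts pending review.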
How much does an HR AI project cost and how long does it take?
Three tiers, pricing-locked across the cluster. (1) HR AI audit: $3K fixed, 1–2 weeks. We shadow CHRO / VP People / TA / People-Ops / DEI, score candidate workflows by recruiter + HRBP hours returned × time-to-ship × regulatory exposure (per §8 ledger), deliver a 90-day roadmap with per-workflow cost bands + regulatory assignment + an honest "these won't pay back yet" list. (2) Pilot to production: $10–25K fixed, 4–6 weeks. One workflow shipped end-to-end on your stack, regulatory-aware, audit-logged, recruiter / HRBP-in-loop, with a walk-away point at week 6 — if the metric won't move, we stop before production hardening and you don't pay phase 2. (3) Continuous HR AI team: from $5K/month, no annual contract. Embedded PM + AI engineer + people-ops analyst shipping the next workflow on your roadmap, with per-workflow monthly cost-of-ownership reporting and regulatory-ledger review before each new workflow ships. Most `ai hr company` engagements we run start with the audit, ship the first workflow on the pilot, then move to monthly for workflows two through five. Cost-per-decision reported monthly on every shipped workflow — no per-decision number, no engagement.
Ready to ship
Stop running another HR-tech pilot that dies at month 4. Hire an HR AI development partner that ships.
Book a free 30-minute HR AI audit. We'll identify two or three high-ROI candidates from your firm's HR stack, map each to the regulatory ledger, give you a per-workflow cost band, and tell you which ones won't pay back yet. No deck, no obligation to build.