ai in travel · production

AI booking, AI trip planning,
and AI in hospitality — shipped on your GDS, NDC, and OTA stack.

We're the team you hire for `ai in travel` builds that run on your real Amadeus, Sabre, Travelport, and NDC integrations — `ai booking` on the search-to-offer path, `ai trip planning` on long-tail conversion, `ai revenue management` drafted by Sonnet 4.6 with revenue-manager sign-off, `ai in hospitality` on guest messaging across the portfolio, IROPS draft fanout on ops desks, and `travel chatbot` workflows on the pre-trip / in-trip / post-trip lifecycle. Human-in-loop on every consequential decision. First workflow live in 6–8 weeks.

trip journey · ai touchpoints
  1. Inspire
  2. Search & Book
  3. Pre-trip
  4. In-trip
  5. Post-trip / Loyalty

6–8 wk
first travel AI workflow live behind a human-in-loop gate
$0.003
median model cost per pre-trip / post-trip message at scale
22 s
agent decision time on a typical IROPS re-route draft
$3K
travel AI audit-to-roadmap before any build starts
why a travel ai partner, not a saas pitch deck

From legacy GDS calls,
to live AI decisions in front of them.

`Ai in travel` has had two cycles of hype-then-stall in a decade. This one is different because the unit economics finally work per-decision — `ai booking`, `ai trip planning`, `ai revenue management` aren't strategic narratives anymore, they're $0.003–$0.04 per-inference plumbing that pays back inside a quarter on the right workflow. Three things a `travel ai company` should be honest about before you scope your first build.

From legacy GDS calls to live AI decisions

Yesterday's travel stack was an Amadeus / Sabre / Travelport call, a PMS, and a stack of business rules pasted into Sabre Red. Today's `ai in travel` stack still calls the GDS — but an AI agent sits in front of it, reading the user intent, drafting the search query, scoring the GDS response against the contract of carriage, and shaping the offer. The legacy plumbing doesn't go away; the decisioning moves into the model. You don't need another channel-manager vendor; you need a team that ships the agent in front of it.

Models picked per workflow, not per vendor

AI trip planning is a Haiku 4.5 cost decision — millions of itinerary drafts a year, eval delta under 2 pts on tight tasks. AI revenue management is a Sonnet 4.6 quality decision (mis-price a peak night and the revenue manager notices in five minutes). Travel chatbot intake on the OTA side is a GPT-5.4-mini structured-output job. Same Amadeus / Sabre / Travelport / NDC integration runs all three.

Bimodal load is the design constraint

A travel-ops loop that holds on a Tuesday afternoon melts during a Friday-evening IROPS event, a Black-Friday flash sale, or a school-break booking surge. The system that costs $74/day at baseline costs $4K/day at peak if you don't swap models, cache rate cards + policy contexts, and tighten the human-in-loop gate. That bimodality — visualized in §5 — is what every `ai travel company` engagement we run is designed around from day one.
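As a sketch of that surge design (model names real, the thresholds and routing function hypothetical), the swap can be as small as one routing decision that flips the model and closes the auto-approve band under load:

```python
from dataclasses import dataclass

@dataclass
class RoutingDecision:
    model: str
    auto_approve: bool  # False: every draft goes through the human gate

def route(requests_per_min: float, baseline_rpm: float,
          surge_multiplier: float = 10.0) -> RoutingDecision:
    """Flip to the cheap tier and close the auto-band under surge load."""
    if requests_per_min >= baseline_rpm * surge_multiplier:
        # IROPS / flash-sale surge: swap models, tighten the gate
        return RoutingDecision(model="claude-haiku-4.5", auto_approve=False)
    return RoutingDecision(model="claude-sonnet-4.6", auto_approve=True)
```

The point of keeping this a pure function is that the surge swap can be load-tested off-peak, exactly as the runbook requires.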

ai in travel, by P&L line

Six AI workflows we ship for travel orgs.
Ranked in the audit, not the slide deck.

These are the six `ai in travel` workflows that consistently pay back in the audits we run as a travel AI development team. Not every OTA, hotel chain, or airline needs all six — most teams have a high-ROI candidate in three of them. The audit ranks yours so you don't have to guess which to fund first. Buyer reality woven through: the three highest-CPC keywords in this cluster are `ai revenue management` ($21.35), `ai in hospitality` ($16.82), and `hotel ai` ($13.61) — they sit on this list for a reason.

AI booking — the search-to-confirm path

`ai booking` is the highest-volume keyword in this cluster (8,100/mo, $2.58 CPC) because every OTA, every direct-booker, every TMC is rebuilding the search-to-confirm path around an AI agent. The pattern: agent reads the free-text query ("Friday to Lisbon, premium economy, near a metro"), drafts the structured GDS / NDC search, scores the response, assembles the offer cards. The traveler still confirms before any segment is held or ticketed — autonomous ticketing is a regulated commercial decision we don't ship without explicit scoping.
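The confirm gate above can be sketched as follows; `StructuredSearch` and `commit_booking` are illustrative names, not a real GDS SDK, and the only point is that ticketing is refused without an explicit traveler confirmation:

```python
from dataclasses import dataclass
from typing import Optional

@dataclass
class StructuredSearch:
    """The shape the agent drafts from a free-text query (illustrative)."""
    origin: str
    destination: str
    date: str
    cabin: str
    max_duration_hours: Optional[float] = None  # from "under 8 hours"

def commit_booking(offer_id: str, traveler_confirmed: bool) -> str:
    """Hold/ticket only after an explicit traveler confirmation."""
    if not traveler_confirmed:
        raise PermissionError("traveler must confirm before any segment is held")
    # the real hold/ticket call happens inside the GDS / NDC adapter
    return f"held:{offer_id}"
```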

AI trip planning + itinerary drafting

`ai trip planning` (2,400/mo, $2.52 CPC) and `ai itinerary planner` (390/mo, $4.79 CPC) cover workflows that draft day-by-day plans from a free-text wish plus the live inventory feed. The honest version: trip planning is a draft-not-decide workflow. The agent drafts the plan, the traveler edits, and the booking commit happens through your real GDS / NDC adapter — never autonomously from a chat box. Done right, this returns 12–25 minutes per inquiry on TMC desks and lifts booker conversion on direct OTAs.

AI in hospitality — front-of-house + ops

`ai in hospitality` ($16.82 CPC says everything about the buyer's checkbook) covers four operator workflows in a typical hotel chain: guest-message reply drafting, front-desk request triage (extra towels vs lost passport — very different escalations), housekeeping-route optimisation as input to your existing scheduler, and pre-arrival nudges. PMS write-back stays gated. A property-level eval suite per brand keeps the tone consistent across the portfolio.

AI revenue management ($21.35 CPC says it all)

`ai revenue management` (90/mo, $21.35 CPC) is the highest-CPC keyword in the entire travel cluster — because every dollar moved on a published rate compounds across the year. The shipping pattern: agent reads competitive-set rates, the pickup curve, the event calendar, and inventory position; drafts a rate + LOS-restriction recommendation per night per room-type; revenue manager reviews and approves before push to the PMS / channel manager. Never autonomous publish on rates over a configurable threshold.
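A minimal sketch of that threshold gate, assuming a hypothetical 3% auto-band; the real band is configurable per property and per season:

```python
def disposition(current_rate: float, drafted_rate: float,
                auto_band_pct: float = 3.0) -> str:
    """Route a drafted rate: inside the band it may auto-publish,
    outside it always lands in the revenue manager's review queue."""
    delta_pct = abs(drafted_rate - current_rate) / current_rate * 100
    return "auto_publish" if delta_pct <= auto_band_pct else "rm_review"
```

Every `rm_review` disposition carries the model's rationale text with it, since that is what the revenue manager actually reviews.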

Hotel AI + airline AI for ops desks

`hotel ai` ($13.61 CPC) and `airline ai` cover the ops-desk side: arrival-day exception management for hotels, IROPS draft fanout for airlines (the disruption cascade in §5 is the exact shape we ship), crew-swap candidates against duty-time limits, and ancillary attach drafts on the booking confirmation. AI is the drafter; the duty manager, the gate agent, or the ops controller is the approver. Every action sits behind a human-in-loop gate or an audit-logged auto-band.

Travel chatbot — pre-trip, in-trip, post-trip

`travel chatbot` covers the three lifecycle moments most travel orgs underinvest in: pre-trip readiness (check-in window, baggage, visa, day-1 brief), in-trip exception handling (gate change, schedule slip, lost-bag pre-fill), and post-trip review + rebook nudge. Channel terminations vary — web, app, SMS, WhatsApp, in-app push — but there is one orchestrator behind them all (§6). Hard escalation triggers fire on visa denial, medical issues, minors traveling alone, and schedule conflicts.
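The hard-escalation check runs before any draft is generated. A minimal sketch, assuming a keyword trigger list for illustration (production pairs it with a classifier):

```python
# Illustrative trigger list; production pairs this with a classifier.
HARD_ESCALATION = ("visa", "medical", "unaccompanied minor", "schedule conflict")

def must_escalate(message: str) -> bool:
    """Return True when the message must bypass the bot and reach a human."""
    text = message.lower()
    return any(trigger in text for trigger in HARD_ESCALATION)
```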

Don't see your travel workflow?

The highest-ROI travel AI workflow on your team is usually one we haven't listed. Bring it to the 2-week audit — we'll rank it against the rest and tell you if it ships.

Tell us yours
irops · one event, six downstream loops

Disruption cascades,
the human gate is where it lives or dies.

When a flight cancels, a port closes, or a property sells out, the disruption fans out into half a dozen downstream loops that have to fire inside the next 30 minutes — passengers, connections, crew, aircraft, vouchers, loyalty. The pattern that ships: AI drafts every loop in parallel; one human-in-loop gate on the trunk reviews the routing decision; only then do the downstream actions fire. Map below is the shape every travel-ops engagement we run is built around.

root · disruption event | gate · human-in-loop | downstream · AI-drafted
  1. root: Cancelled flight DL2241 · 187 PAX impacted
  2. gate (human-in-loop): Ops-desk routing review
  3. downstream: 6 AI-drafted loops
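The cascade shape can be sketched as concurrent drafting behind a single trunk gate; `draft_loop` here is a stand-in for the real model call, and the gate callback stands in for the ops-desk review:

```python
import asyncio

LOOPS = ["passengers", "connections", "crew", "aircraft", "vouchers", "loyalty"]

async def draft_loop(name: str) -> str:
    await asyncio.sleep(0)  # stand-in for the model call drafting this loop
    return f"draft:{name}"

async def handle_disruption(ops_desk_approves) -> list:
    # Draft all six downstream loops concurrently...
    drafts = list(await asyncio.gather(*(draft_loop(n) for n in LOOPS)))
    # ...but nothing fires until the single trunk gate approves the routing.
    if not ops_desk_approves(drafts):
        return []  # rejected at the gate: no downstream action fires
    return drafts  # approved: fan the drafted actions out
```

Drafting in parallel is what keeps the whole cascade inside the 30-minute window; gating on the trunk is what keeps it safe.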
distribution · one orchestrator, six channels

AI orchestrator at the center,
the channels fan out from there.

Direct web, mobile app, call center, OTA, GDS, NDC — every travel buyer interaction terminates on one of six channel surfaces, each with its own protocol, latency budget, and integration personality. The pattern that ships: a single AI orchestrator at the center owns the decisioning, fans out to the channel adapter that fits the call, and reconciles state back. Click any spoke for that channel's architecture and the AI orchestration role behind it.

hub · AI orchestrator | spoke · channel adapter | pulse · decisioning trace
  1. hub: AI Orchestrator · decision · routing · state
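A minimal hub-and-spoke sketch under these assumptions (adapter names illustrative): the orchestrator owns the routing decision, each registered spoke owns the real channel call:

```python
ADAPTERS = {}

def adapter(channel: str):
    """Register a channel-adapter spoke with the orchestrator hub."""
    def register(fn):
        ADAPTERS[channel] = fn
        return fn
    return register

@adapter("web")
def web_adapter(payload: dict) -> dict:
    return {"channel": "web", **payload}   # real call: render offer cards

@adapter("gds")
def gds_adapter(payload: dict) -> dict:
    return {"channel": "gds", **payload}   # real call: EDIFACT search

def orchestrate(channel: str, payload: dict) -> dict:
    """The hub owns the routing decision; the spoke owns the real call."""
    if channel not in ADAPTERS:
        raise KeyError(f"no adapter registered for channel {channel!r}")
    return ADAPTERS[channel](payload)
```

New channels (SMS, NDC, call-center) are added as spokes without touching the hub's decisioning.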
model picks per travel workflow

The model matrix.
Per workflow, not per vendor.

Same Amadeus / Sabre / Travelport / NDC stack runs four model picks. Sonnet 4.6 wins where reviewer-trust matters (revenue management rationale, IROPS draft for ops, complex guest-reply edge cases). Haiku 4.5 wins on high-volume routing (booking + trip-planning + chatbot at OTA scale) and is the surge-mode swap when IROPS or flash-sale hits. GPT-5.4-mini is the structured-output specialist for intake + review templates. GPT-5.4 sits on long-form rationale (long-horizon mix-shift, multi-leg complex recovery). Cost-per-decision is roughly current — verify on your own usage before locking a pick.

Models compared: Claude Sonnet 4.6 (Anthropic · quality tier) · Claude Haiku 4.5 (Anthropic · cheap, fast) · GPT-5.4-mini (OpenAI · structured output) · GPT-5.4 (OpenAI · long reasoning)

AI booking · search → offer. Free-text query → structured search → ranked offers. Latency budget is sub-second on direct web.
  • Sonnet 4.6: Best on complex multi-leg + premium-cabin offers
  • Haiku 4.5: Default · 7× cheaper at OTA volume
  • GPT-5.4-mini: Strong structured-output adherence on offer JSON
  • GPT-5.4: Overkill at the volume tier; cost doesn't break even

AI trip planning · itinerary draft. Day-by-day plans from wish + inventory. Traveler edits before commit.
  • Sonnet 4.6: Best on tone, balanced day arcs, brand voice
  • Haiku 4.5: Default · scales on consumer-OTA volume
  • GPT-5.4-mini: Solid on structured day-grids; weaker on narrative
  • GPT-5.4: Tied · pick on stack preference + cache strategy

AI revenue management · rate draft. Compset + pickup + events → rate + LOS recommendation. RM signs off.
  • Sonnet 4.6: Default · revenue-manager-trustable rationale text
  • Haiku 4.5: Workable for routine nights; drifts on peak/event
  • GPT-5.4-mini: Strong on JSON shape; weaker on rationale
  • GPT-5.4: Tied · long-horizon mix-shift modeling strength

AI in hospitality · guest reply. Brand-voice reply on portal / SMS / WhatsApp. Front-desk approval per brand.
  • Sonnet 4.6: Best on brand-tone retention across the portfolio
  • Haiku 4.5: Fine on FAQ; tone drift on edge cases
  • GPT-5.4-mini: Strong on structured intake forms
  • GPT-5.4: Best on multi-turn complex requests

IROPS draft fanout (airline). Re-route options per cohort. Ops desk reviews. §5 is the full map.
  • Sonnet 4.6: Default · explains the trade-offs ops trusts
  • Haiku 4.5: Reserve for cohort-bulk drafts only
  • GPT-5.4-mini: Strong on JSON; weaker on cohort rationale
  • GPT-5.4: Tied on complex multi-leg recovery

Travel chatbot · pre-trip / post-trip. Routine nudges + post-stay reviews. Hard escalation on red-flag inputs.
  • Sonnet 4.6: Best on edge-case tone
  • Haiku 4.5: Default · scales to push-burst volume
  • GPT-5.4-mini: Best on structured intake + review templates
  • GPT-5.4: Cost prohibitive at message-burst scale

Surge-mode swap (IROPS / flash sale). Which model the routing layer flips to under 10×+ baseline load.
  • Sonnet 4.6: Cost spikes hard at surge volume
  • Haiku 4.5: The surge swap · 7× cheaper at scale
  • GPT-5.4-mini: Alt surge target on OpenAI stacks
  • GPT-5.4: Reserve for off-peak rationale only

Cost figures are typical per-decision spend with prompt caching warm and standard travel context sizes (PNR excerpt + offer snippet, not full PNR + full inventory dump). Run your own benchmark before locking a model pick; vendor prices, DPA terms, and NDC-level support shift quarterly.

ai in travel — when it's the wrong answer

Four places we'll tell you no.
Honest scoping > pretty deck.

Most `ai in travel` pitch decks have an AI answer for every problem. Most production travel teams should refuse at least one of these. If your team is scoping any of them, we'll say so in the audit — and we won't bill phase 2 to find out. PNR-grade GDPR posture, DOT consumer-protection awareness, IATA NDC-level honesty, and ADA accessibility are not just compliance checkboxes — they're the difference between a workflow that ships and one that gets pulled in week 9.

Cross-border PNR data on a non-scoped model — GDPR risk

PNR (Passenger Name Record) data is regulated personal data under GDPR, with extra friction on cross-border transfers (EU↔US, EU↔UAE, etc.). Dumping a full PNR into a generic foundation-model API without scoping is a Schrems-grade exposure, full stop. The pattern we ship: strip the PNR to the minimum viable payload before the model call, route through an EU-resident inference endpoint where the data origin requires it, and audit-log every inference. If your compliance team can't tell us today which model endpoints they've approved for PNR, the audit deliverable starts with a model-endpoint approval matrix — not a chatbot demo.
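The minimisation step can be sketched as an allow-list filter; the field names here are illustrative, not a real PNR schema:

```python
# Illustrative field names, not a real PNR schema.
ALLOWED_FOR_REROUTE = {"record_locator", "segments", "cabin", "ssr_codes"}

def minimise_pnr(pnr: dict, allowed: set = ALLOWED_FOR_REROUTE) -> dict:
    """Keep only allow-listed fields; everything else never reaches the model."""
    return {k: v for k, v in pnr.items() if k in allowed}
```

The allow-list is per workflow: the re-route drafter sees segments and SSR codes, never the passport scan or payment history.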

Autonomous customer comms that breach DOT consumer protection

US DOT consumer-protection rules (24-hour refund rule, Tarmac Delay Rule, mandatory disclosures on baggage / cancellation / change-fee changes) are not suggestions — they're enforced. An AI agent that autonomously sends a passenger a re-route confirmation, a comp offer, or a refund denial without a human signing off on the policy interpretation is a CFR-violation waiting to happen. We won't ship autonomous customer comms on any DOT-regulated decision. The bot drafts, a human approves, the customer hears.

Marketing an NDC level the carrier doesn't actually support

IATA NDC adoption is uneven. Carrier NDC level (1 / 2 / 3 / 4) determines which offers, ancillaries, and servicing actions are actually possible — and the marketing claim "we support NDC" rarely matches the operational reality. Pitching an `ai booking` feature that depends on level-4 ancillary servicing when 60% of your carriers are at level-2 is how engagements die at month 4. We'll say so in the audit — and if the carrier list doesn't support the feature, we won't build it; we'll spec the fallback to GDS instead.

Accessibility regressions on AI-built booking flows — ADA risk

AI-generated booking UIs, AI-driven dynamic forms, and AI-personalized rendering routinely break accessibility — keyboard traps, missing labels, contrast regressions, dynamic content that screen readers can't announce. ADA Title III + DOJ guidance applies to travel-industry e-commerce, and the lawsuit volume on inaccessible booking flows has been climbing. We ship every AI-driven UI with an accessibility eval suite (axe-core in CI, screen-reader smoke test, keyboard-only path) — and we'll fail the workflow before it ships if the eval fails. "It's only AI-generated, the accessibility team will fix it later" is the failure mode we won't underwrite.

travel ai patterns we ship

Three capability patterns.
Hypothetical scopes — operator-grade specifics.

Cases below are hypothetical capability patterns shaped to the OTA / hotel-chain / regional-airline archetypes we scope most often. We're not claiming named travel-brand references; the directional metrics are drawn from operator-grade published trials and the engagement shape is the one we ship. Named references shared under NDA once we know what you're building.

Hypothetical pattern · mid-size OTA · ~3M monthly searchers

AI booking + AI trip planning — Haiku 4.5 on the search-to-offer path

Problem

Hypothetical: a mid-size OTA with a long-tail destination problem — head destinations convert well on the legacy search-to-offer path, but the long tail of "warm in February, under 8 hours, family-friendly" free-text intents leaks 60–70% of clicks before offer-render. Conversion data shows the booker abandons before any GDS / NDC call returns.

Approach

Agent at the front of the booking funnel reads the free-text wish, drafts a structured destination + date-window search, scores the GDS / NDC response against the wish ("under 8 hours" → flight-duration filter), assembles the offer cards. Inventory truth stays in the GDS / NDC adapter — agent never invents an offer. NDC-level-aware fallback to GDS when carrier capability is below level 3.

Haiku 4.5 · Amadeus + NDC adapter · FastAPI · Langfuse · Cloudflare Worker
Outcome
≈ 4–6 pts directional uplift on long-tail conversion in similar engagements
Hypothetical pattern · mid-size hotel chain · ~40 properties

AI in hospitality + AI revenue management — Sonnet 4.6 with RM sign-off

Problem

Hypothetical: a 40-property mid-scale chain with a portfolio-level rate-discipline problem (each property RM uses different rules-of-thumb), an under-leveraged guest-messaging surface (95% of post-stay messages get a templated reply), and a flu-season + event-week peak that nobody enjoys staffing.

Approach

Two parallel workflows on the same Sonnet 4.6 stack: (a) `ai revenue management` — drafts nightly rate + LOS recommendations from compset + pickup + event calendar + position; RM reviews and approves before PMS push; never autonomous on rates over a configurable threshold; (b) `ai in hospitality` — guest-message reply drafting in brand voice, front-desk approval flow, hard escalation on red-flag terms (medical, accessibility, complaint-escalation). Per-property eval suite keeps tone consistent.

Sonnet 4.6 · Channel-manager webhook · PMS REST · Langfuse · pgvector compset index
Outcome
≈ 1.5–3 pts directional RevPAR uplift in published mid-scale RM trials
Hypothetical pattern · regional airline · ~80 daily departures

IROPS draft fanout + travel chatbot — Sonnet 4.6 + Haiku 4.5 hybrid

Problem

Hypothetical: a regional carrier with a winter-IROPS problem — every weather event burns 4–6 hours of ops-desk cycles on the re-route + crew-swap + voucher draft cycle, and the customer-comms backlog routinely lags the operational decision by 90+ minutes. Loyalty leakage compounds on every event.

Approach

Two-tier stack against the disruption-cascade shape in §5: (a) Sonnet 4.6 drafts the re-route options, the connection recovery, the crew-swap candidates against FAR Part 117, and the tail re-position plan — ops desk reviews and approves at the gate; (b) Haiku 4.5 + GPT-5.4-mini handle the downstream comms fanout (voucher emails, loyalty offers, push notifications) within published policy envelopes, with hard escalation on any out-of-envelope case. Surge-mode model swap pre-tested off-peak.

Sonnet 4.6 · Haiku 4.5 · GPT-5.4-mini · Sabre + NDC adapter · Langfuse
Outcome
≈ 35–50% directional cycle-time reduction in operator-grade IROPS pilots
how we ship travel ai in 6–8 weeks

Four stages.
With a kill point at week 6.

Every `ai travel company` engagement we run uses the same loop: audit, pilot, ship, scale. The pilot has an explicit walk-away point at week 6 — if the metric won't move, we stop before production hardening and you don't pay phase 2. No retainer trap, no scope-creep into year-long implementations.

  1. Weeks 1–2

    Travel AI audit

    Two-week shadow with commercial, distribution, revenue, and ops teams. We rank candidate `ai in travel` workflows by traveler-experience uplift × time-to-ship × regulatory risk (PNR / GDPR, DOT, IATA NDC level, ADA accessibility), list per-workflow cost band, and call out the workflows that won't pay back yet so you don't fund them. Model-endpoint approval matrix delivered before any PNR is in scope.

    90-day travel AI roadmap, ranked, with cost bands + regulatory posture
  2. Weeks 3–6

    Pilot — one workflow, human-in-loop gate

    We build the single highest-ROI candidate against your real Amadeus / Sabre / Travelport / NDC stack. Live behind a human-in-loop gate (revenue manager / ops controller / front-desk approver), baseline vs. assisted runs measured, surge-mode config (steady + IROPS / flash-sale) tested before any go-live.

    One travel workflow live behind a human-in-loop gate with eval data
    Walk-away point
  3. Weeks 7–8

    Ship to production

    Production hardening: Langfuse traces, retry + fallback policies, surge-mode runbook (IROPS / flash-sale), eval suite gated in CI, accessibility checks in CI, audit-log review with your compliance lead. Walk-through with commercial + ops + distribution — the workflow goes live with humans in the loop, not as an internal demo.

    Production workflow + surge-mode runbook + audit-log review
  4. Ongoing

    Scale to next workflow

    Most `ai travel company` engagements we run get to 3–5 workflows live by month 6. Same eval harness, same Langfuse spans, same channel orchestrator (§6), same cost-reporting cadence. Compounding learning across booking → trip planning → revenue management → chatbot → IROPS.

    3–5 travel AI workflows live by month 6
engagement models

Three ways to engage.
Hire us at the tier that fits where you are.

Most `ai for travel` clients start with the 2-week audit, hire us to ship one workflow on a pilot, then move to monthly for the next three to five. Cost-per-decision reported monthly on every shipped workflow — no per-decision number, no engagement.

1–2 weeks

Travel AI audit

Find which AI workflows pay back on your GDS / NDC / OTA stack — before you commit a budget.

$3K fixed
  • Operator shadow with commercial / distribution / revenue / ops
  • Workflow scoring: experience uplift × time-to-ship × risk
  • Per-workflow cost band ($300–$2,500/mo)
  • Model-endpoint approval matrix for PNR / GDPR scope
  • 90-day travel AI roadmap with named candidates
  • Honest list of workflows that won't pay back yet
Book the travel AI audit
Most teams start here
4–8 weeks

Pilot to production

Hire us to ship one travel AI workflow end-to-end, human-in-loop, on your real distribution stack.

$10–25K fixed price
  • Build, integrate, deploy on Amadeus / Sabre / Travelport / NDC
  • Steady-state + surge-mode (IROPS / flash-sale) config tested pre-launch
  • Accessibility eval (axe-core in CI) on any AI-driven UI
  • Eval suite, Langfuse traces, retry + fallback runbook
  • Walk-away point — if the metric won't move, no phase 2
Hire us for the pilot
Monthly

Continuous travel AI team

Embedded travel AI engineers shipping the next workflow on your roadmap.

from $5K per month
  • PM + AI engineer + travel-ops analyst, embedded
  • Per-workflow monthly cost-of-ownership report
  • Surge-readiness review before peak booking + holiday-IROPS windows
  • Cancel any time — no annual contract
Talk to a travel AI engineer
PNR-scoped model approval before any inference · DOT consumer-protection-aware customer comms · Accessibility eval in CI on every AI-driven UI · No annual contract
frequently asked — ai in travel

Questions travel teams ask first.
Real answers, no hedging.

What does AI in travel actually ship — beyond the demo videos?

An `ai in travel` engagement we ship looks like this: scope which workflow moves a P&L line (most often `ai booking` on the search-to-offer path, `ai trip planning` on long-tail conversion, `ai revenue management` on rate decisioning, `ai in hospitality` on guest messaging, IROPS draft fanout on ops desks, or a `travel chatbot` on pre-trip / in-trip / post-trip lifecycle moments), get the model-endpoint approval matrix signed off for PNR / GDPR scope, build the integration against your real Amadeus / Sabre / Travelport / NDC tenant, pick the right model per workflow (Sonnet 4.6 for revenue + IROPS rationale, Haiku 4.5 for high-volume search + drafts, GPT-5.4-mini for structured intake), bake in audit-logging on every inference, ship behind a human-in-loop gate, then operate the workflow long enough to prove cost-of-ownership before scaling. We do not sell a product; we ship one workflow at a time and report cost-per-decision monthly. First workflow live in six to eight weeks.

What is AI booking and how does it work with our GDS?

`Ai booking` is the highest-volume keyword in the travel cluster (8,100/mo) because every OTA, TMC, and direct-booker is rebuilding the search-to-confirm path around an AI agent. The operator-grade pattern: agent sits in front of the GDS / NDC, reads the traveler's free-text query, drafts the structured search, scores the GDS / NDC response against the query intent, assembles the offer cards. Inventory truth never leaves the GDS / NDC adapter — the agent never invents an offer. Traveler confirms before any segment is held or ticketed; autonomous ticketing is a regulated commercial decision we don't ship without explicit scoping. On Amadeus / Sabre / Travelport we write per-GDS adapters — the EDIFACT dialects and the quirks of availability vs schedule queries differ enough that a single "GDS layer" abstraction is the wrong shape. NDC adoption is uneven — carrier NDC level (1 / 2 / 3 / 4) determines what's possible, so the orchestrator (§6) handles the routing decision per call.
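That per-call routing decision can be sketched like this; the level-to-capability mapping is an illustrative assumption, not an official IATA table:

```python
# Illustrative level-to-capability mapping, not an official IATA table.
MIN_NDC_LEVEL = {"search": 1, "book": 2, "ancillary_servicing": 4}

def route_channel(carrier_ndc_level: int, capability: str) -> str:
    """Route to NDC only when the carrier's level supports the capability."""
    required = MIN_NDC_LEVEL.get(capability)
    if required is not None and carrier_ndc_level >= required:
        return "ndc"
    return "gds"  # unknown or unsupported capability: conservative GDS fallback
```

This is the mechanism behind "we won't market an NDC level the carrier list doesn't support": the fallback is specced per capability, per carrier, before the feature is pitched.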

How does AI revenue management work and does it actually move RevPAR?

`Ai revenue management` is the highest-CPC keyword in the travel cluster ($21.35) for a reason: every dollar moved on a published rate compounds across the year. The pattern that ships: agent reads the competitive-set rates, the pickup curve, the event calendar, the inventory position; drafts a rate + LOS-restriction recommendation per night per room-type; the revenue manager reviews and approves before push to the PMS / channel manager. Never autonomous publish on rates over a configurable threshold. Realistic outcomes on mid-scale chains: a directional 1.5–3 percentage-point RevPAR lift in published trials, with the bulk of the value coming from rate-discipline consistency across the portfolio rather than peak-night home runs. The audit will tell you whether your compset + pickup data is clean enough to build an honest baseline — if it isn't, that's the first phase before any model touches a rate.

What about PNR data, GDPR, and cross-border transfers?

PNR data is regulated personal data under GDPR with extra friction on cross-border transfers (EU↔US, EU↔UAE, EU↔SG, etc.) and on retention scope. Operator-grade specifics: (1) Model-endpoint approval matrix signed off before any PNR is in scope — your compliance team confirms which model endpoints, in which jurisdictions, are approved for which data classes. (2) PNR minimisation pre-prompt: we strip the PNR to the minimum viable payload before the model call. The agent doesn't need the full passport scan + frequent-flyer-number history to draft a re-route. (3) Region-resident inference where the data origin requires it: EU-resident endpoints for EU-originated PNRs, with the vendor's data-processing addendum (DPA) and standard contractual clauses on file. (4) Audit log on every inference — request ID, model, tokens, scrubbing actions, retrieved context, the reviewer who approved the output. What we don't claim: a turnkey "GDPR-compliant AI" product — GDPR compliance is a posture of your team, your vendors, and your model-endpoint contracts, and the audit deliverable starts by mapping that posture honestly.
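The per-inference audit record from point (4) can be sketched as a dataclass; the field names are assumptions modeled on the list above, not a fixed Langfuse schema:

```python
from dataclasses import dataclass, field
from typing import Optional
import uuid

@dataclass
class InferenceAuditRecord:
    """One row per model call; approved_by stays None until sign-off."""
    model: str
    tokens_in: int
    tokens_out: int
    scrubbing_actions: list                 # e.g. ["pnr_minimised"]
    approved_by: Optional[str] = None       # reviewer who approved the output
    request_id: str = field(default_factory=lambda: str(uuid.uuid4()))
```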

Can you integrate AI with Amadeus, Sabre, Travelport, and NDC?

Yes — and the failure modes are different per stack, called out in §6 above. Quick recap. Amadeus: EDIFACT-based, MQ transport, deep schemas; the quirks live in availability vs schedule queries and in PNR command structure. Sabre: similar EDIFACT shape with different dialect choices and a different command grammar on Sabre Red — per-host adapter is the right shape, not a generic "GDS layer". Travelport (Galileo / Apollo / Worldspan): three legacy hosts under one brand with three different command grammars; capability varies by host. NDC: modern XML-over-HTTPS standard but adoption is uneven — carrier NDC level (1 / 2 / 3 / 4) determines what's possible, and ancillary support is the laggard. The pattern we ship: one orchestrator at the center (§6 octopus), per-channel adapters as spokes, the AI agent owns interpretation + draft + scoring, the channel adapter owns the real call. Channel managers (SiteMinder, RateGain, Cloudbeds) abstract a lot of the OTA-side mess but introduce their own retry-semantics quirks — design idempotent on the orchestrator side.

When should we NOT use AI in travel?

Four places we'll say no — covered in §8 above and worth repeating. (1) Dumping a full PNR into a non-scoped model endpoint is GDPR exposure, not a feature. Scope the model-endpoint approval matrix before any inference. (2) Autonomous customer comms on DOT-regulated decisions (refund denial, Tarmac Delay disclosures, mandatory baggage / cancellation notifications) is a CFR-violation waiting to happen — bot drafts, human approves, customer hears. (3) Marketing an NDC level the carrier list doesn't actually support is how engagements die at month 4 — the audit checks carrier capability honestly and specs the GDS fallback. (4) Shipping an AI-built booking flow that breaks accessibility (keyboard traps, missing labels, contrast regressions, screen-reader-invisible dynamic content) is ADA exposure under DOJ guidance — every AI-driven UI ships with an accessibility eval suite in CI or it doesn't ship. Beyond those four, we'll also say no on any workflow where the data isn't clean enough for a baseline, where the regulatory posture is unclear, or where the metric won't move within the pilot window.

How does AI in hospitality differ from AI in airlines?

Both buy `ai in travel` services and they share an orchestrator topology (§6), but the workflows that pay back are different. `Ai in hospitality` ($16.82 CPC) tilts heavily toward guest-message reply drafting, revenue management ($21.35 CPC), arrival-day exception management, housekeeping-route inputs, and pre-arrival nudges — bimodal load on flu season + event weeks rather than discrete IROPS events. `Hotel ai` is brand-voice-intensive: a 40-property portfolio with one tone-eval suite per brand keeps the guest experience consistent. `Airline ai` and `aviation ai` tilt toward IROPS draft fanout (§5 is the exact shape), gate-agent assist, crew-swap drafts against FAR Part 117 duty limits, IATA NDC-level-aware booking and servicing, and ancillary attach on confirmation. The disruption-event shape is the design constraint: airlines design around a 9-times-a-year cascade event, hotels design around a 90-day surge season. Both ship behind a human-in-loop gate.

How much does an AI travel project cost and how long does it take?

Three tiers, pricing-locked across the cluster. (1) Travel AI audit: $3K fixed, 1–2 weeks. We shadow commercial + distribution + revenue + ops, score candidate workflows, deliver a 90-day roadmap with per-workflow cost bands, a model-endpoint approval matrix for PNR / GDPR scope, and an honest "these won't pay back yet" list. (2) Pilot to production: $10–25K fixed, 4–8 weeks. One workflow shipped end-to-end on your Amadeus / Sabre / Travelport / NDC stack, human-in-loop gate, accessibility eval in CI, surge-mode config tested, walk-away point at week 6 — if the metric won't move, we stop before production hardening and you don't pay phase 2. (3) Continuous travel AI team: from $5K/month, no annual contract. Embedded PM + AI engineer + travel-ops analyst shipping the next workflow on your roadmap, with per-workflow monthly cost-of-ownership reporting and a surge-readiness review before peak booking + holiday-IROPS windows. Median model cost at scale is ~$0.003 per pre-trip / post-trip message and ~$0.04 per revenue-management rate decision — verify on your own usage before locking a pick.

Ready to ship

Stop running another travel-AI pilot that dies at month 4.
Hire a travel AI team that ships.

Book a free 30-minute travel AI audit. We'll identify two or three high-ROI candidates from your distribution stack, give you a per-workflow cost band, and tell you which ones won't pay back yet. No deck, no obligation to build.

30 min, async or live · Model-endpoint approval matrix on request · You leave with a written roadmap