Glossary · Usage & Metering · Updated 2026-03-16

Agentic AI

AI systems that autonomously plan, reason, and execute multi-step tasks by chaining multiple LLM calls, tool invocations, and decision loops. Agentic workflows generate unpredictable and often enormous token consumption — 10x to 100x more than single-turn queries — making them the highest-cost AI pattern in production. Without per-session monitoring and cost guardrails, agent runs can consume hundreds of dollars in minutes.

Definition

What is Agentic AI?

Agentic AI refers to AI systems that go beyond single-request, single-response interactions to autonomously plan and execute multi-step workflows. An AI agent receives a high-level goal ("research competitor pricing and produce a summary report"), decomposes it into sub-tasks, decides which tools to use (web search, database queries, API calls, code execution), executes those tools, interprets the results, adjusts its plan based on what it learns, and iterates until the goal is complete. Each step in this process involves one or more LLM calls, and the total token consumption for a single agent session can range from 10,000 tokens (simple two-step task) to 500,000+ tokens (complex research or coding task with many iterations).

The defining characteristic of agentic AI from a cost perspective is unpredictability: unlike a standard API call where you control the input and output length, an agent dynamically decides how many steps to take, how many tools to invoke, and how much context to accumulate. A coding agent asked to "fix this bug" might resolve it in 3 LLM calls and 5,000 tokens, or it might explore 15 dead ends across 40 LLM calls and 200,000 tokens — the cost difference is 40x, and you cannot predict which outcome you will get.

Leading agentic AI frameworks include LangChain, CrewAI, AutoGen, Claude Code, Codex, and OpenAI's Assistants API. CostHawk provides per-session agent cost tracking and budget enforcement to manage this inherent unpredictability.

Impact

Why It Matters for AI Costs

Agentic AI is simultaneously the most capable and the most expensive pattern in AI application development. The cost implications are profound and often catch teams off guard:

The cost multiplication effect: A single agent session chains multiple LLM calls together, with each call potentially including the full accumulated context from previous steps. Consider a simple three-step agent workflow:

  1. Planning step: 500 input tokens (instruction) + 300 output tokens (plan) = 800 tokens
  2. Execution step: 500 (instruction) + 300 (plan) + 2,000 (tool results) + 500 output = 3,300 tokens
  3. Synthesis step: 500 (instruction) + 300 (plan) + 2,000 (tool results) + 500 (execution output) + 800 output = 4,100 tokens

Total: 8,200 tokens for a 3-step task. A comparable single-turn query would cost 800 tokens. The agent used 10x more tokens for the same goal. Now extend this to a 15-step coding agent session where each step accumulates the full conversation history:

Pattern | Typical Steps | Total Tokens | Cost (GPT-4o) | Cost (Claude 3.5 Sonnet)
Single query | 1 | 800 | $0.006 | $0.008
RAG query | 2 | 4,000 | $0.030 | $0.040
Simple agent | 3–5 | 15,000 | $0.100 | $0.140
Complex agent | 10–20 | 80,000 | $0.550 | $0.750
Deep research agent | 20–50 | 300,000 | $2.10 | $2.85
Coding agent (runaway) | 50–100+ | 1,000,000+ | $7.00+ | $9.50+

A single runaway coding agent session on Claude 3.5 Sonnet can cost $10+ in minutes. If you have 50 developers using a coding agent daily with an average of 20 sessions each, and 5% of sessions run away, that is 50 runaway sessions per day at $5–$10 each — $250–$500/day in just runaway sessions, or $7,500–$15,000/month.
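The three-step arithmetic from the workflow above reduces to a small sketch. The numbers are the example's own; the helper function is purely illustrative:

```javascript
// Total tokens for a chained workflow: each step re-sends everything
// produced so far as input, then adds its own output
function sessionTokens(steps) {
  return steps.reduce((sum, s) => sum + s.input + s.output, 0)
}

// The three-step example: planning, execution, synthesis
const steps = [
  { input: 500, output: 300 },                    // instruction → plan
  { input: 500 + 300 + 2000, output: 500 },       // + plan + tool results
  { input: 500 + 300 + 2000 + 500, output: 800 }, // + execution output
]

console.log(sessionTokens(steps)) // 8200 — vs ~800 for a single-turn query
```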

CostHawk's agent monitoring tracks per-session token consumption in real time, enforces per-session and per-user budgets, and alerts when sessions exceed cost thresholds — preventing runaway agents from consuming your entire monthly budget in a single afternoon.

How Agentic AI Works

Agentic AI represents a paradigm shift from AI as a tool (you ask, it answers) to AI as an autonomous worker (you assign a goal, it plans and executes). Understanding the components and patterns of agentic systems is essential for managing their costs:

Core components of an AI agent:

  • LLM backbone: The foundation model that powers the agent's reasoning, planning, and language capabilities. This is the primary cost center — every time the agent "thinks," it consumes LLM tokens.
  • Tool access: Functions the agent can invoke to interact with the outside world — web search, database queries, API calls, file system operations, code execution, browser automation. Tool invocations themselves may have costs (search API fees, compute costs), and the results are fed back to the LLM as additional input tokens.
  • Memory/context: The accumulated state from previous steps, including the original goal, plan, tool results, and intermediate reasoning. This context grows with each step and is included in subsequent LLM calls, so total token consumption grows quadratically with step count rather than linearly.
  • Planning and reasoning: The agent's ability to decompose goals into sub-tasks, choose which tools to use, evaluate results, and adjust its approach. This metacognitive capability is what makes agents powerful — and expensive, because planning requires LLM calls that consume tokens without directly producing user-visible output.
  • Feedback loops: Agents operate in loops — plan, execute, evaluate, adjust, repeat. Each iteration of the loop involves at least one LLM call, and many involve multiple calls (one for tool selection, one for parameter generation, one for result interpretation). A 10-iteration loop with 3 LLM calls per iteration means 30 LLM calls for a single agent session.

Common agentic patterns:

  • ReAct (Reasoning + Acting): The agent alternates between reasoning (thinking about what to do) and acting (executing tools). Each reasoning step consumes output tokens; each action consumes input tokens (tool results).
  • Plan-and-Execute: The agent creates a full plan upfront, then executes each step sequentially. This front-loads planning tokens but can reduce total tokens if the plan avoids unnecessary iterations.
  • Multi-agent collaboration: Multiple specialized agents work together, each handling a different aspect of the task. This increases total token consumption because each agent maintains its own context and communicates via LLM-generated messages.
  • Autonomous coding: Agents like Claude Code and Codex read code, plan changes, write code, run tests, and iterate until tests pass. These are among the most token-intensive agentic patterns, with a single session consuming 50K–500K+ tokens.
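A minimal ReAct-style loop can be sketched as follows. The `llm` and `tools` interfaces here are assumptions for illustration, not any specific framework's API:

```javascript
// ReAct skeleton: alternate reasoning (LLM call) and acting (tool call).
// Every iteration spends output tokens on the thought, and the tool
// observation joins the history that the next call must re-send as input.
async function reactLoop(llm, tools, goal, maxIters = 10) {
  const history = [{ role: "user", content: goal }]
  for (let i = 0; i < maxIters; i++) {
    const thought = await llm.complete(history)          // reasoning step
    history.push({ role: "assistant", content: thought.text })
    if (thought.finalAnswer) return thought.text         // goal reached
    const observation = await tools.run(thought.action)  // acting step
    history.push({ role: "tool", content: observation }) // grows the context
  }
  return null // iteration budget exhausted without an answer
}
```

Each pass through the loop is at least one LLM call, which is why iteration counts dominate agent cost.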

Why Agents Are Expensive

Agentic AI is expensive for five structural reasons that compound to create token consumption 10–100x higher than single-turn queries:

1. Multi-step chains multiply token usage. Each step in an agent workflow involves at least one LLM call. A 10-step workflow means 10+ LLM calls instead of 1. But the token count does not just multiply by 10 — it grows faster because each subsequent call includes the accumulated context from all previous steps.

2. Context accumulation creates quadratic growth. In a typical agent session, each LLM call includes: the system prompt (constant, ~1,000 tokens), the original goal (~200 tokens), and the full conversation history (growing). By step 10, the conversation history might contain roughly 17,000 tokens from previous steps' reasoning and tool results; by step 20, nearly 47,000. This means step 20's input cost alone exceeds the total cost of steps 1–5 combined. The pattern in numbers:

Step | Input Tokens (cumulative context) | Output Tokens | Cumulative Total Tokens
1 | 1,200 | 300 | 1,500
3 | 3,500 | 400 | 8,100
5 | 7,200 | 350 | 19,500
10 | 18,000 | 500 | 62,000
15 | 32,000 | 450 | 135,000
20 | 48,000 | 600 | 240,000
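This accumulation pattern can be reproduced with a toy model. The per-step numbers below are assumptions chosen only to show the superlinear shape, not measurements:

```javascript
// Toy model of context accumulation: each step appends its output plus
// tool results to the history, and the next step re-sends that history
function simulateSession(numSteps, opts = {}) {
  const { system = 1000, goal = 200, outputPerStep = 400, toolPerStep = 1500 } = opts
  let history = 0 // tokens carried forward from earlier steps
  let total = 0   // cumulative tokens billed across the whole session
  for (let step = 1; step <= numSteps; step++) {
    const input = system + goal + history
    total += input + outputPerStep
    history += outputPerStep + toolPerStep
  }
  return total
}

console.log(simulateSession(10)) // 101500
console.log(simulateSession(20)) // 393000 — ~3.9x, not 2x, for double the steps
```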

3. Tool calls add input tokens. Every tool the agent invokes returns results that become part of the context. A web search might return 2,000 tokens of snippets. A database query might return 5,000 tokens of data. A file read might return 10,000 tokens. These tool results accumulate in the context, amplifying the quadratic growth pattern described above.

4. Retries and error recovery waste tokens. When an agent encounters an error (a tool call fails, code does not compile, a web page returns unexpected content), it must reason about the error and try an alternative approach. Each retry is a full LLM call with the complete accumulated context. In complex coding tasks, agents may attempt 3–5 approaches before finding one that works, multiplying token consumption by the number of attempts.

5. Planning and reasoning consume invisible tokens. The agent's internal reasoning — deciding what to do next, evaluating whether a result is satisfactory, reformulating its approach — generates output tokens that are necessary for the agent's operation but provide no direct value to the user. In reasoning models (o1, Claude with extended thinking), these "thinking tokens" can exceed the visible output by 5–10x, creating a substantial hidden cost layer.

Agentic Cost Patterns

Understanding the cost distribution across different agentic patterns helps you predict costs and choose the right architecture for your use case:

Pattern 1: Simple tool-augmented query (2–3 steps)

The agent receives a question, decides to call one tool (web search, database lookup), incorporates the result, and generates a response. This is the lightest agentic pattern.

  • LLM calls: 2–3
  • Total tokens: 3,000–8,000
  • Cost range: $0.02–$0.06 (GPT-4o)
  • Cost vs single query: 3–8x
  • Example: "What were our sales last quarter?" → agent queries database → generates summary

Pattern 2: Research agent (5–15 steps)

The agent searches multiple sources, cross-references information, and synthesizes a comprehensive report. Context accumulates significantly as search results are added.

  • LLM calls: 8–20
  • Total tokens: 30,000–150,000
  • Cost range: $0.20–$1.00 (GPT-4o)
  • Cost vs single query: 25–125x
  • Example: "Research the competitive landscape for AI cost monitoring tools" → multiple searches → comparison analysis → report

Pattern 3: Coding agent (10–50+ steps)

The agent reads code, plans changes, writes code, runs tests, debugs failures, and iterates. File contents dominate the context, and debugging loops cause unpredictable iteration counts.

  • LLM calls: 15–80
  • Total tokens: 50,000–500,000
  • Cost range: $0.35–$3.50 (GPT-4o)
  • Cost vs single query: 45–440x
  • Example: "Add pagination to the campaigns list page" → read existing code → plan changes → write code → run tests → fix failures → verify

Pattern 4: Multi-agent workflow (10–30 steps across agents)

Multiple specialized agents collaborate — one plans, one researches, one writes, one reviews. Each agent maintains its own context, and inter-agent communication adds overhead.

  • LLM calls: 20–60
  • Total tokens: 80,000–400,000
  • Cost range: $0.55–$2.80 (GPT-4o)
  • Cost vs single query: 70–350x
  • Example: CrewAI workflow with researcher, writer, and editor agents producing a market analysis document

Pattern 5: Autonomous deep research (20–100+ steps)

The agent conducts open-ended research with minimal human guidance, following leads across multiple domains, evaluating source credibility, and producing a comprehensive report with citations.

  • LLM calls: 30–150
  • Total tokens: 200,000–1,500,000
  • Cost range: $1.40–$10.50 (GPT-4o)
  • Cost vs single query: 175–1,300x
  • Example: OpenAI's Deep Research or Perplexity's Pro Search conducting a comprehensive literature review

The fundamental insight is that agentic cost scales with autonomy and complexity. More autonomous agents with broader tool access and less human guidance consume more tokens because they make more decisions, encounter more uncertainty, and explore more paths.

Controlling Agent Costs

Unconstrained agents are a cost management nightmare. Implementing guardrails at multiple levels is essential for making agentic AI economically viable in production:

1. Step limits. Set a maximum number of steps (LLM calls) per agent session. If the agent has not completed its task within the limit, it should return a partial result with an explanation of what remains. Reasonable defaults: 5 steps for simple tool-augmented queries, 15 for research tasks, 30 for coding tasks, 50 for deep research. Implement this as a hard cutoff that the agent cannot override.

const MAX_AGENT_STEPS = 30
let stepCount = 0
let result = null

// Hard cutoff: the loop itself enforces the limit, so the agent cannot override it
while (stepCount < MAX_AGENT_STEPS) {
  result = await agent.step(context)
  stepCount++
  if (result.complete) break
}

if (!result?.complete) {
  // Limit reached: surface whatever partial work the agent produced
  return { partial: true, message: "Step limit reached", result: agent.partialResult() }
}

return result

2. Token budgets per session. Set a maximum token budget for each agent session. Track cumulative tokens (input + output) across all LLM calls in the session, and terminate the session if the budget is exceeded. This is more precise than step limits because it accounts for varying step sizes. Reasonable defaults: 10,000 tokens for simple tasks, 50,000 for research, 200,000 for coding.
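A session-level token budget might be enforced like this. The `agent.step` call and its `usage` shape are assumptions; adapt the field names to your framework:

```javascript
// Terminate the session once cumulative tokens (input + output) cross the budget
const TOKEN_BUDGET = 100000

async function runWithBudget(agent, context) {
  let tokensUsed = 0
  while (true) {
    const result = await agent.step(context)
    tokensUsed += result.usage.inputTokens + result.usage.outputTokens
    if (result.complete) return { complete: true, result, tokensUsed }
    if (tokensUsed >= TOKEN_BUDGET) {
      // More precise than a step limit: unusually large steps burn the budget faster
      return { partial: true, message: "Token budget exhausted", tokensUsed }
    }
  }
}
```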

3. Model routing per step. Not every step in an agent workflow requires the most expensive model. Use an economy model (GPT-4o mini, Gemini Flash) for routine steps like tool selection, parameter formatting, and simple classification. Reserve the expensive model (GPT-4o, Claude Sonnet) for complex reasoning, code generation, and synthesis. This per-step routing can reduce total agent costs by 40–60% without meaningfully impacting output quality.

Agent Step Type | Recommended Model | Typical Token Cost
Tool selection / routing | GPT-4o mini / Gemini Flash | $0.0002–$0.001
Parameter generation | GPT-4o mini / Gemini Flash | $0.0003–$0.001
Result summarization | GPT-4o mini / Claude Haiku | $0.001–$0.005
Complex reasoning | GPT-4o / Claude Sonnet | $0.01–$0.05
Code generation | Claude Sonnet / GPT-4o | $0.02–$0.10
Final synthesis | Claude Sonnet / GPT-4o | $0.01–$0.05
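Per-step routing along the lines of the table above can be as simple as a lookup. The model identifiers here are placeholder assumptions; substitute your provider's actual names:

```javascript
// Route routine steps to economy models; reserve capable models for hard steps
const MODEL_FOR_STEP = {
  tool_selection: "gpt-4o-mini",
  parameter_generation: "gpt-4o-mini",
  result_summarization: "gpt-4o-mini",
  complex_reasoning: "gpt-4o",
  code_generation: "claude-sonnet", // placeholder identifier
  final_synthesis: "claude-sonnet",
}

function pickModel(stepType) {
  // Unknown step types fall back to the capable tier to protect quality
  return MODEL_FOR_STEP[stepType] ?? "gpt-4o"
}
```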

4. Context window management. Instead of accumulating the full conversation history, implement aggressive context management: summarize earlier steps into a compact representation, drop tool results that are no longer relevant, and maintain only the most recent 3–5 steps in full detail. This prevents the geometric context growth that drives late-step costs through the roof. A well-implemented context management strategy can reduce total agent token consumption by 50–70% for long sessions.
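One way to sketch this, with assumed message shapes: keep the last few steps verbatim and fold everything older into a single summary message:

```javascript
// Cap context growth: summarize older steps, keep only recent ones in full
function compactContext(history, keepRecent = 4) {
  if (history.length <= keepRecent) return history
  const older = history.slice(0, history.length - keepRecent)
  const recent = history.slice(-keepRecent)
  // In practice the summary would come from a cheap LLM call, not a template
  const summary = { role: "system", content: `Summary of ${older.length} earlier steps` }
  return [summary, ...recent]
}
```

Run after every step, this bounds per-call input at roughly the summary plus the most recent steps, regardless of session length.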

5. Runaway detection. Monitor for patterns that indicate an agent is stuck in a loop: repeated tool calls with the same parameters, oscillating between two approaches, or context growing without meaningful progress. Automatically terminate or escalate sessions that match these patterns. CostHawk's per-session monitoring can detect runaway patterns and trigger alerts within minutes, before a single session consumes hundreds of dollars.

Monitoring Agent Spending

Agent spending requires a fundamentally different monitoring approach than standard API call monitoring. While standard monitoring tracks per-request costs, agent monitoring must track per-session costs across multiple requests and detect patterns that indicate waste or runaway behavior:

Per-session cost tracking: Every agent session should be tagged with a unique session ID that links all LLM calls within that session. This enables per-session cost aggregation — showing that "Session ABC consumed 45,000 tokens and cost $0.32" rather than just showing individual API calls. Without session-level tracking, it is impossible to identify which agent runs are expensive and why. CostHawk supports session tagging through its wrapped key metadata, allowing you to track agent costs at the session, user, and task-type level.

Cost distribution analysis: In a typical agent deployment, cost follows a heavy-tailed distribution: 80% of sessions complete cheaply, 15% are moderately expensive, and 5% are extremely expensive (runaway or complex sessions). The top 5% of sessions often account for 40–60% of total agent spend. Identifying and addressing this tail — through better prompting, step limits, or task decomposition — is the highest-leverage optimization for agent costs.
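Measuring that tail is straightforward once sessions are tracked; a sketch of the computation:

```javascript
// Share of total spend attributable to the most expensive fraction of sessions
function topShare(sessionCosts, fraction = 0.05) {
  const sorted = [...sessionCosts].sort((a, b) => b - a)
  const k = Math.max(1, Math.round(sorted.length * fraction))
  const top = sorted.slice(0, k).reduce((sum, c) => sum + c, 0)
  const total = sorted.reduce((sum, c) => sum + c, 0)
  return top / total
}

// e.g. one $10 runaway among nine $1 sessions: the top 5% is over half the spend
```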

Per-user agent budgets: For developer tools (coding agents) and internal tools (research agents), set per-user daily or monthly budgets. When a user approaches their budget, notify them. When they exceed it, require manager approval for continued usage. This prevents individual users from consuming disproportionate resources and creates accountability for agent spending.

Real-time cost streaming: For long-running agent sessions, display the running cost to the user in real time. Users who can see that their coding agent session has already consumed $2.50 are more likely to intervene if the agent is going down an unproductive path than users who only see the bill at the end of the month. CostHawk's real-time cost API enables this transparency.

Comparative cost analytics: Track agent costs over time and across task types to establish baselines. If coding agent sessions average $0.45 but the average has crept up to $0.65 over the past month, investigate the cause — it might be prompt changes, model updates, or shifts in task complexity. Trend analysis reveals slow-moving cost increases that per-session monitoring misses.

Alerting thresholds: Set multi-level alerts for agent spending:

  • Session alert: Notify when a single session exceeds $1 (or your configured threshold)
  • User alert: Notify when a user's daily agent spend exceeds $20
  • System alert: Notify when total hourly agent spend exceeds 2x the baseline
  • Emergency circuit breaker: Automatically pause agent execution when system-wide spend exceeds $X in a rolling 1-hour window

These layered alerts provide defense in depth against cost overruns, from individual runaway sessions to system-wide anomalies.
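The first three alert layers can be expressed as a small policy check; the thresholds and field names are illustrative:

```javascript
// Evaluate layered spend thresholds against current figures
function checkAlerts({ sessionCost, userDailySpend, hourlySpend, baselineHourlySpend }) {
  const alerts = []
  if (sessionCost > 1.0) alerts.push("session")                    // one expensive session
  if (userDailySpend > 20) alerts.push("user")                     // one user over budget
  if (hourlySpend > 2 * baselineHourlySpend) alerts.push("system") // system-wide anomaly
  return alerts
}
```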

Agentic AI and CostHawk

CostHawk provides purpose-built features for monitoring and controlling agentic AI costs, addressing the unique challenges that agent workloads present:

Session-level cost tracking: CostHawk groups API calls by session ID, computing per-session total tokens, total cost, step count, and duration. The session view shows the full trajectory of an agent run — how context grew over time, which steps were most expensive, where retries occurred, and what the final cost was. This visibility is essential for optimizing agent prompts and architectures because it reveals the internal cost structure of each session.

Claude Code and Codex integration: CostHawk includes dedicated integrations for Claude Code and OpenAI Codex — two of the most popular and expensive agentic AI tools. The costhawk_sync_claude_code_usage and costhawk_sync_codex_usage MCP tools automatically pull session-level cost data from these tools, providing per-developer, per-session, and per-project cost breakdowns. This is critical for teams where coding agent spend can exceed $1,000/month per developer.

Per-session token budgets: CostHawk's wrapped keys support per-session token budgets. When creating an agent session, specify a maximum token budget through CostHawk's API. If the agent's cumulative token consumption exceeds the budget, subsequent API calls through the wrapped key are rejected, forcing the agent to terminate gracefully. This provides a hard cost ceiling on any individual agent run, preventing runaway sessions from consuming unlimited resources.

Runaway detection: CostHawk's anomaly detection system monitors agent sessions in real time and flags sessions that exhibit runaway patterns: token consumption growing faster than expected, step counts exceeding historical baselines, or cost accumulating rapidly without corresponding output quality. When a potential runaway is detected, CostHawk can alert the user, notify the team, or automatically terminate the session — depending on your configured response policy.

Agent ROI analysis: For teams evaluating whether agentic AI delivers sufficient value, CostHawk provides per-task-type cost breakdowns that feed into ROI calculations. If your coding agent costs $0.45/session on average and developers run 20 sessions/day, the daily cost is $9/developer, or roughly $270/month. If each session saves development time worth about $1.90 (say, a minute and a half at $1.25/minute), the ROI is roughly 4.2x — clearly positive. But if sessions average $1.50 due to runaway costs, the ROI drops to about 1.25x — marginally positive and worth optimizing. CostHawk provides the cost data needed to make these calculations with real numbers rather than estimates.

Optimization recommendations: Based on your agent usage patterns, CostHawk recommends specific optimizations: implementing context summarization for sessions that exceed 15 steps, routing tool-selection steps to economy models, setting step limits based on your task-type cost distributions, or switching from a multi-agent to a single-agent architecture for tasks where the coordination overhead exceeds the specialization benefit. Each recommendation includes an estimated monthly savings figure based on your historical usage data.

FAQ

Frequently Asked Questions

Why are AI agents so much more expensive than regular API calls?
AI agents are expensive because of three compounding factors. First, multi-step execution: an agent makes 5–50+ LLM calls per session instead of 1, multiplying the base cost. Second, context accumulation: each subsequent LLM call includes the full context from all previous steps, so later steps are individually more expensive than earlier ones — step 20 might consume 50,000 input tokens, while step 1 consumed 1,200. The total token consumption grows quadratically, not linearly, with step count. Third, tool overhead: tool invocations return results (web page content, database rows, file contents) that become part of the context, accelerating context growth. A 15-step agent session typically consumes 50,000–150,000 tokens total, compared to 800 tokens for a single-turn query — a 60–190x cost multiplier. Additionally, agents have higher retry rates because multi-step plans are more prone to failure than single queries, and each retry consumes the full accumulated context again.
How can I set a cost limit on an AI agent session?
Implement cost limits at three levels for defense in depth. Step limits: Set a maximum number of LLM calls per session (e.g., 30 steps). This is the simplest guardrail and prevents open-ended iteration. Token budgets: Track cumulative token usage across all LLM calls in a session and terminate when a budget is reached (e.g., 100,000 tokens). This is more precise than step limits because it accounts for varying step sizes. CostHawk's wrapped keys support per-session token budgets enforced at the API proxy level, so even a misconfigured agent cannot exceed the budget. Dollar budgets: Convert token budgets to dollar amounts using current model pricing (e.g., $1.00 per session). Dollar budgets are easier for non-technical stakeholders to understand and approve. When a limit is reached, the agent should return whatever partial result it has generated along with a summary of what remains. This ensures the user always gets value from the tokens already consumed.
What is context accumulation and why does it drive up agent costs?
Context accumulation is the pattern where each step of an agent session adds to the conversation history that must be included in subsequent LLM calls. In step 1, the input might be 1,200 tokens (system prompt + goal). By step 5, the input might be 8,000 tokens (system prompt + goal + 4 previous steps' reasoning and tool results). By step 15, it might be 35,000 tokens. Since you pay for input tokens on every LLM call, the cost of later steps is dramatically higher than earlier steps. The total token consumption follows a roughly quadratic curve: doubling the number of steps more than doubles the total tokens. This is why a 30-step session might cost 10x more than a 10-step session, not 3x. The mitigation is context management: summarize older steps into a compact representation ("Steps 1–10 summary: investigated 3 approaches, found that approach B works for file parsing") rather than including the full conversation history. Well-implemented context summarization can reduce total session tokens by 50–70% for long sessions.
How does model routing reduce agent costs?
Model routing assigns different models to different steps within an agent session based on the complexity of each step. In a typical agent workflow, 60–70% of steps are routine (selecting which tool to call, formatting parameters, parsing tool results, making simple decisions) and can be handled by an economy model like GPT-4o mini ($0.15/$0.60) or Gemini Flash ($0.10/$0.40). Only 30–40% of steps require complex reasoning or generation that justifies a mid-tier model like GPT-4o ($2.50/$10.00) or Claude Sonnet ($3.00/$15.00). By routing routine steps to economy models, you can reduce average per-step cost by 40–60%. For a 20-step agent session, this might reduce total cost from $0.55 to $0.25 — a 55% savings. Implementation requires classifying each step type and maintaining separate API client configurations, but most agent frameworks (LangChain, CrewAI) support per-step model configuration natively.
How do I monitor per-developer coding agent costs?
Per-developer monitoring requires three components. First, assign each developer a unique identifier (API key, user ID, or tag) that is included with all their agent API calls. CostHawk's wrapped keys support per-user tagging that automatically attributes costs to individual developers. Second, aggregate costs by developer at daily, weekly, and monthly granularity. CostHawk's dashboard shows per-user spend with drill-down to individual sessions, making it easy to identify which developers are consuming the most agent budget and whether their usage patterns are typical or anomalous. Third, set per-developer budgets with alerts. If the team average is $15/day per developer, set an alert at $25/day to flag outliers. CostHawk's Claude Code and Codex integrations automate this entire workflow by syncing session-level usage data directly from these tools, providing per-developer cost tracking without requiring any changes to the developer's workflow.
What is a runaway agent and how do I prevent it?
A runaway agent is one that enters an unproductive loop — retrying failed approaches, generating output that does not make progress toward the goal, or accumulating context without converging on a solution. Runaway sessions are costly because they consume tokens at an accelerating rate (due to context accumulation) while producing no useful output. Common causes include: ambiguous goals that the agent cannot satisfy, tool errors that the agent cannot recover from, and edge cases that cause the agent to oscillate between two approaches. Prevention strategies include: hard step limits (terminate after N steps regardless of progress), token budget ceilings (terminate when cumulative tokens exceed a threshold), progress checks (after every 5 steps, evaluate whether meaningful progress has been made and terminate if not), and pattern detection (detect repeated tool calls with identical parameters or oscillating behavior and intervene). CostHawk's runaway detection uses statistical analysis of session cost curves to flag sessions that are consuming tokens faster than their progress would justify, enabling early intervention.
How much does a typical Claude Code or Codex session cost?
Costs vary enormously depending on task complexity, but typical ranges based on production data are: Simple tasks (fix a typo, add a log statement, rename a variable): 5,000–15,000 tokens, costing $0.04–$0.15 on Claude Sonnet or $0.03–$0.10 on GPT-4o. Medium tasks (implement a new function, fix a bug, add a test): 20,000–80,000 tokens, costing $0.15–$0.65 on Claude Sonnet or $0.10–$0.45 on GPT-4o. Complex tasks (refactor a module, implement a new feature, debug a complex issue): 80,000–300,000 tokens, costing $0.65–$2.50 on Claude Sonnet or $0.45–$1.75 on GPT-4o. Very complex/runaway sessions: 300,000–1,000,000+ tokens, costing $2.50–$10.00+. The median session for an experienced developer is typically in the medium range ($0.20–$0.50), with an average daily cost of $5–$15 across 15–25 sessions. CostHawk's Claude Code and Codex integrations provide exact per-session cost data, enabling teams to track and optimize coding agent spend with precision.
How does CostHawk help manage agentic AI costs?
CostHawk addresses agentic AI costs at four levels. Visibility: Per-session cost tracking groups all API calls within an agent session and shows total tokens, total cost, step count, and cost trajectory — making the internal economics of each agent run transparent. Prevention: Per-session token budgets enforced through CostHawk wrapped keys create hard ceilings on individual agent runs, preventing runaways from consuming unlimited resources. Detection: Real-time anomaly detection monitors active sessions and flags cost patterns that indicate runaway behavior (rapidly growing context, repeated failed tool calls, cost accumulating faster than historical norms). Alerts can notify the user, the team, or automatically terminate the session. Optimization: Historical cost analytics reveal which task types, users, and agent configurations produce the most efficient outcomes, enabling data-driven decisions about step limits, model routing, and context management strategies. CostHawk's dedicated Claude Code and Codex integrations provide zero-configuration agent cost monitoring for the two most popular coding agent tools.


Put this knowledge to work. Track your AI spend in one place.

CostHawk gives engineering teams real-time visibility into every token, every model, and every dollar across your AI stack.