Glossary · Billing & Pricing · Updated 2026-03-16

Input vs. Output Tokens

The two token directions in every LLM API call, each priced differently. Output tokens cost 3-5x more than input tokens across all major providers.

Definition

What Are Input vs. Output Tokens?

Every large-language-model API call splits token consumption into two categories: input tokens (the prompt, system instructions, and context you send to the model) and output tokens (the response the model generates back to you). Providers price these two directions separately, with output tokens costing 3-5x more per million tokens. For example, GPT-4o charges $2.50 per million input tokens versus $10.00 per million output tokens — a 4x multiplier. Claude 3.5 Sonnet charges $3.00 per million input tokens versus $15.00 per million output tokens — a 5x multiplier. Understanding this asymmetry is essential because most cost-optimization strategies target the more expensive side of the equation: output tokens.
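The per-request arithmetic can be sketched in a few lines. This is a minimal illustration using the GPT-4o and Claude 3.5 Sonnet list prices quoted above; the function and dictionary names are illustrative, not part of any SDK.

```python
# Hedged sketch: per-request cost from the list prices above (USD per 1M tokens).
PRICES = {
    "gpt-4o": {"input": 2.50, "output": 10.00},
    "claude-3.5-sonnet": {"input": 3.00, "output": 15.00},
}

def request_cost(model: str, input_tokens: int, output_tokens: int) -> float:
    """Cost in USD for one API call, split by token direction."""
    p = PRICES[model]
    return (input_tokens * p["input"] + output_tokens * p["output"]) / 1_000_000

# 10,000 input tokens + 1,000 output tokens on GPT-4o:
# input side:  10,000 * $2.50 / 1M = $0.025
# output side:  1,000 * $10.00 / 1M = $0.010
cost = request_cost("gpt-4o", 10_000, 1_000)
```

Note that the output side contributes a disproportionate share of the total relative to its token count, which is the asymmetry the rest of this page explores.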

Impact

Why It Matters for AI Costs

The input/output split is the single most important structural detail in AI API pricing. Teams that ignore it routinely misestimate their bills by 2-3x. A request that generates a 4,000-token response costs as much in output alone as sending 16,000 tokens of input on GPT-4o. CostHawk breaks down every request into input and output costs so you can see exactly where your money goes and target optimizations at the expensive side.

Understanding the Asymmetry

When you send a prompt to an LLM, the model processes your input tokens in parallel using a single forward pass through the neural network. This is computationally efficient — the model reads all input tokens at once and builds an internal representation. Output generation is fundamentally different. The model produces tokens one at a time in a sequential, autoregressive process: each new token depends on every token that came before it. This means generating 1,000 output tokens requires roughly 1,000 sequential forward passes, each attending to a slightly longer context than the last.

This computational asymmetry is why providers charge more for output. The GPU time required to generate a token is significantly higher than the GPU time required to process an input token. On most transformer architectures, the ratio of compute per output token to compute per input token ranges from 3x to 6x, depending on model size, batch efficiency, and hardware utilization. Providers pass this cost differential directly to customers through split pricing.

The asymmetry also has implications for capacity planning. Output-heavy workloads consume more GPU-seconds per request, which means providers can serve fewer concurrent requests. This reduced throughput is another factor in the higher output price.

Provider Pricing Comparison

The following table shows current input and output token pricing for major models as of March 2026. All prices are per 1 million tokens.

| Provider | Model | Input (per 1M) | Output (per 1M) | Ratio |
| --- | --- | --- | --- | --- |
| OpenAI | GPT-4o | $2.50 | $10.00 | 4.0x |
| OpenAI | GPT-4o-mini | $0.15 | $0.60 | 4.0x |
| OpenAI | o1 | $15.00 | $60.00 | 4.0x |
| OpenAI | o3-mini | $1.10 | $4.40 | 4.0x |
| Anthropic | Claude 3.5 Sonnet | $3.00 | $15.00 | 5.0x |
| Anthropic | Claude 3.5 Haiku | $0.80 | $4.00 | 5.0x |
| Anthropic | Claude 3 Opus | $15.00 | $75.00 | 5.0x |
| Google | Gemini 1.5 Pro | $1.25 | $5.00 | 4.0x |
| Google | Gemini 1.5 Flash | $0.075 | $0.30 | 4.0x |
| Google | Gemini 2.0 Flash | $0.10 | $0.40 | 4.0x |

Notice that every provider maintains a consistent multiplier across their model lineup: OpenAI and Google use 4x, Anthropic uses 5x. This consistency means the optimization math is the same regardless of which model tier you use — reducing output tokens is always 4-5x more valuable than reducing input tokens by the same amount.
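The multiplier consistency claimed above can be checked directly from the table's prices. This sketch encodes the listed (input, output) pairs and computes each provider's set of output/input ratios; the dictionary layout is illustrative.

```python
# Sketch: the March-2026 list prices from the table above, used to confirm
# that each provider applies one output multiplier across its lineup.
PRICING = {
    "OpenAI": {"gpt-4o": (2.50, 10.00), "gpt-4o-mini": (0.15, 0.60),
               "o1": (15.00, 60.00), "o3-mini": (1.10, 4.40)},
    "Anthropic": {"claude-3.5-sonnet": (3.00, 15.00),
                  "claude-3.5-haiku": (0.80, 4.00),
                  "claude-3-opus": (15.00, 75.00)},
    "Google": {"gemini-1.5-pro": (1.25, 5.00),
               "gemini-1.5-flash": (0.075, 0.30),
               "gemini-2.0-flash": (0.10, 0.40)},
}

def output_multipliers(provider: str) -> set:
    """Distinct output/input price ratios across a provider's models."""
    return {round(out / inp, 2) for inp, out in PRICING[provider].values()}
```

Each provider collapses to a single ratio: 4.0x for OpenAI and Google, 5.0x for Anthropic.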

Why Output Tokens Cost More

The cost difference comes down to three factors rooted in how transformer models work:

  • Autoregressive generation: Each output token requires a separate forward pass through the model. A 500-token response requires 500 sequential inference steps, while a 500-token input is processed in a single parallel pass. The sequential nature of output generation prevents the kind of batching optimizations that make input processing efficient.
  • KV-cache memory: During output generation, the model maintains a key-value cache that grows with each token produced. This cache consumes GPU memory proportional to the sequence length, reducing the number of concurrent requests a GPU can handle. More memory per request means fewer requests per GPU, which increases the cost per token.
  • Speculative decoding overhead: Many providers use speculative decoding to accelerate output generation, where a smaller draft model proposes tokens that the main model verifies. While this reduces latency, the verification step adds compute per output token that has no counterpart in input processing.

These three factors combine to make output generation 3-6x more expensive in raw compute cost per token. Providers set their pricing multipliers at 4-5x to reflect this reality while maintaining some margin.

Optimization Strategies for Output Reduction

Since output tokens cost 4-5x more than input tokens, reducing output length has an outsized impact on your bill. Here are proven strategies:

  • Set explicit max_tokens: Always set a max_tokens limit appropriate to your use case. A classification task needs 10-50 tokens, not the default 4,096. Setting max_tokens: 50 on a classification endpoint prevents the model from generating verbose explanations you do not need.
  • Request structured output: Use JSON mode or structured output schemas to force concise, machine-readable responses. A JSON object with three fields is typically 50-100 tokens. An equivalent natural-language paragraph is 200-400 tokens. That is a 2-4x reduction in output tokens.
  • Use response formatting instructions: Explicitly tell the model how to format its response. "Respond with only the category name" produces 2-5 tokens. "Explain your reasoning and then provide the category" produces 200+ tokens. Be precise about what you want.
  • Avoid chain-of-thought when unnecessary: Chain-of-thought prompting ("think step by step") dramatically increases output length — often by 5-10x. Use it for complex reasoning tasks where accuracy matters, but skip it for straightforward extraction and classification tasks.
  • Post-process with smaller models: If you need a long response for internal processing but a short summary for the user, generate the full response with a capable model and then summarize it with a cheaper model like GPT-4o-mini.

Applying these strategies in combination can reduce output token spend by 40-70% without degrading the quality of results your application delivers to users.

Measuring Input vs. Output Ratios

Your input-to-output ratio is a critical metric for cost forecasting and optimization prioritization. Calculate it as: total_input_tokens / total_output_tokens. A ratio of 10:1 means you send 10 input tokens for every 1 output token — common for classification and extraction tasks. A ratio of 1:3 means the model generates 3 tokens for every 1 you send — typical for creative writing and code generation.

Typical ratios by use case:

| Use Case | Input:Output Ratio | Output Share of Cost |
| --- | --- | --- |
| Classification | 50:1 to 100:1 | 15-30% |
| Entity extraction | 20:1 to 50:1 | 20-40% |
| RAG Q&A | 5:1 to 15:1 | 40-65% |
| Summarization | 3:1 to 10:1 | 35-60% |
| Code generation | 1:1 to 1:5 | 65-90% |
| Creative writing | 1:2 to 1:10 | 75-95% |
| Agent chains | 2:1 to 1:2 | 50-75% |

Use cases with a low input-to-output ratio (meaning output-heavy) benefit most from output optimization strategies. Use cases with a high ratio (input-heavy) benefit more from prompt compression and caching strategies. CostHawk calculates this ratio automatically on the usage dashboard so you can prioritize optimizations by impact.
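The ratio formula above, plus output's share of total cost, can be computed in a few lines. This is a minimal sketch; the function names are illustrative.

```python
# Sketch: input-to-output ratio and the output share of spend.
def io_ratio(total_input_tokens: int, total_output_tokens: int) -> float:
    """Input-to-output ratio: tokens sent per token generated."""
    return total_input_tokens / total_output_tokens

def output_cost_share(input_tokens: int, output_tokens: int,
                      input_price: float, output_price: float) -> float:
    """Fraction of total spend attributable to output tokens."""
    input_cost = input_tokens * input_price
    output_cost = output_tokens * output_price
    return output_cost / (input_cost + output_cost)

# A 5:1 RAG-style workload on GPT-4o ($2.50 in / $10.00 out):
# output is 1/6 of the tokens but roughly 44% of the cost.
share = output_cost_share(50_000, 10_000, 2.50, 10.00)
```

This is why a workload can look input-heavy by token count yet be output-heavy by cost: the 4x price multiplier shifts the spend toward the generated side.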

Tracking Token Direction with CostHawk

CostHawk provides granular visibility into input and output token consumption across every request, model, and project. The dashboard breaks down your spend into input cost and output cost columns, letting you instantly see which direction is driving your bill.

Key CostHawk features for token direction analysis:

  • Per-request breakdown: Every logged API call shows input token count, output token count, input cost, and output cost as separate fields. Sort by output cost to find your most expensive responses.
  • Aggregate ratio tracking: The usage analytics page displays your organization-wide input-to-output ratio over time. A trending shift toward more output-heavy usage signals growing costs that may need attention.
  • Model-level comparison: Compare input/output splits across models to identify routing opportunities. If your GPT-4o requests are 80% output cost, routing some to GPT-4o-mini could save 90% on those specific calls.
  • Anomaly detection on output spikes: CostHawk flags requests where output token counts are unusually high — a sign of verbose responses, infinite-loop agent behavior, or missing max_tokens constraints.

By treating input and output tokens as separate cost centers, CostHawk helps teams apply the right optimization strategy to the right side of their token spend.
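The output-spike check described above can be approximated in application code as well. This is a hedged sketch, not CostHawk's implementation: the request fields and the 3x-median threshold are illustrative assumptions.

```python
# Sketch: flag requests whose output token count is far above the recent norm.
# Field names and the 3x-median threshold are illustrative assumptions.
from statistics import median

def flag_output_spikes(requests: list, factor: float = 3.0) -> list:
    """Return requests whose output_tokens exceed factor times the median."""
    baseline = median(r["output_tokens"] for r in requests)
    return [r for r in requests if r["output_tokens"] > factor * baseline]
```

A spike flagged this way usually traces back to a missing max_tokens limit, a verbose prompt, or an agent loop that failed to terminate.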

FAQ

Frequently Asked Questions

Why do output tokens cost more than input tokens?
Output tokens cost more because of the fundamental difference in how they are processed. Input tokens are processed in parallel during a single forward pass — the model reads your entire prompt at once. Output tokens are generated sequentially, one at a time, in an autoregressive loop where each new token depends on all previous tokens. This means generating 1,000 output tokens requires approximately 1,000 sequential inference steps, each consuming GPU compute and expanding the KV-cache in memory. The sequential nature prevents efficient batching, and the growing memory footprint reduces the number of concurrent requests a GPU can serve. Providers price this 3-6x computational difference as a 4-5x price multiplier. OpenAI and Google charge 4x more for output tokens; Anthropic charges 5x more. This ratio is consistent across all model tiers within each provider.
What is a typical input-to-output token ratio?
It depends heavily on your use case. Classification and extraction tasks typically have ratios of 20:1 to 100:1 — you send a lot of context and get back a short label or structured result. RAG-based question answering usually runs 5:1 to 15:1, with substantial context retrieval and medium-length answers. Code generation and creative writing can invert the ratio to 1:2 or even 1:10, where the model generates far more tokens than you send. Agent chains tend to be roughly balanced, in the 2:1 to 1:2 range, because each step sends accumulated context and receives structured actions. CostHawk tracks your ratio automatically so you can monitor how it shifts over time and prioritize optimizations accordingly.
How can I reduce output tokens without losing quality?
The most effective strategies are: (1) Set explicit max_tokens limits appropriate to each task — a classification endpoint needs 10-50 tokens, not 4,096. (2) Use structured output (JSON mode) to force concise, machine-readable responses instead of verbose natural language. (3) Write precise formatting instructions — "respond with only the category name" versus "explain and then categorize." (4) Skip chain-of-thought prompting for simple tasks where step-by-step reasoning adds cost without improving accuracy. (5) Use a two-model pipeline for cases where you need detailed internal processing but concise user-facing output. These strategies combined typically reduce output token spend by 40-70% without degrading result quality.
Does the input/output price ratio vary between providers?
Yes, but the variation is narrow. OpenAI consistently uses a 4x multiplier across their model lineup: GPT-4o at $2.50/$10.00, GPT-4o-mini at $0.15/$0.60, and o1 at $15.00/$60.00. Anthropic uses a 5x multiplier: Claude 3.5 Sonnet at $3.00/$15.00, Claude 3.5 Haiku at $0.80/$4.00. Google uses a 4x multiplier for Gemini models. This means Anthropic's output tokens are proportionally more expensive relative to input, making output optimization even more valuable for teams using Claude models. The key takeaway is that regardless of provider, cutting output tokens is always 4-5x more impactful per token than cutting input tokens.
How do cached tokens affect the input/output price split?
Prompt caching applies only to the input side. Anthropic offers a 90% discount on cached input tokens (so $0.30/MTok instead of $3.00/MTok for Sonnet), while OpenAI offers a 50% discount ($1.25/MTok instead of $2.50/MTok for GPT-4o). Output token pricing is never discounted by caching — the model still must generate each output token through sequential autoregressive decoding regardless of whether the input was cached. This means caching makes the input/output price asymmetry even more extreme. With Anthropic's 90% cache discount, the effective ratio on cached requests becomes 50:1 instead of 5:1. This further reinforces that output optimization should be your primary cost-reduction focus, especially if you are already benefiting from prompt caching on the input side.
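The cache arithmetic above reduces to one formula. This is a minimal sketch using the discount figures quoted in the answer; the function name is illustrative.

```python
# Sketch: caching discounts only the input side, so the effective
# output/input price gap widens on cached requests.
def effective_output_multiplier(input_price: float, output_price: float,
                                cache_discount: float) -> float:
    """Output price divided by the discounted (cached) input price."""
    cached_input_price = input_price * (1 - cache_discount)
    return output_price / cached_input_price

# Claude 3.5 Sonnet: $15.00 / ($3.00 * 0.10) = 50x on fully cached input
# GPT-4o:            $10.00 / ($2.50 * 0.50) = 8x
anthropic_ratio = effective_output_multiplier(3.00, 15.00, 0.90)
```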
Do reasoning models like o1 have different input/output economics?
Reasoning models maintain the same 4x price multiplier ($15.00 input vs $60.00 output for o1), but they generate substantially more output tokens per request because of internal chain-of-thought reasoning. A request that might produce 500 output tokens on GPT-4o can produce 5,000-20,000 output tokens on o1 as the model works through its reasoning steps. This means the effective cost per request on reasoning models is heavily dominated by output — often 90-95% of the total cost. Setting max_completion_tokens is critical for reasoning models to prevent runaway generation. CostHawk flags reasoning model requests that exceed configurable output thresholds so you can identify tasks that may not need the full reasoning capability.
Can I see input vs. output costs per feature in CostHawk?
Yes. CostHawk's attribution system tags every API request with project, feature, and environment labels via wrapped keys or MCP metadata. The usage dashboard lets you filter by any attribution dimension and then view the input/output cost breakdown for that slice. For example, you can see that your RAG pipeline spends 70% on input tokens (context retrieval is input-heavy) while your code generation feature spends 85% on output tokens (generation is output-heavy). This per-feature view helps you apply the right optimization strategy to each part of your application rather than optimizing blindly across the board.
How does batch API pricing change the input/output calculation?
OpenAI's Batch API offers a 50% discount on both input and output tokens, so the 4x ratio between input and output is preserved. GPT-4o batch pricing is $1.25 input / $5.00 output per million tokens instead of $2.50 / $10.00. The discount is the same percentage on both sides, which means the optimization math stays the same: cutting output tokens is still 4x more valuable than cutting input tokens. Anthropic's Message Batches API also offers a 50% discount on both directions. The key benefit of batch processing is the across-the-board cost reduction, not a change in the input/output dynamic. Use batch APIs for non-real-time workloads like nightly summarization, bulk classification, and content moderation queues.
What tools can I use to count tokens before sending a request?
For OpenAI models, use the tiktoken library (Python) or the tiktoken npm package (JavaScript/TypeScript) to count tokens before sending a request. Call tiktoken.encoding_for_model('gpt-4o') to get the correct tokenizer, then len(encoding.encode(text)) in Python (or encoding.encode(text).length in JavaScript) for the count. For Anthropic models, use the anthropic SDK's built-in token counting endpoint or the anthropic-tokenizer package. Google provides token counting in the Gemini SDK via model.count_tokens(). Pre-counting tokens lets you estimate costs before incurring them, enforce token budgets at the application layer, and avoid surprises. CostHawk also provides estimated costs in the MCP server's response metadata so you can log projected vs. actual costs.
Should I optimize input tokens or output tokens first?
Start with output tokens. Because output tokens cost 4-5x more per token, every output token you eliminate saves 4-5x more than an input token. Check your input-to-output ratio: if output accounts for more than 50% of your total token cost (common for generation and agent workloads), output optimization will yield the biggest savings. Strategies include setting strict max_tokens limits, using structured JSON output, and removing chain-of-thought from tasks that do not need it. Once you have optimized output, move to input optimization: prompt compression, caching, and RAG chunking strategies. CostHawk's dashboard makes it easy to see which side of the split is driving your costs so you can prioritize accurately.


Put this knowledge to work. Track your AI spend in one place.

CostHawk gives engineering teams real-time visibility into every token, every model, and every dollar across your AI stack.