Stop paying for context your model doesn’t need

Your prompts carry ~10x the tokens the model actually reads. Compresr drops the redundant 90% off the bill. Same or better answers. Hit our API, or deploy on-prem.

See how compression works on your own file — 60 seconds
Paste into Claude Code
Use compresr to show me live cost savings on my own file.

1. pip install compresr
2. Introspect the SDK to discover the API — don't guess.
3. Ask me for COMPRESR_API_KEY (open https://compresr.ai/signup if I don't have one — $10 free, no card).
4. Ask me for (a) a path to a long document (PDF/.md/.txt) and (b) a question about it.
5. Compress the document with the question, then print a receipt: tokens in/out, compression ratio, GPT-5.2 input cost for the full vs. the compressed context at $1.75/M input tokens, and % saved.
6. Ask GPT-5.2 the question against the compressed context and print the answer.

Don't skip the cost receipt — that's the point.
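The receipt in step 5 is plain arithmetic. A minimal sketch, assuming the $1.75/M input rate from the steps above (the token counts below are the Boeing 10-K example figures; real values come from the compression response):

```python
# Cost receipt for a compressed query, priced at the $1.75/M
# input-token rate from step 5. Token counts are example figures.
RATE_PER_M = 1.75  # USD per million input tokens

def receipt(tokens_in: int, tokens_out: int) -> dict:
    full = tokens_in * RATE_PER_M / 1e6
    compressed = tokens_out * RATE_PER_M / 1e6
    return {
        "tokens_in": tokens_in,
        "tokens_out": tokens_out,
        "ratio": round(tokens_in / tokens_out, 1),
        "cost_full": round(full, 4),
        "cost_compressed": round(compressed, 4),
        "pct_saved": round(100 * (1 - compressed / full), 1),
    }

r = receipt(112_552, 498)
print(f"{r['ratio']}x fewer input tokens, {r['pct_saved']}% saved on input cost")
```

Note this receipt covers input-token cost only; end-to-end per-query cost also includes output tokens.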
Works in Claude Code, Cursor, or any agent harness.

How it works

We keep the signal and drop the noise.

Your raw context
Query: Production rate changes Boeing is forecasting for FY2023?
112,552 tokens · Boeing 10-K · $0.263/query

Compresr

Keep the tokens that matter to your query.

Compression: 226×

Compressed
revenue · $77.8B · 2023
498 tokens, same answer · $0.037/query

Tokens: 112,552 → 498 (226× fewer)
Cost: $0.263 → $0.037 (86% cheaper)
Latency: 18s → 13.7s (24% faster)
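The three headline figures follow directly from the raw per-query numbers above; a quick check:

```python
# Derive the headline figures from the raw per-query numbers.
tokens_full, tokens_small = 112_552, 498
cost_full, cost_small = 0.263, 0.037
latency_full, latency_small = 18.0, 13.7

fewer = tokens_full / tokens_small                 # ~226x fewer tokens
cheaper = 100 * (1 - cost_small / cost_full)       # ~86% cheaper
faster = 100 * (1 - latency_small / latency_full)  # ~24% faster
print(round(fewer), f"{cheaper:.0f}%", f"{faster:.0f}%")  # → 226 86% 24%
```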

What most teams are losing

Stop overpaying.

If you’re paying full price for your tokens, you’re leaving real money on the table.

~90% bill cut (FinanceBench | Opus 4.7)
10× avg. compression (across 141 FinanceBench questions)
+8pp accuracy uplift (on the Pax Historia benchmark)
Today: what most teams are doing

Trimming / Truncation

  • Cuts off the tail — the answer was often in what you dropped.
  • Accuracy collapses on long docs.

Summarization

  • Lossy rewrite — nuance and exact wording are gone.
  • Costs extra LLM calls and latency for a worse context.

Question-agnostic compression

  • Compresses blindly — keeps irrelevant tokens, drops important ones.
  • Rarely gets past 5× without tanking accuracy.
With Compresr: better accuracy at a smaller price

Question-aware compression.

Feed us the query and the context. We return only the tokens that actually matter for the answer. You pay less, the LLM responds faster, and answers get sharper.
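Compresr's actual model is not public. As a toy illustration of the idea only (not the product's method), a question-aware filter scores each piece of context against the query and keeps only the best-matching pieces:

```python
# Toy illustration of question-aware compression: score each sentence
# by word overlap with the query and keep only the top scorers.
# This is NOT Compresr's method, just the shape of the idea.
def compress(context: str, query: str, keep: int = 2) -> str:
    q = set(query.lower().split())
    sentences = [s.strip() for s in context.split(".") if s.strip()]
    scored = sorted(sentences, key=lambda s: -len(q & set(s.lower().split())))
    kept = scored[:keep]
    # Preserve original order so the compressed context still reads coherently.
    return ". ".join(s for s in sentences if s in kept) + "."

doc = ("The company was founded in 1916. Revenue reached $77B in 2023. "
       "Production rates are forecast to rise in FY2023. Offices moved twice.")
query = "What production rate changes are forecast for FY2023"
print(compress(doc, query, keep=1))
# → Production rates are forecast to rise in FY2023.
```

A question-agnostic compressor has no `query` input at all, which is why it keeps irrelevant tokens and drops important ones.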

Per query: $0.263 → $0.037 (GPT-5.2 + latte_v1)
Tokens in: 112,552 → 498 (226× fewer, same answer)

  • Question-aware: we compress for the task.
  • Accuracy preserved (and often improved).
  • Our API or on-prem — your call.

Independent benchmark

FinanceBench.

Baseline (GPT-5.2) vs. latte_v1 API + GPT-5.2
Compression: 1× vs. 10×
Context: ~106K tokens vs. ~10.5K tokens
Accuracy: 72.3% vs. 74.5%
Savings: 76% cheaper with latte_v1

FinanceBench · 141 questions over 79 SEC filings · Full filings up to 230K tokens long

Two ways to deploy

Pick the one that fits your stack.

Hosted SDK

Drop-in SDK. One API key.

Install, grab a key, compress any prompt or document before it hits your LLM. Pay per million tokens — no surprise bills.

  • $10 in free credits on sign-up — no credit card required
  • TypeScript & Python clients
  • Question-aware compression
  • Transparent per-million-token pricing
Get your free credits

Sign up, get $10 of compression free — no card needed.
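To ballpark what per-million-token pricing means for a workload, a rough estimator; every number below is a placeholder assumption (including the compression fee), not published Compresr pricing:

```python
# Rough monthly-savings estimator for prompt compression.
# All rates and volumes below are placeholder assumptions.
def monthly_savings(queries: int, tokens_per_query: int,
                    llm_rate_per_m: float, compression: float,
                    compresr_rate_per_m: float) -> float:
    full = queries * tokens_per_query * llm_rate_per_m / 1e6
    compressed_llm = full / compression          # LLM bill after compression
    compresr_fee = queries * tokens_per_query * compresr_rate_per_m / 1e6
    return full - compressed_llm - compresr_fee  # net monthly savings, USD

# 100K queries/month, 100K tokens each, $1.75/M LLM input,
# 10x compression, hypothetical $0.10/M compression fee.
print(round(monthly_savings(100_000, 100_000, 1.75, 10, 0.10), 2))  # → 14750.0
```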

On-Prem Deployment

Runs inside your VPC.

Your data never leaves your network. We deploy Compresr to your infrastructure, tune it for your workload, and support you directly.

  • Private deployment in your cloud or data center
  • Custom throughput & latency SLAs
  • Tailored to your business needs
  • Dedicated support
Contact us for on-prem

Enterprise, finance, healthcare, regulated workloads.