Compress Context.
Cut Cost.
Research-backed context compression for LLM agents. Reduce token usage by up to 90% without losing semantic meaning.
Original:
Nasa has said it hopes to send astronauts on a ten-day trip around the Moon as soon as February. The US space agency had previously committed to launching no later than the end of April but said it aims to bring the mission forward. It's been 50 years since any country has flown a crewed lunar mission. Nasa will send four astronauts there and back to test systems.
Compressed:
NASA hopes ten day trip around Moon soon February send four astronauts test systems after 50 years no crewed lunar mission
Universal Compression
Start saving money with our SOTA cmprsr-v1 model, which excels across domains.
- Compress the context once, then re-use it across multiple LLM queries (see the sketch after this list).
- Use our SDK to integrate into your workflow in minutes.
- Check out diverse benchmarks in our paper.
- Best for: sparse, generic data, e.g., meeting transcripts or Wikipedia pages.
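For instance, a long meeting transcript can be compressed once and the result reused for every follow-up question. A minimal sketch of that pattern, using the quick-start API shown below; ask_llm is a hypothetical placeholder for whatever LLM client you already use:

from compresr import CompressionClient

client = CompressionClient(api_key="cmp_...")

# Pay the compression cost once...
compressed = client.generate(
    context=open("meeting_transcript.txt").read(),
    compression_model_name="cmprsr_v1",
    target_compression_ratio=0.5
).data.compressed_context

# ...then reuse the same compressed context across many queries.
def ask_llm(prompt: str) -> str:
    ...  # hypothetical placeholder: call your LLM provider here

for question in ["Who attended?", "What were the action items?"]:
    print(ask_llm(f"{compressed}\n\nQuestion: {question}"))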
Use-Case Tailored Compression
Access pre-compressed knowledge bases or request custom compression models tailored to your needs.
- Pre-Compressed Knowledge, like a "Compressed Web".
- Use-case tailored compression for specific domains (Finance, Legal, Healthcare...).
- Query-specific compression, which allows for extreme compression rates (sketched below).
- <YOUR_USE_CASE>. Contact us; we would love to hear about it!
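To illustrate query-specific compression: because the compressor knows the question in advance, it can discard everything irrelevant to it. A hypothetical sketch; the query parameter below is our own illustration, not a documented part of the SDK:

from compresr import CompressionClient

client = CompressionClient(api_key="cmp_...")

result = client.generate(
    context=open("annual_report.txt").read(),
    compression_model_name="cmprsr_v1",
    query="What are the main risk factors?",  # hypothetical parameter, for illustration only
    target_compression_ratio=0.1              # extreme rates become feasible when the query is known
)
print(result.data.compressed_context)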
Get started in minutes
Drop-in addition to your current context management workflow.
Get Your API Key
Create an API key from your console.
# Your API key: cmp_...
export COMPRESR_API_KEY="cmp_..."
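In Python you can then read the exported key instead of hard-coding it (a minimal sketch; whether the SDK also picks the variable up automatically is not documented here, so it is passed explicitly):

import os
from compresr import CompressionClient

# Read the key exported above rather than hard-coding it in source.
client = CompressionClient(api_key=os.environ["COMPRESR_API_KEY"])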
Install the SDK
Install the official Python library. Works with Python 3.8+.
pip install compresr
Ready to Use
Compress your context and use it with any LLM of your choice.
from compresr import CompressionClient
client = CompressionClient(
    api_key="cmp_..."
)
result = client.generate(
    context="Your long context...",       # the text to compress
    compression_model_name="cmprsr_v1",   # universal compression model
    target_compression_ratio=0.5          # how aggressively to compress
)
print(result.data.compressed_context)
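The compressed context can then stand in for the original in any LLM call. For example, continuing from the snippet above with the OpenAI Python SDK as one possible downstream model (any provider works the same way):

from openai import OpenAI

llm = OpenAI()  # reads OPENAI_API_KEY from the environment

# Use the compressed context in place of the original long context.
response = llm.chat.completions.create(
    model="gpt-4o-mini",
    messages=[{
        "role": "user",
        "content": f"{result.data.compressed_context}\n\nSummarize the key points."
    }]
)
print(response.choices[0].message.content)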
Ready to cut your API costs?
Join engineering teams reducing their LLM costs by up to 90%.