As conversations grow, they consume more of the model's context window. Compaction reduces context size while preserving important information, keeping your conversations responsive and cost-effective.
## Approaches
| Approach | Speed | Context Preservation | Cost | Reversible |
|---|---|---|---|---|
| Start Here | Instant | Intelligent | Free | Yes |
| /compact | Slower (uses AI) | Intelligent | Uses API tokens | No |
| /clear | Instant | None | Free | No |
| /truncate | Instant | Temporal | Free | No |
| Auto-Compaction | Automatic | Intelligent | Uses API tokens | No |
## When to compact
- Proactively: Before hitting context limits, especially on long-running tasks
- After major milestones: When you've completed a phase and want to preserve its learnings without carrying the full history forward
- When responses degrade: Large contexts can reduce response quality
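To make "proactively" concrete, a minimal sketch of a threshold check you might run before long tasks. The token counts, the 80% threshold, and the `should_compact` helper are illustrative assumptions, not part of Mux itself:

```python
# Hypothetical helper: decide whether to compact before context runs low.
# The 0.8 threshold is an assumption, not a Mux default.

def should_compact(tokens_used: int, context_window: int, threshold: float = 0.8) -> bool:
    """Return True once the conversation consumes `threshold` of the window."""
    return tokens_used / context_window >= threshold

# Example with an assumed 200k-token context window:
print(should_compact(150_000, 200_000))  # 75% used -> False
print(should_compact(170_000, 200_000))  # 85% used -> True
```

Checking a ratio like this before starting a long-running task lets you compact at a natural boundary instead of mid-task.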
## Next steps
- Manual Compaction — Commands for manually managing context
- Automatic Compaction — Let Mux compact for you based on usage or idle time