

As conversations grow, they consume more of the model’s context window. Compaction reduces context size while preserving important information, keeping your conversations responsive and cost-effective.
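To make the idea concrete, here is a minimal sketch of what compaction does to a message history. This is illustrative only, not the tool's implementation: the 4-characters-per-token estimate and the `compact` helper are assumptions, and a real compactor would ask the model to write the summary.

```python
def rough_tokens(text: str) -> int:
    # Crude token estimate (~4 characters per token); a real client
    # would use the model's tokenizer.
    return max(1, len(text) // 4)

def compact(messages: list[str], keep_recent: int = 2) -> list[str]:
    """Replace all but the most recent messages with a short summary."""
    if len(messages) <= keep_recent:
        return messages
    older, recent = messages[:-keep_recent], messages[-keep_recent:]
    # A real compactor would have the model summarize `older`;
    # here we just note what was condensed.
    summary = f"[summary of {len(older)} earlier messages]"
    return [summary] + recent

history = [f"message {i}: " + "x" * 400 for i in range(10)]
before = sum(rough_tokens(m) for m in history)
after = sum(rough_tokens(m) for m in compact(history))
print(before > after)  # the compacted history uses far fewer tokens
```

The recent turns survive verbatim while the bulk of the token count collapses into the summary line — which is why compaction preserves important information at a fraction of the context cost.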

Approaches

| Approach | Speed | Context Preservation | Cost | Reversible |
| --- | --- | --- | --- | --- |
| Start Here | Instant | Intelligent | Free | Yes |
| /compact | Slower (uses AI) | Intelligent | Uses API tokens | No |
| /clear | Instant | None | Free | No |
| /truncate | Instant | Temporal | Free | No |
| Auto-Compaction | Automatic | Intelligent | Uses API tokens | No |
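The difference between the approaches above can be sketched as operations on a message list. These function names are not the tool's real API; they only illustrate what each command keeps and discards.

```python
def clear(messages: list[str]) -> list[str]:
    """/clear: drop the entire history."""
    return []

def truncate(messages: list[str], keep_recent: int = 4) -> list[str]:
    """/truncate: keep only the most recent messages (a temporal cut)."""
    return messages[-keep_recent:]

def compact(messages: list[str], summarize, keep_recent: int = 4) -> list[str]:
    """/compact: summarize older messages, keep recent ones verbatim.
    `summarize` stands in for the model call that costs API tokens."""
    older, recent = messages[:-keep_recent], messages[-keep_recent:]
    return ([summarize(older)] if older else []) + recent

history = [f"turn {i}" for i in range(10)]
print(clear(history))      # nothing preserved
print(truncate(history))   # only the last 4 turns preserved
print(compact(history, lambda m: f"[summary of {len(m)} turns]"))
```

This is why /clear and /truncate are free and instant (pure list operations) while /compact and auto-compaction cost API tokens: only the latter involve a model call to produce the summary.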

When to compact

  • Proactively: Before hitting context limits, especially on long-running tasks
  • After major milestones: When you’ve completed a phase and want to preserve learnings without full history
  • When responses degrade: Large contexts can reduce response quality
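The proactive case above amounts to a simple threshold check. A hedged sketch, assuming an illustrative 200k-token window and an 80% trigger — neither number is the tool's actual default:

```python
CONTEXT_WINDOW = 200_000   # tokens; assumed model limit, not the real value
THRESHOLD = 0.8            # compact once 80% of the window is used (illustrative)

def should_compact(used_tokens: int) -> bool:
    # Trigger compaction before the window fills, leaving headroom
    # for the next responses.
    return used_tokens >= CONTEXT_WINDOW * THRESHOLD

print(should_compact(150_000))  # False: still under the threshold
print(should_compact(170_000))  # True: time to compact
```

Auto-compaction applies exactly this kind of rule for you; compacting manually at a milestone gives you the same headroom while letting you choose the moment the summary is taken.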

Next steps