OpenClaw Skill v3.0.0

OpenClaw Token Optimizer

by Asif
Deploy on EasyClawd from $14.90/mo

Reduce OpenClaw token usage and API costs through smart model routing, heartbeat optimization, budget tracking, and native 2026.2.15 features (session pruning, bootstrap size limits, cache TTL alignment).

How to use this skill

OpenClaw skills run inside an OpenClaw container. EasyClawd deploys and manages yours — no server setup needed.

  1. Sign up on EasyClawd (2 minutes)
  2. Connect your Telegram bot
  3. Install OpenClaw Token Optimizer from the skills panel
Get started from $14.90/mo
19 stars
5,770 downloads
37 installs
2 comments
19 versions

Latest Changelog

**Major update: v3.0 introduces lazy skill loading and restructures optimization logic for maximum token savings.**

- Added native support and documentation for lazy skill loading (SKILLS.md-based), now the largest single optimization.
- Removed bundled cronjob guide and multi-provider reference/config files; external strategies are now referenced but not included.
- Simplified documentation to focus on core, local-only optimizations and integration patterns.
- Updated AGENTS.md and heartbeat optimization templates for compatibility with session pruning and newer OpenClaw features.
- All scripts and documentation updated for consistency with v3.0 best practices.

Tags

cost-savings: 3.0.0 · latest: 3.0.0 · lazy-loading: 3.0.0 · model-routing: 3.0.0 · productivity: 3.0.0 · token-optimization: 3.0.0

Skill Documentation

---
name: token-optimizer
description: Reduce OpenClaw token usage and API costs through smart model routing, heartbeat optimization, budget tracking, and native 2026.2.15 features (session pruning, bootstrap size limits, cache TTL alignment). Use when token costs are high, API rate limits are being hit, or hosting multiple agents at scale. The 4 executable scripts (context_optimizer, model_router, heartbeat_optimizer, token_tracker) are local-only — no network requests, no subprocess calls, no system modifications. Reference files (PROVIDERS.md, config-patches.json) document optional multi-provider strategies that require external API keys and network access if you choose to use them. See SECURITY.md for full breakdown.
version: 3.0.0
homepage: https://github.com/Asif2BD/OpenClaw-Token-Optimizer
source: https://github.com/Asif2BD/OpenClaw-Token-Optimizer
author: Asif2BD
security:
  verified: true
  auditor: Oracle (Matrix Zion)
  audit_date: 2026-02-18
  scripts_no_network: true
  scripts_no_code_execution: true
  scripts_no_subprocess: true
  scripts_data_local_only: true
  reference_files_describe_external_services: true
  optimize_sh_is_convenience_wrapper: true
  optimize_sh_only_calls_bundled_python_scripts: true
---

# Token Optimizer

Comprehensive toolkit for reducing token usage and API costs in OpenClaw deployments. Combines smart model routing, optimized heartbeat intervals, usage tracking, and multi-provider strategies.

## Quick Start

**Immediate actions** (no config changes needed):

1. **Generate optimized AGENTS.md (BIGGEST WIN!):**
   ```bash
   python3 scripts/context_optimizer.py generate-agents
   # Creates AGENTS.md.optimized — review and replace your current AGENTS.md
   ```

2. **Check what context you ACTUALLY need:**
   ```bash
   python3 scripts/context_optimizer.py recommend "hi, how are you?"
   # Shows: Only 2 files needed (not 50+!)
   ```

3. **Install optimized heartbeat:**
   ```bash
   cp assets/HEARTBEAT.template.md ~/.openclaw/workspace/HEARTBEAT.md
   ```

4. **Enforce cheaper models for casual chat:**
   ```bash
   python3 scripts/model_router.py "thanks!"
   # Single-provider Anthropic setup: Use Sonnet, not Opus
   # Multi-provider setup (OpenRouter/Together): Use Haiku for max savings
   ```

5. **Check current token budget:**
   ```bash
   python3 scripts/token_tracker.py check
   ```

**Expected savings:** 50-80% reduction in token costs for typical workloads (context optimization is the biggest factor!).

## Core Capabilities

### 0. Lazy Skill Loading (NEW in v3.0 — BIGGEST WIN!)

**The single highest-impact optimization available.** Most agents burn 3,000–15,000 tokens per session loading skill files they never use. Stop that first.

**The pattern:**

1. Create a lightweight `SKILLS.md` catalog in your workspace (~300 tokens — list of skills + when to load them)
2. Only load individual SKILL.md files when a task actually needs them
3. Apply the same logic to memory files — load MEMORY.md at startup, daily logs only on demand
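
As a concrete sketch, a minimal `SKILLS.md` catalog might look like the following. The skill names and trigger descriptions here are illustrative placeholders, not part of this skill:

```markdown
# SKILLS.md (index only, ~300 tokens)

| Skill | Load when |
|---|---|
| token-optimizer | token costs or API rate limits become a problem |
| pdf-tools | a task involves reading or generating PDFs |
| calendar-sync | a task touches scheduling or reminders |

Load the individual SKILL.md files on demand; never load all of them upfront.
```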

**Token savings:**

| Library size | Before (eager) | After (lazy) | Savings |
|---|---|---|---|
| 5 skills | ~3,000 tokens | ~600 tokens | **80%** |
| 10 skills | ~6,500 tokens | ~750 tokens | **88%** |
| 20 skills | ~13,000 tokens | ~900 tokens | **93%** |
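
The savings figures in the table follow from simple arithmetic; a quick sanity check:

```python
def savings_percent(before: int, after: int) -> int:
    """Percent of context tokens saved by switching from eager to lazy loading."""
    return round(100 * (before - after) / before)

# Figures from the table above:
print(savings_percent(3000, 600))    # 80
print(savings_percent(6500, 750))    # 88
print(savings_percent(13000, 900))   # 93
```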

**Quick implementation in AGENTS.md:**

```markdown
## Skills

At session start: Read SKILLS.md (the index only — ~300 tokens).
Load individual skill files ONLY when a task requires them.
Never load all skills upfront.
```

**Full implementation (with catalog template + optimizer script):**

```bash
clawhub install openclaw-skill-lazy-loader
```

The companion skill `openclaw-skill-lazy-loader` includes a `SKILLS.md.template`, an `AGENTS.md.template` lazy-loading section, and a `context_optimizer.py` CLI that recommends exactly which skills to load for any given task.

**Lazy loading handles context loading costs. The remaining capabilities below handle runtime costs.** Together they cover the full token lifecycle.

---

### 1. Context Optimization (NEW!)

**A major token saver:** load only the files you actually need, not everything upfront.

**Problem:** Default OpenClaw loads ALL context files every session:
- SOUL.md, AGENTS.md, USER.md, TOOLS.md, MEMORY.md
- docs/**/*.md (hundreds of files)
- memory/2026-*.md (daily logs)
- Total: Often 50K+ tokens before user even speaks!

**Solution:** Lazy loading based on prompt complexity.

**Usage:**
```bash
python3 scripts/context_optimizer.py recommend "<user prompt>"
```

**Examples:**
```bash
# Simple greeting → minimal context (2 files only!)
context_optimizer.py recommend "hi"
→ Load: SOUL.md, IDENTITY.md
→ Skip: Everything else
→ Savings: ~80% of context

# Standard work → selective loading
context_optimizer.py recommend "write a function"
→ Load: SOUL.md, IDENTITY.md, memory/TODAY.md
→ Skip: docs, old memory, knowledge base
→ Savings: ~50% of context

# Complex task → full context
context_optimizer.py recommend "analyze our entire architecture"
→ Load: SOUL.md, IDENTITY.md, MEMORY.md, memory/TODAY+YESTERDAY.md
→ Conditionally load: Relevant docs only
→ Savings: ~30% of context
```
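
The three tiers in these examples map context levels to fixed file bundles. A sketch of that mapping, with file names taken from the examples above but a structure that is assumed rather than copied from the script:

```python
# Illustrative mapping from context level to file bundle; the real
# context_optimizer may organize this differently.
CONTEXT_BUNDLES = {
    "minimal": ["SOUL.md", "IDENTITY.md"],
    "standard": ["SOUL.md", "IDENTITY.md", "memory/TODAY.md"],
    "full": ["SOUL.md", "IDENTITY.md", "MEMORY.md",
             "memory/TODAY.md", "memory/YESTERDAY.md"],
}

def files_for(level: str) -> list[str]:
    """Return the file bundle for a context level, defaulting to full context."""
    return CONTEXT_BUNDLES.get(level, CONTEXT_BUNDLES["full"])
```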

**Output format:**
```json
{
  "complexity": "simple",
  "context_level": "minimal",
  "recommended_files": ["SOUL.md", "IDENTITY.md"],
  "file_count": 2,
  "savings_percent": 80,
  "skip_patterns": ["docs/**/*.md", "memory/20*.md"]
}
```

**Integration pattern:**
Before loading context for a new session:
```python
from context_optimizer import recommend_context_bundle

user_prompt = "thanks for your help"
recommendation = recommend_context_bundle(user_prompt)

if recommendation["context_level"] == "minimal":
    # Load only SOUL.md + IDENTITY.md and skip everything else (~80% token savings)
    context_files = recommendation["recommended_files"]
```

**Generate optimized AGENTS.md:**
```bash
context_optimizer.py generate-agents
# Creates AGENTS.md.optimized with lazy loading instructions
# Review and replace your current AGENTS.md
```

**Expected savings:** 50-80% reduction in context tokens.

### 2. Smart Model Routing (ENHANCED!)

Automatically classify tasks and route to appropriate model tiers.

**NEW: Communication pattern enforcement** — Never waste Opus tokens on "hi" or "thanks"!

**Usage:**
```bash
python3 scripts/model_router.py "<user prompt>" [current_model] [force_tier]
```

**Examples:**
```bash
# Communication (NEW!) → ALWAYS Haiku
python3 scripts/model_router.py "thanks!"
python3 scripts/model_router.py "hi"
python3 scripts/model_router.py "ok got it"
→ Enforced: Haiku (NEVER Sonnet/Opus for casual chat)

# Simple task → suggests Haiku
python3 scripts/model_router.py "read the log file"

# Medium task → suggests Sonnet
python3 scripts/model_router.py "write a function to parse JSON"

# Complex task → suggests Opus
python3 scripts/model_router.py "design a microservices architecture"
```

**Patterns enforced to Haiku (NEVER Sonnet/Opus):**

*Communication:*
- Greetings: hi, hey, hello, yo
- Thanks: thanks, thank you, thx
- Acknowledgments: ok, sure, got it, understood
- Short responses: yes, no, yep, nope
- Single words or very short phrases

*Background tasks:*
- Heartbeat checks: "check email", "monitor servers"
- Cronjobs: "scheduled task", "periodic check", "reminder"
- Document parsing: "parse CSV", "extract data from log", "read JSON"
- Log scanning: "scan error logs", "process logs"
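
The communication-side enforcement can be approximated with a single regex plus a length fallback. This is an illustrative sketch mirroring the categories above, not the skill's actual `COMMUNICATION_PATTERNS` table:

```python
import re

# Hypothetical pattern set covering greetings, thanks, acknowledgments,
# and short responses; adjust to taste.
COMMUNICATION_RE = re.compile(
    r"^\s*(hi|hey|hello|yo|thanks?(\s+you)?|thx|"
    r"ok(ay)?(\s+got\s+it)?|sure|got\s+it|understood|yes|no|yep|nope)"
    r"\s*[!.?]*\s*$",
    re.IGNORECASE,
)

def enforce_haiku(prompt: str) -> bool:
    """True when a prompt is casual chat that should always route to Haiku."""
    return bool(COMMUNICATION_RE.match(prompt)) or len(prompt.split()) == 1
```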

**Integration pattern:**
```python
from model_router import route_task

user_prompt = "show me the config"
routing = route_task(user_prompt)

if routing["should_switch"]:
    # Switch to the cheaper tier and track the expected savings
    model = routing["recommended_model"]
    savings = routing["cost_savings_percent"]
```

**Customization:**
Edit `ROUTING_RULES` or `COMMUNICATION_PATTERNS` in `scripts/model_router.py` to adjust patterns and keywords.
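
For orientation, such a rule table might look something like this (a hypothetical shape with made-up keywords; check `scripts/model_router.py` for the real structure):

```python
# Hypothetical structure, for illustration only; the bundled script's
# actual ROUTING_RULES may differ.
ROUTING_RULES = {
    "haiku": ["read", "parse", "scan", "check", "monitor"],
    "sonnet": ["write", "implement", "refactor", "fix"],
    "opus": ["design", "architect", "analyze", "plan"],
}

def suggest_tier(prompt: str) -> str:
    """Pick the most capable tier whose keywords appear in the prompt."""
    text = prompt.lower()
    for tier in ("opus", "sonnet", "haiku"):
        if any(keyword in text for keyword in ROUTING_RULES[tier]):
            return tier
    return "haiku"  # default to the cheapest tier
```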

### 3. Heartbeat Optimization

Reduce API calls from heartbeat polling with smart interval tracking:
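
One plausible shape for that interval tracking, sketched as an assumed design rather than the bundled heartbeat_optimizer implementation: back off while nothing changes, and snap back to the base interval on activity.

```python
def next_interval(current: float, activity: bool,
                  base: float = 60.0, ceiling: float = 900.0) -> float:
    """Double the polling interval while idle (up to a ceiling); reset on activity."""
    return base if activity else min(current * 2, ceiling)

interval = 60.0
for had_activity in (False, False, False):
    interval = next_interval(interval, had_activity)
# interval has backed off 60 -> 120 -> 240 -> 480
```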

**Setup:**
```bash
# Copy template to workspace
cp assets/HEARTBEAT.template.md ~/.openclaw/workspace/HEARTBEAT.md

# Plan which checks should run
python3 scripts/
```

Read the full documentation on ClawHub.
Security scan, version history, and community comments: view on ClawHub.