agentmemory — Persistent memory for AI coding agents

Your coding agent remembers everything. No more re-explaining.
Persistent memory for Claude Code, Cursor, Gemini CLI, OpenCode, and any MCP client.

Design doc: the original gist has 719 stars / 97 forks.

The gist extends Karpathy's LLM Wiki pattern with confidence scoring, lifecycle, knowledge graphs, and hybrid search.
agentmemory is the implementation.


95.2% retrieval R@5 · 92% fewer tokens · 44 MCP tools · 12 auto hooks · 0 external DBs · 654 tests passing

agentmemory demo

Quick Start · Benchmarks · vs Competitors · Agents · How It Works · MCP · Viewer · Config · API · Operations


Works with every agent

agentmemory works with any agent that supports hooks, MCP, or REST API. All agents share the same memory server.

- Claude Code: 12 hooks + MCP + skills
- OpenClaw: MCP + plugin
- Hermes: MCP + plugin
- Cursor: MCP server
- Gemini CLI: MCP server
- OpenCode: MCP server
- Codex CLI: MCP server
- Cline: MCP server
- Goose: MCP server
- Kilo Code: MCP server
- Aider: REST API
- Claude Desktop: MCP server
- Windsurf: MCP server
- Roo Code: MCP server
- Claude SDK: AgentSDKProvider
- Any agent: REST API

Works with any agent that speaks MCP or HTTP. One server, memories shared across all of them.

The included docker-compose.yml starts both iii-engine and the agentmemory-worker, mounts iii-config.yaml into the engine container, and persists iii state in the named iii-data volume.


You explain the same architecture every session. You re-discover the same bugs. You re-teach the same preferences. Built-in memory (CLAUDE.md, .cursorrules) caps out at 200 lines and goes stale. agentmemory fixes this. It silently captures what your agent does, compresses it into searchable memory, and injects the right context when the next session starts. One command. Works across agents.

What changes: Session 1 you set up JWT auth. Session 2 you ask for rate limiting. The agent already knows your auth uses jose middleware in src/middleware/auth.ts, your tests cover token validation, and you chose jose over jsonwebtoken for Edge compatibility. No re-explaining. No copy-pasting. The agent just knows.

npx @agentmemory/agentmemory

New in v0.8.2 — Security hardening (default localhost, viewer CSP nonces, mesh auth), agentmemory demo command, benchmark comparison vs mem0/Letta/Khoj, OpenClaw gateway plugin, real-time token savings in CLI + viewer.


Benchmarks

Retrieval Accuracy

LongMemEval-S (ICLR 2025, 500 questions)

| System | R@5 | R@10 | MRR |
|---|---|---|---|
| agentmemory | 95.2% | 98.6% | 88.2% |
| BM25-only fallback | 86.2% | 94.6% | 71.5% |

Token Savings

| Approach | Tokens/yr | Cost/yr |
|---|---|---|
| Paste full context | 19.5M+ | Impossible (exceeds window) |
| LLM-summarized | ~650K | ~$500 |
| agentmemory | ~170K | ~$10 |
| agentmemory + local embeddings | ~170K | $0 |

Embedding model: all-MiniLM-L6-v2 (local, free, no API key). Full reports: benchmark/LONGMEMEVAL.md, benchmark/QUALITY.md, benchmark/SCALE.md. Competitor comparison: benchmark/COMPARISON.md — agentmemory vs mem0, Letta, Khoj, claude-mem, Hippo.


vs Competitors

| | agentmemory | mem0 (53K ⭐) | Letta / MemGPT (22K ⭐) | Built-in (CLAUDE.md) |
|---|---|---|---|---|
| Type | Memory engine + MCP server | Memory layer API | Full agent runtime | Static file |
| Retrieval R@5 | 95.2% | 68.5% (LoCoMo) | 83.2% (LoCoMo) | N/A (grep) |
| Auto-capture | 12 hooks (zero manual effort) | Manual add() calls | Agent self-edits | Manual editing |
| Search | BM25 + Vector + Graph (RRF fusion) | Vector + Graph | Vector (archival) | Loads everything into context |
| Multi-agent | MCP + REST + leases + signals | API (no coordination) | Within Letta runtime only | Per-agent files |
| Framework lock-in | None (any MCP client) | None | High (must use Letta) | Per-agent format |
| External deps | None (SQLite + iii-engine) | Qdrant / pgvector | Postgres + vector DB | None |
| Memory lifecycle | 4-tier consolidation + decay + auto-forget | Passive extraction | Agent-managed | Manual pruning |
| Token efficiency | ~1,900 tokens/session ($10/yr) | Varies by integration | Core memory in context | 22K+ tokens at 240 obs |
| Real-time viewer | Yes (port 3113) | Cloud dashboard | Cloud dashboard | No |
| Self-hosted | Yes (default) | Optional | Optional | Yes |

Quick Start

Compatibility: this release targets stable iii-sdk ^0.11.0 and iii-engine v0.11.x.

Try it in 30 seconds

# Terminal 1: start the server
npx @agentmemory/agentmemory

# Terminal 2: seed sample data and see recall in action
npx @agentmemory/agentmemory demo

demo seeds 3 realistic sessions (JWT auth, N+1 query fix, rate limiting) and runs semantic searches against them. You'll see it find "N+1 query fix" when you search "database performance optimization" — keyword matching can't do that.

Open http://localhost:3113 to watch the memory build live.
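The semantic match above works because queries and observations are compared as dense vectors, not keywords. A minimal sketch of the cosine-similarity step (toy vectors and values are illustrative only, not the real 384-dimensional embeddings):

```typescript
// Cosine similarity between two embedding vectors — the core comparison
// in any vector-search stream. Assumes equal-length number arrays.
function cosineSimilarity(a: number[], b: number[]): number {
  let dot = 0, normA = 0, normB = 0;
  for (let i = 0; i < a.length; i++) {
    dot += a[i] * b[i];
    normA += a[i] * a[i];
    normB += b[i] * b[i];
  }
  return dot / (Math.sqrt(normA) * Math.sqrt(normB));
}

// Toy 3-d vectors: the "database performance" query sits closer to the
// "N+1 query fix" observation than to an unrelated one.
const query = [0.9, 0.1, 0.2];
const nPlusOneFix = [0.8, 0.2, 0.3];
const unrelated = [0.1, 0.9, 0.8];

console.log(cosineSimilarity(query, nPlusOneFix) > cosineSimilarity(query, unrelated));
```

A keyword index scores zero overlap between "database performance optimization" and "N+1 query fix"; embedding similarity ranks them close anyway.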

Upgrade / Maintenance

Use the maintenance command when you intentionally want to update your local runtime:

npx @agentmemory/agentmemory upgrade

Warning: this command mutates the current workspace/runtime. It can update JavaScript dependencies, may run cargo install iii-engine --force, and may pull Docker images.

Implementation details live in src/cli.ts (see runUpgrade around the src/cli.ts:544-595 region).

Claude Code (one block, paste it)

Install agentmemory: run `npx @agentmemory/agentmemory` in a separate terminal to start the memory server. Then run `/plugin marketplace add rohitg00/agentmemory` and `/plugin install agentmemory` — the plugin registers all 12 hooks, 4 skills, AND auto-wires the `@agentmemory/mcp` stdio server via its `.mcp.json`, so you get 44 MCP tools (memory_smart_search, memory_save, memory_sessions, memory_governance_delete, etc.) without any extra config step. Verify with `curl http://localhost:3111/agentmemory/health`. The real-time viewer is at http://localhost:3113.
OpenClaw (paste this prompt)
Install agentmemory for OpenClaw. Run `npx @agentmemory/agentmemory` in a separate terminal to start the memory server on localhost:3111. Then add this to my OpenClaw MCP config so agentmemory is available with all 43 memory tools:

{
  "mcpServers": {
    "agentmemory": {
      "command": "npx",
      "args": ["-y", "@agentmemory/mcp"]
    }
  }
}

Restart OpenClaw. Verify with `curl http://localhost:3111/agentmemory/health`. Open http://localhost:3113 for the real-time viewer. For deeper 4-hook gateway integration, see integrations/openclaw in the agentmemory repo.

Full guide: integrations/openclaw/

Hermes Agent (paste this prompt)
Install agentmemory for Hermes. Run `npx @agentmemory/agentmemory` in a separate terminal to start the memory server on localhost:3111. Then add this to ~/.hermes/config.yaml so Hermes can use agentmemory as an MCP server with all 43 memory tools:

mcp_servers:
  agentmemory:
    command: npx
    args: ["-y", "@agentmemory/mcp"]

Verify with `curl http://localhost:3111/agentmemory/health`. Open http://localhost:3113 for the real-time viewer. For deeper 6-hook memory provider integration (pre-LLM context injection, turn capture, MEMORY.md mirroring, system prompt block), copy integrations/hermes from the agentmemory repo to ~/.hermes/plugins/memory/agentmemory.

Full guide: integrations/hermes/

Other agents

Start the memory server: npx @agentmemory/agentmemory

Then add the MCP config for your agent:

Agent Setup
| Agent | Setup |
|---|---|
| Cursor | Add to `~/.cursor/mcp.json`: `{"mcpServers": {"agentmemory": {"command": "npx", "args": ["-y", "@agentmemory/mcp"]}}}` |
| OpenClaw | Add to MCP config: `{"mcpServers": {"agentmemory": {"command": "npx", "args": ["-y", "@agentmemory/mcp"]}}}` or use the gateway plugin |
| Gemini CLI | `gemini mcp add agentmemory -- npx -y @agentmemory/mcp` |
| Codex CLI | Add to `.codex/config.yaml`: `mcp_servers: {agentmemory: {command: npx, args: ["-y", "@agentmemory/mcp"]}}` |
| OpenCode | Add to `opencode.json`: `{"mcp": {"agentmemory": {"type": "local", "command": ["npx", "-y", "@agentmemory/mcp"], "enabled": true}}}` |
| Hermes Agent | Add to `~/.hermes/config.yaml` or use the memory provider plugin |
| Cline / Goose / Kilo Code | Add MCP server in settings |
| Claude Desktop | Add to `claude_desktop_config.json`: `{"mcpServers": {"agentmemory": {"command": "npx", "args": ["-y", "@agentmemory/mcp"]}}}` |
| Aider | REST API: `curl -X POST http://localhost:3111/agentmemory/smart-search -d '{"query": "auth"}'` |
| Any agent (32+) | `npx skillkit install agentmemory` |

From source

git clone https://github.com/rohitg00/agentmemory.git && cd agentmemory
npm install && npm run build && npm start

This starts agentmemory with a local iii-engine if iii is already installed, or falls back to Docker Compose if Docker is available. REST, streams, and the viewer bind to 127.0.0.1 by default.

Install iii-engine manually:

  • macOS / Linux: curl -fsSL https://install.iii.dev/iii/main/install.sh | sh
  • Windows: download iii-x86_64-pc-windows-msvc.zip from iii-hq/iii releases, extract iii.exe, add to PATH

Or use Docker (the bundled docker-compose.yml pulls iiidev/iii:latest). Full docs: iii.dev/docs.

Windows

agentmemory runs on Windows 10/11, but the Node.js package alone isn't enough — you also need the iii-engine runtime (a separate native binary) as a background process. The official upstream installer is a sh script and there is no PowerShell installer or scoop/winget package today, so Windows users have two paths:

Option A — Prebuilt Windows binary (recommended):

# 1. Open https://github.com/iii-hq/iii/releases/latest in your browser
# 2. Download iii-x86_64-pc-windows-msvc.zip
#    (or iii-aarch64-pc-windows-msvc.zip if you're on an ARM machine)
# 3. Extract iii.exe somewhere on PATH, or place it at:
#    %USERPROFILE%\.local\bin\iii.exe
#    (agentmemory checks that location automatically)
# 4. Verify:
iii --version

# 5. Then run agentmemory as usual:
npx -y @agentmemory/agentmemory

Option B — Docker Desktop:

# 1. Install Docker Desktop for Windows
# 2. Start Docker Desktop and make sure the engine is running
# 3. Run agentmemory — it will auto-start the bundled compose file:
npx -y @agentmemory/agentmemory

Option C — standalone MCP only (no engine): if you only need the MCP tools for your agent and don't need the REST API, viewer, or cron jobs, skip the engine entirely:

npx -y @agentmemory/agentmemory mcp
# or via the shim package:
npx -y @agentmemory/mcp

Diagnostics for Windows: if npx @agentmemory/agentmemory fails, re-run with --verbose to see the actual engine stderr. Common failure modes:

| Symptom | Fix |
|---|---|
| iii-engine process started then did not become ready within 15s | Engine crashed on startup — re-run with `--verbose`, check stderr |
| Could not start iii-engine | Neither `iii.exe` nor Docker is installed. See Option A or B above |
| Port conflict | `netstat -ano \| findstr :3111` to see what's bound, then kill it or use `--port <N>` |
| Docker fallback skipped even though Docker is installed | Make sure Docker Desktop is actually running (system tray icon) |

Note: there is no `cargo install iii-engine`; iii is not published to crates.io. The only supported install methods are the prebuilt binary above, the upstream sh install script (macOS/Linux only), and the Docker image.


Why agentmemory

Every coding agent forgets everything when the session ends. You waste the first 5 minutes of every session re-explaining your stack. agentmemory runs in the background and eliminates that entirely.

Session 1: "Add auth to the API"
  Agent writes code, runs tests, fixes bugs
  agentmemory silently captures every tool use
  Session ends -> observations compressed into structured memory

Session 2: "Now add rate limiting"
  Agent already knows:
    - Auth uses JWT middleware in src/middleware/auth.ts
    - Tests in test/auth.test.ts cover token validation
    - You chose jose over jsonwebtoken for Edge compatibility
  Zero re-explaining. Starts working immediately.

vs built-in agent memory

Every AI coding agent ships with built-in memory — Claude Code has MEMORY.md, Cursor has notepads, Cline has memory bank. These work like sticky notes. agentmemory is the searchable database behind the sticky notes.

| | Built-in (CLAUDE.md) | agentmemory |
|---|---|---|
| Scale | 200-line cap | Unlimited |
| Search | Loads everything into context | BM25 + vector + graph (top-K only) |
| Token cost | 22K+ at 240 observations | ~1,900 tokens (92% less) |
| Cross-agent | Per-agent files | MCP + REST (any agent) |
| Coordination | None | Leases, signals, actions, routines |
| Observability | Read files manually | Real-time viewer on :3113 |

How It Works

Memory Pipeline

PostToolUse hook fires
  -> SHA-256 dedup (5min window)
  -> Privacy filter (strip secrets, API keys)
  -> Store raw observation
  -> LLM compress -> structured facts + concepts + narrative
  -> Vector embedding (6 providers + local)
  -> Index in BM25 + vector + knowledge graph

SessionStart hook fires
  -> Load project profile (top concepts, files, patterns)
  -> Hybrid search (BM25 + vector + graph)
  -> Token budget (default: 2000 tokens)
  -> Inject into conversation
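The SHA-256 dedup step in the capture pipeline can be sketched like this (field names and the exact window semantics are assumptions; only the hash + 5-minute window come from the pipeline above):

```typescript
import { createHash } from "node:crypto";

// Dedup sketch: skip storing an observation whose content hash was
// already seen within the last 5 minutes.
const WINDOW_MS = 5 * 60 * 1000;
const seen = new Map<string, number>(); // content hash -> last-seen ms

function shouldStore(content: string, now: number = Date.now()): boolean {
  const hash = createHash("sha256").update(content).digest("hex");
  const last = seen.get(hash);
  seen.set(hash, now); // refresh the window on every sighting
  return last === undefined || now - last > WINDOW_MS;
}
```

Hashing before the privacy filter is cheap; identical tool outputs (e.g. a test runner re-run) collapse into a single stored observation.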

4-Tier Memory Consolidation

Inspired by how human brains consolidate memory during sleep.

| Tier | What | Analogy |
|---|---|---|
| Working | Raw observations from tool use | Short-term memory |
| Episodic | Compressed session summaries | "What happened" |
| Semantic | Extracted facts and patterns | "What I know" |
| Procedural | Workflows and decision patterns | "How to do it" |

Memories decay over time (Ebbinghaus curve). Frequently accessed memories strengthen. Stale memories auto-evict. Contradictions are detected and resolved.
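A sketch of the decay-and-strengthen mechanic described above (the shape is the Ebbinghaus exponential; the constants, field names, and eviction threshold here are assumptions, not the shipped values):

```typescript
// Each memory has a "stability": how many days it takes retention to
// fall to 1/e. Accessing a memory strengthens it; retention below a
// threshold marks it for auto-eviction.
interface MemoryItem { stabilityDays: number; lastAccessMs: number; }

const MS_PER_DAY = 86_400_000;
const EVICT_BELOW = 0.05; // hypothetical eviction threshold

function retention(m: MemoryItem, nowMs: number): number {
  const elapsedDays = (nowMs - m.lastAccessMs) / MS_PER_DAY;
  return Math.exp(-elapsedDays / m.stabilityDays); // Ebbinghaus curve
}

function access(m: MemoryItem, nowMs: number): void {
  m.stabilityDays *= 1.5; // frequently accessed memories strengthen
  m.lastAccessMs = nowMs;
}
```

With `stabilityDays = 10`, retention is ~0.37 after 10 days of no access; one access resets the clock and raises stability to 15 days.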

What Gets Captured

| Hook | Captures |
|---|---|
| SessionStart | Project path, session ID |
| UserPromptSubmit | User prompts (privacy-filtered) |
| PreToolUse | File access patterns + enriched context |
| PostToolUse | Tool name, input, output |
| PostToolUseFailure | Error context |
| PreCompact | Re-injects memory before compaction |
| SubagentStart/Stop | Sub-agent lifecycle |
| Stop | End-of-session summary |
| SessionEnd | Session complete marker |

Key Capabilities

| Capability | Description |
|---|---|
| Automatic capture | Every tool use recorded via hooks — zero manual effort |
| Semantic search | BM25 + vector + knowledge graph with RRF fusion |
| Memory evolution | Versioning, supersession, relationship graphs |
| Auto-forgetting | TTL expiry, contradiction detection, importance eviction |
| Privacy first | API keys, secrets, `<private>` tags stripped before storage |
| Self-healing | Circuit breaker, provider fallback chain, health monitoring |
| Claude bridge | Bi-directional sync with MEMORY.md |
| Knowledge graph | Entity extraction + BFS traversal |
| Team memory | Namespaced shared + private across team members |
| Citation provenance | Trace any memory back to source observations |
| Git snapshots | Version, rollback, and diff memory state |
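The privacy-first capture can be sketched as a redaction pass before storage. The patterns below are illustrative placeholders (the shipped filter is broader); only the general behavior — strip API keys, secrets, and `<private>` tags — comes from the list above:

```typescript
// Redaction sketch: replace anything secret-shaped before an
// observation is persisted. Patterns here are assumptions.
const SECRET_PATTERNS: RegExp[] = [
  /sk-[A-Za-z0-9-]{16,}/g,                         // API-key-shaped tokens
  /(?:api[_-]?key|secret|token)\s*[:=]\s*\S+/gi,   // key=value assignments
  /<private>[\s\S]*?<\/private>/g,                 // explicit <private> tags
];

function redact(text: string): string {
  return SECRET_PATTERNS.reduce(
    (scrubbed, pattern) => scrubbed.replace(pattern, "[REDACTED]"),
    text,
  );
}
```

Running the filter before compression and embedding means secrets never reach the LLM provider or the vector index.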

Triple-stream retrieval combines three signals:

| Stream | What it does | When |
|---|---|---|
| BM25 | Stemmed keyword matching with synonym expansion | Always on |
| Vector | Cosine similarity over dense embeddings | Embedding provider configured |
| Graph | Knowledge graph traversal via entity matching | Entities detected in query |

Fused with Reciprocal Rank Fusion (RRF, k=60) and session-diversified (max 3 results per session).
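The fusion step above can be sketched directly; RRF with k=60 and the max-3-per-session diversity cap come from the text, while the data shapes are assumptions:

```typescript
// Reciprocal Rank Fusion sketch: each stream contributes 1 / (k + rank)
// per hit (rank is 1-based), scores are summed across streams, then the
// fused list is capped at 3 results per session.
interface Hit { id: string; sessionId: string; }

function rrfFuse(streams: Hit[][], k = 60, perSessionCap = 3): Hit[] {
  const scores = new Map<string, { hit: Hit; score: number }>();
  for (const ranked of streams) {
    ranked.forEach((hit, i) => {
      const entry = scores.get(hit.id) ?? { hit, score: 0 };
      entry.score += 1 / (k + i + 1); // i is 0-based, so rank = i + 1
      scores.set(hit.id, entry);
    });
  }
  const fused = [...scores.values()].sort((a, b) => b.score - a.score);
  const perSession = new Map<string, number>();
  return fused
    .filter(({ hit }) => {
      const n = perSession.get(hit.sessionId) ?? 0;
      perSession.set(hit.sessionId, n + 1);
      return n < perSessionCap;
    })
    .map((entry) => entry.hit);
}
```

A hit that appears in two streams accumulates two reciprocal-rank terms, so cross-stream agreement outranks a single top-ranked hit from one stream.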

Embedding providers

agentmemory auto-detects your provider. For best results, install local embeddings (free):

npm install @xenova/transformers

| Provider | Model | Dimensions | Env Var | Notes |
|---|---|---|---|---|
| Local (recommended) | all-MiniLM-L6-v2 | 384 | `EMBEDDING_PROVIDER=local` | Free, offline, +8pp recall over BM25-only |
| Gemini | gemini-embedding-2-preview | 3072 (configurable lower) | `GEMINI_API_KEY` | Set `GEMINI_EMBEDDING_MODEL` or `GEMINI_EMBEDDING_DIMENSIONS` to override |
| OpenAI | text-embedding-3-small | 1536 | `OPENAI_API_KEY` | $0.02/1M tokens |
| Voyage AI | voyage-code-3 | 1024 | `VOYAGE_API_KEY` | Optimized for code |
| Cohere | embed-english-v3.0 | 1024 | `COHERE_API_KEY` | Free trial available |
| OpenRouter | Any embedding model | varies | `OPENROUTER_API_KEY` | Multi-model proxy |
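Auto-detection can be sketched as a first-match scan over the env vars in the table. The precedence order below is an assumption (check the source for the real order); the env var names come from the table:

```typescript
// Provider auto-detection sketch: explicit EMBEDDING_PROVIDER=local wins,
// otherwise pick the first provider whose API key is present. With no
// key at all, retrieval falls back to BM25-only ("none").
function detectEmbeddingProvider(env: Record<string, string | undefined>): string {
  if (env.EMBEDDING_PROVIDER === "local") return "local";
  if (env.GEMINI_API_KEY) return "gemini";
  if (env.OPENAI_API_KEY) return "openai";
  if (env.VOYAGE_API_KEY) return "voyage";
  if (env.COHERE_API_KEY) return "cohere";
  if (env.OPENROUTER_API_KEY) return "openrouter";
  return "none"; // BM25-only fallback — search still works
}
```

Passing `env` as a parameter instead of reading `process.env` directly keeps the detection logic testable.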

MCP Server

44 tools, 6 resources, 3 prompts, and 4 skills — the most comprehensive MCP memory toolkit for any agent.

44 Tools

Core tools (always available)

| Tool | Description |
|---|---|
| memory_recall | Search past observations |
| memory_compress_file | Compress markdown files while preserving structure |
| memory_save | Save an insight, decision, or pattern |
| memory_patterns | Detect recurring patterns |
| memory_smart_search | Hybrid semantic + keyword search |
| memory_file_history | Past observations about specific files |
| memory_sessions | List recent sessions |
| memory_timeline | Chronological observations |
| memory_profile | Project profile (concepts, files, patterns) |
| memory_export | Export all memory data |
| memory_relations | Query relationship graph |
Extended tools (44 total — set AGENTMEMORY_TOOLS=all)

| Tool | Description |
|---|---|
| memory_patterns | Detect recurring patterns |
| memory_timeline | Chronological observations |
| memory_relations | Query relationship graph |
| memory_graph_query | Knowledge graph traversal |
| memory_consolidate | Run 4-tier consolidation |
| memory_claude_bridge_sync | Sync with MEMORY.md |
| memory_team_share | Share with team members |
| memory_team_feed | Recent shared items |
| memory_audit | Audit trail of operations |
| memory_governance_delete | Delete with audit trail |
| memory_snapshot_create | Git-versioned snapshot |
| memory_action_create | Create work items with dependencies |
| memory_action_update | Update action status |
| memory_frontier | Unblocked actions ranked by priority |
| memory_next | Single most important next action |
| memory_lease | Exclusive action leases (multi-agent) |
| memory_routine_run | Instantiate workflow routines |
| memory_signal_send | Inter-agent messaging |
| memory_signal_read | Read messages with receipts |
| memory_checkpoint | External condition gates |
| memory_mesh_sync | P2P sync between instances |
| memory_sentinel_create | Event-driven watchers |
| memory_sentinel_trigger | Fire sentinels externally |
| memory_sketch_create | Ephemeral action graphs |
| memory_sketch_promote | Promote to permanent |
| memory_crystallize | Compact action chains |
| memory_diagnose | Health checks |
| memory_heal | Auto-fix stuck state |
| memory_facet_tag | Dimension:value tags |
| memory_facet_query | Query by facet tags |
| memory_verify | Trace provenance |

6 Resources · 3 Prompts · 4 Skills

| Type | Name | Description |
|---|---|---|
| Resource | agentmemory://status | Health, session count, memory count |
| Resource | agentmemory://project/{name}/profile | Per-project intelligence |
| Resource | agentmemory://memories/latest | Latest 10 active memories |
| Resource | agentmemory://graph/stats | Knowledge graph statistics |
| Prompt | recall_context | Search + return context messages |
| Prompt | session_handoff | Handoff data between agents |
| Prompt | detect_patterns | Analyze recurring patterns |
| Skill | /recall | Search memory |
| Skill | /remember | Save to long-term memory |
| Skill | /session-history | Recent session summaries |
| Skill | /forget | Delete observations/sessions |

Standalone MCP

Run without the full server — for any MCP client. Either of these works:

npx -y @agentmemory/agentmemory mcp   # canonical (always available)
npx -y @agentmemory/mcp                # shim package alias

Or add to your agent's MCP config:

Most agents (Cursor, Claude Desktop, Cline, etc.):

{
  "mcpServers": {
    "agentmemory": {
      "command": "npx",
      "args": ["-y", "@agentmemory/mcp"]
    }
  }
}

OpenCode (opencode.json):

{
  "mcp": {
    "agentmemory": {
      "type": "local",
      "command": ["npx", "-y", "@agentmemory/mcp"],
      "enabled": true
    }
  }
}

Real-Time Viewer

Auto-starts on port 3113. Live observation stream, session explorer, memory browser, knowledge graph visualization, and health dashboard.

open http://localhost:3113

The viewer server binds to 127.0.0.1 by default. The REST-served /agentmemory/viewer endpoint follows the normal AGENTMEMORY_SECRET bearer-token rules. CSP headers use a per-response script nonce and disable inline handler attributes (script-src-attr 'none').
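The per-response nonce mechanism can be sketched as follows. The exact directive set the viewer emits may differ; only "per-response script nonce" and `script-src-attr 'none'` come from the paragraph above:

```typescript
import { randomBytes } from "node:crypto";

// CSP sketch: mint a fresh nonce per response, allow only scripts
// carrying that nonce, and disable inline handler attributes.
function cspHeader(): { nonce: string; header: string } {
  const nonce = randomBytes(16).toString("base64");
  const header = [
    `script-src 'nonce-${nonce}'`,
    "script-src-attr 'none'", // no onclick= etc.
    "object-src 'none'",      // assumed hardening directive
  ].join("; ");
  return { nonce, header };
}
```

The server then renders `<script nonce="...">` tags with the same value, so injected inline scripts without the nonce are blocked by the browser.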


Configuration

LLM Providers

agentmemory auto-detects from your environment. No API key needed if you have a Claude subscription.

| Provider | Config | Notes |
|---|---|---|
| Claude subscription (default) | No config needed | Uses @anthropic-ai/claude-agent-sdk |
| Anthropic API | `ANTHROPIC_API_KEY` | Per-token billing |
| MiniMax | `MINIMAX_API_KEY` | Anthropic-compatible |
| Gemini | `GEMINI_API_KEY` | Also enables embeddings |
| OpenRouter | `OPENROUTER_API_KEY` | Any model |

Environment Variables

Create .env.local in the repo root:

# LLM provider (pick one, or leave empty for Claude subscription)
# ANTHROPIC_API_KEY=sk-ant-...
# GEMINI_API_KEY=...
# GEMINI_MODEL=gemini-flash-latest
# GEMINI_EMBEDDING_MODEL=gemini-embedding-2-preview
# GEMINI_EMBEDDING_DIMENSIONS=3072
# OPENROUTER_API_KEY=...

# Embedding provider (auto-detected, or override)
# EMBEDDING_PROVIDER=local
# VOYAGE_API_KEY=...

# Search tuning
# BM25_WEIGHT=0.4
# VECTOR_WEIGHT=0.6
# TOKEN_BUDGET=2000

# Auth
# AGENTMEMORY_SECRET=your-secret

# Ports (defaults: 3111 API, 3113 viewer)
# III_REST_PORT=3111

# Features
# AGENTMEMORY_AUTO_COMPRESS=false  # OFF by default (#138). When on,
                                   # every PostToolUse hook calls your
                                   # LLM provider to compress the
                                   # observation — expect significant
                                   # token spend on active sessions.
# AGENTMEMORY_INJECT_CONTEXT=false # OFF by default (#143). When on:
                                   # - SessionStart may inject ~1-2K
                                   #   chars of project context into
                                   #   the first turn of each session
                                   #   (this is what actually reaches
                                   #   the model — Claude Code treats
                                   #   SessionStart stdout as context)
                                   # - PreToolUse fires /agentmemory/enrich
                                   #   on every file-touching tool call
                                   #   (resource cleanup, not a token
                                   #   fix — PreToolUse stdout is debug
                                   #   log only per Claude Code docs)
                                   # Observations are still captured via
                                   # PostToolUse regardless of this flag.
# GRAPH_EXTRACTION_ENABLED=true
# GRAPH_EXTRACTION_BATCH_SIZE=10
# CONSOLIDATION_ENABLED=true
# CONSOLIDATION_DECAY_DAYS=30
# LESSON_DECAY_ENABLED=true
# OBSIDIAN_AUTO_EXPORT=false
# AGENTMEMORY_EXPORT_ROOT=~/.agentmemory
# CLAUDE_MEMORY_BRIDGE=false
# SNAPSHOT_ENABLED=false

# Team
# TEAM_ID=
# USER_ID=
# TEAM_MODE=private

# Tool visibility: "core" (8 tools) or "all" (44 tools)
# AGENTMEMORY_TOOLS=core

API

104 endpoints on port 3111. The REST API binds to 127.0.0.1 by default. Protected endpoints require Authorization: Bearer <secret> when AGENTMEMORY_SECRET is set, and mesh sync endpoints require AGENTMEMORY_SECRET on both peers.
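A request to a protected endpoint can be sketched like this. The path and bearer rule come from the docs; the helper function and its shape are hypothetical:

```typescript
// Build (but don't send) a smart-search request, attaching the bearer
// token only when AGENTMEMORY_SECRET is configured.
interface HttpRequest {
  url: string;
  method: string;
  headers: Record<string, string>;
  body: string;
}

function smartSearchRequest(query: string, secret?: string): HttpRequest {
  const headers: Record<string, string> = { "Content-Type": "application/json" };
  if (secret) headers.Authorization = `Bearer ${secret}`;
  return {
    url: "http://127.0.0.1:3111/agentmemory/smart-search",
    method: "POST",
    headers,
    body: JSON.stringify({ query }),
  };
}

// Usage against a running server:
// const req = smartSearchRequest("auth", process.env.AGENTMEMORY_SECRET);
// const res = await fetch(req.url, req);
```

Building the request separately from sending it keeps the auth logic testable without a live server.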

Key endpoints

| Method | Path | Description |
|---|---|---|
| GET | /agentmemory/health | Health check (always public) |
| POST | /agentmemory/session/start | Start session + get context |
| POST | /agentmemory/session/end | End session |
| POST | /agentmemory/observe | Capture observation |
| POST | /agentmemory/smart-search | Hybrid search |
| POST | /agentmemory/context | Generate context |
| POST | /agentmemory/remember | Save to long-term memory |
| POST | /agentmemory/forget | Delete observations |
| POST | /agentmemory/enrich | File context + memories + bugs |
| GET | /agentmemory/profile | Project profile |
| GET | /agentmemory/export | Export all data |
| POST | /agentmemory/import | Import from JSON |
| POST | /agentmemory/graph/query | Knowledge graph query |
| POST | /agentmemory/team/share | Share with team |
| GET | /agentmemory/audit | Audit trail |

Full endpoint list: src/triggers/api.ts


Architecture

Built on iii-engine's three primitives — no Express, no Postgres, no Redis.

118 source files · ~21,800 LOC · 646 tests · 123 functions · 34 KV scopes

What iii-engine replaces

| Traditional stack | agentmemory uses |
|---|---|
| Express.js / Fastify | iii HTTP Triggers |
| SQLite / Postgres + pgvector | iii KV State + in-memory vector index |
| SSE / Socket.io | iii Streams (WebSocket) |
| pm2 / systemd | iii-engine worker management |
| Prometheus / Grafana | iii OTEL + health monitor |

Function Catalog

| Category | Functions | Description |
|---|---|---|
| **Memory Evolution** | `evolve`, `auto-forget`, `evict` | Version memories, TTL expiry, importance-based eviction |
| | `consolidate`, `consolidate-pipeline` | Merge duplicates, 4-tier consolidation (working→episodic→semantic→procedural) |
| | `verify`, `cascade-update` | Citation chain provenance, staleness propagation |
| **Knowledge Graph** | `graph-extract`, `graph-query`, `graph-stats` | LLM entity extraction, BFS traversal, statistics |
| | `temporal-graph-extract`, `temporal-query` | Temporal knowledge extraction + point-in-time queries |
| **Relationships** | `relate`, `get-related`, `timeline`, `profile` | Memory relations, chronological view, project profiles |
| **Claude Bridge** | `claude-bridge-read`, `claude-bridge-sync` | Bi-directional sync with MEMORY.md |
| **Actions** | `action-create`, `action-update`, `action-get`, `action-list` | Dependency-aware work items with typed edges |
| | `action-edge-create` | Create typed edges between actions (requires, unlocks, gated_by) |
| | `frontier`, `next` | Priority-ranked unblocked action queue |
| **Leases** | `lease-acquire`, `lease-release`, `lease-renew`, `lease-cleanup` | TTL-based atomic agent claims with auto-cleanup |
| **Routines** | `routine-create`, `routine-freeze`, `routine-list`, `routine-run`, `routine-status` | Frozen workflow templates instantiated into action chains |
| **Signals** | `signal-send`, `signal-read`, `signal-threads`, `signal-cleanup` | Threaded inter-agent messaging with read receipts |
| **Checkpoints** | `checkpoint-create`, `checkpoint-resolve`, `checkpoint-list`, `checkpoint-expire` | External condition gates (CI, approval, deploy) |
| **Mesh** | `mesh-register`, `mesh-sync`, `mesh-receive`, `mesh-list`, `mesh-remove` | P2P sync between agentmemory instances |
| **Sentinels** | `sentinel-create`, `sentinel-trigger`, `sentinel-check`, `sentinel-cancel`, `sentinel-list`, `sentinel-expire` | Event-driven condition watchers |
| **Sketches** | `sketch-create`, `sketch-add`, `sketch-promote`, `sketch-discard`, `sketch-list`, `sketch-gc` | Ephemeral action graphs with auto-expiry |
| **Crystals** | `crystallize`, `auto-crystallize`, `crystal-list`, `crystal-get` | LLM-powered compaction of action chains into digests |
| **Lessons** | `lesson-save`, `lesson-recall`, `lesson-list`, `lesson-strengthen`, `lesson-decay-sweep` | Confidence-scored lessons with dedup, reinforcement, and decay |
| **Facets** | `facet-tag`, `facet-untag`, `facet-query`, `facet-get`, `facet-stats`, `facet-dimensions` | Multi-dimensional tagging with AND/OR queries |
| **Diagnostics** | `diagnose`, `heal` | Self-diagnosis across 8 categories with auto-fix |
| **Flow** | `flow-compress` | LLM summarization of completed action chains |
| **Branch** | `detect-worktree`, `list-worktrees`, `branch-sessions` | Git worktree detection for shared memory |
| **Team** | `team-share`, `team-feed`, `team-profile` | Namespaced shared + private team memory |
| **Governance** | `governance-delete`, `governance-bulk`, `audit-query` | Delete with audit trail, bulk operations |
| **Snapshots** | `snapshot-create`, `snapshot-list`, `snapshot-restore` | Git-versioned memory state |
| **Export** | `obsidian-export` | Obsidian-compatible Markdown with YAML frontmatter + wikilinks |

Data Model (34 KV scopes)

| Scope | Stores |
|---|---|
| mem:sessions | Session metadata, project, timestamps |
| mem:obs:{session_id} | Compressed observations with embeddings |
| mem:summaries | End-of-session summaries |
| mem:memories | Long-term memories (versioned, with relationships) |
| mem:relations | Memory relationship graph |
| mem:profiles | Aggregated project profiles |
| mem:emb:{obs_id} | Vector embeddings |
| mem:index:bm25 | Persisted BM25 index |
| mem:metrics | Per-function metrics |
| mem:health | Health snapshots |
| mem:config | Runtime configuration overrides |
| mem:confidence | Confidence scores for memories |
| mem:claude-bridge | Claude Code MEMORY.md bridge state |
| mem:graph:nodes | Knowledge graph entities |
| mem:graph:edges | Knowledge graph relationships |
| mem:semantic | Semantic memories (consolidated facts) |
| mem:procedural | Procedural memories (extracted workflows) |
| mem:team:{id}:shared | Team shared items |
| mem:team:{id}:users:{uid} | Per-user team state |
| mem:team:{id}:profile | Aggregated team profile |
| mem:audit | Audit trail for all operations |
| mem:actions | Dependency-aware work items |
| mem:action-edges | Typed edges (requires, unlocks, gated_by, etc.) |
| mem:leases | TTL-based agent work claims |
| mem:routines | Frozen workflow templates |
| mem:routine-runs | Instantiated routine execution tracking |
| mem:signals | Inter-agent messages with threading |
| mem:checkpoints | External condition gates |
| mem:mesh | Registered P2P sync peers |
| mem:sentinels | Event-driven condition watchers |
| mem:sketches | Ephemeral action graphs |
| mem:crystals | Compacted action chain digests |
| mem:facets | Multi-dimensional tags |
| mem:lessons | Confidence-scored lessons with decay |

Operations

Health checks

The worker exposes two health endpoints:

| Endpoint | What it checks | Auth |
|---|---|---|
| GET /agentmemory/livez | Process liveness (served directly from the viewer HTTP server, no engine dependency) | Public |
| GET /agentmemory/health | Full runtime health — heap, CPU, event loop lag, connection state, function metrics | Requires `AGENTMEMORY_SECRET` when set |

Docker healthcheck uses /agentmemory/livez on port 3113. This endpoint never touches the iii-engine, so the container stays healthy even when the engine is temporarily unresponsive.

Common failure: StateKV timeouts after long engine uptime

The iii-engine's internal websocket channels can go stale after 12–24+ hours of continuous operation. When this happens:

Symptoms:

  • docker ps shows the worker as unhealthy
  • Worker logs show StateKV state::set timed out after 5000ms or StateKV temporarily unavailable
  • /agentmemory/health returns 503 or times out
  • /agentmemory/livez should still return 200 (it bypasses the engine)

Fix:

docker compose restart

This restarts both the engine and worker, clearing stale channels. Data is preserved in the iii-data volume.

Mitigation already in place:

  • Maintenance loops (consolidation, auto-forget, eviction, index verify) pause automatically when health is degraded via the maintenance gate
  • KV calls use short timeouts with cooldown periods instead of blocking on long SDK waits
  • Consolidation scans are bounded per run to avoid saturating the engine
  • Healthcheck tolerances: 10s timeout, 5 retries, 30s start period
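Given the tolerances above, the compose healthcheck likely resembles this sketch (service name and probe command are assumptions; check the bundled docker-compose.yml for the real definition):

```yaml
# Healthcheck sketch using the stated tolerances: 10s timeout,
# 5 retries, 30s start period, probing the engine-independent /livez.
services:
  agentmemory-worker:
    healthcheck:
      test: ["CMD", "curl", "-fsS", "http://localhost:3113/agentmemory/livez"]
      interval: 30s
      timeout: 10s
      retries: 5
      start_period: 30s
```

Because the probe hits `/livez` rather than `/health`, a stale iii-engine channel degrades `/health` without flipping the container to unhealthy during the retry window.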

Restarting and rebuilding

# Restart without rebuilding (clears stale engine state)
docker compose restart

# Rebuild after code changes
docker compose up -d --build

# Rebuild only the worker (faster, keeps engine running)
docker compose up -d --build agentmemory-worker

# View worker logs
docker compose logs -f agentmemory-worker

# Check container health
docker compose ps

Launch Agent (macOS)

agentmemory is registered as a Launch Agent (com.agentmemory) that starts on login. The startup script is at ~/Projects/agentmemory/start.sh. Logs go to /tmp/agentmemory.log.

# Check if running
launchctl list | grep agentmemory

# Restart via launchctl
launchctl kickstart -k gui/$(id -u)/com.agentmemory

# Or restart docker directly
cd ~/Projects/agentmemory && docker compose restart

Key tuning variables

| Variable | Default | Effect |
|---|---|---|
| CONSOLIDATION_ENABLED | true | Enable/disable the 4-tier consolidation pipeline |
| AUTO_FORGET_ENABLED | true | Enable/disable automatic memory eviction |
| LESSON_DECAY_ENABLED | true | Enable/disable lesson confidence decay |
| TOKEN_BUDGET | 8000 | Max tokens injected at session start |
| MAX_OBS_PER_SESSION | 500 | Cap on observations per session |

When diagnosing stability issues, disable AUTO_FORGET_ENABLED and CONSOLIDATION_ENABLED first to isolate whether maintenance loops are contributing to engine saturation.

Development

npm run dev               # Hot reload
npm run build             # Production build
npm test                  # 646 tests (~1.7s)
npm run test:integration  # API tests (requires running services)

Prerequisites: Node.js >= 20, iii-engine or Docker

License

This repository is distributed under Apache-2.0.

If you publish or redistribute this fork:

  • keep the LICENSE file with the source or any redistributions
  • keep the NOTICE file with the source or any redistributions from this fork
  • retain upstream copyright and attribution notices that still apply
  • clearly mark any files you modify when redistributing source form under Apache-2.0 section 4(b)

Original upstream project: rohitg00/agentmemory

About

Public fork of rohitg00/agentmemory focused on Docker-first local deployment, Codex-native lifecycle ingestion, freshness-oriented retrieval, and diagnostics.
