Persistent memory infrastructure for AI agents. Store, recall, and reason across sessions — not just conversations.
Every session starts from zero. Your AI has amnesia.
AI agents forget everything after each session. Weeks of context — gone in an instant.
Context windows are temporary — your knowledge shouldn't be. Token limits force forgetting.
Teams waste hours re-explaining context to AI tools. The same instructions, over and over.
Inspired by how human memory actually works — from instant recall to permanent storage.
Hot cache for your most recent and frequently accessed memories. Lightning-fast retrieval.
Synthesizes, connects, and reasons about your memories. Turns raw data into structured insights.
Full-text search + semantic embeddings + associative graph for multi-signal recall.
Long-term compressed storage. Searchable archive that never forgets. Content-addressable dedup.
Not just storage — a thinking memory system.
Memories connect to related concepts automatically, just like the human brain.
Multi-signal ranking combines text, semantic, and associative scores for best-of-all-worlds retrieval.
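The multi-signal ranking described above can be sketched as a weighted score fusion. This is an illustrative sketch only, not Cortex Memory's actual algorithm: the weights, function name, and candidate format are all assumptions.

```python
# Hypothetical sketch: fuse full-text, semantic, and associative
# scores (each normalized to [0, 1]) into a single ranking.
def fuse_scores(candidates, weights=(0.3, 0.5, 0.2)):
    """candidates: list of (memory_id, text_score, semantic_score, assoc_score).
    Returns memory ids sorted by fused score, best first."""
    w_text, w_sem, w_assoc = weights
    ranked = sorted(
        candidates,
        key=lambda c: w_text * c[1] + w_sem * c[2] + w_assoc * c[3],
        reverse=True,
    )
    return [c[0] for c in ranked]

results = fuse_scores([
    ("m1", 0.9, 0.2, 0.1),   # strong keyword match only: 0.39 fused
    ("m2", 0.4, 0.8, 0.6),   # semantic + graph signals: 0.64 fused
])
# → ["m2", "m1"]
```

A memory that matches on several signals at once outranks one with a single strong signal, which is the "best-of-all-worlds" behavior described above.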
Auto-detect conflicting memories. No more outdated info overriding current truth.
Shared knowledge pool across all users. The more people use it, the smarter everyone gets.
Auto-synthesized from community contributions. Verified knowledge, ready to use.
100x dedup for common knowledge. Semantic hashing means identical concepts stored once.
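Content-addressable dedup can be illustrated with a plain content hash. This sketch is hypothetical: it only collapses exact duplicates after normalization, whereas the semantic hashing described above would also merge paraphrases of the same concept.

```python
# Hypothetical sketch of content-addressable storage: hash the
# normalized content so identical memories are stored exactly once.
import hashlib

store = {}

def store_once(content: str) -> str:
    # Normalize case and whitespace, then address by SHA-256 digest.
    normalized = " ".join(content.lower().split())
    key = hashlib.sha256(normalized.encode()).hexdigest()
    store.setdefault(key, content)  # first writer wins; duplicates are no-ops
    return key

a = store_once("User prefers dark theme")
b = store_once("user  prefers dark THEME")  # normalizes to the same key
# → a == b, and the store holds a single entry
```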
Natural memory aging + promotion. Frequently accessed memories stay hot, stale ones archive.
Each agent gets its own brain. Complete isolation with optional shared knowledge layers.
Start free. Scale as your AI's memory grows.
Try persistent memory
Full memory for power users
Shared brain for your team
Dedicated infrastructure
REST API, Python SDK, MCP Server, or OpenClaw native. Your choice.
from cortexmemory import CortexMemory
# Initialize with your API key
memory = CortexMemory("hm_your_api_key")
# Store a memory
memory.store("User prefers dark theme and TypeScript")
# Recall with natural language
results = memory.recall("What does the user prefer?")
print(results[0].content)
# → "User prefers dark theme and TypeScript"
# Store a memory
curl -X POST https://cortex-memory.com/v1/store \
  -H "Authorization: Bearer hm_your_api_key" \
  -H "Content-Type: application/json" \
  -d '{"content": "User prefers dark theme"}'
# Recall memories
curl -X POST https://cortex-memory.com/v1/recall \
  -H "Authorization: Bearer hm_your_api_key" \
  -H "Content-Type: application/json" \
  -d '{"query": "user preferences", "depth": "deep"}'
{
  "mcpServers": {
    "cortex-memory": {
      "command": "npx",
      "args": ["@cortexmemory/mcp-server"],
      "env": {
        "HM_API_KEY": "hm_your_api_key"
      }
    }
  }
}
// Works with Cursor, Windsurf, Kiro, and any MCP client
# Install as native OpenClaw skill
openclaw skill install cortex-memory
# Auto-configured — just use it
# Your agent now has persistent memory across all sessions
# Store, recall, and reason — automatically
Remember codebase architecture, patterns, past bugs, and team conventions. Never re-explain your stack.
Remember customer history across sessions. Personalized support without asking the same questions twice.
Accumulate knowledge over time. Build on previous findings instead of starting from scratch.
Shared brain for your entire team. Collective intelligence that grows with every interaction.
Not all memory solutions are created equal.
| Feature | Mem0 | Zep | Letta | Cortex Memory |
|---|---|---|---|---|
| Architecture | Key-value | Session-based | Academic | 4-Layer Adaptive |
| Collective Memory | ✗ | ✗ | ✗ | ✓ |
| Associative Graph | ✗ | ✗ | ✗ | ✓ |
| Knowledge Cards | ✗ | ✗ | ✗ | ✓ |
| Content Dedup | Basic | Basic | ✗ | Semantic 100x |
| Price (Pro) | $25/mo | $99/mo | Free/complex | $19/mo |
| Self-hosted | ✗ | ✓ | ✓ | ✓ Enterprise |
Pick your platform, get your key, one command to install.
npx @cortexmemory/mcp-server --key YOUR_KEY
Your agent will auto-remember everything. No prompts needed.
{
  "mcpServers": {
    "cortex-memory": {
      "command": "npx",
      "args": ["@cortexmemory/mcp-server"],
      "env": {
        "HM_API_KEY": "YOUR_KEY"
      }
    }
  }
}
Save as ~/.cursor/mcp.json → restart Cursor. Full docs →
npx @cortexmemory/mcp-server --key YOUR_KEY
Works with GitHub Copilot agent mode.
{
  "servers": {
    "cortex-memory": {
      "command": "npx",
      "args": ["@cortexmemory/mcp-server"],
      "env": {
        "HM_API_KEY": "YOUR_KEY"
      }
    }
  }
}
Save as .vscode/mcp.json → restart. Full docs →
npx @cortexmemory/mcp-server --key YOUR_KEY
Claude will remember your conversations across sessions.
{
  "mcpServers": {
    "cortex-memory": {
      "command": "npx",
      "args": ["@cortexmemory/mcp-server"],
      "env": {
        "HM_API_KEY": "YOUR_KEY"
      }
    }
  }
}
Save as claude_desktop_config.json. Full docs →
npx @cortexmemory/mcp-server --key YOUR_KEY
Cascade will auto-store and recall context.
{
  "mcpServers": {
    "cortex-memory": {
      "command": "npx",
      "args": ["@cortexmemory/mcp-server"],
      "env": {
        "HM_API_KEY": "YOUR_KEY"
      }
    }
  }
}
Add to Windsurf MCP settings. Full docs →
openclaw skill install cortex-memory --key YOUR_KEY
Your Claw gets persistent memory across all sessions.
# In openclaw config (gateway.yaml):
mcp:
  servers:
    cortex-memory:
      command: npx
      args: ["@cortexmemory/mcp-server"]
      env:
        HM_API_KEY: "YOUR_KEY"
pip install cortexmemory
Then: from cortexmemory import CortexMemory
pip install cortexmemory
from cortexmemory import CortexMemory
mem = CortexMemory(api_key="YOUR_KEY")
mem.store("User prefers dark theme", tags=["prefs"])
results = mem.recall("user preferences")
Sync + async clients. Full SDK docs →
curl -X POST https://cortex-memory.com/v1/store \
  -H "Authorization: Bearer YOUR_KEY" \
  -H "Content-Type: application/json" \
  -d '{"content": "Hello, memory!"}'
Works from any language, any platform.
# Store
POST /v1/store {"content":"...", "tags":["..."]}
# Recall
POST /v1/recall {"query":"...", "depth":"deep"}
# Compact
POST /v1/context/compact {"content":"...", "task_context":"..."}
Base URL: https://cortex-memory.com. Full API reference →
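For languages without an SDK, a request to the endpoints above can be assembled by hand. A minimal Python sketch, using the endpoint and field names listed above; the helper function itself is illustrative, not part of any official client:

```python
# Hypothetical helper that builds a request for POST /v1/recall.
# Send the result with any HTTP client (requests, urllib, fetch, ...).
import json

BASE_URL = "https://cortex-memory.com"

def build_recall_request(api_key, query, depth="deep"):
    return {
        "url": f"{BASE_URL}/v1/recall",
        "headers": {
            "Authorization": f"Bearer {api_key}",
            "Content-Type": "application/json",
        },
        "body": json.dumps({"query": query, "depth": depth}),
    }

req = build_recall_request("hm_your_api_key", "user preferences")
```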