🔒 Closed Beta — 100 spots left

Your AI Finally Remembers

Persistent memory infrastructure for AI agents. Store, recall, and reason across sessions — not just conversations.

<1ms L1 Cache
4-Layer Architecture
100x Dedup Ratio

The Problem with AI Today

Every session starts from zero. Your AI has amnesia.

🧠

Total Amnesia

AI agents forget everything after each session. Weeks of context — gone in an instant.

Temporary Windows

Context windows are temporary — your knowledge shouldn't be. Token limits force forgetting.

🔄

Endless Re-explaining

Teams waste hours re-explaining context to AI tools. The same instructions, over and over.

4-Layer Memory Architecture

Inspired by how human memory actually works — from instant recall to permanent storage.

L1

Instant Cache

<1ms

Hot cache for your most recent and frequently accessed memories. Lightning-fast retrieval.

In-Memory Store
L2

Deep Understanding

<500ms

Synthesizes, connects, and reasons about your memories. Turns raw data into structured insights.

Reasoning Engine
L3

Deep Storage

<50ms

Full-text search + semantic embeddings + associative graph for multi-signal recall.

Full-Text · Semantic · Associative
L4

Permanent Archive

Compressed

Long-term compressed storage. Searchable archive that never forgets. Content-addressable dedup.

Compressed · Searchable · Deduplicated

Built for Intelligence

Not just storage — a thinking memory system.

🕸️

Associative Graph

Memories connect to related concepts automatically, just like the human brain.
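The associative-recall idea can be sketched as traversal over a graph of linked memories. The graph below is invented for illustration; in the product these links are built automatically:

```python
# Toy associative graph: recalling one memory surfaces linked neighbors.
# The memories and links here are illustrative, not real product data.
graph = {
    "dark theme": {"TypeScript", "editor settings"},
    "TypeScript": {"dark theme", "strict mode"},
    "editor settings": {"dark theme"},
    "strict mode": {"TypeScript"},
}

def associative_recall(seed: str, hops: int = 1) -> set:
    """Return the seed memory plus everything reachable within `hops` links."""
    found = {seed}
    frontier = {seed}
    for _ in range(hops):
        # Expand one hop outward, skipping memories already collected.
        frontier = {n for node in frontier for n in graph.get(node, ())} - found
        found |= frontier
    return found

print(sorted(associative_recall("dark theme")))
# → ['TypeScript', 'dark theme', 'editor settings']
```

More hops widen the recall: two hops from "dark theme" also reach "strict mode" via "TypeScript".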

⚖️

Score Fusion

Multi-signal ranking combines text, semantic, and associative scores for best-of-all-worlds retrieval.
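A minimal sketch of what score fusion looks like, assuming normalized per-signal scores and illustrative weights (not Cortex Memory's actual tuning):

```python
# Illustrative multi-signal score fusion: weighted sum of normalized
# per-signal scores. The weights are assumptions for the sketch.
def fuse_scores(text_score: float, semantic_score: float, assoc_score: float,
                weights: tuple = (0.3, 0.5, 0.2)) -> float:
    """Combine text, semantic, and associative scores into one ranking score."""
    w_text, w_sem, w_assoc = weights
    return w_text * text_score + w_sem * semantic_score + w_assoc * assoc_score

# A memory that matches weakly on exact text but strongly on meaning
# still ranks high overall.
score = fuse_scores(text_score=0.2, semantic_score=0.9, assoc_score=0.5)
print(round(score, 2))  # → 0.61
```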

Contradiction Detection

Auto-detect conflicting memories. No more outdated info overriding current truth.

🌐

Collective Memory

Shared knowledge pool across all users. The more people use it, the smarter everyone gets.

🃏

Knowledge Cards

Auto-synthesized from community contributions. Verified knowledge, ready to use.

🗜️

Content-Addressable Storage

100x dedup for common knowledge. Semantic hashing means identical concepts stored once.
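The exact-match half of this idea can be sketched with an ordinary content hash; semantic hashing, which also collapses paraphrases of the same concept, is more involved and not shown:

```python
import hashlib

# Minimal content-addressable store: identical content hashes to the same
# key, so it is physically stored once. This sketch dedups exact matches
# only; semantic hashing would also collapse paraphrases.
class ContentStore:
    def __init__(self):
        self._blobs = {}

    def put(self, content: str) -> str:
        """Store content under its hash; return the key. No-op if present."""
        key = hashlib.sha256(content.encode()).hexdigest()
        self._blobs.setdefault(key, content)
        return key

store = ContentStore()
k1 = store.put("Python uses indentation for blocks")
k2 = store.put("Python uses indentation for blocks")
print(k1 == k2, len(store._blobs))  # → True 1
```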

📉

Ebbinghaus Decay

Natural memory aging + promotion. Frequently accessed memories stay hot, stale ones archive.
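The aging-and-promotion idea follows the classic Ebbinghaus retention curve, R = exp(−t/S), where t is time since last access and S is memory strength. A sketch with illustrative thresholds (not Cortex Memory's actual tuning):

```python
import math

# Ebbinghaus-style retention: R = exp(-t / S). The tier thresholds below
# are assumptions for illustration, not the product's real cutoffs.
def retention(hours_since_access: float, strength: float) -> float:
    return math.exp(-hours_since_access / strength)

def tier_for(memory: dict) -> str:
    """Route a memory to a layer based on its current retention score."""
    r = retention(memory["hours_since_access"], memory["strength"])
    if r > 0.5:
        return "hot (L1)"
    if r > 0.1:
        return "warm (L3)"
    return "archive (L4)"

fresh = {"hours_since_access": 1, "strength": 24}
stale = {"hours_since_access": 720, "strength": 24}
print(tier_for(fresh))  # → hot (L1)
print(tier_for(stale))  # → archive (L4)
```

Each access would bump `strength`, keeping frequently used memories hot while untouched ones drift toward the archive.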

🏢

Multi-Tenant

Each agent gets its own brain. Complete isolation with optional shared knowledge layers.

Simple, Transparent Pricing

Start free. Scale as your AI's memory grows.

Monthly / Annual (save 20%)
Perfect Recall — Deep reasoning about your memory, remembers every detail
Smart Search — Fast semantic search across all your memories
Deep Archive — Compressed long-term storage, always searchable

Free

$0/mo

Try persistent memory

  • 500 memories
  • Perfect Recall: 1 day
  • Smart Search: 7 days
  • Deep Archive:
  • Collective knowledge access
  • Export
Start Free

Team

$49/seat/mo

Shared brain for your team

  • 500K memories
  • Perfect Recall: 20 days
  • Smart Search: 6 months
  • Deep Archive: 2 years
  • Team shared brain
  • Team knowledge pool
  • Export & Priority support
Start Team Trial

Enterprise

Contact Us

Dedicated infrastructure

  • Unlimited memories
  • Perfect Recall: custom
  • Smart Search: unlimited
  • Deep Archive: unlimited
  • Self-hosted option
  • SSO & SLA & compliance
  • Dedicated support
Contact Sales

Integrate in 3 Lines

REST API, Python SDK, MCP Server, or OpenClaw native. Your choice.

Python SDK
REST API
MCP Server
OpenClaw
pip install cortexmemory

from cortexmemory import CortexMemory

# Initialize with your API key
memory = CortexMemory("hm_your_api_key")

# Store a memory
memory.store("User prefers dark theme and TypeScript")

# Recall with natural language
results = memory.recall("What does the user prefer?")
print(results[0].content)
# → "User prefers dark theme and TypeScript"
REST API (cURL)
# Store a memory
curl -X POST https://cortex-memory.com/v1/store \
  -H "Authorization: Bearer hm_your_api_key" \
  -H "Content-Type: application/json" \
  -d '{"content": "User prefers dark theme"}'

# Recall memories
curl -X POST https://cortex-memory.com/v1/recall \
  -H "Authorization: Bearer hm_your_api_key" \
  -H "Content-Type: application/json" \
  -d '{"query": "user preferences", "depth": "deep"}'
MCP Server Config (JSON)
{
  "mcpServers": {
    "cortex-memory": {
      "command": "npx",
      "args": ["@cortexmemory/mcp-server"],
      "env": {
        "HM_API_KEY": "hm_your_api_key"
      }
    }
  }
}
// Works with Cursor, Windsurf, Kiro, and any MCP client
OpenClaw Skill (YAML)
# Install as native OpenClaw skill
openclaw skill install cortex-memory

# Auto-configured — just use it
# Your agent now has persistent memory across all sessions
# Store, recall, and reason — automatically

Built for Every AI Use Case

💻

AI Coding Agents

Remember codebase architecture, patterns, past bugs, and team conventions. Never re-explain your stack.

🎧

Customer Support Bots

Remember customer history across sessions. Personalized support without asking the same questions twice.

🔬

Research Assistants

Accumulate knowledge over time. Build on previous findings instead of starting from scratch.

👥

Team Knowledge Base

Shared brain for your entire team. Collective intelligence that grows with every interaction.

How We Compare

Not all memory solutions are created equal.

Feature             Mem0         Zep            Letta          Cortex Memory
Architecture        Key-value    Session-based  Academic       4-Layer Adaptive
Collective Memory   —            —              —              ✓
Associative Graph   —            —              —              ✓
Knowledge Cards     —            —              —              ✓
Content Dedup       Basic        Basic          Semantic       100x
Price (Pro)         $25/mo       $99/mo         Free/complex   $19/mo
Self-hosted         —            —              ✓              Enterprise

Get Started

Pick your platform, get your key, one command to install.

1 Get your API key
2 Pick your platform
3 Install
# Add to ~/.cursor/mcp.json, restart Cursor. Done.
npx @cortexmemory/mcp-server --key YOUR_KEY

Your agent will auto-remember everything. No prompts needed.

{
  "mcpServers": {
    "cortex-memory": {
      "command": "npx",
      "args": ["@cortexmemory/mcp-server"],
      "env": {
        "HM_API_KEY": "YOUR_KEY"
      }
    }
  }
}

Save as ~/.cursor/mcp.json → restart Cursor. Full docs →

# Add to .vscode/mcp.json, restart VS Code. Done.
npx @cortexmemory/mcp-server --key YOUR_KEY

Works with GitHub Copilot agent mode.

{
  "servers": {
    "cortex-memory": {
      "command": "npx",
      "args": ["@cortexmemory/mcp-server"],
      "env": {
        "HM_API_KEY": "YOUR_KEY"
      }
    }
  }
}

Save as .vscode/mcp.json → restart. Full docs →

# Add to claude_desktop_config.json, restart Claude. Done.
npx @cortexmemory/mcp-server --key YOUR_KEY

Claude will remember your conversations forever.

{
  "mcpServers": {
    "cortex-memory": {
      "command": "npx",
      "args": ["@cortexmemory/mcp-server"],
      "env": {
        "HM_API_KEY": "YOUR_KEY"
      }
    }
  }
}

Save as claude_desktop_config.json. Full docs →

# Add to Windsurf MCP config, restart. Done.
npx @cortexmemory/mcp-server --key YOUR_KEY

Cascade will auto-store and recall context.

{
  "mcpServers": {
    "cortex-memory": {
      "command": "npx",
      "args": ["@cortexmemory/mcp-server"],
      "env": {
        "HM_API_KEY": "YOUR_KEY"
      }
    }
  }
}

Add to Windsurf MCP settings. Full docs →

openclaw skill install cortex-memory --key YOUR_KEY

Your Claw gets persistent memory across all sessions.

# In openclaw config (gateway.yaml):
mcp:
  servers:
    cortex-memory:
      command: npx
      args: ["@cortexmemory/mcp-server"]
      env:
        HM_API_KEY: "YOUR_KEY"

Full docs →

pip install cortexmemory

Then: from cortexmemory import CortexMemory

pip install cortexmemory

from cortexmemory import CortexMemory
mem = CortexMemory(api_key="YOUR_KEY")
mem.store("User prefers dark theme", tags=["prefs"])
results = mem.recall("user preferences")

Sync + async clients. Full SDK docs →

curl -X POST https://cortex-memory.com/v1/store \
  -H "Authorization: Bearer YOUR_KEY" \
  -H "Content-Type: application/json" \
  -d '{"content": "Hello, memory!"}'

Works from any language, any platform.

# Store
POST /v1/store    {"content":"...", "tags":["..."]}

# Recall
POST /v1/recall   {"query":"...", "depth":"deep"}

# Compact
POST /v1/context/compact  {"content":"...", "task_context":"..."}

Base URL: https://cortex-memory.com. Full API reference →
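The same endpoints can be called from Python's standard library with no SDK. The URL and request schema are taken from the examples above; the helper names are illustrative:

```python
import json
import urllib.request

API_URL = "https://cortex-memory.com/v1/recall"

def build_recall_payload(query: str, depth: str = "deep") -> bytes:
    """JSON body matching the /v1/recall schema shown above."""
    return json.dumps({"query": query, "depth": depth}).encode()

def recall(query: str, api_key: str) -> dict:
    """POST /v1/recall using only the standard library (no third-party deps)."""
    req = urllib.request.Request(
        API_URL,
        data=build_recall_payload(query),
        headers={
            "Authorization": f"Bearer {api_key}",
            "Content-Type": "application/json",
        },
    )
    with urllib.request.urlopen(req) as resp:
        return json.load(resp)
```

Usage: `recall("user preferences", api_key="YOUR_KEY")` returns the decoded JSON response.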