Beta — Free while in beta · Open Source · MCP-Native

Shared Memory for AI Agents

Persistent team knowledge that follows your agents across sessions. Every agent starts with the full context of everything your team has learned.

Claude Code memory is local. Cursor has none. Your agents forget everything between sessions.

3 LINES TO CONNECT

{
  "mcpServers": {
    "memory": {
      "command": "npx",
      "args": ["-y", "@buildd/memory-plugin"],
      "env": {
        "BUILDD_MEMORY_API_KEY": "bld_mem_..."
      }
    }
  }
}

See the full setup guide for advanced configuration, self-hosting, and API usage.

HOW IT WORKS

1

Connect via MCP

Add the MCP config to your project. Works with Claude Code, Cursor, Windsurf — any MCP-compatible agent.

2

Agents save discoveries

As agents work, they record gotchas, patterns, and architecture decisions. Memories are typed, tagged, and searchable.

3

Every session starts informed

New sessions automatically load relevant memories. Your 10th agent avoids the mistakes of your first.
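Once connected, recording a discovery is just an MCP tool call. A minimal sketch of what that call might look like from the agent's side (the tool name `save_memory` and its field names are illustrative assumptions, not the plugin's actual API — see the setup guide for the real schema):

```typescript
// Hypothetical shape of a memory an agent records via an MCP tool call.
// Tool and field names are assumptions for illustration.
interface MemoryEntry {
  type: "gotcha" | "architecture" | "pattern" | "decision" | "discovery" | "summary";
  content: string;
  tags: string[];
}

// An MCP tool call is a named tool plus JSON arguments.
function buildToolCall(entry: MemoryEntry) {
  return { tool: "save_memory", arguments: entry };
}

const call = buildToolCall({
  type: "gotcha",
  content: "Don't use db.transaction() with the neon-http driver.",
  tags: ["database", "neon"],
});
// The agent sends `call` over its MCP connection; the server persists it
// to the workspace's shared memory pool.
```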

Structured knowledge, not raw text

Each memory has a type, tags, and relevance scoring.

Gotcha

Traps and footguns that waste time. "Don't use db.transaction() with neon-http driver."

Architecture

How the system is structured. "Auth uses dual model: API keys for pay-per-token, OAuth for seat-based."

Pattern

Recurring solutions. "All API routes validate auth with getAuthContext() first."

Decision

Why things are the way they are. "Chose Pusher over SSE for real-time because of Vercel cold starts."

Discovery

Found during investigation. "The proxy.ts file handles all subdomain routing — don't add middleware."

Summary

Condensed understanding. "The worker claim flow: POST /claim → optimistic lock → return task."
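Because each memory carries a type, tags, and a relevance score, a new session can load just the entries that matter. A hedged sketch of what that filtering could look like (the `StoredMemory` fields and `topMemories` helper are hypothetical, not the plugin's actual retrieval API):

```typescript
// Illustrative only: how typed, tagged memories with relevance scores
// might be filtered when a new session loads context.
interface StoredMemory {
  type: string;
  content: string;
  tags: string[];
  relevance: number; // assumed 0..1; higher = more relevant
}

// Return the highest-relevance memories matching a tag.
function topMemories(memories: StoredMemory[], tag: string, limit = 3): StoredMemory[] {
  return memories
    .filter((m) => m.tags.includes(tag))
    .sort((a, b) => b.relevance - a.relevance)
    .slice(0, limit);
}

const pool: StoredMemory[] = [
  { type: "gotcha", content: "Don't use db.transaction() with neon-http.", tags: ["db"], relevance: 0.9 },
  { type: "decision", content: "Chose Pusher over SSE (Vercel cold starts).", tags: ["realtime"], relevance: 0.7 },
  { type: "pattern", content: "Validate auth with getAuthContext() first.", tags: ["db", "auth"], relevance: 0.6 },
];

const loaded = topMemories(pool, "db");
// loaded holds the two "db"-tagged memories, highest relevance first
```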

FAQ

What's a workspace?

A workspace maps to a project or repo. Each workspace has its own memories, API keys, and usage limits. Agents in one workspace can't see memories from another.

Do agents count as users?

No. You pay for knowledge capacity, not seats. Connect as many agents as you want — each gets an API key, and they all share the workspace's memory pool.

Can I self-host?

Yes. Memory is fully open source. Run it on your own infrastructure with your own Postgres database. The hosted version is the easiest way to get started.

Do I need a separate API key for the AI?

No. Memory doesn't call AI models — your agents do, on your existing Anthropic API key or Claude subscription. Memory just stores and retrieves knowledge, so there's no extra AI cost.

Memory works best with Buildd Missions

Dispatch missions to agents, let them plan and execute autonomously, and watch knowledge compound across every run. Memory provides the context — Missions provide the coordination.

Explore Missions

Stop losing knowledge between sessions

Connect Memory in 3 lines. Your agents will thank you.