Complete reference for all 62 MCP tools. Memory Spine is ChaozCode's proprietary persistent memory system that enables AI to remember across sessions, build knowledge graphs, analyze codebases, and maintain intelligent context. This documentation covers every tool with parameters, examples, and best practices.
Memory Spine stores memories as vector embeddings, enabling semantic search and intelligent context retrieval. All ChaozCode AI interactions automatically use Memory Spine for context persistence.
Key Capabilities
Persistent Storage: Memories survive across sessions indefinitely
Semantic Search: Find relevant memories using natural language queries
Knowledge Graphs: Build and query relationships between memories
Auto-tagging: AI-powered categorization and organization
Version Control: Track changes and revert to previous versions
Codebase Analysis: Deep code understanding with context generation
Conversation Tracking: Maintain conversation state across sessions
Agent Handoff: Transfer context between AI agents seamlessly
Connection Details
# API Endpoints
HTTP API: http://127.0.0.1:8788
MCP Server: http://127.0.0.1:8789
# Public API (via nginx proxy)
https://chaozcode.com/api/v1/memory/*
# Health Check
curl https://chaozcode.com/api/v1/memory/health
# Stats
curl https://chaozcode.com/api/v1/memory/stats
Authentication
All API requests require authentication via Bearer token:
Authorization: Bearer YOUR_API_KEY
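As a quick sketch, the header above can be attached to any of the endpoints listed under Connection Details. The request below is built but not sent; `YOUR_API_KEY` is the same placeholder used above.

```python
import urllib.request

# Placeholder key, as in the docs above.
API_KEY = "YOUR_API_KEY"

def make_request(url: str, api_key: str) -> urllib.request.Request:
    """Build a request carrying the Bearer token Memory Spine expects."""
    return urllib.request.Request(
        url,
        headers={"Authorization": f"Bearer {api_key}"},
    )

# Health-check endpoint from the Connection Details section.
req = make_request("https://chaozcode.com/api/v1/memory/health", API_KEY)
```

Opening `req` with `urllib.request.urlopen` would then perform the authenticated call.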
Core Memory Operations (6 tools)
Essential tools for storing, retrieving, searching, and managing memories. These are the most frequently used tools.
memory_store
Core
Store a new memory with content, tags, and metadata. This is the primary way to persist information. Memories are automatically vectorized for semantic search.
Parameters:
id (required) - Unique identifier for the memory
content (required) - The memory content (text, JSON, or markdown)
tags - Array of tags for categorization
importance - Priority level: "low", "medium", "high", "critical"
metadata - Additional key-value pairs
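A minimal sketch of assembling a memory_store payload from the parameters above. The wire format shown (a flat JSON object) is an assumption for illustration, not a confirmed schema.

```python
import json

def build_store_payload(memory_id, content, tags=None, importance=None, metadata=None):
    """Assemble a memory_store payload; optional fields are omitted when unset."""
    payload = {"id": memory_id, "content": content}
    if tags is not None:
        payload["tags"] = tags
    if importance is not None:
        payload["importance"] = importance
    if metadata is not None:
        payload["metadata"] = metadata
    return payload

payload = build_store_payload(
    "proj-auth-001",  # hypothetical memory ID
    "Auth service uses JWT with 15-minute access tokens.",
    tags=["auth", "architecture"],
    importance="high",
)
body = json.dumps(payload)  # serialized request body
```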
memory_search
Core
Search memories using semantic similarity. Returns memories ranked by relevance to the query. Supports filtering by tags and minimum score threshold.
Parameters:
query (required) - Natural language search query
limit - Maximum results to return (default: 10, max: 100)
tags - Filter by specific tags
min_score - Minimum similarity score (0.0-1.0)
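The min_score threshold drops results below a similarity cutoff. The snippet below sketches that behavior client-side; the result shape (objects with `id` and `score`) is illustrative.

```python
def filter_by_score(results, min_score=0.0):
    """Keep only results at or above the similarity threshold."""
    return [r for r in results if r["score"] >= min_score]

# Hypothetical search results, already ranked by relevance.
results = [
    {"id": "m1", "score": 0.92},
    {"id": "m2", "score": 0.55},
    {"id": "m3", "score": 0.31},
]
top = filter_by_score(results, min_score=0.5)  # m3 falls below the cutoff
```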
memory_retrieve
Core
Get a specific memory by its ID. Returns the full memory object including content, metadata, and version history.
Parameters:
id (required) - The memory ID to retrieve
memory_recent
Core
Get the most recently created or updated memories. Useful for quick context checks and session awareness.
Parameters:
count - Number of memories to return (default: 10)
tags - Filter by specific tags
memory_update
Core
Update an existing memory's content or metadata. Creates a new version while preserving history.
Parameters:
id (required) - The memory ID to update
content - New content (optional)
tags - New tags (optional)
importance - New importance level (optional)
memory_delete
Core
Delete a memory by ID. This is a soft delete - the memory can be recovered within 30 days.
Parameters:
id (required) - The memory ID to delete
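The 30-day recovery window can be illustrated with a simple local check; this is a sketch of the semantics described above, not the server's actual logic.

```python
from datetime import datetime, timedelta, timezone

# Soft-deleted memories stay recoverable for 30 days (per the docs above).
RECOVERY_WINDOW = timedelta(days=30)

def is_recoverable(deleted_at: datetime, now: datetime) -> bool:
    """True while the memory is still inside the recovery window."""
    return now - deleted_at <= RECOVERY_WINDOW

deleted = datetime(2024, 1, 1, tzinfo=timezone.utc)
```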
Context & Intelligence (6 tools)
Tools for building context windows, summarization, and intelligent analysis. Essential for AI-powered workflows.
llm_context_window
Context
CRITICAL: Use at the START of every task. Builds an optimized context window for LLM queries by combining relevant memories, pinned context, and recent activity.
Parameters:
query (required) - The user's question or task description
max_tokens - Maximum context size (default: 4000)
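The core idea of a bounded context window can be sketched as greedy packing: take the highest-ranked memories until the token budget is spent. Counting tokens by whitespace split is a simplification; the real tool's tokenization and ranking are not specified here.

```python
def build_context(memories, max_tokens=4000):
    """Greedily pack pre-ranked memory texts into a token budget."""
    context, used = [], 0
    for text in memories:  # assumed already ranked by relevance
        cost = len(text.split())  # crude stand-in for a tokenizer
        if used + cost > max_tokens:
            break
        context.append(text)
        used += cost
    return "\n".join(context)
```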
memory_context
Context
Build task-specific context from relevant memories. Similar to llm_context_window but more focused on specific memory retrieval.
Parameters:
query (required) - Context query
max_tokens - Maximum tokens (default: 2000)
memory_summarize
Context
Generate a concise summary from multiple memories matching a query. Useful for condensing large amounts of information.
Parameters:
query (required) - Topic to summarize
limit - Number of memories to include (default: 20)
memory_priority_score
Context
Calculate priority scores for memories based on relevance, recency, and importance. Helps rank memories for context building.
Parameters:
query (required) - Query for scoring
limit - Number to score (default: 10)
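A toy version of combining the three factors named above (relevance, recency, importance). The weights and decay curve here are illustrative assumptions, not Memory Spine's actual scoring model.

```python
import math

# Map the documented importance levels to numeric weights (assumed values).
IMPORTANCE_WEIGHT = {"low": 0.25, "medium": 0.5, "high": 0.75, "critical": 1.0}

def priority_score(relevance: float, age_days: float, importance: str) -> float:
    """Blend relevance, recency decay, and importance into one score in [0, 1]."""
    recency = math.exp(-age_days / 30)  # decays over roughly a month
    return 0.5 * relevance + 0.3 * recency + 0.2 * IMPORTANCE_WEIGHT[importance]
```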
memory_insights
Context
Extract patterns, trends, and insights from memory data. Analyzes memory usage, common topics, and relationships.
Parameters:
limit - Number of memories to analyze (default: 100)
memory_analytics
Context
Get detailed analytics on memory usage including storage statistics, access patterns, and performance metrics.
Parameters: None required
Pinned Context (3 tools)
Critical information that should always be available. Pinned memories are included in every context window automatically.
memory_pin
Pins
Pin critical information for persistent, always-available access. Pinned content is automatically included in context windows.
Parameters:
key (required) - Unique key for the pin
content (required) - Content to pin
memory_get_pin
Pins
Retrieve pinned information by key. Returns null if the pin doesn't exist.
Parameters:
key (required) - The pin key to retrieve
memory_unpin
Pins
Remove a pinned memory. The content is not deleted, just unpinned.
Parameters:
key (required) - The pin key to remove
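The three pin operations above amount to key/value semantics, sketched here with an in-memory dict. The real tools persist pins server-side; this only illustrates the behavior (get returns None for a missing pin, unpin removes the key without deleting content elsewhere).

```python
pins: dict[str, str] = {}

def memory_pin(key: str, content: str) -> None:
    pins[key] = content

def memory_get_pin(key: str):
    return pins.get(key)  # None if the pin doesn't exist

def memory_unpin(key: str) -> None:
    pins.pop(key, None)  # content is unpinned, not deleted

memory_pin("coding-style", "Use 4-space indentation; type-hint public APIs.")
```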
Knowledge Graphs (3 tools)
Build and query relationships between memories. Knowledge graphs enable complex reasoning and relationship discovery.
knowledge_graph_build
Graph
Build a knowledge graph from existing memories. Automatically discovers relationships based on semantic similarity and explicit links.
Parameters:
limit - Number of memories to include (default: 100)
knowledge_graph_query
Graph
Query the knowledge graph for relationships. Supports finding neighbors, paths, and clusters.
Create an explicit relationship between two memories. Useful for manual knowledge graph construction.
Parameters:
source_id (required) - Source memory ID
target_id (required) - Target memory ID
relation - Relationship type (e.g., "related_to", "depends_on", "contradicts")
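Explicit relationships like the ones above can be pictured as a directed edge list, which makes neighbor queries simple lookups. The memory IDs and relation types below are hypothetical examples.

```python
from collections import defaultdict

# source_id -> list of (target_id, relation) edges
edges: dict = defaultdict(list)

def relate(source_id: str, target_id: str, relation: str = "related_to") -> None:
    """Record a directed, typed relationship between two memories."""
    edges[source_id].append((target_id, relation))

def neighbors(memory_id: str) -> list:
    """Return the IDs this memory points to."""
    return [target for target, _ in edges[memory_id]]

relate("auth-design", "jwt-notes", relation="depends_on")
relate("auth-design", "session-bug", relation="related_to")
```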
Codebase Analysis (6 tools)
Deep code understanding and context generation. Use before making code changes.
codebase_analyze
Codebase
Perform deep analysis of a codebase. Returns structure, dependencies, patterns, and quality metrics.
Parameters:
path (required) - Path to analyze
include_patterns - File patterns to include (e.g., ["*.py", "*.js"])
exclude_patterns - File patterns to exclude
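A sketch of how include/exclude glob patterns might select files for analysis, using standard-library `fnmatch`. Matching patterns against basenames (rather than full paths) is an assumption made for illustration.

```python
from fnmatch import fnmatch

def select_files(paths, include=("*",), exclude=()):
    """Keep paths whose basename matches an include pattern and no exclude pattern."""
    keep = []
    for path in paths:
        name = path.rsplit("/", 1)[-1]  # basename
        if any(fnmatch(name, pat) for pat in include) and \
           not any(fnmatch(name, pat) for pat in exclude):
            keep.append(path)
    return keep

files = ["src/app.py", "src/app.js", "tests/test_app.py", "README.md"]
picked = select_files(files, include=["*.py", "*.js"], exclude=["test_*"])
```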
codebase_context
Codebase
Use before ANY code edit. Generates task-specific context from a codebase including relevant files, functions, and dependencies.
Parameters:
path (required) - Codebase path
task (required) - Description of the task
max_tokens - Maximum context size (default: 5000)
codebase_suggest
Codebase
Get AI-powered improvement suggestions for a codebase. Identifies refactoring opportunities, bugs, and best practice violations.
Parameters:
path (required) - Codebase path
focus_area - Specific area to focus on (e.g., "tests", "security", "performance")
codebase_generate
Codebase
Generate new code based on codebase context and task description. Follows existing patterns and conventions.
Parameters:
path (required) - Codebase path
task (required) - What to generate (e.g., "test file for auth.py")
codebase_symbols
Codebase
Find functions, classes, and other symbols in a codebase. Useful for understanding code structure.