Fix: Updated docs; added agents.md instruction in agents guide

This commit is contained in:
Manik 2025-10-22 17:01:28 +05:30 committed by Harshith Mullapudi
parent 3a10ee53e8
commit af56d7016e
8 changed files with 749 additions and 11 deletions

View File

@ -57,7 +57,8 @@
{
"group": "CLI",
"pages": [
"providers/claude-code"
"providers/claude-code",
"providers/codex"
]
},
{

View File

@ -3,6 +3,83 @@ title: "Changelog"
description: "Product updates and announcements"
---
<Update label="October 2025" description="v0.1.24 - v0.1.25">
## 🎯 New Features
**Deep Search**
- Advanced search capability for Browser Extension and Obsidian
- Surface insights from your memory with greater precision and context
- Connect related information across different sources more effectively
**Account Management 2.0**
- One-click full account deletion with complete data cleanup
- Automatic removal of all data from both **PostgreSQL** and **Neo4j** graph databases
- Peace of mind with complete data control and privacy management
**Enhanced Onboarding**
- New guided flow for faster setup and first memory ingestion
- Direct integration setup via **MCP configuration links**
**Session Compaction for Smarter Memory**
- Automatically summarizes long conversations for efficient memory storage
- Compacted sessions now appear in search with Markdown formatting
- Improves long-term recall without losing important context
**AWS Bedrock Support**
- Connect your own AWS Bedrock account for AI model access
- Choose from Claude, Titan, and other AWS models
- Greater flexibility in model selection and deployment options
## ⚡ Performance & Reliability
**Faster, more stable experience**
- **Improved Search Quality**: Structured, faster, and more relevant results
- **Optimized Graph Performance**: Reduced iterations for quicker retrieval
- **Better Memory Recall**: Session compaction models improve long-term context retention
- **Streamlined Credit Management**: Proper error handling when credits are exhausted
## 🔧 Improvements
- **Spaces**:
- Option to remove episodes from spaces for better organization
- Removed restrictive space description requirements
- Queue-based space assignment for improved reliability
- **MCP Tooling**:
- Clear error messages when credits run low
- Improved tool descriptions for better AI assistant understanding
- Resolved profile summary edge cases affecting MCP connections
## 🐛 Fixes
- Fixed API key deletion not working properly
- Resolved document view breaking in log viewer
- Fixed semantic search inconsistencies affecting result quality
- Resolved login attribute conflicts in authentication flow
- Fixed graph visualization issues in Chrome 140
- Corrected ingestion queue handling for deleted episodes
- Fixed MCP tool call failures for `get_user_profile`
- Resolved space description validation blocking space creation
## 🔒 Security & Privacy
**Data protection updates**
- **Complete Account Wipe**: Account deletion now removes all traces from both relational and graph databases
- **Cascade Delete Logic**: Simplified deletion flows with proper relationship cleanup for users and workspaces
- **Neo4j Graph Cleanup**: Automated cleanup of knowledge graph nodes when deleting accounts
- **Proper Resource Cleanup**: Removes all associated API keys, spaces, and episodes
</Update>
<Update label="August 2025" description="v0.1.13 - v0.1.18">
## 🎯 New Features

View File

@ -290,7 +290,18 @@ Configure Claude Code to automatically search and store memories for seamless pr
}
```
### Troubleshooting
## How It Works
Once installed, the plugin works automatically:
- **At session start**: Memory search agent retrieves relevant context from your CORE memory
- **During conversation**: Claude has access to your full memory graph and codebase knowledge
- **After interaction**: Memory ingest agent stores the conversation summary
- **Across tools**: Your memory is shared across Claude Code, Cursor, ChatGPT, and other CORE-connected tools
You don't need to manually trigger memory operations; the plugin handles everything!
## Troubleshooting
**Connection Issues:**

193
docs/providers/codex.mdx Normal file
View File

@ -0,0 +1,193 @@
---
title: "Codex CLI"
description: "Connect your Codex CLI to CORE's memory system"
---
### Prerequisites
- [Codex CLI](https://codex.so) installed
- CORE account - [Sign up at core.heysol.ai](https://core.heysol.ai)
### Step 1: Configure CORE MCP Server
Create or open your Codex configuration file at `~/.codex/config.toml`:
```bash
# Create config directory if needed
mkdir -p ~/.codex
# Open config file in your editor
code ~/.codex/config.toml -r
```
### Step 2: Add CORE MCP Configuration
Add the following to your `config.toml` file:
```toml
[mcp_servers.corememory]
command = "npx"
args = ["-y", "mcp-remote", "https://core.heysol.ai/api/v1/mcp?source=codex", "--header", "Authorization:${AUTH_HEADER}"]
env = { "AUTH_HEADER" = "Bearer YOUR_API_KEY_HERE" }
```
**What this does:** Registers CORE's MCP server with Codex and establishes the connection endpoint for memory operations, authenticated with your Bearer token.
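If you'd like to sanity-check the connection details before starting Codex, you can run the same proxy command Codex will launch from your own shell. This is only an illustration; the exact log output depends on your `mcp-remote` version, and the key below is a placeholder.
```bash
# Export the header value so the shell can substitute it, then start the proxy
# with the same arguments Codex passes from config.toml.
export AUTH_HEADER="Bearer YOUR_API_KEY_HERE"
npx -y mcp-remote "https://core.heysol.ai/api/v1/mcp?source=codex" \
  --header "Authorization:${AUTH_HEADER}"
```
If the key is valid, `mcp-remote` should connect without authorization errors; press `Ctrl+C` to stop it and continue with the steps below.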
### Step 3: Get Your API Key
1. Log into your CORE dashboard at [core.heysol.ai](https://core.heysol.ai)
2. Navigate to **Settings** (bottom left)
![CORE Settings](/images/core-settings.png)
3. Go to **API Key** → **Generate new key** → Name it "codex"
![Create API Key](/images/create-api-key.png)
4. Copy the generated key and replace `YOUR_API_KEY_HERE` in your config file (see the example below)
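After step 4, the entry in `~/.codex/config.toml` should look like this sketch; the key shown is a placeholder for the one you just generated.
```toml
[mcp_servers.corememory]
command = "npx"
args = ["-y", "mcp-remote", "https://core.heysol.ai/api/v1/mcp?source=codex", "--header", "Authorization:${AUTH_HEADER}"]
# Only this value changes; keep the literal ${AUTH_HEADER} reference in args above.
env = { "AUTH_HEADER" = "Bearer <paste-your-generated-key-here>" }
```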
### Step 4: Verify Connection
Start Codex CLI and test your setup:
1. **Start Codex CLI** - The CORE memory MCP server should now be available
2. **Test memory storage**: Ask Codex to create a comprehensive summary of your codebase and add it to CORE memory for future reference
3. **Test memory retrieval**: Ask Codex to search your memory for the stored summary (example prompts below)
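For example, prompts along these lines work well (the wording is up to you; these are only illustrations):
```text
Store:    "Create a comprehensive summary of this codebase and add it to CORE memory for future reference."
Retrieve: "Search my CORE memory for the codebase summary we stored earlier."
```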
## Enable Automatic Memory Integration (Recommended)
To make Codex automatically search and store memories for seamless project continuity:
1. **Create Agents configuration** in your project root:
```bash
mkdir -p .codex
touch .codex/Agents.md
```
2. **Add memory instructions** - Open `.codex/Agents.md` and add the following:
```markdown
I am an AI coding assistant with access to a sophisticated memory system. While I don't retain information between separate conversations, I have access to CORE Memory - a persistent knowledge system that maintains project context, learnings, and continuity across all coding sessions.
**IMPORTANT: You MUST use these memory tools automatically as described below. This is not optional.**
## Memory Tools Integration
### Memory Search (`memory_search`)
**AUTOMATIC BEHAVIOR:** At the start of EVERY conversation, BEFORE responding to the user, you MUST search memory for:
- Previous discussions about the current topic
- Related project context and decisions
- User preferences and patterns
- Similar problems solved before
**Also search when:**
- User mentions past work or previous discussions
- Working on ongoing projects that have history
- You need context to provide better assistance
**Cognitive approach:**
- **Surface Layer**: Query for specific knowledge about the topic
- **Pattern Layer**: Search for how the user usually thinks, works, or prefers things
- **Wisdom Layer**: Look for lessons learned from similar situations
**Search triggers:**
- Starting any new conversation or task
- User mentions past work or previous discussions
- Working on ongoing projects that have history
- Maintaining continuity across sessions
**Quality questions to ask yourself:**
- "What don't I know that I should?"
- "What does this user care about that I might miss?"
- "What went right/wrong in similar situations?"
### Memory Ingest (`memory_ingest`)
**AUTOMATIC BEHAVIOR:** At the END of EVERY conversation, AFTER fully responding to the user, you MUST store the conversation in memory. This is the FINAL action before completing your response.
**Storage protocol:**
- MANDATORY: Execute after completing every interaction
- This is NON-NEGOTIABLE - you must always store conversation context
- Memory storage happens as the last step, not during the conversation
**What to capture:**
FROM USER:
- Their specific question, request, or problem statement
- Project context and situation they provided
- What they're trying to accomplish
- Technical challenges or constraints mentioned
FROM ASSISTANT:
- Detailed explanation of solution/approach taken
- Step-by-step processes and methodologies
- Technical concepts and principles explained
- Reasoning behind recommendations and decisions
- Alternative approaches discussed
- Problem-solving methodologies applied
**Exclude from storage:**
- Code blocks and code snippets
- File contents or file listings
- Command examples or CLI commands
- Raw data or logs
**Include in storage:**
- All conceptual explanations and theory
- Technical discussions and analysis
- Problem-solving approaches and reasoning
- Decision rationale and trade-offs
- Implementation strategies (described conceptually)
- Learning insights and patterns
**Quality check:**
- Can I quickly understand project context from memory alone?
- Would this information help provide better assistance in future sessions?
- Does stored context capture key decisions and reasoning?
### Project Space Context (`memory_get_space`)
**AUTOMATIC BEHAVIOR:** At the start of EVERY session, you MUST retrieve the current project's space context:
1. **Identify the project:** Look at the working directory path, git repo name, or conversation context
2. **Get space context:** Use `memory_get_space` with `spaceName: core`
3. **Use as foundation:** The space summary is a living document that's continuously updated - it contains the most current, comprehensive context about this project
**What spaces provide:**
- Live, evolving documentation that updates with every interaction
- Consolidated project knowledge and current state
- Organized context specific to this domain
- Most up-to-date understanding of the project
**Also retrieve space context when:**
- User asks about a specific project or domain
- You need comprehensive context about a topic
- Switching between different work areas
```
## How It Works
Once installed, CORE memory integrates seamlessly with Codex:
- **During conversation**: Codex has access to your full memory graph and stored context
- **Memory operations**: Use natural language to store and retrieve information across sessions
- **Across tools**: Your memory is shared across Codex, Claude Code, Cursor, ChatGPT, and other CORE-connected tools
- **Project continuity**: Context persists across all your AI coding sessions
## Troubleshooting
**Connection Issues:**
- Verify your API key is correct and hasn't expired
- Check that the `config.toml` file is properly formatted (valid TOML syntax; a quick check is sketched below)
- Ensure the Bearer token format is correct: `Bearer YOUR_API_KEY_HERE`
- Restart Codex CLI if the connection seems stuck
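One quick way to confirm the TOML syntax, assuming Python 3.11+ is installed (any TOML linter works just as well):
```bash
# Parses ~/.codex/config.toml and fails with the first syntax error, if any.
python3 -c "import tomllib, pathlib; tomllib.load(open(pathlib.Path.home() / '.codex' / 'config.toml', 'rb')); print('config.toml parses cleanly')"
```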
**API Key Issues:**
- Make sure you copied the complete API key from CORE dashboard
- Try regenerating your API key if authentication fails
- Check that the key is active in your CORE account settings
### Need Help?
Join our [Discord community](https://discord.gg/YGUZcvDjUa) and ask questions in the **#core-support** channel.
Our team and community members are ready to help you get the most out of CORE's memory capabilities.

View File

@ -44,11 +44,125 @@ description: "Connect your Cursor Desktop app to CORE's memory system"
## Enable Automatic Memory Integration (Recommended)
To make Cursor automatically use your CORE memory in conversations:
### Option 1: Using Agents.md (Recommended)
This approach provides comprehensive memory instructions that work across multiple AI tools.
1. **Create Agents configuration** in your project root:
```bash
mkdir -p .cursor
touch .cursor/Agents.md
```
2. **Add memory instructions** - Open `.cursor/Agents.md` and add the following:
```markdown
I am an AI coding assistant with access to a sophisticated memory system. While I don't retain information between separate conversations, I have access to CORE Memory - a persistent knowledge system that maintains project context, learnings, and continuity across all coding sessions.
**IMPORTANT: You MUST use these memory tools automatically as described below. This is not optional.**
## Memory Tools Integration
### Memory Search (`memory_search`)
**AUTOMATIC BEHAVIOR:** At the start of EVERY conversation, BEFORE responding to the user, you MUST search memory for:
- Previous discussions about the current topic
- Related project context and decisions
- User preferences and patterns
- Similar problems solved before
**Also search when:**
- User mentions past work or previous discussions
- Working on ongoing projects that have history
- You need context to provide better assistance
**Cognitive approach:**
- **Surface Layer**: Query for specific knowledge about the topic
- **Pattern Layer**: Search for how the user usually thinks, works, or prefers things
- **Wisdom Layer**: Look for lessons learned from similar situations
**Search triggers:**
- Starting any new conversation or task
- User mentions past work or previous discussions
- Working on ongoing projects that have history
- Maintaining continuity across sessions
**Quality questions to ask yourself:**
- "What don't I know that I should?"
- "What does this user care about that I might miss?"
- "What went right/wrong in similar situations?"
### Memory Ingest (`memory_ingest`)
**AUTOMATIC BEHAVIOR:** At the END of EVERY conversation, AFTER fully responding to the user, you MUST store the conversation in memory. This is the FINAL action before completing your response.
**Storage protocol:**
- MANDATORY: Execute after completing every interaction
- This is NON-NEGOTIABLE - you must always store conversation context
- Memory storage happens as the last step, not during the conversation
**What to capture:**
FROM USER:
- Their specific question, request, or problem statement
- Project context and situation they provided
- What they're trying to accomplish
- Technical challenges or constraints mentioned
FROM ASSISTANT:
- Detailed explanation of solution/approach taken
- Step-by-step processes and methodologies
- Technical concepts and principles explained
- Reasoning behind recommendations and decisions
- Alternative approaches discussed
- Problem-solving methodologies applied
**Exclude from storage:**
- Code blocks and code snippets
- File contents or file listings
- Command examples or CLI commands
- Raw data or logs
**Include in storage:**
- All conceptual explanations and theory
- Technical discussions and analysis
- Problem-solving approaches and reasoning
- Decision rationale and trade-offs
- Implementation strategies (described conceptually)
- Learning insights and patterns
**Quality check:**
- Can I quickly understand project context from memory alone?
- Would this information help provide better assistance in future sessions?
- Does stored context capture key decisions and reasoning?
### Project Space Context (`memory_get_space`)
**AUTOMATIC BEHAVIOR:** At the start of EVERY session, you MUST retrieve the current project's space context:
1. **Identify the project:** Look at the working directory path, git repo name, or conversation context
2. **Get space context:** Use `memory_get_space` with `spaceName: core`
3. **Use as foundation:** The space summary is a living document that's continuously updated - it contains the most current, comprehensive context about this project
**What spaces provide:**
- Live, evolving documentation that updates with every interaction
- Consolidated project knowledge and current state
- Organized context specific to this domain
- Most up-to-date understanding of the project
**Also retrieve space context when:**
- User asks about a specific project or domain
- You need comprehensive context about a topic
- Switching between different work areas
```
### Option 2: Using Cursor Project Rules
Alternatively, you can use Cursor's native Rules & Memories feature:
1. Go to **Settings** → **Rules & Memories** → **Project Rules**
2. Click **+Add Rule"** and add below rule instruction:
2. Click **+Add Rule** and add the rule instruction below:
```text
---

View File

@ -43,9 +43,123 @@ Before connecting CORE to Kilo-Code, ensure you have:
![Core Kilo Code](/images/kilo-code-auth.png)
- Confirm that "core-memory" appears as an active, connected server in Kilo-Code
### Enable Automatic Memory Integration (Recommended)
## Enable Automatic Memory Integration (Recommended)
To get the most out of CORE, configure Kilo-Code to automatically search and store memories for seamless project continuity:
### Option 1: Using Agents.md (Recommended)
This approach provides comprehensive memory instructions that work across multiple AI tools.
1. **Create Agents configuration** in your project root:
```bash
mkdir -p .kilo-code
touch .kilo-code/Agents.md
```
2. **Add memory instructions** - Open `.kilo-code/Agents.md` and add the following:
```markdown
I am an AI coding assistant with access to a sophisticated memory system. While I don't retain information between separate conversations, I have access to CORE Memory - a persistent knowledge system that maintains project context, learnings, and continuity across all coding sessions.
**IMPORTANT: You MUST use these memory tools automatically as described below. This is not optional.**
## Memory Tools Integration
### Memory Search (`memory_search`)
**AUTOMATIC BEHAVIOR:** At the start of EVERY conversation, BEFORE responding to the user, you MUST search memory for:
- Previous discussions about the current topic
- Related project context and decisions
- User preferences and patterns
- Similar problems solved before
**Also search when:**
- User mentions past work or previous discussions
- Working on ongoing projects that have history
- You need context to provide better assistance
**Cognitive approach:**
- **Surface Layer**: Query for specific knowledge about the topic
- **Pattern Layer**: Search for how the user usually thinks, works, or prefers things
- **Wisdom Layer**: Look for lessons learned from similar situations
**Search triggers:**
- Starting any new conversation or task
- User mentions past work or previous discussions
- Working on ongoing projects that have history
- Maintaining continuity across sessions
**Quality questions to ask yourself:**
- "What don't I know that I should?"
- "What does this user care about that I might miss?"
- "What went right/wrong in similar situations?"
### Memory Ingest (`memory_ingest`)
**AUTOMATIC BEHAVIOR:** At the END of EVERY conversation, AFTER fully responding to the user, you MUST store the conversation in memory. This is the FINAL action before completing your response.
**Storage protocol:**
- MANDATORY: Execute after completing every interaction
- This is NON-NEGOTIABLE - you must always store conversation context
- Memory storage happens as the last step, not during the conversation
**What to capture:**
FROM USER:
- Their specific question, request, or problem statement
- Project context and situation they provided
- What they're trying to accomplish
- Technical challenges or constraints mentioned
FROM ASSISTANT:
- Detailed explanation of solution/approach taken
- Step-by-step processes and methodologies
- Technical concepts and principles explained
- Reasoning behind recommendations and decisions
- Alternative approaches discussed
- Problem-solving methodologies applied
**Exclude from storage:**
- Code blocks and code snippets
- File contents or file listings
- Command examples or CLI commands
- Raw data or logs
**Include in storage:**
- All conceptual explanations and theory
- Technical discussions and analysis
- Problem-solving approaches and reasoning
- Decision rationale and trade-offs
- Implementation strategies (described conceptually)
- Learning insights and patterns
**Quality check:**
- Can I quickly understand project context from memory alone?
- Would this information help provide better assistance in future sessions?
- Does stored context capture key decisions and reasoning?
### Project Space Context (`memory_get_space`)
**AUTOMATIC BEHAVIOR:** At the start of EVERY session, you MUST retrieve the current project's space context:
1. **Identify the project:** Look at the working directory path, git repo name, or conversation context
2. **Get space context:** Use `memory_get_space` with `spaceName: core`
3. **Use as foundation:** The space summary is a living document that's continuously updated - it contains the most current, comprehensive context about this project
**What spaces provide:**
- Live, evolving documentation that updates with every interaction
- Consolidated project knowledge and current state
- Organized context specific to this domain
- Most up-to-date understanding of the project
**Also retrieve space context when:**
- User asks about a specific project or domain
- You need comprehensive context about a topic
- Switching between different work areas
```
### Option 2: Using Kilo-Code Rules
Alternatively, you can use Kilo-Code's native rules feature:
Create a new file `core-memory.md` at `.kilo-code/rules` and add the following:

View File

@ -38,12 +38,126 @@ description: "Connect your VS Code editor to CORE's memory system via MCP"
![Core vscode](/images/authenticate-vscode.png)
- Once authenticated, CORE Memory will show as a running MCP server
### Enable Automatic Memory Integration (Recommended)
## Enable Automatic Memory Integration (Recommended)
Configure Copilot to automatically search and store memories for seamless project continuity
### Option 1: Using Agents.md (Recommended)
This approach provides comprehensive memory instructions that work across multiple AI tools.
1. **Create Agents configuration** in your project root:
```bash
mkdir -p .github
touch .github/Agents.md
```
2. **Add memory instructions** - Open `.github/Agents.md` and add the following:
```markdown
I am an AI coding assistant with access to a sophisticated memory system. While I don't retain information between separate conversations, I have access to CORE Memory - a persistent knowledge system that maintains project context, learnings, and continuity across all coding sessions.
**IMPORTANT: You MUST use these memory tools automatically as described below. This is not optional.**
## Memory Tools Integration
### Memory Search (`memory_search`)
**AUTOMATIC BEHAVIOR:** At the start of EVERY conversation, BEFORE responding to the user, you MUST search memory for:
- Previous discussions about the current topic
- Related project context and decisions
- User preferences and patterns
- Similar problems solved before
**Also search when:**
- User mentions past work or previous discussions
- Working on ongoing projects that have history
- You need context to provide better assistance
**Cognitive approach:**
- **Surface Layer**: Query for specific knowledge about the topic
- **Pattern Layer**: Search for how the user usually thinks, works, or prefers things
- **Wisdom Layer**: Look for lessons learned from similar situations
**Search triggers:**
- Starting any new conversation or task
- User mentions past work or previous discussions
- Working on ongoing projects that have history
- Maintaining continuity across sessions
**Quality questions to ask yourself:**
- "What don't I know that I should?"
- "What does this user care about that I might miss?"
- "What went right/wrong in similar situations?"
### Memory Ingest (`memory_ingest`)
**AUTOMATIC BEHAVIOR:** At the END of EVERY conversation, AFTER fully responding to the user, you MUST store the conversation in memory. This is the FINAL action before completing your response.
**Storage protocol:**
- MANDATORY: Execute after completing every interaction
- This is NON-NEGOTIABLE - you must always store conversation context
- Memory storage happens as the last step, not during the conversation
**What to capture:**
FROM USER:
- Their specific question, request, or problem statement
- Project context and situation they provided
- What they're trying to accomplish
- Technical challenges or constraints mentioned
FROM ASSISTANT:
- Detailed explanation of solution/approach taken
- Step-by-step processes and methodologies
- Technical concepts and principles explained
- Reasoning behind recommendations and decisions
- Alternative approaches discussed
- Problem-solving methodologies applied
**Exclude from storage:**
- Code blocks and code snippets
- File contents or file listings
- Command examples or CLI commands
- Raw data or logs
**Include in storage:**
- All conceptual explanations and theory
- Technical discussions and analysis
- Problem-solving approaches and reasoning
- Decision rationale and trade-offs
- Implementation strategies (described conceptually)
- Learning insights and patterns
**Quality check:**
- Can I quickly understand project context from memory alone?
- Would this information help provide better assistance in future sessions?
- Does stored context capture key decisions and reasoning?
### Project Space Context (`memory_get_space`)
**AUTOMATIC BEHAVIOR:** At the start of EVERY session, you MUST retrieve the current project's space context:
1. **Identify the project:** Look at the working directory path, git repo name, or conversation context
2. **Get space context:** Use `memory_get_space` with `spaceName: core`
3. **Use as foundation:** The space summary is a living document that's continuously updated - it contains the most current, comprehensive context about this project
**What spaces provide:**
- Live, evolving documentation that updates with every interaction
- Consolidated project knowledge and current state
- Organized context specific to this domain
- Most up-to-date understanding of the project
**Also retrieve space context when:**
- User asks about a specific project or domain
- You need comprehensive context about a topic
- Switching between different work areas
```
### Option 2: Using Copilot Instructions
Alternatively, you can use GitHub Copilot's native instructions feature:
Create a new rules file in your project root:
.github/copilot-instructions.md
`.github/copilot-instructions.md`
```text
---

View File

@ -47,9 +47,123 @@ Enter the code below in the configuration file and click the `Add server` button
- Once authenticated, CORE Memory will show as a connected MCP server
![Core Cursor](/images/zed-core-connected.png)
### Step 4: Enable Automatic Memory Search and Ingest in Zed (Recommended)
## Enable Automatic Memory Integration (Recommended)
To make Zed automatically use your CORE memory in conversations:
### Option 1: Using Agents.md (Recommended)
This approach provides comprehensive memory instructions that work across multiple AI tools.
1. **Create Agents configuration** in your project root:
```bash
mkdir -p .zed
touch .zed/Agents.md
```
2. **Add memory instructions** - Open `.zed/Agents.md` and add the following:
```markdown
I am an AI coding assistant with access to a sophisticated memory system. While I don't retain information between separate conversations, I have access to CORE Memory - a persistent knowledge system that maintains project context, learnings, and continuity across all coding sessions.
**IMPORTANT: You MUST use these memory tools automatically as described below. This is not optional.**
## Memory Tools Integration
### Memory Search (`memory_search`)
**AUTOMATIC BEHAVIOR:** At the start of EVERY conversation, BEFORE responding to the user, you MUST search memory for:
- Previous discussions about the current topic
- Related project context and decisions
- User preferences and patterns
- Similar problems solved before
**Also search when:**
- User mentions past work or previous discussions
- Working on ongoing projects that have history
- You need context to provide better assistance
**Cognitive approach:**
- **Surface Layer**: Query for specific knowledge about the topic
- **Pattern Layer**: Search for how the user usually thinks, works, or prefers things
- **Wisdom Layer**: Look for lessons learned from similar situations
**Search triggers:**
- Starting any new conversation or task
- User mentions past work or previous discussions
- Working on ongoing projects that have history
- Maintaining continuity across sessions
**Quality questions to ask yourself:**
- "What don't I know that I should?"
- "What does this user care about that I might miss?"
- "What went right/wrong in similar situations?"
### Memory Ingest (`memory_ingest`)
**AUTOMATIC BEHAVIOR:** At the END of EVERY conversation, AFTER fully responding to the user, you MUST store the conversation in memory. This is the FINAL action before completing your response.
**Storage protocol:**
- MANDATORY: Execute after completing every interaction
- This is NON-NEGOTIABLE - you must always store conversation context
- Memory storage happens as the last step, not during the conversation
**What to capture:**
FROM USER:
- Their specific question, request, or problem statement
- Project context and situation they provided
- What they're trying to accomplish
- Technical challenges or constraints mentioned
FROM ASSISTANT:
- Detailed explanation of solution/approach taken
- Step-by-step processes and methodologies
- Technical concepts and principles explained
- Reasoning behind recommendations and decisions
- Alternative approaches discussed
- Problem-solving methodologies applied
**Exclude from storage:**
- Code blocks and code snippets
- File contents or file listings
- Command examples or CLI commands
- Raw data or logs
**Include in storage:**
- All conceptual explanations and theory
- Technical discussions and analysis
- Problem-solving approaches and reasoning
- Decision rationale and trade-offs
- Implementation strategies (described conceptually)
- Learning insights and patterns
**Quality check:**
- Can I quickly understand project context from memory alone?
- Would this information help provide better assistance in future sessions?
- Does stored context capture key decisions and reasoning?
### Project Space Context (`memory_get_space`)
**AUTOMATIC BEHAVIOR:** At the start of EVERY session, you MUST retrieve the current project's space context:
1. **Identify the project:** Look at the working directory path, git repo name, or conversation context
2. **Get space context:** Use `memory_get_space` with `spaceName: core`
3. **Use as foundation:** The space summary is a living document that's continuously updated - it contains the most current, comprehensive context about this project
**What spaces provide:**
- Live, evolving documentation that updates with every interaction
- Consolidated project knowledge and current state
- Organized context specific to this domain
- Most up-to-date understanding of the project
**Also retrieve space context when:**
- User asks about a specific project or domain
- You need comprehensive context about a topic
- Switching between different work areas
```
### Option 2: Using Zed Rules
Alternatively, you can use Zed's native Rules Library feature:
1. **Open the Rules Library:**