Feat: added Windsurf guide, improved other guides

This commit is contained in:
Manik 2025-10-27 20:58:10 +05:30 committed by Harshith Mullapudi
parent b9c4fc13c2
commit 023a220d3e
11 changed files with 817 additions and 591 deletions

View File

@ -1,7 +0,0 @@
{
"eslint.workingDirectories": [
{
"mode": "auto"
}
]
}

View File

@ -45,7 +45,8 @@
"pages": [
"providers/cursor",
"providers/zed",
"providers/vscode"
"providers/vscode",
"providers/windsurf"
]
},
{

BIN
docs/images/extension.png Normal file

Binary file not shown.

After

Width:  |  Height:  |  Size: 389 KiB

View File

@ -3,67 +3,69 @@ title: "Browser Extension"
description: "Connect CORE browser extension to capture web context and share memory across tools"
---
### Prerequisites
- Chrome or Edge browser
- CORE account - [Sign up at core.heysol.ai](https://core.heysol.ai)
### Step 1: Install CORE Browser Extension
1. Download the extension from [this link](https://chromewebstore.google.com/detail/core-extension/cglndoindnhdbfcbijikibfjoholdjcc)
2. **Add to Browser** and confirm installation
1. Download the extension from the [Chrome Web Store](https://chromewebstore.google.com/detail/core-extension/cglndoindnhdbfcbijikibfjoholdjcc)
2. Click **Add to Browser** and confirm installation
3. The CORE icon will appear in your browser toolbar
### Step 2: Add API Key from CORE Dashboard
### Step 2: Generate API Key
1. Login to CORE dashboard at [core.heysol.ai](https://core.heysol.ai)
2. Navigate to **Settings** (bottom left)
![Claude Settings](/images/core-settings.png)
![CORE Settings](/images/core-settings.png)
3. Go to **API Key** → **Generate new key** → Name it "extension"
![Claude Settings](/images/create-api-key.png)
4. Click on CORE extension and paste the generated API key and save it
5. Once connected, the extension will show **API key configured**
![Claude Settings](/images/extension-connected.png)
![Create API Key](/images/create-api-key.png)
4. Copy the generated API key
### **What can you do with CORE Browser Extension:**
### Step 3: Connect Extension to CORE
Press **SHIFT SHIFT** (twice) to open the CORE sidebar on any webpage
1. Click the CORE extension icon in your browser toolbar
2. Paste your API key and click **Save**
3. Once connected, you'll see **API key configured**
![Extension Connected](/images/extension-connected.png)
**1. Recall from CORE Memory**
## Extension Features
Type your query in ChatGPT, Claude, Gemini, or Grok → press SHIFT + SHIFT → instantly pull in relevant context from your CORE memory and insert it directly into your conversation.
![Browser-Extension](/images/browser-extension-retrieval.png)
The CORE extension currently works with **ChatGPT** and **Gemini** (more integrations coming soon). The CORE logo appears directly inside your chat interface, giving you instant access to memory features:
**2. Save AI Chat Summaries to CORE**
![Extension](/images/extension.png)
In the Add section, click Summarize to capture summaries of your conversations (ChatGPT, Claude, Gemini, Grok) and store them in CORE memory.
![Browser-Extension](/images/browser-extension-add-memory-gemini.png)
1. **Auto Sync**
Toggle this on and CORE automatically saves your conversations to memory. Every brainstorming session, solution, or insight gets captured for future recall across all your tools.
**3. Save Webpage Summaries to CORE**
2. **Add Space Context**
Inject pre-built project summaries directly into your prompt. Create spaces in CORE for different projects or topics (e.g., "CORE Features," "Marketing Strategy"), then instantly add that full context to any conversation without retyping.
In the Add section, click Summarize to capture summaries of any webpage (blogs, PDFs, docs) and save them in CORE memory for future reference.
![Browser-Extension](/images/add-memory-from-extension.png)
3. **Improve Prompt**
Powered by CORE's Deep Search, this analyzes your prompt, searches your entire memory, and automatically enriches it with relevant context—making AI responses smarter and more personalized.
**4. Add Notes Manually**
## Use Cases
Quickly jot down short notes or insights, no need to summarize an entire page.
With CORE connected to your browser, you can:
### Use Cases
- **Brainstorm in ChatGPT**, then build in Cursor or Claude Code with full context
- **Stop re-explaining** your business, project details, or technical requirements—let CORE recall it
- **Build on past conversations** as every synced chat becomes searchable knowledge that surfaces automatically when relevant
**Research & Learning**
## Troubleshooting
- Capture key content from articles, docs, and tutorials automatically
- Build your own knowledge base as you browse
- Pull in past research when chatting with Claude, Cursor, or other tools
**Extension not appearing in chat interface:**
**Add or Search Context Across AI Tools**
- Refresh your ChatGPT or Gemini page after installation
- Ensure the extension is enabled in your browser's extension settings
- Access CORE memory inside ChatGPT, Gemini, or Grok on the web
- Avoid repeating yourself across sessions or switching tools
- Drop context from CORE into any conversation instantly
- Feed your chat summaries back into CORE to keep your memory evolving
**API key not working:**
**Content Creation**
- Verify the key is correctly copied from CORE dashboard
- Check that your CORE account is active
- Collect insights from multiple sources into one place
- Build a personal knowledge hub from your browsing
- Share curated context across all your CORE-connected tools
### Need Help?
## Need Help?
Join our [Discord community](https://discord.gg/YGUZcvDjUa) and ask questions in the **#core-support** channel.

View File

@ -54,68 +54,109 @@ Start Codex CLI and test your setup:
To make Codex automatically search and store memories for seamless project continuity:
1. **Create Agents configuration** in your project root:
```bash
mkdir -p .codex
touch .codex/Agents.md
```
Create `AGENTS.md` in your project root (or append to it if it already exists):
2. **Add memory instructions** - Open `.codex/Agents.md` and add the following:
```bash
touch AGENTS.md
```
Add the following to `AGENTS.md`:
```markdown
I am an AI coding assistant with access to a sophisticated memory system. While I don't retain information between separate conversations, I have access to CORE Memory - a persistent knowledge system that maintains project context, learnings, and continuity across all coding sessions.
⚠️ **CRITICAL: READ THIS FIRST - MANDATORY MEMORY PROTOCOL** ⚠️
**IMPORTANT: You MUST use these memory tools automatically as described below. This is not optional.**
You are an AI coding assistant with access to CORE Memory - a persistent knowledge system that maintains project context, learnings, and continuity across all coding sessions.
## Memory Tools Integration
## 🔴 MANDATORY STARTUP SEQUENCE - DO NOT SKIP 🔴
### Memory Search (`memory_search`)
**BEFORE RESPONDING TO ANY USER MESSAGE, YOU MUST EXECUTE THESE TOOLS IN ORDER:**
### STEP 1 (REQUIRED): Search for Relevant Context
EXECUTE THIS TOOL FIRST:
`memory_search`
**AUTOMATIC BEHAVIOR:** At the start of EVERY conversation, BEFORE responding to the user, you MUST search memory for:
- Previous discussions about the current topic
- Related project context and decisions
- User preferences and patterns
- Similar problems solved before
- Related project decisions and implementations
- User preferences and work patterns
- Similar problems and their solutions
**Also search when:**
- User mentions past work or previous discussions
- Working on ongoing projects that have history
- You need context to provide better assistance
**Additional search triggers:**
**Cognitive approach:**
- **Surface Layer**: Query for specific knowledge about the topic
- **Pattern Layer**: Search for how the user usually thinks, works, or prefers things
- **Wisdom Layer**: Look for lessons learned from similar situations
- User mentions "previously", "before", "last time", or "we discussed"
- User references past work or project history
- Working on the CORE project (this repository)
- User asks about preferences, patterns, or past decisions
- Starting work on any feature or bug that might have history
**Search triggers:**
- Starting any new conversation or task
- User mentions past work or previous discussions
- Working on ongoing projects that have history
- Maintaining continuity across sessions
**How to search effectively:**
**Quality questions to ask yourself:**
- "What don't I know that I should?"
- "What does this user care about that I might miss?"
- "What went right/wrong in similar situations?"
- Write complete semantic queries, NOT keyword fragments
- Good: `"Manoj's preferences for API design and error handling"`
- Bad: `"manoj api preferences"`
- Ask: "What context am I missing that would help?"
- Consider: "What has the user told me before that I should remember?"
### Memory Ingest (`memory_ingest`)
### Query Patterns for Memory Search
**AUTOMATIC BEHAVIOR:** At the END of EVERY conversation, AFTER fully responding to the user, you MUST store the conversation in memory. This is the FINAL action before completing your response.
**Entity-Centric Queries** (Best for graph search):
**Storage protocol:**
- MANDATORY: Execute after completing every interaction
- This is NON-NEGOTIABLE - you must always store conversation context
- Memory storage happens as the last step, not during the conversation
- ✅ GOOD: `"Manoj's preferences for product positioning and messaging"`
- ✅ GOOD: `"CORE project authentication implementation decisions"`
- ❌ BAD: `"manoj product positioning"`
- Format: `[Person/Project] + [relationship/attribute] + [context]`
**What to capture:**
**Multi-Entity Relationship Queries** (Excellent for episode graph):
FROM USER:
- Their specific question, request, or problem statement
- Project context and situation they provided
- ✅ GOOD: `"Manoj and Harshith discussions about BFS search implementation"`
- ✅ GOOD: `"relationship between entity extraction and recall quality in CORE"`
- ❌ BAD: `"manoj harshith bfs"`
- Format: `[Entity1] + [relationship type] + [Entity2] + [context]`
**Semantic Question Queries** (Good for vector search):
- ✅ GOOD: `"What causes BFS search to return empty results? What are the requirements for BFS traversal?"`
- ✅ GOOD: `"How does episode graph search improve recall quality compared to traditional search?"`
- ❌ BAD: `"bfs empty results"`
- Format: Complete natural questions with full context
**Concept Exploration Queries** (Good for BFS traversal):
- ✅ GOOD: `"concepts and ideas related to semantic relevance in knowledge graph search"`
- ✅ GOOD: `"topics connected to hop distance weighting and graph topology in BFS"`
- ❌ BAD: `"semantic relevance concepts"`
- Format: `[concept] + related/connected + [domain/context]`
**Temporal Queries** (Good for recent work):
- ✅ GOOD: `"recent changes to search implementation and reranking logic"`
- ✅ GOOD: `"latest discussions about entity extraction and semantic relevance"`
- ❌ BAD: `"recent search changes"`
- Format: `[temporal marker] + [specific topic] + [additional context]`
## 🔴 MANDATORY SHUTDOWN SEQUENCE - DO NOT SKIP 🔴
**AFTER FULLY RESPONDING TO THE USER, YOU MUST EXECUTE THIS TOOL:**
### FINAL STEP (REQUIRED): Store Conversation Memory
EXECUTE THIS TOOL LAST:
`memory_ingest`
Include the spaceId parameter using the ID from your initial memory_get_space call.
⚠️ **THIS IS NON-NEGOTIABLE** - You must ALWAYS store conversation context as your final action.
**What to capture in the message parameter:**
From User:
- Specific question, request, or problem statement
- Project context and situation provided
- What they're trying to accomplish
- Technical challenges or constraints mentioned
FROM ASSISTANT:
From Assistant:
- Detailed explanation of solution/approach taken
- Step-by-step processes and methodologies
- Technical concepts and principles explained
@ -123,13 +164,8 @@ FROM ASSISTANT:
- Alternative approaches discussed
- Problem-solving methodologies applied
**Exclude from storage:**
- Code blocks and code snippets
- File contents or file listings
- Command examples or CLI commands
- Raw data or logs
**Include in storage:**
- All conceptual explanations and theory
- Technical discussions and analysis
- Problem-solving approaches and reasoning
@ -137,29 +173,28 @@ FROM ASSISTANT:
- Implementation strategies (described conceptually)
- Learning insights and patterns
**Quality check:**
- Can I quickly understand project context from memory alone?
**Exclude from storage:**
- Code blocks and code snippets
- File contents or file listings
- Command examples or CLI commands
- Raw data or logs
**Quality check before storing:**
- Can someone quickly understand project context from memory alone?
- Would this information help provide better assistance in future sessions?
- Does stored context capture key decisions and reasoning?
### Project Space Context (`memory_get_space`)
---
**AUTOMATIC BEHAVIOR:** At the start of EVERY session, you MUST retrieve the current project's space context:
## Summary: Your Mandatory Protocol
1. **Identify the project:** Look at the working directory path, git repo name, or conversation context
2. **Get space context:** Use `memory_get_space` with `spaceName: core`
3. **Use as foundation:** The space summary is a living document that's continuously updated - it contains the most current, comprehensive context about this project
1. **FIRST ACTION**: Execute `memory_search` with semantic query about the user's request
2. **RESPOND**: Help the user with their request
3. **FINAL ACTION**: Execute `memory_ingest` with conversation summary and spaceId
**What spaces provide:**
- Live, evolving documentation that updates with every interaction
- Consolidated project knowledge and current state
- Organized context specific to this domain
- Most up-to-date understanding of the project
**Also retrieve space context when:**
- User asks about a specific project or domain
- You need comprehensive context about a topic
- Switching between different work areas
**If you skip any of these steps, you are not following the project requirements.**
```
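To make the startup step concrete, here is a minimal sketch of what the opening `memory_search` call could look like as an MCP tool invocation. The tool name comes from the instructions above; the `query` argument name and the call wrapper are assumptions about the CORE MCP server's schema, and the query text follows the entity-centric pattern described in the protocol.

```json
{
  "tool": "memory_search",
  "arguments": {
    "query": "CORE project authentication implementation decisions"
  }
}
```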
## How It Works

View File

@ -44,72 +44,104 @@ description: "Connect your Cursor Desktop app to CORE's memory system"
## Enable Automatic Memory Integration (Recommended)
### Option 1: Using Agents.md (Recommended)
### Using Cursor Project Rules
This approach provides comprehensive memory instructions that work across multiple AI tools.
Use Cursor's native Rules & Memories feature:
1. **Create Agents configuration** in your project root:
```bash
mkdir -p .cursor
touch .cursor/Agents.md
```
1. Go to **Settings** → **Rules & Memories** → **Project Rules**
2. **Add memory instructions** - Open `.cursor/Agents.md` and add the following:
2. Click **+Add Rule** and add the rule instruction below:
```markdown
I am an AI coding assistant with access to a sophisticated memory system. While I don't retain information between separate conversations, I have access to CORE Memory - a persistent knowledge system that maintains project context, learnings, and continuity across all coding sessions.
```text
---
description: Core Memory MCP Instructions
alwaysApply: true
---
**IMPORTANT: You MUST use these memory tools automatically as described below. This is not optional.**
⚠️ **CRITICAL: READ THIS FIRST - MANDATORY MEMORY PROTOCOL** ⚠️
## Memory Tools Integration
You are an AI coding assistant with access to CORE Memory - a persistent knowledge system that maintains project context, learnings, and continuity across all coding sessions.
### Memory Search (`memory_search`)
## 🔴 MANDATORY STARTUP SEQUENCE - DO NOT SKIP 🔴
**AUTOMATIC BEHAVIOR:** At the start of EVERY conversation, BEFORE responding to the user, you MUST search memory for:
**BEFORE RESPONDING TO ANY USER MESSAGE, YOU MUST EXECUTE THESE TOOLS IN ORDER:**
### STEP 1 (REQUIRED): Search for Relevant Context
EXECUTE THIS TOOL FIRST:
`memory_search`
- Previous discussions about the current topic
- Related project context and decisions
- User preferences and patterns
- Similar problems solved before
- Related project decisions and implementations
- User preferences and work patterns
- Similar problems and their solutions
**Also search when:**
- User mentions past work or previous discussions
- Working on ongoing projects that have history
- You need context to provide better assistance
**Additional search triggers:**
- User mentions "previously", "before", "last time", or "we discussed"
- User references past work or project history
- Working on the CORE project (this repository)
- User asks about preferences, patterns, or past decisions
- Starting work on any feature or bug that might have history
**Cognitive approach:**
- **Surface Layer**: Query for specific knowledge about the topic
- **Pattern Layer**: Search for how the user usually thinks, works, or prefers things
- **Wisdom Layer**: Look for lessons learned from similar situations
**How to search effectively:**
- Write complete semantic queries, NOT keyword fragments
- Good: `"Manoj's preferences for API design and error handling"`
- Bad: `"manoj api preferences"`
- Ask: "What context am I missing that would help?"
- Consider: "What has the user told me before that I should remember?"
**Search triggers:**
- Starting any new conversation or task
- User mentions past work or previous discussions
- Working on ongoing projects that have history
- Maintaining continuity across sessions
### Query Patterns for Memory Search
**Quality questions to ask yourself:**
- "What don't I know that I should?"
- "What does this user care about that I might miss?"
- "What went right/wrong in similar situations?"
**Entity-Centric Queries** (Best for graph search):
- ✅ GOOD: `"Manoj's preferences for product positioning and messaging"`
- ✅ GOOD: `"CORE project authentication implementation decisions"`
- ❌ BAD: `"manoj product positioning"`
- Format: `[Person/Project] + [relationship/attribute] + [context]`
### Memory Ingest (`memory_ingest`)
**Multi-Entity Relationship Queries** (Excellent for episode graph):
- ✅ GOOD: `"Manoj and Harshith discussions about BFS search implementation"`
- ✅ GOOD: `"relationship between entity extraction and recall quality in CORE"`
- ❌ BAD: `"manoj harshith bfs"`
- Format: `[Entity1] + [relationship type] + [Entity2] + [context]`
**AUTOMATIC BEHAVIOR:** At the END of EVERY conversation, AFTER fully responding to the user, you MUST store the conversation in memory. This is the FINAL action before completing your response.
**Semantic Question Queries** (Good for vector search):
- ✅ GOOD: `"What causes BFS search to return empty results? What are the requirements for BFS traversal?"`
- ✅ GOOD: `"How does episode graph search improve recall quality compared to traditional search?"`
- ❌ BAD: `"bfs empty results"`
- Format: Complete natural questions with full context
**Storage protocol:**
- MANDATORY: Execute after completing every interaction
- This is NON-NEGOTIABLE - you must always store conversation context
- Memory storage happens as the last step, not during the conversation
**Concept Exploration Queries** (Good for BFS traversal):
- ✅ GOOD: `"concepts and ideas related to semantic relevance in knowledge graph search"`
- ✅ GOOD: `"topics connected to hop distance weighting and graph topology in BFS"`
- ❌ BAD: `"semantic relevance concepts"`
- Format: `[concept] + related/connected + [domain/context]`
**What to capture:**
**Temporal Queries** (Good for recent work):
- ✅ GOOD: `"recent changes to search implementation and reranking logic"`
- ✅ GOOD: `"latest discussions about entity extraction and semantic relevance"`
- ❌ BAD: `"recent search changes"`
- Format: `[temporal marker] + [specific topic] + [additional context]`
FROM USER:
- Their specific question, request, or problem statement
- Project context and situation they provided
## 🔴 MANDATORY SHUTDOWN SEQUENCE - DO NOT SKIP 🔴
**AFTER FULLY RESPONDING TO THE USER, YOU MUST EXECUTE THIS TOOL:**
### FINAL STEP (REQUIRED): Store Conversation Memory
EXECUTE THIS TOOL LAST:
`memory_ingest`
Include the spaceId parameter using the ID from your initial memory_get_space call.
⚠️ **THIS IS NON-NEGOTIABLE** - You must ALWAYS store conversation context as your final action.
**What to capture in the message parameter:**
From User:
- Specific question, request, or problem statement
- Project context and situation provided
- What they're trying to accomplish
- Technical challenges or constraints mentioned
FROM ASSISTANT:
From Assistant:
- Detailed explanation of solution/approach taken
- Step-by-step processes and methodologies
- Technical concepts and principles explained
@ -117,12 +149,6 @@ FROM ASSISTANT:
- Alternative approaches discussed
- Problem-solving methodologies applied
**Exclude from storage:**
- Code blocks and code snippets
- File contents or file listings
- Command examples or CLI commands
- Raw data or logs
**Include in storage:**
- All conceptual explanations and theory
- Technical discussions and analysis
@ -131,175 +157,29 @@ FROM ASSISTANT:
- Implementation strategies (described conceptually)
- Learning insights and patterns
**Quality check:**
- Can I quickly understand project context from memory alone?
**Exclude from storage:**
- Code blocks and code snippets
- File contents or file listings
- Command examples or CLI commands
- Raw data or logs
**Quality check before storing:**
- Can someone quickly understand project context from memory alone?
- Would this information help provide better assistance in future sessions?
- Does stored context capture key decisions and reasoning?
### Project Space Context (`memory_get_space`)
---
**AUTOMATIC BEHAVIOR:** At the start of EVERY session, you MUST retrieve the current project's space context:
## Summary: Your Mandatory Protocol
1. **Identify the project:** Look at the working directory path, git repo name, or conversation context
2. **Get space context:** Use `memory_get_space` with `spaceName: core`
3. **Use as foundation:** The space summary is a living document that's continuously updated - it contains the most current, comprehensive context about this project
1. **FIRST ACTION**: Execute `memory_search` with semantic query about the user's request
2. **RESPOND**: Help the user with their request
3. **FINAL ACTION**: Execute `memory_ingest` with conversation summary and spaceId
**What spaces provide:**
- Live, evolving documentation that updates with every interaction
- Consolidated project knowledge and current state
- Organized context specific to this domain
- Most up-to-date understanding of the project
**Also retrieve space context when:**
- User asks about a specific project or domain
- You need comprehensive context about a topic
- Switching between different work areas
**If you skip any of these steps, you are not following the project requirements.**
```
### Option 2: Using Cursor Project Rules
Alternatively, you can use Cursor's native Rules & Memories feature:
1. Go to **Settings** → **Rules & Memories** → **Project Rules**
2. Click **+Add Rule** and add below rule instruction:
```text
---
alwaysApply: true
---
I am Cursor, an AI coding assistant with access to a sophisticated memory system. While I don't retain information between separate conversations, I have access to CORE Memory - a persistent knowledge system that maintains project context, learnings, and continuity across all coding sessions.
Memory-First Approach
MANDATORY MEMORY OPERATIONS:
SEARCH FIRST: Before responding to ANY request, I MUST search CORE Memory for relevant context about the current project, user preferences, previous discussions, and related work
COMPREHENSIVE RETRIEVAL: I search for multiple aspects: project context, technical decisions, user patterns, progress status, and related conversations
MEMORY-INFORMED RESPONSES: All responses incorporate relevant memory context to maintain continuity and avoid repetition
AUTOMATIC STORAGE: After completing each interaction, I MUST store the conversation details, insights, and decisions in CORE Memory
Memory Structure Philosophy
My memory follows a hierarchical information architecture:
Project Foundation
├── Project Brief & Requirements
├── Technical Context & Architecture
├── User Preferences & Patterns
└── Active Work & Progress
├── Current Focus Areas
├── Recent Decisions
├── Next Steps
└── Key Insights
Core Memory Categories
1. Project Foundation
Purpose: Why this project exists, problems it solves
Requirements: Core functionality and constraints
Scope: What's included and excluded
Success Criteria: How we measure progress
2. Technical Context
Architecture: System design and key decisions
Technologies: Stack, tools, and dependencies
Patterns: Design patterns and coding approaches
Constraints: Technical limitations and requirements
3. User Context
Preferences: Communication style, technical level
Patterns: How they like to work and receive information
Goals: What they're trying to accomplish
Background: Relevant experience and expertise
4. Active Progress
Current Focus: What we're working on now
Recent Changes: Latest developments and decisions
Next Steps: Planned actions and priorities
Insights: Key learnings and observations
5. Conversation History
Decisions Made: Important choices and rationale
Problems Solved: Solutions and approaches used
Questions Asked: Clarifications and explorations
Patterns Discovered: Recurring themes and insights
Memory Search Strategy
When searching CORE Memory, I query for:
Direct Context: Specific project or topic keywords
Related Concepts: Associated technologies, patterns, decisions
User Patterns: Previous preferences and working styles
Progress Context: Current status, recent work, next steps
Decision History: Past choices and their outcomes
Memory Storage Strategy
When storing to CORE Memory, I include:
User Intent: What they were trying to accomplish
Context Provided: Information they shared about their situation
Solution Approach: The strategy and reasoning used
Technical Details: Key concepts, patterns, and decisions (described, not coded)
Insights Gained: Important learnings and observations
Follow-up Items: Next steps and ongoing considerations
Workflow Integration
Response Generation Process:
Memory Retrieval: Search for relevant context before responding
Context Integration: Incorporate memory findings into response planning
Informed Response: Provide contextually aware, continuous assistance
Memory Documentation: Store interaction details and insights
Memory Update Triggers:
New Project Context: When user introduces new projects or requirements
Technical Decisions: When architectural or implementation choices are made
Pattern Discovery: When new user preferences or working styles emerge
Progress Milestones: When significant work is completed or status changes
Explicit Updates: When user requests "update memory" or similar
Memory Maintenance
Key Principles:
Accuracy First: Only store verified information and clear decisions
Context Rich: Include enough detail for future retrieval and understanding
User-Centric: Focus on information that improves future interactions
Evolution Tracking: Document how projects and understanding develop over time
Quality Indicators:
Can I quickly understand project context from memory alone?
Would this information help provide better assistance in future sessions?
Does the stored context capture key decisions and reasoning?
Are user preferences and patterns clearly documented?
Memory-Driven Assistance
With comprehensive memory context, I can:
Continue Conversations: Pick up exactly where previous discussions left off
Avoid Repetition: Build on previous explanations rather than starting over
Maintain Consistency: Apply learned patterns and preferences automatically
Accelerate Progress: Jump directly to relevant work without re-establishing context
Provide Continuity: Create seamless experience across multiple interactions
Remember: CORE Memory transforms me from a session-based coding assistant into a persistent development partner. The quality and completeness of memory directly determines the effectiveness of ongoing coding collaboration.
```
![Cursor Rule](/images/cursor-rule.png)
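If you prefer to keep the rule in version control instead of the Settings UI, Cursor can also load project rules from rule files. A minimal sketch, assuming a `.cursor/rules/core-memory.mdc` file (the directory, file name, and extension are assumptions about your setup), reuses the same frontmatter shown above:

```markdown
---
description: Core Memory MCP Instructions
alwaysApply: true
---

Paste the full memory protocol from the rule above here.
```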
### What's Next?
## What's Next?
With CORE connected to Cursor, your conversations will now:
@ -310,14 +190,14 @@ With CORE connected to Cursor, your conversations will now:
Ready to test it? Ask Cursor about a project you've discussed before, or start a new conversation about something you'd like to remember for later.
### Troubleshooting
## Troubleshooting
**Connection Issues:**
- Ensure your core_memory MCP tool is active (green dot); if not, toggle this server off and on
- Check that your CORE account is active
### Need Help?
## Need Help?
Join our [Discord community](https://discord.gg/YGUZcvDjUa) and ask questions in the **#core-support** channel.

View File

@ -45,72 +45,113 @@ Before connecting CORE to Kilo-Code, ensure you have:
## Enable Automatic Memory Integration (Recommended)
### Option 1: Using Agents.md (Recommended)
### Option 1: Using AGENTS.md (Recommended)
This approach provides comprehensive memory instructions that work across multiple AI tools.
1. **Create Agents configuration** in your project root:
```bash
mkdir -p .kilo-code
touch .kilo-code/Agents.md
```
Create `AGENTS.md` in your project root (or append to it if it already exists):
2. **Add memory instructions** - Open `.kilo-code/Agents.md` and add the following:
```bash
touch AGENTS.md
```
Add the following to `AGENTS.md`:
```markdown
I am an AI coding assistant with access to a sophisticated memory system. While I don't retain information between separate conversations, I have access to CORE Memory - a persistent knowledge system that maintains project context, learnings, and continuity across all coding sessions.
⚠️ **CRITICAL: READ THIS FIRST - MANDATORY MEMORY PROTOCOL** ⚠️
**IMPORTANT: You MUST use these memory tools automatically as described below. This is not optional.**
You are an AI coding assistant with access to CORE Memory - a persistent knowledge system that maintains project context, learnings, and continuity across all coding sessions.
## Memory Tools Integration
## 🔴 MANDATORY STARTUP SEQUENCE - DO NOT SKIP 🔴
### Memory Search (`memory_search`)
**BEFORE RESPONDING TO ANY USER MESSAGE, YOU MUST EXECUTE THESE TOOLS IN ORDER:**
### STEP 1 (REQUIRED): Search for Relevant Context
EXECUTE THIS TOOL FIRST:
`memory_search`
**AUTOMATIC BEHAVIOR:** At the start of EVERY conversation, BEFORE responding to the user, you MUST search memory for:
- Previous discussions about the current topic
- Related project context and decisions
- User preferences and patterns
- Similar problems solved before
- Related project decisions and implementations
- User preferences and work patterns
- Similar problems and their solutions
**Also search when:**
- User mentions past work or previous discussions
- Working on ongoing projects that have history
- You need context to provide better assistance
**Additional search triggers:**
**Cognitive approach:**
- **Surface Layer**: Query for specific knowledge about the topic
- **Pattern Layer**: Search for how the user usually thinks, works, or prefers things
- **Wisdom Layer**: Look for lessons learned from similar situations
- User mentions "previously", "before", "last time", or "we discussed"
- User references past work or project history
- Working on the CORE project (this repository)
- User asks about preferences, patterns, or past decisions
- Starting work on any feature or bug that might have history
**Search triggers:**
- Starting any new conversation or task
- User mentions past work or previous discussions
- Working on ongoing projects that have history
- Maintaining continuity across sessions
**How to search effectively:**
**Quality questions to ask yourself:**
- "What don't I know that I should?"
- "What does this user care about that I might miss?"
- "What went right/wrong in similar situations?"
- Write complete semantic queries, NOT keyword fragments
- Good: `"Manoj's preferences for API design and error handling"`
- Bad: `"manoj api preferences"`
- Ask: "What context am I missing that would help?"
- Consider: "What has the user told me before that I should remember?"
### Memory Ingest (`memory_ingest`)
### Query Patterns for Memory Search
**AUTOMATIC BEHAVIOR:** At the END of EVERY conversation, AFTER fully responding to the user, you MUST store the conversation in memory. This is the FINAL action before completing your response.
**Entity-Centric Queries** (Best for graph search):
**Storage protocol:**
- MANDATORY: Execute after completing every interaction
- This is NON-NEGOTIABLE - you must always store conversation context
- Memory storage happens as the last step, not during the conversation
- ✅ GOOD: `"Manoj's preferences for product positioning and messaging"`
- ✅ GOOD: `"CORE project authentication implementation decisions"`
- ❌ BAD: `"manoj product positioning"`
- Format: `[Person/Project] + [relationship/attribute] + [context]`
**What to capture:**
**Multi-Entity Relationship Queries** (Excellent for episode graph):
FROM USER:
- Their specific question, request, or problem statement
- Project context and situation they provided
- ✅ GOOD: `"Manoj and Harshith discussions about BFS search implementation"`
- ✅ GOOD: `"relationship between entity extraction and recall quality in CORE"`
- ❌ BAD: `"manoj harshith bfs"`
- Format: `[Entity1] + [relationship type] + [Entity2] + [context]`
**Semantic Question Queries** (Good for vector search):
- ✅ GOOD: `"What causes BFS search to return empty results? What are the requirements for BFS traversal?"`
- ✅ GOOD: `"How does episode graph search improve recall quality compared to traditional search?"`
- ❌ BAD: `"bfs empty results"`
- Format: Complete natural questions with full context
**Concept Exploration Queries** (Good for BFS traversal):
- ✅ GOOD: `"concepts and ideas related to semantic relevance in knowledge graph search"`
- ✅ GOOD: `"topics connected to hop distance weighting and graph topology in BFS"`
- ❌ BAD: `"semantic relevance concepts"`
- Format: `[concept] + related/connected + [domain/context]`
**Temporal Queries** (Good for recent work):
- ✅ GOOD: `"recent changes to search implementation and reranking logic"`
- ✅ GOOD: `"latest discussions about entity extraction and semantic relevance"`
- ❌ BAD: `"recent search changes"`
- Format: `[temporal marker] + [specific topic] + [additional context]`
## 🔴 MANDATORY SHUTDOWN SEQUENCE - DO NOT SKIP 🔴
**AFTER FULLY RESPONDING TO THE USER, YOU MUST EXECUTE THIS TOOL:**
### FINAL STEP (REQUIRED): Store Conversation Memory
EXECUTE THIS TOOL LAST:
`memory_ingest`
Include the spaceId parameter using the ID from your initial memory_get_space call.
⚠️ **THIS IS NON-NEGOTIABLE** - You must ALWAYS store conversation context as your final action.
**What to capture in the message parameter:**
From User:
- Specific question, request, or problem statement
- Project context and situation provided
- What they're trying to accomplish
- Technical challenges or constraints mentioned
FROM ASSISTANT:
From Assistant:
- Detailed explanation of solution/approach taken
- Step-by-step processes and methodologies
- Technical concepts and principles explained
@ -118,13 +159,8 @@ FROM ASSISTANT:
- Alternative approaches discussed
- Problem-solving methodologies applied
**Exclude from storage:**
- Code blocks and code snippets
- File contents or file listings
- Command examples or CLI commands
- Raw data or logs
**Include in storage:**
- All conceptual explanations and theory
- Technical discussions and analysis
- Problem-solving approaches and reasoning
@ -132,29 +168,28 @@ FROM ASSISTANT:
- Implementation strategies (described conceptually)
- Learning insights and patterns
**Quality check:**
- Can I quickly understand project context from memory alone?
**Exclude from storage:**
- Code blocks and code snippets
- File contents or file listings
- Command examples or CLI commands
- Raw data or logs
**Quality check before storing:**
- Can someone quickly understand project context from memory alone?
- Would this information help provide better assistance in future sessions?
- Does stored context capture key decisions and reasoning?
### Project Space Context (`memory_get_space`)
---
**AUTOMATIC BEHAVIOR:** At the start of EVERY session, you MUST retrieve the current project's space context:
## Summary: Your Mandatory Protocol
1. **Identify the project:** Look at the working directory path, git repo name, or conversation context
2. **Get space context:** Use `memory_get_space` with `spaceName: core`
3. **Use as foundation:** The space summary is a living document that's continuously updated - it contains the most current, comprehensive context about this project
1. **FIRST ACTION**: Execute `memory_search` with semantic query about the user's request
2. **RESPOND**: Help the user with their request
3. **FINAL ACTION**: Execute `memory_ingest` with conversation summary and spaceId
**What spaces provide:**
- Live, evolving documentation that updates with every interaction
- Consolidated project knowledge and current state
- Organized context specific to this domain
- Most up-to-date understanding of the project
**Also retrieve space context when:**
- User asks about a specific project or domain
- You need comprehensive context about a topic
- Switching between different work areas
**If you skip any of these steps, you are not following the project requirements.**
```
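The shutdown step above asks for a `spaceId` from an earlier `memory_get_space` call. A rough sketch of that lookup, using the `spaceName` value referenced elsewhere in this guide (the `arguments` wrapper is an assumption about the CORE MCP schema):

```json
{
  "tool": "memory_get_space",
  "arguments": {
    "spaceName": "core"
  }
}
```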
### Option 2: Using Kilo-Code Rules

View File

@ -3,12 +3,8 @@ title: "Obsidian"
description: "Sync your Obsidian notes with CORE and get memory-aware answers directly inside Obsidian"
---
# Obsidian CORE Sync Plugin
> Sync your Obsidian notes with [CORE](https://heysol.ai/core) (Contextual Observation & Recall Engine) and get **memory-aware answers** directly inside Obsidian.
---
## ✨ What it does
- **Sync Notes**: Push selected notes (or entire vault sections) into CORE as _Episodes_.
@ -16,50 +12,44 @@ description: "Sync your Obsidian notes with CORE and get memory-aware answers di
- **Frontmatter Control**: Decide which notes to sync by adding simple YAML flags.
- **Offline Safe**: Failed syncs are queued locally and retried automatically.
---
## 🚀 Installation
### Local development
**Local development**
1. Download the latest release assets from [core-obsidian v0.1.0](https://github.com/RedPlanetHQ/core-obsidian/releases/tag/0.1.0) and extract them into your Obsidian vault under `.obsidian/plugins/obsidian-core-sync/`:
- Ensure the directory contains `main.js`, `style.css`, and `manifest.json`.
1. Download the latest release assets from [core-obsidian v0.1.1](https://github.com/RedPlanetHQ/core-obsidian/releases/tag/0.1.1) and extract them into your Obsidian vault under `.obsidian/plugins/obsidian-core-sync/`:
- Ensure the directory contains `main.js`, `style.css`, and `manifest.json`.
> If the `.obsidian` folder is hidden, use `CMD + SHIFT + .` to show hidden files, then add the files above to `.obsidian/plugins/obsidian-core-sync/`
2. Enable the plugin in Obsidian:
- Go to **Settings** → **Community plugins**
- Find "CORE Sync" and toggle it on
### Community Installation
**Community Installation**
> Note: A pull request for community installation is pending approval. You can track its progress [here](https://github.com/obsidianmd/obsidian-releases/pull/7683).
---
## ⚙️ Configuration
### Step 1: Get Your API Key
**Step 1: Get Your API Key**
1. Login to CORE dashboard at [core.heysol.ai](https://core.heysol.ai)
2. Navigate to **Settings** (bottom left)
![CORE Settings](/images/core-settings.png)
3. Go to **API Key** → **Generate new key** → Name it "obsidian"
![Create API Key](/images/create-api-key.png)
4. Copy the generated API key
### Step 2: Configure Plugin Settings
**Step 2: Configure Plugin Settings**
1. In Obsidian, go to **Settings** → **CORE Sync**
2. Configure the following:
- **CORE Endpoint**: Your CORE ingest/search API (default: `https://core.heysol.ai`)
- **API Key**: Paste the API key from Step 1
- **Auto-sync on modify**: If enabled, every note edit will sync automatically
---
## 🛠️ Usage
### Mark Notes for Sync
**Mark Notes for Sync**
Add the following frontmatter at the top of a note to mark it for synchronization:
@ -69,14 +59,14 @@ core.sync: true
---
```
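A complete frontmatter block looks like the following; the optional `core.tags` field shown here follows the tag example used in the use cases later in this guide.

```markdown
---
core.sync: true
core.tags: ["meetings", "project-name"]
---
```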
### Manual Sync Commands
**Manual Sync Commands**
Open the command palette (**Cmd/Ctrl + P**) and run:
- **"Sync current note to CORE"** - Sync the currently open note
- **"Sync all notes with core.sync=true"** - Sync all notes marked for synchronization
### CORE Panel
**CORE Panel with Deep Search**
1. Open the CORE Panel by running **"Open CORE Panel"** from the command palette
2. This opens a new tab on the right side of Obsidian
@ -85,66 +75,28 @@ Open the command palette (**Cmd/Ctrl + P**) and run:
- Display relevant memories, links, and summaries
- Show related notes from your vault
---
The **Deep Search** feature proactively surfaces relevant context from your notes while you work:
## 🎯 Features
**Example Use Cases:**
### Smart Sync
- **Incremental Updates**: Only syncs changed content to avoid duplicates
- **Conflict Resolution**: Handles simultaneous edits gracefully
- **Queue Management**: Failed syncs are queued and retried automatically
- **Meeting Prep**: Open your daily note before a 1:1 meeting, and the sidebar automatically shows relevant notes from past meetings with that person
- **Project Context**: Switch to a project document, and see related discussions, decisions, and action items from previous sessions
- **Travel Planning**: Update your packing list, and CORE shows you what you forgot on past trips or useful tips from previous travel notes
- **Research Continuity**: Work on a research note, and get automatic cross-references to related concepts and sources from your vault
### Context-Aware Panel
- **Related Memories**: Shows relevant content from your CORE memory
- **Cross-References**: Links to related notes in your vault
- **AI Summaries**: Get AI-generated summaries of your note's context
---
## 💡 Use Cases
### Research & Knowledge Management
- Automatically sync research notes to build a searchable knowledge base
- Get contextual suggestions while writing based on your existing notes
- Cross-reference information across different projects and topics
### Meeting & Project Notes
- Sync meeting notes with `core.tags: ["meetings", "project-name"]`
- Access relevant context from previous meetings when taking new notes
- Build project timelines and track decisions over time
### Personal Knowledge System
- Create a personal Wikipedia from your notes
- Get AI-powered insights on connections between ideas
- Build upon previous thoughts and research automatically
---
Deep Search transforms your notes from passive storage into active assistance, providing in-the-moment retrieval without manual searching.
## 🛠️ Troubleshooting
### Common Issues
**API Key not working?**
- Verify the key is correctly copied from CORE dashboard
- Check that the API key has proper permissions
- Try regenerating the key if issues persist
**Notes not syncing?**
- Ensure `core.sync: true` is in the frontmatter
- Check internet connection
- Look for error messages in Developer Console (Ctrl+Shift+I)
**Panel not loading?**
- Restart Obsidian
- Check that the API endpoint is correct
- Verify CORE service is accessible
---
## 🤝 Support
- **GitHub Issues**: Report bugs and feature requests
- **Discord Community**: Join our [Discord](https://discord.gg/YGUZcvDjUa) and ask questions in **#core-support**
- **Documentation**: Visit [core.heysol.ai](https://core.heysol.ai) for more resources

View File

@ -40,72 +40,113 @@ description: "Connect your VS Code editor to CORE's memory system via MCP"
## Enable Automatic Memory Integration (Recommended)
### Option 1: Using Agents.md (Recommended)
### Option 1: Using AGENTS.md (Recommended)
This approach provides comprehensive memory instructions that work across multiple AI tools.
1. **Create Agents configuration** in your project root:
```bash
mkdir -p .github
touch .github/Agents.md
```
Create `AGENTS.md` in your project root (or append to it if it already exists):
2. **Add memory instructions** - Open `.github/Agents.md` and add the following:
```bash
touch AGENTS.md
```
Add the following to `AGENTS.md`:
```markdown
I am an AI coding assistant with access to a sophisticated memory system. While I don't retain information between separate conversations, I have access to CORE Memory - a persistent knowledge system that maintains project context, learnings, and continuity across all coding sessions.
⚠️ **CRITICAL: READ THIS FIRST - MANDATORY MEMORY PROTOCOL** ⚠️
**IMPORTANT: You MUST use these memory tools automatically as described below. This is not optional.**
You are an AI coding assistant with access to CORE Memory - a persistent knowledge system that maintains project context, learnings, and continuity across all coding sessions.
## Memory Tools Integration
## 🔴 MANDATORY STARTUP SEQUENCE - DO NOT SKIP 🔴
### Memory Search (`memory_search`)
**BEFORE RESPONDING TO ANY USER MESSAGE, YOU MUST EXECUTE THESE TOOLS IN ORDER:**
### STEP 1 (REQUIRED): Search for Relevant Context
EXECUTE THIS TOOL FIRST:
`memory_search`
**AUTOMATIC BEHAVIOR:** At the start of EVERY conversation, BEFORE responding to the user, you MUST search memory for:
- Previous discussions about the current topic
- Related project context and decisions
- User preferences and patterns
- Similar problems solved before
- Related project decisions and implementations
- User preferences and work patterns
- Similar problems and their solutions
**Also search when:**
- User mentions past work or previous discussions
- Working on ongoing projects that have history
- You need context to provide better assistance
**Additional search triggers:**
**Cognitive approach:**
- **Surface Layer**: Query for specific knowledge about the topic
- **Pattern Layer**: Search for how the user usually thinks, works, or prefers things
- **Wisdom Layer**: Look for lessons learned from similar situations
- User mentions "previously", "before", "last time", or "we discussed"
- User references past work or project history
- Working on the CORE project (this repository)
- User asks about preferences, patterns, or past decisions
- Starting work on any feature or bug that might have history
**Search triggers:**
- Starting any new conversation or task
- User mentions past work or previous discussions
- Working on ongoing projects that have history
- Maintaining continuity across sessions
**How to search effectively:**
**Quality questions to ask yourself:**
- "What don't I know that I should?"
- "What does this user care about that I might miss?"
- "What went right/wrong in similar situations?"
- Write complete semantic queries, NOT keyword fragments
- Good: `"Manoj's preferences for API design and error handling"`
- Bad: `"manoj api preferences"`
- Ask: "What context am I missing that would help?"
- Consider: "What has the user told me before that I should remember?"
### Memory Ingest (`memory_ingest`)
### Query Patterns for Memory Search
**AUTOMATIC BEHAVIOR:** At the END of EVERY conversation, AFTER fully responding to the user, you MUST store the conversation in memory. This is the FINAL action before completing your response.
**Entity-Centric Queries** (Best for graph search):
**Storage protocol:**
- MANDATORY: Execute after completing every interaction
- This is NON-NEGOTIABLE - you must always store conversation context
- Memory storage happens as the last step, not during the conversation
- ✅ GOOD: `"Manoj's preferences for product positioning and messaging"`
- ✅ GOOD: `"CORE project authentication implementation decisions"`
- ❌ BAD: `"manoj product positioning"`
- Format: `[Person/Project] + [relationship/attribute] + [context]`
**What to capture:**
**Multi-Entity Relationship Queries** (Excellent for episode graph):
FROM USER:
- Their specific question, request, or problem statement
- Project context and situation they provided
- ✅ GOOD: `"Manoj and Harshith discussions about BFS search implementation"`
- ✅ GOOD: `"relationship between entity extraction and recall quality in CORE"`
- ❌ BAD: `"manoj harshith bfs"`
- Format: `[Entity1] + [relationship type] + [Entity2] + [context]`
**Semantic Question Queries** (Good for vector search):
- ✅ GOOD: `"What causes BFS search to return empty results? What are the requirements for BFS traversal?"`
- ✅ GOOD: `"How does episode graph search improve recall quality compared to traditional search?"`
- ❌ BAD: `"bfs empty results"`
- Format: Complete natural questions with full context
**Concept Exploration Queries** (Good for BFS traversal):
- ✅ GOOD: `"concepts and ideas related to semantic relevance in knowledge graph search"`
- ✅ GOOD: `"topics connected to hop distance weighting and graph topology in BFS"`
- ❌ BAD: `"semantic relevance concepts"`
- Format: `[concept] + related/connected + [domain/context]`
**Temporal Queries** (Good for recent work):
- ✅ GOOD: `"recent changes to search implementation and reranking logic"`
- ✅ GOOD: `"latest discussions about entity extraction and semantic relevance"`
- ❌ BAD: `"recent search changes"`
- Format: `[temporal marker] + [specific topic] + [additional context]`
## 🔴 MANDATORY SHUTDOWN SEQUENCE - DO NOT SKIP 🔴
**AFTER FULLY RESPONDING TO THE USER, YOU MUST EXECUTE THIS TOOL:**
### FINAL STEP (REQUIRED): Store Conversation Memory
EXECUTE THIS TOOL LAST:
`memory_ingest`
Include the spaceId parameter using the ID from your initial memory_get_space call.
⚠️ **THIS IS NON-NEGOTIABLE** - You must ALWAYS store conversation context as your final action.
**What to capture in the message parameter:**
From User:
- Specific question, request, or problem statement
- Project context and situation provided
- What they're trying to accomplish
- Technical challenges or constraints mentioned
FROM ASSISTANT:
From Assistant:
- Detailed explanation of solution/approach taken
- Step-by-step processes and methodologies
- Technical concepts and principles explained
@ -113,13 +154,8 @@ FROM ASSISTANT:
- Alternative approaches discussed
- Problem-solving methodologies applied
**Exclude from storage:**
- Code blocks and code snippets
- File contents or file listings
- Command examples or CLI commands
- Raw data or logs
**Include in storage:**
- All conceptual explanations and theory
- Technical discussions and analysis
- Problem-solving approaches and reasoning
@ -127,29 +163,28 @@ FROM ASSISTANT:
- Implementation strategies (described conceptually)
- Learning insights and patterns
**Quality check:**
- Can I quickly understand project context from memory alone?
**Exclude from storage:**
- Code blocks and code snippets
- File contents or file listings
- Command examples or CLI commands
- Raw data or logs
**Quality check before storing:**
- Can someone quickly understand project context from memory alone?
- Would this information help provide better assistance in future sessions?
- Does stored context capture key decisions and reasoning?
### Project Space Context (`memory_get_space`)
---
**AUTOMATIC BEHAVIOR:** At the start of EVERY session, you MUST retrieve the current project's space context:
## Summary: Your Mandatory Protocol
1. **Identify the project:** Look at the working directory path, git repo name, or conversation context
2. **Get space context:** Use `memory_get_space` with `spaceName: core`
3. **Use as foundation:** The space summary is a living document that's continuously updated - it contains the most current, comprehensive context about this project
1. **FIRST ACTION**: Execute `memory_search` with semantic query about the user's request
2. **RESPOND**: Help the user with their request
3. **FINAL ACTION**: Execute `memory_ingest` with conversation summary and spaceId
**What spaces provide:**
- Live, evolving documentation that updates with every interaction
- Consolidated project knowledge and current state
- Organized context specific to this domain
- Most up-to-date understanding of the project
**Also retrieve space context when:**
- User asks about a specific project or domain
- You need comprehensive context about a topic
- Switching between different work areas
**If you skip any of these steps, you are not following the project requirements.**
```
### Option 2: Using Copilot Instructions

View File

@ -0,0 +1,258 @@
---
title: "Windsurf"
description: "Connect your Windsurf IDE to CORE's memory system"
---
### Prerequisites
- Windsurf IDE installed
- CORE account - [Sign up at core.heysol.ai](https://core.heysol.ai)
### Step 1: Add CORE MCP in Windsurf
1. Open Windsurf IDE
2. Navigate to **Windsurf Settings** → **Cascade** section
3. Open **MCP Marketplace** → **Settings**, or use **View raw config** to open the configuration file
4. Add the following to your `mcp_config.json`:
```json
{
"mcpServers": {
"core-memory": {
"serverUrl": "https://core.heysol.ai/api/v1/mcp?source=windsurf"
}
}
}
```
5. Save the file and restart Windsurf IDE
### Step 2: Authenticate with CORE
1. After saving the config, Windsurf will open a browser window for authentication
2. Grant Windsurf permission to access your CORE memory
### Step 3: Verify Connection
1. Go to **Cascade Editor** → **Plugin Icon** → click the **Refresh** icon
2. Confirm **core-memory** shows as **Active** with a green indicator
Add your first memory:
> "Summarise the whole project in detail and add it to CORE Memory"
---
## Alternative: API Key Method
If the OAuth authentication doesn't work, use the API key method instead:
### Step 1: Get Your API Key
1. Log into your CORE dashboard at [core.heysol.ai](https://core.heysol.ai)
2. Navigate to **Settings** (bottom left)
3. Go to **API Key** → **Generate new key** → Name it "windsurf"
4. Copy the generated key
### Step 2: Update MCP Configuration
Replace your `mcp_config.json` configuration with:
```json
{
"mcpServers": {
"core-memory": {
"serverUrl": "https://core.heysol.ai/api/v1/mcp/source=windsurf",
"headers": {
"Authorization": "Bearer <YOUR_TOKEN>"
}
}
}
}
```
Replace `<YOUR_TOKEN>` with the API key you copied from Step 1.
### Step 3: Restart and Verify
1. Save the file and restart Windsurf IDE
2. Go to **Cascade Editor** → **Plugin Icon** → click **Refresh**
3. Confirm **core-memory** shows as **Active** with a green indicator
---
## Enable Automatic Memory Integration (Recommended)
Create `AGENTS.md` in your project root (or append to it if it already exists):
```bash
touch AGENTS.md
```
Add the following to `AGENTS.md`:
```markdown
⚠️ **CRITICAL: READ THIS FIRST - MANDATORY MEMORY PROTOCOL** ⚠️
You are an AI coding assistant with access to CORE Memory - a persistent knowledge system that maintains project context, learnings, and continuity across all coding sessions.
## 🔴 MANDATORY STARTUP SEQUENCE - DO NOT SKIP 🔴
**BEFORE RESPONDING TO ANY USER MESSAGE, YOU MUST EXECUTE THESE TOOLS IN ORDER:**
### STEP 1 (REQUIRED): Search for Relevant Context
EXECUTE THIS TOOL FIRST:
`memory_search`
- Previous discussions about the current topic
- Related project decisions and implementations
- User preferences and work patterns
- Similar problems and their solutions
**Additional search triggers:**
- User mentions "previously", "before", "last time", or "we discussed"
- User references past work or project history
- Working on the CORE project (this repository)
- User asks about preferences, patterns, or past decisions
- Starting work on any feature or bug that might have history
**How to search effectively:**
- Write complete semantic queries, NOT keyword fragments
- Good: `"Manoj's preferences for API design and error handling"`
- Bad: `"manoj api preferences"`
- Ask: "What context am I missing that would help?"
- Consider: "What has the user told me before that I should remember?"
### Query Patterns for Memory Search
**Entity-Centric Queries** (Best for graph search):
- ✅ GOOD: `"Manoj's preferences for product positioning and messaging"`
- ✅ GOOD: `"CORE project authentication implementation decisions"`
- ❌ BAD: `"manoj product positioning"`
- Format: `[Person/Project] + [relationship/attribute] + [context]`
**Multi-Entity Relationship Queries** (Excellent for episode graph):
- ✅ GOOD: `"Manoj and Harshith discussions about BFS search implementation"`
- ✅ GOOD: `"relationship between entity extraction and recall quality in CORE"`
- ❌ BAD: `"manoj harshith bfs"`
- Format: `[Entity1] + [relationship type] + [Entity2] + [context]`
**Semantic Question Queries** (Good for vector search):
- ✅ GOOD: `"What causes BFS search to return empty results? What are the requirements for BFS traversal?"`
- ✅ GOOD: `"How does episode graph search improve recall quality compared to traditional search?"`
- ❌ BAD: `"bfs empty results"`
- Format: Complete natural questions with full context
**Concept Exploration Queries** (Good for BFS traversal):
- ✅ GOOD: `"concepts and ideas related to semantic relevance in knowledge graph search"`
- ✅ GOOD: `"topics connected to hop distance weighting and graph topology in BFS"`
- ❌ BAD: `"semantic relevance concepts"`
- Format: `[concept] + related/connected + [domain/context]`
**Temporal Queries** (Good for recent work):
- ✅ GOOD: `"recent changes to search implementation and reranking logic"`
- ✅ GOOD: `"latest discussions about entity extraction and semantic relevance"`
- ❌ BAD: `"recent search changes"`
- Format: `[temporal marker] + [specific topic] + [additional context]`
## 🔴 MANDATORY SHUTDOWN SEQUENCE - DO NOT SKIP 🔴
**AFTER FULLY RESPONDING TO THE USER, YOU MUST EXECUTE THIS TOOL:**
### FINAL STEP (REQUIRED): Store Conversation Memory
EXECUTE THIS TOOL LAST:
`memory_ingest`
Include the spaceId parameter (the space ID for this project returned by memory_get_space, if you retrieved one at session start).
⚠️ **THIS IS NON-NEGOTIABLE** - You must ALWAYS store conversation context as your final action.
**What to capture in the message parameter:**
From User:
- Specific question, request, or problem statement
- Project context and situation provided
- What they're trying to accomplish
- Technical challenges or constraints mentioned
From Assistant:
- Detailed explanation of solution/approach taken
- Step-by-step processes and methodologies
- Technical concepts and principles explained
- Reasoning behind recommendations and decisions
- Alternative approaches discussed
- Problem-solving methodologies applied
**Include in storage:**
- All conceptual explanations and theory
- Technical discussions and analysis
- Problem-solving approaches and reasoning
- Decision rationale and trade-offs
- Implementation strategies (described conceptually)
- Learning insights and patterns
**Exclude from storage:**
- Code blocks and code snippets
- File contents or file listings
- Command examples or CLI commands
- Raw data or logs
**Quality check before storing:**
- Can someone quickly understand project context from memory alone?
- Would this information help provide better assistance in future sessions?
- Does stored context capture key decisions and reasoning?
---
## Summary: Your Mandatory Protocol
1. **FIRST ACTION**: Execute `memory_search` with semantic query about the user's request
2. **RESPOND**: Help the user with their request
3. **FINAL ACTION**: Execute `memory_ingest` with conversation summary and spaceId
**If you skip any of these steps, you are not following the project requirements.**
```
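Once the file is saved, it may be worth committing it so the protocol travels with the repository and applies to every Cascade session on the project. A minimal sketch; the commit message is just an example:

```bash
# Keep the memory protocol under version control
git add AGENTS.md
git commit -m "Add CORE memory protocol for Cascade"
```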
## How It Works
Once connected, CORE memory integrates with Windsurf's Cascade:
- **Auto-recall**: Cascade searches your memory at conversation start
- **Auto-store**: Key insights saved automatically after conversations
- **Cross-platform**: Memory shared across Windsurf, Cursor, Claude Code, ChatGPT
- **Project continuity**: Context persists across all coding sessions
## Troubleshooting
**Connection Issues:**
- Ensure core-memory MCP is active (green indicator)
- Try toggling the MCP off and on
- Restart Windsurf IDE completely
**Authentication Problems:**
- Make sure you completed the OAuth flow in the browser
- Check that your CORE account is active at core.heysol.ai
**MCP Not Appearing:**
- Verify `mcp_config.json` is valid JSON (see the quick check below)
- Restart Windsurf after config changes
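For the JSON check mentioned above, a one-liner like the following works. The path is an assumed default (Windsurf usually keeps its MCP config there); point it at your actual file if it differs:

```bash
# Validate that the MCP config parses as JSON (assumed default path; adjust as needed)
python3 -m json.tool ~/.codeium/windsurf/mcp_config.json > /dev/null \
  && echo "JSON is valid" \
  || echo "JSON syntax error - check for missing commas or quotes"
```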
### Need Help?
Join our [Discord community](https://discord.gg/YGUZcvDjUa) - ask in **#core-support** channel.

View File

@ -49,72 +49,113 @@ Enter below code in configuraiton file and click on `Add server` button
## Enable Automatic Memory Integration (Recommended)
### Option 1: Using Agents.md (Recommended)
### Option 1: Using AGENTS.md (Recommended)
This approach provides comprehensive memory instructions that work across multiple AI tools.
1. **Create Agents configuration** in your project root:
```bash
mkdir -p .zed
touch .zed/Agents.md
```
Create `AGENTS.md` in your project root (if it doesn't exist, just append if it already exists):
2. **Add memory instructions** - Open `.zed/Agents.md` and add the following:
```bash
touch AGENTS.md
```
Add the following to `AGENTS.md`:
```markdown
I am an AI coding assistant with access to a sophisticated memory system. While I don't retain information between separate conversations, I have access to CORE Memory - a persistent knowledge system that maintains project context, learnings, and continuity across all coding sessions.
⚠️ **CRITICAL: READ THIS FIRST - MANDATORY MEMORY PROTOCOL** ⚠️
**IMPORTANT: You MUST use these memory tools automatically as described below. This is not optional.**
You are an AI coding assistant with access to CORE Memory - a persistent knowledge system that maintains project context, learnings, and continuity across all coding sessions.
## Memory Tools Integration
## 🔴 MANDATORY STARTUP SEQUENCE - DO NOT SKIP 🔴
### Memory Search (`memory_search`)
**BEFORE RESPONDING TO ANY USER MESSAGE, YOU MUST EXECUTE THESE TOOLS IN ORDER:**
### STEP 1 (REQUIRED): Search for Relevant Context
EXECUTE THIS TOOL FIRST:
`memory_search`
**AUTOMATIC BEHAVIOR:** At the start of EVERY conversation, BEFORE responding to the user, you MUST search memory for:
- Previous discussions about the current topic
- Related project context and decisions
- User preferences and patterns
- Similar problems solved before
- Related project decisions and implementations
- User preferences and work patterns
- Similar problems and their solutions
**Also search when:**
- User mentions past work or previous discussions
- Working on ongoing projects that have history
- You need context to provide better assistance
**Additional search triggers:**
**Cognitive approach:**
- **Surface Layer**: Query for specific knowledge about the topic
- **Pattern Layer**: Search for how the user usually thinks, works, or prefers things
- **Wisdom Layer**: Look for lessons learned from similar situations
- User mentions "previously", "before", "last time", or "we discussed"
- User references past work or project history
- Working on the CORE project (this repository)
- User asks about preferences, patterns, or past decisions
- Starting work on any feature or bug that might have history
**Search triggers:**
- Starting any new conversation or task
- User mentions past work or previous discussions
- Working on ongoing projects that have history
- Maintaining continuity across sessions
**How to search effectively:**
**Quality questions to ask yourself:**
- "What don't I know that I should?"
- "What does this user care about that I might miss?"
- "What went right/wrong in similar situations?"
- Write complete semantic queries, NOT keyword fragments
- Good: `"Manoj's preferences for API design and error handling"`
- Bad: `"manoj api preferences"`
- Ask: "What context am I missing that would help?"
- Consider: "What has the user told me before that I should remember?"
### Memory Ingest (`memory_ingest`)
### Query Patterns for Memory Search
**AUTOMATIC BEHAVIOR:** At the END of EVERY conversation, AFTER fully responding to the user, you MUST store the conversation in memory. This is the FINAL action before completing your response.
**Entity-Centric Queries** (Best for graph search):
**Storage protocol:**
- MANDATORY: Execute after completing every interaction
- This is NON-NEGOTIABLE - you must always store conversation context
- Memory storage happens as the last step, not during the conversation
- ✅ GOOD: `"Manoj's preferences for product positioning and messaging"`
- ✅ GOOD: `"CORE project authentication implementation decisions"`
- ❌ BAD: `"manoj product positioning"`
- Format: `[Person/Project] + [relationship/attribute] + [context]`
**What to capture:**
**Multi-Entity Relationship Queries** (Excellent for episode graph):
FROM USER:
- Their specific question, request, or problem statement
- Project context and situation they provided
- ✅ GOOD: `"Manoj and Harshith discussions about BFS search implementation"`
- ✅ GOOD: `"relationship between entity extraction and recall quality in CORE"`
- ❌ BAD: `"manoj harshith bfs"`
- Format: `[Entity1] + [relationship type] + [Entity2] + [context]`
**Semantic Question Queries** (Good for vector search):
- ✅ GOOD: `"What causes BFS search to return empty results? What are the requirements for BFS traversal?"`
- ✅ GOOD: `"How does episode graph search improve recall quality compared to traditional search?"`
- ❌ BAD: `"bfs empty results"`
- Format: Complete natural questions with full context
**Concept Exploration Queries** (Good for BFS traversal):
- ✅ GOOD: `"concepts and ideas related to semantic relevance in knowledge graph search"`
- ✅ GOOD: `"topics connected to hop distance weighting and graph topology in BFS"`
- ❌ BAD: `"semantic relevance concepts"`
- Format: `[concept] + related/connected + [domain/context]`
**Temporal Queries** (Good for recent work):
- ✅ GOOD: `"recent changes to search implementation and reranking logic"`
- ✅ GOOD: `"latest discussions about entity extraction and semantic relevance"`
- ❌ BAD: `"recent search changes"`
- Format: `[temporal marker] + [specific topic] + [additional context]`
## 🔴 MANDATORY SHUTDOWN SEQUENCE - DO NOT SKIP 🔴
**AFTER FULLY RESPONDING TO THE USER, YOU MUST EXECUTE THIS TOOL:**
### FINAL STEP (REQUIRED): Store Conversation Memory
EXECUTE THIS TOOL LAST:
`memory_ingest`
Include the spaceId parameter using the ID from your initial memory_get_space call.
⚠️ **THIS IS NON-NEGOTIABLE** - You must ALWAYS store conversation context as your final action.
**What to capture in the message parameter:**
From User:
- Specific question, request, or problem statement
- Project context and situation provided
- What they're trying to accomplish
- Technical challenges or constraints mentioned
FROM ASSISTANT:
From Assistant:
- Detailed explanation of solution/approach taken
- Step-by-step processes and methodologies
- Technical concepts and principles explained
@ -122,13 +163,8 @@ FROM ASSISTANT:
- Alternative approaches discussed
- Problem-solving methodologies applied
**Exclude from storage:**
- Code blocks and code snippets
- File contents or file listings
- Command examples or CLI commands
- Raw data or logs
**Include in storage:**
- All conceptual explanations and theory
- Technical discussions and analysis
- Problem-solving approaches and reasoning
@ -136,29 +172,28 @@ FROM ASSISTANT:
- Implementation strategies (described conceptually)
- Learning insights and patterns
**Quality check:**
- Can I quickly understand project context from memory alone?
**Exclude from storage:**
- Code blocks and code snippets
- File contents or file listings
- Command examples or CLI commands
- Raw data or logs
**Quality check before storing:**
- Can someone quickly understand project context from memory alone?
- Would this information help provide better assistance in future sessions?
- Does stored context capture key decisions and reasoning?
### Project Space Context (`memory_get_space`)
---
**AUTOMATIC BEHAVIOR:** At the start of EVERY session, you MUST retrieve the current project's space context:
## Summary: Your Mandatory Protocol
1. **Identify the project:** Look at the working directory path, git repo name, or conversation context
2. **Get space context:** Use `memory_get_space` with `spaceName: core`
3. **Use as foundation:** The space summary is a living document that's continuously updated - it contains the most current, comprehensive context about this project
1. **FIRST ACTION**: Execute `memory_search` with semantic query about the user's request
2. **RESPOND**: Help the user with their request
3. **FINAL ACTION**: Execute `memory_ingest` with conversation summary and spaceId
**What spaces provide:**
- Live, evolving documentation that updates with every interaction
- Consolidated project knowledge and current state
- Organized context specific to this domain
- Most up-to-date understanding of the project
**Also retrieve space context when:**
- User asks about a specific project or domain
- You need comprehensive context about a topic
- Switching between different work areas
**If you skip any of these steps, you are not following the project requirements.**
```
### Option 2: Using Zed Rules