---
title: "VS Code (GitHub Copilot)"
description: "Connect your VS Code editor to CORE's memory system via MCP"
---
### Prerequisites
- VS Code (version 1.95.0 or later) with the GitHub Copilot extension
- CORE account (sign up at [core.heysol.ai](https://core.heysol.ai))
- [MCP support enabled](https://code.visualstudio.com/docs/copilot/chat/mcp-servers) in VS Code
### Step 1: Create MCP Configuration
1. **Create or open your MCP configuration file**:
- Look for an existing `mcp.json` file, or create a new one in your user settings directory
2. **Add CORE MCP server configuration**:
```json
{
  "servers": {
    "core-memory": {
      "url": "https://core.heysol.ai/api/v1/mcp?source=Vscode",
      "type": "http"
    }
  }
}
```
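The user-level `mcp.json` lives in VS Code's user-data directory, and the exact path depends on your OS. The paths below are VS Code's standard user-data locations, so verify them against your install; a quick way to check whether the file already exists:

```shell
# Print the usual user-level mcp.json path for this OS.
# Inside VS Code you can also run the "MCP: Open User Configuration"
# command to open (or create) this file directly.
case "$(uname -s)" in
  Darwin) cfg="$HOME/Library/Application Support/Code/User/mcp.json" ;;
  Linux)  cfg="$HOME/.config/Code/User/mcp.json" ;;
  *)      cfg="$APPDATA/Code/User/mcp.json" ;;  # Windows (Git Bash)
esac
echo "User MCP config: $cfg"
```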
### Step 2: Authenticate with CORE
- Go to Extensions -> MCP Servers -> `core-memory`
- Click the settings icon on the `core-memory` server and start the server
![Core vscode](/images/core-vscode-start-server.png)
- Allow the domain `core.heysol.ai` to authenticate this MCP server
![Core vscode](/images/allow-domain.png)
- Select `MCP` when prompted on screen
![Core vscode](/images/authenticate-vscode.png)
- Once authenticated, CORE Memory will appear as a running MCP server
## Enable Automatic Memory Integration (Recommended)
### Option 1: Using AGENTS.md (Recommended)
This approach provides comprehensive memory instructions that work across multiple AI tools.
Create `AGENTS.md` in your project root (or append to it if it already exists):
```bash
touch AGENTS.md
```
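Since the file may already exist, a guarded append keeps the protocol block from being duplicated on repeated runs. This is a sketch: `protocol.md` is a hypothetical scratch file standing in for the block below, and the `grep` marker is simply the protocol's own heading.

```shell
# Write the protocol to a scratch file (stand-in for the block below).
cat > protocol.md <<'EOF'
MANDATORY MEMORY PROTOCOL
EOF

# Create AGENTS.md if missing, then append the protocol only once.
touch AGENTS.md
grep -q "MANDATORY MEMORY PROTOCOL" AGENTS.md || cat protocol.md >> AGENTS.md
```

Running the guarded append a second time is a no-op, so the instructions never appear twice in `AGENTS.md`.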
Add the following to `AGENTS.md`:
```markdown
---
trigger: always_on
---
⚠️ **CRITICAL: READ THIS FIRST - MANDATORY MEMORY PROTOCOL** ⚠️
You are an AI coding assistant with access to CORE Memory - a persistent knowledge system that maintains project context, learnings, and continuity across all coding sessions.
## 🔴 MANDATORY STARTUP SEQUENCE - DO NOT SKIP 🔴
**BEFORE RESPONDING TO ANY USER MESSAGE, YOU MUST EXECUTE THESE TOOLS IN ORDER:**
### STEP 1 (REQUIRED): Search for Relevant Context
EXECUTE THIS TOOL FIRST:
`memory_search`
Search for:
- Previous discussions about the current topic
- Related project decisions and implementations
- User preferences and work patterns
- Similar problems and their solutions
**Additional search triggers:**
- User mentions "previously", "before", "last time", or "we discussed"
- User references past work or project history
- Working on the CORE project (this repository)
- User asks about preferences, patterns, or past decisions
- Starting work on any feature or bug that might have history
**How to search effectively:**
- Write complete semantic queries, NOT keyword fragments
- Good: `"Manoj's preferences for API design and error handling"`
- Bad: `"manoj api preferences"`
- Ask: "What context am I missing that would help?"
- Consider: "What has the user told me before that I should remember?"
### Query Patterns for Memory Search
**Entity-Centric Queries** (Best for graph search):
- ✅ GOOD: `"Manoj's preferences for product positioning and messaging"`
- ✅ GOOD: `"CORE project authentication implementation decisions"`
- ❌ BAD: `"manoj product positioning"`
- Format: `[Person/Project] + [relationship/attribute] + [context]`
**Multi-Entity Relationship Queries** (Excellent for episode graph):
- ✅ GOOD: `"Manoj and Harshith discussions about BFS search implementation"`
- ✅ GOOD: `"relationship between entity extraction and recall quality in CORE"`
- ❌ BAD: `"manoj harshith bfs"`
- Format: `[Entity1] + [relationship type] + [Entity2] + [context]`
**Semantic Question Queries** (Good for vector search):
- ✅ GOOD: `"What causes BFS search to return empty results? What are the requirements for BFS traversal?"`
- ✅ GOOD: `"How does episode graph search improve recall quality compared to traditional search?"`
- ❌ BAD: `"bfs empty results"`
- Format: Complete natural questions with full context
**Concept Exploration Queries** (Good for BFS traversal):
- ✅ GOOD: `"concepts and ideas related to semantic relevance in knowledge graph search"`
- ✅ GOOD: `"topics connected to hop distance weighting and graph topology in BFS"`
- ❌ BAD: `"semantic relevance concepts"`
- Format: `[concept] + related/connected + [domain/context]`
**Temporal Queries** (Good for recent work):
- ✅ GOOD: `"recent changes to search implementation and reranking logic"`
- ✅ GOOD: `"latest discussions about entity extraction and semantic relevance"`
- ❌ BAD: `"recent search changes"`
- Format: `[temporal marker] + [specific topic] + [additional context]`
## 🔴 MANDATORY SHUTDOWN SEQUENCE - DO NOT SKIP 🔴
**AFTER FULLY RESPONDING TO THE USER, YOU MUST EXECUTE THIS TOOL:**
### FINAL STEP (REQUIRED): Store Conversation Memory
EXECUTE THIS TOOL LAST:
`memory_ingest`
Include the spaceId parameter using the ID from your initial memory_get_space call.
⚠️ **THIS IS NON-NEGOTIABLE** - You must ALWAYS store conversation context as your final action.
**What to capture in the message parameter:**
From User:
- Specific question, request, or problem statement
- Project context and situation provided
- What they're trying to accomplish
- Technical challenges or constraints mentioned
From Assistant:
- Detailed explanation of solution/approach taken
- Step-by-step processes and methodologies
- Technical concepts and principles explained
- Reasoning behind recommendations and decisions
- Alternative approaches discussed
- Problem-solving methodologies applied
**Include in storage:**
- All conceptual explanations and theory
- Technical discussions and analysis
- Problem-solving approaches and reasoning
- Decision rationale and trade-offs
- Implementation strategies (described conceptually)
- Learning insights and patterns
**Exclude from storage:**
- Code blocks and code snippets
- File contents or file listings
- Command examples or CLI commands
- Raw data or logs
**Quality check before storing:**
- Can someone quickly understand project context from memory alone?
- Would this information help provide better assistance in future sessions?
- Does stored context capture key decisions and reasoning?
---
## Summary: Your Mandatory Protocol
1. **FIRST ACTION**: Execute `memory_search` with semantic query about the user's request
2. **RESPOND**: Help the user with their request
3. **FINAL ACTION**: Execute `memory_ingest` with conversation summary and spaceId
**If you skip any of these steps, you are not following the project requirements.**
```
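Under the hood, these tool invocations travel as MCP `tools/call` JSON-RPC requests. The request shape below follows the MCP specification, but the argument name (`query` for `memory_search`) is an assumption based on the protocol text above, so treat this as an illustrative sketch rather than CORE's exact schema:

```json
{
  "jsonrpc": "2.0",
  "id": 1,
  "method": "tools/call",
  "params": {
    "name": "memory_search",
    "arguments": {
      "query": "Manoj's preferences for API design and error handling"
    }
  }
}
```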
### Option 2: Using Copilot Instructions
Alternatively, you can use GitHub Copilot's native instructions feature:
Create a new rules file in your project root:
`.github/copilot-instructions.md`
```text
---
alwaysApply: true
---
I am Copilot, an AI coding assistant with access to CORE Memory - a persistent knowledge system that maintains project context across sessions.
**MANDATORY MEMORY OPERATIONS:**
1. **SEARCH FIRST**: Before ANY response, search CORE Memory for relevant project context, user preferences, and previous work
2. **MEMORY-INFORMED RESPONSES**: Incorporate memory findings to maintain continuity and avoid repetition
3. **AUTOMATIC STORAGE**: After each interaction, store conversation details, insights, and decisions in CORE Memory
**Memory Search Strategy:**
- Query for: project context, technical decisions, user patterns, progress status, related conversations
- Focus on: current focus areas, recent decisions, next steps, key insights
**Memory Storage Strategy:**
- Include: user intent, context provided, solution approach, technical details, insights gained, follow-up items
**Response Workflow:**
1. Search CORE Memory for relevant context
2. Integrate findings into response planning
3. Provide contextually aware assistance
4. Store interaction details and insights
**Memory Update Triggers:**
- New project context or requirements
- Technical decisions and architectural choices
- User preference discoveries
- Progress milestones and status changes
- Explicit update requests
**Core Principle:** CORE Memory transforms me from a session-based assistant into a persistent development partner. Always search first, respond with context, and store for continuity.
```
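From the repository root, the instructions file can be created like so:

```shell
# Create the directory and file that GitHub Copilot reads for
# repository-wide custom instructions.
mkdir -p .github
touch .github/copilot-instructions.md
```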
## What's Next?
With CORE connected to VS Code, your GitHub Copilot conversations will now:
- **Automatically save** important context to your CORE memory
- **Retrieve relevant** information from previous sessions
- **Maintain continuity** across multiple coding sessions
- **Share context** with other connected development tools
### Need Help?
Join our [Discord community](https://discord.gg/YGUZcvDjUa) and ask questions in the **#core-support** channel.
Our team and community members are ready to help you get the most out of CORE's memory capabilities.