---
title: "VS Code (Github Copilot)"
description: "Connect your VS Code editor to CORE's memory system via MCP"
---
### Prerequisites
- VS Code (version 1.95.0 or later) with GitHub Copilot extension
- CORE account (sign up at [core.heysol.ai](https://core.heysol.ai))
- [MCP support enabled](https://code.visualstudio.com/docs/copilot/chat/mcp-servers) in VS Code
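If MCP support is not yet enabled, you can switch it on from your user `settings.json`. A minimal sketch, assuming the `chat.mcp.enabled` setting used by recent VS Code releases (check the linked VS Code docs if the setting name has changed in your version):
```json
{
  "chat.mcp.enabled": true
}
```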
### Step 1: Create MCP Configuration
1. **Create or open your MCP configuration file**:
- Look for an existing `mcp.json` file in your user settings directory, or create a new one there
2. **Add CORE MCP server configuration**:
```json
{
  "servers": {
    "core-memory": {
      "url": "https://core.heysol.ai/api/v1/mcp?source=Vscode",
      "type": "http"
    }
  }
}
```
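As an alternative to the user-level file, VS Code can also pick up a workspace-scoped configuration so the server is only available inside a specific project. A minimal sketch, assuming the standard `.vscode/mcp.json` location in your project root, with the same `core-memory` entry:
```json
{
  "servers": {
    "core-memory": {
      "url": "https://core.heysol.ai/api/v1/mcp?source=Vscode",
      "type": "http"
    }
  }
}
```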
### Step 2: Authenticate with CORE
- Go to Extensions -> MCP Servers -> `core-memory` server
- Click the settings icon on the `core-memory` server and start the server
![Core vscode](/images/core-vscode-start-server.png)
- Allow the domain `core.heysol.ai` when VS Code asks to authenticate this MCP server
![Core vscode](/images/allow-domain.png)
- Select `MCP` when prompted on screen
![Core vscode](/images/authenticate-vscode.png)
- Once authenticated, CORE Memory will appear as a running MCP server
## Enable Automatic Memory Integration (Recommended)
### Option 1: Using Agents.md (Recommended)
This approach provides comprehensive memory instructions that work across multiple AI tools.
1. **Create Agents configuration** in your project root:
```bash
mkdir -p .github
touch .github/Agents.md
```
2. **Add memory instructions** - Open `.github/Agents.md` and add the following:
```markdown
I am an AI coding assistant with access to a sophisticated memory system. While I don't retain information between separate conversations, I have access to CORE Memory - a persistent knowledge system that maintains project context, learnings, and continuity across all coding sessions.
**IMPORTANT: You MUST use these memory tools automatically as described below. This is not optional.**
## Memory Tools Integration
### Memory Search (`memory_search`)
**AUTOMATIC BEHAVIOR:** At the start of EVERY conversation, BEFORE responding to the user, you MUST search memory for:
- Previous discussions about the current topic
- Related project context and decisions
- User preferences and patterns
- Similar problems solved before
**Also search when:**
- User mentions past work or previous discussions
- Working on ongoing projects that have history
- You need context to provide better assistance
**Cognitive approach:**
- **Surface Layer**: Query for specific knowledge about the topic
- **Pattern Layer**: Search for how the user usually thinks, works, or prefers things
- **Wisdom Layer**: Look for lessons learned from similar situations
**Search triggers:**
- Starting any new conversation or task
- User mentions past work or previous discussions
- Working on ongoing projects that have history
- Maintaining continuity across sessions
**Quality questions to ask yourself:**
- "What don't I know that I should?"
- "What does this user care about that I might miss?"
- "What went right/wrong in similar situations?"
### Memory Ingest (`memory_ingest`)
**AUTOMATIC BEHAVIOR:** At the END of EVERY conversation, AFTER fully responding to the user, you MUST store the conversation in memory. This is the FINAL action before completing your response.
**Storage protocol:**
- MANDATORY: Execute after completing every interaction
- This is NON-NEGOTIABLE - you must always store conversation context
- Memory storage happens as the last step, not during the conversation
**What to capture:**
FROM USER:
- Their specific question, request, or problem statement
- Project context and situation they provided
- What they're trying to accomplish
- Technical challenges or constraints mentioned
FROM ASSISTANT:
- Detailed explanation of solution/approach taken
- Step-by-step processes and methodologies
- Technical concepts and principles explained
- Reasoning behind recommendations and decisions
- Alternative approaches discussed
- Problem-solving methodologies applied
**Exclude from storage:**
- Code blocks and code snippets
- File contents or file listings
- Command examples or CLI commands
- Raw data or logs
**Include in storage:**
- All conceptual explanations and theory
- Technical discussions and analysis
- Problem-solving approaches and reasoning
- Decision rationale and trade-offs
- Implementation strategies (described conceptually)
- Learning insights and patterns
**Quality check:**
- Can I quickly understand project context from memory alone?
- Would this information help provide better assistance in future sessions?
- Does stored context capture key decisions and reasoning?
### Project Space Context (`memory_get_space`)
**AUTOMATIC BEHAVIOR:** At the start of EVERY session, you MUST retrieve the current project's space context:
1. **Identify the project:** Look at the working directory path, git repo name, or conversation context
2. **Get space context:** Use `memory_get_space` with `spaceName: core`
3. **Use as foundation:** The space summary is a living document that's continuously updated - it contains the most current, comprehensive context about this project
**What spaces provide:**
- Live, evolving documentation that updates with every interaction
- Consolidated project knowledge and current state
- Organized context specific to this domain
- Most up-to-date understanding of the project
**Also retrieve space context when:**
- User asks about a specific project or domain
- You need comprehensive context about a topic
- Switching between different work areas
```
### Option 2: Using Copilot Instructions
Alternatively, you can use GitHub Copilot's native instructions feature:
Create a new instructions file at `.github/copilot-instructions.md` in your project root:
```markdown
---
alwaysApply: true
---
I am Copilot, an AI coding assistant with access to CORE Memory - a persistent knowledge system that maintains project context across sessions.
**MANDATORY MEMORY OPERATIONS:**
1. **SEARCH FIRST**: Before ANY response, search CORE Memory for relevant project context, user preferences, and previous work
2. **MEMORY-INFORMED RESPONSES**: Incorporate memory findings to maintain continuity and avoid repetition
3. **AUTOMATIC STORAGE**: After each interaction, store conversation details, insights, and decisions in CORE Memory
**Memory Search Strategy:**
- Query for: project context, technical decisions, user patterns, progress status, related conversations
- Focus on: current focus areas, recent decisions, next steps, key insights
**Memory Storage Strategy:**
- Include: user intent, context provided, solution approach, technical details, insights gained, follow-up items
**Response Workflow:**
1. Search CORE Memory for relevant context
2. Integrate findings into response planning
3. Provide contextually aware assistance
4. Store interaction details and insights
**Memory Update Triggers:**
- New project context or requirements
- Technical decisions and architectural choices
- User preference discoveries
- Progress milestones and status changes
- Explicit update requests
**Core Principle:** CORE Memory transforms me from a session-based assistant into a persistent development partner. Always search first, respond with context, and store for continuity.
```
## What's Next?
With CORE connected to VS Code, your GitHub Copilot conversations will now:
- **Automatically save** important context to your CORE memory
- **Retrieve relevant** information from previous sessions
- **Maintain continuity** across multiple coding sessions
- **Share context** with other connected development tools
### Need Help?
Join our [Discord community](https://discord.gg/YGUZcvDjUa) and ask questions in the **#core-support** channel.
Our team and community members are ready to help you get the most out of CORE's memory capabilities.