Update README.md

- Added benchmark blog link in research section
- Added section - How CORE creates memory
- Added section - How CORE recalls from memory
Manik Aggarwal 2025-09-02 16:07:15 +05:30 committed by GitHub
parent 0b88a2cd49
commit 1995d4a9c6


@@ -48,18 +48,16 @@
## 🔥 Research Highlights
CORE memory achieves **88.24%** average accuracy on the LoCoMo dataset across all reasoning tasks, significantly outperforming other memory providers. Check out this [blog](https://blog.heysol.ai/we-built-memory-for-individuals-and-achieved-sota-on-locomo-benchmark/) for more info.
<img width="6048" height="3428" alt="benchmark" src="https://github.com/user-attachments/assets/2e5fdac5-02ed-4d00-9312-c21d09974e1f" />
(1) Single-hop questions require answers based on a single session; (2) Multi-hop questions require synthesizing information from multiple sessions; (3) Open-domain knowledge questions can be answered by integrating a speaker's provided information with external knowledge such as commonsense or world facts; (4) Temporal reasoning questions can be answered through temporal reasoning and capturing time-related cues within the conversation.
## Overview
**Problem**
Developers waste time re-explaining context to AI tools. Hit token limits in Claude? Start fresh and lose everything. Switch from ChatGPT/Claude to Cursor? Explain your context again. Your conversations, decisions, and insights vanish between sessions. With every new AI tool, the cost of context switching grows.
**Solution** - **CORE** (**Contextual Observation & Recall Engine**)
@@ -78,9 +76,9 @@ CORE is an open-source unified, persistent memory layer for all your AI tools. Y
5. **Test it out** - ask "What do you know about me?" in the conversation section
6. Connect to your tools:
- [Claude](https://docs.heysol.ai/providers/claude) & [Cursor](https://docs.heysol.ai/providers/cursor) - coding with context
- [Claude Code CLI](https://docs.heysol.ai/providers/claude-code) & [Gemini CLI](https://docs.heysol.ai/providers/claude-code) - terminal-based coding with memory
- [Add Browser Extension](https://docs.heysol.ai/providers/browser-extension) - bring your memory to any website
- [Linear](https://docs.heysol.ai/integrations/linear), [Github](https://docs.heysol.ai/integrations/github) - add project context automatically
## 🧩 Key Features
@@ -131,6 +129,37 @@ Connect Linear, Slack, GitHub, Notion once to CORE—then use all their tools in
![core-linear-claude](https://github.com/user-attachments/assets/7d59d92b-8c56-4745-a7ab-9a3c0341aa32)
## How CORE creates memory
<img width="12885" height="3048" alt="memory-ingest-diagram" src="https://github.com/user-attachments/assets/c51679de-8260-4bee-bebf-aff32c6b8e13" />
CORE's ingestion pipeline has four phases designed to capture evolving context:
1. **Normalization**: Links new information to recent context, breaks long documents into coherent chunks while keeping cross-references, and standardizes terms, so that by the time CORE extracts knowledge, it's working with clean, contextualized input instead of messy text.
2. **Extraction**: Pulls meaning from normalized text by identifying entities (people, tools, projects, concepts), turning them into statements with context, source, and time, and mapping relationships. For example, “We wrote CORE in Next.js” becomes: Entities (CORE, Next.js), Statement (CORE was developed using Next.js), and Relationship (was developed using). A sketch of this data model follows the list.
3. **Resolution**: Detects contradictions, tracks how preferences evolve, and preserves multiple perspectives with provenance instead of overwriting them so memory reflects your full journey, not just the latest snapshot.
4. **Graph Integration**: Connects entities, statements, and episodes into a temporal knowledge graph that links facts to their context and history, turning isolated data into a living web of knowledge agents can actually use.
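To make the Extraction and Graph Integration phases concrete, here is a minimal TypeScript sketch of the kind of data model such a pipeline could produce. The type names and fields are illustrative assumptions for this README, not CORE's actual schema:

```typescript
// Illustrative only, not CORE's actual schema. Models the
// "We wrote CORE in Next.js" example from the Extraction phase.

interface Entity {
  id: string;
  name: string;
  kind: "person" | "tool" | "project" | "concept";
}

interface Statement {
  id: string;
  text: string;       // contextualized fact
  subject: string;    // Entity id
  predicate: string;  // relationship, e.g. "was developed using"
  object: string;     // Entity id
  source: string;     // provenance: the episode the fact came from
  validAt: Date;      // when the fact was asserted (temporal dimension)
  invalidAt?: Date;   // set during Resolution if a later fact supersedes it
}

const core: Entity = { id: "e1", name: "CORE", kind: "project" };
const nextjs: Entity = { id: "e2", name: "Next.js", kind: "tool" };

// Graph Integration links the statement to its entities and history;
// Resolution marks superseded facts invalid instead of overwriting them.
const fact: Statement = {
  id: "s1",
  text: "CORE was developed using Next.js",
  subject: core.id,
  predicate: "was developed using",
  object: nextjs.id,
  source: "episode-42",
  validAt: new Date("2025-09-02"),
};
```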
The result: Instead of a flat database, CORE gives you a memory that grows and changes with you - preserving context, evolution, and ownership so agents can actually use it.
![memory-ingest-eg](https://github.com/user-attachments/assets/1d0a8007-153a-4842-9586-f6f4de43e647)
## How CORE recalls from memory
<img width="10610" height="3454" alt="memory-search-diagram" src="https://github.com/user-attachments/assets/3541893e-f7c9-42b9-8fad-6dabf138dbeb" />
When you ask CORE a question, it doesn't just look up text - it digs into your whole knowledge graph to find the most useful answers.
1. **Search**: CORE looks through memory from multiple angles at once - keyword search for exact matches, semantic search for related ideas even if phrased differently, and graph traversal to follow links between connected concepts (a sketch of the full pipeline follows this list).
2. **Re-Rank**: The retrieved results are reordered to highlight the most relevant and diverse ones, ensuring you don't just see the obvious matches but also deeper connections.
3. **Filtering**: CORE applies smart filters based on time, reliability, and relationship strength, so only the most meaningful knowledge surfaces.
4. **Output**: You get back both facts (clear statements) and episodes (the original context they came from), so recall is always grounded in context, time, and story.
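As a rough illustration of how these four stages could compose, here is a self-contained TypeScript sketch over a tiny in-memory store. The shapes, scores, and thresholds are assumptions made up for this example, not CORE's actual API, and graph traversal is simplified into a precomputed link-strength score:

```typescript
// Illustrative recall pipeline over an in-memory store.
// All field names and thresholds are assumptions for this sketch.

interface Fact {
  id: string;
  text: string;
  embeddingScore: number; // stand-in for real semantic similarity
  linkStrength: number;   // relationship strength in the knowledge graph
  recordedAt: Date;
}

const memory: Fact[] = [
  {
    id: "s1",
    text: "CORE was developed using Next.js",
    embeddingScore: 0.91,
    linkStrength: 0.8,
    recordedAt: new Date("2025-06-01"),
  },
  {
    id: "s2",
    text: "The team evaluated CORE on the LoCoMo benchmark",
    embeddingScore: 0.55,
    linkStrength: 0.4,
    recordedAt: new Date("2025-08-20"),
  },
];

function recall(query: string, maxAgeDays = 365): Fact[] {
  // 1. Search: keyword match OR semantic similarity, in one pass here.
  const hits = memory.filter(
    (f) =>
      f.text.toLowerCase().includes(query.toLowerCase()) ||
      f.embeddingScore > 0.5
  );

  // 2. Re-rank: blend semantic score with graph link strength.
  hits.sort(
    (a, b) =>
      b.embeddingScore + b.linkStrength - (a.embeddingScore + a.linkStrength)
  );

  // 3. Filter: drop stale or weakly connected facts.
  const cutoff = Date.now() - maxAgeDays * 24 * 60 * 60 * 1000;
  return hits.filter(
    (f) => f.recordedAt.getTime() >= cutoff && f.linkStrength > 0.2
  );
}

// 4. Output: ranked facts, still tied to their provenance.
console.log(recall("Next.js"));
```

In the real system, the output step would also return the originating episodes alongside the statements, so answers stay grounded in their source context.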
The result: CORE doesn't just recall facts - it recalls them in the right context, time, and story, so agents can respond the way you would remember.
## Documentation
Explore our documentation to get the most out of CORE
@@ -188,3 +217,4 @@ Have questions or feedback? We're here to help: