AI agent memory is one of the most practical problems in production AI right now. Every team building agents eventually hits the same wall – the agent forgets everything between sessions. You end up duct-taping RAG pipelines, vector stores, rules engines, and session logs together just to get basic continuity. cognee, a Berlin-based startup, built a dedicated memory engine to solve exactly this problem. And on February 19, 2026, it announced a $7.5M seed round to scale that work.
The round was led by Pebblebed – the firm run by Pamela Vagata (co-founder of OpenAI) and Keith Adams (founder of Facebook's AI research lab). Additional backing came from 42CAP, Vermilion Ventures, and angel investors from Google DeepMind, n8n, and Snowplow.
The RAG Problem:
Most teams today use retrieval-augmented generation – or RAG – to give agents access to context. It works to a point. But RAG doesn’t understand relationships between entities. It doesn’t track how knowledge changes over time. And as data grows, recall quality drops.
cognee takes a different approach. Instead of just indexing documents and fetching chunks, it builds a knowledge graph from your data. The graph captures relationships, updates over time, and self-tunes based on feedback. The result is an agent that gets more accurate with use, not less.
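The difference is easiest to see in a toy example. The sketch below is purely illustrative (it is not cognee's implementation, and all names are made up): keyword-style chunk retrieval can only return text that overlaps the query, while a graph of explicit entity relationships can answer a multi-hop question by traversal.

```python
# Plain RAG view: isolated text chunks, matched here by naive word overlap.
chunks = [
    "Alice leads the payments team.",
    "The payments team owns the billing service.",
]

def chunk_retrieve(query: str) -> list[str]:
    """Return chunks sharing at least one word with the query."""
    terms = set(query.lower().split())
    return [c for c in chunks if terms & set(c.lower().split())]

# Graph view: the same facts as explicit edges, so "what does Alice own?"
# is answered by following relationships, not by text overlap.
edges = {
    ("Alice", "leads"): "payments team",
    ("payments team", "owns"): "billing service",
}

def reachable(entity: str) -> set[str]:
    """Collect every entity connected to `entity` via any chain of edges."""
    seen: set[str] = set()
    frontier = {entity}
    while frontier:
        nxt = {v for (s, _), v in edges.items() if s in frontier} - seen
        seen |= nxt
        frontier = nxt
    return seen
```

Here `reachable("Alice")` surfaces the billing service two hops away, while `chunk_retrieve("what does Alice own")` matches only the first chunk and misses it entirely – the kind of gap a relationship-aware memory is meant to close.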
How the ECL Pipeline Works:
cognee’s core is what it calls the ECL pipeline – Extract, Cognify, Load. It ingests data from 38+ source types, including PDFs, SQL databases, Excel files, audio, and images. It then structures that data into a knowledge graph with embeddings and explicit entity relationships. Finally, it makes the whole graph searchable and usable by agents.
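As a rough mental model, the three stages can be sketched as plain functions. This is a hypothetical simplification, not cognee's internals – real entity and relation extraction would involve an LLM rather than a naive string split – but it shows the shape of the Extract → Cognify → Load flow.

```python
from dataclasses import dataclass, field

@dataclass
class MemoryGraph:
    nodes: set = field(default_factory=set)
    edges: list = field(default_factory=list)  # (subject, relation, object)

def extract(raw_docs: list[str]) -> list[str]:
    """Normalize heterogeneous sources into clean text records."""
    return [d.strip() for d in raw_docs if d.strip()]

def cognify(records: list[str]) -> MemoryGraph:
    """Stand-in for entity/relation extraction (naive subject-verb-object split)."""
    g = MemoryGraph()
    for r in records:
        subj, rel, obj = r.split(" ", 2)
        g.nodes |= {subj, obj}
        g.edges.append((subj, rel, obj))
    return g

def load(g: MemoryGraph) -> dict:
    """Index edges by subject so an agent can query the graph."""
    idx: dict = {}
    for s, r, o in g.edges:
        idx.setdefault(s, []).append((r, o))
    return idx

index = load(cognify(extract(["Bayer uses cognee", "cognee builds graphs"])))
```

After the pipeline runs, `index["Bayer"]` holds the structured relation rather than a raw text chunk, which is what makes the graph queryable by downstream agents.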
On top of that, a layer called memify refines the graph over time. When an agent gets rated or corrected, those signals feed back into the graph’s edge weights. The memory sharpens with every interaction.
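The feedback mechanism can be illustrated with a minimal sketch. Again, this is not memify's actual implementation – the edge names, learning rate, and update rule are invented for illustration – but it shows the idea: ratings nudge edge weights, so better-rated paths win future lookups.

```python
# Two competing edges for the same relation, starting at equal weight.
edge_weights = {
    ("invoice", "handled_by", "billing service"): 1.0,
    ("invoice", "handled_by", "legacy mailer"): 1.0,
}

def apply_feedback(edge: tuple, rating: float, lr: float = 0.2) -> None:
    """rating in [-1, 1]: positive reinforces the edge, negative decays it."""
    edge_weights[edge] = max(0.0, edge_weights[edge] + lr * rating)

def best_target(subject: str, relation: str) -> str:
    """Pick the highest-weighted object for a (subject, relation) pair."""
    candidates = [(w, e[2]) for e, w in edge_weights.items()
                  if e[0] == subject and e[1] == relation]
    return max(candidates)[1]

# An agent answer via "billing service" gets a thumbs-up; the other, a thumbs-down.
apply_feedback(("invoice", "handled_by", "billing service"), +1)
apply_feedback(("invoice", "handled_by", "legacy mailer"), -1)
```

After those two signals, `best_target("invoice", "handled_by")` resolves to the reinforced edge – the "sharpens with every interaction" behavior, reduced to its simplest form.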
This is built on three unified storage layers – relational, vector, and graph – managed through a single engine. You don’t need to manage three separate systems.
Already Running in Production:
cognee’s growth between 2024 and 2025 tells the production story clearly. Pipeline runs went from around 2,000 to over one million – a 500x increase in a single year. Today, more than 70 companies are running cognee in live environments.
The use cases are concrete. Bayer uses it for scientific research workflows. The University of Wyoming built an evidence graph from scattered policy documents with page-level provenance. Knowunity, an education platform, used cognee to map relationships across 40,000 students in a two-day proof of concept – something that had previously required unwieldy SQL queries or sparse embeddings. Dynamo enriched customer data and deployed personalized support through cognee, with the full solution built and shipped in under a month.
The open-source project has crossed 12,000 GitHub stars with 80+ contributors. It was also accepted into GitHub’s Secure Open Source Program.
Integrations Teams Already Use:
cognee plugs into the tools most AI teams are already working with. It has native integrations with LangGraph, CrewAI, Claude Agent SDK, OpenAI Agents SDK, Google ADK, n8n, Neo4j, Amazon Neptune, and more. If you want to explore how it fits into existing agentic workflows, the cognee docs and integration guides are a solid starting point.
It also supports a wide range of vector databases – Qdrant, LanceDB, Milvus, Redis – and graph databases including Neo4j, FalkorDB, and Kuzu. That flexibility means teams can bring their own infrastructure rather than being locked into a new stack.
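In practice, bring-your-own-infrastructure setups like this are usually wired up through environment configuration. The fragment below is a hypothetical example of that pattern – the exact variable names and accepted values should be checked against cognee's docs rather than copied from here.

```shell
# Illustrative .env fragment: point the memory engine at existing infrastructure.
# Variable names are assumptions for the sake of example, not cognee's documented keys.
VECTOR_DB_PROVIDER=qdrant
VECTOR_DB_URL=http://localhost:6333
GRAPH_DATABASE_PROVIDER=neo4j
GRAPH_DATABASE_URL=bolt://localhost:7687
```

The point of the pattern is that swapping Qdrant for LanceDB, or Neo4j for Kuzu, is a configuration change rather than a rewrite.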
Where the $7.5M Goes:
cognee has outlined four areas for the new funding. First, scaling cognee Cloud so teams can add memory to agents without managing infrastructure. Second, a Rust engine for edge and on-device agents where latency and privacy are priorities. Third, continued cognitive science research to inform how the memory system works. Fourth, open-source expansion – 30+ new data source connectors and multi-database support shipping across Q1 and Q2 2026.
The open-source core stays open. That’s a deliberate decision from the team.