What problem are we solving?
AI agents struggle with statelessness: they forget important context between interactions. Tela Mentis provides durable, searchable memory that evolves over time, enabling:
Persistent Memory
Store conversations, facts, and relationships in a structured knowledge graph that persists between sessions and across multiple users.
Temporal Awareness
Track both when facts were true and when they were learned, enabling time-aware reasoning and context retrieval.
Multi-Agent Collaboration
Enable multiple AI agents to share a common knowledge base while maintaining tenant isolation for security and privacy.
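Taken together, these three properties shape what a single stored fact has to carry. The sketch below is illustrative only; the type and field names (StoredFact, tenantId, validFrom, recordedAt, and so on) are hypothetical assumptions, not the actual Tela Mentis schema.

// Hypothetical sketch of one relationship ("edge") in the knowledge graph.
// Field names are assumptions, not the real Tela Mentis data model.
interface StoredFact {
  tenantId: string;   // every fact is scoped to a single tenant
  subject: string;    // e.g. "Alice"
  predicate: string;  // e.g. "WORKS_AT"
  object: string;     // e.g. "Acme Corp"
  validFrom: Date;    // when the fact became true in the world
  validTo?: Date;     // when it stopped being true (omitted = still true)
  recordedAt: Date;   // when the system learned the fact
}

// Example: the fact used in the extraction snippet below, as such a record
const fact: StoredFact = {
  tenantId: "agent-team-1",
  subject: "Alice",
  predicate: "WORKS_AT",
  object: "Acme Corp",
  validFrom: new Date("2023-01-01"),
  recordedAt: new Date(),
};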
Knowledge Extraction with LLMs
Tela Mentis seamlessly integrates with LLMs to extract structured knowledge from unstructured text. This enables AI agents to build and maintain their knowledge graphs automatically.
// Extract knowledge from conversation
const context = {
  messages: [{
    role: "user",
    content: "Alice started working at Acme Corp in January 2023"
  }]
};

// Knowledge graph is automatically updated
const result = await telaMentis.extract(tenant, context);
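Exactly what extract() returns depends on the configured pipeline, but conceptually the sentence above becomes two entity nodes (Alice and Acme Corp) and one time-qualified relationship (Alice works at Acme Corp, valid from January 2023) in the tenant's graph.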
Pluggable Architecture
Tela Mentis is built with a pluggable architecture that allows you to customize any component while maintaining a consistent core API. This makes it adaptable to any AI application stack.
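As a rough picture of what pluggable means here, imagine each backend sitting behind a narrow interface that alternative implementations can satisfy. The interface and method names below are hypothetical, not the actual Tela Mentis adapter API; they only illustrate the shape of the idea.

// Hypothetical sketch -- interface and method names are illustrative, not the
// actual Tela Mentis adapter API. The point: the core talks to storage (and
// transport, and LLMs) only through narrow interfaces like this one.
type Fact = { subject: string; predicate: string; object: string };

interface StorageAdapter {
  upsertFact(tenantId: string, fact: Fact): Promise<void>;
  findFacts(tenantId: string, subject: string): Promise<Fact[]>;
}

// A trivial in-memory adapter; swapping in a real database-backed adapter
// would not change any code written against StorageAdapter.
class InMemoryStorage implements StorageAdapter {
  private facts = new Map<string, Fact[]>();

  async upsertFact(tenantId: string, fact: Fact): Promise<void> {
    const existing = this.facts.get(tenantId) ?? [];
    this.facts.set(tenantId, [...existing, fact]);
  }

  async findFacts(tenantId: string, subject: string): Promise<Fact[]> {
    return (this.facts.get(tenantId) ?? []).filter((f) => f.subject === subject);
  }
}

A production deployment would plug a real graph database, transport, or LLM provider in behind the same kind of interface, without changing the code that uses it.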
Key Capabilities
Millisecond-Scale Performance
Powered by a high-performance Rust core for millisecond-latency graph operations.
Adapt to Any Stack
Pluggable architecture with adapters for storage, transport, and LLMs.
Time-Aware Reasoning
Bitemporal knowledge model tracks both when facts were true and when they were recorded (see the query sketch below).
Secure Multi-Tenancy
Enterprise-grade isolation between tenants in a single deployment.
Developer-Friendly Tools
Comprehensive CLI tool (kgctl) for all operations and management tasks.
AI-Native Integration
Built for AI agents with a first-class LLM extraction pipeline.
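To make the time-aware reasoning capability concrete, the sketch below shows how a bitemporal "as of" lookup differs from simply reading the latest data: it filters on both the valid-time interval and the time the fact was recorded. Names and signatures are hypothetical, not the actual query API.

// Hypothetical sketch of a bitemporal "as of" filter -- illustrative only,
// not the actual Tela Mentis query API.
interface TemporalFact {
  subject: string;
  predicate: string;
  object: string;
  validFrom: Date;   // when the fact became true in the world
  validTo?: Date;    // omitted = still true
  recordedAt: Date;  // when the system learned it
}

// "What did we believe, as of `asOfRecorded`, was true at `asOfValid`?"
function factsAsOf(
  facts: TemporalFact[],
  asOfValid: Date,
  asOfRecorded: Date
): TemporalFact[] {
  const v = asOfValid.getTime();
  const r = asOfRecorded.getTime();
  return facts.filter(
    (f) =>
      f.recordedAt.getTime() <= r &&                        // already known then
      f.validFrom.getTime() <= v &&                         // had become true
      (f.validTo === undefined || v < f.validTo.getTime())  // had not yet ended
  );
}

Asking the same valid-time question with an earlier asOfRecorded reproduces what an agent believed at that earlier moment, which is what distinguishes a bitemporal model from a simple timestamped log.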