Memory Layer

Memories

Facts. Decisions. Insights. Events.

Seven memory types capture the raw material of thinking. Tag emotions, track speakers, measure confidence. Your AI stops forgetting what happened yesterday and builds on what it learned last month.

Seven memory types for every kind of knowledge

Not all memories are equal. A fact is different from a decision, which is different from an insight. NeuralSnap classifies each memory so you can filter, search, and reason about your knowledge effectively.

📌 fact

Objective information — URLs, numbers, configurations, names.

⚖️ decision

A choice that was made, with context on why.

💡 insight

A realization or pattern recognition — the "aha" moment.

📖 story

A narrative or anecdote that illustrates a point.

🧠 framework

A mental model or structured way of thinking.

🎯 preference

How someone likes things done.

📅 event

A milestone, meeting, or occurrence.
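In TypeScript terms, the seven types form a small closed union, which makes filtering type-safe. A minimal sketch (the `MemoryType` and `Memory` names here are illustrative, not the SDK's own exports):

```typescript
// The seven memory types, mirroring the labels above.
type MemoryType =
  | "fact" | "decision" | "insight" | "story"
  | "framework" | "preference" | "event";

// Minimal shape of a stored memory (illustrative subset of fields).
interface Memory {
  content: string;
  memory_type: MemoryType;
  tags?: string[];
}

const m: Memory = {
  content: "Q2 roadmap review scheduled",
  memory_type: "event",
  tags: ["planning"],
};
```

Because the union is closed, a typo like `"desicion"` fails at compile time instead of silently creating an unclassifiable memory.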

Emotional context that enriches search

Every memory can carry emotional metadata — valence (positive/negative), intensity (how strong), and source emotion (clarity, frustration, surprise). This isn't decoration. Emotional context improves semantic search relevance and helps the crystallization pipeline generate richer snapshots.

  • source_emotion: The dominant feeling (clarity, frustration, surprise, excitement)
  • emotional_valence: -1 (negative) to +1 (positive)
  • emotional_intensity: 0 (neutral) to 1 (overwhelming)
  • Speaker attribution: Track who said what
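The numeric ranges above are easy to enforce client-side before storing. A minimal sketch, assuming the field names match the list (the `normalize` helper is hypothetical, not part of the SDK):

```typescript
// Emotional metadata attached to a memory; field names mirror the list above.
interface EmotionalContext {
  source_emotion: string;        // e.g. "clarity", "frustration", "surprise"
  emotional_valence: number;     // -1 (negative) .. +1 (positive)
  emotional_intensity: number;   // 0 (neutral) .. 1 (overwhelming)
  speaker?: string;              // who said it
}

// Clamp a value into [lo, hi].
const clamp = (x: number, lo: number, hi: number): number =>
  Math.min(hi, Math.max(lo, x));

// Force valence and intensity into their documented ranges.
function normalize(ctx: EmotionalContext): EmotionalContext {
  return {
    ...ctx,
    emotional_valence: clamp(ctx.emotional_valence, -1, 1),
    emotional_intensity: clamp(ctx.emotional_intensity, 0, 1),
  };
}
```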

Search by meaning, not keywords

Every memory is embedded as a 768-dimensional vector at creation time. Search for "how to make AI agents reliable" and find the memory "SOPs produce 10x better output than prompts" — even though they share zero keywords.

// Store a memory
await neuralsnap.memories.remember({
  content: "SOPs produce 10x better agent output than prompts",
  memory_type: "decision",
  speaker: "Sefy",
  tags: ["agents", "operations"],
  source_emotion: "clarity",
  emotional_valence: 0.8,
});

// Search semantically
const results = await neuralsnap.memories.search({
  query: "how to make AI agents reliable",
  limit: 5,
});
// → "SOPs produce 10x better output" (0.91 match)
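Behind a match score like 0.91, vector search typically ranks memories by cosine similarity between the query embedding and each stored embedding. A self-contained sketch of that scoring math (illustrative, not NeuralSnap's internals):

```typescript
// Cosine similarity between two embedding vectors of equal length:
// dot(a, b) / (|a| * |b|), giving 1 for parallel vectors and 0 for orthogonal.
function cosine(a: number[], b: number[]): number {
  let dot = 0, na = 0, nb = 0;
  for (let i = 0; i < a.length; i++) {
    dot += a[i] * b[i];
    na += a[i] * a[i];
    nb += b[i] * b[i];
  }
  return dot / (Math.sqrt(na) * Math.sqrt(nb));
}

// Same direction scores 1 regardless of magnitude.
cosine([1, 0], [2, 0]); // → 1
```

This is why unrelated wording can still match: the 768-dimensional embeddings of "reliable agents" and "SOPs beat prompts" point in nearly the same direction.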

Auto-extract from conversations

Drop a meeting transcript and NeuralSnap extracts every decision, insight, fact, and action item — classified by type, tagged with speaker and emotion. One API call replaces hours of manual note-taking.

  • Works with Fathom, Fireflies, and raw transcripts
  • Extracts 10-30 memories from a typical 30-minute meeting
  • Auto-classifies each memory into the right type
  • Preserves speaker attribution and emotional context
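Once extracted, the 10-30 memories are easy to review by type — all decisions in one pass, all action items in another. A sketch of that post-processing, assuming each extracted memory carries the fields shown (the `Extracted` shape and `groupByType` helper are illustrative):

```typescript
// Shape of one extracted memory (illustrative subset of fields).
interface Extracted {
  content: string;
  memory_type: string;
  speaker?: string;
}

// Bucket extracted memories by type for review.
function groupByType(memories: Extracted[]): Record<string, Extracted[]> {
  const groups: Record<string, Extracted[]> = {};
  for (const mem of memories) {
    (groups[mem.memory_type] ??= []).push(mem);
  }
  return groups;
}

const sample: Extracted[] = [
  { content: "Ship the beta on Friday", memory_type: "decision", speaker: "Sefy" },
  { content: "API p95 latency is 120ms", memory_type: "fact" },
];
const byType = groupByType(sample);
```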

Use Cases

Agent Long-Term Memory

Store what your AI agents learn across sessions. Facts, decisions, and preferences persist and accumulate.

Meeting Memory

Auto-extract memories from Fathom and Fireflies transcripts. Every decision, action item, and insight — captured and searchable.

Personal Journal

Store insights as they happen. Six months later, search "what did I learn about hiring" and get the exact memories back.

Ready to get started?

Create your free account and start building your second brain in minutes.