
Crystallization Guide

Crystallization is the heart of NeuralSnap — an AI pipeline that transforms messy, unstructured text into structured 16-field Neural Snapshots. Instead of manually filling out every field, you drop in a meeting transcript, article, or conversation and the pipeline extracts, synthesizes, and crystallizes the knowledge for you.

Think of it as distillation for thoughts. A 5,000-word meeting transcript goes in; three or four precisely structured thought-crystals come out — each capturing what was said, why it matters, how to act on it, and how to challenge it.

How the Pipeline Works

Raw Text → Extract → Raw Insights → Synthesize → Refined Insights → Crystallize → Neural Snapshots

Phase 1 — Extraction

The LLM scans your raw text and pulls out discrete claims, decisions, observations, and ideas. Each is tagged with the speaker (if identifiable) and rough significance.

Phase 2 — Synthesis

Duplicate or overlapping insights are merged. Related ideas are combined into higher-level observations. Low-signal noise is filtered out.

Phase 3 — Crystallization

Each refined insight is mapped into the full 16-field Neural Snapshot schema, including confidence scores, challenge questions, and actionable steps.
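The intermediate formats aren't spelled out in this guide, so the sketch below is purely illustrative: the shapes and field names (claim, speaker, significance, core_insight, and so on) are placeholders, not the documented 16-field schema.

bash
# Purely illustrative: placeholder shapes, not the real schema.
cat <<'EOF'
Phase 1 (extract):     { "claim": "Activation rate is 23%, target is 40%", "speaker": "Sarah", "significance": "high" }
Phase 2 (synthesize):  overlapping claims merged, low-signal noise dropped
Phase 3 (crystallize): { "name": "...", "core_insight": "...", "confidence": 0.8, "challenge_questions": [...], "actionable_steps": [...] }
EOF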

Using the Dashboard

The fastest way to crystallize is through the web dashboard. Here's the full flow:

1. Go to the Crystallize page

Click Crystallize in the sidebar or navigate directly to /crystallize.

2. Paste your raw text

Drop in a meeting transcript, article excerpt, conversation log, or any unstructured content. The sweet spot is 2,000–10,000 characters.

3. Select a source type

Choose what kind of content this is: Meeting, Article, Conversation, or Manual. This helps the pipeline calibrate its extraction heuristics.

4. Choose a target brain

Select which brain the resulting snapshots should be saved to. Most users have one default brain, but you can create multiple brains for different domains.

5. Click Crystallize

Hit the Crystallize button. The pipeline typically takes 10–30 seconds depending on input length. You'll see a real-time progress indicator.

6. Review the results

The pipeline returns one or more Neural Snapshots. Review each one — check the name, core insight, confidence score, and suggested tags. Edit any field inline.

7. Accept, reject, or save

Accept snapshots that look good, reject any that aren't useful, and click Save to Brain to persist them. They're immediately searchable and visible in the brain graph.

Using the API

For programmatic access, send a POST to /api/v1/crystallize with your raw text and get back structured Neural Snapshots.

bash
curl -X POST https://neuralsnap.ai/api/v1/crystallize \
  -H "Authorization: Bearer YOUR_API_KEY" \
  -H "Content-Type: application/json" \
  -d '{
    "text": "Q1 Product Sync — Jan 15 2026\n\nAttendees: Sarah (PM), Mike (Eng Lead), Dana (Design)\n\nSarah: We need to ship the onboarding revamp by end of Feb. Activation rate is 23%, target is 40%.\n\nMike: Main blocker is the auth flow rewrite. Migrating from NextAuth to Clerk for SSO. Done by Jan 30.\n\nDana: New onboarding wireframes ready. User testing showed 67% drop-off at the use-case selection step. Simplify to two options — Personal and Team.\n\nSarah: Agreed. Also add an interactive tutorial post-signup. Mike, can you build a guided walkthrough?\n\nMike: Yes. react-joyride or custom. Custom adds 2 weeks but gives full control.\n\nSarah: Go custom. First impressions matter.\n\nAction items:\n1. Mike — auth migration by Jan 30\n2. Dana — simplified onboarding by Jan 22\n3. Mike — scope walkthrough by Jan 20\n4. Sarah — set up A/B test for activation rate",
    "source_type": "meeting"
  }'
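The exact response schema isn't shown here, but assuming the endpoint returns a JSON body, a convenient pattern is to keep the payload in a file and pipe the reply through jq for review. payload.json and response.json are example filenames, not part of the API:

bash
# Send the request from a saved payload file, keep a copy of the reply, and pretty-print it.
# payload.json and response.json are example filenames; the reply fields are whatever the API returns.
curl -s -X POST https://neuralsnap.ai/api/v1/crystallize \
  -H "Authorization: Bearer YOUR_API_KEY" \
  -H "Content-Type: application/json" \
  -d @payload.json \
  | tee response.json \
  | jq .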

Best Practices

Ideal Input Length

The sweet spot is 2,000–10,000 characters (~500–2,500 words). Shorter texts may not yield enough signal for crystallization. Longer texts work but cost more tokens and produce more snapshots.
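If you want a quick pre-flight check before pasting or posting, a rough character count is enough. transcript.txt is a placeholder filename:

bash
# Warn if the input falls outside the recommended 2,000-10,000 character range.
# wc -c counts bytes, which is close enough for a rough check on plain text.
chars=$(wc -c < transcript.txt)
if [ "$chars" -lt 2000 ] || [ "$chars" -gt 10000 ]; then
  echo "transcript.txt is ${chars} characters, outside the recommended 2K-10K range"
fi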

What Works Well

  • Meeting transcripts — decisions, action items, and key insights from multi-speaker dialogue
  • Conversations & interviews — chat logs, Q&A sessions, podcast transcripts
  • Articles & blog posts — distills core arguments, evidence, and conclusions
  • Personal notes & journals — stream-of-consciousness gets structured into actionable insights
  • Email threads — multi-party chains with decisions and context

What Doesn't Work Well

  • Source code — code is already structured; the pipeline can't meaningfully crystallize it
  • Spreadsheets & tabular data — numbers without narrative context produce poor results
  • Very short snippets — a single sentence rarely has enough substance for 16 fields
  • Technical specs — API schemas, config files, and protocol docs aren't narrative

Pro Tip

Include context about who is speaking and what the conversation is about. A transcript that starts with "Q1 Product Sync — Attendees: Sarah (PM), Mike (Eng)" produces much richer snapshots than one that jumps straight into dialogue.
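For example, you can prepend a one-line header to an otherwise raw transcript before submitting it. The header text and filenames below are placeholders to adapt:

bash
# Prepend a short context header so the pipeline knows who is speaking and why.
# The header and filenames are placeholders.
{
  echo "Q1 Product Sync — Attendees: Sarah (PM), Mike (Eng Lead), Dana (Design)"
  echo
  cat transcript.txt
} > transcript_with_context.txt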

16-Field Schema

For the complete field reference with descriptions and examples, see Core Concepts → Neural Snapshots.

Token Usage & Billing

Crystallization uses LLM tokens across all three pipeline phases (extraction, synthesis, crystallization). A typical 5,000-character input uses roughly 8,000–15,000 tokens total.

How Tokens Are Counted

  • Input tokens — your raw text plus system prompts for each phase
  • Output tokens — generated extractions, syntheses, and final snapshots
  • Total usage = input + output across all three phases
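For a very rough back-of-the-envelope estimate, English text tends to run around four characters per token, so you can gauge the contribution of your raw text before running the pipeline. System prompts and generated output across the three phases add more on top, and the exact count depends on the model's tokenizer:

bash
# Back-of-the-envelope only: assumes roughly 4 characters per token for English text.
# Prompts and generated output across the three phases add several thousand more tokens.
chars=$(wc -c < transcript.txt)
echo "~$(( chars / 4 )) input tokens for the raw text alone"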

Tier Limits

Plan        Monthly Token Limit    Note
Starter     500,000                Great for personal use
Pro         2,000,000              For power users & creators
Business    10,000,000             For teams & power users

Reducing Token Usage

  • Trim your input — strip headers, footers, and boilerplate before crystallizing (see the sketch after this list)
  • Stay in the sweet spot — 2K–10K characters for the best insight-per-token ratio
  • Avoid re-crystallizing — the pipeline is idempotent but each run costs tokens
  • Set source_type — helps the pipeline skip irrelevant extraction patterns
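A minimal clean-up sketch, assuming your transcripts carry recognizable boilerplate lines; the patterns and filenames are examples to adapt, not fixed rules:

bash
# Drop obvious boilerplate lines and squeeze runs of blank lines before crystallizing.
# The patterns and filenames are examples; adjust them to your own material.
grep -v -E '^(Recording started|Confidentiality notice|Unsubscribe)' transcript.txt \
  | cat -s > trimmed.txt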