# LlamaIndex Integration
Use NeuralSnap as a structured knowledge layer alongside LlamaIndex's RAG pipelines. Instead of flat document chunks, your index returns typed, confidence-scored Neural Snapshots with connections and context.
NeuralSnap complements your vector store — it provides structured knowledge (beliefs, decisions, frameworks) while your vector store handles unstructured document retrieval.
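To make "typed, confidence-scored" concrete, here is an illustrative sketch of the shape a retrieved snapshot node takes. The `SnapshotNode` class below is invented for this example (it is not part of the SDK); the field names mirror the attributes used in the Quick Start example (`node.text`, `node.score`, `node.metadata`).

```python
from dataclasses import dataclass, field

@dataclass
class SnapshotNode:
    """Illustrative shape of a retrieved Neural Snapshot node.

    This class is hypothetical; the real integration returns LlamaIndex
    node objects with equivalent fields.
    """
    text: str                                      # the snapshot's core statement
    score: float                                   # retrieval relevance score
    metadata: dict = field(default_factory=dict)   # type, confidence, tags, ...

node = SnapshotNode(
    text="Decompose complex problems into independently testable parts.",
    score=0.91,
    metadata={
        "name": "First-principles decomposition",
        "type": "Principle",
        "confidence": 0.87,
        "tags": ["problem-solving"],
    },
)
print(f"[{node.score:.2f}] {node.metadata['name']} ({node.metadata['type']})")
```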
## Installation

```bash
pip install neuralsnap-llamaindex
```
## Quick Start

```python
from neuralsnap_llamaindex import NeuralSnapRetriever

# Create a retriever that searches your NeuralSnap brain
retriever = NeuralSnapRetriever(
    api_key="ns_your_api_key",
    brain_id="brain_abc123",
    top_k=5,
)

# Retrieve relevant snapshots
nodes = retriever.retrieve("how to approach complex problems")
for node in nodes:
    print(f"[{node.score:.2f}] {node.metadata['name']}")
    print(f"  Type: {node.metadata['type']}")
    print(f"  Core: {node.text}")
```
## With a Query Engine

Combine NeuralSnap retrieval with LlamaIndex's query engine for RAG that understands structured knowledge:

```python
from llama_index.core.query_engine import RetrieverQueryEngine
from neuralsnap_llamaindex import NeuralSnapRetriever

# NeuralSnap retriever for structured knowledge
ns_retriever = NeuralSnapRetriever(
    api_key="ns_...",
    brain_id="brain_abc123",
    top_k=5,
    include_metadata=True,  # include type, confidence, tags
)

# Build a query engine
query_engine = RetrieverQueryEngine.from_args(
    retriever=ns_retriever,
)

# Query with full context
response = query_engine.query("What do we believe about scaling engineering teams?")
print(response)
# → Response includes snapshot type, confidence scores, and connections
```
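Because NeuralSnap complements rather than replaces a vector store, a common pattern is to run both retrievers and merge their results into one ranked list. The sketch below is plain Python with an invented `merge_results` helper, and it assumes both sources emit comparable 0–1 relevance scores; in a real pipeline you would typically normalize scores or use LlamaIndex's retriever-fusion utilities instead.

```python
# Hypothetical helper: merge structured (NeuralSnap) and unstructured
# (vector store) results by score. Assumes scores are comparable 0-1 values.
def merge_results(ns_nodes, vs_nodes, top_k=5):
    tagged = [("neuralsnap", n) for n in ns_nodes] + [("vector", n) for n in vs_nodes]
    tagged.sort(key=lambda pair: pair[1]["score"], reverse=True)
    return tagged[:top_k]

ns_nodes = [{"text": "We believe small teams ship faster.", "score": 0.92}]
vs_nodes = [
    {"text": "Q3 hiring report, page 4...", "score": 0.78},
    {"text": "Org chart, 2024.", "score": 0.41},
]

for source, node in merge_results(ns_nodes, vs_nodes, top_k=2):
    print(f"{source}: [{node['score']:.2f}] {node['text']}")
```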
## Ingesting Documents

```python
from llama_index.core import Document
from neuralsnap_llamaindex import NeuralSnapIngester

# Ingest documents into NeuralSnap
ingester = NeuralSnapIngester(
    api_key="ns_...",
    brain_id="brain_abc123",
)

# Create a LlamaIndex document
doc = Document(
    text="In our Q4 review, we decided to focus on enterprise...",
    metadata={"source": "q4-review", "date": "2025-01-15"},
)

# Crystallize into Neural Snapshots
result = ingester.ingest(doc)
print(f"Created {len(result.snapshots)} snapshots")
```
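The crystallization itself happens inside the NeuralSnap API, so the mapping from document text to snapshots is opaque at this layer. As a purely illustrative sketch of the document-in, snapshots-out shape (the `crystallize` function and its keyword rules are invented for this example, not part of the SDK):

```python
# Toy illustration only: real crystallization is done server-side by
# NeuralSnap. This keyword splitter just shows one document yielding
# multiple typed snapshots.
def crystallize(text, metadata):
    snapshots = []
    for sentence in (s.strip() for s in text.split(".") if s.strip()):
        if sentence.lower().startswith("we decided"):
            kind = "Decision"
        elif sentence.lower().startswith("we believe"):
            kind = "Belief"
        else:
            continue  # not every sentence becomes a snapshot
        snapshots.append(
            {"type": kind, "core": sentence, "source": metadata.get("source")}
        )
    return snapshots

doc_text = "We decided to focus on enterprise. We believe retention beats acquisition."
snaps = crystallize(doc_text, {"source": "q4-review"})
print(f"Created {len(snaps)} snapshots")
```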
## Configuration

| Property | Type | Description |
|---|---|---|
| `api_key`* | string | Your NeuralSnap API key. |
| `brain_id` | string | Scope retrieval to a specific brain. |
| `top_k` | number | Number of results to return. Default: `5`. |
| `include_metadata` | boolean | Include snapshot type, confidence, and tags in node metadata. Default: `true`. |
| `min_confidence` | number | Filter out results below this confidence threshold (0–1). Default: `0`. |
| `snapshot_types` | list[str] | Restrict results to specific snapshot types (Belief, Model, Rule, Conviction, Principle). |
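The two filter options compose: a result must clear the confidence threshold *and* match one of the requested types. The plain-Python sketch below illustrates that semantics with an invented `apply_filters` helper; the actual filtering is performed server-side by the NeuralSnap API.

```python
# Illustrative semantics of min_confidence and snapshot_types (assumed,
# not the SDK's implementation): both conditions must hold for a result
# to be returned.
def apply_filters(snapshots, min_confidence=0.0, snapshot_types=None):
    return [
        s for s in snapshots
        if s["confidence"] >= min_confidence
        and (snapshot_types is None or s["type"] in snapshot_types)
    ]

snapshots = [
    {"name": "Ship weekly", "type": "Rule", "confidence": 0.9},
    {"name": "Markets are efficient-ish", "type": "Belief", "confidence": 0.6},
    {"name": "Inversion", "type": "Model", "confidence": 0.95},
]

filtered = apply_filters(snapshots, min_confidence=0.7, snapshot_types=["Rule", "Model"])
print([s["name"] for s in filtered])  # → ['Ship weekly', 'Inversion']
```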
## Early access

The LlamaIndex integration is in early access (v0.1.0). The retriever and ingester are fully typed, but they will not connect to the live NeuralSnap API until v0.2.0. Installing now lets you build against a stable API surface ahead of that release.