LlamaIndex Integration
Full observability for your LlamaIndex RAG pipelines — zero code changes.
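How can tracing require zero code changes? SDKs of this kind typically wrap the library's entry points when `init()` is called, so every call is recorded transparently. The following is a minimal, self-contained sketch of that wrapping pattern, not Nyraxis's actual internals; the `QueryEngine` class and the record fields are hypothetical stand-ins.

```python
import functools
import time

def traced(fn, records):
    """Wrap a callable so each invocation appends a timing record."""
    @functools.wraps(fn)
    def wrapper(*args, **kwargs):
        start = time.perf_counter()
        result = fn(*args, **kwargs)
        records.append({
            "name": fn.__name__,
            "duration_s": time.perf_counter() - start,
        })
        return result
    return wrapper

class QueryEngine:
    """Stand-in for a LlamaIndex query engine (hypothetical)."""
    def query(self, text):
        return f"answer to: {text}"

records = []
# An init() call can patch methods like this once, so user code never changes:
QueryEngine.query = traced(QueryEngine.query, records)

engine = QueryEngine()
engine.query("What is AI governance?")
```

After the patched call, `records` holds one entry whose `name` is `"query"`, which is the kind of span data a tracing backend would then export.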
Install
pip install nyraxis-sdk

Quick start
import nyraxis_sdk
from llama_index.core import VectorStoreIndex, SimpleDirectoryReader
nyraxis_sdk.init(
    api_key="nyx_your_api_key",
    agent_name="my-rag-agent",
)
# Use LlamaIndex as normal — all calls are auto-traced
documents = SimpleDirectoryReader("data").load_data()
index = VectorStoreIndex.from_documents(documents)
query_engine = index.as_query_engine()
response = query_engine.query("What is AI governance?")
nyraxis_sdk.shutdown()

What gets captured
- LLM calls — prompt, completion, token counts, model, cost
- Embedding calls — model, input size
- Retrieval — query, retrieved context
- Governance — policies evaluated (PII detection, cost limits)
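Taken together, the fields above suggest a per-call trace record roughly like the following. This is an illustrative schema only; the field names and values are hypothetical and do not reflect the SDK's actual data format.

```python
from dataclasses import dataclass, field

@dataclass
class LLMCallSpan:
    """One traced LLM call; fields mirror the captured data listed above."""
    model: str
    prompt: str
    completion: str
    prompt_tokens: int
    completion_tokens: int
    cost_usd: float
    # Governance policies evaluated for this call, e.g. PII detection.
    policies_evaluated: list = field(default_factory=list)

# Hypothetical example record for the quick-start query:
span = LLMCallSpan(
    model="gpt-4o-mini",
    prompt="What is AI governance?",
    completion="AI governance refers to ...",
    prompt_tokens=12,
    completion_tokens=40,
    cost_usd=0.0002,
    policies_evaluated=["pii_detection"],
)

total_tokens = span.prompt_tokens + span.completion_tokens
```

A structured record like this is what lets a backend aggregate token counts and cost per agent, and audit which governance policies ran on each call.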