# LangChain Integration

Add full observability and governance to your LangChain agents — zero code changes required.
## Install

```bash
pip install nyraxis-sdk
```

## Quick start
```python
import nyraxis_sdk
from langchain_openai import ChatOpenAI

nyraxis_sdk.init(
    api_key="nyx_your_api_key",
    agent_name="my-langchain-agent",
)

# Use LangChain as normal — all calls are auto-traced
llm = ChatOpenAI(model="gpt-4o")
response = llm.invoke("Summarize the latest AI news")

nyraxis_sdk.shutdown()
```

## What gets captured
- Every LLM call — model, prompt, completion, token counts, latency
- Tool calls — name, parameters, result
- LCEL chains and LangGraph agents — full span tree
- Cost auto-calculated from token counts + model pricing table
- Governance policies evaluated in real-time (PII, prompt injection, cost limits)
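The cost line above can be sketched as simple arithmetic over token counts and a per-model pricing table. This is an illustrative standalone sketch, not the SDK's internal logic, and the prices in `PRICING` are made-up placeholders rather than real rates:

```python
# Hypothetical pricing table: USD per 1M tokens (placeholder values,
# not actual OpenAI or Nyraxis pricing).
PRICING = {
    "gpt-4o": {"input": 2.50, "output": 10.00},
}

def estimate_cost(model: str, input_tokens: int, output_tokens: int) -> float:
    """Estimate the USD cost of one LLM call from its token counts."""
    p = PRICING[model]
    return (input_tokens * p["input"] + output_tokens * p["output"]) / 1_000_000

# A call using 1,000 input and 500 output tokens:
cost = estimate_cost("gpt-4o", input_tokens=1_000, output_tokens=500)
# 1,000 * 2.50/1M + 500 * 10.00/1M = 0.0025 + 0.0050 = 0.0075 USD
```

Because token counts are already captured on every traced call, cost falls out of the trace data with no extra instrumentation.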
## Trace grouping
```python
from langchain_openai import ChatOpenAI

from nyraxis_sdk import workflow, task

@workflow(name="research-agent")
def run_agent(query):
    return summarize(query)

@task(name="summarize")
def summarize(text):
    llm = ChatOpenAI(model="gpt-4o")
    return llm.invoke(f"Summarize: {text}")
```

View traces at Dashboard → Traces.