Gentrace integrates with LangGraph by leveraging OpenTelemetry (OTEL) tracing to automatically capture and monitor your LangGraph agent executions, providing full observability into complex agent workflows and tool usage.
Prerequisites
- Python 3.8 or higher
- Gentrace API key
- OpenAI API key (or other LLM provider credentials)
Installation
pip install gentrace-py langchain langchain-openai langgraph
Configuration
To enable Gentrace tracing with LangGraph, you need to set specific environment variables before importing any LangChain or LangGraph modules:
import os

# Set OpenTelemetry environment variables BEFORE importing LangChain/LangGraph
os.environ['LANGSMITH_OTEL_ENABLED'] = 'true'
os.environ['LANGSMITH_TRACING'] = 'true'

# Now import the libraries
import gentrace
from gentrace import interaction
from langgraph.prebuilt import create_react_agent
from langchain_openai import ChatOpenAI

# Initialize Gentrace
gentrace.init(api_key=os.getenv("GENTRACE_API_KEY"))
The OpenTelemetry environment variables must be set before importing LangChain or LangGraph modules. Setting them after imports will not enable tracing.
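One way to guarantee this ordering is to isolate the environment setup in its own module and import that module first. A minimal sketch (the module name `tracing_setup.py` is just a convention, not a Gentrace requirement):

```python
# tracing_setup.py -- import this module before any LangChain/LangGraph import
import os

# setdefault lets values already exported in the shell take precedence
os.environ.setdefault('LANGSMITH_OTEL_ENABLED', 'true')
os.environ.setdefault('LANGSMITH_TRACING', 'true')
```

Then make `import tracing_setup` the first line of your entry point, ahead of any `langgraph` or `langchain` import.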
Usage Example
Here’s a complete example showing how to trace a LangGraph agent with Gentrace:
import os

# Set OTEL environment variables first
os.environ['LANGSMITH_OTEL_ENABLED'] = 'true'
os.environ['LANGSMITH_TRACING'] = 'true'

import gentrace
from gentrace import interaction
from langgraph.prebuilt import create_react_agent
from langchain_core.tools import tool

# Initialize Gentrace
gentrace.init(api_key=os.getenv("GENTRACE_API_KEY"))

# Define a tool for the agent
@tool
def add(a: int, b: int) -> int:
    """Add two numbers together."""
    return a + b

# Create the agent wrapped in a Gentrace interaction
@interaction(name="math-agent-interaction")
def run_math_agent():
    # Create a ReAct agent with the add tool; the model is given as a
    # provider-prefixed string, so ChatOpenAI does not need to be imported
    math_agent = create_react_agent(
        'openai:gpt-4o',
        tools=[add],
        name='math_agent'
    )

    # Invoke the agent
    result = math_agent.invoke({
        'messages': [{
            'role': 'user',
            'content': "What's 123 + 456?"
        }]
    })
    return result

# Run the traced agent
if __name__ == "__main__":
    result = run_math_agent()
    print(result)
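The dict returned by `invoke` carries the whole conversation in its `messages` list; the final entry is the agent's answer. A sketch of pulling it out, using a stand-in dataclass in place of the real `langchain_core` message objects (the answer text is illustrative):

```python
from dataclasses import dataclass

@dataclass
class Message:            # stand-in for langchain_core's message classes
    role: str
    content: str

# shape of the dict returned by math_agent.invoke(...)
result = {
    'messages': [
        Message('user', "What's 123 + 456?"),
        Message('assistant', '123 + 456 = 579'),
    ]
}

answer = result['messages'][-1].content   # the agent's final reply
print(answer)
```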
Environment Variables
Set these environment variables in your .env file:
# Gentrace configuration
GENTRACE_API_KEY=your-gentrace-api-key
# OpenTelemetry configuration (required for LangGraph tracing)
LANGSMITH_OTEL_ENABLED=true
LANGSMITH_TRACING=true
# LLM provider credentials
OPENAI_API_KEY=your-openai-api-key
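These variables can be exported in your shell or loaded from the `.env` file at startup, for example with the `python-dotenv` package (`load_dotenv()` before any LangChain/LangGraph import). If you prefer to avoid the extra dependency, a minimal loader sketch — it handles only plain `KEY=value` lines and `#` comments, with none of python-dotenv's quoting rules:

```python
import os

def load_env_file(path: str = '.env') -> None:
    """Minimal .env loader: KEY=value lines, '#' comments, no quoting."""
    with open(path) as f:
        for line in f:
            line = line.strip()
            if not line or line.startswith('#') or '=' not in line:
                continue
            key, _, value = line.partition('=')
            # existing environment values win over the file
            os.environ.setdefault(key.strip(), value.strip())
```

Call `load_env_file()` at the very top of your entry point so the OTEL variables are in place before the library imports run.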
How It Works
Gentrace captures LangGraph traces by:
- Intercepting LangSmith traces: The OTEL environment variables redirect LangSmith’s built-in tracing to OpenTelemetry
- Capturing via OpenTelemetry: Gentrace’s OTEL integration automatically collects these traces
- Organizing with interactions: The @interaction decorator groups related agent executions
This approach adds tracing with only environment-variable changes and an optional decorator, leaving your existing LangGraph graph definitions untouched and fully compatible with LangGraph’s features.
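To make the grouping step concrete, here is a toy sketch (not Gentrace internals) of how a decorator like `@interaction` can open one named span around everything the wrapped function does, so that all nested agent and model calls land under it:

```python
import functools

TRACE_LOG = []   # stands in for an OTEL span exporter

def interaction_sketch(name):
    """Toy decorator: records a start/end span around the wrapped call."""
    def wrap(fn):
        @functools.wraps(fn)
        def inner(*args, **kwargs):
            TRACE_LOG.append(('start', name))
            try:
                return fn(*args, **kwargs)
            finally:
                TRACE_LOG.append(('end', name))
        return inner
    return wrap

@interaction_sketch('math-agent-interaction')
def run_agent():
    TRACE_LOG.append(('event', 'model call'))   # stands in for agent work
    return 'done'

run_agent()
```

Everything appended between the `start` and `end` markers belongs to that interaction, which is how related executions end up grouped in the trace view.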
Advanced Usage
Custom Agent Configurations
You can trace more complex agent setups with custom configurations:
import os

# Set OTEL environment variables first
os.environ['LANGSMITH_OTEL_ENABLED'] = 'true'
os.environ['LANGSMITH_TRACING'] = 'true'

import gentrace
from gentrace import interaction
from langgraph.graph import StateGraph, END
from typing import TypedDict, Annotated
from operator import add

# Initialize Gentrace
gentrace.init(api_key=os.getenv("GENTRACE_API_KEY"))

class AgentState(TypedDict):
    messages: Annotated[list, add]   # `add` concatenates each node's update
    next_action: str

def process_node(state):
    # Return only the update; the `add` reducer appends it to the
    # existing messages (mutating and returning the full state would
    # duplicate entries)
    return {"messages": ["Processing completed"], "next_action": "analyze"}

def analyze_node(state):
    # Analyze the processed data
    return {"messages": ["Analysis completed"]}

@interaction(name="custom-workflow-agent")
def run_custom_agent():
    # Build custom graph
    workflow = StateGraph(AgentState)

    # Add nodes and edges
    workflow.add_node("process", process_node)
    workflow.add_node("analyze", analyze_node)
    workflow.add_edge("process", "analyze")
    workflow.add_edge("analyze", END)

    # Set entry point
    workflow.set_entry_point("process")

    # Compile and run
    app = workflow.compile()
    result = app.invoke({"messages": [], "next_action": "start"})
    return result

if __name__ == "__main__":
    result = run_custom_agent()
    print("Final state:", result)
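The `next_action` field suggests a natural extension: dynamic routing, which LangGraph exposes through `add_conditional_edges`. The underlying idea — a router function inspects the state and names the next node — can be sketched in plain Python (this is an illustration of the control flow, not LangGraph's actual implementation):

```python
END = '__end__'

def run_graph(nodes: dict, edges: dict, state: dict, entry: str) -> dict:
    """Tiny graph interpreter: after each node runs, the outgoing edge
    (a fixed node name, or a router function of the state) picks the
    next node until END is reached."""
    current = entry
    while current != END:
        state = nodes[current](state)
        edge = edges[current]
        current = edge(state) if callable(edge) else edge
    return state

def process(state):
    return {**state, 'count': state['count'] + 1}

# router: loop in 'process' until count reaches 3, then stop
def route(state):
    return 'process' if state['count'] < 3 else END

result = run_graph({'process': process}, {'process': route}, {'count': 0}, 'process')
```

In real LangGraph code the equivalent is `workflow.add_conditional_edges("process", route)`, with the router returning a node name or `END`; the Gentrace `@interaction` wrapper is unchanged.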
Multiple Agent Coordination
Track complex multi-agent systems:
import os

# Set OTEL environment variables first
os.environ['LANGSMITH_OTEL_ENABLED'] = 'true'
os.environ['LANGSMITH_TRACING'] = 'true'

import gentrace
from gentrace import interaction
from langgraph.prebuilt import create_react_agent
from langchain_core.tools import tool

# Initialize Gentrace
gentrace.init(api_key=os.getenv("GENTRACE_API_KEY"))

# Define tools for the agents
@tool
def search_tool(query: str) -> str:
    """Search for information about a topic."""
    # Simulate search results
    return f"Search results for '{query}': Lorem ipsum dolor sit amet, consectetur adipiscing elit."

@tool
def write_tool(content: str) -> str:
    """Write formatted content based on input."""
    # Simulate writing process
    return f"Article: {content}\n\nFormatted and enhanced with additional context."

@interaction(name="multi-agent-system")
def coordinate_agents():
    # Create specialized agents
    research_agent = create_react_agent(
        'openai:gpt-4o',
        tools=[search_tool],
        name='researcher'
    )
    writer_agent = create_react_agent(
        'openai:gpt-4o',
        tools=[write_tool],
        name='writer'
    )

    # Research phase
    research_results = research_agent.invoke({
        'messages': [{
            'role': 'user',
            'content': 'Research the latest developments in artificial intelligence'
        }]
    })

    # Extract the content from research results
    research_content = research_results['messages'][-1].content

    # Writing phase using research
    final_output = writer_agent.invoke({
        'messages': [{
            'role': 'user',
            'content': f'Write an article based on this research: {research_content}'
        }]
    })
    return final_output

if __name__ == "__main__":
    result = coordinate_agents()
    print("Final output:", result['messages'][-1].content)