In this tutorial, we provide a practical guide for implementing LangGraph, a streamlined, graph-based AI orchestration framework, integrated seamlessly with Anthropic's Claude API. Through detailed, executable code optimized for Google Colab, developers learn how to build and visualize AI workflows as interconnected nodes performing distinct tasks, such as generating concise answers, critically analyzing responses, and automatically composing technical blog content. The compact implementation highlights LangGraph's intuitive node-graph architecture. It can manage complex sequences of Claude-powered natural language tasks, from basic question-answering scenarios to advanced content generation pipelines.
from getpass import getpass
import os
anthropic_key = getpass("Enter your Anthropic API key: ")
os.environ["ANTHROPIC_API_KEY"] = anthropic_key
print("Key set:", "ANTHROPIC_API_KEY" in os.environ)
We securely prompt users to enter their Anthropic API key using Python's getpass module, ensuring sensitive data isn't displayed. The code then sets this key as an environment variable (ANTHROPIC_API_KEY) and confirms successful storage.
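As a small refinement of the same pattern, you can check the environment first and only prompt when the key is absent, so re-running a notebook cell does not ask for the key again. The helper name `get_api_key` is our own illustration, not part of the tutorial's code:

```python
import os
from getpass import getpass

def get_api_key(var: str = "ANTHROPIC_API_KEY") -> str:
    # Reuse an existing environment variable; only prompt when it is absent.
    key = os.environ.get(var)
    if not key:
        key = getpass(f"Enter your {var}: ")
        os.environ[var] = key
    return key
```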
import os
import json
import requests
from typing import Dict, List, Any, Callable, Optional, Union
from dataclasses import dataclass, field
import networkx as nx
import matplotlib.pyplot as plt
from IPython.display import display, HTML, clear_output
We import essential libraries for building and visualizing structured AI workflows. This includes modules for handling data (json, requests, dataclasses), graph creation and visualization (networkx, matplotlib), interactive notebook display (IPython.display), and type annotations (typing) for clarity and maintainability.
try:
    import anthropic
except ImportError:
    print("Installing anthropic package...")
    !pip install -q anthropic
    import anthropic
from anthropic import Anthropic
We ensure the anthropic Python package is available for use. The code attempts to import the module and, if it is not found, automatically installs it using pip in a Google Colab environment. After installation, it imports the Anthropic client, essential for interacting with Claude models via the Anthropic API.
@dataclass
class NodeConfig:
    name: str
    function: Callable
    inputs: List[str] = field(default_factory=list)
    outputs: List[str] = field(default_factory=list)
    config: Dict[str, Any] = field(default_factory=dict)
This NodeConfig data class defines the structure of each node in the LangGraph workflow. Each node has a name, an executable function, optional inputs and outputs, and an optional config dictionary to store additional parameters. This setup allows for modular, reusable node definitions for graph-based AI tasks.
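To make the dataclass concrete, here is a minimal sketch of instantiating a NodeConfig for a toy transformation node (the `shout` node is purely illustrative and not part of the tutorial's workflow):

```python
from dataclasses import dataclass, field
from typing import Any, Callable, Dict, List

@dataclass
class NodeConfig:
    name: str
    function: Callable
    inputs: List[str] = field(default_factory=list)
    outputs: List[str] = field(default_factory=list)
    config: Dict[str, Any] = field(default_factory=dict)

# A toy node that upper-cases whatever question is in the shared state.
shout = NodeConfig(
    name="shouter",
    function=lambda state, **kw: state["user_question"].upper(),
    inputs=["user_question"],
    outputs=["shouted"],
)
```

Because `default_factory` is used, each instance gets its own fresh lists and dict rather than sharing mutable defaults.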
class LangGraph:
    def __init__(self, api_key: Optional[str] = None):
        self.api_key = api_key or os.environ.get("ANTHROPIC_API_KEY")
        if not self.api_key:
            from google.colab import userdata
            try:
                self.api_key = userdata.get('ANTHROPIC_API_KEY')
                if not self.api_key:
                    raise ValueError("No API key found")
            except:
                print("No Anthropic API key found in environment variables or Colab secrets.")
                self.api_key = input("Please enter your Anthropic API key: ")
                if not self.api_key:
                    raise ValueError("Please provide an Anthropic API key")
        self.client = Anthropic(api_key=self.api_key)
        self.graph = nx.DiGraph()
        self.nodes = {}
        self.state = {}

    def add_node(self, node_config: NodeConfig):
        self.nodes[node_config.name] = node_config
        self.graph.add_node(node_config.name)
        for input_node in node_config.inputs:
            if input_node in self.nodes:
                self.graph.add_edge(input_node, node_config.name)
        return self
    def claude_node(self, name: str, prompt_template: str, model: str = "claude-3-7-sonnet-20250219",
                    inputs: List[str] = None, outputs: List[str] = None, system_prompt: str = None):
        """Convenience method to create a Claude API node"""
        inputs = inputs or []
        outputs = outputs or [name + "_response"]

        def claude_fn(state, **kwargs):
            prompt = prompt_template
            for k, v in state.items():
                if isinstance(v, str):
                    prompt = prompt.replace(f"{{{k}}}", v)
            message_params = {
                "model": model,
                "max_tokens": 1000,
                "messages": [{"role": "user", "content": prompt}]
            }
            if system_prompt:
                message_params["system"] = system_prompt
            response = self.client.messages.create(**message_params)
            return response.content[0].text

        node_config = NodeConfig(
            name=name,
            function=claude_fn,
            inputs=inputs,
            outputs=outputs,
            config={"model": model, "prompt_template": prompt_template}
        )
        return self.add_node(node_config)
    def transform_node(self, name: str, transform_fn: Callable,
                       inputs: List[str] = None, outputs: List[str] = None):
        """Add a data transformation node"""
        inputs = inputs or []
        outputs = outputs or [name + "_output"]
        node_config = NodeConfig(
            name=name,
            function=transform_fn,
            inputs=inputs,
            outputs=outputs
        )
        return self.add_node(node_config)
    def visualize(self):
        """Visualize the graph"""
        plt.figure(figsize=(10, 6))
        pos = nx.spring_layout(self.graph)
        nx.draw(self.graph, pos, with_labels=True, node_color="lightblue",
                node_size=1500, arrowsize=20, font_size=10)
        plt.title("LangGraph Flow")
        plt.tight_layout()
        plt.show()

        print("\nGraph Structure:")
        for node in self.graph.nodes():
            successors = list(self.graph.successors(node))
            if successors:
                print(f"  {node} → {', '.join(successors)}")
            else:
                print(f"  {node} (endpoint)")
        print()
    def _get_execution_order(self):
        """Determine execution order based on dependencies"""
        try:
            return list(nx.topological_sort(self.graph))
        except nx.NetworkXUnfeasible:
            raise ValueError("Graph contains a cycle")
    def execute(self, initial_state: Dict[str, Any] = None):
        """Execute the graph in topological order"""
        self.state = initial_state or {}
        execution_order = self._get_execution_order()
        print("Executing LangGraph flow:")
        for node_name in execution_order:
            print(f"- Running node: {node_name}")
            node = self.nodes[node_name]
            inputs = {k: self.state.get(k) for k in node.inputs if k in self.state}
            result = node.function(self.state, **inputs)
            if len(node.outputs) == 1:
                self.state[node.outputs[0]] = result
            elif isinstance(result, (list, tuple)) and len(result) == len(node.outputs):
                for i, output_name in enumerate(node.outputs):
                    self.state[output_name] = result[i]
        print("Execution completed!")
        return self.state
def run_example(question="What are the key benefits of using a graph-based architecture for AI workflows?"):
    """Run an example LangGraph flow with a predefined question"""
    print(f"Running example with question: '{question}'")
    graph = LangGraph()

    def question_provider(state, **kwargs):
        return question

    graph.transform_node(
        name="question_provider",
        transform_fn=question_provider,
        outputs=["user_question"]
    )
    graph.claude_node(
        name="question_answerer",
        prompt_template="Answer this question clearly and concisely: {user_question}",
        inputs=["user_question"],
        outputs=["answer"],
        system_prompt="You are a helpful AI assistant."
    )
    graph.claude_node(
        name="answer_analyzer",
        prompt_template="Analyze if this answer addresses the question well: Question: {user_question}\nAnswer: {answer}",
        inputs=["user_question", "answer"],
        outputs=["analysis"],
        system_prompt="You are a critical evaluator. Be brief but thorough."
    )
    graph.visualize()
    result = graph.execute()
    print("\n" + "="*50)
    print("EXECUTION RESULTS:")
    print("="*50)
    print(f"\n🔍 QUESTION:\n{result.get('user_question')}\n")
    print(f"📝 ANSWER:\n{result.get('answer')}\n")
    print(f"✅ ANALYSIS:\n{result.get('analysis')}")
    print("="*50 + "\n")
    return graph
The LangGraph class implements a lightweight framework for constructing and executing graph-based AI workflows using Claude from Anthropic. It allows users to define modular nodes, either Claude-powered prompts or custom transformation functions, connect them via dependencies, visualize the entire pipeline, and execute them in topological order. The run_example function demonstrates this by building a simple question-answering and evaluation flow, showcasing the clarity and modularity of LangGraph's architecture.
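The scheduling behind execute can be seen in isolation with networkx alone. The sketch below mirrors what _get_execution_order does for the question-answering flow: node dependencies become directed edges, and a topological sort yields an order in which every node runs after all of its inputs:

```python
import networkx as nx

# Dependencies of the simple example, expressed as directed edges.
g = nx.DiGraph()
g.add_edge("question_provider", "question_answerer")
g.add_edge("question_provider", "answer_analyzer")
g.add_edge("question_answerer", "answer_analyzer")

# A topological sort gives a valid execution order; a cycle would
# raise nx.NetworkXUnfeasible, which LangGraph turns into a ValueError.
order = list(nx.topological_sort(g))
```

Any ordering networkx returns will place question_provider first and answer_analyzer last, which is exactly the run order printed by execute.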
def run_advanced_example():
    """Run a more advanced example with multiple nodes for content generation"""
    graph = LangGraph()

    def topic_selector(state, **kwargs):
        return "Graph-based AI systems"

    graph.transform_node(
        name="topic_selector",
        transform_fn=topic_selector,
        outputs=["topic"]
    )
    graph.claude_node(
        name="outline_generator",
        prompt_template="Create a brief outline for a technical blog post about {topic}. Include 3-4 main sections only.",
        inputs=["topic"],
        outputs=["outline"],
        system_prompt="You are a technical writer specializing in AI technologies."
    )
    graph.claude_node(
        name="intro_writer",
        prompt_template="Write an engaging introduction for a blog post with this outline: {outline}\nTopic: {topic}",
        inputs=["topic", "outline"],
        outputs=["introduction"],
        system_prompt="You are a technical writer. Write in a clear, engaging style."
    )
    graph.claude_node(
        name="conclusion_writer",
        prompt_template="Write a conclusion for a blog post with this outline: {outline}\nTopic: {topic}",
        inputs=["topic", "outline"],
        outputs=["conclusion"],
        system_prompt="You are a technical writer. Summarize key points and include a forward-looking statement."
    )

    def assembler(state, introduction, outline, conclusion, **kwargs):
        return f"# {state['topic']}\n\n{introduction}\n\n## Outline\n{outline}\n\n## Conclusion\n{conclusion}"

    graph.transform_node(
        name="content_assembler",
        transform_fn=assembler,
        inputs=["topic", "introduction", "outline", "conclusion"],
        outputs=["final_content"]
    )
    graph.visualize()
    result = graph.execute()
    print("\n" + "="*50)
    print("BLOG POST GENERATED:")
    print("="*50 + "\n")
    print(result.get("final_content"))
    print("\n" + "="*50)
    return graph
The run_advanced_example function showcases a more sophisticated use of LangGraph by orchestrating multiple Claude-powered nodes to generate a complete blog post. It begins by selecting a topic, then creates an outline, an introduction, and a conclusion, all using structured Claude prompts. Finally, a transformation node assembles the content into a formatted blog post. This example demonstrates how LangGraph can automate complex, multi-step content generation tasks using modular, connected nodes in a clear and executable flow.
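Each Claude node in these examples fills its prompt template from the shared state using plain string replacement. The standalone helper below (our own name, `fill_template`) reproduces that mechanism from claude_fn so it can be tried without an API call:

```python
def fill_template(template: str, state: dict) -> str:
    # Same mechanism as claude_fn: replace each {key} placeholder with the
    # matching string value from the shared workflow state.
    for k, v in state.items():
        if isinstance(v, str):
            template = template.replace(f"{{{k}}}", v)
    return template

prompt = fill_template(
    "Write a conclusion for a blog post with this outline: {outline}\nTopic: {topic}",
    {"topic": "Graph-based AI systems", "outline": "1. Intro\n2. Benefits"},
)
```

Note that only string values are substituted, and unmatched placeholders are left untouched, which keeps the behavior predictable when a node's upstream output is missing.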
print("1. Running simple question-answering example")
question = "What are the three main advantages of using graph-based AI architectures?"
simple_graph = run_example(question)

print("\n2. Running advanced blog post creation example")
advanced_graph = run_advanced_example()
Finally, we trigger the execution of both defined LangGraph workflows. First, it runs the simple question-answering example by passing a predefined question to the run_example() function. Then, it initiates the more advanced blog post generation workflow using run_advanced_example(). Together, these calls demonstrate the practical flexibility of LangGraph, from basic prompt-based interactions to multi-step content automation using Anthropic's Claude API.
In conclusion, we have implemented LangGraph integrated with Anthropic's Claude API, which illustrates the ease of designing modular AI workflows that leverage powerful language models in structured, graph-based pipelines. Through visualizing task flows and separating responsibilities among nodes, such as question processing, analytical evaluation, content outlining, and assembly, developers gain practical experience in building maintainable, scalable AI systems. LangGraph's clear node dependencies and Claude's sophisticated language capabilities provide an efficient solution for orchestrating complex AI processes, especially for rapid prototyping and execution in environments like Google Colab.
Check out the Colab Notebook. All credit for this research goes to the researchers of this project. Also, feel free to follow us on Twitter and don't forget to join our 95k+ ML SubReddit and subscribe to our Newsletter.
Asif Razzaq is the CEO of Marktechpost Media Inc. As a visionary entrepreneur and engineer, Asif is committed to harnessing the potential of Artificial Intelligence for social good. His most recent endeavor is the launch of an Artificial Intelligence Media Platform, Marktechpost, which stands out for its in-depth coverage of machine learning and deep learning news that is both technically sound and easily understandable by a wide audience. The platform boasts over 2 million monthly views, illustrating its popularity among audiences.