In this comprehensive tutorial, we guide users through creating a robust multi-tool AI agent using LangGraph and Claude, capable of diverse tasks including mathematical computations, web searches, weather inquiries, text analysis, and real-time information retrieval. The tutorial begins by simplifying dependency installation to ensure a smooth setup, even for beginners. Users are then introduced to structured implementations of specialized tools, such as a safe calculator, an efficient web-search utility leveraging DuckDuckGo, a mock weather information provider, a detailed text analyzer, and a time-fetching function. The tutorial also clearly shows how these tools are integrated into a coherent agent architecture built with LangGraph, illustrating practical usage through interactive examples and clear explanations, so that both beginners and advanced developers can quickly deploy custom multi-functional AI agents.
import subprocess
import sys

def install_packages():
    packages = [
        "langgraph",
        "langchain",
        "langchain-anthropic",
        "langchain-community",
        "requests",
        "python-dotenv",
        "duckduckgo-search"
    ]
    for package in packages:
        try:
            subprocess.check_call([sys.executable, "-m", "pip", "install", package, "-q"])
            print(f"✓ Installed {package}")
        except subprocess.CalledProcessError:
            print(f"✗ Failed to install {package}")

print("Installing required packages...")
install_packages()
print("Installation complete!\n")
We automate the installation of the essential Python packages required for building a LangGraph-based multi-tool AI agent. The function uses a subprocess to run pip commands quietly and confirms that each package, ranging from LangChain components to web search and environment-handling tools, installs successfully. This setup streamlines environment preparation, making the notebook portable and beginner-friendly.
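As a small optional refinement (an assumption on our part, not part of the tutorial's code), you could skip pip calls for packages that are already importable. A stdlib-only sketch:

```python
# Sketch: detect which packages still need installing, using only the
# standard library. Note the pip name ("langchain-anthropic") can differ
# from the import name ("langchain_anthropic"), so the names below are
# illustrative import names, not pip package names.
import importlib.util

def missing_packages(packages):
    """Return the subset of module names that cannot currently be imported."""
    return [pkg for pkg in packages if importlib.util.find_spec(pkg) is None]

print(missing_packages(["json", "math", "definitely_not_installed_xyz"]))
```

Feeding this list to `install_packages()` would avoid redundant pip invocations on re-runs of the notebook.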
import os
import json
import math
import requests
from typing import Dict, List, Any, Annotated, TypedDict
from datetime import datetime
import operator
from langchain_core.messages import BaseMessage, HumanMessage, AIMessage, ToolMessage
from langchain_core.tools import tool
from langchain_anthropic import ChatAnthropic
from langgraph.graph import StateGraph, START, END
from langgraph.prebuilt import ToolNode
from langgraph.checkpoint.memory import MemorySaver
from duckduckgo_search import DDGS
We import all the necessary libraries and modules for setting up the multi-tool AI agent. These include Python standard libraries such as os, json, math, and datetime for general-purpose functionality, plus external libraries like requests for HTTP calls and duckduckgo_search for web search. The LangChain and LangGraph ecosystems bring in message types, tool decorators, state-graph components, and checkpointing utilities, while ChatAnthropic enables integration with the Claude model for conversational intelligence. These imports form the foundational building blocks for defining tools, agent workflows, and interactions.
os.environ["ANTHROPIC_API_KEY"] = "Use Your API Key Here"
ANTHROPIC_API_KEY = os.getenv("ANTHROPIC_API_KEY")
We set and retrieve the Anthropic API key required to authenticate and interact with Claude models. The os.environ line assigns your API key (which you should replace with a valid key), while os.getenv retrieves it for later use in model initialization. This approach keeps the key accessible throughout the script without hardcoding it multiple times.
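For scripts you intend to share, a common refinement (our suggestion, not part of the tutorial) is to fail fast with a clear message when the key is absent instead of hitting a confusing authentication error later. A minimal sketch, where `require_api_key` is a hypothetical helper:

```python
import os

def require_api_key(name: str) -> str:
    """Return the named environment variable or raise a clear error."""
    value = os.getenv(name)
    if not value:
        raise RuntimeError(f"{name} is not set; export it before running the agent.")
    return value

# Placeholder value for demonstration only, not a real key.
os.environ["ANTHROPIC_API_KEY"] = "sk-demo-placeholder"
key = require_api_key("ANTHROPIC_API_KEY")
print(f"Key loaded ({len(key)} characters)")
```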
from typing import TypedDict

class AgentState(TypedDict):
    messages: Annotated[List[BaseMessage], operator.add]

@tool
def calculator(expression: str) -> str:
    """
    Perform mathematical calculations. Supports basic arithmetic, trigonometry, and more.

    Args:
        expression: Mathematical expression as a string (e.g., "2 + 3 * 4", "sin(3.14159/2)")

    Returns:
        Result of the calculation as a string
    """
    try:
        allowed_names = {
            'abs': abs, 'round': round, 'min': min, 'max': max,
            'sum': sum, 'pow': pow, 'sqrt': math.sqrt,
            'sin': math.sin, 'cos': math.cos, 'tan': math.tan,
            'log': math.log, 'log10': math.log10, 'exp': math.exp,
            'pi': math.pi, 'e': math.e
        }
        expression = expression.replace('^', '**')
        result = eval(expression, {"__builtins__": {}}, allowed_names)
        return f"Result: {result}"
    except Exception as e:
        return f"Error in calculation: {str(e)}"
We define the agent's internal state and implement a robust calculator tool. The AgentState class uses TypedDict to structure agent memory, specifically tracking the messages exchanged during the conversation. The calculator function, decorated with @tool to register it as an AI-usable utility, safely evaluates mathematical expressions. It restricts evaluation to a predefined set of names from the math module and converts the common ^ syntax to Python's ** exponentiation operator. This lets the tool handle simple arithmetic as well as trigonometry and logarithms while preventing unsafe code execution.
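To see why this restricted-eval pattern resists code injection, here is a standalone sketch (a simplified illustration with a shorter whitelist than the tutorial's): passing an empty `__builtins__` dict blocks all built-in functions, so only whitelisted names resolve inside the expression.

```python
import math

# A small whitelist of safe names exposed to the expression.
SAFE_NAMES = {"sqrt": math.sqrt, "sin": math.sin, "pi": math.pi, "abs": abs}

def safe_eval(expression: str):
    expression = expression.replace("^", "**")  # accept caret for exponentiation
    # Empty __builtins__ blocks __import__, open, exec, etc.
    return eval(expression, {"__builtins__": {}}, SAFE_NAMES)

print(safe_eval("sqrt(144) + 5 * 3"))  # 27.0
print(safe_eval("2 ^ 10"))             # 1024
# safe_eval("__import__('os')") raises NameError: the name cannot resolve.
```

Note that eval-based calculators are still not bulletproof (e.g., a huge exponent like `9**9**9` can hang the process), so treat this as a pragmatic demo-level safeguard rather than a full sandbox.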
@tool
def web_search(query: str, num_results: int = 3) -> str:
    """
    Search the web for information using DuckDuckGo.

    Args:
        query: Search query string
        num_results: Number of results to return (default: 3, max: 10)

    Returns:
        Search results as a formatted string
    """
    try:
        num_results = min(max(num_results, 1), 10)
        with DDGS() as ddgs:
            results = list(ddgs.text(query, max_results=num_results))
        if not results:
            return f"No search results found for: {query}"
        formatted_results = f"Search results for '{query}':\n\n"
        for i, result in enumerate(results, 1):
            formatted_results += f"{i}. **{result['title']}**\n"
            formatted_results += f"   {result['body']}\n"
            formatted_results += f"   Source: {result['href']}\n\n"
        return formatted_results
    except Exception as e:
        return f"Error performing web search: {str(e)}"
We define a web_search tool that lets the agent fetch real-time information from the internet using DuckDuckGo via the duckduckgo_search Python package. The tool accepts a search query and an optional num_results parameter, clamping the number of returned results to between 1 and 10. It opens a DuckDuckGo search session, retrieves the results, and formats them neatly for display. If no results are found or an error occurs, the function handles it gracefully by returning an informative message. This tool equips the agent with real-time search capabilities.
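The formatting step can be exercised without network access or the duckduckgo_search package. This sketch (hand-written sample data, same output shape as the tool above) isolates just that logic:

```python
def format_results(query, results):
    """Render a list of {title, body, href} dicts as a numbered text block."""
    lines = [f"Search results for '{query}':", ""]
    for i, r in enumerate(results, 1):
        lines.append(f"{i}. **{r['title']}**")
        lines.append(f"   {r['body']}")
        lines.append(f"   Source: {r['href']}")
        lines.append("")
    return "\n".join(lines)

sample = [{"title": "Python", "body": "Official site.", "href": "https://python.org"}]
print(format_results("python", sample))
```

Keeping formatting separate from fetching like this also makes the tool easier to unit-test.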
@tool
def weather_info(city: str) -> str:
    """
    Get current weather information for a city using OpenWeatherMap API.
    Note: This is a mock implementation for demo purposes.

    Args:
        city: Name of the city

    Returns:
        Weather information as a string
    """
    mock_weather = {
        "new york": {"temp": 22, "condition": "Partly Cloudy", "humidity": 65},
        "london": {"temp": 15, "condition": "Rainy", "humidity": 80},
        "tokyo": {"temp": 28, "condition": "Sunny", "humidity": 70},
        "paris": {"temp": 18, "condition": "Overcast", "humidity": 75}
    }
    city_lower = city.lower()
    if city_lower in mock_weather:
        weather = mock_weather[city_lower]
        return (f"Weather in {city}:\n"
                f"Temperature: {weather['temp']}°C\n"
                f"Condition: {weather['condition']}\n"
                f"Humidity: {weather['humidity']}%")
    else:
        return f"Weather data not available for {city}. (This is a demo with limited cities: New York, London, Tokyo, Paris)"
We define a weather_info tool that simulates retrieving current weather data for a given city. Although it does not connect to a live weather API, it uses a predefined dictionary of mock data for major cities like New York, London, Tokyo, and Paris. Upon receiving a city name, the function normalizes it to lowercase and checks for its presence in the mock dataset. If found, it returns the temperature, weather condition, and humidity in a readable format; otherwise, it notifies the user that weather data is unavailable. This tool serves as a placeholder that can later be upgraded to fetch live data from an actual weather API.
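One optional enhancement (our suggestion, not in the tutorial): since users often misspell city names, the lookup could tolerate small typos with the standard library's difflib before falling back to the "not available" message. A minimal sketch:

```python
import difflib

# Simplified mock table (temperatures only) for illustration.
MOCK_WEATHER = {"new york": 22, "london": 15, "tokyo": 28, "paris": 18}

def lookup_city(city: str):
    """Return the matching mock-data key, fuzzy-matching small typos."""
    key = city.strip().lower()
    if key in MOCK_WEATHER:
        return key
    close = difflib.get_close_matches(key, MOCK_WEATHER, n=1, cutoff=0.7)
    return close[0] if close else None

print(lookup_city("Tokio"))   # matches "tokyo" despite the typo
print(lookup_city("Berlin"))  # None: no close match in the mock data
```

The `cutoff=0.7` threshold is a judgment call; raise it if false matches are worse than misses.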
import re

@tool
def text_analyzer(text: str) -> str:
    """
    Analyze text and provide statistics like word count, character count, etc.

    Args:
        text: Text to analyze

    Returns:
        Text analysis results
    """
    if not text.strip():
        return "Please provide text to analyze."
    words = text.split()
    # Split on sentence-ending punctuation so each sentence is counted once.
    sentences = [s.strip() for s in re.split(r'[.!?]+', text) if s.strip()]
    analysis = f"Text Analysis Results:\n"
    analysis += f"• Characters (with spaces): {len(text)}\n"
    analysis += f"• Characters (without spaces): {len(text.replace(' ', ''))}\n"
    analysis += f"• Words: {len(words)}\n"
    analysis += f"• Sentences: {len(sentences)}\n"
    analysis += f"• Average words per sentence: {len(words) / max(len(sentences), 1):.1f}\n"
    analysis += f"• Most common word: {max(set(words), key=words.count) if words else 'N/A'}"
    return analysis
The text_analyzer tool provides a detailed statistical analysis of a given text input. It calculates metrics such as character count (with and without spaces), word count, sentence count, and average words per sentence, and it identifies the most frequently occurring word. The tool handles empty input gracefully by prompting the user to provide valid text. It is a useful utility for language analysis or content quality checks in the AI agent's toolkit.
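The core statistics can be verified in isolation. This stdlib-only sketch (an illustration, not the tool's exact code) computes word count, sentence count, and the most common word, using collections.Counter for the frequency step:

```python
import re
from collections import Counter

def sentence_stats(text: str):
    """Return (word_count, sentence_count, most_common_word)."""
    sentences = [s.strip() for s in re.split(r"[.!?]+", text) if s.strip()]
    words = text.split()
    most_common = Counter(words).most_common(1)[0][0] if words else "N/A"
    return len(words), len(sentences), most_common

print(sentence_stats("LangGraph is great. Agents are fun! Really?"))  # (7, 3, 'LangGraph')
```

A Counter scans the word list once, which scales better than calling `words.count` inside `max` for long inputs.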
@tool
def current_time() -> str:
    """
    Get the current date and time.

    Returns:
        Current date and time as a formatted string
    """
    now = datetime.now()
    return f"Current date and time: {now.strftime('%Y-%m-%d %H:%M:%S')}"
The current_time tool provides a straightforward way to retrieve the current system date and time in a human-readable format. Using Python's datetime module, it captures the present moment and formats it as YYYY-MM-DD HH:MM:SS. This utility is particularly useful for time-stamping responses or answering user queries about the current date and time within the agent's interaction flow.
tools = [calculator, web_search, weather_info, text_analyzer, current_time]

def create_llm():
    if ANTHROPIC_API_KEY:
        return ChatAnthropic(
            model="claude-3-haiku-20240307",
            temperature=0.1,
            max_tokens=1024
        )
    else:
        class MockLLM:
            def invoke(self, messages):
                last_message = messages[-1].content if messages else ""
                if any(word in last_message.lower() for word in ['calculate', 'math', '+', '-', '*', '/', 'sqrt', 'sin', 'cos']):
                    import re
                    numbers = re.findall(r'[\d+\-*/.()\s\w]+', last_message)
                    expr = numbers[0] if numbers else "2+2"
                    return AIMessage(content="I'll help you with that calculation.",
                                     tool_calls=[{"name": "calculator", "args": {"expression": expr.strip()}, "id": "calc1"}])
                elif any(word in last_message.lower() for word in ['search', 'find', 'look up', 'information about']):
                    query = last_message.replace('search for', '').replace('find', '').replace('look up', '').strip()
                    if not query or len(query) < 3:
                        query = "python programming"
                    return AIMessage(content="I'll search for that information.",
                                     tool_calls=[{"name": "web_search", "args": {"query": query}, "id": "search1"}])
                elif any(word in last_message.lower() for word in ['weather', 'temperature']):
                    city = "New York"
                    words = last_message.lower().split()
                    for i, word in enumerate(words):
                        if word == 'in' and i + 1 < len(words):
                            city = words[i + 1].title()
                            break
                    return AIMessage(content="I'll get the weather information.",
                                     tool_calls=[{"name": "weather_info", "args": {"city": city}, "id": "weather1"}])
                elif any(word in last_message.lower() for word in ['time', 'date']):
                    return AIMessage(content="I'll get the current time.",
                                     tool_calls=[{"name": "current_time", "args": {}, "id": "time1"}])
                elif any(word in last_message.lower() for word in ['analyze', 'analysis']):
                    text = last_message.replace('analyze this text:', '').replace('analyze', '').strip()
                    if not text:
                        text = "Sample text for analysis"
                    return AIMessage(content="I'll analyze that text for you.",
                                     tool_calls=[{"name": "text_analyzer", "args": {"text": text}, "id": "analyze1"}])
                else:
                    return AIMessage(content="Hello! I'm a multi-tool agent powered by Claude. I can help with:\n• Mathematical calculations\n• Web searches\n• Weather information\n• Text analysis\n• Current time/date\n\nWhat would you like me to help you with?")

            def bind_tools(self, tools):
                return self

        print("⚠️ Note: Using mock LLM for demo. Add your ANTHROPIC_API_KEY for full functionality.")
        return MockLLM()

llm = create_llm()
llm_with_tools = llm.bind_tools(tools)
We initialize the language model that powers the AI agent. If a valid Anthropic API key is available, the code uses the Claude 3 Haiku model for high-quality responses. Without an API key, a MockLLM is defined to simulate basic tool-routing behavior based on keyword matching, allowing the agent to function offline with limited capabilities. The bind_tools method links the defined tools to the model, enabling it to invoke them as needed.
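The keyword-routing idea behind the MockLLM can be distilled into a few lines of plain Python. This is a deliberately simplified sketch (our condensation, not the class above): each tool name maps to trigger keywords, and the first match wins.

```python
# Ordered mapping: earlier entries take precedence on multiple matches.
ROUTES = {
    "calculator": ["calculate", "math", "sqrt"],
    "web_search": ["search", "find", "look up"],
    "weather_info": ["weather", "temperature"],
    "current_time": ["time", "date"],
    "text_analyzer": ["analyze"],
}

def route(message: str) -> str:
    """Pick a tool name by keyword, or 'chat' to answer directly."""
    lowered = message.lower()
    for tool_name, keywords in ROUTES.items():
        if any(k in lowered for k in keywords):
            return tool_name
    return "chat"

print(route("What's the weather in Tokyo?"))  # weather_info
```

With a real API key this routing is unnecessary: Claude itself decides which bound tool to call from the tools' docstrings and argument schemas.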
def agent_node(state: AgentState) -> Dict[str, Any]:
    """Main agent node that processes messages and decides on tool usage."""
    messages = state["messages"]
    response = llm_with_tools.invoke(messages)
    return {"messages": [response]}

def should_continue(state: AgentState) -> str:
    """Determine whether to continue with tool calls or end."""
    last_message = state["messages"][-1]
    if hasattr(last_message, 'tool_calls') and last_message.tool_calls:
        return "tools"
    return END
We define the agent's core decision-making logic. The agent_node function handles incoming messages, invokes the language model (with tools bound), and returns the model's response. The should_continue function then checks whether the model's response includes tool calls. If so, it routes control to the tool execution node; otherwise, it ends the interaction. These functions enable dynamic, conditional transitions within the agent's workflow.
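The conditional edge boils down to one check on the last message. Here is a self-contained sketch of that decision (the Msg dataclass is a simplified stand-in for LangChain's message types, not the real class):

```python
from dataclasses import dataclass, field

END = "__end__"  # stand-in for LangGraph's END sentinel

@dataclass
class Msg:
    content: str
    tool_calls: list = field(default_factory=list)

def should_continue(messages):
    """Route to 'tools' only when the last message carries tool calls."""
    last = messages[-1]
    return "tools" if getattr(last, "tool_calls", None) else END

print(should_continue([Msg("hi")]))                                        # __end__
print(should_continue([Msg("calc", tool_calls=[{"name": "calculator"}])])) # tools
```

The `getattr` with a default mirrors the tutorial's `hasattr` guard: a plain text reply without a `tool_calls` attribute ends the turn rather than raising.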
def create_agent_graph():
    tool_node = ToolNode(tools)
    workflow = StateGraph(AgentState)
    workflow.add_node("agent", agent_node)
    workflow.add_node("tools", tool_node)
    workflow.add_edge(START, "agent")
    workflow.add_conditional_edges("agent", should_continue, {"tools": "tools", END: END})
    workflow.add_edge("tools", "agent")
    memory = MemorySaver()
    app = workflow.compile(checkpointer=memory)
    return app

print("Creating LangGraph Multi-Tool Agent...")
agent = create_agent_graph()
print("✓ Agent created successfully!\n")
We assemble the LangGraph-powered workflow that defines the AI agent's operational structure. It initializes a ToolNode to handle tool executions and uses a StateGraph to organize the flow between agent decisions and tool usage. Nodes and edges manage the transitions: starting with the agent, conditionally routing to tools, and looping back as needed. A MemorySaver is integrated for persistent state tracking across turns. The graph is compiled into an executable application, enabling a structured, memory-aware multi-tool agent ready for deployment.
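Conceptually, the compiled graph runs an agent→tools→agent loop until the model stops requesting tools. The following standalone sketch makes that control flow explicit, with toy stand-ins for the model and tool nodes (plain Python, not LangGraph's actual API):

```python
def run_loop(user_input, llm_step, tool_step, max_turns=5):
    """Simulate the graph: agent node, conditional edge, tools node, repeat."""
    messages = [("human", user_input)]
    for _ in range(max_turns):
        reply, tool_call = llm_step(messages)             # "agent" node
        messages.append(("ai", reply))
        if tool_call is None:                             # conditional edge to END
            break
        messages.append(("tool", tool_step(tool_call)))   # "tools" node, then loop
    return messages

# Toy steps: request the tool once, then answer from its result.
def llm_step(messages):
    if messages[-1][0] == "tool":
        return f"The answer is {messages[-1][1]}.", None
    return "Let me compute that.", "2+2"

tool_step = lambda expr: str(eval(expr))
print(run_loop("What is 2+2?", llm_step, tool_step))
```

The `max_turns` bound guards against a model that keeps requesting tools forever, a safeguard LangGraph exposes through its own recursion limit.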
def test_agent():
    """Test the agent with various queries."""
    config = {"configurable": {"thread_id": "test-thread"}}
    test_queries = [
        "What's 15 * 7 + 23?",
        "Search for information about Python programming",
        "What's the weather like in Tokyo?",
        "What time is it?",
        "Analyze this text: 'LangGraph is an amazing framework for building AI agents.'"
    ]
    print("🧪 Testing the agent with sample queries...\n")
    for i, query in enumerate(test_queries, 1):
        print(f"Query {i}: {query}")
        print("-" * 50)
        try:
            response = agent.invoke(
                {"messages": [HumanMessage(content=query)]},
                config=config
            )
            last_message = response["messages"][-1]
            print(f"Response: {last_message.content}\n")
        except Exception as e:
            print(f"Error: {str(e)}\n")
The test_agent function is a validation utility that confirms the LangGraph agent responds correctly across different use cases. It runs predefined queries covering arithmetic, web search, weather, time, and text analysis, and prints the agent's responses. Using a consistent thread_id for configuration, it invokes the agent with each query and neatly displays the results, helping developers verify tool integration and conversational logic before moving to interactive or production use.
def chat_with_agent():
    """Interactive chat function."""
    config = {"configurable": {"thread_id": "interactive-thread"}}
    print("🤖 Multi-Tool Agent Chat")
    print("Available tools: Calculator, Web Search, Weather Info, Text Analyzer, Current Time")
    print("Type 'quit' to exit, 'help' for available commands\n")
    while True:
        try:
            user_input = input("You: ").strip()
            if user_input.lower() in ['quit', 'exit', 'q']:
                print("Goodbye!")
                break
            elif user_input.lower() == 'help':
                print("\nAvailable commands:")
                print("• Calculator: 'Calculate 15 * 7 + 23' or 'What's sin(pi/2)?'")
                print("• Web Search: 'Search for Python tutorials' or 'Find information about AI'")
                print("• Weather: 'Weather in Tokyo' or 'What's the temperature in London?'")
                print("• Text Analysis: 'Analyze this text: [your text]'")
                print("• Current Time: 'What time is it?' or 'Current date'")
                print("• quit: Exit the chat\n")
                continue
            elif not user_input:
                continue
            response = agent.invoke(
                {"messages": [HumanMessage(content=user_input)]},
                config=config
            )
            last_message = response["messages"][-1]
            print(f"Agent: {last_message.content}\n")
        except KeyboardInterrupt:
            print("\nGoodbye!")
            break
        except Exception as e:
            print(f"Error: {str(e)}\n")
The chat_with_agent function provides an interactive command-line interface for real-time conversations with the LangGraph multi-tool agent. It supports natural-language queries and recognizes commands like 'help' for usage guidance and 'quit' to exit. Each user input is processed by the agent, which dynamically selects and invokes the appropriate tools. The function simulates a conversational experience and showcases the agent's ability to handle diverse queries, from math and web search to weather, text analysis, and time retrieval.
if __name__ == "__main__":
    test_agent()
    print("=" * 60)
    print("🎉 LangGraph Multi-Tool Agent is ready!")
    print("=" * 60)
    chat_with_agent()

def quick_demo():
    """Quick demonstration of agent capabilities."""
    config = {"configurable": {"thread_id": "demo"}}
    demos = [
        ("Math", "Calculate the square root of 144 plus 5 times 3"),
        ("Search", "Find recent news about artificial intelligence"),
        ("Time", "What's the current date and time?")
    ]
    print("🚀 Quick Demo of Agent Capabilities\n")
    for category, query in demos:
        print(f"[{category}] Query: {query}")
        try:
            response = agent.invoke(
                {"messages": [HumanMessage(content=query)]},
                config=config
            )
            print(f"Response: {response['messages'][-1].content}\n")
        except Exception as e:
            print(f"Error: {str(e)}\n")

print("\n" + "=" * 60)
print("🔧 Usage Instructions:")
print("1. Add your ANTHROPIC_API_KEY to use the Claude model")
print("   os.environ['ANTHROPIC_API_KEY'] = 'your-anthropic-api-key'")
print("2. Run quick_demo() for a quick demonstration")
print("3. Run chat_with_agent() for interactive chat")
print("4. The agent supports: calculations, web search, weather, text analysis, and time")
print("5. Example: 'Calculate 15*7+23' or 'Search for Python tutorials'")
print("=" * 60)
Finally, we orchestrate the execution of the LangGraph multi-tool agent. If the script is run directly, it first calls test_agent() to validate functionality with sample queries, then launches the interactive chat_with_agent() mode for real-time interaction. The quick_demo() function briefly showcases the agent's capabilities in math, search, and time queries. Clear usage instructions are printed at the end, guiding users on configuring the API key, running demonstrations, and interacting with the agent. This provides a smooth onboarding experience for users to explore and extend the agent's functionality.
In conclusion, this step-by-step tutorial offers valuable insights into building an effective multi-tool AI agent that leverages LangGraph and Claude's generative capabilities. With straightforward explanations and hands-on demonstrations, the guide empowers users to integrate diverse utilities into a cohesive and interactive system. The agent's flexibility in performing tasks, from complex calculations to dynamic information retrieval, showcases the versatility of modern AI development frameworks. The inclusion of user-friendly functions for both testing and interactive chat enhances practical understanding, enabling immediate application in various contexts. With this foundational knowledge, developers can confidently extend and customize their own AI agents.
Check out the Notebook on GitHub.
Asif Razzaq is the CEO of Marktechpost Media Inc. As a visionary entrepreneur and engineer, Asif is committed to harnessing the potential of Artificial Intelligence for social good. His most recent endeavor is the launch of an Artificial Intelligence media platform, Marktechpost, which stands out for its in-depth coverage of machine learning and deep learning news that is both technically sound and easily understandable by a wide audience. The platform boasts over 2 million monthly views, illustrating its popularity among readers.