
Building ReAct Agents with LangGraph: A Beginner’s Guide

By Josh
November 13, 2025
In AI, Analytics and Automation


In this article, you will learn how the ReAct (Reasoning + Acting) pattern works and how to implement it with LangGraph: first with a simple, hardcoded loop and then with an LLM-driven agent.

Topics we will cover include:

  • The ReAct cycle (Reason → Act → Observe) and why it’s useful for agents.
  • How to model agent workflows as graphs with LangGraph.
  • Building a hardcoded ReAct loop, then upgrading it to an LLM-powered version.

Let’s explore these techniques.

Building ReAct Agents with LangGraph: A Beginner’s Guide
Image by Author

What is the ReAct Pattern?

ReAct (Reasoning + Acting) is a common pattern for building AI agents that think through problems and take actions to solve them. The pattern follows a simple cycle:


  1. Reasoning: The agent thinks about what it needs to do next.
  2. Acting: The agent takes an action (like searching for information).
  3. Observing: The agent examines the results of its action.

This cycle repeats until the agent has gathered enough information to answer the user’s question.
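Stripped of any framework, the cycle is just a loop. Below is a minimal sketch (not part of the tutorial code), where `decide` and `execute` are hypothetical stand-ins for the LLM reasoning step and the tool call:

```python
# Minimal ReAct loop sketch. `decide` inspects the trace and returns
# (thought, action); `execute` runs the action and returns an observation.
def react_loop(question, decide, execute, max_steps=5):
    trace = [f"Question: {question}"]
    for _ in range(max_steps):
        thought, action = decide(trace)          # Reason: pick the next step
        trace.append(f"Thought: {thought}")
        if action is None:                       # No action needed: done
            break
        trace.append(f"Action: {action}")        # Act
        trace.append(f"Observation: {execute(action)}")  # Observe
    return trace
```

With a `decide` that stops as soon as one observation is present, the trace interleaves thoughts, actions, and observations exactly as the cycle above describes.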

Why LangGraph?

LangGraph is a framework built on top of LangChain that lets you define agent workflows as graphs. A graph (in this context) is a data structure consisting of nodes (steps in your process) connected by edges (the paths between steps). Each node in the graph represents a step in your agent’s process, and edges define how information flows between steps. This structure allows for complex flows like loops and conditional branching. For example, your agent can cycle between reasoning and action nodes until it gathers enough information. This makes complex agent behavior easy to understand and maintain.

Tutorial Structure

We’ll build two versions of a ReAct agent:

  1. Part 1: A simple hardcoded agent to understand the mechanics.
  2. Part 2: An LLM-powered agent that makes dynamic decisions.

Part 1: Understanding ReAct with a Simple Example

First, we’ll create a basic ReAct agent with hardcoded logic. This helps you understand how the ReAct loop works without the complexity of LLM integration.

Setting Up the State

Every LangGraph agent needs a state object that flows through the graph nodes. This state serves as shared memory that accumulates information. Nodes read the current state and add their contributions before passing it along.

from langgraph.graph import StateGraph, END
from typing import TypedDict, Annotated
import operator

# Define the state that flows through our graph
class AgentState(TypedDict):
    messages: Annotated[list, operator.add]
    next_action: str
    iterations: int

Key Components:

  • StateGraph: The main class from LangGraph that defines our agent’s workflow.
  • AgentState: A TypedDict that defines what information our agent tracks.
    • messages: Uses operator.add to accumulate all thoughts, actions, and observations.
    • next_action: Tells the graph which node to execute next.
    • iterations: Counts how many reasoning cycles we’ve completed.
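The `Annotated[list, operator.add]` reducer is what makes `messages` accumulate: LangGraph merges each node’s partial return into the existing state using the declared reducer, which for lists is plain concatenation. You can verify the merge behavior directly:

```python
import operator

# LangGraph applies the declared reducer when merging a node's partial
# return into state; for Annotated[list, operator.add] that is concatenation.
state_messages = ["User: Tell me about Tokyo and Japan"]
node_update = ["Thought: I need to check Tokyo weather"]

merged = operator.add(state_messages, node_update)
print(merged)
```

This is why each node below returns only its new messages, never the full history.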

Creating a Mock Tool

In a real ReAct agent, tools are functions that perform actions in the world, like searching the web, querying databases, or calling APIs. For this example, we’ll use a simple mock search tool.

# Simple mock search tool
def search_tool(query: str) -> str:
    # Simulate a search - in real usage, this would call an API
    responses = {
        "weather tokyo": "Tokyo weather: 18°C, partly cloudy",
        "population japan": "Japan population: approximately 125 million",
    }
    return responses.get(query.lower(), f"No results found for: {query}")

This function simulates a search engine with hardcoded responses. In production, this would call a real search API like Google, Bing, or a custom knowledge base.
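Because the lookup is dictionary-based, queries are case-insensitive hits and anything outside the two hardcoded keys falls through to the default message. A quick check of both paths:

```python
# Same mock tool as above: hardcoded responses with a fallback message.
def search_tool(query: str) -> str:
    responses = {
        "weather tokyo": "Tokyo weather: 18°C, partly cloudy",
        "population japan": "Japan population: approximately 125 million",
    }
    return responses.get(query.lower(), f"No results found for: {query}")

print(search_tool("Weather Tokyo"))   # case-insensitive hit
print(search_tool("gdp japan"))       # miss -> fallback message
```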

The Reasoning Node: The “Brain” of ReAct

This is where the agent thinks about what to do next. In this simple version, we’re using hardcoded logic, but you’ll see how this becomes dynamic with an LLM in Part 2.

# Reasoning node - decides what to do
def reasoning_node(state: AgentState):
    messages = state["messages"]
    iterations = state.get("iterations", 0)

    # Simple logic: first search weather, then population, then finish
    if iterations == 0:
        return {"messages": ["Thought: I need to check Tokyo weather"],
                "next_action": "action", "iterations": iterations + 1}
    elif iterations == 1:
        return {"messages": ["Thought: Now I need Japan's population"],
                "next_action": "action", "iterations": iterations + 1}
    else:
        return {"messages": ["Thought: I have enough info to answer"],
                "next_action": "end", "iterations": iterations + 1}

How it works:

The reasoning node examines the current state and decides:

  • Should we gather more information? (return "action")
  • Do we have enough to answer? (return "end")

Notice how each return value updates the state:

  1. Adds a β€œThought” message explaining the decision.
  2. Sets next_action to route to the next node.
  3. Increments the iteration counter.

This mimics how a human would approach a research task: “First I need weather info, then population data, then I can answer.”

The Action Node: Taking Action

Once the reasoning node decides to act, this node executes the chosen action and observes the results.

# Action node - executes the tool
def action_node(state: AgentState):
    iterations = state["iterations"]

    # Choose query based on iteration
    query = "weather tokyo" if iterations == 1 else "population japan"
    result = search_tool(query)

    return {"messages": [f"Action: Searched for '{query}'",
                         f"Observation: {result}"],
            "next_action": "reasoning"}

# Router - decides next step
def route(state: AgentState):
    return state["next_action"]

The ReAct Cycle in Action:

  1. Action: Calls the search_tool with a query.
  2. Observation: Records what the tool returned.
  3. Routing: Sets next_action back to "reasoning" to continue the loop.

The router function is a simple helper that reads the next_action value and tells LangGraph where to go next.

Building and Executing the Graph

Now we assemble all the pieces into a LangGraph workflow. This is where the magic happens!

# Build the graph
workflow = StateGraph(AgentState)
workflow.add_node("reasoning", reasoning_node)
workflow.add_node("action", action_node)

# Define edges
workflow.set_entry_point("reasoning")
workflow.add_conditional_edges("reasoning", route, {
    "action": "action",
    "end": END
})
workflow.add_edge("action", "reasoning")

# Compile and run
app = workflow.compile()

# Execute
result = app.invoke({"messages": ["User: Tell me about Tokyo and Japan"],
                     "iterations": 0, "next_action": ""})

# Print the conversation flow
print("\n=== ReAct Loop Output ===")
for msg in result["messages"]:
    print(msg)

Understanding the Graph Structure:

  1. Add Nodes: We register our reasoning and action functions as nodes.
  2. Set Entry Point: The graph always starts at the reasoning node.
  3. Add Conditional Edges: Based on the reasoning node’s decision:
    • If next_action == "action" β†’ go to the action node.
    • If next_action == "end" β†’ stop execution.
  4. Add Fixed Edge: After action completes, always return to reasoning.

The app.invoke() call kicks off this entire process.

Output:

=== ReAct Loop Output ===
User: Tell me about Tokyo and Japan

Thought: I need to check Tokyo weather
Action: Searched for 'weather tokyo'
Observation: Tokyo weather: 18°C, partly cloudy

Thought: Now I need Japan's population
Action: Searched for 'population japan'
Observation: Japan population: approximately 125 million

Thought: I have enough info to answer

Now let’s see how LLM-powered reasoning makes this pattern truly dynamic.

Part 2: LLM-Powered ReAct Agent

Now that you understand the mechanics, let’s build a real ReAct agent that uses an LLM to make intelligent decisions.

Why Use an LLM?

The hardcoded version works, but it’s inflexible: it can only handle the exact scenario we programmed. An LLM-powered agent can:

  • Understand different types of questions.
  • Decide dynamically what information to gather.
  • Adapt its reasoning based on what it learns.

Key Difference

Instead of hardcoded if/else logic, we’ll prompt the LLM to decide what to do next. The LLM becomes the “reasoning engine” of our agent.

Setting Up the LLM Environment

We’ll use OpenAI’s GPT-4o as our reasoning engine, but you could use any LLM (Anthropic, open-source models, etc.).

from langgraph.graph import StateGraph, END
from typing import TypedDict, Annotated
import operator
import os
from openai import OpenAI

client = OpenAI(api_key=os.environ.get("OPENAI_API_KEY"))

class AgentStateLLM(TypedDict):
    messages: Annotated[list, operator.add]
    next_action: str
    iteration_count: int

New State Definition:

AgentStateLLM is similar to AgentState, but we’ve renamed it to distinguish between the two examples. The structure is identical: we still track messages, actions, and iterations.

The LLM Tool: Gathering Information

Instead of a mock search, we’ll let the LLM answer queries using its own knowledge. This demonstrates how you can turn an LLM into a tool!

def llm_tool(query: str) -> str:
    """Let the LLM answer the query directly using its knowledge"""
    response = client.chat.completions.create(
        model="gpt-4o",
        max_tokens=150,
        messages=[{"role": "user", "content": f"Answer this query briefly: {query}"}]
    )
    return response.choices[0].message.content.strip()

This function makes a simple API call to GPT-4o with the query. The LLM responds with factual information, which our agent will use in its reasoning.

Note: In production, you might combine this with web search, databases, or other tools for more accurate, up-to-date information.
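One hedged way to layer such tools is to try a cheap local source first and fall back to the LLM. A sketch under illustrative assumptions (the cache contents and the injected `llm_fn` callable are not part of the tutorial code):

```python
# Sketch: consult a local knowledge source first, fall back to an LLM call.
# `llm_fn` is injected so the fallback can be any callable (e.g. llm_tool).
def lookup(query, cache, llm_fn):
    hit = cache.get(query.lower())
    if hit is not None:
        return hit            # cheap, deterministic path
    return llm_fn(query)      # fall back to the model's knowledge

cache = {"capital of japan": "Tokyo"}
print(lookup("Capital of Japan", cache, lambda q: f"LLM answer for: {q}"))
print(lookup("population japan", cache, lambda q: f"LLM answer for: {q}"))
```

Injecting the fallback as a parameter also makes the dispatcher trivial to test without network access.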

LLM-Powered Reasoning: The Core Innovation

This is where ReAct truly shines. Instead of hardcoded logic, we prompt the LLM to decide what information to gather next.


def reasoning_node_llm(state: AgentStateLLM):
    iteration_count = state.get("iteration_count", 0)
    if iteration_count >= 3:
        return {"messages": ["Thought: I have gathered enough information"],
                "next_action": "end", "iteration_count": iteration_count}

    history = "\n".join(state["messages"])
    prompt = f"""You are an AI agent answering: "Tell me about Tokyo and Japan"

Conversation so far:
{history}

Queries completed: {iteration_count}/3

You MUST make exactly 3 queries to gather information.
Respond ONLY with: QUERY: <your specific question>

Do NOT be conversational. Do NOT thank the user. ONLY output: QUERY: <question>"""

    decision = client.chat.completions.create(
        model="gpt-4o", max_tokens=100,
        messages=[{"role": "user", "content": prompt}]
    ).choices[0].message.content.strip()

    if decision.startswith("QUERY:"):
        return {"messages": [f"Thought: {decision}"], "next_action": "action",
                "iteration_count": iteration_count}
    return {"messages": [f"Thought: {decision}"], "next_action": "end",
            "iteration_count": iteration_count}

How This Works:

  1. Context Building: We include the conversation history so the LLM knows what’s already been gathered.
  2. Structured Prompting: We give clear instructions to output in a specific format (QUERY: <question>).
  3. Iteration Control: We enforce a maximum of 3 queries to prevent infinite loops.
  4. Decision Parsing: We check if the LLM wants to take action or finish.

The Prompt Strategy:

The prompt tells the LLM:

  • What question it’s trying to answer
  • What information has been gathered so far
  • How many queries it’s allowed to make
  • Exactly how to format its response
  • To not be conversational

LLMs are trained to be helpful and chatty. For agent workflows, we need concise, structured outputs. This directive keeps responses focused on the task.
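The reasoning node above checks the prefix with a bare startswith; a slightly more defensive parser (a sketch, not from the tutorial) tolerates stray whitespace and case, and extracts the query text in one place:

```python
# Sketch: parse the LLM's structured decision into a (route, query) pair.
def parse_decision(decision):
    """Return ("action", query) if the LLM emitted QUERY: <question>,
    otherwise ("end", None)."""
    text = decision.strip()
    if text.upper().startswith("QUERY:"):
        return "action", text[len("QUERY:"):].strip()
    return "end", None

print(parse_decision("QUERY: What is Tokyo known for?"))
print(parse_decision("I have everything I need."))
```

Centralizing the parsing like this keeps the routing logic in one testable function instead of scattering string handling across nodes.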

Executing the Action

The action node works similarly to the hardcoded version, but now it processes the LLM’s dynamically generated query.

def action_node_llm(state: AgentStateLLM):
    last_thought = state["messages"][-1]
    query = last_thought.replace("Thought: QUERY:", "").strip()
    result = llm_tool(query)
    return {"messages": [f"Action: query('{query}')", f"Observation: {result}"],
            "next_action": "reasoning",
            "iteration_count": state.get("iteration_count", 0) + 1}

The Process:

  1. Extract the query from the LLM’s reasoning (removing the “Thought: QUERY:” prefix).
  2. Execute the query using our llm_tool.
  3. Record both the action and observation.
  4. Route back to reasoning for the next decision.

Notice how this is more flexible than the hardcoded version: the agent can ask for any information it thinks is relevant!

Building the LLM-Powered Graph

The graph structure is identical to Part 1, but now the reasoning node uses LLM intelligence instead of hardcoded rules.


workflow_llm = StateGraph(AgentStateLLM)
workflow_llm.add_node("reasoning", reasoning_node_llm)
workflow_llm.add_node("action", action_node_llm)
workflow_llm.set_entry_point("reasoning")
workflow_llm.add_conditional_edges("reasoning", lambda s: s["next_action"],
                                   {"action": "action", "end": END})
workflow_llm.add_edge("action", "reasoning")

app_llm = workflow_llm.compile()
result_llm = app_llm.invoke({
    "messages": ["User: Tell me about Tokyo and Japan"],
    "next_action": "",
    "iteration_count": 0
})

print("\n=== LLM-Powered ReAct (No Mock Data) ===")
for msg in result_llm["messages"]:
    print(msg)

What’s Different:

  • Same graph topology (reasoning ↔ action with conditional routing).
  • Same state management approach.
  • Only the reasoning logic changed, from if/else to LLM prompting.

This demonstrates the power of LangGraph: you can swap components while keeping the workflow structure intact!

The Output:

You’ll see the agent autonomously decide what information to gather. Each iteration shows:

  • Thought: What the LLM decided to ask about.
  • Action: The query being executed.
  • Observation: The information gathered.

Watch how the LLM strategically gathers information to build a complete answer!

=== LLM-Powered ReAct (No Mock Data) ===
User: Tell me about Tokyo and Japan

Thought: QUERY: What is the history and significance of Tokyo in Japan?

Action: query('What is the history and significance of Tokyo in Japan?')

Observation: Tokyo, originally known as Edo, has a rich history and significant role in Japan.
It began as a small fishing village until Tokugawa Ieyasu established it as the center of
his shogunate in 1603, marking the start of the Edo period. During this time, Edo flourished
as a political and cultural hub, becoming one of the world's largest cities by the 18th century.

In 1868, after the Meiji Restoration, the emperor moved from Kyoto to Edo, renaming it Tokyo,
meaning "Eastern Capital". This transformation marked the beginning of Tokyo's modernization
and rapid development. Over the 20th century, Tokyo faced challenges, including the Great
Kanto Earthquake in 1923 and heavy bombings

Thought: QUERY: What are the major cultural and economic contributions of Tokyo to Japan?

Action: query('What are the major cultural and economic contributions of Tokyo to Japan?')

Observation: Tokyo, as the capital of Japan, is a major cultural and economic powerhouse.
Culturally, Tokyo is a hub for traditional and contemporary arts, including theater, music,
and visual arts. The city is home to numerous museums, galleries, and cultural sites such as
the Tokyo National Museum, Senso-ji Temple, and the Meiji Shrine. It also hosts international
events like the Tokyo International Film Festival and various fashion weeks, contributing to
its reputation as a global fashion and cultural center.

Economically, Tokyo is one of the world's leading financial centers. It hosts the Tokyo Stock
Exchange, one of the largest stock exchanges globally, and is the headquarters for numerous
multinational corporations. The city's advanced infrastructure and innovation in technology
and industry make it a focal

Thought: QUERY: What are the key historical and cultural aspects of Japan as a whole?

Action: query('What are the key historical and cultural aspects of Japan as a whole?')

Observation: Japan boasts a rich tapestry of historical and cultural aspects, shaped by centuries
of development. Historically, Japan's culture was influenced by its isolation as an island
nation, leading to a unique blend of indigenous practices and foreign influences. Key historical
periods include the Jomon and Yayoi eras, characterized by early settlement and culture, and the
subsequent periods of imperial rule and samurai governance, such as the Heian, Kamakura, and Edo
periods. These periods fostered developments like the tea ceremony, calligraphy, and kabuki theater.

Culturally, Japan is known for its Shinto and Buddhist traditions, which coexist seamlessly.
Its aesthetic principles emphasize simplicity and nature, reflected in traditional architecture,
gardens, and arts such as ukiyo-e prints and later

Thought: I have gathered enough information

Wrapping Up

You’ve now built two ReAct agents with LangGraph: one with hardcoded logic to learn the mechanics, and one powered by an LLM that makes dynamic decisions.

The key insight? LangGraph lets you separate your workflow structure from the intelligence that drives it. The graph topology stayed the same between Part 1 and Part 2, but swapping hardcoded logic for LLM reasoning transformed a rigid script into an adaptive agent.

From here, you can extend these concepts by adding real tools (web search, calculators, databases), implementing tool selection logic, or even building multi-agent systems where multiple ReAct agents collaborate.
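Adding a second tool mostly means routing the agent’s chosen action to the right function. A hedged sketch of a tool registry (the `calculator`, `mock_search`, and registry names are illustrative, not part of the tutorial):

```python
# Sketch: dispatch an agent's chosen tool by name. The "calculator" only
# handles simple two-operand arithmetic like "6 * 7" for illustration.
def calculator(expr):
    left, op, right = expr.split()
    a, b = float(left), float(right)
    ops = {"+": a + b, "-": a - b, "*": a * b, "/": a / b if b else float("nan")}
    return str(ops[op])

def mock_search(query):
    return f"Results for: {query}"

TOOLS = {"search": mock_search, "calculator": calculator}

def run_tool(name, argument):
    tool = TOOLS.get(name)
    if tool is None:
        return f"Unknown tool: {name}"   # surface the error to the agent
    return tool(argument)

print(run_tool("calculator", "6 * 7"))
```

The reasoning prompt would then be extended to emit a tool name alongside the query, and the action node would call `run_tool` instead of a single fixed function.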



