In this article, you'll learn how to implement state-managed interruptions in LangGraph so an agent workflow can pause for human approval before resuming execution.
Topics we'll cover include:
- What state-managed interruptions are and why they matter in agentic AI systems.
- How to define a simple LangGraph workflow with a shared agent state and executable nodes.
- How to pause execution, update the saved state with human approval, and resume the workflow.
Read on for all the details.

Building a 'Human-in-the-Loop' Approval Gate for Autonomous Agents
Image by Editor
Introduction
In agentic AI systems, when an agent's execution pipeline is deliberately halted, we have what is known as a state-managed interruption. Just like a saved video game, the "state" of a paused agent (its active variables, context, memory, and planned actions) is persistently stored, with the agent placed in a sleep or waiting state until an external trigger resumes its execution.
The importance of state-managed interruptions has grown alongside progress in highly autonomous, agent-based AI applications, for several reasons. Not only do they act as effective safety guardrails to recover from otherwise irreversible actions in high-stakes settings, but they also enable human-in-the-loop approval and correction. A human supervisor can reconfigure the state of a paused agent and prevent undesired consequences before actions are carried out based on an incorrect response.
LangGraph, an open-source library for building stateful large language model (LLM) applications, supports agent-based workflows with human-in-the-loop mechanisms and state-managed interruptions, thereby improving robustness against errors.
This article brings all of these elements together and shows, step by step, how to implement state-managed interruptions using LangGraph in Python with a human-in-the-loop approach. While most of the example process outlined below is meant to be automated by an agent, we will also show how to make the workflow stop at a key point where human review is required before execution resumes.
Step-by-Step Guide
First, we pip install langgraph and make the necessary imports for this practical example:
```python
from typing import TypedDict
from langgraph.graph import StateGraph, END
from langgraph.checkpoint.memory import MemorySaver
```
Notice that one of the imported classes is named StateGraph. LangGraph uses state graphs to model cyclic, complex workflows that involve agents. There are states representing the system's shared memory (a.k.a. the data payload) and nodes representing actions that define the execution logic used to update this state. Both states and nodes need to be explicitly defined and checkpointed. Let's do that now.
```python
class AgentState(TypedDict):
    draft: str
    approved: bool
    sent: bool
```
The agent state is structured similarly to a Python dictionary because it inherits from TypedDict. The state acts like our "save file" as it is passed between nodes.
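To make the "save file" idea concrete, here is a minimal sketch of how a node's partial return gets merged into the shared state. This is plain Python for illustration; LangGraph performs this merge for you behind the scenes.

```python
# Illustrative merge of a node's partial update into the shared state.
# This mimics LangGraph's behavior; it is not LangGraph internals.
state = {"draft": "", "approved": False, "sent": False}

# A node returns only the keys it changes...
node_update = {"draft": "Hello!"}

# ...and the update is merged over the existing state
state = {**state, **node_update}
print(state)  # {'draft': 'Hello!', 'approved': False, 'sent': False}
```

This is why, in the node functions below, returning a dictionary with only some of the fields is enough: the rest of the state carries over unchanged.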
As for nodes, we'll define two of them, each representing an action: drafting an email and sending it.
```python
def draft_node(state: AgentState):
    print("[Agent]: Drafting the email...")
    # The agent builds a draft and updates the state
    return {
        "draft": "Hello! Your server update is ready to be deployed.",
        "approved": False,
        "sent": False,
    }

def send_node(state: AgentState):
    print("[Agent]: Waking back up! Checking approval status...")
    if state.get("approved"):
        print("[System]: SENDING EMAIL ->", state["draft"])
        return {"sent": True}
    else:
        print("[System]: Draft was rejected. Email aborted.")
        return {"sent": False}
```
The draft_node() function simulates an agent action that drafts an email. To make the agent perform a real action, you would replace the print() statements that simulate the behavior with actual instructions that execute it. The key detail to notice here is the object returned by the function: a dictionary whose fields match those in the agent state class we defined earlier.
Meanwhile, the send_node() function simulates the action of sending the email. But there's a catch: the core logic for the human-in-the-loop mechanism lives here, specifically in the check on the approved status. Only if the approved field has been set to True (by a human, as we'll see, or by a simulated human intervention) is the email actually sent. Once again, the actions are simulated via simple print() statements for the sake of simplicity, keeping the focus on the state-managed interruption mechanism.
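To see the gate from both sides before wiring up the graph, here is a small standalone check of the same approval logic. The check_and_send helper is a hypothetical stand-in that mirrors send_node's branching, so both outcomes can be exercised in isolation.

```python
from typing import TypedDict

class AgentState(TypedDict):
    draft: str
    approved: bool
    sent: bool

# Hypothetical helper mirroring send_node's gating logic
def check_and_send(state: AgentState):
    if state.get("approved"):
        print("[System]: SENDING EMAIL ->", state["draft"])
        return {"sent": True}
    print("[System]: Draft was rejected. Email aborted.")
    return {"sent": False}

# Approved state: the email goes out
approved_result = check_and_send({"draft": "Hi", "approved": True, "sent": False})
# Rejected state: nothing is sent
rejected_result = check_and_send({"draft": "Hi", "approved": False, "sent": False})
```

Whatever the human decides at the pause point simply flips which of these two branches runs when the graph resumes.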
What else do we need? An agent workflow is described by a graph with several connected states. Let's define a simple, linear sequence of actions as follows:
```python
workflow = StateGraph(AgentState)

# Adding action nodes
workflow.add_node("draft_message", draft_node)
workflow.add_node("send_message", send_node)

# Connecting nodes via edges: Start -> Draft -> Send -> End
workflow.set_entry_point("draft_message")
workflow.add_edge("draft_message", "send_message")
workflow.add_edge("send_message", END)
```
To implement the database-like mechanism that saves the agent state, and to introduce the state-managed interruption when the agent is about to send a message, we use this code:
```python
# MemorySaver is like our "database" for saving states
memory = MemorySaver()

# THIS IS A KEY PART OF OUR PROGRAM: telling the agent to pause before sending
app = workflow.compile(
    checkpointer=memory,
    interrupt_before=["send_message"]
)
```
Now comes the real action. We will execute the action graph defined a few moments ago. Notice below that a thread ID is used so the memory can keep track of the workflow state across executions.
```python
config = {"configurable": {"thread_id": "demo-thread-1"}}
initial_state = {"draft": "", "approved": False, "sent": False}

print("\n--- RUNNING INITIAL GRAPH ---")
# The graph will run 'draft_node', then hit the breakpoint and pause.
for event in app.stream(initial_state, config):
    pass
```
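The thread ID matters because the checkpointer keeps a separate history per thread, so different conversations (or workflow runs) can be paused and resumed independently. The sketch below illustrates the idea with a plain dictionary standing in for MemorySaver; it is not LangGraph internals.

```python
# Per-thread checkpoint store (illustrative stand-in for MemorySaver)
saved_checkpoints = {}

def save(thread_id, state):
    # Append each snapshot to that thread's own history
    saved_checkpoints.setdefault(thread_id, []).append(dict(state))

def latest(thread_id):
    # Resuming a thread means loading its own latest checkpoint
    return saved_checkpoints[thread_id][-1]

save("demo-thread-1", {"draft": "Hello!", "approved": False, "sent": False})
save("demo-thread-2", {"draft": "Hi team", "approved": True, "sent": False})

# Each thread resumes from its own state, independently of the other
print(latest("demo-thread-1")["approved"])  # False
print(latest("demo-thread-2")["approved"])  # True
```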
Next comes the human-in-the-loop moment, where the flow is paused and human approval is simulated by setting approved to True:
```python
print("\n--- GRAPH PAUSED ---")
current_state = app.get_state(config)
print(f"Next node to execute: {current_state.next}")  # Should show 'send_message'
print(f"Current Draft: '{current_state.values['draft']}'")

# Simulating a human reviewing and approving the email draft
print("\n[Human]: Reviewing draft... Looks good. Approving!")

# IMPORTANT: the state is updated with the human's decision
app.update_state(config, {"approved": True})
```
This resumes the graph and completes execution.
```python
print("\n--- RESUMING GRAPH ---")
# Passing 'None' as the input tells the graph to just resume where it left off
for event in app.stream(None, config):
    pass

print("\n--- FINAL STATE ---")
print(app.get_state(config).values)
```
The overall output printed by this simulated workflow should look like this:
```
--- RUNNING INITIAL GRAPH ---
[Agent]: Drafting the email...

--- GRAPH PAUSED ---
Next node to execute: ('send_message',)
Current Draft: 'Hello! Your server update is ready to be deployed.'

[Human]: Reviewing draft... Looks good. Approving!

--- RESUMING GRAPH ---
[Agent]: Waking back up! Checking approval status...
[System]: SENDING EMAIL -> Hello! Your server update is ready to be deployed.

--- FINAL STATE ---
{'draft': 'Hello! Your server update is ready to be deployed.', 'approved': True, 'sent': True}
```
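It is worth noting the rejection path too: if the human leaves approved as False (or explicitly rejects the draft) before resuming, send_node aborts the email. The following minimal sketch mirrors that pause/decide/resume lifecycle in plain Python, with a simple dictionary playing the role of the checkpointed state.

```python
# Minimal sketch of the pause/decide/resume lifecycle (plain Python
# mirroring the LangGraph flow above; not LangGraph itself).
def run_until_interrupt(state):
    # Draft step runs, then execution pauses before the send step
    return {**state, "draft": "Hello! Your server update is ready to be deployed."}

def resume(state):
    # The send step only fires if a human approved the draft
    if state.get("approved"):
        return {**state, "sent": True}
    return {**state, "sent": False}

checkpoint = run_until_interrupt({"draft": "", "approved": False, "sent": False})
# Human rejects: approved stays False, so resuming aborts the send
final = resume(checkpoint)
print(final["sent"])  # False
```

In the real workflow, this corresponds to skipping the app.update_state(...) approval call (or setting approved to False) before streaming with None to resume.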
Wrapping Up
This article illustrated how to implement state-managed interruptions in agent-based workflows by introducing human-in-the-loop mechanisms, an important capability in critical, high-stakes scenarios where full autonomy is not desirable. We used LangGraph, a powerful library for building agent-driven LLM applications, to simulate a workflow governed by these rules.
