Agentic Design Patterns with AutoGen

By Shazia Zahoor

Contact: smarttechaigroup@gmail.com

Introduction

Agentic design patterns are structured approaches to building AI systems composed of autonomous agents that can reason, collaborate, and act independently or together to achieve complex goals. These patterns draw inspiration from software engineering and multi-agent systems, and are especially relevant in the era of large language models (LLMs).

Key Concepts in Agentic Design Patterns:

  • Reflection: Agents evaluate and improve their own outputs through feedback loops.
  • Tool Use: Agents use external tools, APIs, or functions to extend their capabilities.
  • Planning: Agents generate step-by-step plans to solve tasks before executing them.
  • Multi-Agent Collaboration: Multiple agents with specialized roles interact and share information to complete a task more efficiently.
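
The reflection pattern above can be sketched in a few lines of plain Python. This is a hypothetical, LLM-free illustration: `draft`, `critique`, and `revise` are stand-ins for model calls, and the toy critic simply demands an example sentence before approving.

```python
# Minimal reflection-loop sketch: draft -> critique -> revise until the
# critic is satisfied. The three helpers are stand-ins for LLM calls.

def draft(topic):
    return f"A short post about {topic}."

def critique(text):
    # Toy critic: demand a concrete example before approving.
    return None if "example" in text else "Add a concrete example."

def revise(text, feedback):
    return text + " For example, AutoGen agents can review each other."

def reflect(topic, max_rounds=3):
    text = draft(topic)
    for _ in range(max_rounds):
        feedback = critique(text)
        if feedback is None:   # critic approves -> stop iterating
            break
        text = revise(text, feedback)
    return text

print(reflect("agentic design patterns"))
```

The same generate-evaluate-revise loop reappears in Lesson 3, where a writer agent and a critic agent play these roles.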

Agentic design patterns help developers create modular, explainable, and scalable AI workflows, making it easier to automate tasks that require judgment, iteration, or multi-step reasoning. These patterns are foundational to frameworks like AutoGen, LangGraph, and CrewAI.

Lesson 1: Multi-Agent Conversation

A multi-agent conversation on stand-up comedy using AI agentic design patterns showcases how multiple AI agents with distinct comedic roles, such as a joke writer and a critic, collaborate to create and refine humorous content. Using patterns such as reflection (for joke improvement) and role specialization, the agents engage in back-and-forth dialogue to generate, evaluate, and enhance stand-up routines, demonstrating creative teamwork through structured AI interactions.

Example script: stand-up comedy between AI agents

In [ ]:





from utils import get_openai_api_key
OPENAI_API_KEY = get_openai_api_key()
llm_config = {"model": "gpt-3.5-turbo"}

Define an AutoGen agent

In [ ]:





from autogen import ConversableAgent
agent = ConversableAgent(
    name="chatbot",
    llm_config=llm_config,
    human_input_mode="NEVER",
)

In [ ]:





reply = agent.generate_reply(
    messages=[{"content": "Tell me a joke.", "role": "user"}]
)
print(reply)

In [ ]:





reply = agent.generate_reply(
    messages=[{"content": "Repeat the joke.", "role": "user"}]
)
print(reply)

Conversation

Setting up a conversation between two agents, Cathy and Joe, where the memory of their interactions is retained.

In [ ]:





cathy = ConversableAgent(
    name="cathy",
    system_message=
    "Your name is Cathy and you are a stand-up comedian.",
    llm_config=llm_config,
    human_input_mode="NEVER",
)
joe = ConversableAgent(
    name="joe",
    system_message=
    "Your name is Joe and you are a stand-up comedian. "
    "Start the next joke from the punchline of the previous joke.",
    llm_config=llm_config,
    human_input_mode="NEVER",
)

Note: You might get a slightly different response (set of jokes) than what is shown in the video.

In [ ]:





chat_result = joe.initiate_chat(
    recipient=cathy, 
    message="I'm Joe. Cathy, let's keep the jokes rolling.",
    max_turns=2,
)

You can print out:

  1. Chat history
  2. Cost
  3. Summary of the conversation

In [ ]:





import pprint
pprint.pprint(chat_result.chat_history)

In [ ]:





pprint.pprint(chat_result.cost)

In [ ]:





pprint.pprint(chat_result.summary)

Get a better summary of the conversation

In [ ]:





chat_result = joe.initiate_chat(
    cathy, 
    message="I'm Joe. Cathy, let's keep the jokes rolling.", 
    max_turns=2, 
    summary_method="reflection_with_llm",
    summary_prompt="Summarize the conversation",
)

In [ ]:





pprint.pprint(chat_result.summary)

Chat Termination

A chat can be terminated using termination conditions.
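
Conceptually, a termination condition is just a predicate checked against each incoming message. The sketch below is a hypothetical plain-Python illustration of that idea, independent of AutoGen's own `is_termination_msg` machinery:

```python
# Termination sketch: stop the exchange once a message satisfies the predicate.

def is_termination_msg(msg):
    return "I gotta go" in msg["content"]

def run_chat(messages):
    transcript = []
    for msg in messages:
        transcript.append(msg)
        if is_termination_msg(msg):  # predicate ends the conversation
            break
    return transcript

msgs = [
    {"content": "Why did the agent cross the road?"},
    {"content": "Good one! Anyway, I gotta go."},
    {"content": "Wait, one more joke!"},  # never delivered
]
print(len(run_chat(msgs)))  # 2
```

In AutoGen the predicate is passed to the agent at construction time, as shown in the cell below.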

In [ ]:





cathy = ConversableAgent(
    name="cathy",
    system_message=
    "Your name is Cathy and you are a stand-up comedian. "
    "When you're ready to end the conversation, say 'I gotta go'.",
    llm_config=llm_config,
    human_input_mode="NEVER",
    is_termination_msg=lambda msg: "I gotta go" in msg["content"],
)
joe = ConversableAgent(
    name="joe",
    system_message=
    "Your name is Joe and you are a stand-up comedian. "
    "When you're ready to end the conversation, say 'I gotta go'.",
    llm_config=llm_config,
    human_input_mode="NEVER",
    is_termination_msg=lambda msg: "I gotta go" in msg["content"] or "Goodbye" in msg["content"],
)

In [ ]:





chat_result = joe.initiate_chat(
    recipient=cathy,
    message="I'm Joe. Cathy, let's keep the jokes rolling."
)

In [ ]:





cathy.send(message="What's the last joke we talked about?", recipient=joe)

Lesson 2: Sequential Chats

Example script: Customer onboarding by AI agents

A sequential chat on Customer Onboarding using AI agentic design patterns involves a series of specialized AI agents guiding a new user through an onboarding process step by step. Each agent handles a specific stage—such as welcoming the user, collecting information, explaining features, and answering questions—passing control in a structured sequence.

By applying patterns like role specialization and workflow planning, the agents collaborate smoothly, ensuring a consistent and personalized onboarding experience. This approach mirrors a real customer support journey while leveraging the scalability and efficiency of autonomous AI systems.
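
The hand-off mechanics can be sketched without AutoGen: each stage runs, produces a summary, and later stages receive the accumulated summaries as carryover context. The stage functions below are hypothetical stand-ins for the agent chats defined later in this lesson:

```python
# Sequential-chat sketch: each stage sees summaries of all earlier stages.

def run_sequential(stages):
    carryover = []   # summaries accumulated so far
    results = []
    for stage in stages:
        summary = stage(list(carryover))  # stage gets prior summaries
        results.append(summary)
        carryover.append(summary)         # ...and adds its own
    return results

# Toy stages mimicking the onboarding flow.
collect_info = lambda ctx: {"name": "Ada", "location": "London"}
collect_topics = lambda ctx: {"topics": ["AI", "chess"]}
engage = lambda ctx: f"Fun fact for {ctx[0]['name']} about {ctx[1]['topics'][0]}!"

print(run_sequential([collect_info, collect_topics, engage]))
```

AutoGen's `initiate_chats` plays the role of `run_sequential` here, with each chat's `summary_method` producing the carryover.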

Setup

In [ ]:





llm_config={"model": "gpt-3.5-turbo"}

In [ ]:





from autogen import ConversableAgent

Creating the needed agents

In [ ]:





onboarding_personal_information_agent = ConversableAgent(
    name="Onboarding Personal Information Agent",
    system_message='''You are a helpful customer onboarding agent,
    you are here to help new customers get started with our product.
    Your job is to gather customer's name and location.
    Do not ask for other information. Return 'TERMINATE' 
    when you have gathered all the information.''',
    llm_config=llm_config,
    code_execution_config=False,
    human_input_mode="NEVER",
)

In [ ]:





onboarding_topic_preference_agent = ConversableAgent(
    name="Onboarding Topic preference Agent",
    system_message='''You are a helpful customer onboarding agent,
    you are here to help new customers get started with our product.
    Your job is to gather customer's preferences on news topics.
    Do not ask for other information.
    Return 'TERMINATE' when you have gathered all the information.''',
    llm_config=llm_config,
    code_execution_config=False,
    human_input_mode="NEVER",
)

In [ ]:





customer_engagement_agent = ConversableAgent(
    name="Customer Engagement Agent",
    system_message='''You are a helpful customer service agent
    here to provide fun for the customer based on the user's
    personal information and topic preferences.
    This could include fun facts, jokes, or interesting stories.
    Make sure to make it engaging and fun!
    Return 'TERMINATE' when you are done.''',
    llm_config=llm_config,
    code_execution_config=False,
    human_input_mode="NEVER",
    is_termination_msg=lambda msg: "terminate" in msg.get("content", "").lower(),
)

In [ ]:





customer_proxy_agent = ConversableAgent(
    name="customer_proxy_agent",
    llm_config=False,
    code_execution_config=False,
    human_input_mode="ALWAYS",
    is_termination_msg=lambda msg: "terminate" in msg.get("content", "").lower(),
)

Creating tasks

Now, you can craft a series of tasks to facilitate the onboarding process.

In [ ]:





chats = [
    {
        "sender": onboarding_personal_information_agent,
        "recipient": customer_proxy_agent,
        "message": 
            "Hello, I'm here to help you get started with our product."
            "Could you tell me your name and location?",
        "summary_method": "reflection_with_llm",
        "summary_args": {
            "summary_prompt" : "Return the customer information "
                             "into as JSON object only: "
                             "{'name': '', 'location': ''}",
        },
        "max_turns": 2,
        "clear_history" : True
    },
    {
        "sender": onboarding_topic_preference_agent,
        "recipient": customer_proxy_agent,
        "message": 
                "Great! Could you tell me what topics you are "
                "interested in reading about?",
        "summary_method": "reflection_with_llm",
        "max_turns": 1,
        "clear_history" : False
    },
    {
        "sender": customer_proxy_agent,
        "recipient": customer_engagement_agent,
        "message": "Let's find something fun to read.",
        "max_turns": 1,
        "summary_method": "reflection_with_llm",
    },
]

Start the onboarding process

Note: You might get a slightly different response than what’s shown in the video. Feel free to try different inputs, such as name, location, and preferences.

In [ ]:





from autogen import initiate_chats
chat_results = initiate_chats(chats)

In [ ]:





for chat_result in chat_results:
    print(chat_result.summary)
    print("\n")

In [ ]:





for chat_result in chat_results:
    print(chat_result.cost)
    print("\n")

Lesson 3: Reflection on blogpost writing by AI agents

Reflection in Blogpost Writing using AI agentic design patterns involves an AI writer agent drafting a blog post, followed by a critic agent reviewing the content for clarity, coherence, and quality. The writer then revises the post based on the feedback. This loop, based on the reflection pattern, enables continuous improvement and refinement, simulating a thoughtful editorial process entirely driven by collaborative AI agents.

Example: Blogpost writing by AI agents

Setup

In [ ]:





llm_config = {"model": "gpt-3.5-turbo"}

The task!

In [ ]:





task = '''
        Write a concise but engaging blogpost about
        DeepLearning.AI. Make sure the blogpost is
        within 100 words.
        '''

Create a writer agent

In [ ]:





import autogen
writer = autogen.AssistantAgent(
    name="Writer",
    system_message="You are a writer. You write engaging and concise " 
        "blogpost (with title) on given topics. You must polish your "
        "writing based on the feedback you receive and give a refined "
        "version. Only return your final work without additional comments.",
    llm_config=llm_config,
)

In [ ]:





reply = writer.generate_reply(messages=[{"content": task, "role": "user"}])

In [ ]:





print(reply)

Adding reflection

Create a critic agent to reflect on the work of the writer agent.

In [ ]:





critic = autogen.AssistantAgent(
    name="Critic",
    is_termination_msg=lambda x: x.get("content", "").find("TERMINATE") >= 0,
    llm_config=llm_config,
    system_message="You are a critic. You review the work of "
                "the writer and provide constructive "
                "feedback to help improve the quality of the content.",
)

In [ ]:





res = critic.initiate_chat(
    recipient=writer,
    message=task,
    max_turns=2,
    summary_method="last_msg"
)

Nested chat

In [ ]:





SEO_reviewer = autogen.AssistantAgent(
    name="SEO Reviewer",
    llm_config=llm_config,
    system_message="You are an SEO reviewer, known for "
        "your ability to optimize content for search engines, "
        "ensuring that it ranks well and attracts organic traffic. " 
        "Make sure your suggestion is concise (within 3 bullet points), "
        "concrete and to the point. "
        "Begin the review by stating your role.",
)

In [ ]:





legal_reviewer = autogen.AssistantAgent(
    name="Legal Reviewer",
    llm_config=llm_config,
    system_message="You are a legal reviewer, known for "
        "your ability to ensure that content is legally compliant "
        "and free from any potential legal issues. "
        "Make sure your suggestion is concise (within 3 bullet points), "
        "concrete and to the point. "
        "Begin the review by stating your role.",
)

In [ ]:





ethics_reviewer = autogen.AssistantAgent(
    name="Ethics Reviewer",
    llm_config=llm_config,
    system_message="You are an ethics reviewer, known for "
        "your ability to ensure that content is ethically sound "
        "and free from any potential ethical issues. " 
        "Make sure your suggestion is concise (within 3 bullet points), "
        "concrete and to the point. "
        "Begin the review by stating your role. ",
)

In [ ]:





meta_reviewer = autogen.AssistantAgent(
    name="Meta Reviewer",
    llm_config=llm_config,
    system_message="You are a meta reviewer, you aggragate and review "
    "the work of other reviewers and give a final suggestion on the content.",
)

Orchestrate the nested chats to solve the task

In [ ]:





def reflection_message(recipient, messages, sender, config):
    return f'''Review the following content. 
            \n\n {recipient.chat_messages_for_summary(sender)[-1]['content']}'''
review_chats = [
    {
     "recipient": SEO_reviewer, 
     "message": reflection_message, 
     "summary_method": "reflection_with_llm",
     "summary_args": {"summary_prompt" : 
        "Return review into as JSON object only:"
        "{'Reviewer': '', 'Review': ''}. Here Reviewer should be your role",},
     "max_turns": 1},
    {
    "recipient": legal_reviewer, "message": reflection_message, 
     "summary_method": "reflection_with_llm",
     "summary_args": {"summary_prompt" : 
        "Return review into as JSON object only:"
        "{'Reviewer': '', 'Review': ''}.",},
     "max_turns": 1},
    {"recipient": ethics_reviewer, "message": reflection_message, 
     "summary_method": "reflection_with_llm",
     "summary_args": {"summary_prompt" : 
        "Return review into as JSON object only:"
        "{'reviewer': '', 'review': ''}",},
     "max_turns": 1},
     {"recipient": meta_reviewer, 
      "message": "Aggregrate feedback from all reviewers and give final suggestions on the writing.", 
     "max_turns": 1},
]

In [ ]:





critic.register_nested_chats(
    review_chats,
    trigger=writer,
)

Note: You might get a slightly different response than what’s shown in the video. Feel free to try a different task.

In [ ]:





res = critic.initiate_chat(
    recipient=writer,
    message=task,
    max_turns=2,
    summary_method="last_msg"
)

Get the summary

In [ ]:





print(res.summary)

Lesson 4: Tool Use by AI agents to play conversational chess

Tool Use in Conversational Chess between AI agents showcases how agents can enhance their interactions by integrating external tools. In this setup, a chess-playing agent engages in dialogue with an opponent agent, while calling a chess engine or game logic tool to validate moves, update the board state, and suggest strategies.

Using the tool use pattern in agentic design, agents combine natural conversation with precise, rule-based gameplay—demonstrating how AI agents can access and apply external functions to deliver rich, interactive experiences like a real-time chess match.
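
Stripped of the LLM, the tool-use pattern boils down to a registry of named functions that an agent can invoke by name. The minimal dispatcher below is a hypothetical sketch (the `register_tool`/`call_tool` names and the example tools are illustrative, not AutoGen's API):

```python
# Tool-use sketch: register named tools, then dispatch calls by name.

TOOLS = {}

def register_tool(name, fn, description=""):
    TOOLS[name] = {"fn": fn, "description": description}

def call_tool(name, *args, **kwargs):
    if name not in TOOLS:
        raise KeyError(f"Unknown tool: {name}")
    return TOOLS[name]["fn"](*args, **kwargs)

# Example tools, loosely analogous to the chess helpers used below.
register_tool("add", lambda a, b: a + b, "Add two numbers.")
register_tool("shout", str.upper, "Uppercase a string.")

print(call_tool("add", 2, 3))       # 5
print(call_tool("shout", "check"))  # CHECK
```

AutoGen's `register_function`, used later in this lesson, performs the same registration but splits the roles into a caller agent (which proposes the call) and an executor agent (which runs it).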

Example: Conversational Chess

Setup

llm_config = {"model": "gpt-4-turbo"}

import chess
import chess.svg
from typing_extensions import Annotated

Initialize the chess board

board = chess.Board()
made_move = False

Define the needed tools

1. Tool for getting legal moves

def get_legal_moves() -> Annotated[str, "A list of legal moves in UCI format"]:
    return "Possible moves are: " + ",".join(
        [str(move) for move in board.legal_moves]
    )

2. Tool for making a move on the board

def make_move(
    move: Annotated[str, "A move in UCI format."]
) -> Annotated[str, "Result of the move."]:
    move = chess.Move.from_uci(move)
    board.push_uci(str(move))
    global made_move
    made_move = True
    
    # Display the board.
    display(
        chess.svg.board(
            board,
            arrows=[(move.from_square, move.to_square)],
            fill={move.from_square: "gray"},
            size=200
        )
    )
    
    # Get the piece name.
    piece = board.piece_at(move.to_square)
    piece_symbol = piece.unicode_symbol()
    piece_name = (
        chess.piece_name(piece.piece_type).capitalize()
        if piece_symbol.isupper()
        else chess.piece_name(piece.piece_type)
    )
    return f"Moved {piece_name} ({piece_symbol}) from "\
    f"{chess.SQUARE_NAMES[move.from_square]} to "\
    f"{chess.SQUARE_NAMES[move.to_square]}."

Create agents

You will create the player agents and a board proxy agent for the chess board.

from autogen import ConversableAgent
# Player white agent
player_white = ConversableAgent(
    name="Player White",
    system_message="You are a chess player and you play as white. "
    "First call get_legal_moves(), to get a list of legal moves. "
    "Then call make_move(move) to make a move.",
    llm_config=llm_config,
)
# Player black agent
player_black = ConversableAgent(
    name="Player Black",
    system_message="You are a chess player and you play as black. "
    "First call get_legal_moves(), to get a list of legal moves. "
    "Then call make_move(move) to make a move.",
    llm_config=llm_config,
)
def check_made_move(msg):
    global made_move
    if made_move:
        made_move = False
        return True
    else:
        return False
board_proxy = ConversableAgent(
    name="Board Proxy",
    llm_config=False,
    is_termination_msg=check_made_move,
    default_auto_reply="Please make a move.",
    human_input_mode="NEVER",
)

Register the tools

A tool must be registered for the agent that calls the tool and the agent that executes the tool.

from autogen import register_function
for caller in [player_white, player_black]:
    register_function(
        get_legal_moves,
        caller=caller,
        executor=board_proxy,
        name="get_legal_moves",
        description="Get legal moves.",
    )
    
    register_function(
        make_move,
        caller=caller,
        executor=board_proxy,
        name="make_move",
        description="Call this tool to make a move.",
    )
player_black.llm_config["tools"]

Register the nested chats

Each player agent will have a nested chat with the board proxy agent to make moves on the chess board.

player_white.register_nested_chats(
    trigger=player_black,
    chat_queue=[
        {
            "sender": board_proxy,
            "recipient": player_white,
            "summary_method": "last_msg",
        }
    ],
)
player_black.register_nested_chats(
    trigger=player_white,
    chat_queue=[
        {
            "sender": board_proxy,
            "recipient": player_black,
            "summary_method": "last_msg",
        }
    ],
)

Start the Game

The game will start with the first message.

Note: In this lesson, you will use GPT 4 for better results. Please note that the lesson has a quota limit. If you want to explore the code in this lesson further, we recommend trying it locally with your own API key.

Note: You might get slightly different moves than what's shown in the video.

board = chess.Board()
chat_result = player_black.initiate_chat(
    player_white,
    message="Let's play chess! Your move.",
    max_turns=2,
)

Adding a fun chitchat to the game!

player_white = ConversableAgent(
    name="Player White",
    system_message="You are a chess player and you play as white. "
    "First call get_legal_moves(), to get a list of legal moves. "
    "Then call make_move(move) to make a move. "
    "After a move is made, chitchat to make the game fun.",
    llm_config=llm_config,
)
player_black = ConversableAgent(
    name="Player Black",
    system_message="You are a chess player and you play as black. "
    "First call get_legal_moves(), to get a list of legal moves. "
    "Then call make_move(move) to make a move. "
    "After a move is made, chitchat to make the game fun.",
    llm_config=llm_config,
)
for caller in [player_white, player_black]:
    register_function(
        get_legal_moves,
        caller=caller,
        executor=board_proxy,
        name="get_legal_moves",
        description="Get legal moves.",
    )
    register_function(
        make_move,
        caller=caller,
        executor=board_proxy,
        name="make_move",
        description="Call this tool to make a move.",
    )
player_white.register_nested_chats(
    trigger=player_black,
    chat_queue=[
        {
            "sender": board_proxy,
            "recipient": player_white,
            "summary_method": "last_msg",
            "silent": True,
        }
    ],
)
player_black.register_nested_chats(
    trigger=player_white,
    chat_queue=[
        {
            "sender": board_proxy,
            "recipient": player_black,
            "summary_method": "last_msg",
            "silent": True,
        }
    ],
)
board = chess.Board()
chat_result = player_black.initiate_chat(
    player_white,
    message="Let's play chess! Your move.",
    max_turns=2,
)

Note: To add human input to this game, add human_input_mode="ALWAYS" for both player agents.

Lesson 5: Coding and Financial Analysis by AI Agents

Coding and Financial Analysis using agentic design patterns involves AI agents working collaboratively to generate, execute, and interpret code for financial tasks. A planner agent outlines the analysis steps, a coder agent writes Python scripts (e.g., for stock data retrieval or trend analysis), and an evaluator agent verifies results and insights.

By applying patterns like tool use, reflection, and multi-agent collaboration, these agents automate complex financial workflows—transforming raw data into actionable insights with minimal human input. This approach showcases the power of agentic AI in streamlining technical, data-driven processes.
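
The write-execute-repair loop at the heart of this lesson can be sketched in plain Python. This is a hypothetical illustration: `write_code` stands in for the coder agent, and `exec` stands in for the executor agent running generated code and reporting errors back:

```python
# Coder/executor loop sketch: run generated code, feed errors back, retry.

def write_code(feedback=None):
    # Stand-in for an LLM coder: the first draft has a bug, the retry fixes it.
    if feedback is None:
        return "result = 1 / 0"           # buggy first draft
    return "result = sum([1, 2, 3])"      # corrected draft

def run_until_success(max_attempts=3):
    feedback = None
    for _ in range(max_attempts):
        code = write_code(feedback)
        scope = {}
        try:
            exec(code, scope)             # executor agent's role
            return scope["result"]
        except Exception as exc:
            feedback = str(exc)           # error report back to the coder
    raise RuntimeError("No working code produced")

print(run_until_success())  # 6
```

In the AutoGen setup below, `code_executor_agent` and `code_writer_agent` play these two roles, exchanging code and execution results as chat messages.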

Setup

llm_config = {"model": "gpt-4-turbo"}

Define a code executor

from autogen.coding import LocalCommandLineCodeExecutor

executor = LocalCommandLineCodeExecutor(
    timeout=60,
    work_dir="coding",
)

Create agents

from autogen import ConversableAgent, AssistantAgent

1. Agent with code executor configuration

code_executor_agent = ConversableAgent(
    name="code_executor_agent",
    llm_config=False,
    code_execution_config={"executor": executor},
    human_input_mode="ALWAYS",
    default_auto_reply=
    "Please continue. If everything is done, reply 'TERMINATE'.",
)

2. Agent with code writing capability

code_writer_agent = AssistantAgent(
    name="code_writer_agent",
    llm_config=llm_config,
    code_execution_config=False,
    human_input_mode="NEVER",
)

code_writer_agent_system_message = code_writer_agent.system_message
print(code_writer_agent_system_message)
The task!

Ask the two agents to collaborate on a stock analysis task.

import datetime

today = datetime.datetime.now().date()
message = f"Today is {today}. "\
    "Create a plot showing stock gain YTD for NVDA and TSLA. "\
    "Make sure the code is in markdown code block and save the figure"\
    " to a file ytd_stock_gains.png."
Note: In this lesson, you will use GPT 4 for better results. Please note that the lesson has a quota limit. If you want to explore the code in this lesson further, we recommend trying it locally with your own API key.

Note: You might see a different set of outputs than those shown in the video. The agents collaborate to generate the code needed for your task, and they might produce code with errors in the process. However, they will ultimately provide correct code.

chat_result = code_executor_agent.initiate_chat(
    code_writer_agent,
    message=message,
)
Let's see the plot!

Note:

  • Your plot might differ from the one shown in the video because the LLM's freestyle code generation could choose a different plot type, such as a bar plot.
  • You can re-run the previous cell and check the generated code. If it produces a bar plot, remember you can directly specify your preference by asking for a specific plot type instead of a bar plot.

import os
from IPython.display import Image

Image(os.path.join("coding", "ytd_stock_gains.png"))
Note: The agent will automatically save the code in a .py file and the plot in a .png file. To access and check the files generated by the agents, go to the File menu and select Open.... Then, open the folder named coding to find all the generated files.

User-Defined Functions

Instead of asking the LLM to generate the code for downloading stock data and plotting charts each time, you can define functions for these two tasks and have the LLM call these functions in the code.

def get_stock_prices(stock_symbols, start_date, end_date):
    """Get the stock prices for the given stock symbols between
    the start and end dates.

    Args:
        stock_symbols (str or list): The stock symbols to get the
            prices for.
        start_date (str): The start date in the format
            'YYYY-MM-DD'.
        end_date (str): The end date in the format 'YYYY-MM-DD'.

    Returns:
        pandas.DataFrame: The stock prices for the given stock
            symbols indexed by date, with one column per stock
            symbol.
    """
    import yfinance

    stock_data = yfinance.download(
        stock_symbols, start=start_date, end=end_date
    )
    return stock_data.get("Close")

def plot_stock_prices(stock_prices, filename):
    """Plot the stock prices for the given stock symbols.

    Args:
        stock_prices (pandas.DataFrame): The stock prices for the
            given stock symbols.
        filename (str): The file to save the plot to.
    """
    import matplotlib.pyplot as plt

    plt.figure(figsize=(10, 5))
    for column in stock_prices.columns:
        plt.plot(
            stock_prices.index, stock_prices[column], label=column
        )
    plt.title("Stock Prices")
    plt.xlabel("Date")
    plt.ylabel("Price")
    plt.grid(True)
    plt.savefig(filename)
Create a new executor with the user-defined functions

executor = LocalCommandLineCodeExecutor(
    timeout=60,
    work_dir="coding",
    functions=[get_stock_prices, plot_stock_prices],
)
code_writer_agent_system_message += executor.format_functions_for_prompt()
print(code_writer_agent_system_message)
Let's update the agents with the new system message

code_writer_agent = ConversableAgent(
    name="code_writer_agent",
    system_message=code_writer_agent_system_message,
    llm_config=llm_config,
    code_execution_config=False,
    human_input_mode="NEVER",
)
code_executor_agent = ConversableAgent(
    name="code_executor_agent",
    llm_config=False,
    code_execution_config={"executor": executor},
    human_input_mode="ALWAYS",
    default_auto_reply=
    "Please continue. If everything is done, reply 'TERMINATE'.",
)
Start the same task again!

chat_result = code_executor_agent.initiate_chat(
    code_writer_agent,
    message=f"Today is {today}. "
    "Download the stock prices YTD for NVDA and TSLA and create "
    "a plot. Make sure the code is in markdown code block and "
    "save the figure to a file stock_prices_YTD_plot.png.",
)

Plot the results

Image(os.path.join("coding", "stock_prices_YTD_plot.png"))
Note: The agent will automatically save the code in a .py file and the plot in a .png file. To access and check the files generated by the agents, go to the File menu and select Open.... Then, open the folder named coding to find all the generated files.

Lesson 6: Planning and stock report generation by AI agents

Planning and Stock Report Generation using agentic design patterns involves AI agents collaborating to produce a comprehensive financial report. A planner agent first breaks down the task into structured steps—such as data collection, trend analysis, and summary writing. Specialized agents then execute each step, using tools like APIs or code to fetch stock data, perform analysis, and generate insights.

This process leverages the planning pattern, enabling the system to organize tasks logically and adaptively. The result is a well-structured, automated stock report—demonstrating how agentic AI can replicate complex analytical workflows with clarity and precision.
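
The planner/executor division of labor can be sketched independently of AutoGen: a planner emits ordered, typed steps, and specialized handlers execute them in turn. The `plan` function and the handler names below are hypothetical stand-ins for the LLM-driven agents in this lesson:

```python
# Planning sketch: a planner decomposes the task; handlers run each step.

def plan(task):
    # Stand-in for an LLM planner producing ordered, typed steps.
    return [("fetch", task), ("analyze", task), ("write", task)]

HANDLERS = {
    "fetch":   lambda t: f"raw data for: {t}",
    "analyze": lambda t: f"trends derived from: {t}",
    "write":   lambda t: f"report on: {t}",
}

def execute(task):
    outputs = []
    for kind, payload in plan(task):
        outputs.append(HANDLERS[kind](payload))  # dispatch to a specialist
    return outputs

for line in execute("NVDA past-month performance"):
    print(line)
```

In the group chat below, the Planner agent plays the role of `plan`, while the Engineer, Executor, and Writer agents act as the specialized handlers.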

Setup

llm_config = {"model": "gpt-4-turbo"}

The task!

task = "Write a blogpost about the stock price performance of "\
    "Nvidia in the past month. Today's date is 2024-04-23."
Build a group chat

This group chat will include these agents:

  • User_proxy or Admin: to allow the user to comment on the report and ask the writer to refine it.
  • Planner: to determine relevant information needed to complete the task.
  • Engineer: to write code using the defined plan by the planner.
  • Executor: to execute the code written by the engineer.
  • Writer: to write the report.

import autogen

user_proxy = autogen.ConversableAgent(
    name="Admin",
    system_message="Give the task, and send "
    "instructions to writer to refine the blog post.",
    code_execution_config=False,
    llm_config=llm_config,
    human_input_mode="ALWAYS",
)
planner = autogen.ConversableAgent(
    name="Planner",
    system_message="Given a task, please determine "
    "what information is needed to complete the task. "
    "Please note that the information will all be retrieved using"
    " Python code. Please only suggest information that can be "
    "retrieved using Python code. "
    "After each step is done by others, check the progress and "
    "instruct the remaining steps. If a step fails, try to "
    "work around it.",
    description="Planner. Given a task, determine what "
    "information is needed to complete the task. "
    "After each step is done by others, check the progress and "
    "instruct the remaining steps.",
    llm_config=llm_config,
)
engineer = autogen.AssistantAgent(
    name="Engineer",
    llm_config=llm_config,
    description="An engineer that writes code based on the plan "
    "provided by the planner.",
)
Note: In this lesson, you'll use an alternative method of code execution by providing a dict config. However, you can always use the LocalCommandLineCodeExecutor if you prefer. For more details about code_execution_config, check this: https://microsoft.github.io/autogen/docs/reference/agentchat/conversable_agent/#__init__

executor = autogen.ConversableAgent(
    name="Executor",
    system_message="Execute the code written by the "
    "engineer and report the result.",
    human_input_mode="NEVER",
    code_execution_config={
        "last_n_messages": 3,
        "work_dir": "coding",
        "use_docker": False,
    },
)
writer = autogen.ConversableAgent(
    name="Writer",
    llm_config=llm_config,
    system_message="Writer. "
    "Please write blogs in markdown format (with relevant titles)"
    " and put the content in pseudo ```md``` code block. "
    "You take feedback from the admin and refine your blog.",
    description="Writer. "
    "Write blogs based on the code execution results and take "
    "feedback from the admin to refine the blog.",
)
Define the group chat

groupchat = autogen.GroupChat(
    agents=[user_proxy, engineer, writer, executor, planner],
    messages=[],
    max_round=10,
)
manager = autogen.GroupChatManager(
    groupchat=groupchat, llm_config=llm_config
)

Start the group chat!
Note: In this lesson, you will use GPT 4 for better results. Please note that the lesson has a quota limit. If you want to explore the code in this lesson further, we recommend trying it locally with your own API key.

groupchat_result = user_proxy.initiate_chat(
    manager,
    message=task,
)
Add a speaker selection policy

user_proxy = autogen.ConversableAgent(
    name="Admin",
    system_message="Give the task, and send "
    "instructions to writer to refine the blog post.",
    code_execution_config=False,
    llm_config=llm_config,
    human_input_mode="ALWAYS",
)

planner = autogen.ConversableAgent(
    name="Planner",
    system_message="Given a task, please determine "
    "what information is needed to complete the task. "
    "Please note that the information will all be retrieved using"
    " Python code. Please only suggest information that can be "
    "retrieved using Python code. "
    "After each step is done by others, check the progress and "
    "instruct the remaining steps. If a step fails, try to "
    "work around it.",
    description="Given a task, determine what "
    "information is needed to complete the task. "
    "After each step is done by others, check the progress and "
    "instruct the remaining steps.",
    llm_config=llm_config,
)

engineer = autogen.AssistantAgent(
    name="Engineer",
    llm_config=llm_config,
    description="Write code based on the plan "
    "provided by the planner.",
)

writer = autogen.ConversableAgent(
    name="Writer",
    llm_config=llm_config,
    system_message="Writer. "
    "Please write blogs in markdown format (with relevant titles)"
    " and put the content in pseudo ```md``` code block. "
    "You take feedback from the admin and refine your blog.",
    description="After all the info is available, "
    "write blogs based on the code execution results and take "
    "feedback from the admin to refine the blog.",
)

executor = autogen.ConversableAgent(
    name="Executor",
    description="Execute the code written by the "
    "engineer and report the result.",
    human_input_mode="NEVER",
    code_execution_config={
        "last_n_messages": 3,
        "work_dir": "coding",
        "use_docker": False,
    },
)
groupchat = autogen.GroupChat(
    agents=[user_proxy, engineer, writer, executor, planner],
    messages=[],
    max_round=10,
    allowed_or_disallowed_speaker_transitions={
        user_proxy: [engineer, writer, executor, planner],
        engineer: [user_proxy, executor],
        writer: [user_proxy, planner],
        executor: [user_proxy, engineer, planner],
        planner: [user_proxy, engineer, writer],
    },
    speaker_transitions_type="allowed",
)
manager = autogen.GroupChatManager(
    groupchat=groupchat, llm_config=llm_config
)

groupchat_result = user_proxy.initiate_chat(
    manager,
    message=task,
)

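
The allowed-transitions table above amounts to an adjacency map over agents: the group chat manager may only hand the floor from the current speaker to one of its listed neighbors. A hypothetical plain-Python checker makes the constraint explicit:

```python
# Speaker-selection sketch: only transitions listed in the graph are legal.

ALLOWED = {
    "Admin":    ["Engineer", "Writer", "Executor", "Planner"],
    "Engineer": ["Admin", "Executor"],
    "Writer":   ["Admin", "Planner"],
    "Executor": ["Admin", "Engineer", "Planner"],
    "Planner":  ["Admin", "Engineer", "Writer"],
}

def next_speaker_ok(current, candidate):
    return candidate in ALLOWED.get(current, [])

print(next_speaker_ok("Engineer", "Executor"))  # True
print(next_speaker_ok("Writer", "Executor"))    # False
```

Constraining transitions this way keeps the conversation on a sensible path, for example forcing the Engineer's code to go to the Executor rather than straight to the Writer.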
Note: You might experience slightly different interactions between the agents. The engineer agent may write incorrect code, which the executor agent will report and send back for correction. This process could go through multiple rounds.

Conclusion on AI Agentic Design Patterns


This AI Agentic Design Patterns with AutoGen blog offers a powerful introduction to building intelligent, collaborative AI systems using large language models. By exploring key design patterns like reflection, tool use, planning, and multi-agent collaboration, learners can gain hands-on experience in constructing structured, modular workflows that mirror human-like reasoning and coordination.

Whether you’re automating customer support, analyzing data, or creating interactive applications, these agentic design principles provide a flexible foundation for solving real-world problems. By the end of the blog, you will have not only understood the logic behind agentic AI but also gained practical skills to start building your own multi-agent systems with confidence using the AutoGen framework.

This blog is a valuable stepping stone for developers, data scientists, and AI enthusiasts eager to advance their capabilities in the evolving landscape of intelligent agents.

Contact us at smarttechaigroup@gmail.com for any specific project requirements.
