This blog offers an overview of multi-agent systems, highlighting their advantages and how they differ from single-agent systems. It includes three examples and provides instructions for developing multi-agent systems using various methods.
Blog Summary:
In this blog, we will explore the exciting world of multi-agent systems, starting with an overview of their advantages and how they differ from single-agent systems.
We’ll also discuss the different ways enterprises can integrate AI agents based on their business needs, along with AI use cases for both SaaS (Copilot) and PaaS (Azure AI Foundry, Azure OpenAI, VS Code, Python), including custom orchestration layers and frameworks like AutoGen, Semantic Kernel, and LangChain.
By the end of this blog, you’ll have a comprehensive understanding of multi-agent systems and be ready to create your first multi-agent system using Python and Azure OpenAI.
AI use case:
Before we dive in, let’s understand the use case for developing AI agents.
There are two approaches to developing these AI solutions:
- SaaS (Copilot): AI agents can be integrated into SaaS solutions like Microsoft 365 Copilot, providing users with intelligent assistance within their applications.
- PaaS (Azure AI Foundry, Azure OpenAI, VS Code, Python): Developers can use PaaS solutions to create custom AI agents. Tools like Azure AI Foundry, Azure OpenAI, Visual Studio Code, and Python offer the flexibility to build and deploy sophisticated AI agents tailored to specific requirements.
See the use case below for developing your own custom AI agents (SaaS or PaaS).

Refer to this link for more details: AI Strategy – Process to develop an AI strategy – Cloud Adoption Framework
Note: We are referring to the PaaS solution here.
What are Agents?
If you are new to AI agents and want to learn more, refer to my previous blog to understand what an AI agent is and how to create your first one.
Create Your First AI Agent with Azure AI Service – Rajeev Singh | Coder, Blogger, YouTuber
Let’s focus on Multi-Agent now.
What is Multi-Agent?
Multi-agent systems involve multiple agents interacting and collaborating, which is beneficial for complex tasks requiring sophisticated coordination and specialized expertise.
Example of a Multi-Agent System
Let’s look at the example below of a multi-agent system.
This example is from the Microsoft Learn quickstart: Get Started with Multi-agent Applications
For the Contoso Creative Writer app, the goal is to help the marketing team at the fictitious company write well-researched, product-specific articles. The Contoso Creative Writer app consists of agents that help achieve this goal.

Multiple agents are created in this solution, and you need to manage the workflow between them (sending start and completion messages for each agent task, invoking agents, and many more tasks).
Moving from Single Agent to Multi-Agent?
Now that we have an idea of what a multi-agent system is, let’s understand why we need one.
Creating a multi-agent system can offer several advantages over a single-agent system, depending on the complexity and requirements of your project.
Here are some key benefits:
Advantages of Multi-Agent Systems
- Specialization:
- Each agent can specialize in specific tasks. For example, one agent can focus on data analysis while another handles data visualization. This allows for more efficient and effective task execution.
- Parallel Processing:
- Multiple agents can work simultaneously on different parts of a task, leading to faster overall processing times. This is particularly useful for large data sets or complex workflows.
- Scalability:
- Multi-agent systems can be scaled more easily. You can add more agents to handle increased workload or to introduce new functionalities without overloading a single agent.
- Robustness:
- If one agent fails, others can continue to operate, making the system more robust and fault tolerant. This reduces the risk of a single point of failure.
- Collaboration:
- Agents can collaborate and share information to achieve a common goal. This can lead to more comprehensive and accurate results, as different agents bring their unique capabilities to the table.
Comparison with Single-Agent Systems
- Single-Agent Systems: Simpler to implement and manage, suitable for straightforward tasks with lower complexity and workload.
- Multi-Agent Systems: More complex to set up but offer greater efficiency, scalability, and robustness for complex and high-demand tasks.
Deep dive: How is a Multi-Agent system different from a simple Chat/Single-Agent app?
Now that we know what a multi-agent system is and why we need one, let’s explore how it differs from a simple chat app that uses only one agent.
The example here is from Microsoft Learn: Get Started with Multi-agent Applications

You can see that the difference between this architecture and a simple chat app lies in the orchestration (see the processing services) required to process the user request (prompt) in this application:
- The prompt query is expanded to extract relevant article query terms and relevant products retrieved through Bing Search and Azure AI Search.
- The expanded query is sent to a writer agent (chat model). The writer uses the provided query and grounding context to generate a draft article based on the designed prompt template.
- The draft article is sent to an editor agent (chat model). The editor assesses the article for acceptance based on the designed prompt template.
- An approved article is published as a blog post. The user interface enables you to view the progression of these tasks visually, so you can get an intuitive sense of the multi-agent coordination.
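The four steps above can be sketched as a plain-Python pipeline. Everything below is a hypothetical stand-in for the real sample (which uses Prompty templates, Bing Search, and Azure AI Search for grounding), intended only to show the shape of the orchestration:

```python
# Hypothetical sketch of the Contoso Creative Writer flow; the agent
# functions are stand-ins, not the real sample's implementation.

def research(query: str) -> dict:
    # Stand-in for query expansion + retrieval (Bing Search / Azure AI Search)
    return {"query": query, "context": [f"fact 1 about {query}", f"fact 2 about {query}"]}

def write(research_data: dict) -> str:
    # Stand-in for the writer agent (chat model + prompt template)
    return f"Draft article on {research_data['query']} using {len(research_data['context'])} sources."

def edit(draft: str) -> tuple:
    # Stand-in for the editor agent: accept the draft or request changes
    accepted = draft.startswith("Draft article")
    return accepted, draft if accepted else "Rejected: " + draft

def run_pipeline(query: str) -> str:
    research_data = research(query)   # 1. expand the query and ground it
    draft = write(research_data)      # 2. writer agent drafts the article
    accepted, article = edit(draft)   # 3. editor agent reviews the draft
    return article if accepted else "Needs revision"  # 4. publish if approved

print(run_pipeline("winter hiking gear"))
# → Draft article on winter hiking gear using 2 sources.
```

The point of the sketch is that the orchestration itself is just sequencing and hand-offs; the real work lives inside the agents.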
There are many other good reads about multi-agent architecture and how it works; we’ll cover those in another blog post.
Developing Multi-Agents
So far, we have covered lots of details on multi-agent systems; now let’s understand what is required to develop one and get started with the development of your first multi-agent. 😊
- Manual Orchestration
- Using Prompty
- Using Any Orchestration Layer (Semantic Kernel, AutoGen, LangChain etc.)
Let’s see the summary of different options:
| Approach | Brief Description | Components Used | When to Use | When Not to Use |
| --- | --- | --- | --- | --- |
| Prompty (Python) | A Python-based framework for creating multi-agent systems using prompt engineering. | Python, OpenAI API, custom prompt templates. | When you need flexibility and customization in prompt design. | When you require high-level abstraction or integration with other platforms. |
| Semantic Kernel | A framework for building AI applications with semantic understanding and reasoning capabilities. | Semantic models, knowledge graphs, reasoning engines. | When you need advanced semantic understanding and reasoning. | When you need simple, straightforward solutions. |
| AutoGen | An automated framework for generating and managing AI agents. | Pre-trained models, automation scripts, orchestration tools. | When you need rapid deployment and management of AI agents. | When you need highly customized or specialized agents. |
| LangChain | A framework for building applications with language models using chains of prompts. | Language models, chaining mechanisms, prompt templates. | When you need to build complex workflows with language models. | When you need simple, single-step interactions. |
| Hugging Face Transformers | A library for using pre-trained transformer models for various NLP tasks. | Pre-trained transformer models, tokenizers, pipelines. | When you need state-of-the-art NLP capabilities. | When you need lightweight or simple solutions. |
Note: For creating your first multi-agent, we are not using any of these approaches. The idea is to give you an overview of these concepts.
What is an Orchestration Layer?
As explained earlier, when you plan for a multi-agent system, you need to manage the workflow and communication between the agents, among many other tasks; this means you need some kind of orchestration layer. We will cover the manual approach, where you create your own orchestration layer, and understand its pain points and limitations.
Multi-agent orchestration frameworks:
Azure AI Agent Service works out-of-the-box with multi-agent orchestration frameworks that are wireline compatible with the Assistants API, such as AutoGen, a state-of-the-art research SDK for Python created by Microsoft Research, and Semantic Kernel, an enterprise AI SDK for Python, .NET, and Java.
When building a new multi-agent solution, start by building singleton agents with Azure AI Agent Service to get the most reliable, scalable, and secure agents. We did this in my previous blog; refer to Create Your First AI Agent with Azure AI Service.
You can then orchestrate these agents together. AutoGen is constantly evolving to find the best collaboration patterns for agents (and humans) to work together. Features that show production value with AutoGen can then be moved into Semantic Kernel if you’re looking for production support and non-breaking changes.

If you want to explore more on this, refer to this post from Microsoft:
Azure AI Agent Service: Revolutionizing AI Agent Development and Deployment
Demo: Getting started with Multi-Agent Development
We will explore how to create a multi-agent system without any orchestration framework such as Semantic Kernel, LangChain, or AutoGen.
Instead, we will create an additional layer, called the orchestration layer, that handles the workflow between multiple agents.
Project Structure:
The table below explains the high-level project structure used to complete this demo.
| Code | Description |
| --- | --- |
| main.py | The main entry point of the application; initializes the agents and the orchestrator and runs the workflow. |
| researcher_agent.py / writer_agent.py | Define the agent logic for handling tasks. |
| orchestrator.py | Orchestrates the workflow between multiple agents. |
| config.py | Contains configuration settings for the application. |
| requirements.txt | Lists the dependencies required for the application. (Optional) |
What additional configurations are required with this approach?
Here are the key requirements and additional configurations you need to consider:
- Manual Orchestration:
- Description: Without an orchestration framework, you’ll need to manually coordinate the interactions between agents. This involves defining the sequence of tasks and managing communication between agents.
- Implementation: Implement a custom orchestrator in your code to handle task assignments and results aggregation.
- Agent Communication:
- Description: Ensure that agents can communicate effectively. This includes passing data between agents and handling responses.
- Implementation: Use inter-process communication (IPC) mechanisms or shared data structures to facilitate communication.
- Task Management:
- Description: Manage the lifecycle of tasks, including initiation, execution, and completion.
- Implementation: Create a task manager module to track the status of tasks and ensure they are completed in the correct order.
- Error Handling and Recovery:
- Description: Implement robust error handling to manage failures and ensure the system can recover gracefully.
- Implementation: Add try-except blocks and logging to capture and handle errors. Implement retry mechanisms for failed tasks.
- Configuration Management:
- Description: Manage configuration settings for agents and tasks.
- Implementation: Use a configuration file (e.g., config.py) to store settings and parameters. Ensure the configuration is easily adjustable.
- Logging and Monitoring:
- Description: Implement logging and monitoring to track the performance and behavior of agents.
- Implementation: Use logging libraries to capture detailed logs. Set up monitoring tools to visualize the system’s performance.
- Security and Authentication:
- Description: Ensure secure communication and authentication between agents.
- Implementation: Use secure communication protocols (e.g., HTTPS) and implement authentication mechanisms (e.g., API keys, OAuth).
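As a minimal illustration of the error-handling and retry points above, here is a sketch of a retry wrapper a custom orchestrator might use. The flaky_research function is hypothetical and exists only to demonstrate the wrapper:

```python
import logging
import time

logging.basicConfig(level=logging.INFO)
logger = logging.getLogger("orchestrator")

def run_with_retries(task_fn, *args, max_retries=3, delay_seconds=0.1):
    """Run an agent task, retrying on failure (see 'Error Handling and Recovery')."""
    for attempt in range(1, max_retries + 1):
        try:
            return task_fn(*args)
        except Exception as exc:
            logger.warning("Attempt %d/%d failed: %s", attempt, max_retries, exc)
            if attempt == max_retries:
                raise  # give up after the final attempt
            time.sleep(delay_seconds)

# Hypothetical flaky agent task: fails once, then succeeds
calls = {"count": 0}
def flaky_research(topic):
    calls["count"] += 1
    if calls["count"] < 2:
        raise RuntimeError("transient failure")
    return f"Research notes on {topic}"

print(run_with_retries(flaky_research, "climate change"))
# → Research notes on climate change (after one logged retry)
```

The same wrapper can sit around every agent call in the orchestrator, giving you one place to add logging, timeouts, or backoff.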
Demo:
Below are the three examples covered in this section.
| # | Demo Name | Description |
| --- | --- | --- |
| 1 | Creating your First Multi-Agent | Focus on: how to create and initialize agents, create and initialize the orchestration, and run the workflow. |
| 2 | Multi-Agent, FoodAgent and MealSuggestion | Same as above, but an enhanced version with more logic built into the orchestration. This doesn’t use any model (gpt-4o, etc.). |
| 3 | Multi-Agent, FoodAgent and MealSuggestion with an OpenAI model | Standard code showing the use of an OpenAI model, how to route requests to agents based on intent, and how to answer any other LLM query as well. |
Demo1: Creating your First Multi-Agent
Prerequisites
- Visual Studio Code
- Python
- Set Up a Virtual Environment (Optional but Recommended):
Create a virtual environment to manage dependencies:
```shell
python -m venv venv
```
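Activation differs by operating system; a sketch of the typical commands (assuming the environment is named venv as above):

```shell
# Create the virtual environment (if not done already)
python -m venv venv

# Activate it:
# macOS / Linux
. venv/bin/activate
# Windows (PowerShell): venv\Scripts\Activate.ps1
# Windows (cmd):        venv\Scripts\activate.bat
```

Once activated, any pip installs go into the isolated environment rather than your system Python.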

Code:
As explained earlier, the solution needs the following components/Python files:
- main.py
- researcher_agent.py
- writer_agent.py
- orchestrator.py
- config.py
- requirements.txt
main.py

```python
from config import settings
from researcher_agent import ResearcherAgent
from writer_agent import WriterAgent
from orchestrator import Orchestrator

# Initialize agents
researcher = ResearcherAgent(name="ResearcherAgent")
writer = WriterAgent(name="WriterAgent")

# Initialize orchestrator
orchestrator = Orchestrator(researcher, writer)

# Run the workflow
research_task = settings["research_task"]
write_task = settings["write_task"]
result = orchestrator.run(research_task, write_task)

# Print the result
print(result)
```

researcher_agent.py

```python
class ResearcherAgent:
    def __init__(self, name):
        self.name = name

    def perform_task(self, task):
        # Simulate researching information
        print(f"{self.name} is performing task: {task}")
        research_data = "Climate change refers to long-term changes in temperature and weather patterns."
        return research_data
```

writer_agent.py

```python
class WriterAgent:
    def __init__(self, name):
        self.name = name

    def perform_task(self, task, research_data):
        # Simulate writing a summary based on research data
        print(f"{self.name} is performing task: {task}")
        summary = f"Summary: {research_data}"
        return summary
```

orchestrator.py

```python
class Orchestrator:
    def __init__(self, researcher, writer):
        self.researcher = researcher
        self.writer = writer

    def run(self, research_task, write_task):
        # Research phase
        research_data = self.researcher.perform_task(research_task)
        # Writing phase
        summary = self.writer.perform_task(write_task, research_data)
        return summary
```

config.py

```python
# Configuration settings for the agents and tasks
settings = {
    "research_task": "Find information about climate change",
    "write_task": "Summarize the information about climate change"
}
```

requirements.txt
This file is not used in this demo; it can be used to install the prerequisites and dependencies for your project.
Run the program:
Execute the main script to run the workflow:

```shell
python main.py
```

Flow of the Code
- Initialization:
- The ResearcherAgent and WriterAgent are initialized with their respective names.
- The Orchestrator is initialized with the two agents.
- Running the Workflow:
- The orchestrator starts the workflow by calling the run method with the research and write tasks.
- Research Phase:
- The ResearcherAgent performs the research task: “Find information about climate change”.
- The output will be:
- ResearcherAgent is performing task: Find information about climate change.
- Writing Phase:
- The WriterAgent uses the research data provided by the ResearcherAgent to perform the write task: “Summarize the information about climate change”.
- The output will be:
- WriterAgent is performing task: Summarize the information about climate change.
- Final Output:
- The orchestrator collects the summary generated by the WriterAgent and prints the result.
- The final output will be Summary: Climate change refers to long-term changes in temperature and weather patterns.
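To sanity-check this flow end to end, here is the same demo inlined into a single file with an assertion on the final result (a convenience sketch for testing, not part of the project structure above):

```python
# Self-contained version of Demo 1, inlined into one file for easy testing.

class ResearcherAgent:
    def __init__(self, name):
        self.name = name

    def perform_task(self, task):
        print(f"{self.name} is performing task: {task}")
        return "Climate change refers to long-term changes in temperature and weather patterns."

class WriterAgent:
    def __init__(self, name):
        self.name = name

    def perform_task(self, task, research_data):
        print(f"{self.name} is performing task: {task}")
        return f"Summary: {research_data}"

class Orchestrator:
    def __init__(self, researcher, writer):
        self.researcher = researcher
        self.writer = writer

    def run(self, research_task, write_task):
        research_data = self.researcher.perform_task(research_task)
        return self.writer.perform_task(write_task, research_data)

orchestrator = Orchestrator(ResearcherAgent("ResearcherAgent"), WriterAgent("WriterAgent"))
result = orchestrator.run(
    "Find information about climate change",
    "Summarize the information about climate change",
)
assert result.startswith("Summary: Climate change")
print(result)
```

Running it prints the two task messages followed by the summary, matching the flow described above.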
Conclusion:
The idea is to get started and simply execute your first “Hello World”-style multi-agent.
Demo2: Multi-Agent, FoodAgent and MealSuggestion
Now that we have seen a simple multi-agent implementation, let’s create another example and understand the limitations of creating these agents without an orchestration framework (AutoGen, Semantic Kernel, LangChain). We will then focus on creating a multi-agent system using an orchestration framework, e.g., Semantic Kernel, in the next (upcoming) blog.
- food_agent.py: Contains the FoodAgent class with methods to get food information and calorie information.
- meal_suggestion_agent.py: Contains the MealSuggestionAgent class with methods to suggest meals and snacks.
- orchestrator.py: Contains the Orchestrator class that handles user input and delegates tasks to the appropriate agent. It also includes a simple loop to simulate a back-and-forth chat with the user.
Code
This demo uses static logic with no model; each agent responds based on the hard-coded rules.
food_agent.py

```python
class FoodAgent:
    def get_food_info(self, food):
        # Simulate fetching food information
        return f"The food {food} is rich in vitamins and minerals."

    def get_calories(self, food):
        # Simulate fetching calorie information
        return f"The food {food} contains approximately 200 calories per serving."
```

meal_suggestion_agent.py

```python
class MealSuggestionAgent:
    def suggest_meal(self, preference):
        # Simulate suggesting a meal based on preference
        if "vegetarian" in preference.lower():
            return "How about a delicious vegetarian stir-fry with tofu and vegetables?"
        elif "low-carb" in preference.lower():
            return "How about a grilled chicken salad with a variety of fresh greens?"
        else:
            return "How about a classic spaghetti Bolognese with a side of garlic bread?"

    def suggest_snack(self):
        # Simulate suggesting a snack
        return "How about some fresh fruit or a handful of nuts for a healthy snack?"
```

orchestrator.py

```python
from food_agent import FoodAgent
from meal_suggestion_agent import MealSuggestionAgent

class Orchestrator:
    def __init__(self):
        self.food_agent = FoodAgent()
        self.meal_suggestion_agent = MealSuggestionAgent()

    def handle_request(self, user_input):
        if "food info" in user_input.lower():
            food = user_input.split("food info")[-1].strip()
            return self.food_agent.get_food_info(food)
        elif "calories" in user_input.lower():
            food = user_input.split("calories")[-1].strip()
            return self.food_agent.get_calories(food)
        elif "suggest meal" in user_input.lower():
            preference = user_input.split("suggest meal")[-1].strip()
            return self.meal_suggestion_agent.suggest_meal(preference)
        elif "suggest snack" in user_input.lower():
            return self.meal_suggestion_agent.suggest_snack()
        else:
            return "I'm sorry, I didn't understand that request."

if __name__ == "__main__":
    orchestrator = Orchestrator()
    while True:
        user_input = input("User > ")
        if user_input.lower() in ["exit", "quit"]:
            break
        response = orchestrator.handle_request(user_input)
        print(f"Assistant > {response}")
```

Run the program
You can run the orchestrator.py file to start the multi-agent system. The orchestrator will handle user input and communicate with the appropriate agent to get the desired response.
Note:
This code doesn’t use any model for the agents; you can add a model and see the behavior of the program (see Demo 3).

Let’s understand the output.
- User Input: User > food info apple
- The program extracts “apple” and calls get_food_info("apple").
- The response “The food apple is rich in vitamins and minerals.” is printed.
Note: If you don’t include a food name in the input, the assistant responds without one; since no model is used here, the assistant can’t ask which food you meant 😊
- User Input: User > suggest meal
- The program doesn’t find a preference like “vegetarian”, so execution falls through to the else branch of suggest_meal.
- The response “How about a classic spaghetti Bolognese with a side of garlic bread?” is printed.
- User Input: User > suggest meal vegetarian
- This time we include “vegetarian” in the input; the program extracts “vegetarian” and calls suggest_meal("vegetarian").
- The response “How about a delicious vegetarian stir-fry with tofu and vegetables?” is printed.
You can ask a few more questions and check the response:
- User Input: User > suggest snack.
- The program calls suggest_snack().
- The response “How about some fresh fruit or a handful of nuts for a healthy snack?” is printed.
- User Input: User > exit
- The loop breaks, and the program ends.
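The routing behavior above comes down to simple string splitting; a few standalone checks (not part of the demo files) make the behavior explicit:

```python
# The routing relies on str.split(), so everything after the keyword
# becomes the argument; an empty remainder falls through to defaults.
user_input = "food info apple"
food = user_input.split("food info")[-1].strip()
assert food == "apple"

user_input = "suggest meal"            # no preference given
preference = user_input.split("suggest meal")[-1].strip()
assert preference == ""                # empty -> the else branch (spaghetti Bolognese)

user_input = "suggest meal vegetarian"
preference = user_input.split("suggest meal")[-1].strip()
assert "vegetarian" in preference.lower()   # -> vegetarian stir-fry
print("keyword routing checks passed")
```

This brittleness is exactly the limitation a model (Demo 3) or an orchestration framework removes.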
Flow of the Program
1. Initialization:
- When you run orchestrator.py, it first imports the FoodAgent and MealSuggestionAgent classes.
- An instance of the Orchestrator class is created, which in turn initializes instances of FoodAgent and MealSuggestionAgent.
2. User Input Loop:
- The program enters a while loop, continuously prompting the user for input with User >.
- The user can type in their request, such as asking for food information, calorie information, meal suggestions, or snack suggestions.
3. Handling Requests:
- The handle_request method of the Orchestrator class processes the user input.
- Depending on the keywords in the user input, the method calls the appropriate method from either FoodAgent or MealSuggestionAgent.
4. Processing Specific Requests:
- Food Information: If the user input contains “food info”, the program extracts the food item from the input and calls get_food_info from FoodAgent.
- Calorie Information: If the user input contains “calories”, the program extracts the food item from the input and calls get_calories from FoodAgent.
- Meal Suggestion: If the user input contains “suggest meal”, the program extracts the preference from the input and calls suggest_meal from MealSuggestionAgent.
- Snack Suggestion: If the user input contains “suggest snack”, the program calls suggest_snack from MealSuggestionAgent.
5. Output:
- The response from the appropriate method is printed to the console with Assistant >.
- The loop continues until the user types “exit” or “quit”, at which point the loop breaks and the program ends.
Demo3: Multi-Agent, FoodAgent and MealSuggestion with an OpenAI model
Why use a ChatGPT model?
When using a ChatGPT model, you can simplify the logic by leveraging the model’s natural language understanding capabilities.
Instead of writing explicit if-else conditions for each type of request, you can pass the entire user input to the model and let it determine the appropriate response.
Here’s an updated version of the code that simplifies the logic by using the ChatGPT model to handle all user inputs:
Prerequisites:
You will need to create an Azure OpenAI resource; refer to this URL to create the Azure OpenAI Service: How-to: Create and deploy an Azure OpenAI Service resource – Azure OpenAI
Once the Azure OpenAI service is created, deploy a model by following the steps in the section “Deploy a model.”
Code:
Code consists of three main classes: FoodAgent, MealSuggestionAgent, and Orchestrator.
- FoodAgent: Provides methods to fetch food information and calorie content.
- MealSuggestionAgent: Provides methods to suggest meals and snacks based on user preferences.
- Orchestrator: Integrates these agents and uses the Azure OpenAI service to handle user requests and generate responses.
Code for food_agent.py, meal_suggestion_agent.py remains the same.
orchestrator.py
- __init__: Initializes the Orchestrator with instances of FoodAgent, MealSuggestionAgent, and AzureOpenAI.
- handle_request: Handles user input, constructs a prompt, and sends it to the Azure OpenAI service to generate a response.
```python
from food_agent import FoodAgent
from meal_suggestion_agent import MealSuggestionAgent
from openai import AzureOpenAI
import os

# Set environment variables (replace with actual values)
os.environ["AZURE_OPENAI_ENDPOINT"] = "<Your EndPoint>"
os.environ["AZURE_OPENAI_API_KEY"] = "<Your Key>"
os.environ["OPENAI_API_VERSION"] = "2023-06-01-preview"  # Set the API version

class Orchestrator:
    def __init__(self):
        self.food_agent = FoodAgent()
        self.meal_suggestion_agent = MealSuggestionAgent()
        self.azure_openai = AzureOpenAI(
            azure_endpoint=os.getenv("AZURE_OPENAI_ENDPOINT"),
            api_key=os.getenv("AZURE_OPENAI_API_KEY"),
            api_version="2023-06-01-preview"
        )
        self.model_id = "gpt-35-turbo"

    def handle_request(self, user_input):
        prompt = f"User input: {user_input}\n\n"
        response = self.azure_openai.completions.create(
            model=self.model_id,
            prompt=prompt,
            max_tokens=150
        )
        return response.choices[0].text.strip()
```
main.py
main: Creates an instance of Orchestrator, prompts the user for input, and handles the input until the user exits.
```python
from orchestrator import Orchestrator

def main():
    orchestrator = Orchestrator()
    while True:
        user_input = input("User > ")
        if user_input.lower() in ["exit", "quit"]:
            break
        response = orchestrator.handle_request(user_input)
        print(f"Assistant > {response}")

if __name__ == "__main__":
    main()
```

Run the program
Run the main.py file and ask questions.
Example 1: Food Information

Example 2: Meal Suggestion (Vegetarian)

Example 3: Meal Suggestion (Low-Carb)
User Prompt:
User > suggest meal low-carb
Program Response:
Assistant > How about a grilled chicken salad with a variety of fresh greens?
Example 4: Snack Suggestion
User Prompt:
User > suggest snack
Program Response:
Assistant > How about some fresh fruit or a handful of nuts for a healthy snack?
Example 5: ChatGPT Query
User Prompt:
User > chatgpt What is the capital of France?
Program Response:
Assistant > The capital of France is Paris.
Explanation of the Code Flow
- FoodAgent and MealSuggestionAgent Classes:
- These classes simulate fetching food information and suggesting meals/snacks based on user preferences.
- Orchestrator Class:
- This class initializes instances of FoodAgent and MealSuggestionAgent.
- It handles user requests by passing the entire input to the ChatGPT model, which determines the appropriate response based on the provided prompt.
- Main Function:
- The main.py file contains the main function that creates an instance of Orchestrator and enters a loop to continuously accept user input and provide responses.
By using the ChatGPT model, you can simplify the logic and let the model handle the complexity of understanding and responding to user inputs. This approach leverages the model’s natural language processing capabilities to provide more flexible and intelligent responses.
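One caveat: the orchestrator above uses the legacy completions API. Newer chat model deployments (for example, gpt-4o) expose only the chat completions API, so a variant of handle_request would look like the sketch below. The deployment name, API version, and system prompt are assumptions; check your own Azure OpenAI resource:

```python
import os

def build_messages(user_input: str) -> list:
    # Keeping prompt construction separate makes it testable without a network call.
    return [
        {"role": "system", "content": "You are a food and meal assistant."},  # assumed system prompt
        {"role": "user", "content": user_input},
    ]

def handle_request_chat(user_input: str) -> str:
    from openai import AzureOpenAI  # requires the openai package (v1+)
    client = AzureOpenAI(
        azure_endpoint=os.getenv("AZURE_OPENAI_ENDPOINT"),
        api_key=os.getenv("AZURE_OPENAI_API_KEY"),
        api_version="2024-02-01",  # assumption: a GA chat-capable API version
    )
    response = client.chat.completions.create(
        model="gpt-4o",            # assumption: your deployment name
        messages=build_messages(user_input),
        max_tokens=150,
    )
    return response.choices[0].message.content.strip()
```

Because build_messages is pure, you can unit test the prompt shape without calling the service.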
What’s next:
Next step is to understand various options available for handling the Orchestration layer.
Let’s look at the table below to compare your code with Semantic Kernel (SK) versus without SK.
| Feature/Aspect | With Semantic Kernel (SK) | Without Semantic Kernel (SK) |
| --- | --- | --- |
| Orchestration | Handled by Semantic Kernel | Manually handled by the Orchestrator class |
| Agent Registration | Agents are registered using kernel.Plugins.AddFromType<AgentName>(“AgentName”) | Agents are instantiated and managed manually in the Orchestrator class |
| AI Model Integration | Integrated via Kernel.CreateBuilder().AddAzureOpenAIChatCompletion(modelId, endpoint, apiKey) | Integrated manually using OpenAI API in the Orchestrator class |
| Logging | Can be enabled using builder.Services.AddLogging(…) | Logging needs to be implemented manually if required |
| Conversation History | Managed by Semantic Kernel | Managed manually using a list in the Orchestrator class |
| Function Execution Settings | Configured using OpenAIPromptExecutionSettings | Not applicable; function execution settings need to be managed manually |
| Agent Communication | Handled by Semantic Kernel | Handled manually by the Orchestrator class |
| Code Complexity | Simplified by Semantic Kernel | More complex due to manual orchestration and management |
| Extensibility | Easily extensible by adding new plugins to the kernel | Requires manual updates to the Orchestrator class to add new agents or functionalities |
| Example Code | kernel.Plugins.AddFromType<FoodAgent>(“FoodAgent”); | self.food_agent = FoodAgent() |
| AI Response Handling | var result = await chatCompletionService.GetChatMessageContentAsync(…) | response = openai.Completion.create(…) |
In the next blog, we will cover getting started with Semantic Kernel and creating a multi-agent system with it.
Conclusion
It’s time to wrap things up!
In conclusion, multi-agent systems offer a robust and scalable solution for complex tasks that require specialized expertise and sophisticated coordination. By leveraging multiple agents, these systems can achieve parallel processing, enhance fault tolerance, and improve collaboration, making them ideal for high-demand projects.
The development of multi-agent systems can be approached through various methods, including manual orchestration and the use of frameworks like Semantic Kernel, AutoGen, and LangChain. Each method offers unique advantages and is suited to different use cases, allowing developers to choose the best approach for their specific needs.
I hope you enjoyed this post and found it helpful in getting started with creating your first multi-agent 😊
As the field of artificial intelligence continues to evolve, multi-agent systems will play a crucial role in driving innovation and efficiency across various industries.
Connect with me on LinkedIn: Rajeev Singh | LinkedIn, and don’t forget to like, comment, and repost to maximize the reach of this post!
References:
Get Started with Multi-agent Applications Using Azure OpenAI | Microsoft Learn
azureai-samples/scenarios/Assistants/multi-agent/README.md at main · Azure-Samples/azureai-samples
note: Copilot, GitHub Copilot and AI have been used to create this blog.
