
Welcome back! As we have discussed many times, the Microsoft Graph API gives you access to data and services across Microsoft 365, Azure, and Windows. It exposes a common interface for resources such as users, groups, mail, calendars, files, and more, which you can query, create, update, and delete. Sometimes, however, you need to perform multiple actions on different resources in one operation, or deal with dependencies between them. For example, you may want to create a new user and assign them a license, or update a file and send a notification. In these cases, sending several separate requests to the Graph API can be inefficient and expensive, as each request adds latency and counts against your quota.

Fortunately, Microsoft Graph API offers a batch endpoint that allows you to combine multiple requests into a single JSON payload and send it to the server as a POST request. The batch endpoint then processes each request individually and returns a JSON response with the results. This way, you can reduce the number of HTTP connections, improve performance, and simplify your code. In this blog post, we will explore some advanced strategies for using the batch endpoint, such as batching dependent requests, handling errors, and throttling.

Batching dependent requests

One of the challenges of using the batch endpoint is handling dependencies between requests. For example, if you want to create a new user and assign them a license, you need to wait for the user creation to succeed before you can use the user ID to assign the license. However, the batch endpoint does not guarantee the order of execution of the requests, nor does it support referencing the results of one request in another. So how can you handle such scenarios?

The solution is to use the dependsOn property in your batch requests. This property lets you specify an array of request IDs that must complete successfully before the current request is executed. The request IDs are the values of the id property that you assign to each request in the batch payload. Note that dependsOn controls only the order of execution; it does not let one request reference values from another request's response, so the dependent request must address the new user by an identifier you already know, such as the userPrincipalName set in the first request. For example, the following batch payload creates a new user and then assigns them a license:

import requests
import json

# Endpoint for the batch request
batch_url = "https://graph.microsoft.com/v1.0/$batch"

# Replace with your actual access token
access_token = "YOUR_ACCESS_TOKEN"

# Headers for the batch request
headers = {
    "Authorization": f"Bearer {access_token}",
    "Content-Type": "application/json"
}

# Placeholder UPN for the new user; replace the domain with your tenant's
user_principal_name = "rezwanur.rahman@contoso.com"

# Batch payload: request 2 depends on request 1 and addresses the new
# user by the userPrincipalName chosen above
batch_payload = {
    "requests": [
        {
            "id": "1",
            "method": "POST",
            "url": "/users",
            "headers": {
                "Content-Type": "application/json"
            },
            "body": {
                "accountEnabled": True,
                "displayName": "Rezwanur Rahman",
                "mailNickname": "rezwanur.rahman",
                "userPrincipalName": user_principal_name,
                "passwordProfile": {
                    "forceChangePasswordNextSignIn": True,
                    "password": "P@ssw0rd"
                }
            }
        },
        {
            "id": "2",
            "method": "POST",
            "url": f"/users/{user_principal_name}/assignLicense",
            "headers": {
                "Content-Type": "application/json"
            },
            "body": {
                "addLicenses": [
                    {
                        "disabledPlans": [],
                        "skuId": "sku-id-of-the-license"
                    }
                ],
                "removeLicenses": []
            },
            "dependsOn": ["1"]
        }
    ]
}

# Send the batch request
response = requests.post(batch_url, headers=headers, data=json.dumps(batch_payload))

# Check the response
if response.status_code == 200:
    print("Batch request successful.")
else:
    print("Failed to execute batch request.")
    print(f"Status code: {response.status_code}")


The batch endpoint will execute the request with ID 1 first, and only then execute the request with ID 2, which addresses the new user by the userPrincipalName set in the first request. If the request with ID 1 fails, the request with ID 2 will not be executed and will return a 424 Failed Dependency status. Note that you can use the dependsOn property to build more complex dependency graphs, as long as they contain no circular dependencies.
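As a sketch (assuming you have already parsed the batch response body from JSON into a dictionary), you can summarize the outcome of each individual request, including dependents skipped with 424 Failed Dependency:

```python
def summarize_batch(body):
    """Map each request id to a human-readable outcome.

    `body` is the parsed JSON body of a batch response; status 424
    (Failed Dependency) marks requests skipped because a dependsOn
    target failed.
    """
    summary = {}
    for resp in body.get("responses", []):
        status = resp["status"]
        if status == 424:
            summary[resp["id"]] = "skipped (failed dependency)"
        elif 200 <= status < 300:
            summary[resp["id"]] = "succeeded"
        else:
            summary[resp["id"]] = f"failed ({status})"
    return summary
```

This makes it easy to decide which requests to retry and which to fix before resubmitting.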

Handling errors

Another challenge of using the batch endpoint is handling errors. Since the batch endpoint processes each request individually, it is possible that some requests succeed and some fail. For example, if you send a batch request with 10 requests, and one of them fails due to a validation error, the batch endpoint will return a 200 OK response with a JSON payload that contains the results of each request, including the error details for the failed request. However, if the batch request itself is invalid, such as having a malformed JSON payload or exceeding the size limit, the batch endpoint will return a 400 Bad Request response with an error object that describes the problem.

Therefore, when you use the batch endpoint, you need to check the status code of both the batch response and the individual responses. If the batch response status code is not 200 OK, you need to handle the error at the batch level. If the batch response status code is 200 OK, you need to iterate over the individual responses and check their status codes. If any of the individual responses has a status code that is not 2xx, you need to handle the error at the request level. For example, the following code snippet shows how to handle errors in C# using the Microsoft Graph SDK:

using Microsoft.Graph;
using System;
using System.Net.Http;
using System.Threading.Tasks;

class Program
{
    static async Task Main()
    {
        // Initialize the GraphServiceClient (assuming it is already authenticated)
        GraphServiceClient graphClient = new GraphServiceClient(/* Authentication Provider */);

        // Create a batch request content object
        var batchRequestContent = new BatchRequestContent();

        // Add requests to the batch request content
        // Request 1: Get user profile
        batchRequestContent.AddBatchRequestStep(new BatchRequestStep("1",
            new HttpRequestMessage(HttpMethod.Get, "https://graph.microsoft.com/v1.0/me")));
        // Request 2: Get root drive items
        batchRequestContent.AddBatchRequestStep(new BatchRequestStep("2",
            new HttpRequestMessage(HttpMethod.Get, "https://graph.microsoft.com/v1.0/me/drive/root/children")));
        // Request 3: Get user messages
        batchRequestContent.AddBatchRequestStep(new BatchRequestStep("3",
            new HttpRequestMessage(HttpMethod.Get, "https://graph.microsoft.com/v1.0/me/messages")));

        try
        {
            // Send the batch request; a failure here is a batch-level error
            var batchResponse = await graphClient.Batch.Request().PostAsync(batchRequestContent);

            // Iterate over the individual responses in the batch
            var responses = await batchResponse.GetResponsesAsync();
            foreach (var response in responses)
            {
                if (response.Value.IsSuccessStatusCode)
                {
                    // Handle the successful response
                    var content = await response.Value.Content.ReadAsStringAsync();
                    Console.WriteLine($"Request {response.Key} succeeded.");
                }
                else
                {
                    // Handle the failed response
                    var error = await response.Value.Content.ReadAsStringAsync();
                    Console.WriteLine($"Request {response.Key} failed: {error}");
                }
            }
        }
        catch (ServiceException ex)
        {
            // Handle the batch-level error
            Console.WriteLine($"Batch Error: {ex.Message}");
        }
    }
}


Note that the batch endpoint does not guarantee that the individual responses appear in the same order as the requests in the payload. For this reason, you should always correlate responses with their requests by the id property rather than by their position in the responses array.
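Since each individual response carries the id of the request it answers, a small helper (a sketch; the requests and body here are plain dictionaries as produced by JSON parsing) can restore the original request order:

```python
def order_responses(requests, body):
    """Return individual responses in the same order as the requests.

    `requests` is the list from the batch payload's "requests" key;
    `body` is the parsed JSON body of the batch response. Missing
    responses yield None in that position.
    """
    by_id = {resp["id"]: resp for resp in body.get("responses", [])}
    return [by_id.get(req["id"]) for req in requests]
```

With this, index i of the result always corresponds to the i-th request you sent, regardless of the order the server returned them in.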


Throttling

The last challenge of using the batch endpoint is throttling. Throttling is a mechanism that the Graph API uses to limit the number of requests that a client can send in a certain period of time, to prevent overloading the service and ensure fair usage. Throttling can occur at different levels, such as the tenant level, the application level, or the service level. When a client exceeds the throttling limit, the Graph API returns a 429 Too Many Requests response with a Retry-After header that indicates how long the client should wait before retrying the request.

When you use the batch endpoint, you need to be aware of how throttling affects your requests. Since the batch endpoint combines multiple requests into a single request, it can help you reduce the chance of hitting the throttling limit at the tenant or application level. However, it does not exempt you from the throttling limit at the service level, as each request in the batch payload still counts as a separate request for the target service. For example, if you send a batch request with 10 requests to the /users endpoint, which has a limit of 15 requests per 10 seconds, you will consume 10 of the 15 available requests, and if you send another batch request with 10 requests to the same endpoint within 10 seconds, you will likely get throttled.
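One simple mitigation is to keep each payload small and send a large workload as a series of batches. A minimal sketch (the cap of 20 requests per payload is the documented JSON batching limit; how you pace the batches is up to you):

```python
def chunk_into_batches(requests, batch_size=20):
    """Yield batch payloads containing at most batch_size requests each.

    `requests` is a list of individual request dictionaries (id, method,
    url, ...); 20 is the maximum the $batch endpoint accepts per payload.
    """
    for start in range(0, len(requests), batch_size):
        yield {"requests": requests[start:start + batch_size]}
```

Each yielded payload can then be POSTed to the $batch endpoint, with a pause between batches if the target service has a tight per-window limit.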

Therefore, when you use the batch endpoint, you need to follow some best practices to avoid or handle throttling. Some of these best practices are:

  • Limit the number of requests in a batch payload to a reasonable amount. The batch endpoint allows you to include up to 20 requests in a batch payload, but that does not mean you should always use the maximum number. Depending on the target service and the throttling limit, you may want to use a smaller number to avoid exhausting your quota.  
  • Use the Retry-After header to implement a retry policy. If you receive a 429 response from the batch endpoint or from an individual request, you should wait for the number of seconds specified in the Retry-After header before retrying the request. You can use a back-off strategy to increase the waiting time if you receive consecutive 429 responses.  
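
As a sketch of the second practice, the retry loop below honors the Retry-After header and falls back to a doubling delay. The `send` callable is a stand-in (an assumption made for testability) for your actual `requests.post` call to the batch endpoint:

```python
import time

def post_with_retry(send, max_retries=5):
    """Call send() until it returns a non-429 response, honoring Retry-After.

    `send` is any zero-argument callable returning an object with
    `status_code` and `headers` attributes, e.g.
    lambda: requests.post(batch_url, headers=headers, json=payload).
    """
    delay = 1  # fallback wait in seconds, doubled on consecutive 429s
    response = None
    for _ in range(max_retries):
        response = send()
        if response.status_code != 429:
            break
        # Prefer the server-provided Retry-After value when present
        wait = int(response.headers.get("Retry-After", delay))
        time.sleep(wait)
        delay *= 2
    return response
```

Because individual requests inside a 200 OK batch response can also be throttled, you can combine this with a per-request check and resubmit only the throttled requests in a follow-up batch.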

In this blog post, we have learned some advanced strategies for using the Microsoft Graph API batch endpoint, such as batching dependent requests, handling errors, and throttling. By using these strategies, you can optimize your requests and reduce network latency, while accessing data and services across Microsoft 365, Azure, and Windows platforms.

About Author

Rezwanur Rahman

Rezwanur Rahman is a Microsoft Graph MVP based in Innsbruck, Austria. He is a former Microsoft employee at Microsoft Bangladesh and a former Microsoft Technical Support Lead for Microsoft 365 Global Support. He is a software engineering graduate and currently contributes technical knowledge on Microsoft Copilot and ChatGPT.