Polly v8 retry with Azure Service Bus and 3rd party API

I'd like to kindly ask for help with implementing a retry strategy using Polly v8 in combination with Azure Service Bus and a 3rd party API.

Setup

Messages are received from an ASB subscription. The receiving module is out of my control; I only provide a callback that is executed for every message received. AFAIK a ServiceBusProcessor is used in the background with the following parameters (a minimal sketch of an equivalent configuration follows the list):

  • AutoCompleteMessages = false
  • MaxAutoLockRenewalDuration = 10 minutes
  • MaxConcurrentCalls = 10
  • PrefetchCount = 0
  • ReceiveMode = peek lock
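
For reference, here is a minimal sketch of a processor configured with these parameters; the connection string and entity names are placeholders, and the real receiver is the blackboxed module mentioned above:

using Azure.Messaging.ServiceBus;

var client = new ServiceBusClient("<connection-string>");
var processor = client.CreateProcessor("<topic>", "<subscription>", new ServiceBusProcessorOptions
{
    AutoCompleteMessages = false,                          // messages must be completed explicitly
    MaxAutoLockRenewalDuration = TimeSpan.FromMinutes(10), // locks are auto-renewed for up to 10 minutes
    MaxConcurrentCalls = 10,                               // up to 10 callbacks run in parallel
    PrefetchCount = 0,                                     // no prefetching
    ReceiveMode = ServiceBusReceiveMode.PeekLock           // abandoned messages are redelivered
});
processor.ProcessMessageAsync += args => Task.CompletedTask; // my callback is plugged in here
processor.ProcessErrorAsync += args => Task.CompletedTask;
await processor.StartProcessingAsync();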

When a message is received, a 3rd party API has to be called to finish processing the message. If the API is not responding or returns status code 500, an exception is thrown, and as a result ASB makes another delivery attempt. After 5 unsuccessful attempts the message ends up in the dead-letter queue.
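
Presumably the subscription is created with MaxDeliveryCount = 5, which is what sends the message to the dead-letter queue after the 5th failed attempt. A sketch of how such a subscription would be created (again, names are placeholders):

using Azure.Messaging.ServiceBus.Administration;

var admin = new ServiceBusAdministrationClient("<connection-string>");
await admin.CreateSubscriptionAsync(new CreateSubscriptionOptions("<topic>", "<subscription>")
{
    MaxDeliveryCount = 5 // after 5 failed deliveries the message is dead-lettered
});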

Without retry

I have ASB send 3 messages at once, and I assume they're processed in parallel since MaxConcurrentCalls is 10; the log timestamps support this assumption. The simulated API always makes the message processing fail. The log screenshot shows how the messages are received by the callback method, and I don't see anything wrong with it, except maybe for the 10 second gap right before the 5th delivery attempt - see the red line in the picture below. I don't know what causes it, maybe the blackboxed receiver, and I don't really care. Also, the 5th delivery attempt for message 3 didn't happen for some reason.

Log entries without retry strategy

Goal

I would like to add a retry strategy for the case when the API isn't available, and perhaps also a circuit breaker to stop calling the API completely. Maybe it's redundant considering the repeated delivery from ASB, but I'm not the one pushing for it. The retry delay is TBD, so I use 3 seconds for now. The other Polly retry settings are 1 retry attempt, constant backoff, and jitter.

I would expect the following behavior:
ASB makes the 1st delivery attempt of 3 messages. They are quickly processed, the API call fails, so the retry strategy waits 3 seconds and calls the API again, which fails as well. With only 1 retry the message is not processed, and ASB makes the 2nd delivery attempt, etc. The whole process shouldn't take much more than 3 seconds per delivery attempt and 15 seconds for all 5 delivery attempts (maybe plus the extra 10 second pause before the 5th delivery attempt, so 25 seconds tops).

Problem

What I get instead is shown in the log screenshot below. The 1st delivery attempt is as usual, but then the retries are executed one-by-one for each message. I would expect the retries to run in parallel. The way retry works for me delays everything horribly and is unusable considering we occasionally get quite a few messages from ASB. Applying a circuit breaker makes no sense, because it's impossible for it to even kick in when the failing API is called once every 3 seconds. The log screenshot is not complete, but the total processing time for all 5 delivery attempts is around 50 seconds.

Note: The "Event handler..." entries are created by the callback method when message processing starts. The "Retry attempt..." entries are created in the OnRetry event handler provided to Polly.

Log entries with retry strategy

Question

So my question is: what is happening here? Frankly, I have no clue. I looked into the Polly source code, but I don't know what to look for. Maybe this is the way it is supposed to work, but it doesn't seem very useful to me. And in the first place - is this even the way to go?

Thanks in advance.

Pseudocode for the message processing callback:

protected override async Task CallbackMethod(MessageClass message)
{
    log.LogWarning("Event handler message processing start, message id: {messageId}", message.Id);
    var dataFromApi = await _someApi.GetImportantValue(message.Data); // always fails
    FinishMessageProcessing(message, dataFromApi);
}
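
For context, a hypothetical sketch of what _someApi.GetImportantValue does internally - the class shape and endpoint path are my assumptions here, the important part is that it resolves the named HttpClient so the resilience handler below applies:

using System;
using System.Net.Http;
using System.Threading.Tasks;

public class OnlineApi
{
    public const string OnlineHttpClientName = "online-api"; // assumed value

    private readonly IHttpClientFactory _httpClientFactory;

    public OnlineApi(IHttpClientFactory httpClientFactory) => _httpClientFactory = httpClientFactory;

    public async Task<string> GetImportantValue(string data)
    {
        // The named client carries the resilience handler, so the retry wraps this call.
        var client = _httpClientFactory.CreateClient(OnlineHttpClientName);
        using var response = await client.GetAsync($"values/{Uri.EscapeDataString(data)}");
        response.EnsureSuccessStatusCode(); // the simulated 500 ends up throwing here
        return await response.Content.ReadAsStringAsync();
    }
}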

Resiliency pipeline configuration:

services.AddHttpClient(OnlineApi.OnlineHttpClientName, httpClient =>
{
    httpClient.BaseAddress = new Uri("https://always-return-500.xyz");
})
    // custom event handler, adds bearer token to http request header, must stay
    .AddHttpMessageHandler<AuthorizationMessageHandler>()
    .AddResilienceHandler("retry", (builder, context) =>
    {
        builder.AddRetry(new RetryStrategyOptions<HttpResponseMessage>
        {
            ShouldHandle = new PredicateBuilder<HttpResponseMessage>()
                .Handle<Exception>(x => x is not OperationCanceledException)
                .HandleResult(x => x.StatusCode >= HttpStatusCode.InternalServerError),
            BackoffType = DelayBackoffType.Constant,
            Delay = TimeSpan.FromSeconds(3),
            MaxDelay = TimeSpan.FromSeconds(3),
            MaxRetryAttempts = 1,
            Name = "constant-retry",
            UseJitter = true,
            OnRetry = args =>
            {
                GetLog(context)?.LogError("Retry attempt {attemptNo}, outcome: {outcome}", args.AttemptNumber, args.Outcome);
                return ValueTask.CompletedTask;
            }
        });
    });
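
As for the circuit breaker mentioned under Goal, my idea would be to chain it after the retry in the same handler. A sketch with placeholder thresholds (not what I have running, just what I'd try):

builder.AddCircuitBreaker(new CircuitBreakerStrategyOptions<HttpResponseMessage>
{
    ShouldHandle = new PredicateBuilder<HttpResponseMessage>()
        .Handle<Exception>(x => x is not OperationCanceledException)
        .HandleResult(x => x.StatusCode >= HttpStatusCode.InternalServerError),
    FailureRatio = 0.5,                          // open when half of the sampled calls fail...
    MinimumThroughput = 10,                      // ...and at least 10 calls were sampled
    SamplingDuration = TimeSpan.FromSeconds(30),
    BreakDuration = TimeSpan.FromSeconds(15)     // stop calling the API for 15 seconds
});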
