
Retry backoff example #881

Closed
wants to merge 4 commits into from

Conversation

taras (Member) commented Jan 8, 2024

Motivation

As part of #869, I wanted to convert an example that @thr1ve shared in Discord to Structured Concurrency with Effection. The requirements were to:

  • only retry if the error matches specific error/s (otherwise fail)
  • begin with a delay of startDelay ms between retries
  • double the delay each retry attempt up to a max of maxDelay ms
  • retry a maximum of 5 times

This is what it looks like in Effect.ts

const startDelay = 200
const maxDelay = 500

export const retryPolicy = pipe(
  Schedule.intersect(
    Schedule.linear(startDelay).pipe(Schedule.union(Schedule.spaced(maxDelay))),
    Schedule.recurs(5)
  ),
  Schedule.whileInput(
    (error: ParsedArangoError) =>
      error._tag === 'LockTimeout' ||
      error._tag === 'ConflictDetected' ||
      error._tag === 'TransactionNotFound'
  )
);

It was pretty straightforward and fun to implement in Effection.

The logic is this:

  1. Call the operation; if we get an unknown error, throw, which stops the operation.
  2. Otherwise, start retrying up to 5 times, doubling the delay before each subsequent attempt.
  3. To add a maxDelay, I created a timeout function that I'm racing against the retry mechanism.

This is what it looks like in Effection

import { type Operation, race, sleep } from "effection";

export function* retryBackoffExample<T>(
  op: () => Operation<T>,
  { attempts = 5, startDelay = 5, maxDelay = 200 } = {},
): Operation<T | undefined> {
  try {
    return yield* op();
  } catch (e) {
    if (!isKnownError(e)) {
      throw e;
    }
  }
  
  let lastError: Error | undefined;
  function* retry() {
    let delay = startDelay;
    for (let attempt = 0; attempt < attempts; attempt++) {
      yield* sleep(delay);
      delay = delay * 2;

      try {
        return yield* op();
      } catch (e) {
        if (isKnownError(e)) {
          lastError = e;
          if (attempt + 1 < attempts) {
            continue;
          }
        }
        throw e;
      }
    }
  }

  function* timeout(): Operation<undefined> {
    yield* sleep(maxDelay);

    throw lastError || new Error('Timeout');
  }

  return yield* race([retry(), timeout()]);
}

It's longer, but it doesn't require tears.
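For context, a hypothetical call site could look like this (fetchRecord is just a placeholder for whatever fallible operation is being retried, and the import path assumes the examples/retry-backoff.ts file from this PR):

import { main } from "effection";
import { retryBackoffExample } from "./retry-backoff";

// fetchRecord is a placeholder for any operation that can fail with one of
// the retryable errors (e.g. a lock timeout from the database driver).
await main(function* () {
  let record = yield* retryBackoffExample(() => fetchRecord());
  console.log(record);
});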

Approach

  1. Created examples directory with retry-backoff.ts example.
  2. Created a test file

TODO:

  • I don't know how to test the backoff mechanism without module stubbing, which I really don't want to do (one possible workaround is sketched below).
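One rough idea (purely a sketch of an assumption, not something this PR implements): let retryBackoffExample accept a hypothetical wait option that defaults to Effection's sleep, so a test can pass a recording version and assert on the delays it was asked for instead of stubbing the module.

import { sleep, type Operation } from "effection";

// hypothetical recording replacement for sleep: it captures each requested
// delay and yields control without actually waiting.
let delays: number[] = [];

function* recordingWait(ms: number): Operation<void> {
  delays.push(ms);
  yield* sleep(0);
}

A test would then call retryBackoffExample(op, { wait: recordingWait }) and assert that the recorded delays double on every attempt, without touching real timers.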

cowboyd (Member) commented Jan 8, 2024

I'd omit the race since it can just be part of the flow of the operation:

import { type Operation, sleep, spawn } from "effection";

export function* retryBackoffExample<T>(
  op: () => Operation<T>,
  { attempts = 5, startDelay = 5, maxDelay = 200 } = {},
): Operation<T> {

  yield* spawn(function* () {
    yield* sleep(maxDelay);
    throw new Error('timeout');
  });

  let currentDelay = startDelay;
  let lastError: Error | undefined;
  for (let attempt = 0; attempt < attempts; attempt++) {
    try {
      return yield* op();
    } catch (e) {
      if (!isKnownError(e)) {
        throw e;
      } else {
        lastError = e;
        yield* sleep(currentDelay);
        currentDelay = currentDelay * 2;
      }
    }
  }
  throw lastError;
}

taras (Member, Author) commented Jan 8, 2024

I initially wrote it this way, but it didn't match the requirements:

  1. The maxDelay would apply to the entire operation, not only the error handling
  2. Retry attempts would include the initial operation

cowboyd (Member) commented Jan 8, 2024

Ah, ok. I misunderstood then. There is no timeout on the operation as a whole, just a cap on the delay between retries. In that case, I think it can be expressed even more succinctly with a single loop:

import { Operation, sleep } from "effection";

export function* retryBackoffExample<T>(
  op: () => Operation<T>,
  { attempts = 5, startDelay = 5, maxDelay = 200 } = {},
): Operation<T> {
  
  let current = { attempt: 0, delay: startDelay, error: new Error() };

  while (current.attempt < attempts) {
    try {
      return yield* op();
    } catch (error) {
      if (!isKnownError(error)) {
        throw error;
      } else {
        yield* sleep(current.delay);
        current.attempt++;
        current.error = error;
        current.delay = Math.min(current.delay * 2, maxDelay);
      }
    }
  }
  throw current.error;
}

If we wanted to, we could easily imagine writing a withTimeout() to limit the time allowed for the entire operation. Decoupling total operation timeout from retry logic seems like the proper factoring.
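A minimal sketch of what that hypothetical withTimeout() could look like (it is not part of Effection's API; this just composes race and sleep, and the names are made up):

import { type Operation, race, sleep } from "effection";

// hypothetical combinator: race the wrapped operation against a deadline
// and fail if the deadline elapses first.
export function* withTimeout<T>(
  op: () => Operation<T>,
  limitMillis: number,
): Operation<T> {
  return yield* race([op(), deadline(limitMillis)]);
}

function* deadline(afterMillis: number): Operation<never> {
  yield* sleep(afterMillis);
  throw new Error("timeout");
}

The total timeout then stays at the call site, e.g. yield* withTimeout(() => retryBackoffExample(op), 1000), and the retry logic never has to know about it.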

cowboyd (Member) commented Mar 4, 2024

@taras

What about this? Shorter and no tears.

import { call, type Operation, race, sleep } from "effection";

export function* retryBackoffExample<T>(
  op: () => Operation<T>,
  { maxAttempts = 5, startDelay = 5, maxDelay = 200 } = {},
): Operation<T> {
  return yield* race([
    timeout(maxDelay),
    call(function* () {
      let attempt = 0;
      for (let delay = startDelay;; delay = delay * 2) {
        try {
          return yield* op();
        } catch (error) {
          if (!isKnownError(error) || attempt++ >= maxAttempts) {
            throw error;
          }
          yield* sleep(delay);
        }
      }
    }),
  ]);
}

function* timeout(afterMillis: number): Operation<never> {
  yield* sleep(afterMillis);
  throw new Error("timeout");
}

taras (Member, Author) commented Dec 16, 2024

We need to move this to contrib.

cowboyd closed this Dec 17, 2024
cowboyd (Member) commented Dec 17, 2024

Moved to thefrontside/effection-contrib#30
