LICENSE
=======

The MIT License (MIT)

Copyright (c) 2014 litl, LLC.

Permission is hereby granted, free of charge, to any person obtaining a
copy of this software and associated documentation files (the
"Software"), to deal in the Software without restriction, including
without limitation the rights to use, copy, modify, merge, publish,
distribute, sublicense, and/or sell copies of the Software, and to
permit persons to whom the Software is furnished to do so, subject to
the following conditions:

The above copyright notice and this permission notice shall be included
in all copies or substantial portions of the Software.

THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS
OR IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF
MERCHANTABILITY, FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT.
IN NO EVENT SHALL THE AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY
CLAIM, DAMAGES OR OTHER LIABILITY, WHETHER IN AN ACTION OF CONTRACT,
TORT OR OTHERWISE, ARISING FROM, OUT OF OR IN CONNECTION WITH THE
SOFTWARE OR THE USE OR OTHER DEALINGS IN THE SOFTWARE.

README.rst
==========

backoff
=======

.. image:: https://travis-ci.org/litl/backoff.svg?branch=master
    :target: https://travis-ci.org/litl/backoff?branch=master

.. image:: https://coveralls.io/repos/litl/backoff/badge.svg?branch=master
    :target: https://coveralls.io/r/litl/backoff?branch=master

.. image:: https://img.shields.io/pypi/v/backoff.svg
    :target: https://pypi.python.org/pypi/backoff

**Function decoration for backoff and retry**

This module provides function decorators which can be used to wrap a
function such that it will be retried until some condition is met. It
is meant to be of use when accessing unreliable resources with the
potential for intermittent failures i.e. network resources and
external APIs. Somewhat more generally, it may also be of use for
dynamically polling resources for externally generated content.
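Conceptually, the decorators wrap the target function in a retry loop. The following standalone sketch illustrates the pattern with a plain stdlib implementation; it is a simplified illustration only, not backoff's actual API (the real decorators, shown in the Examples section, add jitter, give-up conditions, and event handlers):

```python
import functools
import time


def retry_on_exception(exc_type, max_tries=3, base=2):
    # Simplified standalone sketch of a retry decorator: call the
    # target, and on the given exception type sleep an exponentially
    # growing interval before trying again, up to max_tries attempts.
    def decorate(target):
        @functools.wraps(target)
        def wrapper(*args, **kwargs):
            for tries in range(1, max_tries + 1):
                try:
                    return target(*args, **kwargs)
                except exc_type:
                    if tries == max_tries:
                        raise
                    # scaled down by 1000x so the demo runs quickly
                    time.sleep(base ** tries * 0.001)
        return wrapper
    return decorate


# A function that fails twice and then succeeds:
calls = {"n": 0}

@retry_on_exception(ValueError, max_tries=3)
def flaky():
    calls["n"] += 1
    if calls["n"] < 3:
        raise ValueError("transient")
    return "ok"
```

Calling ``flaky()`` here succeeds on the third attempt after two short backoff sleeps.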
Decorators support both regular functions for synchronous code and
asyncio's coroutines for asynchronous code.

Examples
========

Since Kenneth Reitz's requests module has become a de facto standard
for synchronous HTTP clients in Python, the networking examples below
are written using it, but it is in no way required by the backoff
module.

@backoff.on_exception
---------------------

The ``on_exception`` decorator is used to retry when a specified
exception is raised. Here's an example using exponential backoff when
any ``requests`` exception is raised:

.. code-block:: python

    @backoff.on_exception(backoff.expo,
                          requests.exceptions.RequestException)
    def get_url(url):
        return requests.get(url)

The decorator will also accept a tuple of exceptions for cases where
you want the same backoff behavior for more than one exception type:

.. code-block:: python

    @backoff.on_exception(backoff.expo,
                          (requests.exceptions.Timeout,
                           requests.exceptions.ConnectionError))
    def get_url(url):
        return requests.get(url)

**Give Up Conditions**

Optional keyword arguments can specify conditions under which to give
up.

The keyword argument ``max_time`` specifies the maximum amount of
total time in seconds that can elapse before giving up.

.. code-block:: python

    @backoff.on_exception(backoff.expo,
                          requests.exceptions.RequestException,
                          max_time=60)
    def get_url(url):
        return requests.get(url)

Keyword argument ``max_tries`` specifies the maximum number of calls
to make to the target function before giving up.

.. code-block:: python

    @backoff.on_exception(backoff.expo,
                          requests.exceptions.RequestException,
                          max_tries=8,
                          jitter=None)
    def get_url(url):
        return requests.get(url)

In some cases the raised exception instance itself may need to be
inspected in order to determine if it is a retryable condition. The
``giveup`` keyword arg can be used to specify a function which accepts
the exception and returns a truthy value if the exception should not
be retried:
.. code-block:: python

    def fatal_code(e):
        return 400 <= e.response.status_code < 500

    @backoff.on_exception(backoff.expo,
                          requests.exceptions.RequestException,
                          max_time=300,
                          giveup=fatal_code)
    def get_url(url):
        return requests.get(url)

When a give up event occurs, the exception in question is reraised, so
code calling an ``on_exception``-decorated function may still need to
do exception handling.

@backoff.on_predicate
---------------------

The ``on_predicate`` decorator is used to retry when a particular
condition is true of the return value of the target function. This may
be useful when polling a resource for externally generated content.

Here's an example which uses a Fibonacci sequence backoff when the
return value of the target function is the empty list:

.. code-block:: python

    @backoff.on_predicate(backoff.fibo, lambda x: x == [], max_value=13)
    def poll_for_messages(queue):
        return queue.get()

Extra keyword arguments are passed when initializing the wait
generator, so the ``max_value`` param above is passed as a keyword arg
when initializing the ``fibo`` generator.

When not specified, the predicate param defaults to the falsey test,
so the above can more concisely be written:

.. code-block:: python

    @backoff.on_predicate(backoff.fibo, max_value=13)
    def poll_for_message(queue):
        return queue.get()

More simply, a function which continues polling every second until it
gets a non-falsey result could be defined like this:

.. code-block:: python

    @backoff.on_predicate(backoff.constant, interval=1)
    def poll_for_message(queue):
        return queue.get()

Jitter
------

A jitter algorithm can be supplied with the ``jitter`` keyword arg to
either of the backoff decorators. This argument should be a function
accepting the original unadulterated backoff value and returning its
jittered counterpart.
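For illustration, a custom jitter function is just such a unary callable. The hypothetical ``half_jitter`` below (illustrative only, not part of backoff) always waits at least half of the raw backoff value:

```python
import random


def half_jitter(value):
    # Hypothetical jitter strategy: sleep between 50% and 100% of the
    # raw backoff value, trading some collision avoidance for a
    # guaranteed minimum wait.
    return value / 2 + random.uniform(0, value / 2)
```

It could then be supplied to either decorator as ``jitter=half_jitter``.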
As of version 1.2, the default jitter function ``backoff.full_jitter``
implements the "Full Jitter" algorithm as defined in the AWS
Architecture Blog's `Exponential Backoff And Jitter
<http://www.awsarchitectureblog.com/2015/03/backoff.html>`_ post. Note
that with this algorithm, the time yielded by the wait generator is
actually the *maximum* amount of time to wait.

Previous versions of backoff defaulted to adding some random number of
milliseconds (up to 1s) to the raw sleep value. If desired, this
behavior is now available as ``backoff.random_jitter``.

Using multiple decorators
-------------------------

The backoff decorators may also be combined to specify different
backoff behavior for different cases:

.. code-block:: python

    @backoff.on_predicate(backoff.fibo, max_value=13)
    @backoff.on_exception(backoff.expo,
                          requests.exceptions.HTTPError,
                          max_time=60)
    @backoff.on_exception(backoff.expo,
                          requests.exceptions.TimeoutError,
                          max_time=300)
    def poll_for_message(queue):
        return queue.get()

Runtime Configuration
---------------------

The decorator functions ``on_exception`` and ``on_predicate`` are
generally evaluated at import time. This is fine when the keyword args
are passed as constant values, but suppose we want to consult a
dictionary with configuration options that only become available at
runtime. The relevant values are not available at import time.
Instead, decorator functions can be passed callables which are
evaluated at runtime to obtain the value:

.. code-block:: python

    def lookup_max_time():
        # pretend we have a global reference to 'app' here
        # and that it has a dictionary-like 'config' property
        return app.config["BACKOFF_MAX_TIME"]

    @backoff.on_exception(backoff.expo,
                          ValueError,
                          max_time=lookup_max_time)

Event handlers
--------------

Both backoff decorators optionally accept event handler functions
using the keyword arguments ``on_success``, ``on_backoff``, and
``on_giveup``. This may be useful in reporting statistics or
performing other custom logging.
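The statistics use case can be sketched with a pair of handlers that simply count events. The handler names and the ``stats`` counter here are illustrative, not part of backoff:

```python
import collections

# Illustrative module-level counter for retry statistics.
stats = collections.Counter()


def count_backoff(details):
    # Each handler receives a single dict describing the invocation.
    stats["backoff"] += 1


def count_giveup(details):
    stats["giveup"] += 1
```

These would be attached via ``on_backoff=count_backoff, on_giveup=count_giveup`` on either decorator.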
Handlers must be callables with a unary signature accepting a dict
argument. This dict contains the details of the invocation. Valid keys
include:

* *target*: reference to the function or method being invoked
* *args*: positional arguments to func
* *kwargs*: keyword arguments to func
* *tries*: number of invocation tries so far
* *elapsed*: elapsed time in seconds so far
* *wait*: seconds to wait (``on_backoff`` handler only)
* *value*: value triggering backoff (``on_predicate`` decorator only)

A handler which prints the details of the backoff event could be
implemented like so:

.. code-block:: python

    def backoff_hdlr(details):
        print("Backing off {wait:0.1f} seconds after {tries} tries "
              "calling function {target} with args {args} and kwargs "
              "{kwargs}".format(**details))

    @backoff.on_exception(backoff.expo,
                          requests.exceptions.RequestException,
                          on_backoff=backoff_hdlr)
    def get_url(url):
        return requests.get(url)

**Multiple handlers per event type**

In all cases, iterables of handler functions are also accepted, which
are called in turn. For example, you might provide a simple list of
handler functions as the value of the ``on_backoff`` keyword arg:

.. code-block:: python

    @backoff.on_exception(backoff.expo,
                          requests.exceptions.RequestException,
                          on_backoff=[backoff_hdlr1, backoff_hdlr2])
    def get_url(url):
        return requests.get(url)

**Getting exception info**

In the case of the ``on_exception`` decorator, all ``on_backoff`` and
``on_giveup`` handlers are called from within the except block for the
exception being handled. Therefore exception info is available to the
handler functions via the Python standard library, specifically
``sys.exc_info()`` or the ``traceback`` module.

Asynchronous code
-----------------

Backoff supports asynchronous execution in Python 3.5 and above.

To use backoff in asynchronous code based on asyncio, you simply need
to apply ``backoff.on_exception`` or ``backoff.on_predicate`` to
coroutines.
You can also use coroutines for the ``on_success``, ``on_backoff``,
and ``on_giveup`` event handlers, with the interface otherwise being
identical.

The following example uses the aiohttp asynchronous HTTP client/server
library.

.. code-block:: python

    @backoff.on_exception(backoff.expo, aiohttp.ClientError, max_time=60)
    async def get_url(url):
        async with aiohttp.ClientSession() as session:
            async with session.get(url) as response:
                return await response.text()

Logging configuration
---------------------

Errors and backoff and retry attempts are logged to the 'backoff'
logger. By default, this logger is configured with a NullHandler, so
there will be no output unless you configure a handler.
Programmatically, this might be accomplished with something as simple
as:

.. code-block:: python

    logging.getLogger('backoff').addHandler(logging.StreamHandler())

The default logging level is INFO, which corresponds to logging
anytime a retry event occurs. If you would instead like to log only
when a giveup event occurs, set the logger level to ERROR.

.. code-block:: python

    logging.getLogger('backoff').setLevel(logging.ERROR)

backoff/__init__.py
===================

.. code-block:: python

    # coding:utf-8
    """
    Function decoration for backoff and retry

    This module provides function decorators which can be used to wrap a
    function such that it will be retried until some condition is met. It
    is meant to be of use when accessing unreliable resources with the
    potential for intermittent failures i.e. network resources and
    external APIs. Somewhat more generally, it may also be of use for
    dynamically polling resources for externally generated content.
    For examples and full documentation see the README at
    https://github.com/litl/backoff
    """
    from backoff._decorator import on_predicate, on_exception
    from backoff._jitter import full_jitter, random_jitter
    from backoff._wait_gen import constant, expo, fibo

    __all__ = [
        'on_predicate',
        'on_exception',
        'constant',
        'expo',
        'fibo',
        'full_jitter',
        'random_jitter'
    ]

    __version__ = '1.7.0'

backoff/_async.py
=================

.. code-block:: python

    # coding:utf-8
    import datetime
    import functools

    # Python 3.4 code and syntax is allowed in this module!
    import asyncio
    from datetime import timedelta

    from backoff._common import (_handlers, _init_wait_gen, _log_backoff,
                                 _log_giveup, _maybe_call, _next_wait)


    def _ensure_coroutine(coro_or_func):
        if asyncio.iscoroutinefunction(coro_or_func):
            return coro_or_func
        else:
            return asyncio.coroutine(coro_or_func)


    def _ensure_coroutines(coros_or_funcs):
        return [_ensure_coroutine(f) for f in coros_or_funcs]


    async def _call_handlers(hdlrs, target, args, kwargs, tries, elapsed,
                             **extra):
        details = {
            'target': target,
            'args': args,
            'kwargs': kwargs,
            'tries': tries,
            'elapsed': elapsed,
        }
        details.update(extra)
        for hdlr in hdlrs:
            await hdlr(details)


    def retry_predicate(target, wait_gen, predicate,
                        max_tries, max_time, jitter,
                        on_success, on_backoff, on_giveup,
                        wait_gen_kwargs):
        success_hdlrs = _ensure_coroutines(_handlers(on_success))
        backoff_hdlrs = _ensure_coroutines(_handlers(on_backoff, _log_backoff))
        giveup_hdlrs = _ensure_coroutines(_handlers(on_giveup, _log_giveup))

        # Easy to implement, please report if you need this.
        assert not asyncio.iscoroutinefunction(max_tries)
        assert not asyncio.iscoroutinefunction(jitter)

        assert asyncio.iscoroutinefunction(target)

        @functools.wraps(target)
        async def retry(*args, **kwargs):

            # change names because python 2.x doesn't have nonlocal
            max_tries_ = _maybe_call(max_tries)
            max_time_ = _maybe_call(max_time)

            tries = 0
            start = datetime.datetime.now()
            wait = _init_wait_gen(wait_gen, wait_gen_kwargs)
            while True:
                tries += 1
                elapsed = timedelta.total_seconds(
                    datetime.datetime.now() - start)
                details = (target, args, kwargs, tries, elapsed)

                ret = await target(*args, **kwargs)
                if predicate(ret):
                    max_tries_exceeded = (tries == max_tries_)
                    max_time_exceeded = (max_time_ is not None and
                                         elapsed >= max_time_)

                    if max_tries_exceeded or max_time_exceeded:
                        await _call_handlers(giveup_hdlrs, *details,
                                             value=ret)
                        break

                    seconds = _next_wait(wait, jitter, elapsed, max_time_)

                    await _call_handlers(backoff_hdlrs, *details,
                                         value=ret, wait=seconds)

                    # Note: there is no convenient way to pass explicit event
                    # loop to decorator, so here we assume that either default
                    # thread event loop is set and correct (it mostly is
                    # by default), or Python >= 3.5.3 or Python >= 3.6 is used
                    # where loop.get_event_loop() in coroutine guaranteed to
                    # return correct value.
                    # See for details:
                    #
                    await asyncio.sleep(seconds)
                    continue
                else:
                    await _call_handlers(success_hdlrs, *details, value=ret)
                    break

            return ret

        return retry


    def retry_exception(target, wait_gen, exception,
                        max_tries, max_time, jitter, giveup,
                        on_success, on_backoff, on_giveup,
                        wait_gen_kwargs):
        success_hdlrs = _ensure_coroutines(_handlers(on_success))
        backoff_hdlrs = _ensure_coroutines(_handlers(on_backoff, _log_backoff))
        giveup_hdlrs = _ensure_coroutines(_handlers(on_giveup, _log_giveup))
        giveup = _ensure_coroutine(giveup)

        # Easy to implement, please report if you need this.
        assert not asyncio.iscoroutinefunction(max_tries)
        assert not asyncio.iscoroutinefunction(jitter)

        @functools.wraps(target)
        async def retry(*args, **kwargs):

            # change names because python 2.x doesn't have nonlocal
            max_tries_ = _maybe_call(max_tries)
            max_time_ = _maybe_call(max_time)

            tries = 0
            start = datetime.datetime.now()
            wait = _init_wait_gen(wait_gen, wait_gen_kwargs)
            while True:
                tries += 1
                elapsed = timedelta.total_seconds(
                    datetime.datetime.now() - start)
                details = (target, args, kwargs, tries, elapsed)

                try:
                    ret = await target(*args, **kwargs)
                except exception as e:
                    giveup_result = await giveup(e)
                    max_tries_exceeded = (tries == max_tries_)
                    max_time_exceeded = (max_time_ is not None and
                                         elapsed >= max_time_)

                    if giveup_result or max_tries_exceeded or max_time_exceeded:
                        await _call_handlers(giveup_hdlrs, *details)
                        raise

                    seconds = _next_wait(wait, jitter, elapsed, max_time_)

                    await _call_handlers(backoff_hdlrs, *details,
                                         wait=seconds)

                    # Note: there is no convenient way to pass explicit event
                    # loop to decorator, so here we assume that either default
                    # thread event loop is set and correct (it mostly is
                    # by default), or Python >= 3.5.3 or Python >= 3.6 is used
                    # where loop.get_event_loop() in coroutine guaranteed to
                    # return correct value.
                    # See for details:
                    #
                    await asyncio.sleep(seconds)
                else:
                    await _call_handlers(success_hdlrs, *details)
                    return ret

        return retry

backoff/_common.py
==================

.. code-block:: python

    # coding:utf-8
    import logging
    import sys
    import traceback


    # Use module-specific logger with a default null handler.
    logger = logging.getLogger('backoff')
    logger.addHandler(logging.NullHandler())  # pragma: no cover
    logger.setLevel(logging.INFO)


    # Evaluate arg that can be either a fixed value or a callable.
    def _maybe_call(f, *args, **kwargs):
        return f(*args, **kwargs) if callable(f) else f


    def _init_wait_gen(wait_gen, wait_gen_kwargs):
        kwargs = {k: _maybe_call(v) for k, v in wait_gen_kwargs.items()}
        return wait_gen(**kwargs)


    def _next_wait(wait, jitter, elapsed, max_time):
        value = next(wait)
        try:
            if jitter is not None:
                seconds = jitter(value)
            else:
                seconds = value
        except TypeError:
            # support deprecated nullary jitter function signature
            # which returns a delta rather than a jittered value
            seconds = value + jitter()

        # don't sleep longer than remaining allotted max_time
        if max_time is not None:
            seconds = min(seconds, max_time - elapsed)

        return seconds


    # Create default handler list from keyword argument
    def _handlers(hdlr, default=None):
        defaults = [default] if default is not None else []

        if hdlr is None:
            return defaults

        if hasattr(hdlr, '__iter__'):
            return defaults + list(hdlr)

        return defaults + [hdlr]


    # Default backoff handler
    def _log_backoff(details):
        fmt = "Backing off {0}(...) for {1:.1f}s"
        msg = fmt.format(details['target'].__name__, details['wait'])

        exc_typ, exc, _ = sys.exc_info()
        if exc is not None:
            exc_fmt = traceback.format_exception_only(exc_typ, exc)[-1]
            msg = "{} ({})".format(msg, exc_fmt.rstrip("\n"))
        else:
            msg = "{} ({})".format(msg, details['value'])
        logger.info(msg)


    # Default giveup handler
    def _log_giveup(details):
        fmt = "Giving up {0}(...) after {1} tries"
        msg = fmt.format(details['target'].__name__, details['tries'])

        exc_typ, exc, _ = sys.exc_info()
        if exc is not None:
            exc_fmt = traceback.format_exception_only(exc_typ, exc)[-1]
            msg = "{} ({})".format(msg, exc_fmt.rstrip("\n"))
        else:
            msg = "{} ({})".format(msg, details['value'])
        logger.error(msg)
backoff/_decorator.py
=====================

.. code-block:: python

    # coding:utf-8
    from __future__ import unicode_literals

    import operator
    import sys

    from backoff._jitter import full_jitter
    from backoff import _sync


    def on_predicate(wait_gen, predicate=operator.not_,
                     max_tries=None, max_time=None,
                     jitter=full_jitter,
                     on_success=None, on_backoff=None, on_giveup=None,
                     **wait_gen_kwargs):
        """Returns decorator for backoff and retry triggered by predicate.

        Args:
            wait_gen: A generator yielding successive wait times in
                seconds.
            predicate: A function which when called on the return value of
                the target function will trigger backoff when considered
                truthy. If not specified, the default behavior is to
                backoff on falsey return values.
            max_tries: The maximum number of attempts to make before giving
                up. In the case of failure, the result of the last attempt
                will be returned. The default value of None means there is
                no limit to the number of tries. If a callable is passed,
                it will be evaluated at runtime and its return value used.
            max_time: The maximum total amount of time to try for before
                giving up. If this time expires, the result of the last
                attempt will be returned. If a callable is passed, it will
                be evaluated at runtime and its return value used.
            jitter: A function of the value yielded by wait_gen returning
                the actual time to wait. This distributes wait times
                stochastically in order to avoid timing collisions across
                concurrent clients. Wait times are jittered by default
                using the full_jitter function. Jittering may be disabled
                altogether by passing jitter=None.
            on_success: Callable (or iterable of callables) with a unary
                signature to be called in the event of success. The
                parameter is a dict containing details about the
                invocation.
            on_backoff: Callable (or iterable of callables) with a unary
                signature to be called in the event of a backoff. The
                parameter is a dict containing details about the
                invocation.
            on_giveup: Callable (or iterable of callables) with a unary
                signature to be called in the event that max_tries is
                exceeded. The parameter is a dict containing details about
                the invocation.
            **wait_gen_kwargs: Any additional keyword args specified will
                be passed to wait_gen when it is initialized. Any callable
                args will first be evaluated and their return values
                passed. This is useful for runtime configuration.
        """
        def decorate(target):
            retry = None
            if sys.version_info >= (3, 5):  # pragma: python=3.5
                import asyncio

                if asyncio.iscoroutinefunction(target):
                    import backoff._async
                    retry = backoff._async.retry_predicate

                elif _is_event_loop() and _is_current_task():
                    # Verify that sync version is not being run from
                    # coroutine (that would lead to event loop hiccups).
                    raise TypeError(
                        "backoff.on_predicate applied to a regular function "
                        "inside coroutine, this will lead to event loop "
                        "hiccups. Use backoff.on_predicate on coroutines in "
                        "asynchronous code.")

            if retry is None:
                retry = _sync.retry_predicate

            return retry(target, wait_gen, predicate,
                         max_tries, max_time, jitter,
                         on_success, on_backoff, on_giveup,
                         wait_gen_kwargs)

        # Return a function which decorates a target with a retry loop.
        return decorate


    def on_exception(wait_gen,
                     exception,
                     max_tries=None,
                     max_time=None,
                     jitter=full_jitter,
                     giveup=lambda e: False,
                     on_success=None,
                     on_backoff=None,
                     on_giveup=None,
                     **wait_gen_kwargs):
        """Returns decorator for backoff and retry triggered by exception.

        Args:
            wait_gen: A generator yielding successive wait times in
                seconds.
            exception: An exception type (or tuple of types) which triggers
                backoff.
            max_tries: The maximum number of attempts to make before giving
                up. Once exhausted, the exception will be allowed to
                escape. The default value of None means there is no limit
                to the number of tries. If a callable is passed, it will be
                evaluated at runtime and its return value used.
            max_time: The maximum total amount of time to try for before
                giving up.
                Once expired, the exception will be allowed to escape. If a
                callable is passed, it will be evaluated at runtime and its
                return value used.
            jitter: A function of the value yielded by wait_gen returning
                the actual time to wait. This distributes wait times
                stochastically in order to avoid timing collisions across
                concurrent clients. Wait times are jittered by default
                using the full_jitter function. Jittering may be disabled
                altogether by passing jitter=None.
            giveup: Function accepting an exception instance and returning
                whether or not to give up. Optional. The default is to
                always continue.
            on_success: Callable (or iterable of callables) with a unary
                signature to be called in the event of success. The
                parameter is a dict containing details about the
                invocation.
            on_backoff: Callable (or iterable of callables) with a unary
                signature to be called in the event of a backoff. The
                parameter is a dict containing details about the
                invocation.
            on_giveup: Callable (or iterable of callables) with a unary
                signature to be called in the event that max_tries is
                exceeded. The parameter is a dict containing details about
                the invocation.
            **wait_gen_kwargs: Any additional keyword args specified will
                be passed to wait_gen when it is initialized. Any callable
                args will first be evaluated and their return values
                passed. This is useful for runtime configuration.
        """
        def decorate(target):
            retry = None
            if sys.version_info[:2] >= (3, 5):  # pragma: python=3.5
                import asyncio

                if asyncio.iscoroutinefunction(target):
                    import backoff._async
                    retry = backoff._async.retry_exception

                elif _is_event_loop() and _is_current_task():
                    # Verify that sync version is not being run from
                    # coroutine (that would lead to event loop hiccups).
                    raise TypeError(
                        "backoff.on_exception applied to a regular function "
                        "inside coroutine, this will lead to event loop "
                        "hiccups. Use backoff.on_exception on coroutines in "
                        "asynchronous code.")

            if retry is None:
                retry = _sync.retry_exception

            return retry(target, wait_gen, exception,
                         max_tries, max_time, jitter, giveup,
                         on_success, on_backoff, on_giveup,
                         wait_gen_kwargs)

        # Return a function which decorates a target with a retry loop.
        return decorate


    def _is_event_loop():  # pragma: no cover
        import asyncio

        try:
            if sys.version_info >= (3, 7):
                asyncio.get_running_loop()
            else:
                asyncio.get_event_loop()
        except RuntimeError:
            return False
        else:
            return True


    def _is_current_task():  # pragma: no cover
        import asyncio

        if sys.version_info >= (3, 7):
            return asyncio.current_task() is not None

        return asyncio.Task.current_task() is not None

backoff/_jitter.py
==================

.. code-block:: python

    # coding:utf-8
    import random


    def random_jitter(value):
        """Jitter the value a random number of milliseconds.

        This adds up to 1 second of additional time to the original value.
        Prior to backoff version 1.2 this was the default jitter behavior.

        Args:
            value: The unadulterated backoff value.
        """
        return value + random.random()


    def full_jitter(value):
        """Jitter the value across the full range (0 to value).

        This corresponds to the "Full Jitter" algorithm specified in the
        AWS blog's post on the performance of various jitter algorithms.
        (http://www.awsarchitectureblog.com/2015/03/backoff.html)

        Args:
            value: The unadulterated backoff value.
""" return random.uniform(0, value) PK!˸backoff/_sync.py# coding:utf-8 import datetime import functools import time from datetime import timedelta from backoff._common import (_handlers, _init_wait_gen, _log_backoff, _log_giveup, _maybe_call, _next_wait) def _call_handlers(hdlrs, target, args, kwargs, tries, elapsed, **extra): details = { 'target': target, 'args': args, 'kwargs': kwargs, 'tries': tries, 'elapsed': elapsed, } details.update(extra) for hdlr in hdlrs: hdlr(details) def retry_predicate(target, wait_gen, predicate, max_tries, max_time, jitter, on_success, on_backoff, on_giveup, wait_gen_kwargs): success_hdlrs = _handlers(on_success) backoff_hdlrs = _handlers(on_backoff, _log_backoff) giveup_hdlrs = _handlers(on_giveup, _log_giveup) @functools.wraps(target) def retry(*args, **kwargs): # change names because python 2.x doesn't have nonlocal max_tries_ = _maybe_call(max_tries) max_time_ = _maybe_call(max_time) tries = 0 start = datetime.datetime.now() wait = _init_wait_gen(wait_gen, wait_gen_kwargs) while True: tries += 1 elapsed = timedelta.total_seconds(datetime.datetime.now() - start) details = (target, args, kwargs, tries, elapsed) ret = target(*args, **kwargs) if predicate(ret): max_tries_exceeded = (tries == max_tries_) max_time_exceeded = (max_time_ is not None and elapsed >= max_time_) if max_tries_exceeded or max_time_exceeded: _call_handlers(giveup_hdlrs, *details, value=ret) break seconds = _next_wait(wait, jitter, elapsed, max_time_) _call_handlers(backoff_hdlrs, *details, value=ret, wait=seconds) time.sleep(seconds) continue else: _call_handlers(success_hdlrs, *details, value=ret) break return ret return retry def retry_exception(target, wait_gen, exception, max_tries, max_time, jitter, giveup, on_success, on_backoff, on_giveup, wait_gen_kwargs): success_hdlrs = _handlers(on_success) backoff_hdlrs = _handlers(on_backoff, _log_backoff) giveup_hdlrs = _handlers(on_giveup, _log_giveup) @functools.wraps(target) def retry(*args, **kwargs): # change 
names because python 2.x doesn't have nonlocal max_tries_ = _maybe_call(max_tries) max_time_ = _maybe_call(max_time) tries = 0 start = datetime.datetime.now() wait = _init_wait_gen(wait_gen, wait_gen_kwargs) while True: tries += 1 elapsed = timedelta.total_seconds(datetime.datetime.now() - start) details = (target, args, kwargs, tries, elapsed) try: ret = target(*args, **kwargs) except exception as e: max_tries_exceeded = (tries == max_tries_) max_time_exceeded = (max_time_ is not None and elapsed >= max_time_) if giveup(e) or max_tries_exceeded or max_time_exceeded: _call_handlers(giveup_hdlrs, *details) raise seconds = _next_wait(wait, jitter, elapsed, max_time_) _call_handlers(backoff_hdlrs, *details, wait=seconds) time.sleep(seconds) else: _call_handlers(success_hdlrs, *details) return ret return retry PK!d`backoff/_wait_gen.py# coding:utf-8 def expo(base=2, factor=1, max_value=None): """Generator for exponential decay. Args: base: The mathematical base of the exponentiation operation factor: Factor to multiply the exponentation by. max_value: The maximum value to yield. Once the value in the true exponential sequence exceeds this, the value of max_value will forever after be yielded. """ n = 0 while True: a = factor * base ** n if max_value is None or a < max_value: yield a n += 1 else: yield max_value def fibo(max_value=None): """Generator for fibonaccial decay. Args: max_value: The maximum value to yield. Once the value in the true fibonacci sequence exceeds this, the value of max_value will forever after be yielded. """ a = 1 b = 1 while True: if max_value is None or a < max_value: yield a a, b = b, a + b else: yield max_value def constant(interval=1): """Generator for constant intervals. Args: interval: The constant value in seconds to yield. """ while True: yield interval PK!c55backoff-1.7.0.dist-info/LICENSEThe MIT License (MIT) Copyright (c) 2014 litl, LLC. 