{ "info": { "author": "Stephane \"Twidi\" Angel", "author_email": "s.angel@twidi.com", "bugtrack_url": null, "classifiers": [ "Development Status :: 5 - Production/Stable", "Intended Audience :: Developers", "Operating System :: OS Independent", "Programming Language :: Python", "Programming Language :: Python :: 2", "Programming Language :: Python :: 2.7", "Programming Language :: Python :: 3", "Programming Language :: Python :: 3.5", "Programming Language :: Python :: 3.6", "Programming Language :: Python :: 3.7", "Programming Language :: Python :: 3.8", "Programming Language :: Python :: Implementation :: CPython", "Programming Language :: Python :: Implementation :: PyPy", "Topic :: Software Development :: Libraries :: Python Modules" ], "description": "|PyPI Version| |Build Status|\n\nredis-limpyd-jobs\n=================\n\nA queue/jobs system based on\n`redis-limpyd `__\n(`redis `__ orm (sort of) in python)\n\nWhere to find it:\n\n- Github repository: https://github.com/limpyd/redis-limpyd-jobs\n- Pypi package: https://pypi.python.org/pypi/redis-limpyd-jobs\n- Documentation: http://documentup.com/limpyd/redis-limpyd-jobs\n\nInstall:\n\nPython versions 2.7, and 3.5 to 3.8 are supported (CPython and PyPy).\n\nRedis-server versions >= 3 are supported.\n\nRedis-py versions >= 3 are supported.\n\nRedis-limpyd versions >= 2 are supported.\n\nYou can still use limpyd-extensions versions < 2 if you need something older than the above requirements.\n\n.. 
code:: bash\n\n pip install redis-limpyd-jobs\n\nNote that you actually need the\n`redis-limpyd-extensions `__ (min v1.0)\nin addition to `redis-limpyd `__ (min v1.2)\n(both are automatically installed via pypi)\n\nHow it works\n------------\n\n``redis-limpyd-jobs`` provides three ``limpyd`` models (``Queue``,\n``Job``, ``Error``), and a ``Worker`` class.\n\nThese models implement the minimum stuff you need to run jobs\nasynchronously:\n\n- Use the ``Job`` model to store things to do\n- The ``Queue`` model will store a list of jobs, with a priority system\n- The ``Error`` model will store all errors\n- The ``Worker`` class is to be used in a process to go through a queue\n and run jobs\n\nSimple example\n--------------\n\n.. code:: python\n\n from limpyd_jobs import STATUSES, Queue, Job, Worker\n\n # The function to run when a job is called by the worker\n def do_stuff(job, queue):\n # here do stuff with your job\n pass\n\n # Create a first job, name 'job:1', in a queue named 'myqueue', with a\n # priority of 1. 
The higher the priority, the sooner the job will run\n job1 = Job.add_job(identifier='job:1', queue_name='myqueue', priority=1)\n\n # Add another job in the same queue, with a higher priority, and a different\n # identifier (if the same was used, no new job would be added, but the\n # existing job's priority would have been updated)\n job2 = Job.add_job(identifier='job:2', queue_name='myqueue', priority=2)\n\n # Create a worker for the queue used previously, asking to call the\n # \"do_stuff\" function for each job, and to stop after 2 jobs\n worker = Worker(queues='myqueue', callback=do_stuff, max_loops=2)\n\n # Now really run the jobs\n worker.run()\n\n # Here our jobs are done, our queue is empty\n queue1 = Queue.get_queue('myqueue', priority=1)\n queue2 = Queue.get_queue('myqueue', priority=2)\n # nothing waiting\n print(queue1.waiting.lmembers(), queue2.waiting.lmembers())\n >> [] []\n # two jobs in success (show PKs of jobs)\n print(queue1.success.lmembers(), queue2.success.lmembers())\n >> ['limpyd_jobs.models.Job:1'] ['limpyd_jobs.models.Job:2']\n\n # Check our jobs' statuses\n print(job1.status.hget() == STATUSES.SUCCESS)\n >> True\n print(job2.status.hget() == STATUSES.SUCCESS)\n >> True\n\nHere is how it works:\n\n- ``Job.add_job`` to create a job\n- ``Worker()`` to create a worker, with the ``callback`` argument to set\n which function to run for each job\n- ``worker.run`` to launch a worker.\n\nNotice that you can run as many workers as you want, even on the same\nqueue name. Internally, we use the ``blpop`` redis command to get jobs\natomically.\n\nBut you can also run only one worker, having only one queue, doing\ndifferent stuff in the callback depending on the ``identifier``\nattribute of the job.\n\nWorkers are able to catch SIGINT/SIGTERM signals, finishing executing\nthe current job before exiting. 
Useful if used, for example, with\nsupervisord.\n\nIf you want to store more information in a job, queue or error, or want\nto have a different behavior in a worker, it's easy because you can\ncreate subclasses of everything in ``limpyd-jobs``, the ``limpyd``\nmodels or the ``Worker`` class.\n\nModels\n------\n\nJob\n~~~\n\nA Job stores all the needed information about a task to run.\n\nNote: If you want to subclass the Job model to add your own fields,\n``run`` method, or whatever, note that the class must be at the first\nlevel of a python module (ie not nested in a class or function) to work.\n\nJob fields\n^^^^^^^^^^\n\n``identifier``\n''''''''''''''\n\nA string (``InstanceHashField``, indexed) to identify the job.\n\nWhen using the (recommended) ``add_job`` class method, you can't have\nmore than one job with the same identifier in a waiting queue. If you create a\nnew job while another with the same identifier is still in the\nsame waiting queue, what is done depends on the priority of the two\njobs:\n\n- if the new job has a lower (or equal) priority, it's discarded\n- if the new job has a higher priority, the priority of the existing job\n is updated to the higher one.\n\nIn both cases the ``add_job`` class method returns the existing job,\ndiscarding the new one.\n\nA common way of using the identifier is to, at least, store a way to\nidentify the object on which we want the task to apply:\n\n- you can have one or more queues for a unique task, and store only the ``id``\n of an object in the ``identifier`` field\n- you can have one or more queues each doing many tasks, in which case you may\n want to store the task too in the ``identifier`` field: \"task:id\"\n\nNote that by subclassing the ``Job`` model, you are able to add new\nfields to a Job to store the task and other needed parameters\n(the size for a photo to resize, a message to send...)\n\n``status``\n''''''''''\n\nA string (``InstanceHashField``, indexed) to store the current status of\nthe job.\n\nIt's a single 
letter but we provide a class to help using it verbosely:\n``STATUSES``\n\n.. code:: python\n\n from limpyd_jobs import STATUSES\n print(STATUSES.SUCCESS)\n >> \"s\"\n\nWhen a job is created via the ``add_job`` class method, its status is\nset to ``STATUSES.WAITING``, or ``STATUSES.DELAYED`` if it is delayed by\nsetting ``delayed_until``. When it is selected by the worker for execution,\nthe status passes to ``STATUSES.RUNNING``. When finished, it's one of\n``STATUSES.SUCCESS`` or ``STATUSES.ERROR``. Another available status is\n``STATUSES.CANCELED``, useful if you want to cancel a job without\nremoving it from its queue.\n\nYou can also display the full string of a status:\n\n.. code:: python\n\n print(STATUSES.by_value(my_job.status.hget()))\n >> \"SUCCESS\"\n\n``priority``\n''''''''''''\n\nA string (``InstanceHashField``, indexed, default = 0) to store the\npriority of the job.\n\nThe priority of a job determines in which Queue object it will be\nstored. A worker listens for all queues with some names and different\npriorities, but respects the priority (reverse) order: the higher the\npriority, the sooner the job will be executed.\n\nWe chose the \"higher priority is better\" way of doing things\nto give the possibility to always add a job with a higher priority than\nany existing one.\n\nDirectly updating the priority of a job will not change the queue in\nwhich it's stored. 
But when you add a job via the (recommended)\n``add_job`` class method, if a job with the same identifier exists, its\npriority will be updated (only if the new one is higher) and the job\nwill be moved to the higher priority queue.\n\n``added``\n'''''''''\n\nA string (``InstanceHashField``) to store the date and time (a string\nrepresentation of ``datetime.utcnow()``) at which the job was added\nto its queue.\n\nIt's useful in combination with the ``end`` field to calculate the job\nduration.\n\n``start``\n'''''''''\n\nA string (``InstanceHashField``) to store the date and time (a string\nrepresentation of ``datetime.utcnow()``) at which the job was fetched\nfrom the queue, just before the callback is called.\n\nIt's useful in combination with the ``end`` field to calculate the job\nduration.\n\n``end``\n'''''''\n\nA string (``InstanceHashField``) to store the date and time (a string\nrepresentation of ``datetime.utcnow()``) at which the job was set\nas finished or in error, just after the job has finished.\n\nIt's useful in combination with the ``start`` field to calculate the job\nduration.\n\n``tries``\n'''''''''\n\nAn integer saved as a string (``InstanceHashField``) to store the number\nof times the job was executed. It can be more than one if it was\nrequeued after an error.\n\n``delayed_until``\n'''''''''''''''''\n\nThe string representation (``InstanceHashField``) of a ``datetime``\nobject until when the job may stay in the ``delayed`` list (a redis\nsorted-set) of the queue.\n\nIt can be set when calling ``add_job`` by passing either a\n``delayed_until`` argument, which must be a ``datetime``, or a\n``delayed_for`` argument, which must be a number of seconds (int or\nfloat) or a ``timedelta`` object. 
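As an illustration, the ``delayed_for`` handling can be sketched in plain Python (``compute_delayed_until`` is a hypothetical helper written for this example, not part of the ``limpyd_jobs`` API):

```python
from datetime import datetime, timedelta

def compute_delayed_until(delayed_for):
    """Hypothetical sketch: turn a ``delayed_for`` value (seconds as an
    int/float, or a timedelta) into an absolute ``delayed_until``."""
    if not isinstance(delayed_for, timedelta):
        delayed_for = timedelta(seconds=delayed_for)
    return datetime.utcnow() + delayed_for
```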
The ``delayed_for`` argument will be\nadded to the current time (``datetime.utcnow()``) to compute\n``delayed_until``.\n\nIf a job is in error after its execution and the worker has a\npositive ``requeue_delay_delta`` attribute, the ``delayed_until`` field\nwill be set accordingly, which is useful to retry a failed job after a certain\ndelay.\n\n``queued``\n''''''''''\n\nThis field is set to ``'1'`` when the job is currently managed by a queue:\nwaiting, delayed or running. This flag is set when calling\n``enqueue_or_delay``, and removed by the worker when the job is\ncanceled, finished with success, or finished with error and not\nrequeued. It's this field that is checked to test if the same job\nalready exists when ``add_job`` is called.\n\n``cancel_on_error``\n'''''''''''''''''''\n\nYou must set this field to a ``True`` value (don't forget that Redis\nstores strings, so ``0`` will be saved as ``\"0\"``, which is\ntruthy... so don't set it to ``False`` or ``0`` if you want a\n``False`` value: just leave it empty) if you don't want the job to be\nrequeued in case of error.\n\nNote that if you want to do this for all jobs of a class, you may want to\nset the ``always_cancel_on_error`` attribute of this class to ``True``.\n\nJob attributes\n^^^^^^^^^^^^^^\n\n``queue_model``\n'''''''''''''''\n\nWhen adding jobs via the ``add_job`` method, the model defined in this\nattribute will be used to get or create a queue. 
It's set by default to\n``Queue`` but if you want to replace it with your own model, you must\nsubclass the ``Job`` model too, and update this attribute.\n\n``queue_name``\n''''''''''''''\n\n``None`` by default, can be set when overriding the ``Job`` class to\navoid passing the ``queue_name`` argument to the job's methods\n(especially ``add_job``)\n\nNote that if you don't subclass the ``Job`` model, you can pass the\n``queue_name`` argument to the ``add_job`` method.\n\n``always_cancel_on_error``\n''''''''''''''''''''''''''\n\nSet this attribute to ``True`` if you want all the jobs of this class not\nto be requeued in case of error. If you leave it at its default value of\n``False``, you can still do it job by job by setting their\n``cancel_on_error`` field to a ``True`` value.\n\nJob properties and methods\n^^^^^^^^^^^^^^^^^^^^^^^^^^\n\n``ident`` (property)\n''''''''''''''''''''\n\nThe ``ident`` property is a string representation of the model + the\nprimary key of the job, saved in queues, allowing the retrieval of the\nJob.\n\n``must_be_cancelled_on_error`` (property)\n'''''''''''''''''''''''''''''''''''''''''\n\nThe ``must_be_cancelled_on_error`` property returns a Boolean indicating\nif, in case of error during its execution, the job must NOT be requeued.\n\nBy default it will be ``False``, but there are two ways to change this\nbehavior:\n\n- setting the ``always_cancel_on_error`` attribute of your job's class to\n ``True``.\n- setting the ``cancel_on_error`` field of your job to a ``True`` value\n\n``duration`` (property)\n'''''''''''''''''''''''\n\nThe ``duration`` property simply returns the time taken to run the\njob. 
The return value is a ``datetime.timedelta`` object if the\n``start`` and ``end`` fields are set, or ``None`` otherwise.\n\n``run`` (method)\n''''''''''''''''\n\nIt's the main method of the job, the only one you must override, to do\nsome stuff when the job is executed by the worker.\n\nThe return value of this method will be passed to the ``job_success`` method of\nthe worker, then, if defined, to the ``on_success`` method of the job.\n\nBy default a ``NotImplementedError`` is raised.\n\nArguments:\n\n- ``queue``: The queue from which the job was fetched.\n\n``requeue`` (method)\n''''''''''''''''''''\n\nThe ``requeue`` method allows a job to be put back in the waiting (or\ndelayed) queue when its execution failed.\n\nArguments:\n\n- ``queue_name=None`` The queue name in which to save the job. If not\n defined, will use the job's class one. If both are undefined, an\n exception is raised.\n\n- ``priority=None`` The new priority of the job. If not defined,\n the job will keep its current priority.\n\n- ``delayed_until=None`` Set this to a ``datetime`` object to set the\n date on which the job will really be requeued. The real\n ``delayed_until`` can also be set by passing the ``delayed_for``\n argument.\n\n- ``delayed_for=None`` A number of seconds (as an int, a float or a\n ``timedelta`` object) to wait before the job will really be requeued.\n It will be used to compute the ``delayed_until`` field of the job.\n\n- ``queue_model=None`` The model to use to store queues. By default,\n it's set to ``Queue``, defined in the ``queue_model`` attribute of\n the ``Job`` model. If the argument is not set, the attribute will be\n used. 
Be careful to set it as an attribute in your subclass, or as an\n argument to ``requeue``, or the default ``Queue`` model will be used\n and jobs won't be saved in the expected queue model.\n\n``enqueue_or_delay`` (method)\n'''''''''''''''''''''''''''''\n\nIt's the method, called by ``add_job`` and ``requeue``, that will either\nput the job in the waiting or delayed queue, depending on\n``delayed_until``. If this argument is defined and in the future, the\njob is delayed, else it's simply queued.\n\nThis method also sets the ``queued`` flag of the job to ``'1'``.\n\nArguments:\n\n- ``queue_name=None`` The queue name in which to save the job. If not\n defined, will use the job's class one. If both are undefined, an\n exception is raised.\n\n- ``priority=None`` The new priority of the job. The job's\n current one is used if not defined.\n\n- ``delayed_until=None`` The date (must be either a ``datetime`` object\n or the string representation of one) until when the job will remain\n in the delayed queue. It will not be processed until this date.\n\n- ``prepend=False`` Set to ``True`` to add the job at the start of the\n waiting list, to be the first to be executed (only if not delayed)\n\n- ``queue_model=None`` The model to use to store queues. 
See\n ``add_job`` and ``requeue``.\n\n``on_started`` (ghost method)\n'''''''''''''''''''''''''''''\n\nThis method, if defined on your job model (it's not there by default, ie\n\"ghost\") is called when the job is fetched by the worker and about to be\nexecuted (\"waiting\" status)\n\nArguments:\n\n- ``queue``: The queue from which the job was fetched.\n\n``on_success`` (ghost method)\n'''''''''''''''''''''''''''''\n\nThis method, if defined on your job model (it's not there by default, ie\n\"ghost\") is called by the worker when the job's execution was a success\n(it did not raise any exception).\n\nArguments:\n\n- ``queue``: The queue from which the job was fetched.\n\n- ``result`` The data returned by the ``execute`` method of the worker,\n which calls and returns the result of the ``run`` method of the job (or\n the ``callback`` provided to the worker)\n\n``on_error`` (ghost method)\n'''''''''''''''''''''''''''\n\nThis method, if defined on your job model (it's not there by default, ie\n\"ghost\") is called by the worker when the job's execution failed (an\nexception was raised)\n\nArguments:\n\n- ``queue``: The queue from which the job was fetched.\n\n- ``exception``: The exception that was raised during the execution.\n\n- ``traceback``: The traceback at the time of the exception, if the\n ``save_tracebacks`` attribute of the worker was set to ``True``\n\n``on_skipped`` (ghost method)\n'''''''''''''''''''''''''''''\n\nThis method, if defined on your job model (it's not there by default, ie\n\"ghost\") is called when the job, just fetched by the worker, could not\nbe executed because its status was not \"waiting\". 
Another possible\nreason is that the job was canceled during its execution (by setting\nits status to ``STATUSES.CANCELED``)\n\n- ``queue``: The queue from which the job was fetched.\n\n``on_requeued`` (ghost method)\n''''''''''''''''''''''''''''''\n\nThis method, if defined on your job model (it's not there by default, ie\n\"ghost\") is called by the worker when the job failed and has been\nrequeued.\n\n- ``queue``: The queue from which the job was fetched.\n\n``on_delayed`` (ghost method)\n'''''''''''''''''''''''''''''\n\nThis method, if defined on your job model (it's not there by default, ie\n\"ghost\") is called by the worker when the job was delayed (by setting\nits status to ``STATUSES.DELAYED``) during its execution (note that you\nmay also want to set the job's ``delayed_until`` field to a correct\nvalue (a string representation of a UTC datetime), or the worker\nwill delay it for 60 seconds).\n\nIt can also be called if the job's status was set to\n``STATUSES.DELAYED`` while still in the ``waiting`` list of the queue.\n\n- ``queue``: The queue from which the job was fetched.\n\nJob class methods\n^^^^^^^^^^^^^^^^^\n\n``add_job``\n'''''''''''\n\nThe ``add_job`` class method is the main (and recommended) way to create\na job. It will check if a job with the same identifier already exists in\na queue (not finished) and if one is found, update its priority (and\nmove it to the correct queue). If no existing job is found, a new one\nwill be created and added to a queue.\n\nArguments:\n\n- ``identifier`` The value for the ``identifier`` field.\n\n- ``queue_name=None`` The queue name in which to save the job. If not\n defined, will use the class one. If both are undefined, an exception\n is raised.\n\n- ``priority=0`` The priority of the new job, or the new priority of an\n already existing job, if this priority is higher than the existing one.\n\n- ``queue_model`` The model to use to store queues. 
By default, it's\n set to ``Queue``, defined in the ``queue_model`` attribute of the\n ``Job`` model. If the argument is not set, the attribute will be\n used. Be careful to set it as an attribute in your subclass, or as an\n argument to ``add_job``, or the default ``Queue`` model will be used\n and jobs won't be saved in the expected queue model.\n\n- ``prepend=False`` By default, all new jobs are added at the end of\n the waiting list (and taken from the start, it's a FIFO list), but\n you can force jobs to be added at the beginning of the waiting list\n to be the first to be executed, simply by setting the ``prepend``\n argument to ``True``. If the job already exists, it will be moved to\n the beginning of the list.\n\n- ``delayed_until=None`` Set this to a ``datetime`` object to set the\n job to be executed in the future. If defined and in the future, the\n job will be added to the delayed list (a redis sorted-set) instead of\n the waiting one. The real ``delayed_until`` can also be set by\n passing the ``delayed_for`` argument.\n\n- ``delayed_for=None`` A number of seconds (as an int, a float or a\n ``timedelta`` object) to wait before adding the job to the waiting\n list. 
It will be used to compute the ``delayed_until`` field of the job.\n\nIf you use a subclass of the ``Job`` model, you can pass additional\narguments to the ``add_job`` method simply by passing them as named\narguments; they will be saved if a new job is created (but not if an\nexisting job is found in a waiting queue)\n\n``get_model_repr``\n''''''''''''''''''\n\nReturns the string representation of the model, used to compute the\n``ident`` property of a job.\n\n``get_from_ident``\n''''''''''''''''''\n\nReturns a job from a string previously obtained via the ``ident`` property of\na job.\n\nArguments:\n\n- ``ident`` A string including the model representation of a job and\n its primary key, as returned by the ``ident`` property.\n\nQueue\n~~~~~\n\nA Queue stores a list of waiting jobs with a given priority, and keeps a\nlist of successful jobs and ones on error.\n\nQueue fields\n^^^^^^^^^^^^\n\n``name``\n''''''''\n\nA string (``InstanceHashField``, indexed), used by the ``add_job``\nmethod to find the queue in which to store a job. Many queues can have the\nsame name, but different priorities.\n\nThis name is also used by a worker to find which queues it needs to wait\nfor.\n\n``priority``\n''''''''''''\n\nA string (``InstanceHashField``, indexed, default = 0), to store the\npriority of a queue's jobs. All jobs in a queue are considered to have\nthis priority. That's why, as said for the ``priority`` field of the\n``Job`` model, changing the priority of a job doesn't change its real\npriority. But adding (via the ``add_job`` class method of the ``Job``\nmodel) a new job with the same identifier for the same queue name can\nupdate the job's priority by moving it to another queue with the correct\npriority.\n\nAs already said, the higher the priority, the sooner the jobs in a queue\nwill be executed. 
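The resulting fetch order can be sketched with a plain sort (the ``(name, priority)`` tuples here are purely illustrative, not the library's internal representation):

```python
# Illustrative (name, priority) pairs for three queues.
queues = [('myqueue', 0), ('myqueue', 2), ('other', 1)]

# Highest priority first, then by name: the order in which
# workers will drain the queues.
ordered = sorted(queues, key=lambda q: (-q[1], q[0]))
print(ordered)  # [('myqueue', 2), ('other', 1), ('myqueue', 0)]
```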
If a queue has a priority of 2, and another queue of\nthe same name has a priority of 0 or 1, *all* jobs in the one with the\npriority of 2 will be executed (at least fetched) before the others,\nregardless of the number of workers.\n\n``waiting``\n'''''''''''\n\nA list (``ListField``) to store the primary keys of jobs in the waiting\nstatus. It's a FIFO list: jobs are appended to the right (via\n``rpush``), and fetched from the left (via ``blpop``)\n\nWhen fetched, a job from this list is executed, then pushed to the\n``success`` or ``error`` list, depending on whether the callback raised an\nexception or not. If a job in this waiting list is not in the waiting\nstatus, it will be skipped by the worker.\n\n``success``\n'''''''''''\n\nA list (``ListField``) to store the primary keys of jobs fetched from\nthe waiting list and successfully executed.\n\n``error``\n'''''''''\n\nA list (``ListField``) to store the primary keys of jobs fetched from\nthe waiting list for which the execution failed.\n\n``delayed``\n'''''''''''\n\nA sorted set (``SortedSetField``) to store delayed jobs, ones having a\n``delayed_until`` datetime in the future. The timestamp representation\nof the ``delayed_until`` field is used as the score for this sorted-set,\nto ease the retrieval of jobs that are now ready.\n\nQueue attributes\n^^^^^^^^^^^^^^^^\n\nThe ``Queue`` model has no specific attributes.\n\nQueue properties and methods\n^^^^^^^^^^^^^^^^^^^^^^^^^^^^\n\n``first_delayed`` (property)\n''''''''''''''''''''''''''''\n\nReturns a tuple representing the first job to be ready in the delayed\nqueue. 
It's a tuple with the job's pk and the timestamp representation\nof its ``delayed_until`` value (it's the score of the sorted set).\n\nReturns None if the delayed queue is empty.\n\n``first_delayed_time`` (property)\n'''''''''''''''''''''''''''''''''\n\nReturns the timestamp representation of the first delayed job to be\nready, or None if the delayed queue is empty.\n\n``delay_job`` (method)\n''''''''''''''''''''''\n\nPut a job in the delayed queue.\n\nArguments:\n\n- ``job`` The job to delay.\n\n- ``delayed_until`` A ``datetime`` object specifying when the job\n should be put back in the waiting queue. It will be converted into a\n timestamp used as the score of the delayed list, which is a redis\n sorted-set.\n\n``enqueue_job`` (method)\n''''''''''''''''''''''''\n\nPut a job in the waiting list.\n\nArguments:\n\n- ``job`` The job to enqueue.\n\n- ``prepend=False`` Set to ``True`` to add the job at the start of the\n waiting list, to be the first to be executed.\n\n``requeue_delayed_jobs`` (method)\n'''''''''''''''''''''''''''''''''\n\nThis method will check for all jobs in the delayed queue that are now\nready to be executed and put them back in the waiting list.\n\nThis method will return the list of failures, each failure being a tuple\nwith the value returned by the ``ident`` property of a job, and the\nmessage of the raised exception causing the failure.\n\nNote that the status of the jobs is changed only if their status was\n``STATUSES.DELAYED``. This makes it possible to cancel a delayed job before it\nis requeued.\n\nQueue class methods\n^^^^^^^^^^^^^^^^^^^\n\n``get_queue``\n'''''''''''''\n\nThe ``get_queue`` class method is the recommended way to get a ``Queue``\nobject. 
Given a name and a priority, it will return the found queue or\ncreate a queue if no matching one exists.\n\nArguments:\n\n- ``name`` The name of the queue to get or create.\n\n- ``priority`` The priority of the queue to get or create.\n\nIf you use a subclass of the ``Queue`` model, you can pass additional\narguments to the ``get_queue`` method simply by passing them as named\narguments; they will be saved if a new queue is created (but not if an\nexisting queue is found)\n\n``get_waiting_keys``\n''''''''''''''''''''\n\nThe ``get_waiting_keys`` class method returns all the existing (waiting)\nqueues with the given names, sorted by priority (reverse order: the\nhighest priorities come first), then by name. The returned value is a\nlist of redis keys for the ``waiting`` list of each matching queue. It's\nused internally by the workers as the argument to the ``blpop`` redis\ncommand.\n\nArguments:\n\n- ``names`` The names of the queues to take into account (can be a\n string if a single name, or a list of strings)\n\n``count_waiting_jobs``\n''''''''''''''''''''''\n\nThe ``count_waiting_jobs`` class method returns the number of jobs still\nwaiting for the given queue names, combining all priorities.\n\nArguments:\n\n- ``names`` The names of the queues to take into account (can be a\n string if a single name, or a list of strings)\n\n``count_delayed_jobs``\n''''''''''''''''''''''\n\nThe ``count_delayed_jobs`` class method returns the number of jobs still\ndelayed for the given queue names, combining all priorities.\n\nArguments:\n\n- ``names`` The names of the queues to take into account (can be a\n string if a single name, or a list of strings)\n\n``get_all``\n'''''''''''\n\nThe ``get_all`` class method returns a list of queues for the given\nnames.\n\nArguments:\n\n- ``name`` The names of the queues to take into account (can be a\n string if a single name, or a list of strings)\n\n``get_all_by_priority``\n'''''''''''''''''''''''\n\nThe ``get_all_by_priority`` class 
method returns a list of queues for\nthe given names, ordered by priority (the highest priority first),\nthen by name.\n\nArguments:\n\n- ``name`` The names of the queues to take into account (can be a\n string if a single name, or a list of strings)\n\nError\n~~~~~\n\nThe ``Error`` model is used to store errors from the jobs that were not\nsuccessfully executed by a worker.\n\nIts main purpose is to be able to filter errors, by queue name, job\nmodel, job identifier, date, exception class name or code. You can use\nyour own subclass of the ``Error`` model to store additional\nfields, and filter on them.\n\nError fields\n^^^^^^^^^^^^\n\n``job_model_repr``\n''''''''''''''''''\n\nA string (``InstanceHashField``, indexed) to store the string\nrepresentation of the job's model.\n\n``job_pk``\n''''''''''\n\nA string (``InstanceHashField``, indexed) to store the primary key of\nthe job which generated the error.\n\n``identifier``\n''''''''''''''\n\nA string (``InstanceHashField``, indexed) to store the identifier of the\njob that failed.\n\n``queue_name``\n''''''''''''''\n\nA string (``InstanceHashField``, indexed) to store the name of the queue\nthe job was in when it failed.\n\n``date_time``\n'''''''''''''\n\nA string (``InstanceHashField``, indexed with ``SimpleDateTimeIndex``) to\nstore the date and time (to the second) of the error (a string representation of\n``datetime.utcnow()``). This field is indexed so you can filter\nerrors by date and time (string mode, not by parts of date and time, ie\n``date_time__gt='2017-01-01'``), useful to graph errors.\n\n``date``\n''''''''\n\nDEPRECATED: this is replaced by ``date_time`` but kept for now for compatibility\n\nA string (``InstanceHashField``, indexed) to store the date (only the\ndate, not the time) of the error (a string representation of\n``datetime.utcnow().date()``). 
This field is indexed so you can filter\nerrors by date, useful to graph errors.\n\n``time``\n''''''''\n\nDEPRECATED: this is replaced by ``date_time`` but kept for now for compatibility\n\nA string (``InstanceHashField``) to store the time (only the time, not\nthe date) of the error (a string representation of\n``datetime.utcnow().time()``).\n\n``type``\n''''''''\n\nA string (``InstanceHashField``, indexed) to store the type of error.\nIt's the class name of the originally raised exception.\n\n``code``\n''''''''\n\nA string (``InstanceHashField``, indexed) to store the value of the\n``code`` attribute of the originally raised exception. Nothing is stored\nhere if there is no such attribute.\n\n``message``\n'''''''''''\n\nA string (``InstanceHashField``) to store the string representation of\nthe originally raised exception.\n\n``traceback``\n'''''''''''''\n\nA string (``InstanceHashField``) to store the string representation of\nthe traceback of the originally raised exception (the worker may not\nhave filled it)\n\nError properties and methods\n^^^^^^^^^^^^^^^^^^^^^^^^^^^^\n\n``datetime``\n''''''''''''\n\nThis property returns a ``datetime`` object based on the content of the\n``date_time`` field of an ``Error`` object.\n\nError class methods\n^^^^^^^^^^^^^^^^^^^\n\n``add_error``\n'''''''''''''\n\nThe ``add_error`` class method is the main (and recommended) way to add\nan entry in the ``Error`` model, accepting simple arguments that will\nbe broken down (``job`` becomes ``identifier`` and ``job_pk``, ``when``\nbecomes ``date`` and ``time``, ``error`` becomes ``code`` and\n``message``)\n\nArguments:\n\n- ``queue_name`` The name of the queue the job came from.\n\n- ``job`` The job which generated the error, from which we'll extract\n ``job_pk`` and ``identifier``\n\n- ``error`` An exception from which we'll extract the code and the\n message.\n\n- ``when=None`` A ``datetime`` object from which we'll extract the date\n and time.\n\n If not filled, 
``datetime.utcnow()`` will be used.\n\n- ``trace=None`` The traceback, stringified, to store.\n\nIf you use a subclass of the ``Error`` model, you can pass additional\narguments to the ``add_error`` method simply by passing them as named\narguments; they will be saved in the object to be created.\n\n``collection_for_job``\n''''''''''''''''''''''\n\nThe ``collection_for_job`` method is a helper to retrieve the errors associated\nwith a given job, more precisely for all the instances of this job with\nthe same identifier.\n\nThe result is a ``limpyd`` collection, so you can use ``filter``,\n``instances``... on it.\n\nArguments:\n\n- ``job`` The job for which we want errors\n\nThe worker(s)\n-------------\n\nThe Worker class\n~~~~~~~~~~~~~~~~\n\nThe ``Worker`` class does all the logic, working with the ``Queue`` and\n``Job`` models.\n\nThe main behavior is:\n\n- reading queue keys for the given names\n- waiting for a job available in the queues\n- executing the job\n- managing success or error\n- exiting after a defined number of jobs or a maximum duration (if\n defined), or when a ``SIGINT``/``SIGTERM`` signal is caught\n\nThe class is split into many short methods so that you can subclass it to\nchange/add/remove whatever you want.\n\nConstructor arguments and worker's attributes\n^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^\n\nEach of the following worker's attributes can be set by an argument in\nthe constructor, using the exact same name. That's why the two are\ndescribed here together.\n\n``queues``\n''''''''''\n\nNames of the queues to work with. It can be a list/tuple of strings, or\na string with names separated by a comma (no spaces), or without a comma\nfor a single queue.\n\nNote that all queues must be from the same ``queue_model``.\n\nDefaults to ``None``, but if not set and not defined in a subclass, a\n``LimpydJobsException`` will be raised.\n\n``queue_model``\n'''''''''''''''\n\nThe model to use for queues. 
By default it's the ``Queue`` model
included in ``limpyd_jobs``, but you can use a subclass of the default
model to add fields, methods...

``error_model``
'''''''''''''''

The model to use for saving errors. By default it's the ``Error`` model
included in ``limpyd_jobs``, but you can use a subclass of the default
model to add fields, methods...

``logger_name``
'''''''''''''''

``limpyd_jobs`` uses the python ``logging`` module, so this is the name
to use for the logger created for the worker. The default value is
``LOGGER_NAME``, with ``LOGGER_NAME`` defined in ``limpyd_jobs.workers``
with a value of "limpyd-jobs".

``logger_level``
''''''''''''''''

It's the level set for the logger created with the name defined in
``logger_name``, defaulting to ``logging.INFO``.

``save_errors``
'''''''''''''''

A boolean, ``True`` by default, to indicate if we have to save errors in
the ``Error`` model (or the one defined in ``error_model``) when the
execution of a job is not successful.

``save_tracebacks``
'''''''''''''''''''

A boolean, ``True`` by default, to indicate if we have to save the
tracebacks of exceptions in the ``Error`` model (or the one defined in
``error_model``) when the execution of a job is not successful (and
only if ``save_errors`` is ``True``).

``max_loops``
'''''''''''''

The max number of loops (fetching + executing a job) to do in the
worker's lifetime, 1000 by default. Note that after this number of
loops, the worker ends (the ``run`` method cannot be executed again).

The aim is to avoid memory leaks becoming too important.

``max_duration``
''''''''''''''''

If defined, the worker will end when its ``run`` method has been running
for at least this number of seconds. 
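
This bound can be pictured as computing an end date from the start date;
here is a minimal, self-contained sketch of that idea, using a
hypothetical helper (this is an illustration, not the library's actual
implementation):

.. code:: python

    from datetime import datetime, timedelta

    # Sketch of how max_duration can bound the worker's lifetime via a
    # computed end date (illustrative helper, not the real worker code).

    def wanted_end_date(start_date, max_duration):
        """Date after which the worker must stop, or None if unbounded."""
        if max_duration is None:
            return None
        return start_date + timedelta(seconds=max_duration)

    start = datetime(2024, 1, 1, 12, 0, 0)
    end = wanted_end_date(start, max_duration=3600)  # one hour after start
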
By default it's set to ``None``,
meaning there is no maximum duration.

``terminate_gracefully``
''''''''''''''''''''''''

To avoid interrupting the execution of a job, if
``terminate_gracefully`` is set to ``True`` (the default), the
``SIGINT`` and ``SIGTERM`` signals are caught, asking the worker to exit
when the current job is done.

``callback``
''''''''''''

The callback is the function to run when a job is fetched. By default
it's the ``execute`` method of the worker (which calls the ``run``
method of jobs, which, if not overridden, raises a
``NotImplementedError``), but you can pass any function that accepts a
job and a queue as arguments.

Using the queue's name, and the job's identifier+model (via
``job.ident``), you can manage many actions depending on the queue if
needed.

If this callback (or the ``execute`` method) raises an exception, the
job is considered in error. Otherwise, it's considered successful and
the return value is passed to the ``job_success`` method, to let you do
what you want with it.

``timeout``
'''''''''''

The timeout is used as a parameter to the ``blpop`` redis command we use
to fetch jobs from the waiting lists. It's 30 seconds by default but you
can change it to any positive number (in seconds). You can set it to
``0`` if you don't want any timeout to be applied to the ``blpop``
command.

It's better to always set a timeout, to reenter the main loop and call
the ``must_stop`` method to see if the worker must exit. 
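
The interplay between the ``blpop`` timeout and the loop counter can be
sketched as follows (a simplified, self-contained illustration; the
names are hypothetical, not the library's actual API):

.. code:: python

    # A None from the fetcher simulates a ``blpop`` timeout; only really
    # fetched jobs increment the loop counter.

    def run_loop(fetches, max_loops):
        num_loops = 0
        handled = []
        for job in fetches:
            if num_loops >= max_loops:  # one of the must_stop conditions
                break
            if job is None:             # blpop timed out: just loop again
                continue
            num_loops += 1              # a job was fetched: count this loop
            handled.append(job)
        return num_loops, handled

    # Three timeouts interleaved with two jobs: only 2 loops are counted
    loops, handled = run_loop([None, 'job:1', None, 'job:2', None], max_loops=10)
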
Note that the
number of loops is not updated when a timeout occurs, so a small
``timeout`` won't alter the number of loops defined by ``max_loops``.

``fetch_priorities_delay``
''''''''''''''''''''''''''

The ``fetch_priorities_delay`` is the delay between two fetches of the
list of priorities for the current worker.

If a job was added with a priority that did not exist when the worker's
run was started, it will not be taken into account until this delay
expires.

Note that if this delay is, say, 5 seconds (it's 25 by default), and the
``timeout`` parameter is 30, you may wait 30 seconds before the new
priority fetch, because if there are no jobs in the priority queues
currently managed by the worker, the timing is in redis's hands.

``fetch_delayed_delay``
'''''''''''''''''''''''

The ``fetch_delayed_delay`` is the delay between two fetches of the
delayed jobs that are now ready in the queues managed by the worker.

Note that if this delay is, say, 5 seconds (it's 25 by default), and the
``timeout`` parameter is 30, you may wait 30 seconds before the new
delayed fetch, because if there are no jobs in the priority queues
currently managed by the worker, the timing is in redis's hands.

``requeue_times``
'''''''''''''''''

It's the number of times a job will be requeued when its execution
results in a failure. It will then be put back in the same queue.

This attribute is 0 by default, so by default a job won't be requeued.

``requeue_priority_delta``
''''''''''''''''''''''''''

This number will be added to the current priority of the job that will
be requeued. 
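
Taken together, ``requeue_times``, ``requeue_priority_delta`` and
``requeue_delay_delta`` decide whether and how a failed job goes back to
its queue. A rough sketch of that decision (hypothetical helper, not the
library's internals):

.. code:: python

    # Illustrative combination of the three requeue settings on failure.

    def requeue_plan(tries, priority, requeue_times=0,
                     requeue_priority_delta=-1, requeue_delay_delta=30):
        """Return None if the job must not be requeued, else its new
        priority and the delay (seconds) before it is queued again."""
        if tries >= requeue_times:
            return None
        return priority + requeue_priority_delta, requeue_delay_delta

    # With requeue_times=2: a first failure (tries=0) is requeued one
    # priority lower, 30 seconds later; a third failure (tries=2) is not.
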
By default it's set to -1 to decrease the priority at each
requeue.

``requeue_delay_delta``
'''''''''''''''''''''''

It's a number of seconds to wait before adding an erroneous job back to
the waiting queue, set to 30 by default: when a job fails to execute,
it's put in the delayed queue for 30 seconds, then it'll be put back in
the waiting queue (depending on the ``fetch_delayed_delay`` attribute).

Other worker's attributes
^^^^^^^^^^^^^^^^^^^^^^^^^

In case of subclassing, you may need these attributes, created and
defined during the use of the worker:

``keys``
''''''''

A list of keys of queues' waiting lists, which are listened to by the
worker for new jobs. Filled by the ``update_keys`` method.

``status``
''''''''''

The current status of the worker. ``None`` by default until the ``run``
method is called, after which it's set to ``"starting"`` while looking
for an available queue. Then it's set to ``"waiting"`` while the worker
waits for new jobs. When a job is fetched, the status is set to
``"running"``. And finally, when the loop is over, it's set to
``"terminated"``.

If the status is not ``None``, the ``run`` method cannot be called.

``logger``
''''''''''

The logger (from the ``logging`` python module) defined by the
``set_logger`` method.

``num_loops``
'''''''''''''

The number of loops done by the worker, incremented each time a job is
fetched from a waiting list, even if the job is skipped (bad status...)
or in error. When this number equals the ``max_loops`` attribute, the
worker ends.

``end_forced``
''''''''''''''

When ``True``, asks the worker to terminate itself after executing
the current job. 
It can be set to ``True`` manually, or when a
SIGINT/SIGTERM signal is caught.

``end_signal_caught``
'''''''''''''''''''''

This boolean is set to ``True`` when a SIGINT/SIGTERM is caught (only if
``terminate_gracefully`` is ``True``).

``start_date``
''''''''''''''

``None`` by default, set to ``datetime.utcnow()`` when the ``run``
method starts.

``end_date``
''''''''''''

``None`` by default, set to ``datetime.utcnow()`` when the ``run``
method ends.

``wanted_end_date``
'''''''''''''''''''

``None`` by default, it's computed to know when the worker must stop,
based on the ``start_date`` and ``max_duration``. It will always be
``None`` if no ``max_duration`` is defined.

``connection``
''''''''''''''

It's a property, not an attribute, to get the current connection to the
redis server.

``parameters``
''''''''''''''

It's a tuple holding all parameters accepted by the worker's
constructor:

.. code:: python

    parameters = ('queues', 'callback', 'queue_model', 'error_model',
                  'logger_name', 'logger_level', 'save_errors',
                  'save_tracebacks', 'max_loops', 'max_duration',
                  'terminate_gracefuly', 'timeout', 'fetch_priorities_delay',
                  'fetch_delayed_delay', 'requeue_times',
                  'requeue_priority_delta', 'requeue_delay_delta')

Worker's methods
^^^^^^^^^^^^^^^^

As said before, the ``Worker`` class is split into many small methods,
to ease subclassing. Here is the list of public methods:

``__init__``
''''''''''''

Signature:

.. code:: python

    def __init__(self, queues=None, **kwargs):

Returns nothing.

It's the constructor (you guessed it ;) ) of the ``Worker`` class,
expecting all arguments (defined in ``parameters``) that can also be
defined as class attributes.

It validates these arguments, prepares the logging and initializes other
attributes.

You can override it to add, validate, initialize other arguments or
attributes.

``handle_end_signal``
'''''''''''''''''''''

Signature:

.. code:: python

    def handle_end_signal(self):

Returns nothing.

It's called in the constructor if ``terminate_gracefully`` is ``True``.
It plugs the SIGINT and SIGTERM signals to the ``catch_end_signal``
method.

You can override it to catch more signals or do some checks before
plugging them to the ``catch_end_signal`` method.

``stop_handling_end_signal``
''''''''''''''''''''''''''''

Signature:

.. code:: python

    def stop_handling_end_signal(self):

Returns nothing.

It's called at the end of the ``run`` method, as we don't need to catch
the SIGINT and SIGTERM signals anymore. It's useful when launching a
worker in a python shell, to finally let the shell handle these signals.
It's useless in a script, because the script is finished when the
``run`` method exits.

``set_logger``
''''''''''''''

Signature:

.. code:: python

    def set_logger(self):

Returns nothing.

It's called in the constructor to initialize the logger, using
``logger_name`` and ``logger_level``, saving it in ``self.logger``.

``must_stop``
'''''''''''''

Signature:

.. code:: python

    def must_stop(self):

Returns a boolean.

It's called in the main loop, to exit it on some conditions: an end
signal was caught, the ``max_loops`` number was reached, or
``end_forced`` was set to ``True``.

``wait_for_job``
''''''''''''''''

Signature:

.. code:: python

    def wait_for_job(self):

Returns a tuple with a queue and a job.

This method is called during the loop, to wait for an available job in
the waiting lists. When a job is fetched, it returns the queue (an
instance of the model defined by ``queue_model``) on which the job was
found, and the job itself.

``get_job``
'''''''''''

Signature:

.. code:: python

    def get_job(self, job_ident):

Returns a job.

Called during ``wait_for_job`` to get a real job object based on the
job's ``ident`` (model + pk) fetched from the waiting lists.

``get_queue``
'''''''''''''

Signature:

.. code:: python

    def get_queue(self, queue_redis_key):

Returns a queue.

Called during ``wait_for_job`` to get a real queue object (an instance
of the model defined by ``queue_model``) based on the key returned by
redis telling us in which list the job was found. This key is not the
primary key of the queue, but the redis key of its waiting field.

``catch_end_signal``
''''''''''''''''''''

Signature:

.. code:: python

    def catch_end_signal(self, signum, frame):

Returns nothing.

It's called when a SIGINT/SIGTERM signal is caught. It simply sets
``end_signal_caught`` and ``end_forced`` to ``True``, to tell the worker
to terminate as soon as possible.

``execute``
'''''''''''

Signature:

.. code:: python

    def execute(self, job, queue):

Returns nothing by default.

This method is called if no ``callback`` argument is provided when
initiating the worker, and calls the ``run`` method of the job, which
raises a ``NotImplementedError`` by default.

If the execution is successful, no return value is expected, but if
there is one, it will be passed to the ``job_success`` method. And if an
error occurred, an exception must be raised, which will be passed to the
``job_error`` method.

``update_keys``
'''''''''''''''

Signature:

.. code:: python

    def update_keys(self):

Returns nothing.

Calling this method updates the internal ``keys`` attribute, which
contains the redis keys of the waiting lists of all queues listened to
by the worker.

It's actually called at the beginning of the ``run`` method, and at
intervals depending on ``fetch_priorities_delay``. 
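
This periodic refresh can be sketched as a simple delay check
(illustrative only, with hypothetical names; the real worker does this
inside its loop):

.. code:: python

    # Keys are recomputed only once fetch_priorities_delay has elapsed.

    class KeyRefresher(object):
        def __init__(self, fetch_priorities_delay=25):
            self.fetch_priorities_delay = fetch_priorities_delay
            self.last_update = None

        def maybe_update_keys(self, now):
            """Return True if the keys had to be recomputed at ``now``."""
            if (self.last_update is None
                    or now - self.last_update >= self.fetch_priorities_delay):
                self.last_update = now
                return True
            return False

    refresher = KeyRefresher(fetch_priorities_delay=25)
    first = refresher.maybe_update_keys(now=0)    # initial computation
    second = refresher.maybe_update_keys(now=10)  # too early: skipped
    third = refresher.maybe_update_keys(now=30)   # delay elapsed: refreshed
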
Note that if a queue
with a specific priority doesn't exist when this method is called, but
is created later, by adding a job with ``add_job``, the worker will
ignore it unless this ``update_keys`` method is called again
(programmatically, or by waiting at least ``fetch_priorities_delay``
seconds).

``run``
'''''''

Signature:

.. code:: python

    def run(self):

Returns nothing.

It's the main method of the worker, with all the logic: while we don't
have to stop (result of the ``must_stop`` method), fetch a job from
redis, and if this job is really in the waiting state, execute it, and
do something depending on the status of the execution (success,
error...).

In addition to the methods that do the real stuff (``update_keys``,
``wait_for_job``), some other methods are called during the execution:
``run_started`` and ``run_ended``, about the run, and ``job_skipped``,
``job_started``, ``job_success`` and ``job_error``, about jobs. You can
override these methods in subclasses to adapt the behavior to your
needs.

``run_started``
'''''''''''''''

Signature:

.. code:: python

    def run_started(self):

Returns nothing.

This method is called in the ``run`` method after the keys are computed
using ``update_keys``, just before starting the loop. By default it does
nothing but a ``log.info``.

``run_ended``
'''''''''''''

Signature:

.. code:: python

    def run_ended(self):

Returns nothing.

This method is called just before exiting the ``run`` method. By default
it does nothing but a ``log.info``.

``job_skipped``
'''''''''''''''

Signature:

.. code:: python

    def job_skipped(self, job, queue):

Returns nothing.

When a job is fetched in the ``run`` method, its status is checked. 
If
it's not ``STATUSES.WAITING``, this ``job_skipped`` method is called,
with two main arguments: the job and the queue in which it was found.

This method is also called when the job is canceled during its execution
(i.e. if, when the execution is done, the job's status is
``STATUSES.CANCELED``).

This method removes the ``queued`` flag of the job, logs the message
returned by the ``job_skipped_message`` method, then calls, if defined,
the ``on_skipped`` method of the job.

``job_skipped_message``
'''''''''''''''''''''''

Signature:

.. code:: python

    def job_skipped_message(self, job, queue):

Returns a string to be logged in ``job_skipped``.

``job_started``
'''''''''''''''

Signature:

.. code:: python

    def job_started(self, job, queue):

Returns nothing.

When the job is fetched and its status verified (it must be
``STATUSES.WAITING``), the ``job_started`` method is called, just before
the callback (or the ``execute`` method if no ``callback`` is defined),
with the job and the queue in which it was found.

This method updates the ``start`` and ``status`` fields of the job, then
logs the message returned by ``job_started_message`` and finally calls,
if defined, the ``on_started`` method of the job.

``job_started_message``
'''''''''''''''''''''''

Signature:

.. code:: python

    def job_started_message(self, job, queue):

Returns a string to be logged in ``job_started``.

``job_success``
'''''''''''''''

Signature:

.. code:: python

    def job_success(self, job, queue, job_result):

Returns nothing.

When the callback (or the ``execute`` method) is finished, without
having raised any exception, the job is considered successful, and the
``job_success`` method is called, with the job and the queue in which it
was found, and the return value of the callback method.

Note that this method is not called, and so the job is not considered a
"success", if, when the execution is done, the status of the job is
either ``STATUSES.CANCELED`` or ``STATUSES.DELAYED``. In these cases,
the methods ``job_skipped`` and ``job_delayed`` are called, respectively.

This method removes the ``queued`` flag of the job, updates its ``end``
and ``status`` fields, moves the job into the ``success`` list of the
queue, then logs the message returned by ``job_success_message`` and
finally calls, if defined, the ``on_success`` method of the job.

``job_success_message``
'''''''''''''''''''''''

Signature:

.. code:: python

    def job_success_message(self, job, queue, job_result):

Returns a string to be logged in ``job_success``.

``job_delayed``
'''''''''''''''

Signature:

.. code:: python

    def job_delayed(self, job, queue):

Returns nothing.

When the callback (or the ``execute`` method) is finished, without
having raised an exception, and the status of the job at this moment is
``STATUSES.DELAYED``, the job is not successful but not in error: it
will be delayed.

Another way to have this method called is if a job is in the
``waiting`` queue but its status was set to ``STATUSES.DELAYED``. In
this case, the job is not executed, but delayed by calling this method.

This method checks if the job has a ``delayed_until`` value; if not, or
if it's invalid, it is set to 60 seconds in the future. 
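
A minimal sketch of this fallback, assuming a hypothetical helper (not
the actual implementation):

.. code:: python

    from datetime import datetime, timedelta

    # An absent or invalid delayed_until is replaced by "60 seconds
    # from now" (illustrative only).

    def effective_delayed_until(delayed_until, now=None):
        now = now or datetime.utcnow()
        if not isinstance(delayed_until, datetime):
            return now + timedelta(seconds=60)
        return delayed_until

    now = datetime(2024, 1, 1, 12, 0, 0)
    fallback = effective_delayed_until(None, now)   # now + 60 seconds
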
You may
want to explicitly set this value, or at least clear the field, because
if the job was initially delayed, the value may be set but in the past,
and the job would be delayed to this date: not really delayed, but just
queued.

With this value, the ``enqueue_or_delay`` method of the queue is called,
to really delay the job.

Then it logs the message returned by ``job_delayed_message`` and finally
calls, if defined, the ``on_delayed`` method of the job.

``job_delayed_message``
'''''''''''''''''''''''

Signature:

.. code:: python

    def job_delayed_message(self, job, queue):

Returns a string to be logged in ``job_delayed``.

``job_error``
'''''''''''''

Signature:

.. code:: python

    def job_error(self, job, queue, exception, trace=None):

Returns nothing.

When the callback (or the ``execute`` method) is terminated by raising
an exception, the ``job_error`` method is called, with the job and the
queue in which it was found, the raised exception and, if
``save_tracebacks`` is ``True``, the traceback.

This method removes the ``queued`` flag of the job if it is not to be
requeued, updates its ``end`` and ``status`` fields, moves the job into
the ``error`` list of the queue, adds a new error object (if
``save_errors`` is ``True``), then logs the message returned by
``job_error_message`` and calls, if defined, the ``on_error`` method of
the job.

And finally, if the ``must_be_cancelled_on_error`` property of the job
is ``False``, and the ``requeue_times`` worker attribute allows it
(considering the ``tries`` attribute of the job, too), the
``requeue_job`` method is called.

``job_error_message``
'''''''''''''''''''''

Signature:

.. code:: python

    def job_error_message(self, job, queue, to_be_requeued, exception, trace=None):

Returns a string to be logged in ``job_error``.

``job_requeue_message``
'''''''''''''''''''''''

Signature:

.. code:: python

    def job_requeue_message(self, job, queue):

Returns a string to be logged in ``job_error`` when the job was
requeued.

``additional_error_fields``
'''''''''''''''''''''''''''

Signature:

.. code:: python

    def additional_error_fields(self, job, queue, exception, trace=None):

Returns a dictionary of fields to add to the error object, empty by
default.

This method is called by ``job_error`` to let you define a dictionary of
fields/values to add to the error object which will be created, if you
use a subclass of the ``Error`` model, defined in ``error_model``.

To pass these additional fields to the error object, you have to
override this method in your own subclass.

``requeue_job``
'''''''''''''''

Signature:

.. code:: python

    def requeue_job(self, job, queue, priority, delayed_for=None):

Returns nothing.

This method is called to requeue the job when its execution failed. It
will call the ``requeue`` method of the job, then its ``requeued`` one,
and finally will log the message returned by ``job_requeue_message``.

``id``
''''''

It's a property returning a string identifying the current worker, used
in logging to distinguish log entries for each worker.

``elapsed``
'''''''''''

It's a property returning, while running, the time elapsed since the
``run`` method started. When the ``run`` method ends, it's the time
between ``start_date`` and ``end_date``.

If the ``run`` method was not called, it is ``None``.

``log``
'''''''

Signature:

.. code:: python

    def log(self, message, level='info'):

Returns nothing.

``log`` is a simple wrapper around ``self.logger``, which automatically
adds the ``id`` of the worker at the beginning. It accepts a ``level``
argument which is ``info`` by default.

``set_status``
''''''''''''''

Signature:

.. code:: python

    def set_status(self, status):

Returns nothing.

``set_status`` simply updates the worker's ``status`` field.

``count_waiting_jobs``
''''''''''''''''''''''

Signature:

.. code:: python

    def count_waiting_jobs(self):

Returns the number of jobs in waiting state that can be run by this
worker.

``count_delayed_jobs``
''''''''''''''''''''''

Signature:

.. code:: python

    def count_delayed_jobs(self):

Returns the number of jobs in the delayed queues managed by this worker.

The worker.py script
~~~~~~~~~~~~~~~~~~~~

To help using ``limpyd_jobs``, an executable python script is provided:
``scripts/worker.py`` (usable as ``limpyd-jobs-worker``, in your path,
when installed from the package).

This script is highly configurable to help you launch workers without
having to write a script or customize the one included.

With this script you don't have to write a custom worker either, because
all arguments expected by a worker can be passed as arguments to the
script.

The script is based on a ``WorkerConfig`` class defined in
``limpyd_jobs.workers``, that you can customize by subclassing it, and
you can tell the script to use your class instead of the default one.

You can even pass one or many python paths to add to ``sys.path``.

This script is designed to make things as easy as possible for you.

Instead of explaining all arguments, see below the result of the
``--help`` command for this script:

::

    $ limpyd-jobs-worker --help
    Usage: worker.py [options]

    Run a worker using redis-limpyd-jobs

    Options:
      --pythonpath=PYTHONPATH
                            A directory to add to the Python path, e.g.
                            --pythonpath=/my/module
      --worker-config=WORKER_CONFIG
                            The worker config class to use, e.g. --worker-
                            config=my.module.MyWorkerConfig, default to
                            limpyd_jobs.workers.WorkerConfig
      --print-options       Print options used by the worker, e.g.
                            --print-options
      --dry-run             Won't execute any job, just starts the worker and
                            finish it immediatly, e.g. --dry-run
      --queues=QUEUES       Name of the Queues to handle, comma separated e.g.
                            --queues=queue1,queue2
      --queue-model=QUEUE_MODEL
                            Name of the Queue model to use, e.g. --queue-
                            model=my.module.QueueModel
      --error-model=ERROR_MODEL
                            Name of the Error model to use, e.g. --queue-
                            model=my.module.ErrorModel
      --worker-class=WORKER_CLASS
                            Name of the Worker class to use, e.g. --worker-
                            class=my.module.WorkerClass
      --callback=CALLBACK   The callback to call for each job, e.g. --worker-
                            class=my.module.callback
      --logger-name=LOGGER_NAME
                            The base name to use for logging, e.g. --logger-base-
                            name="limpyd-jobs.%s"
      --logger-level=LOGGER_LEVEL
                            The level to use for logging, e.g. --worker-class=ERROR
      --save-errors         Save job errors in the Error model, e.g. --save-errors
      --no-save-errors      Do not save job errors in the Error model, e.g. --no-
                            save-errors
      --save-tracebacks     Save exception tracebacks on job error in the Error
                            model, e.g. --save-tracebacks
      --no-save-tracebacks  Do not save exception tracebacks on job error in the
                            Error model, e.g. --no-save-tracebacks
      --max-loops=MAX_LOOPS
                            Max number of jobs to run, e.g. --max-loops=100
      --max-duration=MAX_DURATION
                            Max duration of the worker, in seconds (None by
                            default), e.g. --max-duration=3600
      --terminate-gracefuly
                            Intercept SIGTERM and SIGINT signals to stop
                            gracefuly, e.g. --terminate-gracefuly
      --no-terminate-gracefuly
                            Do NOT intercept SIGTERM and SIGINT signals, so don't
                            stop gracefuly, e.g. --no-terminate-gracefuly
      --timeout=TIMEOUT     Max delay (seconds) to wait for a redis BLPOP call (0
                            for no timeout), e.g. --timeout=30
      --fetch-priorities-delay=FETCH_PRIORITIES_DELAY
                            Min delay (seconds) to wait before fetching new
                            priority queues, e.g. --fetch-priorities-delay=20
      --fetch-delayed-delay=FETCH_DELAYED_DELAY
                            Min delay (seconds) to wait before updating delayed
                            jobs, e.g. --fetch-delayed-delay=20
      --requeue-times=REQUEUE_TIMES
                            Number of time to requeue a failing job (default to
                            0), e.g. --requeue-times=5
      --requeue-priority-delta=REQUEUE_PRIORITY_DELTA
                            Delta to add to the actual priority of a failing job
                            to be requeued (default to -1, ie one level lower),
                            e.g. --requeue-priority-delta=-2
      --requeue-delay-delta=REQUEUE_DELAY_DELTA
                            How much time (seconds) to delay a job to be requeued
                            (default to 30), e.g. --requeue-delay-delta=15
      --database=DATABASE   Redis database to use (host:port:db), e.g.
                            --database=localhost:6379:15
      --no-title            Do not update the title of the worker's process, e.g.
                            --no-title
      --version             show program's version number and exit
      -h, --help            show this help message and exit

Except for ``--pythonpath``, ``--worker-config``, ``--print-options``,
``--dry-run``, ``--worker-class`` and ``--no-title``, all options will
be passed to the worker.

So, if you use the default models and the default worker with its
default options, and want to launch a worker working on the queue
"queue-name", all you need to do is:

.. code:: bash

    limpyd-jobs-worker --queues=queue-name --callback=python.path.to.callback

We use the ``setproctitle`` module to display useful information in the
process name, to have stuff like this:

::

    limpyd-jobs-worker#1566090 [init] queues=foo,bar
    limpyd-jobs-worker#1566090 [starting] queues=foo,bar loop=0/1000 waiting=10 delayed=0
    limpyd-jobs-worker#1566090 [running] queues=foo,bar loop=1/1000 waiting=9 delayed=2 duration=0:00:15
    limpyd-jobs-worker#1566090 [terminated] queues=foo,bar loop=10/1000 waiting=0 delayed=0 duration=0:12:27

You can disable it by passing the ``--no-title`` argument.

Note that if no logging handler is set for the ``logger-name``, a
``StreamHandler`` with a formatter will be automatically added by the
script, giving logs like:

::

    [19122] 2013-10-02 00:51:24,158 (limpyd-jobs) WARNING [038480] [test|job:1] job skipped (current status: SUCCESS)

(the format used is
``"[%(process)d] %(asctime)s (%(name)s) %(levelname)-8s %(message)s"``)

Executing code before loading worker class
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~

Sometimes you may want to do some initialization work before even
loading the Worker class, for example, when using django, to call
``django.setup()``.

For this, simply override the ``WorkerConfig`` class:

.. code:: python

    import django

    from limpyd_jobs.workers import WorkerConfig


    class MyWorkerConfig(WorkerConfig):
        def __init__(self, argv=None):
            django.setup()
            super(MyWorkerConfig, self).__init__(argv)

And pass the python path to this class using the ``--worker-config``
option of the ``limpyd-jobs-worker`` script.

Tests
-----

The ``redis-limpyd-jobs`` package is fully tested (coverage: 100%).

To run the tests, which are not installed via the ``setup.py`` file, you
can do:

::

    $ python run_tests.py
    [...]
    Ran 136 tests in 19.353s

    OK

Or if you have ``nosetests`` installed:

::

    $ nosetests
    [...]
    Ran 136 tests in 20.471s

    OK

The ``nosetests`` configuration is provided in the ``setup.cfg`` file
and includes coverage, if ``nose-cov`` is installed.

Final words
-----------

- you can see a full example in ``example.py`` (in the source, not in
  the installed package)
- to use ``limpyd_jobs`` models on your own redis database instead of
  the default one (``localhost:6379:db=0``), simply use the
  ``use_database`` method of the main model:

  .. code:: python

      from limpyd.contrib.database import PipelineDatabase
      from limpyd_jobs.models import BaseJobsModel

      database = PipelineDatabase(host='localhost', port=6379, db=15)
      BaseJobsModel.use_database(database)

  or simply change the connection settings:

  .. code:: python

      from limpyd_jobs.models import BaseJobsModel

      BaseJobsModel.database.connect(host='localhost', port=6379, db=15)

The end.
--------

.. |PyPI Version| image:: https://img.shields.io/pypi/v/redis-limpyd-jobs.png
   :target: https://pypi.python.org/pypi/redis-limpyd-jobs
.. 
|Build Status| image:: https://travis-ci.org/limpyd/redis-limpyd-jobs.png?branch=master
   :target: https://travis-ci.org/limpyd/redis-limpyd-jobs
"python_version": "source", "requires_python": null, "size": 39378, "upload_time": "2013-09-06T11:42:19", "url": "https://files.pythonhosted.org/packages/00/ad/279b2865a3a05059f43ec7497949b28b5a4f0b77791fc3c16ce2f19679c5/redis-limpyd-jobs-0.0.3.tar.gz" } ], "0.0.4": [ { "comment_text": "", "digests": { "md5": "ddd6818150e48ffd863298a4b4575766", "sha256": "9ca8157c28cd1a57b8072e671c62d6716d8da79e8b21c60408020f9b1d7512f5" }, "downloads": -1, "filename": "redis-limpyd-jobs-0.0.4.tar.gz", "has_sig": false, "md5_digest": "ddd6818150e48ffd863298a4b4575766", "packagetype": "sdist", "python_version": "source", "requires_python": null, "size": 43697, "upload_time": "2013-09-06T16:55:20", "url": "https://files.pythonhosted.org/packages/88/85/25110ae5fcc2aea1de59f4051f6e48dd5ed3b2543d74a97c2bed3cf1eed2/redis-limpyd-jobs-0.0.4.tar.gz" } ], "0.0.5": [ { "comment_text": "", "digests": { "md5": "5cacb464673de4fcebc0998dde797e02", "sha256": "f31974cab68ddca990828996746515d07fb87452324c85a4783d26345c139b29" }, "downloads": -1, "filename": "redis-limpyd-jobs-0.0.5.tar.gz", "has_sig": false, "md5_digest": "5cacb464673de4fcebc0998dde797e02", "packagetype": "sdist", "python_version": "source", "requires_python": null, "size": 46320, "upload_time": "2013-09-08T10:04:12", "url": "https://files.pythonhosted.org/packages/9e/b1/72e9a91ade367cee20b1aa9e6347952817e35dfc9b22dc1ff5e24114117b/redis-limpyd-jobs-0.0.5.tar.gz" } ], "0.0.6": [ { "comment_text": "", "digests": { "md5": "f9ee36be6d9adc6784be7c9d9eb0a5bb", "sha256": "14454e8e16029a95818442727d07e5d020d252ecc47665b9a6f740e97fa70025" }, "downloads": -1, "filename": "redis-limpyd-jobs-0.0.6.tar.gz", "has_sig": false, "md5_digest": "f9ee36be6d9adc6784be7c9d9eb0a5bb", "packagetype": "sdist", "python_version": "source", "requires_python": null, "size": 52466, "upload_time": "2013-09-15T19:42:35", "url": "https://files.pythonhosted.org/packages/39/27/ca9dc14e4c0ba563f5df882b9586dfcfcafc4a600af65a384e2a49f35fc9/redis-limpyd-jobs-0.0.6.tar.gz" 
} ], "0.0.7": [ { "comment_text": "", "digests": { "md5": "3a919388f8ebfbf426e7b997585f81c9", "sha256": "05feeb08ce481b886baef9da7f825fd39e3380a0070fbeec43cd82a8ef6c0d73" }, "downloads": -1, "filename": "redis-limpyd-jobs-0.0.7.tar.gz", "has_sig": false, "md5_digest": "3a919388f8ebfbf426e7b997585f81c9", "packagetype": "sdist", "python_version": "source", "requires_python": null, "size": 53030, "upload_time": "2013-09-15T22:05:05", "url": "https://files.pythonhosted.org/packages/0e/5d/0f170d9818c6461ccfb9277fca56ed97061052d8f569330e043feec9c6cb/redis-limpyd-jobs-0.0.7.tar.gz" } ], "0.0.8": [ { "comment_text": "", "digests": { "md5": "17eb76c4639a5c3cdae3adb45b3e4135", "sha256": "f95baf9465f038caee6a88b476ab68f579c1a474b5df8f43277071ab2e4963cb" }, "downloads": -1, "filename": "redis-limpyd-jobs-0.0.8.tar.gz", "has_sig": false, "md5_digest": "17eb76c4639a5c3cdae3adb45b3e4135", "packagetype": "sdist", "python_version": "source", "requires_python": null, "size": 58238, "upload_time": "2013-09-30T20:10:34", "url": "https://files.pythonhosted.org/packages/a0/f9/4c49a2769cca49b797761e7bea90781adda5281c559eb198cbafc9f5a354/redis-limpyd-jobs-0.0.8.tar.gz" } ], "0.0.9": [ { "comment_text": "", "digests": { "md5": "7984856a659c827a11f45f3b71f227c8", "sha256": "d85c35df61366314ffd0aaaee184e9d78380c2ffee6fc6ff286bb81d43caff2a" }, "downloads": -1, "filename": "redis-limpyd-jobs-0.0.9.tar.gz", "has_sig": false, "md5_digest": "7984856a659c827a11f45f3b71f227c8", "packagetype": "sdist", "python_version": "source", "requires_python": null, "size": 59004, "upload_time": "2013-10-02T07:21:53", "url": "https://files.pythonhosted.org/packages/39/94/3667ee68ae7558b614dfc456166a84fd7dd89b9942c4d53dcd0d0532428a/redis-limpyd-jobs-0.0.9.tar.gz" } ], "0.1.0": [ { "comment_text": "", "digests": { "md5": "f054cfe6a251230df35e1cb4bfda7d16", "sha256": "b042ac6f08369805bed7ed47f94a2fa3d56cc01d192003fe180acf411d395036" }, "downloads": -1, "filename": "redis-limpyd-jobs-0.1.0.tar.gz", "has_sig": 
false, "md5_digest": "f054cfe6a251230df35e1cb4bfda7d16", "packagetype": "sdist", "python_version": "source", "requires_python": null, "size": 66529, "upload_time": "2014-09-07T21:16:06", "url": "https://files.pythonhosted.org/packages/3e/3f/fa86a920028908517f9317eae5ef18ad87c448dad2bbf954d67ce51719b8/redis-limpyd-jobs-0.1.0.tar.gz" } ], "0.1.1": [ { "comment_text": "", "digests": { "md5": "94d41e01a2c6412fdc90b7f0112cb636", "sha256": "d4a33ef054545adf1f280255007136a9c3de92e2ff560ae7f0e65490ac02e169" }, "downloads": -1, "filename": "redis-limpyd-jobs-0.1.1.tar.gz", "has_sig": false, "md5_digest": "94d41e01a2c6412fdc90b7f0112cb636", "packagetype": "sdist", "python_version": "source", "requires_python": null, "size": 66289, "upload_time": "2015-01-13T14:37:30", "url": "https://files.pythonhosted.org/packages/97/19/ef6580a1e12ec185303007c30c544ee6fc0844a6f9c8b66c583217602ae1/redis-limpyd-jobs-0.1.1.tar.gz" } ], "0.1.2": [ { "comment_text": "", "digests": { "md5": "10ccdabd26e259717692b1154ccb6319", "sha256": "5b8c9de926e22bb690ea6939cd7bd78498820a57ede3039133bbcbb78cf7f631" }, "downloads": -1, "filename": "redis-limpyd-jobs-0.1.2.tar.gz", "has_sig": false, "md5_digest": "10ccdabd26e259717692b1154ccb6319", "packagetype": "sdist", "python_version": "source", "requires_python": null, "size": 66409, "upload_time": "2015-06-12T16:33:09", "url": "https://files.pythonhosted.org/packages/78/1b/10635cdd6861b1d7bdebe854d7cc9ebed92d90dfe4282601d087f1547ce8/redis-limpyd-jobs-0.1.2.tar.gz" } ], "0.1.3": [ { "comment_text": "", "digests": { "md5": "8498e0c43b30812f17008695d78785d3", "sha256": "c6b9d63b41bf5bf88c447a75ac51f842756109a83a7f41e83bfc5742dbff7b82" }, "downloads": -1, "filename": "redis_limpyd_jobs-0.1.3-py3-none-any.whl", "has_sig": false, "md5_digest": "8498e0c43b30812f17008695d78785d3", "packagetype": "bdist_wheel", "python_version": "3.4", "requires_python": null, "size": 53647, "upload_time": "2015-12-16T16:47:08", "url": 
"https://files.pythonhosted.org/packages/f9/64/1d755e2cc4ff7439ee9ab3f69ba8bc23b147c8a404b1bf335432d784cb22/redis_limpyd_jobs-0.1.3-py3-none-any.whl" }, { "comment_text": "", "digests": { "md5": "63737a101901253cac572f38520bee8d", "sha256": "4e3a6b15cbf64217061ca7d4f15d3619e16ddb98b1ad09972bf8f0a67aa36497" }, "downloads": -1, "filename": "redis-limpyd-jobs-0.1.3.tar.gz", "has_sig": false, "md5_digest": "63737a101901253cac572f38520bee8d", "packagetype": "sdist", "python_version": "source", "requires_python": null, "size": 66260, "upload_time": "2015-12-16T16:46:57", "url": "https://files.pythonhosted.org/packages/0a/f2/63051aee5f3e4ce2277f365b3c866e670fe56a34009b0efac2b64393edd5/redis-limpyd-jobs-0.1.3.tar.gz" } ], "0.1.4": [ { "comment_text": "", "digests": { "md5": "d35e2369911df05731ce9ce653742c86", "sha256": "6159ef21873326c3915c081f5be9a403537725f123a9d8947e201e536089417b" }, "downloads": -1, "filename": "redis_limpyd_jobs-0.1.4-py2-none-any.whl", "has_sig": false, "md5_digest": "d35e2369911df05731ce9ce653742c86", "packagetype": "bdist_wheel", "python_version": "2.7", "requires_python": null, "size": 53661, "upload_time": "2016-02-05T15:20:40", "url": "https://files.pythonhosted.org/packages/58/bd/5fbad83800baba20834d340b6ba28f5e82b671c8b40f271c130350b4b3ec/redis_limpyd_jobs-0.1.4-py2-none-any.whl" }, { "comment_text": "", "digests": { "md5": "b8a5563d51c584a7de4ae685093d84ea", "sha256": "5ca8097b12085627da3056960fb80096f8fc5d2b57ef5463fe8c31c394318cb1" }, "downloads": -1, "filename": "redis-limpyd-jobs-0.1.4.tar.gz", "has_sig": false, "md5_digest": "b8a5563d51c584a7de4ae685093d84ea", "packagetype": "sdist", "python_version": "source", "requires_python": null, "size": 66285, "upload_time": "2016-02-05T15:20:09", "url": "https://files.pythonhosted.org/packages/a6/dd/2a97d05050c75153c2e3633f9873c35edccafadf3bfcf0e3744d296e5503/redis-limpyd-jobs-0.1.4.tar.gz" } ], "0.1.5": [ { "comment_text": "", "digests": { "md5": "9b7c14f698b7651a8f476b8ca96e293f", "sha256": 
"02caa0966788c28c0887081b6b821e08af7f4dd9e7a710df79935bed1353a192" }, "downloads": -1, "filename": "redis_limpyd_jobs-0.1.5-py2-none-any.whl", "has_sig": false, "md5_digest": "9b7c14f698b7651a8f476b8ca96e293f", "packagetype": "bdist_wheel", "python_version": "2.7", "requires_python": null, "size": 53875, "upload_time": "2016-12-25T15:35:44", "url": "https://files.pythonhosted.org/packages/04/12/cbc1f1d7c6ee592b5490fa589f050f5389f24988428abe334332176585a5/redis_limpyd_jobs-0.1.5-py2-none-any.whl" }, { "comment_text": "", "digests": { "md5": "73cf9f7d36df231bcd52bbaa890d6a73", "sha256": "485b32464767e49ecb53e5a1d60a59a314d17c26eebdda7380470b68bb7bb6d5" }, "downloads": -1, "filename": "redis-limpyd-jobs-0.1.5.tar.gz", "has_sig": false, "md5_digest": "73cf9f7d36df231bcd52bbaa890d6a73", "packagetype": "sdist", "python_version": "source", "requires_python": null, "size": 66601, "upload_time": "2016-12-25T15:35:42", "url": "https://files.pythonhosted.org/packages/c1/46/a665bafd86594fcc96062537d1364769692dd453ccca855688653b1bfcf4/redis-limpyd-jobs-0.1.5.tar.gz" } ], "1.0": [ { "comment_text": "", "digests": { "md5": "209239a1b413bb69c747d1ceabd11854", "sha256": "37fabe3a3452f7b70abfe223be85e98315faf19dc7fedc8c97b090f48e2fd4ec" }, "downloads": -1, "filename": "redis_limpyd_jobs-1.0-py2.py3-none-any.whl", "has_sig": false, "md5_digest": "209239a1b413bb69c747d1ceabd11854", "packagetype": "bdist_wheel", "python_version": "py2.py3", "requires_python": null, "size": 54776, "upload_time": "2018-01-31T21:04:50", "url": "https://files.pythonhosted.org/packages/d9/7a/b6046e3757f621d7744026fbd99b6f4b4aaabee2bfb6e9f694d7ad36639e/redis_limpyd_jobs-1.0-py2.py3-none-any.whl" }, { "comment_text": "", "digests": { "md5": "74c4ab047a4ff380164c016e2aca3cb7", "sha256": "f53c9d5aca8a633c4caf79da548a3641d5f2a0eb0c0d0905b145d7120c9e5b4e" }, "downloads": -1, "filename": "redis-limpyd-jobs-1.0.tar.gz", "has_sig": false, "md5_digest": "74c4ab047a4ff380164c016e2aca3cb7", "packagetype": "sdist", 
"python_version": "source", "requires_python": null, "size": 67057, "upload_time": "2018-01-31T21:04:53", "url": "https://files.pythonhosted.org/packages/ee/99/a0fd89887f3815bc8b9965f5d0177d7fe6b281ff0bd434c7f6cd23d7d6c6/redis-limpyd-jobs-1.0.tar.gz" } ], "1.1": [ { "comment_text": "", "digests": { "md5": "99e23404880f938d2f1c91c2f08f5254", "sha256": "8c3a24cc5f4555aa93782888e28d8d6b24a9e94e9696e9571d938bbd4f07aa52" }, "downloads": -1, "filename": "redis_limpyd_jobs-1.1-py2.py3-none-any.whl", "has_sig": false, "md5_digest": "99e23404880f938d2f1c91c2f08f5254", "packagetype": "bdist_wheel", "python_version": "py2.py3", "requires_python": ">=2.7, !=3.0.*, !=3.1.*, !=3.2.*, !=3.3.*, !=3.4.*", "size": 37363, "upload_time": "2019-10-13T06:27:09", "url": "https://files.pythonhosted.org/packages/c6/b2/940b999c1188aef27fd08fe318762124755ffed0ac23ed84cbd0efc3442b/redis_limpyd_jobs-1.1-py2.py3-none-any.whl" }, { "comment_text": "", "digests": { "md5": "33df06e957f3c32dae62071ef35bf996", "sha256": "4654096b4b100ae97e9a6b51ac757f68fe5e75d503f4dfde2397f2188d8c4b33" }, "downloads": -1, "filename": "redis-limpyd-jobs-1.1.tar.gz", "has_sig": false, "md5_digest": "33df06e957f3c32dae62071ef35bf996", "packagetype": "sdist", "python_version": "source", "requires_python": ">=2.7, !=3.0.*, !=3.1.*, !=3.2.*, !=3.3.*, !=3.4.*", "size": 68325, "upload_time": "2019-10-13T06:27:12", "url": "https://files.pythonhosted.org/packages/c8/ce/3813decc02f170c0d1af32f0bb94bcb6452d715dfc7bd657f37b378ebe53/redis-limpyd-jobs-1.1.tar.gz" } ], "2": [ { "comment_text": "", "digests": { "md5": "00d3c3d23776d1e83ab73716a9dc3fb5", "sha256": "3fe0791e8a2b5a2caecae88a39304b71cc66731094a0f902948a71092a9564c3" }, "downloads": -1, "filename": "redis_limpyd_jobs-2-py2.py3-none-any.whl", "has_sig": false, "md5_digest": "00d3c3d23776d1e83ab73716a9dc3fb5", "packagetype": "bdist_wheel", "python_version": "py2.py3", "requires_python": ">=2.7, !=3.0.*, !=3.1.*, !=3.2.*, !=3.3.*, !=3.4.*", "size": 37357, "upload_time": 
"2019-10-13T07:00:05", "url": "https://files.pythonhosted.org/packages/6a/36/dd32f1539415a232879c9bc3b7f25bfb01dd59b7c0fce0ae8a971685845a/redis_limpyd_jobs-2-py2.py3-none-any.whl" }, { "comment_text": "", "digests": { "md5": "823d5ceac160ea3481fec5b38ea50c1e", "sha256": "6f94de2ecacbcf2ace1b21fcf1a5a017732238c0ff62a64ca94e4e5e6275043b" }, "downloads": -1, "filename": "redis-limpyd-jobs-2.tar.gz", "has_sig": false, "md5_digest": "823d5ceac160ea3481fec5b38ea50c1e", "packagetype": "sdist", "python_version": "source", "requires_python": ">=2.7, !=3.0.*, !=3.1.*, !=3.2.*, !=3.3.*, !=3.4.*", "size": 68358, "upload_time": "2019-10-13T07:00:08", "url": "https://files.pythonhosted.org/packages/27/04/fd8b62713b9ce1cf19026902eb614716ee111bd64d7ca46bde5b7c7649e5/redis-limpyd-jobs-2.tar.gz" } ] }, "urls": [ { "comment_text": "", "digests": { "md5": "00d3c3d23776d1e83ab73716a9dc3fb5", "sha256": "3fe0791e8a2b5a2caecae88a39304b71cc66731094a0f902948a71092a9564c3" }, "downloads": -1, "filename": "redis_limpyd_jobs-2-py2.py3-none-any.whl", "has_sig": false, "md5_digest": "00d3c3d23776d1e83ab73716a9dc3fb5", "packagetype": "bdist_wheel", "python_version": "py2.py3", "requires_python": ">=2.7, !=3.0.*, !=3.1.*, !=3.2.*, !=3.3.*, !=3.4.*", "size": 37357, "upload_time": "2019-10-13T07:00:05", "url": "https://files.pythonhosted.org/packages/6a/36/dd32f1539415a232879c9bc3b7f25bfb01dd59b7c0fce0ae8a971685845a/redis_limpyd_jobs-2-py2.py3-none-any.whl" }, { "comment_text": "", "digests": { "md5": "823d5ceac160ea3481fec5b38ea50c1e", "sha256": "6f94de2ecacbcf2ace1b21fcf1a5a017732238c0ff62a64ca94e4e5e6275043b" }, "downloads": -1, "filename": "redis-limpyd-jobs-2.tar.gz", "has_sig": false, "md5_digest": "823d5ceac160ea3481fec5b38ea50c1e", "packagetype": "sdist", "python_version": "source", "requires_python": ">=2.7, !=3.0.*, !=3.1.*, !=3.2.*, !=3.3.*, !=3.4.*", "size": 68358, "upload_time": "2019-10-13T07:00:08", "url": 
"https://files.pythonhosted.org/packages/27/04/fd8b62713b9ce1cf19026902eb614716ee111bd64d7ca46bde5b7c7649e5/redis-limpyd-jobs-2.tar.gz" } ] }