{ "info": { "author": "Alex Landau", "author_email": "alex@rover.com", "bugtrack_url": null, "classifiers": [ "Development Status :: 4 - Beta", "Intended Audience :: Developers", "License :: OSI Approved :: BSD License", "Operating System :: Microsoft :: Windows", "Operating System :: POSIX", "Operating System :: Unix", "Programming Language :: Python", "Programming Language :: Python :: 2.7", "Programming Language :: Python :: 3", "Programming Language :: Python :: 3.5", "Programming Language :: Python :: 3.6", "Programming Language :: Python :: 3.7", "Topic :: Utilities" ], "description": "========\nOverview\n========\n\n``dogstatsd-collector`` is a library that makes it easy to collect DataDog-style\nStatsD counters and histograms with tags and to control when they are flushed.\nIt gives you a drop-in wrapper around the ``DogStatsD`` library for counters and\nhistograms and lets you defer flushing the metrics until you choose to. This\ncapability enables you to collect StatsD metrics at arbitrary granularity, for\nexample on a per-web-request or per-job basis (instead of the usual\nper-flush-interval basis).\n\nCounters and histograms are tracked separately for each metric series (a unique\nset of tag key-value pairs), and a single metric is emitted for each series when\nthe collector is flushed. You don't have to track your metric series yourself;\njust use the ``DogstatsdCollector`` object as you would the normal ``DogStatsD``\nobject and flush when you're ready; the library takes care of emitting every\nseries for you.\n\n* Free software: BSD 3-Clause License\n\nInstallation\n============\n\n::\n\n pip install dogstatsd-collector\n\nExample Usage\n=============\n\nImagine you want to track a distribution of the number of queries issued by\nrequests to your webapp, and tag them by which database is queried and which\nverb is used. You collect the following metrics as you issue your queries:\n\n.. 
code-block:: python\n\n collector = DogstatsdCollector(dogstatsd)\n ...\n collector.histogram('query', tags=['database:master','verb:insert'])\n collector.histogram('query', tags=['database:master','verb:update'])\n collector.histogram('query', tags=['database:master','verb:update'])\n collector.histogram('query', tags=['database:replica','verb:select'])\n collector.histogram('query', tags=['database:replica','verb:select'])\n\nThen, at the end of your web request, when you flush the collector, the\nfollowing metrics are pushed to ``DogStatsD`` (shown in DogStatsD datagram\nformat):\n\n.. code-block:: python\n\n collector.flush()\n # query:1|h|#database:master,verb:insert\n # query:2|h|#database:master,verb:update\n # query:2|h|#database:replica,verb:select\n\nBase Tags\n---------\n\nThe collector object also supports specifying a set of base tags, which are\nincluded on every metric that gets emitted.\n\n.. code-block:: python\n\n base_tags = ['mytag:myvalue']\n collector = DogstatsdCollector(dogstatsd, base_tags=base_tags)\n collector.histogram('query', tags=['database:master','verb:insert'])\n collector.histogram('query', tags=['database:master','verb:update'])\n collector.flush()\n # query:1|h|#database:master,verb:insert,mytag:myvalue\n # query:1|h|#database:master,verb:update,mytag:myvalue\n\nMotivation\n==========\n\nThe StatsD model is to run an agent on each server/container in your\ninfrastructure and flush aggregations to a centralized location at a regular\ninterval. This model scales very well because the volume of metrics sent to\nthe centralized location grows very slowly even as you scale your application;\neach StatsD agent sends aggregated values to the backend instead of every\ndatapoint, so the storage volume stays low even for a large, high-traffic\napplication.\n\nA drawback to this model is that you don't have much control over the\ngranularity that your metrics represent. 
When your aggregations reach the centralized\nlocation (DataDog in this case), you only know the counts or distributions\nwithin the flush interval. You can't represent any other *execution\ngranularity* beyond \"across X seconds\" (where X is the flush interval). This\nlimitation precludes you from easily representing metrics on a \"per-request\"\nbasis, for example.\n\nThe purpose of this library is to make it simple to control when your StatsD\nmetrics are emitted so that you can defer emission of the metrics until a point\nyou determine. This allows you to represent a finer granularity than \"across X\nseconds,\" such as \"across a web request\" or \"across a cron job.\" It also\npreserves metric tags by emitting each series independently when the collector\nis flushed, which ensures you don't lose any of the benefits of tagging\nyour metrics (such as aggregating/slicing in DataDog).\n\nPatterns\n========\n\nThe ``DogstatsdCollector`` object provides an interface similar to the\n``DogStatsD`` ``increment`` and ``histogram`` methods. As you invoke these\nmethods, you collect counters and histograms for each series (determined by\nany tags you include). After calling ``flush()``, each series is emitted\nseparately as a StatsD metric.\n\nSimple Request Metrics\n----------------------\n\nYou can collect various metrics over a request and emit them at the end of the\nrequest to get per-request granularity.\n\nIn Django:\n\n.. 
code-block:: python\n\n from datadog.dogstatsd.base import DogStatsd\n from django.http import HttpResponse\n \n from dogstatsd_collector import DogstatsdCollector\n \n # Middleware\n class MetricsMiddleware:\n     def __init__(self, get_response):\n         self.get_response = get_response\n         self.dogstatsd = DogStatsd()\n \n     def __call__(self, request):\n         request.metrics = DogstatsdCollector(self.dogstatsd)\n         response = self.get_response(request)\n         request.metrics.flush()\n \n         return response\n \n # Inside a view\n def my_view(request):\n     # Do some stuff...\n     request.metrics.increment('my.count')\n     request.metrics.histogram('my.time', 0.5)\n     return HttpResponse('ok')\n\nIn Flask (note that an ``after_request`` handler receives the response and\nmust return it):\n\n.. code-block:: python\n\n from datadog.dogstatsd.base import DogStatsd\n from dogstatsd_collector import DogstatsdCollector\n \n from flask import Flask\n from flask import request\n \n app = Flask(__name__)\n dogstatsd = DogStatsd()\n \n @app.before_request\n def init_metrics():\n     request.metrics = DogstatsdCollector(dogstatsd)\n \n @app.after_request\n def flush_metrics(response):\n     request.metrics.flush()\n     return response\n \n @app.route('/')\n def my_view():\n     # Do some stuff...\n     request.metrics.increment('my.count')\n     request.metrics.histogram('my.time', 0.5)\n     return 'ok'\n\n\nCelery Task Metrics\n-------------------\n\nSame as above, but over a Celery task.\n\n.. 
code-block:: python\n\n from datadog.dogstatsd.base import DogStatsd\n from dogstatsd_collector import DogstatsdCollector\n \n from celery import Celery\n from celery import current_task\n from celery.signals import task_prerun\n from celery.signals import task_postrun\n \n app = Celery('tasks', broker='pyamqp://guest@localhost//')\n \n dogstatsd = DogStatsd()\n \n @task_prerun.connect\n def init_metrics(task_id, task, *args, **kwargs):\n     task.request.metrics = DogstatsdCollector(dogstatsd)\n \n @task_postrun.connect\n def flush_metrics(task_id, task, *args, **kwargs):\n     task.request.metrics.flush()\n \n @app.task\n def my_task():\n     # Do some stuff...\n     current_task.request.metrics.increment('my.count')\n     current_task.request.metrics.histogram('my.time', 0.5)\n\nMetrics Within a Function\n-------------------------\n\nEmit a set of metrics for a particular function you execute.\n\n.. code-block:: python\n\n from datadog.dogstatsd.base import DogStatsd\n from dogstatsd_collector import DogstatsdCollector\n \n dogstatsd = DogStatsd()\n \n def do_stuff(metrics):\n     # Do some stuff...\n     metrics.increment('my.count')\n     metrics.histogram('my.time', 0.5)\n \n metrics = DogstatsdCollector(dogstatsd)\n do_stuff(metrics)\n metrics.flush()\n\nThread Safety\n=============\n\nA ``DogstatsdCollector`` instance is **not threadsafe**. Do not share a\nsingle ``DogstatsdCollector`` object among multiple threads.\n\nMore Documentation\n==================\n\nFull documentation can be found on ReadTheDocs:\n\nhttps://dogstatsd-collector.readthedocs.io/\n\nDevelopment\n===========\n\nTo run all the tests, run::\n\n tox\n\nChangelog\n=========\n\n0.0.2 (2019-08-14)\n------------------\n\n* Add optional ``base_tags`` kwarg to support tags added to all metrics that\n  get flushed.\n\n0.0.1 (2019-05-02)\n------------------\n\n* First release on PyPI.", "description_content_type": "", "docs_url": null, "download_url": "", "downloads": { "last_day": -1, "last_month": -1, "last_week": -1 }, 
"home_page": "https://github.com/roverdotcom/dogstatsd-collector", "keywords": "", "license": "BSD 3-Clause License", "maintainer": "", "maintainer_email": "", "name": "dogstatsd-collector", "package_url": "https://pypi.org/project/dogstatsd-collector/", "platform": "", "project_url": "https://pypi.org/project/dogstatsd-collector/", "project_urls": { "Homepage": "https://github.com/roverdotcom/dogstatsd-collector" }, "release_url": "https://pypi.org/project/dogstatsd-collector/0.1.0/", "requires_dist": null, "requires_python": "", "summary": "A library to enable collection and delayed emission of StatsD metrics using the DataDog protocol.", "version": "0.1.0" }, "last_serial": 5722129, "releases": { "0.0.1": [ { "comment_text": "", "digests": { "md5": "cfd299018f359bf5b72be118e3f4ab2a", "sha256": "131a0daa4ee8dadf68d8107a9bcf5240e2bf4d21730be80703e2ec6a7f5a488e" }, "downloads": -1, "filename": "dogstatsd_collector-0.0.1-py2.py3-none-any.whl", "has_sig": false, "md5_digest": "cfd299018f359bf5b72be118e3f4ab2a", "packagetype": "bdist_wheel", "python_version": "py2.py3", "requires_python": null, "size": 5689, "upload_time": "2019-05-03T16:07:24", "url": "https://files.pythonhosted.org/packages/f7/11/f4eeb58f4b98193e1bd8f56f9ce2b7f707653ac19e38e5546fce857d3dff/dogstatsd_collector-0.0.1-py2.py3-none-any.whl" }, { "comment_text": "", "digests": { "md5": "963dc09f5b0f758aa8391f834b71fc04", "sha256": "6a7954b946312bfe90f7041320c625a45de45135acebb911eeb78f36341dcb0e" }, "downloads": -1, "filename": "dogstatsd-collector-0.0.1.tar.gz", "has_sig": false, "md5_digest": "963dc09f5b0f758aa8391f834b71fc04", "packagetype": "sdist", "python_version": "source", "requires_python": null, "size": 15090, "upload_time": "2019-05-03T16:07:26", "url": "https://files.pythonhosted.org/packages/90/a2/db0dd9ff704d2f23770048e72cfdf264453d37515fa3d5c6e79a5c0a7f3b/dogstatsd-collector-0.0.1.tar.gz" } ], "0.1.0": [ { "comment_text": "", "digests": { "md5": "aa4c7a9cdc8b11ab753b43fe2af44f9e", 
"sha256": "e6c755df3bceed92a77dd32c2e5801bd47db512e41f5724a3dcbe1dc2cdd5068" }, "downloads": -1, "filename": "dogstatsd-collector-0.1.0.tar.gz", "has_sig": false, "md5_digest": "aa4c7a9cdc8b11ab753b43fe2af44f9e", "packagetype": "sdist", "python_version": "source", "requires_python": null, "size": 17806, "upload_time": "2019-08-23T18:32:18", "url": "https://files.pythonhosted.org/packages/6f/2f/48f1c7d712eb609638f6a7d727044eb054ff82cc0a139c7a681e08148d7b/dogstatsd-collector-0.1.0.tar.gz" } ] }, "urls": [ { "comment_text": "", "digests": { "md5": "aa4c7a9cdc8b11ab753b43fe2af44f9e", "sha256": "e6c755df3bceed92a77dd32c2e5801bd47db512e41f5724a3dcbe1dc2cdd5068" }, "downloads": -1, "filename": "dogstatsd-collector-0.1.0.tar.gz", "has_sig": false, "md5_digest": "aa4c7a9cdc8b11ab753b43fe2af44f9e", "packagetype": "sdist", "python_version": "source", "requires_python": null, "size": 17806, "upload_time": "2019-08-23T18:32:18", "url": "https://files.pythonhosted.org/packages/6f/2f/48f1c7d712eb609638f6a7d727044eb054ff82cc0a139c7a681e08148d7b/dogstatsd-collector-0.1.0.tar.gz" } ] }