{ "info": { "author": "Panos Kittenis", "author_email": "22e889d8@opayq.com", "bugtrack_url": null, "classifiers": [ "Intended Audience :: Developers", "License :: OSI Approved :: Apache Software License", "Operating System :: OS Independent", "Programming Language :: Python", "Programming Language :: Python :: 2", "Programming Language :: Python :: 3", "Topic :: Scientific/Engineering :: Information Analysis", "Topic :: Scientific/Engineering :: Visualization", "Topic :: System :: Monitoring" ], "description": "InfluxGraph\n=================\n\nAn `InfluxDB`_ storage plugin for `Graphite-API`_. Graphite with InfluxDB data store from any kind of schema(s) used in the DB.\n\n.. image:: https://img.shields.io/pypi/v/influxgraph.svg\n :target: https://pypi.python.org/pypi/influxgraph\n :alt: Latest Version\n.. image:: https://travis-ci.org/InfluxGraph/influxgraph.svg?branch=master\n :target: https://travis-ci.org/InfluxGraph/influxgraph\n :alt: CI status\n.. image:: https://codecov.io/gh/InfluxGraph/influxgraph/branch/master/graph/badge.svg\n :target: https://codecov.io/gh/InfluxGraph/influxgraph\n.. image:: https://readthedocs.org/projects/influxgraph/badge/?version=latest\n :target: http://influxgraph.readthedocs.io/en/latest/?badge=latest\n :alt: Documentation Status\n.. image:: https://img.shields.io/pypi/wheel/influxgraph.svg\n :target: https://pypi.python.org/pypi/influxgraph\n.. image:: https://img.shields.io/pypi/pyversions/influxgraph.svg\n :target: https://pypi.python.org/pypi/influxgraph\n\n\nThis project started as a re-write of `graphite influxdb `_, now a separate project.\n\n\nInstallation\n=============\n\nDocker Compose\n---------------\n\nIn `compose directory `_ can be found docker-compose configuration that will spawn all necessary services for a complete monitoring solution with:\n\n* InfluxDB\n* Telegraf\n* Graphite API with InfluxGraph\n* Grafana dashboard\n\nTo use, within compose directory run:\n\n.. 
code-block:: shell\n\n docker-compose up\n\nGrafana will be running on ``http://localhost:3000`` with a Graphite datasource for InfluxDB data available at ``http://localhost``. To create dashboards, add a new Graphite datasource to Grafana - the default Grafana user/pass is admin/admin.\n\nSee the `compose configuration readme `_ for more details.\n\nDocker Image\n-------------\n\n.. code-block:: shell\n\n docker pull influxgraph/influxgraph\n docker create --name=influxgraph -p 8000:80 influxgraph/influxgraph\n docker start influxgraph\n\nA Graphite-API will now be running on ``localhost:8000`` from the container, with a default InfluxDB configuration and memcache enabled. The finder expects InfluxDB to be running on ``localhost:8086`` by default.\n\nWhen ``docker build`` is called on an InfluxGraph image, the image will use a supplied ``graphite-api.yaml`` in the build.\n\nThe `Docker file `_ used to build the container can be found under the ``docker`` directory of the repository.\n\n.. note::\n\n If the container has issues accessing the host's InfluxDB service, either use ``--network=\"host\"`` when launching the container or build a new image with a provided configuration file containing the correct `InfluxDB host:port `_ destination.\n\nManual Installation\n---------------------\n\n.. code-block:: shell\n\n pip install influxgraph\n\nUse of a local `memcached` service is highly recommended - see the configuration section on how to enable it.\n\nMinimal configuration for Graphite-API is below. See `Full Configuration Example`_ for all possible configuration options.\n\n``/etc/graphite-api.yaml``\n\n.. code-block:: yaml\n\n finders:\n - influxgraph.InfluxDBFinder\n\nSee the `Wiki `_ and `Configuration`_ section for details.\n\n.. 
contents:: Table of Contents\n\nMain features\n==============\n\n* InfluxDB Graphite template support - expose InfluxDB tagged data as Graphite metrics with configurable metric paths\n* Dynamically calculated group by intervals based on query date/time range - fast queries regardless of the date/time they span\n* Configurable per-query aggregation functions by regular expression pattern\n* Configurable per-query retention policies by query date/time range. Automatically use pre-calculated downsampled data in a retention policy for historical data\n* Fast in-memory index for Graphite metric path queries as a Python native code extension\n* Multi-fetch enabled - fetch data for multiple metrics with one query to InfluxDB\n* Memcached integration\n* Python 3 and PyPy compatibility\n* Good performance even with an extremely large number of metrics in the DB - generated queries are guaranteed to have ``O(1)`` performance characteristics\n\nGoogle User's Group\n=====================\n\nThere is a `Google user's group for discussion `_, open to the public.\n\nGoals\n======\n\n* InfluxDB as a drop-in replacement data store to the Graphite query API\n* Backwards compatibility with existing Graphite API clients like Grafana, and with Graphite installations migrated to InfluxDB data stores using the Graphite input service *with or without* Graphite template configuration\n* Expose native InfluxDB line protocol ingested data via the Graphite API\n* Clean, readable code with complete documentation for public endpoints\n* Complete code coverage with both unit and integration testing. Code has ``>90%`` test coverage and is integration tested against a real InfluxDB service\n* Good performance at large scale. 
InfluxGraph is used in production with good performance on InfluxDB nodes with cardinality exceeding 5M and a write rate of over 5M metrics/minute or 66K/second.\n\nThe first three goals provide both:\n\n- A backwards compatible migration path for existing Graphite installations to use InfluxDB as a drop-in storage back-end replacement with no API client side changes required, meaning existing Grafana or other dashboards continue to work as-is.\n- A way for native InfluxDB collection agents to expose their data via the *Graphite API*, which allows the use of any tool that talks the Graphite API, the plethora of Graphite API functions, custom functions, functions across series, and multi-series plotting and functions via Graphite glob expressions, et al.\n\nAs of the time of writing, no alternatives exist with similar functionality, performance and compatibility.\n\nNon-Goals\n==========\n\n* Graphite-Web support from the official Graphite project\n\nDependencies\n=============\n\nWith the exception of `InfluxDB`_ itself, all dependencies are installed automatically by ``pip``.\n\n* ``influxdb`` Python module\n* `Graphite-API`_\n* ``python-memcached`` Python module\n* `InfluxDB`_ service, versions ``1.0`` or higher\n\nInfluxDB Graphite metric templates\n==================================\n\n`InfluxGraph` can make use of any InfluxDB data and expose it as Graphite API metrics, as well as make use of Graphite metrics added to InfluxDB as-is, sans tags.\n\nEven data written to InfluxDB by native InfluxDB API clients can be exposed as Graphite metrics, allowing clients to use the Graphite API transparently with InfluxDB acting as its storage back-end.\n\nTo make use of tagged InfluxDB data, the finder needs to know how to generate a Graphite metric path from the tags used by InfluxDB.\n\nThe easiest way to do this is to use the Graphite service in InfluxDB with configured templates, which can be used as-is in `InfluxGraph`_ configuration - see the `Full Configuration Example`_ 
section for details. This presumes existing collection agents are using the Graphite line protocol to write to InfluxDB via its Graphite input service.\n\nIf, on the other hand, native `InfluxDB`_ metrics collection agents like `Telegraf `_ are used, that data too can be exposed as Graphite metrics by writing appropriate template(s) in the Graphite-API configuration alone.\n\nSee the `Telegraf default configuration template `_ for an example of this.\n\nBy default, the storage plugin makes no assumption that data is tagged, per the InfluxDB default Graphite service template configuration below::\n\n [[graphite]]\n <..>\n # templates = []\n\n\nRetention policy configuration\n==============================\n\nPending implementation of a feature request that will allow InfluxDB to select and/or merge results from down-sampled data as appropriate, retention policy configuration is needed to support the use-case of down-sampled data being present in non-default retention policies:\n\n.. code-block:: yaml\n\n retention_policies:\n