{ "info": { "author": "Joosep Pata", "author_email": "joosep.pata@cern.ch", "bugtrack_url": null, "classifiers": [ "License :: OSI Approved :: BSD License", "Programming Language :: Python", "Programming Language :: Python :: 3", "Programming Language :: Python :: 3.6", "Programming Language :: Python :: Implementation :: CPython" ], "description": "[](https://travis-ci.com/hepaccelerate/hepaccelerate)\n[](https://gitlab.cern.ch/jpata/hepaccelerate/commits/master)\n[](https://zenodo.org/badge/latestdoi/191644111)\n\n# hepaccelerate\n\n- HEP data analysis with [jagged arrays](https://github.com/scikit-hep/awkward-array) using python + [Numba](http://numba.pydata.org/)\n- Use **any ntuple**, as long as you can open it with [uproot](https://github.com/scikit-hep/uproot)\n- analyze a billion events with systematic to histograms in minutes on a single workstation\n - 1e9 events / (50 kHz x 24 threads) ~ 13 minutes\n- weighted histograms, deltaR matching and [more](https://github.com/hepaccelerate/hepaccelerate#kernels)\n- use a CPU or an nVidia CUDA GPU with the same interface!\n- this is **not** an analysis framework, but rather a small set of helpers for fast jagged array processing\n\n**Under active development and use by a few CMS analyses!**\n\n
\n
\n
\n