{ "info": { "author": "Johnnie Gray", "author_email": "johnniemcgray@gmail.com", "bugtrack_url": null, "classifiers": [ "Development Status :: 3 - Alpha", "License :: OSI Approved :: Apache Software License", "Programming Language :: Python :: 3", "Programming Language :: Python :: 3.6", "Programming Language :: Python :: 3.7", "Programming Language :: Python :: 3.8", "Programming Language :: Python :: 3.9" ], "description": "

\"autoray\"

\n\nA lightweight python AUTOmatic-arRAY library. Write numeric code that works for:\n\n* [numpy](https://github.com/numpy/numpy)\n* [cupy](https://github.com/cupy/cupy)\n* [dask](https://github.com/dask/dask)\n* [autograd](https://github.com/HIPS/autograd)\n* [jax](https://github.com/google/jax)\n* [mars](https://github.com/mars-project/mars)\n* [tensorflow](https://github.com/tensorflow/tensorflow)\n* [pytorch](https://pytorch.org/)\n* ... and indeed **any** library that provides a numpy-*ish* api.\n\n[![Azure Pipelines](https://dev.azure.com/autoray-org/autoray/_apis/build/status/jcmgray.autoray?branchName=master)](https://dev.azure.com/autoray-org/autoray/_build/latest?definitionId=1&branchName=master) [![codecov](https://codecov.io/gh/jcmgray/autoray/branch/master/graph/badge.svg?token=Q5evNiuT9S)](https://codecov.io/gh/jcmgray/autoray) [![Language grade: Python](https://img.shields.io/lgtm/grade/python/g/jcmgray/autoray.svg?logo=lgtm&logoWidth=18)](https://lgtm.com/projects/g/jcmgray/autoray/context:python) [![Anaconda-Server Badge](https://anaconda.org/conda-forge/autoray/badges/installer/conda.svg)](https://conda.anaconda.org/conda-forge)\n\nAs an example consider this function that orthogonalizes a matrix using the modified [Gram-Schmidt](https://en.wikipedia.org/wiki/Gram%E2%80%93Schmidt_process) algorithm:\n\n```python\nfrom autoray import do\n\ndef modified_gram_schmidt(X):\n    # n.b. performance-wise this particular function is *not*\n    # a good candidate for a pure python implementation\n\n    Q = []\n    for j in range(0, X.shape[0]):\n\n        q = X[j, :]\n        for i in range(0, j):\n            rij = do('tensordot', do('conj', Q[i]), q, 1)\n            q = q - rij * Q[i]\n\n        rjj = do('linalg.norm', q, 2)\n        Q.append(q / rjj)\n\n    return do('stack', Q, axis=0)\n```\n\nThis function is now compatible with **all** of the above mentioned libraries! 
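For ``numpy`` inputs each ``do`` call above resolves directly to the corresponding ``numpy`` function, so a pure-numpy transcription of the same loop (an illustrative sketch, not part of ``autoray`` itself) shows what the algorithm produces, namely a matrix with orthonormal rows:

```python
# Pure-numpy sketch of the same modified Gram-Schmidt loop, for
# illustration only - each do('fn', ...) call above becomes np.fn(...).
import numpy as np

def modified_gram_schmidt_numpy(X):
    Q = []
    for j in range(X.shape[0]):
        q = X[j, :]
        for i in range(j):
            # project out the components along the already-built rows
            rij = np.tensordot(np.conj(Q[i]), q, 1)
            q = q - rij * Q[i]
        # normalize what remains
        Q.append(q / np.linalg.norm(q, 2))
    return np.stack(Q, axis=0)

rng = np.random.default_rng(0)
X = rng.uniform(size=(4, 4))
Q = modified_gram_schmidt_numpy(X)
print(np.allclose(Q @ Q.conj().T, np.eye(4)))  # True - rows are orthonormal
```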
Abstracting out the array interface also allows the following functionality:\n\n* *swap custom versions of functions for specific backends*\n* *trace through computations lazily without actually running them*\n* *automatically share intermediates and fold constants in computations*\n* *compile functions with a unified interface for different backends*\n\n... all implemented in a lightweight manner with an emphasis on minimizing overhead. Of course complete compatibility is not going to be possible for all functions, operations and libraries, but ``autoray`` hopefully makes the job much easier. Of the above, ``tensorflow`` has *quite* a different interface and ``pytorch`` probably the *most* different. Whilst, for example, not every function will work out-of-the-box for these two, ``autoray`` is also designed with the easy addition of new functions in mind (for example adding new translations is often a one-liner).\n\n**Contents**\n\n* [Basic Usage](#Basic-usage)\n    * [How does it work?](#how-does-it-work?)\n    * [Customizing functions](#Customizing-functions)\n    * [Lazy Computation](#Lazy-Computation)\n    * [Compilation](#Compilation)\n* [Details](#Details)\n    * [Special Functions](#Special-Functions)\n    * [Deviations from `numpy`](#Deviations-from-numpy)\n* [Installation](#Installation)\n* [Contributing](#Contributing)\n\n\n# Basic Usage\n\n\n## How does it work?\n\n``autoray`` works using essentially a single dispatch mechanism on the first argument for ``do``, or the ``like`` keyword argument if specified, fetching functions from whichever module defines the supplied array. Additionally, it caches a few custom translations and lookups so as to handle libraries like ``tensorflow`` that don't exactly replicate the ``numpy`` api (for example ``sum`` gets translated to ``tensorflow.reduce_sum``). 
Due to the caching, each ``do`` call only adds 1 or 2 dict look-ups as overhead - much less than using ``functools.singledispatch``, for example.\n\nEssentially you call your numpy-style array functions in one of four ways:\n\n***1. Automatic backend:***\n\n```python\ndo('sqrt', x)\n```\n\nHere the backend is inferred from ``x``. Usually dispatch happens on the first argument, but several functions (such as ``stack`` and ``einsum``) know to override this and look elsewhere.\n\n***2. Backend 'like' another array:***\n\n```python\ndo('random.normal', size=(2, 3, 4), like=x)\n```\n\nHere the backend is inferred from another array and can thus be implicitly propagated, even when functions take no array arguments.\n\n***3. Explicit backend:***\n\n```python\ndo('einsum', eq, x, y, like='customlib')\n```\n\nHere one simply supplies the desired function backend explicitly.\n\n***4. Context manager:***\n\n```python\nfrom autoray import backend_like\n\nwith backend_like('autoray.lazy'):\n    xy = do('tensordot', x, y, 1)\n    z = do('trace', xy)\n```\n\nHere you set a default backend for a whole block of code. This default overrides method 1. above but 2. and 3. 
still take precedence.\n\n\n\nIf you don't like the explicit ``do`` syntax, then you can import the fake ``numpy`` object as a **drop-in replacement** instead:\n\n```python\nfrom autoray import numpy as np\n\nx = np.random.uniform(size=(2, 3, 4), like='tensorflow')\nnp.tensordot(x, x, [(2, 1), (2, 1)])\n# \n\nnp.eye(3, like=x)  # many functions obviously can't dispatch without the `like` keyword\n# \n```\n\n\n## Customizing functions\n\nIf you want to directly provide a missing or alternative implementation of some function for a particular backend you can swap one in with ``autoray.register_function``:\n\n```python\nimport autoray as ar\n\ndef my_custom_torch_svd(x):\n    import torch\n\n    print('Hello SVD!')\n    u, s, v = torch.svd(x)\n\n    return u, s, v.T\n\nar.register_function('torch', 'linalg.svd', my_custom_torch_svd)\n\nx = ar.do('random.uniform', size=(3, 4), like='torch')\n\nar.do('linalg.svd', x)\n# Hello SVD!\n# (tensor([[-0.5832, 0.6188, -0.5262],\n#          [-0.5787, -0.7711, -0.2655],\n#          [-0.5701, 0.1497, 0.8078]]),\n#  tensor([2.0336, 0.8518, 0.4572]),\n#  tensor([[-0.4568, -0.3166, -0.6835, -0.4732],\n#          [-0.5477, 0.2825, -0.2756, 0.7377],\n#          [ 0.2468, -0.8423, -0.0993, 0.4687]]))\n```\n\nIf you want to make use of the existing function you can supply ``wrap=True``, in which case the custom function supplied should act like a decorator:\n\n```python\ndef my_custom_sum_wrapper(old_fn):\n\n    def new_fn(*args, **kwargs):\n        print('Hello sum!')\n        return old_fn(*args, **kwargs)\n\n    return new_fn\n\nar.register_function('torch', 'sum', my_custom_sum_wrapper, wrap=True)\n\nar.do('sum', x)\n# Hello sum!\n# tensor(5.4099)\n```\n\nThough be careful: if you call ``register_function`` again it will now wrap the *new* function!\n\n\n## Lazy Computation\n\nAbstracting out the array interface also affords an opportunity to run any computations utilizing ``autoray.do`` completely lazily. 
``autoray`` provides the ``lazy`` submodule and ``LazyArray`` class for this purpose:\n\n```python\nfrom autoray import lazy\n\n# input array - can be anything autoray.do supports\nx = do('random.normal', size=(5, 5), like='torch')\n\n# convert it to a lazy 'computational node'\nlx = lazy.array(x)\n\n# supply this to our function\nly = modified_gram_schmidt(lx)\nly\n# \n```\n\nNone of the functions have been called yet - simply the shapes and dtypes have been propagated through. ``ly`` represents the final ``stack`` call, and tracks which other ``LazyArray`` instances it needs to materialize before it can compute itself. At this point one can perform various bits of introspection:\n\n```python\n# --> the largest array encountered\nly.history_max_size()\n# 25\n\n# number of unique computational nodes\nlen(tuple(ly))\n# 57\n\n# --> traverse the computational graph and collect statistics\nfrom collections import Counter\nCounter(node.fn_name for node in ly)\n# Counter({'stack': 1,\n# 'truediv': 5,\n# 'norm': 5,\n# 'sub': 10,\n# 'mul': 10,\n# 'getitem': 5,\n# 'None': 1,\n# 'tensordot': 10,\n# 'conjugate': 10})\n\n# --> plot the full computation graph\nly.plot()\n```\n
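The core idea can be pictured with a toy deferred-evaluation node (hypothetical names and a deliberate simplification, not ``autoray``'s actual ``LazyArray`` implementation): each node records a function, its input nodes and the shape of its eventual result, and nothing runs until ``compute`` is called:

```python
# Toy deferred-evaluation node, a deliberately simplified sketch of the
# idea (hypothetical names - NOT autoray's actual LazyArray class).
import numpy as np

class ToyLazy:
    """A node storing a function, its input nodes and its result shape."""

    def __init__(self, fn, deps, shape, data=None):
        self.fn = fn
        self.deps = deps
        self.shape = shape
        self.data = data

    @classmethod
    def from_array(cls, x):
        # a leaf node: already materialized
        return cls(None, (), x.shape, data=x)

    def compute(self):
        if self.data is None:
            # materialize all dependencies first, then this node ...
            self.data = self.fn(*(d.compute() for d in self.deps))
            # ... then clear references, mirroring LazyArray's behaviour
            self.deps = ()
        return self.data

def lazy_matmul(a, b):
    # only the *shape* of the result is worked out eagerly
    return ToyLazy(np.matmul, (a, b), (a.shape[0], b.shape[1]))

x = ToyLazy.from_array(np.eye(3))
y = ToyLazy.from_array(2 * np.eye(3))
z = lazy_matmul(x, y)
print(z.shape)      # (3, 3) - known without running anything
print(z.data)       # None - nothing computed yet
print(z.compute())  # now np.matmul actually runs
```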

\n\nPreview the memory footprint (in terms of number of array elements) throughout the computation:\n\n```python\nly.plot_history_size_footprint()\n```\n

\n\nFinally, if we want to compute the actual value we call:\n\n```python\nly.compute()\n# tensor([[-0.4225, 0.1371, -0.2307, 0.5892, 0.6343],\n#         [ 0.4079, -0.5103, 0.5924, 0.4261, 0.2016],\n#         [ 0.2569, -0.5173, -0.4875, -0.4238, 0.4992],\n#         [-0.2778, -0.5870, -0.3928, 0.3645, -0.5396],\n#         [ 0.7155, 0.3297, -0.4515, 0.3986, -0.1291]])\n```\n\nNote that once a node is computed, it only stores the actual result and clears all references to other ``LazyArray`` instances.\n\n**Sharing intermediates**\n\nIf the computation involves repeated sub-computations then you can call it in a ``shared_intermediates`` context:\n\n```python\nwith lazy.shared_intermediates():\n    ly = modified_gram_schmidt(lx)\n\n# --> a few nodes can be reused here (c.f. 57 previously)\nlen(tuple(ly))\n# 51\n```\n\nThis caches the computational nodes as they are created based on a hash of their input arguments (note this uses ``id`` for array-like things, i.e. assumes they are immutable). Unlike eagerly caching function calls in real time, which might consume large amounts of memory, now when the computation runs (i.e. ``ly.compute()`` is called) data is only kept as long as it's needed.\n\n**Why not use e.g. ``dask``?**\n\nThere are many reasons to use [dask](https://dask.org/), but it incurs a pretty large overhead for big computational graphs with comparatively small operations. 
Calling and computing the ``modified_gram_schmidt`` function for a 100x100 matrix (20,102 computational nodes) with ``dask.array`` takes ~25sec whereas with ``lazy.array`` it takes ~0.25sec:\n\n```python\nimport dask.array as da\n\n%%time\ndx = da.array(x)\ndy = modified_gram_schmidt(dx)\ny = dy.compute()\n# CPU times: user 25.6 s, sys: 137 ms, total: 25.8 s\n# Wall time: 25.5 s\n\n%%time\nlx = lazy.array(x)\nly = modified_gram_schmidt(lx)\ny = ly.compute()\n# CPU times: user 256 ms, sys: 0 ns, total: 256 ms\n# Wall time: 255 ms\n```\n\nThis is enabled by `autoray`'s very minimal implementation.\n\n## Compilation\n\nVarious libraries provide tools for tracing numeric functions and turning the resulting computation into a more efficient, compiled function. Notably:\n\n* [``jax.jit``](https://github.com/google/jax)\n* [``tensorflow.function``](https://www.tensorflow.org/api_docs/python/tf/function)\n* [``torch.jit.trace``](https://pytorch.org/docs/stable/jit.html)\n\n``autoray`` is obviously very well suited to these since it just dispatches functions to whichever library is doing the tracing - functions written using ``autoray`` should be immediately compatible with all of them.\n\n**The `autojit` wrapper**\n\nMoreover, ``autoray`` also provides a *unified interface* for compiling functions so that the compilation backend can be easily switched or automatically identified:\n\n```python\nfrom autoray import autojit\n\nmgs = autojit(modified_gram_schmidt)\n```\n\nCurrently ``autojit`` supports functions with the signature ``fn(*args, **kwargs) -> array`` where both ``args`` and ``kwargs`` can be any nested combination of ``tuple``, ``list`` and ``dict`` objects containing arrays.\n\nWe can compare different compiled versions of this simply by changing the ``backend`` option:\n\n```python\nx = do(\"random.normal\", size=(50, 50), like='numpy')\n\n# first the uncompiled version\n%%timeit\nmodified_gram_schmidt(x)\n# 23.5 ms \u00b1 241 \u00b5s per loop (mean \u00b1 std. 
dev. of 7 runs, 10 loops each)\n\n# 'python' mode unravels computation into source then uses compile+exec\n%%timeit\nmgs(x)  # backend='python'\n# 17.8 ms \u00b1 191 \u00b5s per loop (mean \u00b1 std. dev. of 7 runs, 100 loops each)\n\n%%timeit\nmgs(x, backend='torch')\n# 11.9 ms \u00b1 80.5 \u00b5s per loop (mean \u00b1 std. dev. of 7 runs, 1 loop each)\n\n%%timeit\nmgs(x, backend='tensorflow')\n# 1.87 ms \u00b1 441 \u00b5s per loop (mean \u00b1 std. dev. of 7 runs, 1 loop each)\n\n# need to config jax to run on same footing\nfrom jax.config import config\nconfig.update(\"jax_enable_x64\", True)\nconfig.update('jax_platform_name', 'cpu')\n\n%%timeit\nmgs(x, backend='jax')\n# 226 \u00b5s \u00b1 14.8 \u00b5s per loop (mean \u00b1 std. dev. of 7 runs, 1 loop each)\n\n%%timeit\ndo('linalg.qr', x, like='numpy')[0]  # approximately the 'C' version\n# 156 \u00b5s \u00b1 32.1 \u00b5s per loop (mean \u00b1 std. dev. of 7 runs, 1000 loops each)\n```\n\nHere you see *(with this very for-loop heavy function)* that there are significant gains to be made for all the compilation options. Whilst ``jax`` for example achieves fantastic performance, it should be noted that the compilation step takes a lot of time and scales badly (super-linearly) with the number of computational nodes.\n\n# Details\n\n## Special Functions\n\nThe main function is ``do``, but the following special (i.e. 
not in ``numpy``) functions are also implemented that may be useful:\n\n* ``autoray.infer_backend`` - check what library is being inferred for a given array\n* ``autoray.to_backend_dtype`` - convert a string specified dtype like ``'float32'`` to ``torch.float32`` for example\n* ``autoray.get_dtype_name`` - convert a backend dtype back into the equivalent string specifier like ``'complex64'``\n* ``autoray.astype`` - backend agnostic dtype conversion of arrays\n* ``autoray.to_numpy`` - convert any array to a ``numpy.ndarray``\n\nHere are all of those in action:\n\n\n```python\nimport autoray as ar\n\nbackend = 'torch'\ndtype = ar.to_backend_dtype('float64', like=backend)\ndtype\n# torch.float64\n\nx = ar.do('random.normal', size=(4,), dtype=dtype, like=backend)\nx\n# tensor([ 0.0461, 0.3028, 0.1790, -0.1494], dtype=torch.float64)\n\nar.infer_backend(x)\n# 'torch'\n\nar.get_dtype_name(x)\n# 'float64'\n\nx32 = ar.astype(x, 'float32')\nar.to_numpy(x32)\n# array([ 0.04605161, 0.30280888, 0.17903718, -0.14936243], dtype=float32)\n```\n\n## Deviations from `numpy`\n\n`autoray` doesn't have an API as such, since it is essentially just a fancy single dispatch mechanism. On the other hand, where translations *are* in place, they generally use the numpy API. So ``autoray.do('stack', arrays=pytorch_tensors, axis=0)`` gets automatically translated into ``torch.stack(tensors=pytorch_tensors, dim=0)`` and so forth.\n\nCurrently the one place this isn't true is ``autoray.do('linalg.svd', x)``, where instead ``full_matrices=False`` is used as the default, since this generally makes more sense and many libraries don't even implement the other case. Autoray also dispatches ``'linalg.expm'`` for ``numpy`` arrays to ``scipy``, and may well do so for other scipy-only functions at some point.\n\n\n# Installation\n\nYou can install ``autoray`` via [conda-forge](https://conda-forge.org/) as well as with ``pip``. 
Alternatively, simply copy the monolithic ``autoray.py`` into your project internally (if dependencies aren't your thing) to provide ``do``.\n\n**Alternatives**\n\n* The ``__array_function__`` protocol has been [suggested](https://www.numpy.org/neps/nep-0018-array-function-protocol.html) and now implemented in ``numpy``. Hopefully this will eventually negate the need for ``autoray``. On the other hand, third party libraries themselves need to implement the interface, which has not been done, for example, in ``tensorflow`` yet.\n* The [uarray](https://github.com/Quansight-Labs/uarray) project aims to develop a generic array interface but comes with the warning *\"This is experimental and very early research code. Don't use this.\"*.\n\n\n# Contributing\n\nPull requests such as extra translations are very welcome!\n\n\n\n", "description_content_type": "text/markdown", "docs_url": null, "download_url": "", "downloads": { "last_day": -1, "last_month": -1, "last_week": -1 }, "home_page": "http://github.com/jcmgray/autoray", "keywords": "array agnostic numeric numpy cupy dask tensorflow jax autograd", "license": "Apache", "maintainer": "", "maintainer_email": "", "name": "autoray", "package_url": "https://pypi.org/project/autoray/", "platform": null, "project_url": "https://pypi.org/project/autoray/", "project_urls": { "Homepage": "http://github.com/jcmgray/autoray" }, "release_url": "https://pypi.org/project/autoray/0.3.1/", "requires_dist": [ "numpy", "coverage ; extra == 'tests'", "pytest ; extra == 'tests'", "pytest-cov ; extra == 'tests'" ], "requires_python": ">=3.6", "summary": "Write backend agnostic numeric code compatible with any numpy-ish array library.", "version": "0.3.1", "yanked": false, "yanked_reason": null }, "last_serial": 13738177, "releases": { "0.1.0": [ { "comment_text": "", "digests": { "md5": "45b7024601ea3d0357e85ddfb582a957", "sha256": "7fd4cf0a4280a39dcc5df310612d3f946a451f495e9356590eb4dfcfaa89ca68" }, "downloads": -1, "filename": 
"autoray-0.1.0-py3-none-any.whl", "has_sig": false, "md5_digest": "45b7024601ea3d0357e85ddfb582a957", "packagetype": "bdist_wheel", "python_version": "py3", "requires_python": ">=3.5", "size": 10655, "upload_time": "2019-04-04T14:44:51", "upload_time_iso_8601": "2019-04-04T14:44:51.300483Z", "url": "https://files.pythonhosted.org/packages/ab/0f/f38840fc7f1b945a864ec5b4a12b9888151ee8971bc15b50ba2c08395c85/autoray-0.1.0-py3-none-any.whl", "yanked": false, "yanked_reason": null }, { "comment_text": "", "digests": { "md5": "236950930bcb8444df84f11b4b514b3b", "sha256": "0f364715e2802b0bc0c17f0c873d167c2549ac0adbde545aa1ece493a1b87bc0" }, "downloads": -1, "filename": "autoray-0.1.0.tar.gz", "has_sig": false, "md5_digest": "236950930bcb8444df84f11b4b514b3b", "packagetype": "sdist", "python_version": "source", "requires_python": ">=3.5", "size": 25928, "upload_time": "2019-04-04T14:44:53", "upload_time_iso_8601": "2019-04-04T14:44:53.732013Z", "url": "https://files.pythonhosted.org/packages/b3/ab/37fe12c451d0261a9083ae1869417d468eee1825fcb5ded0b162e6ca397d/autoray-0.1.0.tar.gz", "yanked": false, "yanked_reason": null } ], "0.1.1": [ { "comment_text": "", "digests": { "md5": "cb99680fdf5fe9c519457f218b1dc4b1", "sha256": "e31e36f8f7870f56278bae9114086cf0e51a591f6c210bc77fe02f9ad757ab04" }, "downloads": -1, "filename": "autoray-0.1.1-py3-none-any.whl", "has_sig": false, "md5_digest": "cb99680fdf5fe9c519457f218b1dc4b1", "packagetype": "bdist_wheel", "python_version": "py3", "requires_python": ">=3.5", "size": 12073, "upload_time": "2019-10-23T11:59:39", "upload_time_iso_8601": "2019-10-23T11:59:39.479233Z", "url": "https://files.pythonhosted.org/packages/f0/3b/03b5e645f8cd8d041c301ebb93551df644604b60d4451f178eb4df243587/autoray-0.1.1-py3-none-any.whl", "yanked": false, "yanked_reason": null }, { "comment_text": "", "digests": { "md5": "40660fb6ec91872cc8fa4ce491766cee", "sha256": "d5e0c061482839d8b024e30c549b0736ec262d4244aab66b36279b2d136dbed2" }, "downloads": -1, "filename": 
"autoray-0.1.1.tar.gz", "has_sig": false, "md5_digest": "40660fb6ec91872cc8fa4ce491766cee", "packagetype": "sdist", "python_version": "source", "requires_python": ">=3.5", "size": 27002, "upload_time": "2019-10-23T11:59:41", "upload_time_iso_8601": "2019-10-23T11:59:41.383372Z", "url": "https://files.pythonhosted.org/packages/14/0e/a5117ce3908e573803e41282ffb70a1c96ed19665e6871c6160d90ec8979/autoray-0.1.1.tar.gz", "yanked": false, "yanked_reason": null } ], "0.2.0": [ { "comment_text": "", "digests": { "md5": "1e6677ed491f2d3ba7071a9e6d260cdd", "sha256": "87996ca01b72bc6b9f3e815c18e3c4128f973a4330bdeca868352ec5edad8dd9" }, "downloads": -1, "filename": "autoray-0.2.0-py3-none-any.whl", "has_sig": false, "md5_digest": "1e6677ed491f2d3ba7071a9e6d260cdd", "packagetype": "bdist_wheel", "python_version": "py3", "requires_python": ">=3.5", "size": 13340, "upload_time": "2020-02-15T03:17:34", "upload_time_iso_8601": "2020-02-15T03:17:34.763933Z", "url": "https://files.pythonhosted.org/packages/64/aa/9b8e76d2150c6451a2db6c769571c79913829325bb0a39974d462800bb1b/autoray-0.2.0-py3-none-any.whl", "yanked": false, "yanked_reason": null }, { "comment_text": "", "digests": { "md5": "4f6e72988a052e375883dd06e6d08eb3", "sha256": "4ed18fb851358fcaa303dc2fe1dd9f624f2a7b72badfc545b005895d0392cfa8" }, "downloads": -1, "filename": "autoray-0.2.0.tar.gz", "has_sig": false, "md5_digest": "4f6e72988a052e375883dd06e6d08eb3", "packagetype": "sdist", "python_version": "source", "requires_python": ">=3.5", "size": 28952, "upload_time": "2020-02-15T03:17:36", "upload_time_iso_8601": "2020-02-15T03:17:36.587711Z", "url": "https://files.pythonhosted.org/packages/96/59/9409f740c68412a5267abeb7ed42454f4ec84186a5818ade726276055ea0/autoray-0.2.0.tar.gz", "yanked": false, "yanked_reason": null } ], "0.2.1": [ { "comment_text": "", "digests": { "md5": "7e6234d83464fac48ea77de01730c1d0", "sha256": "b9ceaa6e3ebf50e357b62cdb6b899657c28e721019757f63e6b83790489e0a10" }, "downloads": -1, "filename": 
"autoray-0.2.1-py3-none-any.whl", "has_sig": false, "md5_digest": "7e6234d83464fac48ea77de01730c1d0", "packagetype": "bdist_wheel", "python_version": "py3", "requires_python": ">=3.5", "size": 14227, "upload_time": "2020-03-11T18:36:55", "upload_time_iso_8601": "2020-03-11T18:36:55.103043Z", "url": "https://files.pythonhosted.org/packages/d7/12/9bed8f8fca758dfcb36921120613459053d9e099b6aedad3cc0f2b5401de/autoray-0.2.1-py3-none-any.whl", "yanked": false, "yanked_reason": null }, { "comment_text": "", "digests": { "md5": "92f73f092cf632db5d797699d41296e6", "sha256": "dc60095fbcc464e1d72ae7c4447438cff947e0cdb14626045b70abea8b58fe9e" }, "downloads": -1, "filename": "autoray-0.2.1.tar.gz", "has_sig": false, "md5_digest": "92f73f092cf632db5d797699d41296e6", "packagetype": "sdist", "python_version": "source", "requires_python": ">=3.5", "size": 30607, "upload_time": "2020-03-11T18:36:56", "upload_time_iso_8601": "2020-03-11T18:36:56.305603Z", "url": "https://files.pythonhosted.org/packages/dd/91/d66c5426595f6d20429fa1e1871402cc9638376b244009f4ff09ab465c90/autoray-0.2.1.tar.gz", "yanked": false, "yanked_reason": null } ], "0.2.2": [ { "comment_text": "", "digests": { "md5": "fa08547b22aefd320fe169d4a392e45e", "sha256": "b1dcad62088a8b3cbf7dc133028c1a25466b2a60f3c1ba3e7acf35f2cc239d68" }, "downloads": -1, "filename": "autoray-0.2.2-py3-none-any.whl", "has_sig": false, "md5_digest": "fa08547b22aefd320fe169d4a392e45e", "packagetype": "bdist_wheel", "python_version": "py3", "requires_python": ">=3.5", "size": 15128, "upload_time": "2020-04-14T22:11:33", "upload_time_iso_8601": "2020-04-14T22:11:33.216891Z", "url": "https://files.pythonhosted.org/packages/10/df/28b83097dae493bcee3f3b0629348be618738b0d4805d3986527245ff88e/autoray-0.2.2-py3-none-any.whl", "yanked": false, "yanked_reason": null }, { "comment_text": "", "digests": { "md5": "8dff849a677447a6234517c17fdc4d64", "sha256": "004e964b40309bf1a91b37b0a4e5ad5e014eb139eecf75830dde23c8379940fb" }, "downloads": -1, "filename": 
"autoray-0.2.2.tar.gz", "has_sig": false, "md5_digest": "8dff849a677447a6234517c17fdc4d64", "packagetype": "sdist", "python_version": "source", "requires_python": ">=3.5", "size": 33291, "upload_time": "2020-04-14T22:11:34", "upload_time_iso_8601": "2020-04-14T22:11:34.916869Z", "url": "https://files.pythonhosted.org/packages/4a/cf/5846296615734143bf6d2ddd75b7ae4b4cd697c36bcd031b07015bb06b5d/autoray-0.2.2.tar.gz", "yanked": false, "yanked_reason": null } ], "0.2.3": [ { "comment_text": "", "digests": { "md5": "dc9c40387bcf1e52e5065d6a53b9da7a", "sha256": "9bb4d765e5a2a5ee16ca3891bf4ab04a04e428af03b75165092360de976f7c03" }, "downloads": -1, "filename": "autoray-0.2.3-py3-none-any.whl", "has_sig": false, "md5_digest": "dc9c40387bcf1e52e5065d6a53b9da7a", "packagetype": "bdist_wheel", "python_version": "py3", "requires_python": ">=3.5", "size": 15299, "upload_time": "2020-07-31T04:26:55", "upload_time_iso_8601": "2020-07-31T04:26:55.382786Z", "url": "https://files.pythonhosted.org/packages/fc/c4/fe91e6d9f2c1872aab0d56d20d986a2ed841d1d9a8c294df7ccee51c2abe/autoray-0.2.3-py3-none-any.whl", "yanked": false, "yanked_reason": null }, { "comment_text": "", "digests": { "md5": "9c517217fc8ea786bad71234cd91b042", "sha256": "75196d0ffc4f98999d459c1a659d849a8ebfca9dfb8ad34654f53195a5aa7322" }, "downloads": -1, "filename": "autoray-0.2.3.tar.gz", "has_sig": false, "md5_digest": "9c517217fc8ea786bad71234cd91b042", "packagetype": "sdist", "python_version": "source", "requires_python": ">=3.5", "size": 33495, "upload_time": "2020-07-31T04:26:56", "upload_time_iso_8601": "2020-07-31T04:26:56.598781Z", "url": "https://files.pythonhosted.org/packages/e8/20/9c4c7d8fe247c9bd4331e9b35b714fcd6a198dcebb1f1c50c960a2d76d0a/autoray-0.2.3.tar.gz", "yanked": false, "yanked_reason": null } ], "0.2.4": [ { "comment_text": "", "digests": { "md5": "0f2aaaf617f7f084c3672b4944ee04ae", "sha256": "059299a17338431fcbce9551e81d50621d98de58706272f2a9c33558620f0a96" }, "downloads": -1, "filename": 
"autoray-0.2.4-py3-none-any.whl", "has_sig": false, "md5_digest": "0f2aaaf617f7f084c3672b4944ee04ae", "packagetype": "bdist_wheel", "python_version": "py3", "requires_python": ">=3.5", "size": 22749, "upload_time": "2020-12-20T07:16:15", "upload_time_iso_8601": "2020-12-20T07:16:15.176982Z", "url": "https://files.pythonhosted.org/packages/8e/1e/6bd204c64db24ab0001db6a5c43d30da99cd196fe840d2030e7380357a60/autoray-0.2.4-py3-none-any.whl", "yanked": false, "yanked_reason": null }, { "comment_text": "", "digests": { "md5": "04087853bdce2eda9bd062a0093a79d6", "sha256": "739e617496e6a334d3b856fec2ad39937ab382e2b26d1536abce4bf58fe57198" }, "downloads": -1, "filename": "autoray-0.2.4.tar.gz", "has_sig": false, "md5_digest": "04087853bdce2eda9bd062a0093a79d6", "packagetype": "sdist", "python_version": "source", "requires_python": ">=3.5", "size": 40868, "upload_time": "2020-12-20T07:16:16", "upload_time_iso_8601": "2020-12-20T07:16:16.771777Z", "url": "https://files.pythonhosted.org/packages/bc/e7/13187a7b620372a0b913704eb27a1dd80954fee215823a89feaddbd5a08d/autoray-0.2.4.tar.gz", "yanked": false, "yanked_reason": null } ], "0.2.5": [ { "comment_text": "", "digests": { "md5": "4233a2ebe233e065c2293e243462df02", "sha256": "1a095a06b2354ad7d9e61aaacc1432954837b15cf7618d30e85630bc884628c9" }, "downloads": -1, "filename": "autoray-0.2.5-py3-none-any.whl", "has_sig": false, "md5_digest": "4233a2ebe233e065c2293e243462df02", "packagetype": "bdist_wheel", "python_version": "py3", "requires_python": ">=3.5", "size": 16418, "upload_time": "2021-01-27T02:59:03", "upload_time_iso_8601": "2021-01-27T02:59:03.581793Z", "url": "https://files.pythonhosted.org/packages/2e/9f/6742425eba1664edc8d6e93dd4536f4552acacd1a7b3a7ac2871657c969a/autoray-0.2.5-py3-none-any.whl", "yanked": false, "yanked_reason": null }, { "comment_text": "", "digests": { "md5": "381a947d6eef1119bcba753948ac12a0", "sha256": "1c5bf914553f1d3d7d4af254f70d37b59aa20b2f6c0aa110f5b7f336c2cafbba" }, "downloads": -1, "filename": 
"autoray-0.2.5.tar.gz", "has_sig": false, "md5_digest": "381a947d6eef1119bcba753948ac12a0", "packagetype": "sdist", "python_version": "source", "requires_python": ">=3.5", "size": 35131, "upload_time": "2021-01-27T02:59:05", "upload_time_iso_8601": "2021-01-27T02:59:05.244870Z", "url": "https://files.pythonhosted.org/packages/60/76/e6c56a487be4f1606224f348341ef62c1db705652bef44946da49e225711/autoray-0.2.5.tar.gz", "yanked": false, "yanked_reason": null } ], "0.3.1": [ { "comment_text": "", "digests": { "md5": "71311399e42392f643ab13752de9ad39", "sha256": "cca89a210f13bfbbea1f907108fe88f9323916e77ac64b4ab89bb3f12e102554" }, "downloads": -1, "filename": "autoray-0.3.1-py3-none-any.whl", "has_sig": false, "md5_digest": "71311399e42392f643ab13752de9ad39", "packagetype": "bdist_wheel", "python_version": "py3", "requires_python": ">=3.6", "size": 36793, "upload_time": "2022-05-06T23:01:30", "upload_time_iso_8601": "2022-05-06T23:01:30.594162Z", "url": "https://files.pythonhosted.org/packages/d9/39/5be28009c0efe041ac4f6607bbd4a6a833831f627dd53aaa6776cc6f95dc/autoray-0.3.1-py3-none-any.whl", "yanked": false, "yanked_reason": null }, { "comment_text": "", "digests": { "md5": "62ea85b1f59dba1251b753a1321fcfa8", "sha256": "fea7f04ee855b6badddd659b1764b5d152b81e62bb59f051fba561d973dd761d" }, "downloads": -1, "filename": "autoray-0.3.1.tar.gz", "has_sig": false, "md5_digest": "62ea85b1f59dba1251b753a1321fcfa8", "packagetype": "sdist", "python_version": "source", "requires_python": ">=3.6", "size": 57445, "upload_time": "2022-05-06T23:01:32", "upload_time_iso_8601": "2022-05-06T23:01:32.689412Z", "url": "https://files.pythonhosted.org/packages/e4/59/7d1fbe644606382ef663eb459817066b105017df645dcf7c704e85d1a2e9/autoray-0.3.1.tar.gz", "yanked": false, "yanked_reason": null } ] }, "urls": [ { "comment_text": "", "digests": { "md5": "71311399e42392f643ab13752de9ad39", "sha256": "cca89a210f13bfbbea1f907108fe88f9323916e77ac64b4ab89bb3f12e102554" }, "downloads": -1, "filename": 
"autoray-0.3.1-py3-none-any.whl", "has_sig": false, "md5_digest": "71311399e42392f643ab13752de9ad39", "packagetype": "bdist_wheel", "python_version": "py3", "requires_python": ">=3.6", "size": 36793, "upload_time": "2022-05-06T23:01:30", "upload_time_iso_8601": "2022-05-06T23:01:30.594162Z", "url": "https://files.pythonhosted.org/packages/d9/39/5be28009c0efe041ac4f6607bbd4a6a833831f627dd53aaa6776cc6f95dc/autoray-0.3.1-py3-none-any.whl", "yanked": false, "yanked_reason": null }, { "comment_text": "", "digests": { "md5": "62ea85b1f59dba1251b753a1321fcfa8", "sha256": "fea7f04ee855b6badddd659b1764b5d152b81e62bb59f051fba561d973dd761d" }, "downloads": -1, "filename": "autoray-0.3.1.tar.gz", "has_sig": false, "md5_digest": "62ea85b1f59dba1251b753a1321fcfa8", "packagetype": "sdist", "python_version": "source", "requires_python": ">=3.6", "size": 57445, "upload_time": "2022-05-06T23:01:32", "upload_time_iso_8601": "2022-05-06T23:01:32.689412Z", "url": "https://files.pythonhosted.org/packages/e4/59/7d1fbe644606382ef663eb459817066b105017df645dcf7c704e85d1a2e9/autoray-0.3.1.tar.gz", "yanked": false, "yanked_reason": null } ], "vulnerabilities": [] }