{ "info": { "author": "laughingman7743", "author_email": "laughingman7743@gmail.com", "bugtrack_url": null, "classifiers": [ "Development Status :: 4 - Beta", "Intended Audience :: Developers", "License :: OSI Approved :: MIT License", "Operating System :: OS Independent", "Programming Language :: Python :: 2", "Programming Language :: Python :: 2.7", "Programming Language :: Python :: 3", "Programming Language :: Python :: 3.5", "Programming Language :: Python :: 3.6", "Programming Language :: Python :: 3.7", "Topic :: Database :: Front-Ends" ], "description": ".. image:: https://img.shields.io/pypi/pyversions/PyAthena.svg\n :target: https://pypi.python.org/pypi/PyAthena/\n\n.. image:: https://travis-ci.org/laughingman7743/PyAthena.svg?branch=master\n :target: https://travis-ci.org/laughingman7743/PyAthena\n\n.. image:: https://codecov.io/gh/laughingman7743/PyAthena/branch/master/graph/badge.svg\n :target: https://codecov.io/gh/laughingman7743/PyAthena\n\n.. image:: https://img.shields.io/pypi/l/PyAthena.svg\n :target: https://github.com/laughingman7743/PyAthena/blob/master/LICENSE\n\n.. image:: https://img.shields.io/pypi/dm/PyAthena.svg\n :target: https://pypistats.org/packages/pyathena\n\n\nPyAthena\n========\n\nPyAthena is a Python `DB API 2.0 (PEP 249)`_ compliant client for `Amazon Athena`_.\n\n.. _`DB API 2.0 (PEP 249)`: https://www.python.org/dev/peps/pep-0249/\n.. _`Amazon Athena`: http://docs.aws.amazon.com/athena/latest/APIReference/Welcome.html\n\nlambda-pyathena\n===============\n\nlambda-pyathena is a fork of PyAthena that simply removes boto3 and botocore from the install-requires,\nresulting in an AWS Lambda friendly package.\n\n\nRequirements\n------------\n\n* Python\n\n - CPython 2,7, 3.5, 3.6, 3.7\n\nInstallation\n------------\n\n.. 
code:: bash\n\n $ pip install lambda-pyathena\n\nExtra packages:\n\n+---------------+--------------------------------------+------------------+\n| Package | Install command | Version |\n+===============+======================================+==================+\n| Pandas | ``pip install PyAthena[Pandas]`` | >=0.24.0 |\n+---------------+--------------------------------------+------------------+\n| SQLAlchemy | ``pip install PyAthena[SQLAlchemy]`` | >=1.0.0, <1.3.0 |\n+---------------+--------------------------------------+------------------+\n\nUsage\n-----\n\nBasic usage\n~~~~~~~~~~~\n\n.. code:: python\n\n from pyathena import connect\n\n cursor = connect(aws_access_key_id='YOUR_ACCESS_KEY_ID',\n aws_secret_access_key='YOUR_SECRET_ACCESS_KEY',\n s3_staging_dir='s3://YOUR_S3_BUCKET/path/to/',\n region_name='us-west-2').cursor()\n cursor.execute(\"SELECT * FROM one_row\")\n print(cursor.description)\n print(cursor.fetchall())\n\nCursor iteration\n~~~~~~~~~~~~~~~~\n\n.. code:: python\n\n from pyathena import connect\n\n cursor = connect(aws_access_key_id='YOUR_ACCESS_KEY_ID',\n aws_secret_access_key='YOUR_SECRET_ACCESS_KEY',\n s3_staging_dir='s3://YOUR_S3_BUCKET/path/to/',\n region_name='us-west-2').cursor()\n cursor.execute(\"SELECT * FROM many_rows LIMIT 10\")\n for row in cursor:\n print(row)\n\nQuery with parameters\n~~~~~~~~~~~~~~~~~~~~~\n\nThe only supported `DB API paramstyle`_ is ``PyFormat``.\n``PyFormat`` only supports `named placeholders`_ in the old ``%`` operator style, with parameters passed as a dictionary.\n\n.. 
code:: python\n\n from pyathena import connect\n\n cursor = connect(aws_access_key_id='YOUR_ACCESS_KEY_ID',\n aws_secret_access_key='YOUR_SECRET_ACCESS_KEY',\n s3_staging_dir='s3://YOUR_S3_BUCKET/path/to/',\n region_name='us-west-2').cursor()\n cursor.execute(\"\"\"\n SELECT col_string FROM one_row_complex\n WHERE col_string = %(param)s\n \"\"\", {'param': 'a string'})\n print(cursor.fetchall())\n\nIf the ``%`` character appears in your query, it must be escaped as ``%%``, like the following:\n\n.. code:: sql\n\n SELECT col_string FROM one_row_complex\n WHERE col_string = %(param)s OR col_string LIKE 'a%%'\n\n.. _`DB API paramstyle`: https://www.python.org/dev/peps/pep-0249/#paramstyle\n.. _`named placeholders`: https://pyformat.info/#named_placeholders\n\nSQLAlchemy\n~~~~~~~~~~\n\nInstall SQLAlchemy with ``pip install \"SQLAlchemy>=1.0.0, <1.3.0\"`` or ``pip install PyAthena[SQLAlchemy]``.\nSQLAlchemy 1.0.0 or higher and less than 1.3.0 is supported.\n\n.. code:: python\n\n from urllib.parse import quote_plus # PY2: from urllib import quote_plus\n from sqlalchemy.engine import create_engine\n from sqlalchemy.sql.expression import select\n from sqlalchemy.sql.functions import func\n from sqlalchemy.sql.schema import Table, MetaData\n\n conn_str = 'awsathena+rest://{aws_access_key_id}:{aws_secret_access_key}@athena.{region_name}.amazonaws.com:443/'\\\n '{schema_name}?s3_staging_dir={s3_staging_dir}'\n engine = create_engine(conn_str.format(\n aws_access_key_id=quote_plus('YOUR_ACCESS_KEY_ID'),\n aws_secret_access_key=quote_plus('YOUR_SECRET_ACCESS_KEY'),\n region_name='us-west-2',\n schema_name='default',\n s3_staging_dir=quote_plus('s3://YOUR_S3_BUCKET/path/to/')))\n many_rows = Table('many_rows', MetaData(bind=engine), autoload=True)\n print(select([func.count('*')], from_obj=many_rows).scalar())\n\nThe connection string has the following format:\n\n.. 
code:: text\n\n awsathena+rest://{aws_access_key_id}:{aws_secret_access_key}@athena.{region_name}.amazonaws.com:443/{schema_name}?s3_staging_dir={s3_staging_dir}&...\n\nIf you do not specify ``aws_access_key_id`` and ``aws_secret_access_key``, credentials are resolved from the instance profile or the Boto3 configuration file:\n\n.. code:: text\n\n awsathena+rest://:@athena.{region_name}.amazonaws.com:443/{schema_name}?s3_staging_dir={s3_staging_dir}&...\n\nNOTE: ``s3_staging_dir`` must be URL-quoted. If ``aws_access_key_id``, ``aws_secret_access_key``, or other parameters contain special characters, they must be quoted as well.\n\nPandas\n~~~~~~\n\nMinimal example for a Pandas DataFrame:\n\n.. code:: python\n\n from pyathena import connect\n import pandas as pd\n\n conn = connect(aws_access_key_id='YOUR_ACCESS_KEY_ID',\n aws_secret_access_key='YOUR_SECRET_ACCESS_KEY',\n s3_staging_dir='s3://YOUR_S3_BUCKET/path/to/',\n region_name='us-west-2')\n df = pd.read_sql(\"SELECT * FROM many_rows\", conn)\n print(df.head())\n\nAs a Pandas DataFrame:\n\n.. code:: python\n\n from pyathena import connect\n from pyathena.util import as_pandas\n\n cursor = connect(aws_access_key_id='YOUR_ACCESS_KEY_ID',\n aws_secret_access_key='YOUR_SECRET_ACCESS_KEY',\n s3_staging_dir='s3://YOUR_S3_BUCKET/path/to/',\n region_name='us-west-2').cursor()\n cursor.execute(\"SELECT * FROM many_rows\")\n df = as_pandas(cursor)\n print(df.describe())\n\nIf you want to work with a Pandas `DataFrame object`_ directly, you can use `PandasCursor`_.\n\nAsynchronousCursor\n~~~~~~~~~~~~~~~~~~\n\nAsynchronousCursor is a simple implementation built on the concurrent.futures package.\nOn Python 2.7, the `backport of the concurrent.futures`_ package is used.\nThis cursor is not `DB API 2.0 (PEP 249)`_ compliant.\n\nYou can use the AsynchronousCursor by specifying ``cursor_class``\nwith the connect method or the connection object.\n\n.. 
code:: python\n\n from pyathena import connect\n from pyathena.async_cursor import AsyncCursor\n\n cursor = connect(s3_staging_dir='s3://YOUR_S3_BUCKET/path/to/',\n region_name='us-west-2',\n cursor_class=AsyncCursor).cursor()\n\n.. code:: python\n\n from pyathena.connection import Connection\n from pyathena.async_cursor import AsyncCursor\n\n cursor = Connection(s3_staging_dir='s3://YOUR_S3_BUCKET/path/to/',\n region_name='us-west-2',\n cursor_class=AsyncCursor).cursor()\n\nIt can also be used by specifying the cursor class when calling the connection object's cursor method.\n\n.. code:: python\n\n from pyathena import connect\n from pyathena.async_cursor import AsyncCursor\n\n cursor = connect(s3_staging_dir='s3://YOUR_S3_BUCKET/path/to/',\n region_name='us-west-2').cursor(AsyncCursor)\n\n.. code:: python\n\n from pyathena.connection import Connection\n from pyathena.async_cursor import AsyncCursor\n\n cursor = Connection(s3_staging_dir='s3://YOUR_S3_BUCKET/path/to/',\n region_name='us-west-2').cursor(AsyncCursor)\n\nThe default number of workers is 5 times the number of CPUs, or 5 if the CPU count cannot be determined.\nTo change the number of workers, specify ``max_workers`` like the following.\n\n.. code:: python\n\n from pyathena import connect\n from pyathena.async_cursor import AsyncCursor\n\n cursor = connect(s3_staging_dir='s3://YOUR_S3_BUCKET/path/to/',\n region_name='us-west-2',\n cursor_class=AsyncCursor).cursor(max_workers=10)\n\nThe execute method of the AsynchronousCursor returns a tuple of the query ID and a `future object`_.\n\n.. 
code:: python\n\n from pyathena import connect\n from pyathena.async_cursor import AsyncCursor\n\n cursor = connect(s3_staging_dir='s3://YOUR_S3_BUCKET/path/to/',\n region_name='us-west-2',\n cursor_class=AsyncCursor).cursor()\n\n query_id, future = cursor.execute(\"SELECT * FROM many_rows\")\n\nThe `future object`_ resolves to an ``AthenaResultSet`` object.\nThis object exposes an interface for fetching and iterating over query results, just like a synchronous cursor.\nIt also carries information about the query execution.\n\n.. code:: python\n\n from pyathena import connect\n from pyathena.async_cursor import AsyncCursor\n\n cursor = connect(s3_staging_dir='s3://YOUR_S3_BUCKET/path/to/',\n region_name='us-west-2',\n cursor_class=AsyncCursor).cursor()\n\n query_id, future = cursor.execute(\"SELECT * FROM many_rows\")\n result_set = future.result()\n print(result_set.state)\n print(result_set.state_change_reason)\n print(result_set.completion_date_time)\n print(result_set.submission_date_time)\n print(result_set.data_scanned_in_bytes)\n print(result_set.execution_time_in_millis)\n print(result_set.output_location)\n print(result_set.description)\n for row in result_set:\n print(row)\n\n.. code:: python\n\n from pyathena import connect\n from pyathena.async_cursor import AsyncCursor\n\n cursor = connect(s3_staging_dir='s3://YOUR_S3_BUCKET/path/to/',\n region_name='us-west-2',\n cursor_class=AsyncCursor).cursor()\n\n query_id, future = cursor.execute(\"SELECT * FROM many_rows\")\n result_set = future.result()\n print(result_set.fetchall())\n\nA query ID is required to cancel a query with the asynchronous cursor.\n\n.. 
code:: python\n\n from pyathena import connect\n from pyathena.async_cursor import AsyncCursor\n\n cursor = connect(s3_staging_dir='s3://YOUR_S3_BUCKET/path/to/',\n region_name='us-west-2',\n cursor_class=AsyncCursor).cursor()\n\n query_id, future = cursor.execute(\"SELECT * FROM many_rows\")\n cursor.cancel(query_id)\n\nNOTE: The cancel method of the `future object`_ does not cancel the query.\n\n.. _`backport of the concurrent.futures`: https://pypi.python.org/pypi/futures\n.. _`future object`: https://docs.python.org/3/library/concurrent.futures.html#future-objects\n\nPandasCursor\n~~~~~~~~~~~~\n\nPandasCursor directly handles the CSV file that the query execution writes to S3.\nAfter executing the query, this cursor downloads the CSV file and loads it into a `DataFrame object`_.\nThis performs better than fetching data through the standard cursor.\n\nYou can use the PandasCursor by specifying ``cursor_class``\nwith the connect method or the connection object.\n\n.. code:: python\n\n from pyathena import connect\n from pyathena.pandas_cursor import PandasCursor\n\n cursor = connect(s3_staging_dir='s3://YOUR_S3_BUCKET/path/to/',\n region_name='us-west-2',\n cursor_class=PandasCursor).cursor()\n\n.. code:: python\n\n from pyathena.connection import Connection\n from pyathena.pandas_cursor import PandasCursor\n\n cursor = Connection(s3_staging_dir='s3://YOUR_S3_BUCKET/path/to/',\n region_name='us-west-2',\n cursor_class=PandasCursor).cursor()\n\nIt can also be used by specifying the cursor class when calling the connection object's cursor method.\n\n.. code:: python\n\n from pyathena import connect\n from pyathena.pandas_cursor import PandasCursor\n\n cursor = connect(s3_staging_dir='s3://YOUR_S3_BUCKET/path/to/',\n region_name='us-west-2').cursor(PandasCursor)\n\n.. 
code:: python\n\n from pyathena.connection import Connection\n from pyathena.pandas_cursor import PandasCursor\n\n cursor = Connection(s3_staging_dir='s3://YOUR_S3_BUCKET/path/to/',\n region_name='us-west-2').cursor(PandasCursor)\n\nThe as_pandas method returns a `DataFrame object`_.\n\n.. code:: python\n\n from pyathena import connect\n from pyathena.pandas_cursor import PandasCursor\n\n cursor = connect(s3_staging_dir='s3://YOUR_S3_BUCKET/path/to/',\n region_name='us-west-2',\n cursor_class=PandasCursor).cursor()\n\n df = cursor.execute(\"SELECT * FROM many_rows\").as_pandas()\n print(df.describe())\n print(df.head())\n\nFetching and iterating over query results are also supported.\n\n.. code:: python\n\n from pyathena import connect\n from pyathena.pandas_cursor import PandasCursor\n\n cursor = connect(s3_staging_dir='s3://YOUR_S3_BUCKET/path/to/',\n region_name='us-west-2',\n cursor_class=PandasCursor).cursor()\n\n cursor.execute(\"SELECT * FROM many_rows\")\n print(cursor.fetchone())\n print(cursor.fetchmany())\n print(cursor.fetchall())\n\n.. code:: python\n\n from pyathena import connect\n from pyathena.pandas_cursor import PandasCursor\n\n cursor = connect(s3_staging_dir='s3://YOUR_S3_BUCKET/path/to/',\n region_name='us-west-2',\n cursor_class=PandasCursor).cursor()\n\n cursor.execute(\"SELECT * FROM many_rows\")\n for row in cursor:\n print(row)\n\nAthena's DATE and TIMESTAMP data types are returned as the `pandas.Timestamp`_ type.\n\n.. code:: python\n\n from pyathena import connect\n from pyathena.pandas_cursor import PandasCursor\n\n cursor = connect(s3_staging_dir='s3://YOUR_S3_BUCKET/path/to/',\n region_name='us-west-2',\n cursor_class=PandasCursor).cursor()\n\n cursor.execute(\"SELECT col_timestamp FROM one_row_complex\")\n print(type(cursor.fetchone()[0])) # \n\nExecution information of the query can also be retrieved.\n\n.. 
code:: python\n\n from pyathena import connect\n from pyathena.pandas_cursor import PandasCursor\n\n cursor = connect(s3_staging_dir='s3://YOUR_S3_BUCKET/path/to/',\n region_name='us-west-2',\n cursor_class=PandasCursor).cursor()\n\n cursor.execute(\"SELECT * FROM many_rows\")\n print(cursor.state)\n print(cursor.state_change_reason)\n print(cursor.completion_date_time)\n print(cursor.submission_date_time)\n print(cursor.data_scanned_in_bytes)\n print(cursor.execution_time_in_millis)\n print(cursor.output_location)\n\nNOTE: PandasCursor loads the entire CSV file into memory, so pay attention to memory usage for large result sets.\n\n.. _`DataFrame object`: https://pandas.pydata.org/pandas-docs/stable/generated/pandas.DataFrame.html\n.. _`pandas.Timestamp`: https://pandas.pydata.org/pandas-docs/stable/generated/pandas.Timestamp.html\n\nCredentials\n-----------\n\nSupports `Boto3 credentials`_.\n\n.. _`Boto3 credentials`: http://boto3.readthedocs.io/en/latest/guide/configuration.html\n\nAdditional environment variable:\n\n.. code:: bash\n\n $ export AWS_ATHENA_S3_STAGING_DIR=s3://YOUR_S3_BUCKET/path/to/\n\nTesting\n-------\n\nThe tests depend on the following environment variables:\n\n.. code:: bash\n\n $ export AWS_ACCESS_KEY_ID=YOUR_ACCESS_KEY_ID\n $ export AWS_SECRET_ACCESS_KEY=YOUR_SECRET_ACCESS_KEY\n $ export AWS_DEFAULT_REGION=us-west-2\n $ export AWS_ATHENA_S3_STAGING_DIR=s3://YOUR_S3_BUCKET/path/to/\n\nRun tests\n~~~~~~~~~\n\n.. code:: bash\n\n $ pip install pipenv\n $ pipenv install --dev\n $ pipenv run scripts/test_data/upload_test_data.sh\n $ pipenv run pytest\n $ pipenv run scripts/test_data/delete_test_data.sh\n\nRun tests on multiple Python versions\n~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~\n\n.. 
code:: bash\n\n $ pip install pipenv\n $ pipenv install --dev\n $ pipenv run scripts/test_data/upload_test_data.sh\n $ pyenv local 3.7.2 3.6.8 3.5.7 2.7.16\n $ pipenv run tox\n $ pipenv run scripts/test_data/delete_test_data.sh\n\n\n", "description_content_type": "", "docs_url": null, "download_url": "", "downloads": { "last_day": -1, "last_month": -1, "last_week": -1 }, "home_page": "https://github.com/laughingman7743/PyAthena/", "keywords": "", "license": "MIT License", "maintainer": "", "maintainer_email": "", "name": "lambda-pyathena", "package_url": "https://pypi.org/project/lambda-pyathena/", "platform": "", "project_url": "https://pypi.org/project/lambda-pyathena/", "project_urls": { "Homepage": "https://github.com/laughingman7743/PyAthena/" }, "release_url": "https://pypi.org/project/lambda-pyathena/1.6.1/", "requires_dist": [ "future", "tenacity (>=4.1.0)", "futures ; python_version == \"2.7\"", "pandas (>=0.24.0) ; extra == 'pandas'", "SQLAlchemy (<1.3.0,>=1.0.0) ; extra == 'sqlalchemy'" ], "requires_python": "", "summary": "Python DB API 2.0 (PEP 249) compliant client for Amazon Athena", "version": "1.6.1" }, "last_serial": 5298746, "releases": { "1.3.0": [ { "comment_text": "", "digests": { "md5": "5023a7293e569383a01447ddf5f0f50a", "sha256": "8e1d4b33ca541e8608252aacedb7bb6a2dea8ee1c62196bc1dbabe983cf06298" }, "downloads": -1, "filename": "lambda_pyathena-1.3.0-py2.py3-none-any.whl", "has_sig": false, "md5_digest": "5023a7293e569383a01447ddf5f0f50a", "packagetype": "bdist_wheel", "python_version": "py2.py3", "requires_python": null, "size": 29085, "upload_time": "2018-08-07T16:05:09", "url": "https://files.pythonhosted.org/packages/fe/32/c7db6baeacd0a2015ea25dcd1e4881079c3d3f69056eee71a2be4069760c/lambda_pyathena-1.3.0-py2.py3-none-any.whl" }, { "comment_text": "", "digests": { "md5": "e96dfe667ec6ec8e133720833b63778e", "sha256": "7918dc1551a62721cf1c123b42c77e049d4fb613cef599014dd3796226e3a523" }, "downloads": -1, "filename": 
"lambda-pyathena-1.3.0.tar.gz", "has_sig": false, "md5_digest": "e96dfe667ec6ec8e133720833b63778e", "packagetype": "sdist", "python_version": "source", "requires_python": null, "size": 23725, "upload_time": "2018-08-07T16:05:10", "url": "https://files.pythonhosted.org/packages/b5/a7/428ae6d99f823556c0df5df0760bc6ad1ba021635b7e3e49e8322d795821/lambda-pyathena-1.3.0.tar.gz" } ], "1.4.1": [ { "comment_text": "", "digests": { "md5": "fd1a4308ce1619fc6c89f5a28ad5391a", "sha256": "14d1074a87cbf1871613d338b902bdd7a33e2126bc76b73d7aad6321175a54d2" }, "downloads": -1, "filename": "lambda_pyathena-1.4.1-py2.py3-none-any.whl", "has_sig": false, "md5_digest": "fd1a4308ce1619fc6c89f5a28ad5391a", "packagetype": "bdist_wheel", "python_version": "py2.py3", "requires_python": null, "size": 47918, "upload_time": "2018-09-26T19:57:32", "url": "https://files.pythonhosted.org/packages/20/1c/20c707894c73eed4bdb1554d9c33b6a06afcf96fa14625c2d237d3edb493/lambda_pyathena-1.4.1-py2.py3-none-any.whl" }, { "comment_text": "", "digests": { "md5": "a761707ec07c68297d6b8f5c17c2d432", "sha256": "e0900581acc2b9a5f3bcfe76bbaedc01c44553fd6c8139a35a29127719a6f0e4" }, "downloads": -1, "filename": "lambda-pyathena-1.4.1.tar.gz", "has_sig": false, "md5_digest": "a761707ec07c68297d6b8f5c17c2d432", "packagetype": "sdist", "python_version": "source", "requires_python": null, "size": 36716, "upload_time": "2018-09-26T19:57:34", "url": "https://files.pythonhosted.org/packages/0a/d1/51ce122b76342773c33b9e80c554c2aaf9ad23115b6e4b04dad0e71684b3/lambda-pyathena-1.4.1.tar.gz" } ], "1.4.4": [ { "comment_text": "", "digests": { "md5": "332cbcdab6fe916e2c890ed6c3ee9f88", "sha256": "e2ce6219821cc72c85b0c86b838e982f875486121d42ab31399584967de7e154" }, "downloads": -1, "filename": "lambda_pyathena-1.4.4-py2.py3-none-any.whl", "has_sig": false, "md5_digest": "332cbcdab6fe916e2c890ed6c3ee9f88", "packagetype": "bdist_wheel", "python_version": "py2.py3", "requires_python": null, "size": 50305, "upload_time": 
"2019-01-04T23:34:20", "url": "https://files.pythonhosted.org/packages/ec/84/0bdf0e990ee46e8daa4e934bceeec60955e7a4d67ddb326cf8b4284fd8b1/lambda_pyathena-1.4.4-py2.py3-none-any.whl" }, { "comment_text": "", "digests": { "md5": "9d4370089c3cab170a1de4a64eb3863a", "sha256": "b9322751354d708d282819e733fefe1b5e336fc16d76f26ec4c9e3e9fbc5f504" }, "downloads": -1, "filename": "lambda-pyathena-1.4.4.tar.gz", "has_sig": false, "md5_digest": "9d4370089c3cab170a1de4a64eb3863a", "packagetype": "sdist", "python_version": "source", "requires_python": null, "size": 35956, "upload_time": "2019-01-04T23:34:21", "url": "https://files.pythonhosted.org/packages/4f/61/e9f6f614577f62255fc1fba72f4544e2bf7f8ba2ac88549baa72723861aa/lambda-pyathena-1.4.4.tar.gz" } ], "1.5.0": [ { "comment_text": "", "digests": { "md5": "15939d38c82bd47cccf95f0b9d2d90d8", "sha256": "8fa2a932eab80b56477c7569eb31fa3fbf5052eb23e2c3573ea39fb6f5a9ea31" }, "downloads": -1, "filename": "lambda_pyathena-1.5.0-py2.py3-none-any.whl", "has_sig": false, "md5_digest": "15939d38c82bd47cccf95f0b9d2d90d8", "packagetype": "bdist_wheel", "python_version": "py2.py3", "requires_python": null, "size": 50607, "upload_time": "2019-03-08T20:06:03", "url": "https://files.pythonhosted.org/packages/84/c6/97dee31c9d2a18a32748b2f0a978bd3a42041f03cfd0cd3384446102ce77/lambda_pyathena-1.5.0-py2.py3-none-any.whl" }, { "comment_text": "", "digests": { "md5": "ba8ece7d2227b762e11da1e46e4be823", "sha256": "d771afed0c43ca254ad6eb7d60bdeef9ac8553292dace5de2e07db72da537a43" }, "downloads": -1, "filename": "lambda-pyathena-1.5.0.tar.gz", "has_sig": false, "md5_digest": "ba8ece7d2227b762e11da1e46e4be823", "packagetype": "sdist", "python_version": "source", "requires_python": null, "size": 38456, "upload_time": "2019-03-08T20:06:05", "url": "https://files.pythonhosted.org/packages/f3/1e/abf02aac4af2ca0abff3c14a260e9087fd85eb6a9997be9b50fc854379c3/lambda-pyathena-1.5.0.tar.gz" } ], "1.6.1": [ { "comment_text": "", "digests": { "md5": 
"8fb4283ec3dc346fa245d2799cec91d3", "sha256": "d079b64cd10a4ea81a6dd76715442e38b5a9ff626c1882d45043ad06461f0b4e" }, "downloads": -1, "filename": "lambda_pyathena-1.6.1-py2.py3-none-any.whl", "has_sig": false, "md5_digest": "8fb4283ec3dc346fa245d2799cec91d3", "packagetype": "bdist_wheel", "python_version": "py2.py3", "requires_python": null, "size": 50753, "upload_time": "2019-05-21T16:41:08", "url": "https://files.pythonhosted.org/packages/89/21/12f76e3bd81ead659116a8eef4458aff80ef5433f5f5ed683253cc24e295/lambda_pyathena-1.6.1-py2.py3-none-any.whl" }, { "comment_text": "", "digests": { "md5": "46f992801e8482a68262d21f17e342af", "sha256": "d24b2e1670920aa218721aa4c2c6f263b22cf73fa3eb17ba2f2bab71fd48cec7" }, "downloads": -1, "filename": "lambda-pyathena-1.6.1.tar.gz", "has_sig": false, "md5_digest": "46f992801e8482a68262d21f17e342af", "packagetype": "sdist", "python_version": "source", "requires_python": null, "size": 39706, "upload_time": "2019-05-21T16:41:09", "url": "https://files.pythonhosted.org/packages/5c/6d/fad43dfad02f5f213a41dbfbf2ffd322b25bbc672569ef8e7669d271e26a/lambda-pyathena-1.6.1.tar.gz" } ] }, "urls": [ { "comment_text": "", "digests": { "md5": "8fb4283ec3dc346fa245d2799cec91d3", "sha256": "d079b64cd10a4ea81a6dd76715442e38b5a9ff626c1882d45043ad06461f0b4e" }, "downloads": -1, "filename": "lambda_pyathena-1.6.1-py2.py3-none-any.whl", "has_sig": false, "md5_digest": "8fb4283ec3dc346fa245d2799cec91d3", "packagetype": "bdist_wheel", "python_version": "py2.py3", "requires_python": null, "size": 50753, "upload_time": "2019-05-21T16:41:08", "url": "https://files.pythonhosted.org/packages/89/21/12f76e3bd81ead659116a8eef4458aff80ef5433f5f5ed683253cc24e295/lambda_pyathena-1.6.1-py2.py3-none-any.whl" }, { "comment_text": "", "digests": { "md5": "46f992801e8482a68262d21f17e342af", "sha256": "d24b2e1670920aa218721aa4c2c6f263b22cf73fa3eb17ba2f2bab71fd48cec7" }, "downloads": -1, "filename": "lambda-pyathena-1.6.1.tar.gz", "has_sig": false, "md5_digest": 
"46f992801e8482a68262d21f17e342af", "packagetype": "sdist", "python_version": "source", "requires_python": null, "size": 39706, "upload_time": "2019-05-21T16:41:09", "url": "https://files.pythonhosted.org/packages/5c/6d/fad43dfad02f5f213a41dbfbf2ffd322b25bbc672569ef8e7669d271e26a/lambda-pyathena-1.6.1.tar.gz" } ] }