{
"info": {
"author": "Brian T. Park",
"author_email": "brian@xparks.net",
"bugtrack_url": null,
"classifiers": [],
"description": "BigQuery Schema Generator\n=========================\n\nThis script generates the BigQuery schema from the newline-delimited\ndata records on the STDIN. The records can be in JSON format or CSV\nformat. The BigQuery data importer (``bq load``) uses only the first 100\nlines when the schema auto-detection feature is enabled. In contrast,\nthis script uses all data records to generate the schema.\n\nUsage:\n\n::\n\n $ generate-schema < file.data.json > file.schema.json\n $ generate-schema --input_format csv < file.data.csv > file.schema.json\n\nVersion: 0.5.1 (2019-06-19)\n\nBackground\n----------\n\nData can be imported into\n`BigQuery `__ using the\n`bq `__ command\nline tool. It accepts a number of data formats including CSV or\nnewline-delimited JSON. The data can be loaded into an existing table or\na new table can be created during the loading process. The structure of\nthe table is defined by its\n`schema `__. The table's\nschema can be defined manually or the schema can be\n`auto-detected `__.\n\nWhen the auto-detect feature is used, the BigQuery data importer\nexamines only the `first 100\nrecords `__ of the\ninput data. In many cases, this is sufficient because the data records\nwere dumped from another database and the exact schema of the source\ntable was known. However, for data extracted from a service (e.g. using\na REST API) the record fields could have been organically added at later\ndates. In this case, the first 100 records do not contain fields which\nare present in later records. The **bq load** auto-detection fails and\nthe data fails to load.\n\nThe **bq load** tool does not support processing the entire\ndataset to determine a more accurate schema. This script fills in that\ngap. It processes the entire dataset given on the STDIN and outputs the\nBigQuery schema in JSON format on the STDOUT. 
This schema file can be\nfed back into the **bq load** tool to create a table that is more\ncompatible with the data fields in the input dataset.\n\nInstallation\n------------\n\nInstall from the `PyPI `__ repository using\n``pip3``. If you want to install the package for your entire system\nglobally, use\n\n::\n\n $ sudo -H pip3 install bigquery_schema_generator\n\nIf you are using a virtual environment (such as\n`venv `__), then you don't\nneed the ``sudo`` command, and you can just type:\n\n::\n\n $ pip3 install bigquery_schema_generator\n\nA successful install should print out something like the following (the\nversion number may be different):\n\n::\n\n Collecting bigquery-schema-generator\n Installing collected packages: bigquery-schema-generator\n Successfully installed bigquery-schema-generator-0.3.2\n\nThe shell script ``generate-schema`` is installed in the same directory\nas ``pip3``.\n\nUbuntu Linux\n~~~~~~~~~~~~\n\nUnder Ubuntu Linux, you should find the ``generate-schema`` script at\n``/usr/local/bin/generate-schema``.\n\nMacOS\n~~~~~\n\nIf you installed Python from `Python Releases for Mac OS\nX `__, then\n``/usr/local/bin/pip3`` is a symlink to\n``/Library/Frameworks/Python.framework/Versions/3.6/bin/pip3``. So\n``generate-schema`` is installed at\n``/Library/Frameworks/Python.framework/Versions/3.6/bin/generate-schema``.\n\nThe Python installer updates ``$HOME/.bash_profile`` to add\n``/Library/Frameworks/Python.framework/Versions/3.6/bin`` to the\n``$PATH`` environment variable. So you should be able to run the\n``generate-schema`` command without typing in the full path.\n\nUsage\n-----\n\nThe ``generate_schema.py`` script accepts a newline-delimited JSON or\nCSV data file on the STDIN. JSON input format has been tested\nextensively. CSV input format was added more recently (in v0.4) using\nthe ``--input_format csv`` flag. The support is not as robust as for\nJSON files. 
For example, CSV format supports only the comma separator, and\ndoes not support the pipe (``|``) or tab (``\\t``) character.\n\nUnlike ``bq load``, the ``generate_schema.py`` script reads every record\nin the input data file to deduce the table's schema. It prints the JSON\nformatted schema file on the STDOUT.\n\nThere are at least 3 ways to run this script:\n\n**1) Shell script**\n\nIf you installed using ``pip3``, then it should have installed a small\nhelper script named ``generate-schema`` in the ``bin``\ndirectory of your current environment (depending on whether you are\nusing a virtual environment).\n\n::\n\n $ generate-schema < file.data.json > file.schema.json\n\n**2) Python module**\n\nYou can invoke the module directly using:\n\n::\n\n $ python3 -m bigquery_schema_generator.generate_schema < file.data.json > file.schema.json\n\nThis is essentially what the ``generate-schema`` command does.\n\n**3) Python script**\n\nIf you retrieved this code from its `GitHub\nrepository `__,\nthen you can invoke the Python script directly:\n\n::\n\n $ ./generate_schema.py < file.data.json > file.schema.json\n\nUsing the Schema Output\n~~~~~~~~~~~~~~~~~~~~~~~\n\nThe resulting schema file can be given to the **bq load** command using\nthe ``--schema`` flag:\n\n::\n\n $ bq load --source_format NEWLINE_DELIMITED_JSON \\\n --ignore_unknown_values \\\n --schema file.schema.json \\\n mydataset.mytable \\\n file.data.json\n\nwhere ``mydataset.mytable`` is the target table in BigQuery.\n\nFor debugging purposes, here is the equivalent ``bq load`` command using\nschema autodetection:\n\n::\n\n $ bq load --source_format NEWLINE_DELIMITED_JSON \\\n --autodetect \\\n mydataset.mytable \\\n file.data.json\n\nIf the input file is in CSV format, the first line will be the header\nline, which enumerates the names of the columns. But this header line\nmust be skipped when importing the file into the BigQuery table. 
We\naccomplish this using the ``--skip_leading_rows`` flag:\n\n::\n\n $ bq load --source_format CSV \\\n --schema file.schema.json \\\n --skip_leading_rows 1 \\\n mydataset.mytable \\\n file.data.csv\n\nHere is the equivalent ``bq load`` command for CSV files using\nautodetection:\n\n::\n\n $ bq load --source_format CSV \\\n --autodetect \\\n mydataset.mytable \\\n file.data.csv\n\nA useful flag for ``bq load``, particularly for JSON files, is\n``--ignore_unknown_values``, which causes ``bq load`` to ignore fields\nin the input data which are not defined in the schema. When\n``generate_schema.py`` detects an inconsistency in the definition of a\nparticular field in the input data, it removes the field from the schema\ndefinition. Without ``--ignore_unknown_values``, ``bq load``\nfails when the inconsistent data record is read.\n\nAnother useful flag during development and debugging is ``--replace``,\nwhich replaces any existing BigQuery table.\n\nAfter the BigQuery table is loaded, the schema can be retrieved using:\n\n::\n\n $ bq show --schema mydataset.mytable | python3 -m json.tool\n\n(The ``python3 -m json.tool`` command will pretty-print the JSON\nformatted schema file. An alternative is the `jq\ncommand `__.) 
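\n\nAs a quick sanity check (a sketch; this assumes the table was loaded using ``file.schema.json``, and that your shell supports process substitution), the retrieved schema can be compared directly against the generated one:\n\n::\n\n $ diff <(bq show --schema mydataset.mytable | python3 -m json.tool) \\\n <(python3 -m json.tool < file.schema.json)\n\n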
The resulting schema file\nshould be identical to ``file.schema.json``.\n\nFlag Options\n~~~~~~~~~~~~\n\nThe ``generate_schema.py`` script supports a handful of command line\nflags as shown by the ``--help`` flag below.\n\nHelp (``--help``)\n^^^^^^^^^^^^^^^^^\n\nPrint the built-in help strings:\n\n::\n\n $ generate-schema --help\n usage: generate_schema.py [-h] [--input_format INPUT_FORMAT] [--keep_nulls]\n [--quoted_values_are_strings] [--infer_mode]\n [--debugging_interval DEBUGGING_INTERVAL]\n [--debugging_map] [--sanitize_names]\n\n Generate BigQuery schema from JSON or CSV file.\n\n optional arguments:\n -h, --help show this help message and exit\n --input_format INPUT_FORMAT\n Specify an alternative input format ('csv', 'json')\n --keep_nulls Print the schema for null values, empty arrays or\n empty records\n --quoted_values_are_strings\n Quoted values should be interpreted as strings\n --infer_mode Determine if mode can be 'NULLABLE' or 'REQUIRED'\n --debugging_interval DEBUGGING_INTERVAL\n Number of lines between heartbeat debugging messages\n --debugging_map Print the metadata schema_map instead of the schema\n --sanitize_names Forces schema name to comply with BigQuery naming\n standard\n\nInput Format (``--input_format``)\n^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^\n\nSpecifies the format of the input file, either ``json`` (default) or\n``csv``.\n\nIf ``csv`` is specified, the ``--keep_nulls`` flag is automatically\nactivated. This is required because CSV columns are defined\npositionally, so the schema file must contain all the columns specified\nby the CSV file, in the same order, even if the column contains an empty\nvalue for every record.\n\nSee `Issue\n#26 `__\nfor implementation details.\n\nKeep Nulls (``--keep_nulls``)\n^^^^^^^^^^^^^^^^^^^^^^^^^^^^^\n\nNormally, when the input data file contains a field which has a null,\nempty array, or empty record as its value, the field is suppressed in the\nschema file. 
This flag enables this field to be included in the schema\nfile.\n\nIn other words, using a data file containing just nulls and empty\nvalues:\n\n::\n\n $ generate-schema\n { \"s\": null, \"a\": [], \"m\": {} }\n ^D\n INFO:root:Processed 1 lines\n []\n\nWith the ``--keep_nulls`` flag, we get:\n\n::\n\n $ generate-schema --keep_nulls\n { \"s\": null, \"a\": [], \"m\": {} }\n ^D\n INFO:root:Processed 1 lines\n [\n {\n \"mode\": \"REPEATED\",\n \"type\": \"STRING\",\n \"name\": \"a\"\n },\n {\n \"mode\": \"NULLABLE\",\n \"fields\": [\n {\n \"mode\": \"NULLABLE\",\n \"type\": \"STRING\",\n \"name\": \"__unknown__\"\n }\n ],\n \"type\": \"RECORD\",\n \"name\": \"m\"\n },\n {\n \"mode\": \"NULLABLE\",\n \"type\": \"STRING\",\n \"name\": \"s\"\n }\n ]\n\nQuoted Values Are Strings (``--quoted_values_are_strings``)\n^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^\n\nBy default, quoted values are inspected to determine if they can be\ninterpreted as ``DATE``, ``TIME``, ``TIMESTAMP``, ``BOOLEAN``,\n``INTEGER``, or ``FLOAT``. This is consistent with the algorithm used by\n``bq load``. However, for the ``BOOLEAN``, ``INTEGER``, or ``FLOAT``\ntypes, it is sometimes more useful to interpret those as normal strings\ninstead. This flag disables type inference for ``BOOLEAN``, ``INTEGER``,\nand ``FLOAT`` types inside quoted strings.\n\n::\n\n $ generate-schema\n { \"name\": \"1\" }\n ^D\n [\n {\n \"mode\": \"NULLABLE\",\n \"name\": \"name\",\n \"type\": \"INTEGER\"\n }\n ]\n\n $ generate-schema --quoted_values_are_strings\n { \"name\": \"1\" }\n ^D\n [\n {\n \"mode\": \"NULLABLE\",\n \"name\": \"name\",\n \"type\": \"STRING\"\n }\n ]\n\nInfer Mode (``--infer_mode``)\n^^^^^^^^^^^^^^^^^^^^^^^^^^^^^\n\nSet the schema ``mode`` of a field to ``REQUIRED`` instead of the\ndefault ``NULLABLE`` if the field contains a non-null or non-empty value\nfor every data record in the input file. This option is available only\nfor CSV (``--input_format csv``) files. 
It is theoretically possible to\nimplement this feature for JSON files, but it is too difficult to do in\npractice because fields are often completely missing from a given JSON\nrecord (instead of explicitly being defined to be ``null``).\n\nSee `Issue\n#28 `__\nfor implementation details.\n\nDebugging Interval (``--debugging_interval``)\n^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^\n\nBy default, the ``generate_schema.py`` script prints a short progress\nmessage every 1000 lines of input data. This interval can be changed\nusing the ``--debugging_interval`` flag.\n\n::\n\n $ generate-schema --debugging_interval 50 < file.data.json > file.schema.json\n\nDebugging Map (``--debugging_map``)\n^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^\n\nInstead of printing out the BigQuery schema, the ``--debugging_map`` flag\nprints out the bookkeeping metadata map, which is used internally to keep\ntrack of the various fields and their types that were inferred from the\ndata file. This flag is intended to be used for debugging.\n\n::\n\n $ generate-schema --debugging_map < file.data.json > file.schema.json\n\nSanitize Names (``--sanitize_names``)\n^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^\n\nBigQuery column names are restricted to certain characters and a maximum\nlength. With this flag, column names are sanitized so that any character\noutside of ASCII letters, numbers and underscore (``[a-zA-Z0-9_]``) is\nconverted to an underscore. (For example, \"go&2#there!\" is converted to\n\"go\\_2\\_there\\_\".) Names longer than 128 characters are truncated to\n128.\n\nSchema Types\n------------\n\nSupported Types\n~~~~~~~~~~~~~~~\n\nThe **bq show --schema** command produces a JSON schema file that uses\nthe older `Legacy SQL date\ntypes `__. 
For\ncompatibility, the **generate-schema** script also generates a schema\nfile using the legacy data types.\n\nThe supported types are:\n\n- ``BOOLEAN``\n- ``INTEGER``\n- ``FLOAT``\n- ``STRING``\n- ``TIMESTAMP``\n- ``DATE``\n- ``TIME``\n- ``RECORD``\n\nThe ``generate-schema`` script supports both ``NULLABLE`` and\n``REPEATED`` modes of all of the above types.\n\nThe supported format of ``TIMESTAMP`` is as close as practical to the\n`bq load\nformat `__:\n\n::\n\n YYYY-[M]M-[D]D[( |T)[H]H:[M]M:[S]S[.DDDDDD]][time zone]\n\nwhich appears to be an extension of the `ISO 8601\nformat `__. The difference from\n``bq load`` is that the ``[time zone]`` component can be only:\n\n- ``Z``\n- ``UTC`` (same as ``Z``)\n- ``(+|-)H[H][:M[M]]``\n\nNote that BigQuery supports up to 6 decimal places after the integer\n'second' component. ``generate-schema`` follows the same restriction for\ncompatibility. If your input file contains more than 6 decimal places,\nyou need to write a data cleansing filter to fix this.\n\nThe suffix ``UTC`` is neither standard ISO 8601 nor `documented by\nGoogle `__,\nbut the ``UTC`` suffix is used by ``bq extract`` and the web interface.\n(See `Issue\n19 `__.)\n\nTimezone names from the `tz database `__\n(e.g. 
"America/Los\\_Angeles") are *not* supported by\n``generate-schema``.\n\nThe following types are *not* supported at all:\n\n- ``BYTES``\n- ``DATETIME`` (unable to distinguish from ``TIMESTAMP``)\n\nType Inference Rules\n~~~~~~~~~~~~~~~~~~~~~\n\nThe ``generate-schema`` script attempts to emulate the various type\nconversion and compatibility rules implemented by **bq load**:\n\n- ``INTEGER`` can upgrade to ``FLOAT``\n\n - if a field in an early record is an ``INTEGER``, but a subsequent\n record shows this field to have a ``FLOAT`` value, the type of the\n field will be upgraded to a ``FLOAT``\n - the reverse does not happen; once a field is a ``FLOAT``, it will\n remain a ``FLOAT``\n\n- conflicting ``TIME``, ``DATE``, ``TIMESTAMP`` types upgrade to\n ``STRING``\n\n - if a field is determined to have one type of \"time\" in one record,\n and subsequently a different \"time\" type, then the field will be\n assigned a ``STRING`` type\n\n- ``NULLABLE RECORD`` can upgrade to a ``REPEATED RECORD``\n\n - a field may be defined as ``RECORD`` (aka \"Struct\") type with\n ``{ ... }``\n - if the field is subsequently read as an array with a\n ``[{ ... }]``, the field is upgraded to a ``REPEATED RECORD``\n\n- a primitive type (``FLOAT``, ``INTEGER``, ``STRING``) cannot upgrade\n to a ``REPEATED`` primitive type\n\n - there's no technical reason why this cannot be allowed, but **bq\n load** does not support it, so we follow its behavior\n\n- a ``DATETIME`` field is always inferred to be a ``TIMESTAMP``\n\n - the format of these two fields is identical (in the absence of\n timezone)\n - we follow the same logic as **bq load** and always infer these as\n ``TIMESTAMP``\n\n- ``BOOLEAN``, ``INTEGER``, and ``FLOAT`` can appear inside quoted\n strings\n\n - In other words, ``\"true\"`` (or ``\"True\"`` or ``\"false\"``, etc) is\n considered a BOOLEAN type, ``\"1\"`` is considered an INTEGER type,\n and ``\"2.1\"`` is considered a FLOAT type. 
Luigi Mori\n (jtschichold@) added additional logic to replicate the type\n conversion logic used by ``bq load`` for these strings.\n - This type inference inside quoted strings can be disabled using\n the ``--quoted_values_are_strings`` flag\n - (See `Issue\n #22 `__\n for more details.)\n\n- ``INTEGER`` values overflowing a 64-bit signed integer upgrade to\n ``FLOAT``\n\n - integers greater than ``2^63-1`` (9223372036854775807)\n - integers less than ``-2^63`` (-9223372036854775808)\n - (See `Issue\n #18 `__\n for more details)\n\nExamples\n--------\n\nHere is an example of a single JSON data record on the STDIN (the ``^D``\nbelow means typing Control-D, which indicates \"end of file\" under Linux\nand MacOS):\n\n::\n\n $ generate-schema\n { \"s\": \"string\", \"b\": true, \"i\": 1, \"x\": 3.1, \"t\": \"2017-05-22T17:10:00-07:00\" }\n ^D\n INFO:root:Processed 1 lines\n [\n {\n \"mode\": \"NULLABLE\",\n \"name\": \"b\",\n \"type\": \"BOOLEAN\"\n },\n {\n \"mode\": \"NULLABLE\",\n \"name\": \"i\",\n \"type\": \"INTEGER\"\n },\n {\n \"mode\": \"NULLABLE\",\n \"name\": \"s\",\n \"type\": \"STRING\"\n },\n {\n \"mode\": \"NULLABLE\",\n \"name\": \"t\",\n \"type\": \"TIMESTAMP\"\n },\n {\n \"mode\": \"NULLABLE\",\n \"name\": \"x\",\n \"type\": \"FLOAT\"\n }\n ]\n\nIn most cases, the data will be stored in a file:\n\n::\n\n $ cat > file.data.json\n { \"a\": [1, 2] }\n { \"i\": 3 }\n ^D\n\n $ generate-schema < file.data.json > file.schema.json\n INFO:root:Processed 2 lines\n\n $ cat file.schema.json\n [\n {\n \"mode\": \"REPEATED\",\n \"name\": \"a\",\n \"type\": \"INTEGER\"\n },\n {\n \"mode\": \"NULLABLE\",\n \"name\": \"i\",\n \"type\": \"INTEGER\"\n }\n ]\n\nHere is the schema generated from a CSV input file. 
The first line is\nthe header containing the names of the columns, and the schema lists the\ncolumns in the same order as the header:\n\n::\n\n $ generate-schema --input_format csv\n e,b,c,d,a\n 1,x,true,,2.0\n 2,x,,,4\n 3,,,,\n ^D\n INFO:root:Processed 3 lines\n [\n {\n \"mode\": \"NULLABLE\",\n \"name\": \"e\",\n \"type\": \"INTEGER\"\n },\n {\n \"mode\": \"NULLABLE\",\n \"name\": \"b\",\n \"type\": \"STRING\"\n },\n {\n \"mode\": \"NULLABLE\",\n \"name\": \"c\",\n \"type\": \"BOOLEAN\"\n },\n {\n \"mode\": \"NULLABLE\",\n \"name\": \"d\",\n \"type\": \"STRING\"\n },\n {\n \"mode\": \"NULLABLE\",\n \"name\": \"a\",\n \"type\": \"FLOAT\"\n }\n ]\n\nHere is an example of the schema generated with the ``--infer_mode``\nflag:\n\n::\n\n $ generate-schema --input_format csv --infer_mode\n name,surname,age\n John\n Michael,,\n Maria,Smith,30\n Joanna,Anders,21\n ^D\n INFO:root:Processed 4 lines\n [\n {\n \"mode\": \"REQUIRED\",\n \"name\": \"name\",\n \"type\": \"STRING\"\n },\n {\n \"mode\": \"NULLABLE\",\n \"name\": \"surname\",\n \"type\": \"STRING\"\n },\n {\n \"mode\": \"NULLABLE\",\n \"name\": \"age\",\n \"type\": \"INTEGER\"\n }\n ]\n\nUsing as a Library\n------------------\n\nThe ``bigquery_schema_generator`` module can be used as a library by\nexternal Python client code by creating an instance of\n``SchemaGenerator`` and calling the ``run(input, output)`` method:\n\n.. code:: python\n\n from bigquery_schema_generator.generate_schema import SchemaGenerator\n\n generator = SchemaGenerator(\n input_format=input_format,\n infer_mode=infer_mode,\n keep_nulls=keep_nulls,\n quoted_values_are_strings=quoted_values_are_strings,\n debugging_interval=debugging_interval,\n debugging_map=debugging_map)\n generator.run(input_file, output_file)\n\nIf you need to process the generated schema programmatically, use the\n``deduce_schema()`` method and process the resulting ``schema_map`` and\n``error_logs`` data structures like this:\n\n.. 
code:: python\n\n from bigquery_schema_generator.generate_schema import SchemaGenerator\n ...\n schema_map, error_logs = generator.deduce_schema(input_file)\n\n for error in error_logs:\n logging.info(\"Problem on line %s: %s\", error['line'], error['msg'])\n\n schema = generator.flatten_schema(schema_map)\n json.dump(schema, output_file, indent=2)\n\nBenchmarks\n----------\n\nI wrote the ``bigquery_schema_generator/anonymize.py`` script to create\nan anonymized data file ``tests/testdata/anon1.data.json.gz``:\n\n::\n\n $ ./bigquery_schema_generator/anonymize.py < original.data.json \\\n > anon1.data.json\n $ gzip anon1.data.json\n\nThis data file is 290MB (5.6MB compressed) with 103080 data records.\n\nGenerating the schema using\n\n::\n\n $ bigquery_schema_generator/generate_schema.py < anon1.data.json \\\n > anon1.schema.json\n\ntook 67s on a Dell Precision M4700 laptop with an Intel Core i7-3840QM\nCPU @ 2.80GHz, 32GB of RAM, Ubuntu Linux 18.04, Python 3.6.7.\n\nSystem Requirements\n-------------------\n\nThis project was initially developed on Ubuntu 17.04 using Python 3.5.3.\nI have tested it on:\n\n- Ubuntu 18.04, Python 3.6.7\n- Ubuntu 17.10, Python 3.6.3\n- Ubuntu 17.04, Python 3.5.3\n- Ubuntu 16.04, Python 3.5.2\n- MacOS 10.14.2, `Python\n 3.6.4 `__\n- MacOS 10.13.2, `Python\n 3.6.4 `__\n\nChangelog\n---------\n\nSee `CHANGELOG.md `__.\n\nAuthors\n-------\n\n- Created by Brian T. Park (brian@xparks.net).\n- Type inference inside quoted strings by Luigi Mori (jtschichold@).\n- Flag to disable type inference inside quoted strings by Daniel Ecer\n (de-code@).\n- Support for CSV files and detection of ``REQUIRED`` fields by Sandor\n Korotkevics (korotkevics@).\n- Better support for using ``bigquery_schema_generator`` as a library\n from external Python code by StefanoG\\_ITA (StefanoGITA@).\n- Sanitizing of column names to valid BigQuery characters and length by\n Jon Warghed (jonwarghed@).\n\nLicense\n-------\n\nApache License 2.0",
"description_content_type": "",
"docs_url": null,
"download_url": "",
"downloads": {
"last_day": -1,
"last_month": -1,
"last_week": -1
},
"home_page": "https://github.com/bxparks/bigquery-schema-generator",
"keywords": "",
"license": "Apache 2.0",
"maintainer": "",
"maintainer_email": "",
"name": "bigquery-schema-generator",
"package_url": "https://pypi.org/project/bigquery-schema-generator/",
"platform": "",
"project_url": "https://pypi.org/project/bigquery-schema-generator/",
"project_urls": {
"Homepage": "https://github.com/bxparks/bigquery-schema-generator"
},
"release_url": "https://pypi.org/project/bigquery-schema-generator/0.5.1/",
"requires_dist": null,
"requires_python": "~=3.5",
"summary": "BigQuery schema generator from JSON or CSV data",
"version": "0.5.1"
},
"last_serial": 5410633,
"releases": {
"0.1": [
{
"comment_text": "",
"digests": {
"md5": "24cf9cc9d0c5f9f043cb304612367bc2",
"sha256": "f72de9d303d89997b7d83f1ecc046ff7d089a76f8ce864e55d80336c1c296eba"
},
"downloads": -1,
"filename": "bigquery-schema-generator-0.1.tar.gz",
"has_sig": false,
"md5_digest": "24cf9cc9d0c5f9f043cb304612367bc2",
"packagetype": "sdist",
"python_version": "source",
"requires_python": "~=3.5",
"size": 8142,
"upload_time": "2018-01-02T21:23:28",
"url": "https://files.pythonhosted.org/packages/e1/db/d9a03a7e36a708489a655effd59a3cb5fcc13f8dd2aba74a319d65d7e03a/bigquery-schema-generator-0.1.tar.gz"
}
],
"0.1.1": [
{
"comment_text": "",
"digests": {
"md5": "4c1ab0d67aa70ab53cbe32fc757113fc",
"sha256": "12174eac7b354136d0aede4845b557116c4a7d6d64d4276f1377219a085ccd0c"
},
"downloads": -1,
"filename": "bigquery-schema-generator-0.1.1.tar.gz",
"has_sig": false,
"md5_digest": "4c1ab0d67aa70ab53cbe32fc757113fc",
"packagetype": "sdist",
"python_version": "source",
"requires_python": "~=3.5",
"size": 8614,
"upload_time": "2018-01-03T22:39:39",
"url": "https://files.pythonhosted.org/packages/34/1b/cae35741da8bbd894f78cef64157b091be6477475cf8d78785031f5e1a20/bigquery-schema-generator-0.1.1.tar.gz"
}
],
"0.1.2": [
{
"comment_text": "",
"digests": {
"md5": "0ce0f074da4741cb365a3c3dba2bc4a0",
"sha256": "4def9de61d384654948a31d80a757fea205038933a36575e773b9fa4b30a34ec"
},
"downloads": -1,
"filename": "bigquery-schema-generator-0.1.2.tar.gz",
"has_sig": false,
"md5_digest": "0ce0f074da4741cb365a3c3dba2bc4a0",
"packagetype": "sdist",
"python_version": "source",
"requires_python": "~=3.5",
"size": 8955,
"upload_time": "2018-01-04T22:34:23",
"url": "https://files.pythonhosted.org/packages/d7/82/976a46999464e60371ba72cdb89bd8609e071104208e9196d6e2ae9a4b09/bigquery-schema-generator-0.1.2.tar.gz"
}
],
"0.1.3": [
{
"comment_text": "",
"digests": {
"md5": "c7358bd3388833e629ac9270e391df71",
"sha256": "6928113821d15f91e30e6c066085fd9afbe39760d6f7100d7427aeabe72b6052"
},
"downloads": -1,
"filename": "bigquery-schema-generator-0.1.3.tar.gz",
"has_sig": false,
"md5_digest": "c7358bd3388833e629ac9270e391df71",
"packagetype": "sdist",
"python_version": "source",
"requires_python": "~=3.5",
"size": 9029,
"upload_time": "2018-01-23T20:59:07",
"url": "https://files.pythonhosted.org/packages/6f/73/882c4267dc6052e5268a2e5e90cac7ddc27c3090e5ebccf55b9b6d5c3c0c/bigquery-schema-generator-0.1.3.tar.gz"
}
],
"0.1.4": [
{
"comment_text": "",
"digests": {
"md5": "4b6fb6845b185e45fabb29d0e044fa7f",
"sha256": "3039e4aa860f1e1d8ffa98f5816ec53aa8885354f5f8e06263ee0d0a9f59d275"
},
"downloads": -1,
"filename": "bigquery-schema-generator-0.1.4.tar.gz",
"has_sig": false,
"md5_digest": "4b6fb6845b185e45fabb29d0e044fa7f",
"packagetype": "sdist",
"python_version": "source",
"requires_python": "~=3.5",
"size": 9074,
"upload_time": "2018-01-23T21:21:24",
"url": "https://files.pythonhosted.org/packages/ab/70/e8a09f5c5f293e05d862b99884a8037c29c66ac4852dea2fc33b732f747b/bigquery-schema-generator-0.1.4.tar.gz"
}
],
"0.1.5": [
{
"comment_text": "",
"digests": {
"md5": "c7ef625216d89078b4f2507709df2494",
"sha256": "98ebe2499d3b46ad9ae375ba78061a0b7fd689c3d8e18e897e92982b7ddfa149"
},
"downloads": -1,
"filename": "bigquery-schema-generator-0.1.5.tar.gz",
"has_sig": false,
"md5_digest": "c7ef625216d89078b4f2507709df2494",
"packagetype": "sdist",
"python_version": "source",
"requires_python": "~=3.5",
"size": 9608,
"upload_time": "2018-01-26T07:10:43",
"url": "https://files.pythonhosted.org/packages/4d/07/b11cdcbafde5778bc2e9546487bd4ef83ac625e109c5d2c28a35667efbaf/bigquery-schema-generator-0.1.5.tar.gz"
}
],
"0.1.6": [
{
"comment_text": "",
"digests": {
"md5": "a7df0a0ee7e3656ff4e5e85a976bbc40",
"sha256": "57e1b1cdea34436c3c48d962fdd3f7d511637b82948aba20914bf4f55239c4e7"
},
"downloads": -1,
"filename": "bigquery-schema-generator-0.1.6.tar.gz",
"has_sig": false,
"md5_digest": "a7df0a0ee7e3656ff4e5e85a976bbc40",
"packagetype": "sdist",
"python_version": "source",
"requires_python": "~=3.5",
"size": 9740,
"upload_time": "2018-01-26T16:42:40",
"url": "https://files.pythonhosted.org/packages/dd/aa/477a39a6c5a2bb65b8e71a4c5346be7bb66e115dcb56f55eb35dd258d58b/bigquery-schema-generator-0.1.6.tar.gz"
}
],
"0.2.0": [
{
"comment_text": "",
"digests": {
"md5": "e3db9557e604f82eeee46d8fcc431c7a",
"sha256": "ef08aecaef20a744f48e032bc051ccaa26b4af9d15922bfb653ca10dbec02e6d"
},
"downloads": -1,
"filename": "bigquery-schema-generator-0.2.0.tar.gz",
"has_sig": false,
"md5_digest": "e3db9557e604f82eeee46d8fcc431c7a",
"packagetype": "sdist",
"python_version": "source",
"requires_python": "~=3.5",
"size": 11141,
"upload_time": "2018-02-10T21:33:00",
"url": "https://files.pythonhosted.org/packages/7c/ad/274590eb8dd3b6722be664f2f20228b690c680e4526ca2d6921d608d36af/bigquery-schema-generator-0.2.0.tar.gz"
}
],
"0.2.1": [
{
"comment_text": "",
"digests": {
"md5": "1bcb0bde44cf776ad8dac21ba9c5d437",
"sha256": "4117cd053d8cf298514ec09280988fc026ff7db97171bba4d21a85f9003df109"
},
"downloads": -1,
"filename": "bigquery-schema-generator-0.2.1.tar.gz",
"has_sig": false,
"md5_digest": "1bcb0bde44cf776ad8dac21ba9c5d437",
"packagetype": "sdist",
"python_version": "source",
"requires_python": "~=3.5",
"size": 13175,
"upload_time": "2018-07-18T20:51:30",
"url": "https://files.pythonhosted.org/packages/75/4d/a27f70a0509efe5e6370535aca308c0ec3aaf186962a20976a8b940b9adf/bigquery-schema-generator-0.2.1.tar.gz"
}
],
"0.3": [
{
"comment_text": "",
"digests": {
"md5": "0cb9d1a6ddaa34cde7c5fdb055de7750",
"sha256": "b35505b1fb4e836fb3a4499152f1423fbe6382cd2c9ed1250e7e9222eba7618b"
},
"downloads": -1,
"filename": "bigquery-schema-generator-0.3.tar.gz",
"has_sig": false,
"md5_digest": "0cb9d1a6ddaa34cde7c5fdb055de7750",
"packagetype": "sdist",
"python_version": "source",
"requires_python": "~=3.5",
"size": 15572,
"upload_time": "2018-12-17T19:07:59",
"url": "https://files.pythonhosted.org/packages/d2/26/189e153d127ad3fc56c6805514141aea23616719e77d44d59a854087bc46/bigquery-schema-generator-0.3.tar.gz"
}
],
"0.3.1": [
{
"comment_text": "",
"digests": {
"md5": "b2c396332181206508282cef0031995d",
"sha256": "a4693f19147d0fc1f2ccc2e8b35f5b86444175c8ae3c790a394a394dbb3a59d3"
},
"downloads": -1,
"filename": "bigquery-schema-generator-0.3.1.tar.gz",
"has_sig": false,
"md5_digest": "b2c396332181206508282cef0031995d",
"packagetype": "sdist",
"python_version": "source",
"requires_python": "~=3.5",
"size": 19262,
"upload_time": "2019-01-18T17:39:32",
"url": "https://files.pythonhosted.org/packages/74/9b/0dab5d57fd13cbe955d3519f312f822712df36c14b40add9e88d8643cc7c/bigquery-schema-generator-0.3.1.tar.gz"
}
],
"0.3.2": [
{
"comment_text": "",
"digests": {
"md5": "e5b5a4bbbafa81e0ca4ce0b09f6b6d0d",
"sha256": "1ca6a7f85eb7757b900b6766ad4a2ca915fad9cbba66f6ce48819ba33df61028"
},
"downloads": -1,
"filename": "bigquery-schema-generator-0.3.2.tar.gz",
"has_sig": false,
"md5_digest": "e5b5a4bbbafa81e0ca4ce0b09f6b6d0d",
"packagetype": "sdist",
"python_version": "source",
"requires_python": "~=3.5",
"size": 17222,
"upload_time": "2019-02-24T22:55:51",
"url": "https://files.pythonhosted.org/packages/04/c3/9a14e42b03c9b397c364275355de61c550b399c6a38348c6c956076410ac/bigquery-schema-generator-0.3.2.tar.gz"
}
],
"0.4": [
{
"comment_text": "",
"digests": {
"md5": "078e8f64be76d8acf17461111d67e5b6",
"sha256": "b0ac6b9c7c2d24e927e908b2f9c42da71bd6434a38f004069e59427e05bf7a3d"
},
"downloads": -1,
"filename": "bigquery-schema-generator-0.4.tar.gz",
"has_sig": false,
"md5_digest": "078e8f64be76d8acf17461111d67e5b6",
"packagetype": "sdist",
"python_version": "source",
"requires_python": "~=3.5",
"size": 25802,
"upload_time": "2019-03-06T19:00:31",
"url": "https://files.pythonhosted.org/packages/7a/f7/bb78c092db1e1c6f061d99e9e23195c4090ca1f451fee3445478ce562899/bigquery-schema-generator-0.4.tar.gz"
}
],
"0.5": [
{
"comment_text": "",
"digests": {
"md5": "cdfb14eea068bb0b99d0ae624475634a",
"sha256": "7025f9b32ec93e6a8be4351cc5229a785c9fd275ea4c46a8a8cb6fb2d82a313a"
},
"downloads": -1,
"filename": "bigquery-schema-generator-0.5.tar.gz",
"has_sig": false,
"md5_digest": "cdfb14eea068bb0b99d0ae624475634a",
"packagetype": "sdist",
"python_version": "source",
"requires_python": "~=3.5",
"size": 24566,
"upload_time": "2019-06-06T18:23:33",
"url": "https://files.pythonhosted.org/packages/97/f7/157fa13d44fce29b8c727afc35a19cbc0b2b5d2c5afe34f55f8901d6be29/bigquery-schema-generator-0.5.tar.gz"
}
],
"0.5.1": [
{
"comment_text": "",
"digests": {
"md5": "1022895b5c8a4150e0e743ff420f434e",
"sha256": "bf02e747e7dfa7a393d1c5e981a3258c36dbda21998bbc4fc6020219190f1fca"
},
"downloads": -1,
"filename": "bigquery-schema-generator-0.5.1.tar.gz",
"has_sig": false,
"md5_digest": "1022895b5c8a4150e0e743ff420f434e",
"packagetype": "sdist",
"python_version": "source",
"requires_python": "~=3.5",
"size": 25356,
"upload_time": "2019-06-17T15:06:05",
"url": "https://files.pythonhosted.org/packages/55/ae/a9b8a6ecc0c8224536b7ac3538522f1848c017268e79f5e3206026c64775/bigquery-schema-generator-0.5.1.tar.gz"
}
]
},
"urls": [
{
"comment_text": "",
"digests": {
"md5": "1022895b5c8a4150e0e743ff420f434e",
"sha256": "bf02e747e7dfa7a393d1c5e981a3258c36dbda21998bbc4fc6020219190f1fca"
},
"downloads": -1,
"filename": "bigquery-schema-generator-0.5.1.tar.gz",
"has_sig": false,
"md5_digest": "1022895b5c8a4150e0e743ff420f434e",
"packagetype": "sdist",
"python_version": "source",
"requires_python": "~=3.5",
"size": 25356,
"upload_time": "2019-06-17T15:06:05",
"url": "https://files.pythonhosted.org/packages/55/ae/a9b8a6ecc0c8224536b7ac3538522f1848c017268e79f5e3206026c64775/bigquery-schema-generator-0.5.1.tar.gz"
}
]
}