{ "info": { "author": "Suriyan Laohaprapanon, Gaurav Sood", "author_email": "suriyant@gmail.com, gsood07@gmail.com", "bugtrack_url": null, "classifiers": [ "Development Status :: 3 - Alpha", "Intended Audience :: Developers", "License :: OSI Approved :: MIT License", "Programming Language :: Python :: 2.7", "Programming Language :: Python :: 3.5", "Programming Language :: Python :: 3.6", "Topic :: Scientific/Engineering :: Information Analysis", "Topic :: Software Development :: Libraries :: Python Modules", "Topic :: Utilities" ], "description": "PyDomains: Classifying the Content of Domains\r\n------------------------------------------------\r\n\r\n.. image:: https://travis-ci.org/themains/pydomains.svg?branch=master\r\n :target: https://travis-ci.org/themains/pydomains\r\n.. image:: https://ci.appveyor.com/api/projects/status/qfvbu8h99ymtw2ub?svg=true\r\n :target: https://ci.appveyor.com/project/themains/pydomains\r\n.. image:: https://img.shields.io/pypi/v/pydomains.svg\r\n :target: https://pypi.python.org/pypi/pydomains\r\n.. image:: https://readthedocs.org/projects/pydomains/badge/?version=latest\r\n :target: http://pydomains.readthedocs.io/en/latest/?badge=latest\r\n :alt: Documentation Status\r\n\r\nThe package provides two broad ways of learning about the kind of content hosted \r\non a domain. First, it provides convenient access to curated lists of domain content\r\nlike the Shallalist, DMOZ, PhishTank, and such. Second, it exposes models built on top of \r\nthese large labeled datasets; the models estimate the relationship between sequence of \r\ncharacters in the domain name and the kind of content hosted by the domain. 
\r\n\r\nDownloads\r\n----------\r\nAs of November 1st, 2018, the package had been downloaded over 800 times (see `saved BigQuery `__).\r\n\r\n\r\nQuick Start\r\n------------\r\n\r\n::\r\n\r\n import pandas as pd\r\n from pydomains import *\r\n\r\n # Get help\r\n help(dmoz_cat)\r\n\r\n # Load data\r\n df = pd.read_csv('./pydomains/examples/input-header.csv')\r\n\r\n # df\r\n # label url\r\n # 0 test1 topshop.com\r\n # 1 test2 beyondrelief.com\r\n\r\n # Get the Content Category from DMOZ, phishtank\r\n df_dmoz = dmoz_cat(df, domain_names = 'url')\r\n df_phish = phish_cat(df, domain_names = 'url')\r\n\r\n # Predicted category from shallalist, toulouse\r\n df_shalla = pred_shalla(df, domain_names = 'url')\r\n df_toulouse = pred_toulouse(df, domain_names = 'url')\r\n\r\n\r\nInstallation\r\n--------------\r\n\r\nInstallation is as easy as typing in:\r\n\r\n::\r\n\r\n pip install pydomains\r\n\r\nAPI\r\n~~~~~~~~~~\r\n\r\n1. **dmoz\\_cat**, **shalla\\_cat**, and **phish\\_cat**: When the domain\r\n is in the DMOZ, Shallalist, and Phishtank data, the functions give the\r\n category of the domain according to the respective list. (Phishtank just\r\n gives whether or not the domain has been implicated in phishing.) Otherwise,\r\n the function returns an empty string.\r\n\r\n - **Arguments:**\r\n\r\n - ``df``: pandas dataframe. No default.\r\n - ``domain_names``: column with the domain names/URLs. \r\n Default is ``domain_names``\r\n - ``year``. Specify the year from which you want to use the data.\r\n Currently only DMOZ data from 2016, and Shallalist and Phishtank\r\n data from 2017 is available.\r\n - ``latest``. Boolean. Default is ``False``. If ``True``, the\r\n function checks if a local file exists and if it exists, if the\r\n local file is the latest. 
If it isn't, it downloads the latest\r\n file from the GitHub link and overwrites the local file.\r\n\r\n - **Functionality:**\r\n\r\n - Converts the domain name to lower case and strips ``http://``.\r\n - Looks for ``dmoz_YYYY.csv``, ``shalla_YYYY.csv``, or\r\n ``phish_YYYY.csv`` respectively in the local folder. If it\r\n doesn't find it, it downloads the latest DMOZ, Shallalist, or\r\n PhishTank file from\r\n `pydomains/data/dmoz_YYYY.csv.bz2 `__,\r\n `pydomains/data/shalla_YYYY.csv.bz2 `__,\r\n or\r\n `pydomains/data/phish_YYYY.csv.bz2 `__\\ respectively.\r\n - If the ``latest`` flag is set, it checks whether the\r\n local file is older than the remote file. If it is,\r\n it overwrites the local file with the newer remote file.\r\n\r\n - **Output:**\r\n\r\n - Appends the category as a new column to the DataFrame. By default the\r\n column is named dmoz\\_year\\_cat, shalla\\_year\\_cat, or phish\\_year\\_cat.\r\n - If no match is found, the column holds an empty string.\r\n - DMOZ sometimes has multiple categories per domain. The\r\n categories are joined with a semicolon.\r\n\r\n - **Examples:**\r\n\r\n ::\r\n \r\n import pandas as pd\r\n\r\n df = pd.DataFrame([{'domain_names': 'http://www.google.com'}])\r\n\r\n dmoz_cat(df)\r\n shalla_cat(df)\r\n phish_cat(df)\r\n\r\n2. **pred\\_shalla**: We use data from Shallalist to train an\r\n `LSTM model `__. The function\r\n uses the trained model to predict the category of the domain based on \r\n the domain name.\r\n\r\n - **Arguments:**\r\n\r\n - ``df``: pandas dataframe. No default.\r\n - ``domain_names``: column with the domain names/URLs. \r\n Default is ``domain_names``.\r\n - ``year``. Year of the model. Default is 2017. Currently only\r\n a model based on data from 2017 is available.\r\n - ``latest``. Boolean. Default is ``False``. If ``True``, the\r\n function checks whether a local model file exists and, if it does,\r\n whether it is still the latest. 
If it isn't, it downloads the latest\r\n file from the GitHub link and overwrites the local file.\r\n\r\n - **Functionality:**\r\n\r\n - Converts the domain name to lower case and strips ``http://``.\r\n - Uses the model to predict the probability of content being from\r\n various categories.\r\n\r\n - **Output**\r\n\r\n - Appends a column carrying the label of the category with the \r\n highest probability (``pred_shalla_year_lab``) and a series of \r\n columns with probabilities for each category \r\n (``pred_shalla_year_prob_catname``).\r\n\r\n - **Examples:**\r\n\r\n ::\r\n\r\n pred_shalla(df)\r\n\r\n3. **pred\\_toulouse**: We use data from http://dsi.ut-capitole.fr/blacklists/ to \r\n train an `LSTM model `__ that predicts\r\n the category of content hosted by the domain. The function uses the trained \r\n model to predict the category of the domain based on the domain name.\r\n\r\n - **Arguments:**\r\n\r\n - ``df``: pandas dataframe. No default.\r\n - ``domain_names``: column with the domain names/URLs. \r\n Default is ``domain_names``.\r\n - ``year``. Year of the model. Default is 2017. Currently only\r\n a model based on data from 2017 is available.\r\n - ``latest``. Boolean. Default is ``False``. If ``True``, the\r\n function checks whether a local model file exists and, if it does,\r\n whether it is still the latest. If it isn't, it downloads the latest\r\n file from the GitHub link and overwrites the local file.\r\n\r\n - **Functionality:**\r\n\r\n - Converts the domain name to lower case and strips ``http://``.\r\n - Uses the model to predict the probability of the content being from\r\n various categories.\r\n\r\n - **Output:**\r\n\r\n - Appends a column carrying the label of the category with the \r\n highest probability (``pred_toulouse_year_lab``) and a series of \r\n columns with probabilities for each category \r\n (``pred_toulouse_year_prob_catname``).\r\n\r\n - **Examples:**\r\n\r\n ::\r\n\r\n pred_toulouse(df)\r\n\r\n4. 
**pred\\_phish**: Given its importance, we devote special care to\r\n predicting domains involved in phishing well. To do that, we use data\r\n from `PhishTank `__ and combine it with\r\n data from http://s3.amazonaws.com/alexa-static/top-1m.csv.zip, and train an `LSTM\r\n model `__. The function gives the \r\n predicted probability based on the LSTM model.\r\n\r\n - **Arguments:**\r\n\r\n - ``df``: pandas dataframe. No default.\r\n - ``domain_names``: column with the domain names/URLs. \r\n Default is ``domain_names``.\r\n - ``year``. Year of the model. Default is 2017. Currently only\r\n a model based on data from 2017 is available.\r\n - ``latest``. Boolean. Default is ``False``. If ``True``, the\r\n function checks whether a local model file exists and, if it does,\r\n whether it is still the latest. If it isn't, it downloads the latest\r\n file from the GitHub link and overwrites the local file.\r\n\r\n - **Functionality:**\r\n\r\n - Converts the domain name to lower case and strips ``http://``.\r\n - Uses the model to predict the probability of it being a domain\r\n implicated in phishing.\r\n\r\n - **Output:**\r\n\r\n - Appends column `pred_phish_year_lab`, which contains the most probable\r\n label, and a column indicating the probability that the domain \r\n is involved in phishing (`pred_phish_year_prob`).\r\n\r\n - **Examples:**\r\n\r\n ::\r\n\r\n pred_phish(df)\r\n\r\n5. **pred\\_malware**: Given the importance of flagging domains\r\n that carry malware, we again devote extra care to predicting domains \r\n involved in distributing malware well. We combine data on malware \r\n domains from http://mirror1.malwaredomains.com/ with data from \r\n http://s3.amazonaws.com/alexa-static/top-1m.csv.zip, and train an \r\n `LSTM model `__. The function gives \r\n the predicted probability based on the LSTM model.\r\n\r\n - **Arguments:**\r\n\r\n - ``df``: pandas dataframe. No default.\r\n - ``domain_names``: column with the domain names/URLs. 
\r\n Default is ``domain_names``\r\n - ``year``. Year of the model. Default is 2017. Currently only\r\n a model based on data from 2017 is available.\r\n - ``latest``. Boolean. Default is ``False``. If ``True``, the\r\n function checks if a local model file exists and if it exists, is it\r\n older than what is on the website. If it isn't, it downloads the latest\r\n file from the GitHub link and overwrites the local file.\r\n\r\n - **Functionality:**\r\n\r\n - converts domain name to lower case, strips ``http://``.\r\n - Uses the model to predict the probability of it being a domain\r\n implicated in distributing malware.\r\n\r\n - **Output:**\r\n\r\n - Appends column `pred_malware_year_lab` and a column indicating the \r\n probability that the domain is involved in distributing malware \r\n (`pred_malware_year_prob`).\r\n\r\n - **Examples:**\r\n\r\n ::\r\n\r\n pred_malware(df)\r\n\r\nUsing pydomains\r\n~~~~~~~~~~~~~~~~\r\n\r\n::\r\n\r\n >>> import pandas as pd\r\n >>> from pydomains import *\r\n Using TensorFlow backend.\r\n\r\n >>> # Get help of the function\r\n ... help(dmoz_cat)\r\n Help on function dmoz_cat in module pydomains.dmoz_cat:\r\n\r\n dmoz_cat(df, domain_names='domain_names', year=2016, latest=False)\r\n Appends DMOZ domain categories to the DataFrame.\r\n\r\n The function extracts the domain name along with the subdomain\r\n from the specified column and appends the category (dmoz_cat)\r\n to the DataFrame. If DMOZ file is not available locally or\r\n latest is set to True, it downloads the file. The function\r\n looks for category of the domain name in the DMOZ file\r\n for each domain. When no match is found, it returns an\r\n empty string.\r\n\r\n Args:\r\n df (:obj:`DataFrame`): Pandas DataFrame. No default value.\r\n domain_names (str): Column name of the domain in DataFrame.\r\n Default in `domain_names`.\r\n year (int): DMOZ data year. 
Only 2016 data is available.\r\n Default is 2016.\r\n latest (Boolean): Whether or not to download latest\r\n data available from GitHub. Default is False.\r\n\r\n Returns:\r\n DataFrame: Pandas DataFrame with two additional columns:\r\n 'dmoz_year_domain' and 'dmoz_year_cat'\r\n\r\n\r\n >>> # Load an example input with columns header\r\n ... df = pd.read_csv('./pydomains/examples/input-header.csv')\r\n\r\n >>> df\r\n label url\r\n 0 test1 topshop.com\r\n 1 test2 beyondrelief.com\r\n 2 test3 golf-tours.com/test\r\n 3 test4 thegayhotel.com\r\n 4 test5 https://zonasequravlabcp.com/bcp/\r\n 5 test6 http://privatix.xyz\r\n 6 test7 adultfriendfinder.com\r\n 7 test8 giftregistrylocator.com\r\n 8 test9 bangbrosonline.com\r\n 9 test10 scotland-info.co.uk\r\n\r\n >>> # Get the Content Category from DMOZ\r\n ... df = dmoz_cat(df, domain_names='url')\r\n Loading DMOZ data file...\r\n\r\n >>> df\r\n label url dmoz_2016_domain \\\r\n 0 test1 topshop.com topshop.com\r\n 1 test2 beyondrelief.com beyondrelief.com\r\n 2 test3 golf-tours.com/test golf-tours.com\r\n 3 test4 thegayhotel.com thegayhotel.com\r\n 4 test5 https://zonasequravlabcp.com/bcp/ zonasequravlabcp.com\r\n 5 test6 http://privatix.xyz privatix.xyz\r\n 6 test7 adultfriendfinder.com adultfriendfinder.com\r\n 7 test8 giftregistrylocator.com giftregistrylocator.com\r\n 8 test9 bangbrosonline.com bangbrosonline.com\r\n 9 test10 scotland-info.co.uk scotland-info.co.uk\r\n\r\n dmoz_2016_cat\r\n 0 Top/Regional/Europe/United_Kingdom/Business_an...\r\n 1 NaN\r\n 2 NaN\r\n 3 NaN\r\n 4 NaN\r\n 5 NaN\r\n 6 NaN\r\n 7 NaN\r\n 8 NaN\r\n 9 Top/Regional/Europe/United_Kingdom/Scotland/Tr...\r\n >>> # Predict Content Category Using the Toulouse Model\r\n ... 
df = pred_toulouse(df, domain_names='url')\r\n Loading Toulouse model, vocab and names data file...\r\n\r\n >>> df\r\n label url dmoz_2016_domain \\\r\n 0 test1 topshop.com topshop.com\r\n 1 test2 beyondrelief.com beyondrelief.com\r\n 2 test3 golf-tours.com/test golf-tours.com\r\n 3 test4 thegayhotel.com thegayhotel.com\r\n 4 test5 https://zonasequravlabcp.com/bcp/ zonasequravlabcp.com\r\n 5 test6 http://privatix.xyz privatix.xyz\r\n 6 test7 adultfriendfinder.com adultfriendfinder.com\r\n 7 test8 giftregistrylocator.com giftregistrylocator.com\r\n 8 test9 bangbrosonline.com bangbrosonline.com\r\n 9 test10 scotland-info.co.uk scotland-info.co.uk\r\n\r\n dmoz_2016_cat \\\r\n 0 Top/Regional/Europe/United_Kingdom/Business_an...\r\n 1 NaN\r\n 2 NaN\r\n 3 NaN\r\n 4 NaN\r\n 5 NaN\r\n 6 NaN\r\n 7 NaN\r\n 8 NaN\r\n 9 Top/Regional/Europe/United_Kingdom/Scotland/Tr...\r\n\r\n pred_toulouse_2017_domain pred_toulouse_2017_lab \\\r\n 0 topshop.com shopping\r\n 1 beyondrelief.com adult\r\n 2 golf-tours.com shopping\r\n 3 thegayhotel.com adult\r\n 4 zonasequravlabcp.com phishing\r\n 5 privatix.xyz adult\r\n 6 adultfriendfinder.com adult\r\n 7 giftregistrylocator.com shopping\r\n 8 bangbrosonline.com adult\r\n 9 scotland-info.co.uk shopping\r\n\r\n pred_toulouse_2017_prob_adult pred_toulouse_2017_prob_audio-video \\\r\n 0 0.133953 0.003793\r\n 1 0.521590 0.016359\r\n 2 0.186083 0.008208\r\n 3 0.971451 0.001080\r\n 4 0.065503 0.001063\r\n 5 0.986328 0.002241\r\n 6 0.939441 0.000211\r\n 7 0.014645 0.000570\r\n 8 0.945490 0.004017\r\n 9 0.256270 0.003745\r\n\r\n pred_toulouse_2017_prob_bank pred_toulouse_2017_prob_gambling \\\r\n 0 1.161209e-04 2.911613e-04\r\n 1 3.912278e-03 6.484169e-03\r\n 2 1.783388e-03 8.022175e-04\r\n 3 8.920387e-05 6.256429e-05\r\n 4 6.226773e-04 1.073759e-04\r\n 5 6.823016e-07 1.969112e-06\r\n 6 1.742063e-07 6.485808e-08\r\n 7 3.973934e-04 1.019526e-05\r\n 8 9.122109e-05 1.142884e-04\r\n 9 3.962536e-04 4.977396e-04\r\n\r\n pred_toulouse_2017_prob_games 
pred_toulouse_2017_prob_malware \\\r\n 0 0.002073 0.003976\r\n 1 0.022408 0.018371\r\n 2 0.013352 0.006392\r\n 3 0.000713 0.000934\r\n 4 0.012431 0.077391\r\n 5 0.001021 0.004949\r\n 6 0.000044 0.000059\r\n 7 0.004112 0.016339\r\n 8 0.002216 0.000422\r\n 9 0.014452 0.006615\r\n\r\n pred_toulouse_2017_prob_others pred_toulouse_2017_prob_phishing \\\r\n 0 0.014862 0.112132\r\n 1 0.046011 0.172208\r\n 2 0.021287 0.060633\r\n 3 0.005018 0.017201\r\n 4 0.031691 0.416989\r\n 5 0.003069 0.002094\r\n 6 0.001674 0.058497\r\n 7 0.015631 0.131174\r\n 8 0.017964 0.012574\r\n 9 0.057622 0.111698\r\n\r\n pred_toulouse_2017_prob_press pred_toulouse_2017_prob_publicite \\\r\n 0 8.404775e-04 0.000761\r\n 1 2.525988e-02 0.002821\r\n 2 1.853482e-02 0.000990\r\n 3 2.208834e-04 0.000135\r\n 4 2.796387e-03 0.000284\r\n 5 4.559151e-06 0.000252\r\n 6 1.133891e-07 0.000007\r\n 7 1.115335e-02 0.000436\r\n 8 5.098383e-04 0.000785\r\n 9 7.331154e-04 0.000168\r\n\r\n pred_toulouse_2017_prob_shopping\r\n 0 0.727203\r\n 1 0.164577\r\n 2 0.681934\r\n 3 0.003094\r\n 4 0.391121\r\n 5 0.000038\r\n 6 0.000066\r\n 7 0.805531\r\n 8 0.015817\r\n 9 0.547802\r\n\r\nModels\r\n~~~~~~~~~~~~~~~~\r\n\r\nFor more information about the models, including the decisions we made around\r\ncurtailing the number of categories, see `here <./pydomains/models/>`__\r\n\r\nUnderlying Data\r\n~~~~~~~~~~~~~~~~\r\n\r\nWe use data from DMOZ, Shallalist, PhishTank, and a prominent Blacklist aggregator.\r\nFor more details about the underlying data, see `here <./pydomains/data/>`__\r\n\r\nValidation\r\n~~~~~~~~~~~~~~~~~\r\n\r\nWe compare content categories according to the `TrustedSource API `__ \r\nwith content categories from the Shallalist and the Shallalist model for all the unique domains in the \r\ncomScore 2004 data: \r\n\r\n1. `comScore 2004 Trusted API results `__\r\n\r\n2. `comScore 2004 categories from pydomains <./pydomains/app/comscore-2004.ipynb>`__\r\n\r\n3. 
`comparison between TrustedSource, Shallalist, and the Shallalist model <./pydomains/app/comscore-2004-eval.ipynb>`__\r\n\r\nLearning Browsing Behavior Using pydomains\r\n~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~\r\n\r\nTo make it easier to learn about people's browsing behavior, we used all the functions\r\nin pydomains to obtain the type of content hosted by each unique domain in the\r\ncomScore data from 2002 to 2016 (some years are missing). We have posted the data\r\n`here `__.\r\n\r\nNotes and Caveats\r\n~~~~~~~~~~~~~~~~~~~\r\n\r\n- The DMOZ categorization system at tier 1 is poor. The category names\r\n are vague, and many subcategories could easily belong\r\n to other tier 1 categories. That means a) it would likely be hard to\r\n classify well at tier 1, and b) the results would not be very valuable.\r\n So we choose not to predict tier 1 DMOZ categories.\r\n\r\n- The association between patterns in domain names and the kind of\r\n content they host may change over time. It may change as new domains\r\n come online and as older domains are repurposed. All this likely\r\n happens slowly. But, to be careful, we add a ``year`` argument to our\r\n functions. Each list and each model is for a particular year.\r\n\r\n- Imputing the kind of content hosted by a domain may suggest to some\r\n that domains carry only one kind of content. Many domains don't. And\r\n even when they do, the quality varies immensely. (See more `here \r\n `__.) There is \r\n much less heterogeneity at the URL level, and we plan to look into \r\n predicting at the URL level. See `TODO `__ for our plans.\r\n\r\n- There are a lot of categories where we do not expect domain names to\r\n have any systematic patterns. Rather than make noisy predictions\r\n using just the domain names (the data that our current set of \r\n classifiers use), we plan to tackle this prediction task with \r\n some additional data. 
See `TODO `__ for our plans.\r\n\r\nDocumentation\r\n-------------\r\n\r\nFor more information, please see `project documentation `__.\r\n\r\nAuthors\r\n~~~~~~~~\r\n\r\nSuriyan Laohaprapanon and Gaurav Sood\r\n\r\nContributor Code of Conduct\r\n~~~~~~~~~~~~~~~~~~~~~~~~~~~\r\n\r\nThe project welcomes contributions from everyone! In fact, it depends on\r\nit. To maintain this welcoming atmosphere, and to collaborate in a fun\r\nand productive way, we expect contributors to the project to abide by\r\nthe `Contributor Code of\r\nConduct `__\r\n\r\nLicense\r\n~~~~~~~\r\n\r\nThe package is released under the `MIT\r\nLicense `__.\r\n", "description_content_type": "", "docs_url": null, "download_url": "", "downloads": { "last_day": -1, "last_month": -1, "last_week": -1 }, "home_page": "https://github.com/themains/pydomains", "keywords": "domain dmoz shalla phishing malware lstm", "license": "MIT", "maintainer": "", "maintainer_email": "", "name": "pydomains", "package_url": "https://pypi.org/project/pydomains/", "platform": "", "project_url": "https://pypi.org/project/pydomains/", "project_urls": { "Homepage": "https://github.com/themains/pydomains" }, "release_url": "https://pypi.org/project/pydomains/0.1.16/", "requires_dist": null, "requires_python": "", "summary": "Classifying the Content of Domains", "version": "0.1.16" }, "last_serial": 4463244, "releases": { "0.1.14": [ { "comment_text": "", "digests": { "md5": "1f7892f47ba70501f5c651672b3f7763", "sha256": "8aec09be05783f5a80af55147901b48b52629ee4b3a991b50b3df7b4848dbbb2" }, "downloads": -1, "filename": "pydomains-0.1.14.tar.gz", "has_sig": false, "md5_digest": "1f7892f47ba70501f5c651672b3f7763", "packagetype": "sdist", "python_version": "source", "requires_python": null, "size": 22553, "upload_time": "2018-06-21T03:59:09", "url": "https://files.pythonhosted.org/packages/95/29/253358e98f0db5923b841912223eee654181ce26a82e72d3e04459581f36/pydomains-0.1.14.tar.gz" } ], "0.1.15": [ { "comment_text": "", "digests": { 
"md5": "dd753cf8e322d50cb6b43b8101da827b", "sha256": "55ec568b0a2ab5d7b0be3f8ea0b0b02f3ed8ba2cd54e840a84a7f991c6aa7cf4" }, "downloads": -1, "filename": "pydomains-0.1.15.tar.gz", "has_sig": false, "md5_digest": "dd753cf8e322d50cb6b43b8101da827b", "packagetype": "sdist", "python_version": "source", "requires_python": null, "size": 22562, "upload_time": "2018-06-23T05:52:15", "url": "https://files.pythonhosted.org/packages/80/4d/de33cd7435b94bf031b01b29c103c5fae8028cfe2c5b6e443aaba8cc2ea4/pydomains-0.1.15.tar.gz" } ], "0.1.16": [ { "comment_text": "", "digests": { "md5": "31ca9824b6940722f697a17cf6967867", "sha256": "dd3a6a4c70ff8a00bdfb9142ce8adc4ec7f2b0e36d64cb5bbfea04c80f9c0199" }, "downloads": -1, "filename": "pydomains-0.1.16.tar.gz", "has_sig": false, "md5_digest": "31ca9824b6940722f697a17cf6967867", "packagetype": "sdist", "python_version": "source", "requires_python": null, "size": 22557, "upload_time": "2018-11-07T22:19:49", "url": "https://files.pythonhosted.org/packages/c6/68/b84f859046ab52b2eb1c91a35804d29338a63acf159bef6485a9a48377cc/pydomains-0.1.16.tar.gz" } ] }, "urls": [ { "comment_text": "", "digests": { "md5": "31ca9824b6940722f697a17cf6967867", "sha256": "dd3a6a4c70ff8a00bdfb9142ce8adc4ec7f2b0e36d64cb5bbfea04c80f9c0199" }, "downloads": -1, "filename": "pydomains-0.1.16.tar.gz", "has_sig": false, "md5_digest": "31ca9824b6940722f697a17cf6967867", "packagetype": "sdist", "python_version": "source", "requires_python": null, "size": 22557, "upload_time": "2018-11-07T22:19:49", "url": "https://files.pythonhosted.org/packages/c6/68/b84f859046ab52b2eb1c91a35804d29338a63acf159bef6485a9a48377cc/pydomains-0.1.16.tar.gz" } ] }