{ "info": { "author": "Opendata Team", "author_email": "opendatateam@data.gouv.fr", "bugtrack_url": null, "classifiers": [ "Development Status :: 4 - Beta", "Environment :: Web Environment", "Intended Audience :: Developers", "License :: OSI Approved :: MIT License", "Operating System :: OS Independent", "Programming Language :: Python", "Programming Language :: Python :: 3", "Programming Language :: Python :: 3.5", "Programming Language :: Python :: 3.6", "Topic :: Software Development :: Libraries :: Python Modules", "Topic :: System :: Software Distribution" ], "description": "# Croquemort\n\n## Vision\n\nThe aim of this project is to provide a way to check HTTP resources: hunting 404s, updating redirections and so on.\n\nFor instance, given a website that stores a list of external resources (HTML, images or documents), this product allows the owner to send its URLs in bulk and retrieve information for each URL fetched in the background (status code and useful headers for now). This way they can be informed of dead links or outdated resources and act accordingly.\n\nThe name comes from the [French term](https://fr.wikipedia.org/wiki/Croque-mort) for [Funeral director](https://en.wikipedia.org/wiki/Funeral_director).\n\n\n## Language\n\nThe development language is English. All comments and documentation should be written in English, so that we don't end up with \u201cfranglais\u201d methods, and so we can share our learnings with developers around the world.\n\n\n## History\n\nWe started this project in May 2015 for [data.gouv.fr](http://data.gouv.fr/).\n\nWe have open-sourced it from the beginning because we want to design things in the open and involve citizens and hackers in its development.\n\n\n## Installation\n\nWe\u2019re using these technologies: RabbitMQ and Redis. 
You have to install and launch these dependencies before installing and running the Python packages.\n\nOnce installed, run these commands to set up the project:\n\n```shell\n$ python3 -m venv ~/.virtualenvs/croquemort\n$ source ~/.virtualenvs/croquemort/bin/activate\n$ pip3 install -r requirements/develop.pip\n```\n\nYou're good to go!\n\n\n## Usage\n\nFirst you have to run the `http` service in order to receive incoming HTTP calls. You can run it with this command:\n\n```shell\n$ nameko run croquemort.http\nstarting services: http_server\nConnected to amqp://guest:**@127.0.0.1:5672//\n```\n\nThen, in a new shell, launch the `crawler` service, which will fetch the submitted URLs in the background:\n\n```shell\n$ nameko run croquemort.crawler\nstarting services: url_crawler\nConnected to amqp://guest:**@127.0.0.1:5672//\n```\n\nYou can optionally use the proposed configuration (and tweak it) to get some logs (`INFO` level by default):\n\n```shell\n$ nameko run --config config.yaml croquemort.crawler\n```\n\nYou can raise the number of crawler workers (10 by default) in the config file, for instance to 50:\n```yaml\nmax_workers: 50\n```\n\n\n### Browsing your data\n\nAt any time, you can open `http://localhost:8000/` and check the availability of your URL collections within a nice dashboard that allows you to filter by statuses, content types, URL schemes, last updates and/or domains. 
There is even a CSV export of the data you are currently viewing if you want to script something.\n\n\n### Fetching one URL\n\nNow you can use your favorite HTTP client (mine is [httpie](https://github.com/jakubroztocil/httpie)) to issue a POST request against `localhost:8000/check/one` with the URL as a parameter:\n\n```shell\n$ http :8000/check/one url=\"https://www.data.gouv.fr/fr/\"\nHTTP/1.1 200 OK\nConnection: keep-alive\nContent-Length: 28\nContent-Type: text/plain; charset=utf-8\nDate: Wed, 03 Jun 2015 14:21:50 GMT\n\n{\n \"url-hash\": \"u:fc6040c5\"\n}\n```\n\nThe service returns a URL hash that will be used to retrieve information related to that URL:\n\n```shell\n$ http :8000/url/u:fc6040c5\nHTTP/1.1 200 OK\nConnection: keep-alive\nContent-Length: 335\nContent-Type: text/plain; charset=utf-8\nDate: Wed, 03 Jun 2015 14:22:57 GMT\n\n{\n \"etag\": \"\",\n \"checked-url\": \"https://www.data.gouv.fr/fr/\",\n \"final-url\": \"https://www.data.gouv.fr/fr/\",\n \"content-length\": \"\",\n \"content-disposition\": \"\",\n \"content-md5\": \"\",\n \"content-location\": \"\",\n \"expires\": \"\",\n \"final-status-code\": \"200\",\n \"updated\": \"2015-06-03T16:21:52.569974\",\n \"last-modified\": \"\",\n \"content-encoding\": \"gzip\",\n \"content-type\": \"text/html; charset=utf-8\"\n}\n```\n\nOr you can use the URL passed as a GET parameter (less error-prone):\n\n```shell\n$ http GET :8000/url url=https://www.data.gouv.fr/fr/\nHTTP/1.1 200 OK\nConnection: keep-alive\nContent-Length: 335\nContent-Type: text/plain; charset=utf-8\nDate: Wed, 03 Jun 2015 14:23:35 GMT\n\n{\n \"etag\": \"\",\n \"checked-url\": \"https://www.data.gouv.fr/fr/\",\n \"final-url\": \"https://www.data.gouv.fr/fr/\",\n \"content-length\": \"\",\n \"content-disposition\": \"\",\n \"content-md5\": \"\",\n \"content-location\": \"\",\n \"expires\": \"\",\n \"final-status-code\": \"200\",\n \"updated\": \"2015-06-03T16:21:52.569974\",\n \"last-modified\": \"\",\n \"content-encoding\": 
\"gzip\",\n \"content-type\": \"text/html; charset=utf-8\"\n}\n```\n\nBoth return the same amount of information.\n\n\n### Fetching many URLs\n\nYou can also use your HTTP client to issue a POST request against `localhost:8000/check/many` with the URLs and the name of the group as parameters:\n\n```shell\n$ http :8000/check/many urls:='[\"https://www.data.gouv.fr/fr/\",\"https://www.data.gouv.fr/s/images/2015-03-31/d2eb53b14c5f4e6690e150ea7be40a88/cover-datafrance-retina.png\"]' group=\"datagouvfr\"\nHTTP/1.1 200 OK\nConnection: keep-alive\nContent-Length: 30\nContent-Type: text/plain; charset=utf-8\nDate: Wed, 03 Jun 2015 14:24:00 GMT\n\n{\n \"group-hash\": \"g:efcf3897\"\n}\n```\n\nThis time, the service returns a group hash that will be used to retrieve information related to that group:\n\n```shell\n$ http :8000/group/g:efcf3897\nHTTP/1.1 200 OK\nConnection: keep-alive\nContent-Length: 941\nContent-Type: text/plain; charset=utf-8\nDate: Wed, 03 Jun 2015 14:26:04 GMT\n\n{\n \"u:179d104f\": {\n \"content-encoding\": \"\",\n \"content-disposition\": \"\",\n \"group\": \"g:efcf3897\",\n \"last-modified\": \"Tue, 31 Mar 2015 14:38:37 GMT\",\n \"content-md5\": \"\",\n \"checked-url\": \"https://www.data.gouv.fr/s/images/2015-03-31/d2eb53b14c5f4e6690e150ea7be40a88/cover-datafrance-retina.png\",\n \"final-url\": \"https://www.data.gouv.fr/s/images/2015-03-31/d2eb53b14c5f4e6690e150ea7be40a88/cover-datafrance-retina.png\",\n \"final-status-code\": \"200\",\n \"expires\": \"\",\n \"content-type\": \"image/png\",\n \"content-length\": \"280919\",\n \"updated\": \"2015-06-03T16:24:00.405636\",\n \"etag\": \"\\\"551ab16d-44957\\\"\",\n \"content-location\": \"\"\n },\n \"name\": \"datagouvfr\",\n \"u:fc6040c5\": {\n \"content-disposition\": \"\",\n \"content-encoding\": \"gzip\",\n \"group\": \"g:efcf3897\",\n \"last-modified\": \"\",\n \"content-md5\": \"\",\n \"content-location\": \"\",\n \"content-length\": \"\",\n \"expires\": \"\",\n \"content-type\": \"text/html; 
charset=utf-8\",\n \"final-status-code\": \"200\",\n \"updated\": \"2015-06-03T16:24:02.398105\",\n \"etag\": \"\",\n \"checked-url\": \"https://www.data.gouv.fr/fr/\",\n \"final-url\": \"https://www.data.gouv.fr/fr/\"\n }\n}\n```\n\nOr you can use the group name passed as a GET parameter (less error-prone):\n\n```shell\n$ http GET :8000/group/ group=datagouvfr\nHTTP/1.1 200 OK\nConnection: keep-alive\nContent-Length: 335\nContent-Type: text/plain; charset=utf-8\nDate: Wed, 03 Jun 2015 14:23:35 GMT\n\n{\n \"etag\": \"\",\n \"checked-url\": \"https://www.data.gouv.fr/fr/\",\n \"final-url\": \"https://www.data.gouv.fr/fr/\",\n \"content-length\": \"\",\n \"content-disposition\": \"\",\n \"content-md5\": \"\",\n \"content-location\": \"\",\n \"expires\": \"\",\n \"final-status-code\": \"200\",\n \"updated\": \"2015-06-03T16:21:52.569974\",\n \"last-modified\": \"\",\n \"content-encoding\": \"gzip\",\n \"content-type\": \"text/html; charset=utf-8\"\n}\n```\n\nBoth return the same amount of information.\n\n\n### Redirect handling\n\nWhether fetching one or many URLs, croquemort has basic support for HTTP redirections. First, croquemort follows any redirections to the final destination (via the `allow_redirects` option of the `requests` library). Furthermore, croquemort stores some information about the redirection: the first redirect status code and the final URL. 
When encountering a redirection, the JSON response looks like this (note `redirect-url` and `redirect-status-code`):\n\n```json\n{\n \"checked-url\": \"https://goo.gl/ovZB\",\n \"final-url\": \"http://news.ycombinator.com\",\n \"final-status-code\": \"200\",\n \"redirect-url\": \"https://goo.gl/ovZB\",\n \"redirect-status-code\": \"301\",\n \"etag\": \"\",\n \"content-length\": \"\",\n \"content-disposition\": \"\",\n \"content-md5\": \"\",\n \"content-location\": \"\",\n \"expires\": \"\",\n \"updated\": \"2015-06-03T16:21:52.569974\",\n \"last-modified\": \"\",\n \"content-encoding\": \"gzip\",\n \"content-type\": \"text/html; charset=utf-8\" \n}\n```\n\n\n### Filtering results\n\nYou can filter results returned for a given group by header (or status) with the `filter_` prefix:\n\n```shell\n$ http GET :8000/group/g:efcf3897 filter_content-type=\"image/png\"\nHTTP/1.1 200 OK\nConnection: keep-alive\nContent-Length: 539\nContent-Type: text/plain; charset=utf-8\nDate: Wed, 03 Jun 2015 14:27:07 GMT\n\n{\n \"u:179d104f\": {\n \"content-encoding\": \"\",\n \"content-disposition\": \"\",\n \"group\": \"g:efcf3897\",\n \"last-modified\": \"Tue, 31 Mar 2015 14:38:37 GMT\",\n \"content-md5\": \"\",\n \"checked-url\": \"https://www.data.gouv.fr/s/images/2015-03-31/d2eb53b14c5f4e6690e150ea7be40a88/cover-datafrance-retina.png\",\n \"final-url\": \"https://www.data.gouv.fr/s/images/2015-03-31/d2eb53b14c5f4e6690e150ea7be40a88/cover-datafrance-retina.png\",\n \"final-status-code\": \"200\",\n \"expires\": \"\",\n \"content-type\": \"image/png\",\n \"content-length\": \"280919\",\n \"updated\": \"2015-06-03T16:24:00.405636\",\n \"etag\": \"\\\"551ab16d-44957\\\"\",\n \"content-location\": \"\"\n },\n \"name\": \"datagouvfr\"\n}\n```\n\nYou can exclude results returned for a given group by header (or status) with the `exclude_` prefix:\n\n```shell\n$ http GET :8000/group/g:efcf3897 exclude_content-length=\"\"\nHTTP/1.1 200 OK\nConnection: keep-alive\nContent-Length: 
539\nContent-Type: text/plain; charset=utf-8\nDate: Wed, 03 Jun 2015 14:27:58 GMT\n\n{\n \"u:179d104f\": {\n \"content-encoding\": \"\",\n \"content-disposition\": \"\",\n \"group\": \"g:efcf3897\",\n \"last-modified\": \"Tue, 31 Mar 2015 14:38:37 GMT\",\n \"content-md5\": \"\",\n \"checked-url\": \"https://www.data.gouv.fr/s/images/2015-03-31/d2eb53b14c5f4e6690e150ea7be40a88/cover-datafrance-retina.png\",\n \"final-url\": \"https://www.data.gouv.fr/s/images/2015-03-31/d2eb53b14c5f4e6690e150ea7be40a88/cover-datafrance-retina.png\",\n \"final-status-code\": \"200\",\n \"expires\": \"\",\n \"content-type\": \"image/png\",\n \"content-length\": \"280919\",\n \"updated\": \"2015-06-03T16:24:00.405636\",\n \"etag\": \"\\\"551ab16d-44957\\\"\",\n \"content-location\": \"\"\n },\n \"name\": \"datagouvfr\"\n}\n```\n\nNote that in both cases, the `http` and the `crawler` services emit useful logging information for debugging (if you pass the `--config config.yaml` option to the `run` command).\n\n\n### Computing many URLs\n\nYou can programmatically register new URLs and groups using the RPC proxy. There is an example in the `tests/example_csv.py` file which computes URLs from a CSV file (one URL per line).\n\n```shell\n$ PYTHONPATH=. python tests/example_csv.py --csvfile path/to/your/file.csv --group groupname\nGroup hash: g:2752262332\n```\n\nThe script returns a group hash that you can use through the HTTP interface as documented above.\n\n\n### Frequencies\n\nYou may want to periodically check existing groups of URLs in the background. In that case, launch the `timer` service:\n\n```shell\n$ nameko run croquemort.timer\nstarting services: timer\nConnected to amqp://guest:**@127.0.0.1:5672//\n```\n\nYou can now specify a `frequency` parameter when you `POST` against `/check/many` or when you launch the command via the shell:\n\n```shell\n$ PYTHONPATH=. 
python tests/example_csv.py --csvfile path/to/your/file.csv --group groupname --frequency hourly\nGroup hash: g:2752262332\n```\n\nThere are three possibilities: \"hourly\", \"daily\" and \"monthly\". If you don't specify one, you'll have to refresh URL checks manually. The `timer` service will check groups with associated frequencies and refresh associated URLs accordingly.\n\n\n### Webhook\n\nInstead of polling the results endpoints to get the results of one or many URL checks, you can ask Croquemort to call a webhook when a check is completed.\n\n```shell\n$ nameko run croquemort.webhook\nstarting services: webhook_dispatcher\nConnected to amqp://guest:**@127.0.0.1:5672//\n```\n\nYou can now specify a `callback_url` parameter when you `POST` against `/check/one` or `/check/many`.\n\n```shell\n$ http :8000/check/one url=\"https://www.data.gouv.fr/fr/\" callback_url=\"http://example.org/cb\"\nHTTP/1.1 200 OK\nConnection: keep-alive\nContent-Length: 28\nContent-Type: text/plain; charset=utf-8\nDate: Wed, 03 Jun 2015 14:21:50 GMT\n\n{\n \"url-hash\": \"u:fc6040c5\"\n}\n```\n\nWhen the check is completed, a `POST` request is emitted to `http://example.org/cb` with the metadata of the check. The webhook service expects a successful (e.g. 200) HTTP status code. If not, it will retry 5 times by default, waiting 10 seconds before the first retry and doubling the delay after each subsequent attempt. 
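The schedule described above works out to waits of 10, 20, 40, 80 and 160 seconds across the five attempts; here is a minimal sketch of that arithmetic (the function and parameter names are illustrative, not croquemort's actual code):

```python
# Illustrative sketch of the webhook retry schedule described above:
# 5 retries, a 10-second initial delay, and a backoff factor of 2.
# Function and parameter names are hypothetical, not croquemort's code.
def backoff_delays(nb_retry=5, delay_interval=10, backoff_factor=2):
    """Return the wait (in seconds) before each retry attempt."""
    return [delay_interval * backoff_factor ** i for i in range(nb_retry)]

print(backoff_delays())  # [10, 20, 40, 80, 160]
```

With these defaults, a callback that keeps failing is abandoned after roughly five minutes of cumulative waiting.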
You can customize those values by setting the variables `WEBHOOK_NB_RETRY`, `WEBHOOK_DELAY_INTERVAL` and `WEBHOOK_BACKOFF_FACTOR`. The payload sent to the webhook looks like this:\n\n```json\n{\n \"data\": {\n \"checked-url\": \"http://yahoo.fr\",\n \"final-url\": \"http://yahoo.fr\",\n \"group\": \"g:a80c20d4\",\n \"frequency\": \"hourly\",\n \"final-status-code\": \"200\",\n \"updated\": \"2017-07-10T12:50:20.219819\",\n \"etag\": \"\",\n \"expires\": \"-1\",\n \"last-modified\": \"\",\n \"charset\": \"utf-8\",\n \"content-type\": \"text/html\",\n \"content-length\": \"\",\n \"content-disposition\": \"\",\n \"content-md5\": \"\",\n \"content-encoding\": \"gzip\",\n \"content-location\": \"\"\n }\n}\n```\n\n\n### Migrations\n\nYou may want to migrate some data over time with the `migrations` service:\n\n```shell\n$ nameko run croquemort.migrations\nstarting services: migrations\nConnected to amqp://guest:**@127.0.0.1:5672//\n```\n\nYou can now run a nameko shell:\n\n```shell\n$ nameko shell\n>>> n.rpc.migrations.split_content_types()\n>>> n.rpc.migrations.delete_urls_for('www.data.gouv.fr')\n>>> n.rpc.migrations.delete_urls_for('static.data.gouv.fr')\n```\n\nThe `split_content_types` migration is useful if you used Croquemort prior to the integration of the report: we used to store the whole string without splitting out the `charset`, leading to fragmentation of the Content-Types report graph.\n\nThe `delete_urls_for` migration is useful if you want to delete all URLs related to a given `domain`, which you must pass as a parameter: we accidentally checked URLs that are under our control, so we decided to clean up in order to reduce the size of the Redis database and increase the relevance of reports.\n\nThe `migrate_from_1_to_2` (meta migration for `migrate_urls_redirect` and `add_hash_prefixes`) is used to migrate your database from croquemort `v1` to `v2`. 
In `v2` there are breaking changes from `v1` to the API JSON schema for a check result:\n- `url` becomes `checked-url`\n- `status` becomes `final-status-code`\n\nYou are encouraged to add your own generic migrations to the service and share those with the community via pull-requests (see below).\n\n\n## Contributing\n\nWe\u2019re really happy to accept contributions from the community; that\u2019s the main reason why we open-sourced it! There are many ways to contribute, even if you\u2019re not a technical person.\n\nWe\u2019re using the well-known [simplified Github workflow](http://scottchacon.com/2011/08/31/github-flow.html) to accept modifications (even internally); basically you\u2019ll have to:\n\n* create an issue related to the problem you want to fix (good for traceability and cross-reference)\n* fork the repository\n* create a branch (optionally with a reference to the issue in the name)\n* hack hack hack\n* commit incrementally with readable and detailed commit messages\n* submit a pull-request against the master branch of this repository\n\nWe\u2019ll take care of tagging your issue with the appropriate labels and will answer the problem you encounter within a week (hopefully less!).\n\nIf you\u2019re not familiar with open-source workflows or our set of technologies, do not hesitate to ask for help! We can mentor you or propose good first bugs (as labeled in our issues). You are also welcome to add your name to the Credits section of this document.\n\n\n### Submitting bugs\n\nYou can report issues directly on Github; that would be a really useful contribution given that we lack some user testing on the project. Please document the steps to reproduce your problem as thoroughly as possible (even better with screenshots).\n\n\n### Adding documentation\n\nWe\u2019re doing our best to document each usage of the project, but you can improve this file and add your own sections.\n\n\n### Hacking backend\n\nHello fellow hacker, it\u2019s good to have you on board! 
We plan to implement these features in the near future; feel free to pick the one you want to contribute to and open an issue for it:\n\n* verifying mimetypes, extensions, sizes, caches, etc.\n* periodic fetching\n* reporting for a group of URLs\n\n\n### Testing\n\nBefore submitting any pull-request, you must ensure tests are passing.\nYou should add tests for any new feature and/or bugfix.\nYou can run tests with the following command:\n```shell\n$ python -m pytest tests/\n```\n\nYou must have RabbitMQ and Redis running for the tests to pass.\n\nA `docker-compose.yml` file is provided to get up and running quickly:\n```shell\n$ docker-compose up -d\nCreating croquemort_redis_1...\nCreating croquemort_rabbitmq_1...\n$ python -m pytest tests/\n```\n\nIf you use your own middleware with a different configuration,\nyou can pass that configuration as py.test command line arguments:\n```shell\npython -m pytest tests/ --redis-uri=redis://myredis:6379/0 --amqp-uri=amqp://john:doe@myrabbit\n```\n\nRead the py.test help to see all available options:\n```shell\npython -m pytest tests/ --help\n```\n\n\n## Versioning\n\nVersion numbering follows the [Semantic versioning](http://semver.org/) approach.\n\n\n## License\n\nWe\u2019re using the [MIT license](https://tldrlegal.com/license/mit-license).\n\n\n## Credits\n\n* [David Larlet](https://larlet.fr/david/)\n* [Alexandre Bult\u00e9](http://alexandre.bulte.net/)\n\n# Changelog\n\n## 2.1.0 (2019-05-07)\n\n- Fix HEAD timeout handling [#95](https://github.com/opendatateam/croquemort/pull/95)\n- Remove homepage/dashboard [#140](https://github.com/opendatateam/croquemort/pull/140)\n\n## 2.0.4 (2018-01-24)\n\n- Fix packaging and other enhancements [#49](https://github.com/opendatateam/croquemort/pull/49)\n\n## 2.0.3 (2018-01-08)\n\n- Fix setup.py\n\n## 2.0.2 (2018-01-08)\n\n- Fix publish job on Circle\n\n## 2.0.1 (2018-01-08)\n\n- Add packaging workflow, publish on PyPI\n\n## 2.0.0 (2017-10-23)\n\nMandatory migrations from `1.0.0` 
to `2.0.0`:\n\n```bash\n# Launch the migrations service - logs will be displayed here\nnameko run croquemort.migrations --config config.yaml\n# In another terminal session\nnameko shell\n>>> n.rpc.migrations.migrate_from_1_to_2()\n```\n\n### Breaking changes\n\n- Hash prefixes in Redis DB\n [#26](https://github.com/opendatateam/croquemort/issues/26)\n \u2014 associated migration: `add_hash_prefixes`\n- Redirection support\n [#1](https://github.com/opendatateam/croquemort/issues/1)\n \u2014 associated migration: `migrate_urls_redirect`\n\n### Features\n\n- Webhook support\n [#24](https://github.com/opendatateam/croquemort/issues/24)\n\n### Misc\n\n- Remove logbook in favor of logging\n [#29](https://github.com/opendatateam/croquemort/issues/29)\n\n## 1.0.0 (2017-01-23)\n\n- Initial version\n\n\n\n", "description_content_type": "", "docs_url": null, "download_url": "", "downloads": { "last_day": -1, "last_month": -1, "last_week": -1 }, "home_page": "https://github.com/opendatateam/croquemort", "keywords": "linkchecker croquemort", "license": "MIT", "maintainer": "", "maintainer_email": "", "name": "croquemort", "package_url": "https://pypi.org/project/croquemort/", "platform": "", "project_url": "https://pypi.org/project/croquemort/", "project_urls": { "Homepage": "https://github.com/opendatateam/croquemort" }, "release_url": "https://pypi.org/project/croquemort/2.1.0/", "requires_dist": [ "Jinja2 (==2.10.1)", "redis (==3.2.1)", "nameko (==2.12.0)", "wrapt (==1.11.1)", "validators (==0.12.5)", "requests-mock (==1.6.0) ; extra == 'test'", "pytest (==4.4.1) ; extra == 'test'" ], "requires_python": "", "summary": "Croquemort linkchecker", "version": "2.1.0" }, "last_serial": 5237540, "releases": { "2.0.3": [ { "comment_text": "", "digests": { "md5": "ea740f84e507664842983930c526ad6f", "sha256": "9d0147a744385935463e8d8e00522fbcc46567d776af4c6a97e8da66af70b5b8" }, "downloads": -1, "filename": "croquemort-2.0.3-py3-none-any.whl", "has_sig": false, "md5_digest": 
"ea740f84e507664842983930c526ad6f", "packagetype": "bdist_wheel", "python_version": "py3", "requires_python": null, "size": 35235, "upload_time": "2018-01-08T12:42:03", "url": "https://files.pythonhosted.org/packages/f6/58/7f345b0d3d3d27cf63f83f41a7c52ca427d279550c6e953ace8b40cea824/croquemort-2.0.3-py3-none-any.whl" } ], "2.0.4": [ { "comment_text": "", "digests": { "md5": "f6b197f82b0231065e69d0f90e68e8fb", "sha256": "416f29954edf3f10b59480b8407582a87fedfaf483395f3489970687b6f4962c" }, "downloads": -1, "filename": "croquemort-2.0.4-py3-none-any.whl", "has_sig": false, "md5_digest": "f6b197f82b0231065e69d0f90e68e8fb", "packagetype": "bdist_wheel", "python_version": "py3", "requires_python": null, "size": 37774, "upload_time": "2018-01-24T12:50:29", "url": "https://files.pythonhosted.org/packages/f8/77/63626c921c5402e7afcbe917a91d3aa0f6c9134529c995bcd6f1cd1707b9/croquemort-2.0.4-py3-none-any.whl" } ], "2.1.0": [ { "comment_text": "", "digests": { "md5": "7f5d4bd044843aed64d9df0590fe9eaf", "sha256": "85a6fb4dea92782283f19810df802ef81103e4f7d26bd2a80ab886fdb859f5fc" }, "downloads": -1, "filename": "croquemort-2.1.0-py3-none-any.whl", "has_sig": false, "md5_digest": "7f5d4bd044843aed64d9df0590fe9eaf", "packagetype": "bdist_wheel", "python_version": "py3", "requires_python": null, "size": 30088, "upload_time": "2019-05-07T10:08:35", "url": "https://files.pythonhosted.org/packages/85/58/77c327ffc31fb2d4b65dd355221b360efc67d52d7c12ccebfe9335412756/croquemort-2.1.0-py3-none-any.whl" } ] }, "urls": [ { "comment_text": "", "digests": { "md5": "7f5d4bd044843aed64d9df0590fe9eaf", "sha256": "85a6fb4dea92782283f19810df802ef81103e4f7d26bd2a80ab886fdb859f5fc" }, "downloads": -1, "filename": "croquemort-2.1.0-py3-none-any.whl", "has_sig": false, "md5_digest": "7f5d4bd044843aed64d9df0590fe9eaf", "packagetype": "bdist_wheel", "python_version": "py3", "requires_python": null, "size": 30088, "upload_time": "2019-05-07T10:08:35", "url": 
"https://files.pythonhosted.org/packages/85/58/77c327ffc31fb2d4b65dd355221b360efc67d52d7c12ccebfe9335412756/croquemort-2.1.0-py3-none-any.whl" } ] }