{ "info": { "author": "Justus Schock", "author_email": "justus.schock@rwth-aachen.de", "bugtrack_url": null, "classifiers": [], "description": "Copyright (c) 2019, Justus Schock\nAll rights reserved.\n\nRedistribution and use in source and binary forms, with or without\nmodification, are permitted provided that the following conditions are met:\n\n* Redistributions of source code must retain the above copyright notice, this\n list of conditions and the following disclaimer.\n\n* Redistributions in binary form must reproduce the above copyright notice,\n this list of conditions and the following disclaimer in the documentation\n and/or other materials provided with the distribution.\n\nTHIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS \"AS IS\"\nAND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE\nIMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE ARE\nDISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT HOLDER OR CONTRIBUTORS BE LIABLE\nFOR ANY DIRECT, INDIRECT, INCIDENTAL, SPECIAL, EXEMPLARY, OR CONSEQUENTIAL\nDAMAGES (INCLUDING, BUT NOT LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR\nSERVICES; LOSS OF USE, DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER\nCAUSED AND ON ANY THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY,\nOR TORT (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE\nOF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.\n\nDescription: # shapenet\n \n [![Build Status](https://travis-ci.com/justusschock/shapenet.svg?token=GsT2RFaJJMxpqLAN3xuh&branch=master)](https://travis-ci.com/justusschock/shapenet) [![codecov](https://codecov.io/gh/justusschock/shapenet/branch/master/graph/badge.svg?token=gpwVgQjw18)](https://codecov.io/gh/justusschock/shapenet) ![LICENSE](https://img.shields.io/github/license/justusschock/shapedata.svg)\n \n This repository contains the [PyTorch](https://pytorch.org) implementation of [our Paper \"SUPER-REALTIME FACIAL LANDMARK 
DETECTION AND SHAPE FITTING BY DEEP REGRESSION OF SHAPE MODEL PARAMETERS\"](#our-paper).\n \n ## Contents\n * [Installation](#installation)\n * [Usage](#usage)\n * [By Scripts](#by-scripts)\n * [From Python](#from-python)\n * [Pretrained Weights](#pretrained-weights)\n * [Our Paper](#our-paper)\n \n ## Installation\n \n ### From Binary:\n `pip install shapenet`\n \n ### From Source:\n `pip install git+https://github.com/justusschock/shapenet` \n \n ## Usage\n ### By Scripts\n For simplicity, we provide several scripts to preprocess the data, train networks, predict from networks, and export the network via [`torch.jit`](https://pytorch.org/docs/stable/jit.html).\n To get a list of the necessary and accepted arguments, run the script with the `-h` flag.\n \n #### Data Preprocessing\n * `prepare_all_data`: Prepares multiple datasets (you can select the datasets to preprocess via arguments passed to this script)\n * `prepare_cat_dset`: Downloads and preprocesses the [Cat-Dataset](https://www.kaggle.com/crawford/cat-dataset)\n * `prepare_helen_dset`: Preprocesses an already downloaded ZIP file of the [HELEN Dataset](http://www.ifp.illinois.edu/~vuongle2/helen/) (downloading from [here](https://ibug.doc.ic.ac.uk/download/annotations/helen.zip) is recommended, since this version already contains the landmarks)\n * `prepare_lfpw_dset`: Preprocesses an already downloaded ZIP file of the [LFPW Dataset](https://neerajkumar.org/databases/lfpw/) (downloading from [here](https://ibug.doc.ic.ac.uk/download/annotations/lfpw.zip) is recommended, since this version already contains the landmarks)\n \n #### Training\n * `train_shapenet`: Trains the shapenet with the configuration specified in a separate configuration file (example configurations for all available datasets are provided in the [example_configs](example_configs) folder)\n \n #### Prediction\n * `predict_from_net`: Predicts all images in a given directory (assumes existing ground truths for cropping; otherwise, the cropping to the ground truth could be 
replaced by a detector)\n \n #### JIT-Export\n * `export_to_jit`: Traces the given model and saves it as a jit-ScriptModule, which can be accessed via Python and C++\n \n ### From Python\n This implementation uses the [`delira`-Framework](https://github.com/justusschock/delira) for training and validation handling. It supports mixed-precision training and inference via [NVIDIA/APEX](https://github.com/NVIDIA/apex) (must be installed separately). The data handling is outsourced to [shapedata](https://github.com/justusschock/shapedata).\n \n The following gives a short overview of the packages and classes.\n \n #### `shapenet.networks` \n The `networks` subpackage contains the actual implementation of the shapenet, with bindings to integrate the `ShapeLayer` and other feature extractors (currently the ones registered in `torchvision.models`).\n \n #### `shapenet.layer`\n The `layer` subpackage contains the Python and C++ implementations of the `ShapeLayer` and the affine transformations. These are intended to be used as layers in `shapenet.networks`.\n \n #### `shapenet.jit`\n The `jit` subpackage is a less flexible reimplementation of the subpackages `shapenet.networks` and `shapenet.layer`, used to export trained weights as a jit-ScriptModule.\n \n #### `shapenet.utils`\n The `utils` subpackage contains everything that did not fit into the scope of any other package. 
Currently, it is mainly responsible for parsing configuration files.\n \n #### `shapenet.scripts`\n The `scripts` subpackage contains all scripts described in [Scripts](#by-scripts) and their helper functions.\n \n ### Pretrained Weights\n Currently, pretrained weights are available for [grayscale faces](https://drive.google.com/file/d/1QS2GUZK9xKWvpbDYgUCc-m0qI60TMnLj/view?usp=sharing) and [cats](https://drive.google.com/file/d/13S-4vLmmUBNy2XKJl_yR1u7Z283Iu1zB/view?usp=sharing).\n \n For these networks, the image size is fixed to 224, and the pretrained weights can be loaded via `torch.jit.load(\"PATH/TO/NETWORK/FILE.ptj\")`. The inputs have to be of type `torch.Tensor` with dtype `torch.float`, of shape `(BATCH_SIZE, 1, 224, 224)`, and normalized to the range (0, 1).\n \n \n ## Our Paper\n If you use our code for your own research, please cite our paper:\n ```\n @article{Kopaczka2019,\n title = {Super-Realtime Facial Landmark Detection and Shape Fitting by Deep Regression of Shape Model Parameters},\n author = {Marcin Kopaczka and Justus Schock and Dorit Merhof},\n year = {2019},\n journal = {arXiv preprint}\n }\n ```\n A link to the PDF will be provided as soon as the preprint is available online.\n \nPlatform: UNKNOWN\nClassifier: Development Status :: 4 - Beta\nClassifier: Intended Audience :: Developers\nClassifier: Intended Audience :: Education\nClassifier: Intended Audience :: Science/Research\nClassifier: Natural Language :: English\nClassifier: Programming Language :: Python :: 3\nClassifier: Topic :: Scientific/Engineering :: Artificial Intelligence\nClassifier: Topic :: Scientific/Engineering :: Medical Science Apps.\n", "description_content_type": "", "docs_url": null, "download_url": "", "downloads": { "last_day": -1, "last_month": -1, "last_week": -1 }, "home_page": "https://github.com/justussschock/shapenet", "keywords": "", "license": "BSD 2-Clause License", "maintainer": "", "maintainer_email": "", "name": "shapenet", "package_url": 
"https://pypi.org/project/shapenet/", "platform": "", "project_url": "https://pypi.org/project/shapenet/", "project_urls": { "Homepage": "https://github.com/justussschock/shapenet" }, "release_url": "https://pypi.org/project/shapenet/0.1.0/", "requires_dist": [ "delira[torch]", "shapedata", "ninja", "kaggle", "pandas" ], "requires_python": "", "summary": "", "version": "0.1.0" }, "last_serial": 4799570, "releases": { "0.1.0": [ { "comment_text": "", "digests": { "md5": "5bb030b96783c2ca36fbf0ed580a6021", "sha256": "56f6b2f47261f5def322e291d0a9aad1ff83d489b1520a481ac45d9630627939" }, "downloads": -1, "filename": "shapenet-0.1.0-py3-none-any.whl", "has_sig": false, "md5_digest": "5bb030b96783c2ca36fbf0ed580a6021", "packagetype": "bdist_wheel", "python_version": "py3", "requires_python": null, "size": 37306, "upload_time": "2019-02-09T15:31:27", "url": "https://files.pythonhosted.org/packages/0e/ec/b70775b5848d69fe41e3b1b10454f6d887f8846cf9e2cdba872c5deaa42b/shapenet-0.1.0-py3-none-any.whl" }, { "comment_text": "", "digests": { "md5": "280ce556883cbe67870fabcdeaf45314", "sha256": "926d9870cfaa9f9eef1aef97e8d47e728ded0522c183984d26ce9d2dc30115f7" }, "downloads": -1, "filename": "shapenet-0.1.0.tar.gz", "has_sig": false, "md5_digest": "280ce556883cbe67870fabcdeaf45314", "packagetype": "sdist", "python_version": "source", "requires_python": null, "size": 25989, "upload_time": "2019-02-09T15:31:29", "url": "https://files.pythonhosted.org/packages/98/d8/8eef79f3992ad919f82fd2f61582b9f9bbbfc514d26ddb79beb9dad3e024/shapenet-0.1.0.tar.gz" } ] }, "urls": [ { "comment_text": "", "digests": { "md5": "5bb030b96783c2ca36fbf0ed580a6021", "sha256": "56f6b2f47261f5def322e291d0a9aad1ff83d489b1520a481ac45d9630627939" }, "downloads": -1, "filename": "shapenet-0.1.0-py3-none-any.whl", "has_sig": false, "md5_digest": "5bb030b96783c2ca36fbf0ed580a6021", "packagetype": "bdist_wheel", "python_version": "py3", "requires_python": null, "size": 37306, "upload_time": "2019-02-09T15:31:27", 
"url": "https://files.pythonhosted.org/packages/0e/ec/b70775b5848d69fe41e3b1b10454f6d887f8846cf9e2cdba872c5deaa42b/shapenet-0.1.0-py3-none-any.whl" }, { "comment_text": "", "digests": { "md5": "280ce556883cbe67870fabcdeaf45314", "sha256": "926d9870cfaa9f9eef1aef97e8d47e728ded0522c183984d26ce9d2dc30115f7" }, "downloads": -1, "filename": "shapenet-0.1.0.tar.gz", "has_sig": false, "md5_digest": "280ce556883cbe67870fabcdeaf45314", "packagetype": "sdist", "python_version": "source", "requires_python": null, "size": 25989, "upload_time": "2019-02-09T15:31:29", "url": "https://files.pythonhosted.org/packages/98/d8/8eef79f3992ad919f82fd2f61582b9f9bbbfc514d26ddb79beb9dad3e024/shapenet-0.1.0.tar.gz" } ] }