{ "info": { "author": "Pavel Yakubovskiy", "author_email": "qubvel@gmail.com", "bugtrack_url": null, "classifiers": [ "License :: OSI Approved :: MIT License", "Programming Language :: Python", "Programming Language :: Python :: 3", "Programming Language :: Python :: Implementation :: CPython", "Programming Language :: Python :: Implementation :: PyPy" ], "description": "\n# Segmentation models\n[![Build Status](https://travis-ci.com/qubvel/segmentation_models.pytorch.svg?branch=master)](https://travis-ci.com/qubvel/segmentation_models.pytorch) [![Generic badge](https://img.shields.io/badge/License-MIT-.svg)](https://shields.io/)\n\nSegmentation models is a Python library with neural networks for image segmentation based on PyTorch.\n\nThe main features of this library are:\n\n - High-level API (just two lines to create a neural network)\n - 4 model architectures for binary and multi-class segmentation (including the legendary Unet)\n - 30 available encoders for each architecture\n - All encoders have pre-trained weights for faster and better convergence\n\n### Table of contents\n 1. [Quick start](#start)\n 2. [Examples](#examples)\n 3. [Models](#models) \n 1. [Architectures](#architectures)\n 2. [Encoders](#encoders)\n 3. [Pretrained weights](#weights)\n 4. [Models API](#api)\n 5. [Installation](#installation)\n 6. 
[License](#license)\n\n### Quick start \nSince the library is built on the PyTorch framework, a created segmentation model is just a PyTorch nn.Module, which can be created as easily as:\n```python\nimport segmentation_models_pytorch as smp\n\nmodel = smp.Unet()\n```\nDepending on the task, you can change the network architecture by choosing backbones with fewer or more parameters and use pretrained weights to initialize it:\n\n```python\nmodel = smp.Unet('resnet34', encoder_weights='imagenet')\n```\n\nChange the number of output classes in the model:\n\n```python\nmodel = smp.Unet('resnet34', classes=3, activation='softmax')\n```\n\nAll models have pretrained encoders, so you have to prepare your data the same way as during weight pretraining:\n```python\nfrom segmentation_models_pytorch.encoders import get_preprocessing_fn\n\npreprocess_input = get_preprocessing_fn('resnet18', pretrained='imagenet')\n```\n### Examples \n - Training a model for car segmentation on the CamVid dataset [here](https://github.com/qubvel/segmentation_models.pytorch/blob/master/examples/cars%20segmentation%20(camvid).ipynb).\n - Training a model with [Catalyst](https://github.com/catalyst-team/catalyst) (a high-level framework for PyTorch) - [here](https://colab.research.google.com/gist/Scitator/e3fd90eec05162e16b476de832500576/cars-segmentation-camvid.ipynb).\n\n### Models \n\n#### Architectures \n - [Unet](https://arxiv.org/abs/1505.04597)\n - [Linknet](https://arxiv.org/abs/1707.03718)\n - [FPN](http://presentations.cocodataset.org/COCO17-Stuff-FAIR.pdf)\n - [PSPNet](https://arxiv.org/abs/1612.01105)\n\n#### Encoders \n\n| Type | Encoder names |\n|------------|---------------------------------------------------------------------------------------------|\n| VGG | vgg11, vgg13, vgg16, vgg19, vgg11bn, vgg13bn, vgg16bn, vgg19bn |\n| DenseNet | densenet121, densenet169, densenet201, densenet161 |\n| DPN | dpn68, dpn68b, dpn92, dpn98, dpn107, dpn131 |\n| Inception | inceptionresnetv2 |\n| ResNet | 
resnet18, resnet34, resnet50, resnet101, resnet152 |\n| ResNeXt | resnext50_32x4d, resnext101_32x8d, resnext101_32x16d, resnext101_32x32d, resnext101_32x48d |\n| SE-ResNet | se_resnet50, se_resnet101, se_resnet152 |\n| SE-ResNeXt | se_resnext50_32x4d, se_resnext101_32x4d |\n| SENet | senet154 | \n\n#### Weights \n\n| Weights name | Encoder names |\n|---------------------------------------------------------------------------|-------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------|\n| imagenet+5k | dpn68b, dpn92, dpn107 |\n| imagenet | vgg11, vgg13, vgg16, vgg19, vgg11bn, vgg13bn, vgg16bn, vgg19bn,
densenet121, densenet169, densenet201, densenet161, dpn68, dpn98, dpn131,
inceptionresnetv2,
resnet18, resnet34, resnet50, resnet101, resnet152,
resnext50_32x4d, resnext101_32x8d,
se_resnet50, se_resnet101, se_resnet152,
se_resnext50_32x4d, se_resnext101_32x4d,
senet154 |\n| [instagram](https://pytorch.org/hub/facebookresearch_WSL-Images_resnext/) | resnext101_32x8d, resnext101_32x16d, resnext101_32x32d, resnext101_32x48d |\n\n### Models API \n - `model.encoder` - pretrained backbone that extracts features at different spatial resolutions \n - `model.decoder` - segmentation head; depends on the model architecture (`Unet`/`Linknet`/`PSPNet`/`FPN`) \n - `model.activation` - output activation function, one of `sigmoid`, `softmax`\n - `model.forward(x)` - sequentially passes `x` through the model's encoder and decoder (returns logits!) \n - `model.predict(x)` - inference method; switches the model to `.eval()` mode, calls `.forward(x)`, and applies the activation function under `torch.no_grad()`\n\n### Installation \nPyPI version:\n```bash\n$ pip install segmentation-models-pytorch\n```\nLatest version from source:\n```bash\n$ pip install git+https://github.com/qubvel/segmentation_models.pytorch\n```\n### License \nThe project is distributed under the [MIT License](https://github.com/qubvel/segmentation_models.pytorch/blob/master/LICENSE).\n\n### Run tests\n```bash\n$ docker build -f docker/Dockerfile.dev -t smp:dev .\n$ docker run --rm smp:dev pytest -p no:cacheprovider\n```\n\n\n", "description_content_type": "text/markdown", "docs_url": null, "download_url": "", "downloads": { "last_day": -1, "last_month": -1, "last_week": -1 }, "home_page": "https://github.com/qubvel/segmentation_models.pytorch", "keywords": "", "license": "MIT", "maintainer": "", "maintainer_email": "", "name": "segmentation-models-pytorch", "package_url": "https://pypi.org/project/segmentation-models-pytorch/", "platform": "", "project_url": "https://pypi.org/project/segmentation-models-pytorch/", "project_urls": { "Homepage": "https://github.com/qubvel/segmentation_models.pytorch" }, "release_url": "https://pypi.org/project/segmentation-models-pytorch/0.0.3/", "requires_dist": [ "torchvision (<=0.4.0,>=0.2.2)", "pretrainedmodels (==0.7.4)", "pytest ; extra == 'test'" ], 
"requires_python": ">=3.0.0", "summary": "Image segmentation models with pre-trained backbones. PyTorch.", "version": "0.0.3" }, "last_serial": 5900346, "releases": { "0.0.1": [ { "comment_text": "", "digests": { "md5": "eec9faf0a8a6f82e8d71c77fda701fd7", "sha256": "694e7985d98e2a1a2caece322c5fa3c4b2b7fcc2c78cd00afb0372026a4a62cb" }, "downloads": -1, "filename": "segmentation_models_pytorch-0.0.1-py2.py3-none-any.whl", "has_sig": false, "md5_digest": "eec9faf0a8a6f82e8d71c77fda701fd7", "packagetype": "bdist_wheel", "python_version": "py2.py3", "requires_python": ">=3.0.0", "size": 24051, "upload_time": "2019-04-20T10:22:42", "url": "https://files.pythonhosted.org/packages/53/7f/a009f9d116ca46be5ce8be2e2de318c4da57b62e32fa0b11a938b2808bb8/segmentation_models_pytorch-0.0.1-py2.py3-none-any.whl" }, { "comment_text": "", "digests": { "md5": "6166491a531663816d3f3484b546adde", "sha256": "f4b2338f995565f9a0018ac869fc7240569d87a4a63dfb666ed59533c7405423" }, "downloads": -1, "filename": "segmentation_models_pytorch-0.0.1.tar.gz", "has_sig": false, "md5_digest": "6166491a531663816d3f3484b546adde", "packagetype": "sdist", "python_version": "source", "requires_python": ">=3.0.0", "size": 16828, "upload_time": "2019-04-20T10:22:44", "url": "https://files.pythonhosted.org/packages/52/5c/60d03e239f3c49ab293b4c26b16262fcde77ca1eb95d8b794f75f9e0ecf0/segmentation_models_pytorch-0.0.1.tar.gz" } ], "0.0.2": [ { "comment_text": "", "digests": { "md5": "14f368a5bbcd7a35854c2ed7161a6b42", "sha256": "38e4c9869505b350c1a278c07c1cada0cd8d0761b3dbc7bd3ffb47dbbd3e91a5" }, "downloads": -1, "filename": "segmentation_models_pytorch-0.0.2.tar.gz", "has_sig": false, "md5_digest": "14f368a5bbcd7a35854c2ed7161a6b42", "packagetype": "sdist", "python_version": "source", "requires_python": ">=3.0.0", "size": 16539, "upload_time": "2019-09-19T11:35:13", "url": 
"https://files.pythonhosted.org/packages/02/4f/a067a7f424cd087e1adc7b0245ce9837a403a063631a94bff7dcb96c6f69/segmentation_models_pytorch-0.0.2.tar.gz" } ], "0.0.3": [ { "comment_text": "", "digests": { "md5": "1ac87e28f00df7ee8d538cc64e8527a1", "sha256": "2c4c4f4d843c438813193eaaae6eb7fad1057cf2cd7e9490932e302a5ebfb99e" }, "downloads": -1, "filename": "segmentation_models_pytorch-0.0.3-py3-none-any.whl", "has_sig": false, "md5_digest": "1ac87e28f00df7ee8d538cc64e8527a1", "packagetype": "bdist_wheel", "python_version": "py3", "requires_python": ">=3.0.0", "size": 27265, "upload_time": "2019-09-28T18:54:19", "url": "https://files.pythonhosted.org/packages/20/c6/67e9d555d41094988aaaf033b1d7e732a326a2ef41a15b81211b56e464ce/segmentation_models_pytorch-0.0.3-py3-none-any.whl" }, { "comment_text": "", "digests": { "md5": "e8a2e9b6a6c74098dab56cdd4aae96d8", "sha256": "3f5d95d6adc595814797d3e531ff8dc6f63c4f35d5bb6886fb7569533f8d538a" }, "downloads": -1, "filename": "segmentation_models_pytorch-0.0.3.tar.gz", "has_sig": false, "md5_digest": "e8a2e9b6a6c74098dab56cdd4aae96d8", "packagetype": "sdist", "python_version": "source", "requires_python": ">=3.0.0", "size": 16649, "upload_time": "2019-09-28T18:54:21", "url": "https://files.pythonhosted.org/packages/43/e7/294488cbb0696e215f9c40ef31b82603915c7618cb956bb5e3402325e846/segmentation_models_pytorch-0.0.3.tar.gz" } ] }, "urls": [ { "comment_text": "", "digests": { "md5": "1ac87e28f00df7ee8d538cc64e8527a1", "sha256": "2c4c4f4d843c438813193eaaae6eb7fad1057cf2cd7e9490932e302a5ebfb99e" }, "downloads": -1, "filename": "segmentation_models_pytorch-0.0.3-py3-none-any.whl", "has_sig": false, "md5_digest": "1ac87e28f00df7ee8d538cc64e8527a1", "packagetype": "bdist_wheel", "python_version": "py3", "requires_python": ">=3.0.0", "size": 27265, "upload_time": "2019-09-28T18:54:19", "url": 
"https://files.pythonhosted.org/packages/20/c6/67e9d555d41094988aaaf033b1d7e732a326a2ef41a15b81211b56e464ce/segmentation_models_pytorch-0.0.3-py3-none-any.whl" }, { "comment_text": "", "digests": { "md5": "e8a2e9b6a6c74098dab56cdd4aae96d8", "sha256": "3f5d95d6adc595814797d3e531ff8dc6f63c4f35d5bb6886fb7569533f8d538a" }, "downloads": -1, "filename": "segmentation_models_pytorch-0.0.3.tar.gz", "has_sig": false, "md5_digest": "e8a2e9b6a6c74098dab56cdd4aae96d8", "packagetype": "sdist", "python_version": "source", "requires_python": ">=3.0.0", "size": 16649, "upload_time": "2019-09-28T18:54:21", "url": "https://files.pythonhosted.org/packages/43/e7/294488cbb0696e215f9c40ef31b82603915c7618cb956bb5e3402325e846/segmentation_models_pytorch-0.0.3.tar.gz" } ] }