{ "info": { "author": "Congcong Li", "author_email": "luffy.lcc@gmail.com", "bugtrack_url": null, "classifiers": [ "License :: OSI Approved :: MIT License", "Operating System :: OS Independent", "Programming Language :: Python :: 3", "Topic :: Scientific/Engineering :: Artificial Intelligence" ], "description": "# High quality, fast, modular reference implementation of SSD in PyTorch 1.0\n\n\nThis repository implements [SSD (Single Shot MultiBox Detector)](https://arxiv.org/abs/1512.02325). The implementation is heavily influenced by the projects [ssd.pytorch](https://github.com/amdegroot/ssd.pytorch), [pytorch-ssd](https://github.com/qfgaohao/pytorch-ssd) and [maskrcnn-benchmark](https://github.com/facebookresearch/maskrcnn-benchmark). This repository aims to be the code base for research based on SSD.\n\n
Example SSD output (ssd300_voc0712).
\n\n| Losses | Learning rate | Metrics |\n| :-----------: |:-------------:| :------:|\n| ![losses](figures/losses.png) | ![lr](figures/lr.png) | ![metric](figures/metrics.png) |\n\n## Highlights\n\n- **PyTorch 1.0**: Supports PyTorch 1.0 or higher.\n- **Multi-GPU training and inference**: We use `DistributedDataParallel`; you can train or test with an arbitrary number of GPUs, and the training schedule changes accordingly.\n- **Modular**: Add your own modules without pain. We abstract `backbone`, `Detector`, `BoxHead`, `BoxPredictor`, etc., so you can replace any component with your own code without changing the code base. For example, to add [EfficientNet](https://github.com/lukemelas/EfficientNet-PyTorch) as a backbone, just add `efficient_net.py` (ALREADY ADDED), register it, and specify it in the config file. It's done!\n- **CPU support for inference**: runs on the CPU at inference time.\n- **Smooth and enjoyable training procedure**: we save the state of the model, optimizer, scheduler, and training iteration, so you can stop training and resume exactly from the saved point without changing your training `CMD`.\n- **Batched inference**: can perform inference using multiple images per batch per GPU.\n- **Evaluating during training**: evaluate your model every `eval_step` to check whether performance is improving.\n- **Metrics Visualization**: visualize detailed metrics in TensorBoard, such as AP, APl, APm and APs for the COCO dataset, or mAP and the per-category AP of the 20 classes for the VOC dataset.\n- **Auto download**: load pre-trained weights from a URL and cache them.\n\n## Installation\n### Requirements\n\n1. Python3\n1. PyTorch 1.0 or higher\n1. yacs\n1. [Vizer](https://github.com/lufficc/Vizer)\n1. GCC >= 4.9\n1. OpenCV\n\n\n### Step-by-step installation\n\n```bash\ngit clone https://github.com/lufficc/SSD.git\ncd SSD\n# Required packages\npip install torch torchvision yacs tqdm opencv-python vizer\n\n# Optional packages\n# If you want to visualize the loss curve. Enabled by default; disable with --use_tensorboard 0 when training.\npip install tensorboardX\n\n# If you train on the COCO dataset, you must install cocoapi.\ncd ~/github\ngit clone https://github.com/cocodataset/cocoapi.git\ncd cocoapi/PythonAPI\npython setup.py build_ext install\n```\n\n### Build\n\nBuilding the NMS extension is not strictly necessary, as we provide a pure-Python NMS, but it is much slower than the built version.\n```bash\n# For faster inference you need to build NMS; this is needed when evaluating. Only training does not need it.\ncd ext\npython build.py build_ext develop\n```\n\n
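For reference, the sketch below shows the idea behind the pure-Python NMS that the compiled extension replaces: greedily keep the highest-scoring box and discard the remaining boxes whose IoU with it exceeds a threshold. It is an illustrative implementation, not the repository's own code; in newer torchvision versions, `torchvision.ops.nms` offers a fast built-in equivalent.\n\n```python\n# Illustrative pure-PyTorch NMS; not the repository's own implementation.\n# boxes: (N, 4) tensor in (x1, y1, x2, y2) format; scores: (N,) tensor.\nimport torch\n\ndef nms(boxes, scores, iou_threshold=0.5):\n    x1, y1, x2, y2 = boxes.unbind(dim=1)\n    areas = (x2 - x1) * (y2 - y1)\n    order = scores.argsort(descending=True)  # highest score first\n    keep = []\n    while order.numel() > 0:\n        i = order[0]\n        keep.append(i.item())\n        rest = order[1:]\n        if rest.numel() == 0:\n            break\n        # Intersection of the current best box with every remaining box.\n        xx1 = torch.max(x1[i], x1[rest])\n        yy1 = torch.max(y1[i], y1[rest])\n        xx2 = torch.min(x2[i], x2[rest])\n        yy2 = torch.min(y2[i], y2[rest])\n        inter = (xx2 - xx1).clamp(min=0) * (yy2 - yy1).clamp(min=0)\n        iou = inter / (areas[i] + areas[rest] - inter)\n        # Drop boxes whose overlap with the current box exceeds the threshold.\n        order = rest[iou <= iou_threshold]\n    return torch.tensor(keep, dtype=torch.long)\n```\n\n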
## Train\n\n### Setting Up Datasets\n#### Pascal VOC\n\nFor the Pascal VOC dataset, make the folder structure look like this:\n```\nVOC_ROOT\n|__ VOC2007\n |_ JPEGImages\n |_ Annotations\n |_ ImageSets\n |_ SegmentationClass\n|__ VOC2012\n |_ JPEGImages\n |_ Annotations\n |_ ImageSets\n |_ SegmentationClass\n|__ ...\n```\nBy default, `VOC_ROOT` is the `datasets` folder in the current project; you can create symlinks to `datasets` or `export VOC_ROOT=\"/path/to/voc_root\"`.\n\n#### COCO\n\nFor the COCO dataset, make the folder structure look like this:\n```\nCOCO_ROOT\n|__ annotations\n |_ instances_valminusminival2014.json\n |_ instances_minival2014.json\n |_ instances_train2014.json\n |_ instances_val2014.json\n |_ ...\n|__ train2014\n |_ .jpg\n |_ ...\n |_ .jpg\n|__ val2014\n |_ .jpg\n |_ ...\n |_ .jpg\n|__ ...\n```\nBy default, `COCO_ROOT` is the `datasets` folder in the current project; you can create symlinks to `datasets` or `export COCO_ROOT=\"/path/to/coco_root\"`.\n\n### Single GPU training\n\n```bash\n# for example, train SSD300:\npython train.py --config-file configs/vgg_ssd300_voc0712.yaml\n```\n\n### Multi-GPU training\n\n```bash\n# for example, train SSD300 with 4 GPUs:\nexport NGPUS=4\npython -m torch.distributed.launch --nproc_per_node=$NGPUS train.py --config-file configs/vgg_ssd300_voc0712.yaml SOLVER.WARMUP_FACTOR 0.03333 SOLVER.WARMUP_ITERS 1000\n```\nThe configuration files that I provide assume that we are running on a single GPU. When changing the number of GPUs, the hyper-parameters (lr, max_iter, ...) also change according to this paper: [Accurate, Large Minibatch SGD: Training ImageNet in 1 Hour](https://arxiv.org/abs/1706.02677).\n\n## Evaluate\n\n### Single GPU evaluating\n\n```bash\n# for example, evaluate SSD300:\npython test.py --config-file configs/vgg_ssd300_voc0712.yaml\n```\n\n### Multi-GPU evaluating\n\n```bash\n# for example, evaluate SSD300 with 4 GPUs:\nexport NGPUS=4\npython -m torch.distributed.launch --nproc_per_node=$NGPUS test.py --config-file configs/vgg_ssd300_voc0712.yaml\n```\n\n## Demo\n\nPredicting images in a folder is simple:\n```bash\npython demo.py --config-file configs/vgg_ssd300_voc0712.yaml --images_dir demo\n```\nThen the predicted images with boxes, scores and label names will be saved to the `demo/result` folder.\n\n## MODEL ZOO\n### Original Paper:\n\n| | VOC2007 test | COCO test-dev2015 |\n| :-----: | :----------: | :----------: |\n| SSD300* | 77.2 | 25.1 |\n| SSD512* | 79.8 | 28.8 |\n\n### COCO:\n\n| Backbone | Input Size | box AP | Model Size | Download |\n| :------------: | :----------:| :--------------------------: | :--------: | :-------: |\n| VGG16 | 300 | 25.2 | 262MB | [model](https://github.com/lufficc/SSD/releases/download/1.2/vgg_ssd300_coco_trainval35k.pth) |\n| VGG16 | 512 | xx.x | xxx.xMB | |\n| Mobilenet V2 | 320 | xx.x | xxx.xMB | |\n\n### PASCAL VOC:\n\n| Backbone | Input Size | mAP | Model Size | Download |\n| :--------------: | :----------:| :--------------------------: | :--------: | :-------: |\n| VGG16 | 300 | 77.6 | 201MB | [model](https://github.com/lufficc/SSD/releases/download/1.2/vgg_ssd300_voc0712.pth) |\n| VGG16 | 512 | xx.x | xxx.xMB | |\n| Mobilenet V2 | 320 | 68.8 | 25.5MB | [model](https://github.com/lufficc/SSD/releases/download/1.2/mobilenet_v2_ssd320_voc0712.pth) |\n| EfficientNet-B3 | 300 | 73.9 | 97.1MB | [model](https://github.com/lufficc/SSD/releases/download/1.2/efficient_net_b3_ssd300_voc0712.pth) |\n\n
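As a small illustration of the auto-download behaviour mentioned in the highlights, the following minimal sketch fetches one of the released checkpoints from the Model Zoo above and caches it locally with `torch.utils.model_zoo`. The internal layout of the checkpoint file is not documented here, so inspect its keys before wiring it into a model.\n\n```python\n# Minimal sketch: download and cache a released checkpoint from the Model Zoo.\n# torch.utils.model_zoo caches files under the local torch cache directory.\nimport torch.utils.model_zoo as model_zoo\n\nurl = 'https://github.com/lufficc/SSD/releases/download/1.2/vgg_ssd300_voc0712.pth'\ncheckpoint = model_zoo.load_url(url, map_location='cpu')  # downloaded once, reused afterwards\nprint(type(checkpoint))  # inspect the structure before loading it into a model\n```\n\n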
## Troubleshooting\nIf you have issues running or compiling this code, we have compiled a list of common issues in [TROUBLESHOOTING.md](TROUBLESHOOTING.md). If your issue is not present there, please feel free to open a new issue.", "description_content_type": "text/markdown", "docs_url": null, "download_url": "", "downloads": { "last_day": -1, "last_month": -1, "last_week": -1 }, "home_page": "https://github.com/lufficc/SSD", "keywords": "", "license": "MIT", "maintainer": "", "maintainer_email": "", "name": "torch-ssd", "package_url": "https://pypi.org/project/torch-ssd/", "platform": "", "project_url": "https://pypi.org/project/torch-ssd/", "project_urls": { "Homepage": "https://github.com/lufficc/SSD" }, "release_url": "https://pypi.org/project/torch-ssd/1.2.0/", "requires_dist": null, "requires_python": ">=3.6", "summary": "High quality, fast, modular reference implementation of SSD in PyTorch", "version": "1.2.0", "yanked": false, "yanked_reason": null }, "last_serial": 6043876, "releases": { "1.2.0": [ { "comment_text": "", "digests": { "md5": "eda2d99b6d1546aa25bdb54944a9744e", "sha256": "05b26c5531ef9ba8685f857e15badd0cad2e1f30b8df809e3fbe4a8ba42d1b43" }, "downloads": -1, "filename": "torch-ssd-1.2.0.linux-x86_64.tar.gz", "has_sig": false, "md5_digest": "eda2d99b6d1546aa25bdb54944a9744e", "packagetype": "sdist", "python_version": "source", "requires_python": ">=3.6", "size": 89756, "upload_time": "2019-10-28T21:03:36", "upload_time_iso_8601": "2019-10-28T21:03:36.700568Z", "url": "https://files.pythonhosted.org/packages/15/e3/7f49d959ed3b241a5f822eed1ed83debbc30ab0c27cd788c467e7cc4d2e3/torch-ssd-1.2.0.linux-x86_64.tar.gz", "yanked": false, "yanked_reason": null } ] }, "urls": [ { "comment_text": "", "digests": { "md5": "eda2d99b6d1546aa25bdb54944a9744e", "sha256": "05b26c5531ef9ba8685f857e15badd0cad2e1f30b8df809e3fbe4a8ba42d1b43" }, "downloads": -1, "filename": "torch-ssd-1.2.0.linux-x86_64.tar.gz", "has_sig": false, "md5_digest": "eda2d99b6d1546aa25bdb54944a9744e", "packagetype": "sdist", "python_version": "source", "requires_python": ">=3.6", "size": 89756, "upload_time": "2019-10-28T21:03:36", "upload_time_iso_8601": "2019-10-28T21:03:36.700568Z", "url": "https://files.pythonhosted.org/packages/15/e3/7f49d959ed3b241a5f822eed1ed83debbc30ab0c27cd788c467e7cc4d2e3/torch-ssd-1.2.0.linux-x86_64.tar.gz", "yanked": false, "yanked_reason": null } ], "vulnerabilities": [] }