{ "info": { "author": "X.Yang", "author_email": "pistonyang@gmail.com", "bugtrack_url": null, "classifiers": [ "Programming Language :: Python :: 3" ], "description": "# Pytorch-Toolbox\n[![CircleCI](https://circleci.com/gh/deeplearningforfun/torch-toolbox/tree/master.svg?style=svg)](https://circleci.com/gh/deeplearningforfun/torch-toolbox/tree/master)\n\nThis is toolbox project for Pytorch. Aiming to make you write Pytorch code more easier, readable and concise.\n\nYou could also regard this as a auxiliary tool for Pytorch. It will contain what you use most frequently tools.\n\n\n## Installing\nA easy way to install this is by using pip:\n```shell\npip install torchtoolbox\n```\nIf you want to install the nightly version(recommend for now):\n```shell\npip install -U git+https://github.com/deeplearningforfun/torch-toolbox.git@master\n```\n\n## Usage\nToolbox have two mainly parts:\n1. Additional tools to make you use Pytorch easier.\n2. Some fashion work which don't exist in Pytorch core.\n\n### Tools\n#### 1. Show your model parameters and FLOPs.\n```python\nimport torch\nfrom torchtoolbox.tools import summary\nfrom torchvision.models.mobilenet import mobilenet_v2\nmodel = mobilenet_v2()\nsummary(model, torch.rand((1, 3, 224, 224)))\n``` \nHere are some short outputs.\n```\n Layer (type) Output Shape Params FLOPs(M+A) #\n================================================================================\n Conv2d-1 [1, 64, 112, 112] 9408 235225088\n BatchNorm2d-2 [1, 64, 112, 112] 256 1605632\n ReLU-3 [1, 64, 112, 112] 0 0\n MaxPool2d-4 [1, 64, 56, 56] 0 0\n ... ... ... 
...\n Linear-158 [1, 1000] 1281000 2560000\n MobileNetV2-159 [1, 1000] 0 0\n================================================================================\n Total parameters: 3,538,984 3.5M\n Trainable parameters: 3,504,872\nNon-trainable parameters: 34,112\nTotal flops(M) : 305,252,872 305.3M\nTotal flops(M+A): 610,505,744 610.5M\n--------------------------------------------------------------------------------\nParameters size (MB): 13.50\n```\n\n#### 2. Metric collection\nWhen training a model we usually need to calculate some metrics, such as accuracy (top-1 acc), loss, etc.\nThe toolbox currently supports the following:\n1. Accuracy: top-1 acc.\n2. TopKAccuracy: top-K acc.\n3. NumericalCost: a numerical metric collector which supports the `mean`, `max`, and `min` calculation types.\n\n```python\nfrom torchtoolbox import metric\n\n# define the metrics first\ntop1_acc = metric.Accuracy(name='Top1 Accuracy')\ntop5_acc = metric.TopKAccuracy(top=5, name='Top5 Accuracy')\nloss_record = metric.NumericalCost(name='Loss')\n\n# reset them before using\ntop1_acc.reset()\ntop5_acc.reset()\nloss_record.reset()\n\n...\nmodel.eval()\nfor data, labels in val_data:\n data = data.to(device, non_blocking=True)\n labels = labels.to(device, non_blocking=True)\n\n outputs = model(data)\n losses = Loss(outputs, labels)\n # update/record the metrics\n top1_acc.step(outputs, labels)\n top5_acc.step(outputs, labels)\n loss_record.step(losses)\ntest_msg = 'Test Epoch {}: {}:{:.5}, {}:{:.5}, {}:{:.5}\\n'.format(\n epoch, top1_acc.name, top1_acc.get(), top5_acc.name, top5_acc.get(),\n loss_record.name, loss_record.get())\n\nprint(test_msg)\n```\nThen you may get output like this:\n```\nTest Epoch 101: Top1 Accuracy:0.7332, Top5 Accuracy:0.91514, Loss:1.0605\n```\n\n#### 3. Model Initializer\nThe toolbox currently supports `XavierInitializer` and `KaimingInitializer`.\n```python\nfrom torchtoolbox.nn.init import KaimingInitializer\n\nmodel = XXX\nKaimingInitializer(model)\n```\n\n### Fashion work\n#### 1. 
LabelSmoothingLoss\n```python\nfrom torchtoolbox.nn import LabelSmoothingLoss\n# The number of classes for your task must be defined.\nclasses = 10\n# Loss\nLoss = LabelSmoothingLoss(classes, smoothing=0.1)\n\n...\nfor i, (data, labels) in enumerate(train_data):\n data = data.to(device, non_blocking=True)\n labels = labels.to(device, non_blocking=True)\n\n optimizer.zero_grad()\n outputs = model(data)\n # just use it as usual.\n loss = Loss(outputs, labels)\n loss.backward()\n optimizer.step()\n```\n\n#### 2. CosineWarmupLr\nA cosine lr scheduler with warm-up epochs. It helps improve accuracy for classification models.\n```python\nfrom torchtoolbox.optimizer import CosineWarmupLr\n\noptimizer = optim.SGD(...)\n# define the scheduler\n# `batches_pre_epoch` means how many batches (model update steps) there are in one epoch.\n# `warmup_epochs` means how many epochs are used to increase the lr to `base_lr`.\n# You can find more details in the source file.\nlr_scheduler = CosineWarmupLr(optimizer, batches_pre_epoch, epochs,\n base_lr=lr, warmup_epochs=warmup_epochs)\n...\nfor i, (data, labels) in enumerate(train_data):\n ...\n optimizer.step()\n # remember to step/update the scheduler here.\n lr_scheduler.step()\n ...\n```\n\n#### 3. SwitchNorm2d/3d\n```python\nfrom torchtoolbox.nn import SwitchNorm2d, SwitchNorm3d\nnorm = SwitchNorm2d(64)  # e.g., as a drop-in replacement for nn.BatchNorm2d(64)\n```\nJust use them like BatchNorm2d/3d.\nFor more details please refer to the original paper \n[Differentiable Learning-to-Normalize via Switchable Normalization](https://arxiv.org/pdf/1806.10779.pdf) \n[Open source](https://github.com/switchablenorms/Switchable-Normalization)\n\n\n#### 4. Swish activation\n```python\nfrom torchtoolbox.nn import Swish\nact = Swish()  # e.g., as a drop-in replacement for nn.ReLU()\n```\nJust use it like ReLU.\nFor more details please refer to the original paper \n[SEARCHING FOR ACTIVATION FUNCTIONS](https://arxiv.org/pdf/1710.05941.pdf)\n\n#### 5. Lookahead optimizer\nA wrapper optimizer that often performs better than Adam. 
\n[Lookahead Optimizer: k steps forward, 1 step back](https://arxiv.org/abs/1907.08610)\n```python\nfrom torchtoolbox.optimizer import Lookahead\nfrom torch import optim\n\noptimizer = optim.Adam(...)\noptimizer = Lookahead(optimizer)\n```\n\n#### 6. Mixup training\nA mixup method for training classification models.\n[mixup: BEYOND EMPIRICAL RISK MINIMIZATION](https://arxiv.org/pdf/1710.09412.pdf)\n```python\nfrom torchtoolbox.tools import mixup_data, mixup_criterion\n\n# set the beta distribution parameter; 0.2 is recommended.\nalpha = 0.2\nfor i, (data, labels) in enumerate(train_data):\n data = data.to(device, non_blocking=True)\n labels = labels.to(device, non_blocking=True)\n\n data, labels_a, labels_b, lam = mixup_data(data, labels, alpha)\n optimizer.zero_grad()\n outputs = model(data)\n loss = mixup_criterion(Loss, outputs, labels_a, labels_b, lam)\n\n loss.backward()\n optimizer.step()\n```\n\n#### 7. Cutout\nAn image transform method.\n[Improved Regularization of Convolutional Neural Networks with Cutout](https://arxiv.org/pdf/1708.04552.pdf)\n```python\nfrom torchvision import transforms\nfrom torchtoolbox.transform import Cutout\n\n# `normalize` is a transforms.Normalize instance defined elsewhere.\n_train_transform = transforms.Compose([\n transforms.RandomResizedCrop(224),\n Cutout(),\n transforms.RandomHorizontalFlip(),\n transforms.ColorJitter(0.4, 0.4, 0.4),\n transforms.ToTensor(),\n normalize,\n])\n```\n\n#### 8. No decay bias\nIf you train a model with a big batch size, e.g. 
64k, you may need this:\n[Highly Scalable Deep Learning Training System with Mixed-Precision: Training ImageNet in Four Minutes](https://arxiv.org/pdf/1807.11205.pdf)\n\n```python\nfrom torchtoolbox.tools import split_weights\nfrom torch import optim\n\nmodel = XXX\nparameters = split_weights(model)\noptimizer = optim.SGD(parameters, ...)\n```", "description_content_type": "text/markdown", "docs_url": null, "download_url": "", "downloads": { "last_day": -1, "last_month": -1, "last_week": -1 }, "home_page": "https://github.com/deeplearningforfun/torch-toolbox", "keywords": "", "license": "BSD 3-Clause", "maintainer": "", "maintainer_email": "", "name": "torchtoolbox", "package_url": "https://pypi.org/project/torchtoolbox/", "platform": "", "project_url": "https://pypi.org/project/torchtoolbox/", "project_urls": { "Homepage": "https://github.com/deeplearningforfun/torch-toolbox" }, "release_url": "https://pypi.org/project/torchtoolbox/0.0.4/", "requires_dist": null, "requires_python": "", "summary": "ToolBox to make using Pytorch much easier.", "version": "0.0.4" }, "last_serial": 5741778, "releases": { "0.0.1": [ { "comment_text": "", "digests": { "md5": "e954b879c0741e4885df0afc30229130", "sha256": "56fc2a0d53ad9d4255a346a8254cca45cd11ab6387d5781cddcc231013f45f1b" }, "downloads": -1, "filename": "torchtoolbox-0.0.1.tar.gz", "has_sig": false, "md5_digest": "e954b879c0741e4885df0afc30229130", "packagetype": "sdist", "python_version": "source", "requires_python": null, "size": 5678, "upload_time": "2019-07-01T07:14:03", "url": "https://files.pythonhosted.org/packages/22/51/5ffc4f617475e564e64e3b777f34d90633aac98f5812b78a71fb2e6d91b2/torchtoolbox-0.0.1.tar.gz" } ], "0.0.2": [ { "comment_text": "", "digests": { "md5": "696bb98428883d3ff5e1f4cc13f674ad", "sha256": "a3410580bb6ffd0a0e3c94b09f711558f32229236579a8004920d8e4c47c4d70" }, "downloads": -1, "filename": "torchtoolbox-0.0.2.tar.gz", "has_sig": false, "md5_digest": "696bb98428883d3ff5e1f4cc13f674ad", 
"packagetype": "sdist", "python_version": "source", "requires_python": null, "size": 7651, "upload_time": "2019-07-09T09:21:59", "url": "https://files.pythonhosted.org/packages/1b/2a/f8186129d477f06b1eb7ea3736e33baaedc5ea2ff13047997815f885894b/torchtoolbox-0.0.2.tar.gz" } ], "0.0.4": [ { "comment_text": "", "digests": { "md5": "32ab328ca360928e3b7aed29f4bb9c13", "sha256": "3e9654886c2940e40a6536981b9380711b591650a27c7542c7eb81ddf754d04a" }, "downloads": -1, "filename": "torchtoolbox-0.0.4.tar.gz", "has_sig": false, "md5_digest": "32ab328ca360928e3b7aed29f4bb9c13", "packagetype": "sdist", "python_version": "source", "requires_python": null, "size": 19682, "upload_time": "2019-08-28T10:20:48", "url": "https://files.pythonhosted.org/packages/14/40/9431a5c189ee16a0c1e442ae0b7ff659387da36058e2cdee669869957376/torchtoolbox-0.0.4.tar.gz" } ] }, "urls": [ { "comment_text": "", "digests": { "md5": "32ab328ca360928e3b7aed29f4bb9c13", "sha256": "3e9654886c2940e40a6536981b9380711b591650a27c7542c7eb81ddf754d04a" }, "downloads": -1, "filename": "torchtoolbox-0.0.4.tar.gz", "has_sig": false, "md5_digest": "32ab328ca360928e3b7aed29f4bb9c13", "packagetype": "sdist", "python_version": "source", "requires_python": null, "size": 19682, "upload_time": "2019-08-28T10:20:48", "url": "https://files.pythonhosted.org/packages/14/40/9431a5c189ee16a0c1e442ae0b7ff659387da36058e2cdee669869957376/torchtoolbox-0.0.4.tar.gz" } ] }