{ "info": { "author": "Tae Hwan Jung(@graykode)", "author_email": "nlkey2022@gmail.com", "bugtrack_url": null, "classifiers": [], "description": "## modelsummary (PyTorch Model summary)\n\n> Keras style model.summary() in PyTorch, [torchsummary](https://github.com/sksq96/pytorch-summary)\n\nThis is a PyTorch library for model visualization, an improved tool building on [torchsummary](https://github.com/sksq96/pytorch-summary) and [torchsummaryX](https://github.com/nmhkahn/torchsummaryX). It was inspired by [torchsummary](https://github.com/sksq96/pytorch-summary); the code it draws on is listed in the Reference section. **It works with any number of input tensors!**\n\n```python\nimport torch\nimport torch.nn as nn\nimport torch.nn.functional as F\n\nfrom modelsummary import summary\n\nclass Net(nn.Module):\n    def __init__(self):\n        super(Net, self).__init__()\n        self.conv1 = nn.Conv2d(1, 10, kernel_size=5)\n        self.conv2 = nn.Conv2d(10, 20, kernel_size=5)\n        self.conv2_drop = nn.Dropout2d()\n        self.fc1 = nn.Linear(320, 50)\n        self.fc2 = nn.Linear(50, 10)\n\n    def forward(self, x):\n        x = F.relu(F.max_pool2d(self.conv1(x), 2))\n        x = F.relu(F.max_pool2d(self.conv2_drop(self.conv2(x)), 2))\n        x = x.view(-1, 320)\n        x = F.relu(self.fc1(x))\n        x = F.dropout(x, training=self.training)\n        x = self.fc2(x)\n        return F.log_softmax(x, dim=1)\n\n# show input shape\nsummary(Net(), torch.zeros((1, 1, 28, 28)), show_input=True)\n\n# show output shape\nsummary(Net(), torch.zeros((1, 1, 28, 28)), show_input=False)\n```\n\n```\n-----------------------------------------------------------------------\n        Layer (type)               Input Shape         Param #\n=======================================================================\n            Conv2d-1           [-1, 1, 28, 28]             260\n            Conv2d-2          [-1, 10, 12, 12]           5,020\n         Dropout2d-3            [-1, 20, 8, 8]               0\n            Linear-4                 [-1, 320]          16,050\n            Linear-5                  [-1, 50]             510\n=======================================================================\nTotal params: 21,840\nTrainable params: 21,840\nNon-trainable params: 
0\n-----------------------------------------------------------------------\n\n-----------------------------------------------------------------------\n        Layer (type)              Output Shape         Param #\n=======================================================================\n            Conv2d-1          [-1, 10, 24, 24]             260\n            Conv2d-2            [-1, 20, 8, 8]           5,020\n         Dropout2d-3            [-1, 20, 8, 8]               0\n            Linear-4                  [-1, 50]          16,050\n            Linear-5                  [-1, 10]             510\n=======================================================================\nTotal params: 21,840\nTrainable params: 21,840\nNon-trainable params: 0\n-----------------------------------------------------------------------\n```\n\n\n\n## Quick Start\n\nInstall the package with pip and import the `summary` function:\n\n`pip install modelsummary` and `from modelsummary import summary`\n\nYou can use the library as shown below; for more detail, please see the example code.\n\n```\nfrom modelsummary import summary\n\nmodel = YourModel()\n\n# show input shape\nsummary(model, input_tensor, show_input=True)\n\n# show output shape\nsummary(model, input_tensor, show_input=False)\n\n# show hierarchical structure\nsummary(model, input_tensor, show_hierarchical=True)\n```\n\n\n\nThe `summary` function has the following signature: `def summary(model, *inputs, batch_size=-1, show_input=True, show_hierarchical=False)`\n\n#### Options\n\n- model : your model instance\n- *inputs : your input tensors; the asterisk means any number of inputs is accepted\n- batch_size : `-1` is equivalent to an unspecified (`None`) batch dimension\n- show_input : show each layer's input shape; **if this parameter is False, the output shape is shown instead**, **default : True**\n- show_hierarchical : also print a hierarchical summary of the model, **default : False**\n\n\n\n## Result\n\nThe following example runs the Transformer model from [Attention Is All You Need (2017)](https://arxiv.org/abs/1706.03762).\n\n1) showing input shape\n\n```\n# show input shape\nsummary(model, enc_inputs, dec_inputs, show_input=True)\n\n-----------------------------------------------------------------------\n Layer (type) Input 
Shape Param #\n=======================================================================\n Encoder-1 [-1, 5] 0\n Embedding-2 [-1, 5] 3,072\n Embedding-3 [-1, 5] 3,072\n EncoderLayer-4 [-1, 5, 512] 0\n MultiHeadAttention-5 [-1, 5, 512] 0\n Linear-6 [-1, 5, 512] 262,656\n Linear-7 [-1, 5, 512] 262,656\n Linear-8 [-1, 5, 512] 262,656\n PoswiseFeedForwardNet-9 [-1, 5, 512] 0\n Conv1d-10 [-1, 512, 5] 1,050,624\n Conv1d-11 [-1, 2048, 5] 1,049,088\n EncoderLayer-12 [-1, 5, 512] 0\n MultiHeadAttention-13 [-1, 5, 512] 0\n Linear-14 [-1, 5, 512] 262,656\n Linear-15 [-1, 5, 512] 262,656\n Linear-16 [-1, 5, 512] 262,656\n PoswiseFeedForwardNet-17 [-1, 5, 512] 0\n Conv1d-18 [-1, 512, 5] 1,050,624\n Conv1d-19 [-1, 2048, 5] 1,049,088\n EncoderLayer-20 [-1, 5, 512] 0\n MultiHeadAttention-21 [-1, 5, 512] 0\n Linear-22 [-1, 5, 512] 262,656\n Linear-23 [-1, 5, 512] 262,656\n Linear-24 [-1, 5, 512] 262,656\n PoswiseFeedForwardNet-25 [-1, 5, 512] 0\n Conv1d-26 [-1, 512, 5] 1,050,624\n Conv1d-27 [-1, 2048, 5] 1,049,088\n EncoderLayer-28 [-1, 5, 512] 0\n MultiHeadAttention-29 [-1, 5, 512] 0\n Linear-30 [-1, 5, 512] 262,656\n Linear-31 [-1, 5, 512] 262,656\n Linear-32 [-1, 5, 512] 262,656\n PoswiseFeedForwardNet-33 [-1, 5, 512] 0\n Conv1d-34 [-1, 512, 5] 1,050,624\n Conv1d-35 [-1, 2048, 5] 1,049,088\n EncoderLayer-36 [-1, 5, 512] 0\n MultiHeadAttention-37 [-1, 5, 512] 0\n Linear-38 [-1, 5, 512] 262,656\n Linear-39 [-1, 5, 512] 262,656\n Linear-40 [-1, 5, 512] 262,656\n PoswiseFeedForwardNet-41 [-1, 5, 512] 0\n Conv1d-42 [-1, 512, 5] 1,050,624\n Conv1d-43 [-1, 2048, 5] 1,049,088\n EncoderLayer-44 [-1, 5, 512] 0\n MultiHeadAttention-45 [-1, 5, 512] 0\n Linear-46 [-1, 5, 512] 262,656\n Linear-47 [-1, 5, 512] 262,656\n Linear-48 [-1, 5, 512] 262,656\n PoswiseFeedForwardNet-49 [-1, 5, 512] 0\n Conv1d-50 [-1, 512, 5] 1,050,624\n Conv1d-51 [-1, 2048, 5] 1,049,088\n Decoder-52 [-1, 5] 0\n Embedding-53 [-1, 5] 3,584\n Embedding-54 [-1, 5] 3,072\n DecoderLayer-55 [-1, 5, 512] 0\n 
MultiHeadAttention-56 [-1, 5, 512] 0\n Linear-57 [-1, 5, 512] 262,656\n Linear-58 [-1, 5, 512] 262,656\n Linear-59 [-1, 5, 512] 262,656\n MultiHeadAttention-60 [-1, 5, 512] 0\n Linear-61 [-1, 5, 512] 262,656\n Linear-62 [-1, 5, 512] 262,656\n Linear-63 [-1, 5, 512] 262,656\n PoswiseFeedForwardNet-64 [-1, 5, 512] 0\n Conv1d-65 [-1, 512, 5] 1,050,624\n Conv1d-66 [-1, 2048, 5] 1,049,088\n DecoderLayer-67 [-1, 5, 512] 0\n MultiHeadAttention-68 [-1, 5, 512] 0\n Linear-69 [-1, 5, 512] 262,656\n Linear-70 [-1, 5, 512] 262,656\n Linear-71 [-1, 5, 512] 262,656\n MultiHeadAttention-72 [-1, 5, 512] 0\n Linear-73 [-1, 5, 512] 262,656\n Linear-74 [-1, 5, 512] 262,656\n Linear-75 [-1, 5, 512] 262,656\n PoswiseFeedForwardNet-76 [-1, 5, 512] 0\n Conv1d-77 [-1, 512, 5] 1,050,624\n Conv1d-78 [-1, 2048, 5] 1,049,088\n DecoderLayer-79 [-1, 5, 512] 0\n MultiHeadAttention-80 [-1, 5, 512] 0\n Linear-81 [-1, 5, 512] 262,656\n Linear-82 [-1, 5, 512] 262,656\n Linear-83 [-1, 5, 512] 262,656\n MultiHeadAttention-84 [-1, 5, 512] 0\n Linear-85 [-1, 5, 512] 262,656\n Linear-86 [-1, 5, 512] 262,656\n Linear-87 [-1, 5, 512] 262,656\n PoswiseFeedForwardNet-88 [-1, 5, 512] 0\n Conv1d-89 [-1, 512, 5] 1,050,624\n Conv1d-90 [-1, 2048, 5] 1,049,088\n DecoderLayer-91 [-1, 5, 512] 0\n MultiHeadAttention-92 [-1, 5, 512] 0\n Linear-93 [-1, 5, 512] 262,656\n Linear-94 [-1, 5, 512] 262,656\n Linear-95 [-1, 5, 512] 262,656\n MultiHeadAttention-96 [-1, 5, 512] 0\n Linear-97 [-1, 5, 512] 262,656\n Linear-98 [-1, 5, 512] 262,656\n Linear-99 [-1, 5, 512] 262,656\nPoswiseFeedForwardNet-100 [-1, 5, 512] 0\n Conv1d-101 [-1, 512, 5] 1,050,624\n Conv1d-102 [-1, 2048, 5] 1,049,088\n DecoderLayer-103 [-1, 5, 512] 0\n MultiHeadAttention-104 [-1, 5, 512] 0\n Linear-105 [-1, 5, 512] 262,656\n Linear-106 [-1, 5, 512] 262,656\n Linear-107 [-1, 5, 512] 262,656\n MultiHeadAttention-108 [-1, 5, 512] 0\n Linear-109 [-1, 5, 512] 262,656\n Linear-110 [-1, 5, 512] 262,656\n Linear-111 [-1, 5, 512] 262,656\nPoswiseFeedForwardNet-112 
[-1, 5, 512] 0\n Conv1d-113 [-1, 512, 5] 1,050,624\n Conv1d-114 [-1, 2048, 5] 1,049,088\n DecoderLayer-115 [-1, 5, 512] 0\n MultiHeadAttention-116 [-1, 5, 512] 0\n Linear-117 [-1, 5, 512] 262,656\n Linear-118 [-1, 5, 512] 262,656\n Linear-119 [-1, 5, 512] 262,656\n MultiHeadAttention-120 [-1, 5, 512] 0\n Linear-121 [-1, 5, 512] 262,656\n Linear-122 [-1, 5, 512] 262,656\n Linear-123 [-1, 5, 512] 262,656\nPoswiseFeedForwardNet-124 [-1, 5, 512] 0\n Conv1d-125 [-1, 512, 5] 1,050,624\n Conv1d-126 [-1, 2048, 5] 1,049,088\n Linear-127 [-1, 5, 512] 3,584\n=======================================================================\nTotal params: 39,396,352\nTrainable params: 39,390,208\nNon-trainable params: 6,144\n```\n\n2) showing output shape\n\n```\n# show output shape\nsummary(model, enc_inputs, dec_inputs, show_input=False)\n\n-----------------------------------------------------------------------\n Layer (type) Output Shape Param #\n=======================================================================\n Embedding-1 [-1, 5, 512] 3,072\n Embedding-2 [-1, 5, 512] 3,072\n Linear-3 [-1, 5, 512] 262,656\n Linear-4 [-1, 5, 512] 262,656\n Linear-5 [-1, 5, 512] 262,656\n MultiHeadAttention-6 [-1, 8, 5, 5] 0\n Conv1d-7 [-1, 2048, 5] 1,050,624\n Conv1d-8 [-1, 512, 5] 1,049,088\n PoswiseFeedForwardNet-9 [-1, 5, 512] 0\n EncoderLayer-10 [-1, 8, 5, 5] 0\n Linear-11 [-1, 5, 512] 262,656\n Linear-12 [-1, 5, 512] 262,656\n Linear-13 [-1, 5, 512] 262,656\n MultiHeadAttention-14 [-1, 8, 5, 5] 0\n Conv1d-15 [-1, 2048, 5] 1,050,624\n Conv1d-16 [-1, 512, 5] 1,049,088\n PoswiseFeedForwardNet-17 [-1, 5, 512] 0\n EncoderLayer-18 [-1, 8, 5, 5] 0\n Linear-19 [-1, 5, 512] 262,656\n Linear-20 [-1, 5, 512] 262,656\n Linear-21 [-1, 5, 512] 262,656\n MultiHeadAttention-22 [-1, 8, 5, 5] 0\n Conv1d-23 [-1, 2048, 5] 1,050,624\n Conv1d-24 [-1, 512, 5] 1,049,088\n PoswiseFeedForwardNet-25 [-1, 5, 512] 0\n EncoderLayer-26 [-1, 8, 5, 5] 0\n Linear-27 [-1, 5, 512] 262,656\n Linear-28 [-1, 5, 512] 262,656\n 
Linear-29 [-1, 5, 512] 262,656\n MultiHeadAttention-30 [-1, 8, 5, 5] 0\n Conv1d-31 [-1, 2048, 5] 1,050,624\n Conv1d-32 [-1, 512, 5] 1,049,088\n PoswiseFeedForwardNet-33 [-1, 5, 512] 0\n EncoderLayer-34 [-1, 8, 5, 5] 0\n Linear-35 [-1, 5, 512] 262,656\n Linear-36 [-1, 5, 512] 262,656\n Linear-37 [-1, 5, 512] 262,656\n MultiHeadAttention-38 [-1, 8, 5, 5] 0\n Conv1d-39 [-1, 2048, 5] 1,050,624\n Conv1d-40 [-1, 512, 5] 1,049,088\n PoswiseFeedForwardNet-41 [-1, 5, 512] 0\n EncoderLayer-42 [-1, 8, 5, 5] 0\n Linear-43 [-1, 5, 512] 262,656\n Linear-44 [-1, 5, 512] 262,656\n Linear-45 [-1, 5, 512] 262,656\n MultiHeadAttention-46 [-1, 8, 5, 5] 0\n Conv1d-47 [-1, 2048, 5] 1,050,624\n Conv1d-48 [-1, 512, 5] 1,049,088\n PoswiseFeedForwardNet-49 [-1, 5, 512] 0\n EncoderLayer-50 [-1, 8, 5, 5] 0\n Encoder-51 [-1, 8, 5, 5] 0\n Embedding-52 [-1, 5, 512] 3,584\n Embedding-53 [-1, 5, 512] 3,072\n Linear-54 [-1, 5, 512] 262,656\n Linear-55 [-1, 5, 512] 262,656\n Linear-56 [-1, 5, 512] 262,656\n MultiHeadAttention-57 [-1, 8, 5, 5] 0\n Linear-58 [-1, 5, 512] 262,656\n Linear-59 [-1, 5, 512] 262,656\n Linear-60 [-1, 5, 512] 262,656\n MultiHeadAttention-61 [-1, 8, 5, 5] 0\n Conv1d-62 [-1, 2048, 5] 1,050,624\n Conv1d-63 [-1, 512, 5] 1,049,088\n PoswiseFeedForwardNet-64 [-1, 5, 512] 0\n DecoderLayer-65 [-1, 8, 5, 5] 0\n Linear-66 [-1, 5, 512] 262,656\n Linear-67 [-1, 5, 512] 262,656\n Linear-68 [-1, 5, 512] 262,656\n MultiHeadAttention-69 [-1, 8, 5, 5] 0\n Linear-70 [-1, 5, 512] 262,656\n Linear-71 [-1, 5, 512] 262,656\n Linear-72 [-1, 5, 512] 262,656\n MultiHeadAttention-73 [-1, 8, 5, 5] 0\n Conv1d-74 [-1, 2048, 5] 1,050,624\n Conv1d-75 [-1, 512, 5] 1,049,088\n PoswiseFeedForwardNet-76 [-1, 5, 512] 0\n DecoderLayer-77 [-1, 8, 5, 5] 0\n Linear-78 [-1, 5, 512] 262,656\n Linear-79 [-1, 5, 512] 262,656\n Linear-80 [-1, 5, 512] 262,656\n MultiHeadAttention-81 [-1, 8, 5, 5] 0\n Linear-82 [-1, 5, 512] 262,656\n Linear-83 [-1, 5, 512] 262,656\n Linear-84 [-1, 5, 512] 262,656\n MultiHeadAttention-85 
[-1, 8, 5, 5] 0\n Conv1d-86 [-1, 2048, 5] 1,050,624\n Conv1d-87 [-1, 512, 5] 1,049,088\n PoswiseFeedForwardNet-88 [-1, 5, 512] 0\n DecoderLayer-89 [-1, 8, 5, 5] 0\n Linear-90 [-1, 5, 512] 262,656\n Linear-91 [-1, 5, 512] 262,656\n Linear-92 [-1, 5, 512] 262,656\n MultiHeadAttention-93 [-1, 8, 5, 5] 0\n Linear-94 [-1, 5, 512] 262,656\n Linear-95 [-1, 5, 512] 262,656\n Linear-96 [-1, 5, 512] 262,656\n MultiHeadAttention-97 [-1, 8, 5, 5] 0\n Conv1d-98 [-1, 2048, 5] 1,050,624\n Conv1d-99 [-1, 512, 5] 1,049,088\nPoswiseFeedForwardNet-100 [-1, 5, 512] 0\n DecoderLayer-101 [-1, 8, 5, 5] 0\n Linear-102 [-1, 5, 512] 262,656\n Linear-103 [-1, 5, 512] 262,656\n Linear-104 [-1, 5, 512] 262,656\n MultiHeadAttention-105 [-1, 8, 5, 5] 0\n Linear-106 [-1, 5, 512] 262,656\n Linear-107 [-1, 5, 512] 262,656\n Linear-108 [-1, 5, 512] 262,656\n MultiHeadAttention-109 [-1, 8, 5, 5] 0\n Conv1d-110 [-1, 2048, 5] 1,050,624\n Conv1d-111 [-1, 512, 5] 1,049,088\nPoswiseFeedForwardNet-112 [-1, 5, 512] 0\n DecoderLayer-113 [-1, 8, 5, 5] 0\n Linear-114 [-1, 5, 512] 262,656\n Linear-115 [-1, 5, 512] 262,656\n Linear-116 [-1, 5, 512] 262,656\n MultiHeadAttention-117 [-1, 8, 5, 5] 0\n Linear-118 [-1, 5, 512] 262,656\n Linear-119 [-1, 5, 512] 262,656\n Linear-120 [-1, 5, 512] 262,656\n MultiHeadAttention-121 [-1, 8, 5, 5] 0\n Conv1d-122 [-1, 2048, 5] 1,050,624\n Conv1d-123 [-1, 512, 5] 1,049,088\nPoswiseFeedForwardNet-124 [-1, 5, 512] 0\n DecoderLayer-125 [-1, 8, 5, 5] 0\n Decoder-126 [-1, 8, 5, 5] 0\n Linear-127 [-1, 5, 7] 3,584\n=======================================================================\nTotal params: 39,396,352\nTrainable params: 39,390,208\nNon-trainable params: 6,144\n-----------------------------------------------------------------------\n```\n\n3) showing hierarchical summary\n\n```\nTransformer(\n (encoder): Encoder(\n (src_emb): Embedding(6, 512), 3,072 params\n (pos_emb): Embedding(6, 512), 3,072 params\n (layers): ModuleList(\n (0): EncoderLayer(\n (enc_self_attn): 
MultiHeadAttention(\n (W_Q): Linear(in_features=512, out_features=512, bias=True), 262,656 params\n (W_K): Linear(in_features=512, out_features=512, bias=True), 262,656 params\n (W_V): Linear(in_features=512, out_features=512, bias=True), 262,656 params\n ), 787,968 params\n (pos_ffn): PoswiseFeedForwardNet(\n (conv1): Conv1d(512, 2048, kernel_size=(1,), stride=(1,)), 1,050,624 params\n (conv2): Conv1d(2048, 512, kernel_size=(1,), stride=(1,)), 1,049,088 params\n ), 2,099,712 params\n ), 2,887,680 params\n (1): EncoderLayer(\n (enc_self_attn): MultiHeadAttention(\n (W_Q): Linear(in_features=512, out_features=512, bias=True), 262,656 params\n (W_K): Linear(in_features=512, out_features=512, bias=True), 262,656 params\n (W_V): Linear(in_features=512, out_features=512, bias=True), 262,656 params\n ), 787,968 params\n (pos_ffn): PoswiseFeedForwardNet(\n (conv1): Conv1d(512, 2048, kernel_size=(1,), stride=(1,)), 1,050,624 params\n (conv2): Conv1d(2048, 512, kernel_size=(1,), stride=(1,)), 1,049,088 params\n ), 2,099,712 params\n ), 2,887,680 params\n (2): EncoderLayer(\n (enc_self_attn): MultiHeadAttention(\n (W_Q): Linear(in_features=512, out_features=512, bias=True), 262,656 params\n (W_K): Linear(in_features=512, out_features=512, bias=True), 262,656 params\n (W_V): Linear(in_features=512, out_features=512, bias=True), 262,656 params\n ), 787,968 params\n (pos_ffn): PoswiseFeedForwardNet(\n (conv1): Conv1d(512, 2048, kernel_size=(1,), stride=(1,)), 1,050,624 params\n (conv2): Conv1d(2048, 512, kernel_size=(1,), stride=(1,)), 1,049,088 params\n ), 2,099,712 params\n ), 2,887,680 params\n (3): EncoderLayer(\n (enc_self_attn): MultiHeadAttention(\n (W_Q): Linear(in_features=512, out_features=512, bias=True), 262,656 params\n (W_K): Linear(in_features=512, out_features=512, bias=True), 262,656 params\n (W_V): Linear(in_features=512, out_features=512, bias=True), 262,656 params\n ), 787,968 params\n (pos_ffn): PoswiseFeedForwardNet(\n (conv1): Conv1d(512, 2048, 
kernel_size=(1,), stride=(1,)), 1,050,624 params\n (conv2): Conv1d(2048, 512, kernel_size=(1,), stride=(1,)), 1,049,088 params\n ), 2,099,712 params\n ), 2,887,680 params\n (4): EncoderLayer(\n (enc_self_attn): MultiHeadAttention(\n (W_Q): Linear(in_features=512, out_features=512, bias=True), 262,656 params\n (W_K): Linear(in_features=512, out_features=512, bias=True), 262,656 params\n (W_V): Linear(in_features=512, out_features=512, bias=True), 262,656 params\n ), 787,968 params\n (pos_ffn): PoswiseFeedForwardNet(\n (conv1): Conv1d(512, 2048, kernel_size=(1,), stride=(1,)), 1,050,624 params\n (conv2): Conv1d(2048, 512, kernel_size=(1,), stride=(1,)), 1,049,088 params\n ), 2,099,712 params\n ), 2,887,680 params\n (5): EncoderLayer(\n (enc_self_attn): MultiHeadAttention(\n (W_Q): Linear(in_features=512, out_features=512, bias=True), 262,656 params\n (W_K): Linear(in_features=512, out_features=512, bias=True), 262,656 params\n (W_V): Linear(in_features=512, out_features=512, bias=True), 262,656 params\n ), 787,968 params\n (pos_ffn): PoswiseFeedForwardNet(\n (conv1): Conv1d(512, 2048, kernel_size=(1,), stride=(1,)), 1,050,624 params\n (conv2): Conv1d(2048, 512, kernel_size=(1,), stride=(1,)), 1,049,088 params\n ), 2,099,712 params\n ), 2,887,680 params\n ), 17,326,080 params\n ), 17,332,224 params\n (decoder): Decoder(\n (tgt_emb): Embedding(7, 512), 3,584 params\n (pos_emb): Embedding(6, 512), 3,072 params\n (layers): ModuleList(\n (0): DecoderLayer(\n (dec_self_attn): MultiHeadAttention(\n (W_Q): Linear(in_features=512, out_features=512, bias=True), 262,656 params\n (W_K): Linear(in_features=512, out_features=512, bias=True), 262,656 params\n (W_V): Linear(in_features=512, out_features=512, bias=True), 262,656 params\n ), 787,968 params\n (dec_enc_attn): MultiHeadAttention(\n (W_Q): Linear(in_features=512, out_features=512, bias=True), 262,656 params\n (W_K): Linear(in_features=512, out_features=512, bias=True), 262,656 params\n (W_V): Linear(in_features=512, 
out_features=512, bias=True), 262,656 params\n ), 787,968 params\n (pos_ffn): PoswiseFeedForwardNet(\n (conv1): Conv1d(512, 2048, kernel_size=(1,), stride=(1,)), 1,050,624 params\n (conv2): Conv1d(2048, 512, kernel_size=(1,), stride=(1,)), 1,049,088 params\n ), 2,099,712 params\n ), 3,675,648 params\n (1): DecoderLayer(\n (dec_self_attn): MultiHeadAttention(\n (W_Q): Linear(in_features=512, out_features=512, bias=True), 262,656 params\n (W_K): Linear(in_features=512, out_features=512, bias=True), 262,656 params\n (W_V): Linear(in_features=512, out_features=512, bias=True), 262,656 params\n ), 787,968 params\n (dec_enc_attn): MultiHeadAttention(\n (W_Q): Linear(in_features=512, out_features=512, bias=True), 262,656 params\n (W_K): Linear(in_features=512, out_features=512, bias=True), 262,656 params\n (W_V): Linear(in_features=512, out_features=512, bias=True), 262,656 params\n ), 787,968 params\n (pos_ffn): PoswiseFeedForwardNet(\n (conv1): Conv1d(512, 2048, kernel_size=(1,), stride=(1,)), 1,050,624 params\n (conv2): Conv1d(2048, 512, kernel_size=(1,), stride=(1,)), 1,049,088 params\n ), 2,099,712 params\n ), 3,675,648 params\n (2): DecoderLayer(\n (dec_self_attn): MultiHeadAttention(\n (W_Q): Linear(in_features=512, out_features=512, bias=True), 262,656 params\n (W_K): Linear(in_features=512, out_features=512, bias=True), 262,656 params\n (W_V): Linear(in_features=512, out_features=512, bias=True), 262,656 params\n ), 787,968 params\n (dec_enc_attn): MultiHeadAttention(\n (W_Q): Linear(in_features=512, out_features=512, bias=True), 262,656 params\n (W_K): Linear(in_features=512, out_features=512, bias=True), 262,656 params\n (W_V): Linear(in_features=512, out_features=512, bias=True), 262,656 params\n ), 787,968 params\n (pos_ffn): PoswiseFeedForwardNet(\n (conv1): Conv1d(512, 2048, kernel_size=(1,), stride=(1,)), 1,050,624 params\n (conv2): Conv1d(2048, 512, kernel_size=(1,), stride=(1,)), 1,049,088 params\n ), 2,099,712 params\n ), 3,675,648 params\n (3): 
DecoderLayer(\n (dec_self_attn): MultiHeadAttention(\n (W_Q): Linear(in_features=512, out_features=512, bias=True), 262,656 params\n (W_K): Linear(in_features=512, out_features=512, bias=True), 262,656 params\n (W_V): Linear(in_features=512, out_features=512, bias=True), 262,656 params\n ), 787,968 params\n (dec_enc_attn): MultiHeadAttention(\n (W_Q): Linear(in_features=512, out_features=512, bias=True), 262,656 params\n (W_K): Linear(in_features=512, out_features=512, bias=True), 262,656 params\n (W_V): Linear(in_features=512, out_features=512, bias=True), 262,656 params\n ), 787,968 params\n (pos_ffn): PoswiseFeedForwardNet(\n (conv1): Conv1d(512, 2048, kernel_size=(1,), stride=(1,)), 1,050,624 params\n (conv2): Conv1d(2048, 512, kernel_size=(1,), stride=(1,)), 1,049,088 params\n ), 2,099,712 params\n ), 3,675,648 params\n (4): DecoderLayer(\n (dec_self_attn): MultiHeadAttention(\n (W_Q): Linear(in_features=512, out_features=512, bias=True), 262,656 params\n (W_K): Linear(in_features=512, out_features=512, bias=True), 262,656 params\n (W_V): Linear(in_features=512, out_features=512, bias=True), 262,656 params\n ), 787,968 params\n (dec_enc_attn): MultiHeadAttention(\n (W_Q): Linear(in_features=512, out_features=512, bias=True), 262,656 params\n (W_K): Linear(in_features=512, out_features=512, bias=True), 262,656 params\n (W_V): Linear(in_features=512, out_features=512, bias=True), 262,656 params\n ), 787,968 params\n (pos_ffn): PoswiseFeedForwardNet(\n (conv1): Conv1d(512, 2048, kernel_size=(1,), stride=(1,)), 1,050,624 params\n (conv2): Conv1d(2048, 512, kernel_size=(1,), stride=(1,)), 1,049,088 params\n ), 2,099,712 params\n ), 3,675,648 params\n (5): DecoderLayer(\n (dec_self_attn): MultiHeadAttention(\n (W_Q): Linear(in_features=512, out_features=512, bias=True), 262,656 params\n (W_K): Linear(in_features=512, out_features=512, bias=True), 262,656 params\n (W_V): Linear(in_features=512, out_features=512, bias=True), 262,656 params\n ), 787,968 params\n 
(dec_enc_attn): MultiHeadAttention(\n (W_Q): Linear(in_features=512, out_features=512, bias=True), 262,656 params\n (W_K): Linear(in_features=512, out_features=512, bias=True), 262,656 params\n (W_V): Linear(in_features=512, out_features=512, bias=True), 262,656 params\n ), 787,968 params\n (pos_ffn): PoswiseFeedForwardNet(\n (conv1): Conv1d(512, 2048, kernel_size=(1,), stride=(1,)), 1,050,624 params\n (conv2): Conv1d(2048, 512, kernel_size=(1,), stride=(1,)), 1,049,088 params\n ), 2,099,712 params\n ), 3,675,648 params\n ), 22,053,888 params\n ), 22,060,544 params\n (projection): Linear(in_features=512, out_features=7, bias=False), 3,584 params\n), 39,396,352 params\n\n```\n\n\n\n## Reference\n\n```python\ncode_reference = { 'https://github.com/pytorch/pytorch/issues/2001',\n\t\t\t\t'https://gist.github.com/HTLife/b6640af9d6e7d765411f8aa9aa94b837',\n\t\t\t\t'https://github.com/sksq96/pytorch-summary',\n\t\t\t\t'Inspired by https://github.com/sksq96/pytorch-summary'}\n```", "description_content_type": "text/markdown", "docs_url": null, "download_url": "", "downloads": { "last_day": -1, "last_month": -1, "last_week": -1 }, "home_page": "https://github.com/graykode/modelsummary", "keywords": "pytorch model summary model.summary()", "license": "MIT", "maintainer": "", "maintainer_email": "", "name": "modelsummary", "package_url": "https://pypi.org/project/modelsummary/", "platform": "", "project_url": "https://pypi.org/project/modelsummary/", "project_urls": { "Homepage": "https://github.com/graykode/modelsummary" }, "release_url": "https://pypi.org/project/modelsummary/1.1.7/", "requires_dist": null, "requires_python": "", "summary": "All Model summary in PyTorch similar to `model.summary()` in Keras", "version": "1.1.7" }, "last_serial": 5181100, "releases": { "1.0.3": [ { "comment_text": "", "digests": { "md5": "77b38d4a0f604b9fe8beae6037e92652", "sha256": "2af773c14e7f2c2e7279a37b2b1984658d7d74bc7822c75a2e89e3a671d4db76" }, "downloads": -1, "filename": 
"modelsummary-1.0.3-py3-none-any.whl", "has_sig": false, "md5_digest": "77b38d4a0f604b9fe8beae6037e92652", "packagetype": "bdist_wheel", "python_version": "py3", "requires_python": null, "size": 10202, "upload_time": "2019-03-04T14:21:14", "url": "https://files.pythonhosted.org/packages/29/d3/7d944a841fdf711a0dc7e028eb1383c782e8fe62a5320a4a7c06c5d2c228/modelsummary-1.0.3-py3-none-any.whl" }, { "comment_text": "", "digests": { "md5": "6d51b9682563789485f199b70444dbf4", "sha256": "131b3e9d6819ed553ac63ee99446165c94bef26aca0f8134ebf91e445b023ec0" }, "downloads": -1, "filename": "modelsummary-1.0.3.tar.gz", "has_sig": false, "md5_digest": "6d51b9682563789485f199b70444dbf4", "packagetype": "sdist", "python_version": "source", "requires_python": null, "size": 10274, "upload_time": "2019-03-04T14:21:17", "url": "https://files.pythonhosted.org/packages/94/d6/2ca69673c01c3c2fbf3c15a953ad418b584bdfb7c112508e81e6bd71bd03/modelsummary-1.0.3.tar.gz" } ], "1.0.4": [ { "comment_text": "", "digests": { "md5": "a538ea7426d8e4e143a7de44281fb2ff", "sha256": "d7b4df376c98e8d6d29d9ad185b50c2b0ce0576e244cb4f3ba9b5e4964843017" }, "downloads": -1, "filename": "modelsummary-1.0.4-py3-none-any.whl", "has_sig": false, "md5_digest": "a538ea7426d8e4e143a7de44281fb2ff", "packagetype": "bdist_wheel", "python_version": "py3", "requires_python": null, "size": 7770, "upload_time": "2019-03-04T14:25:19", "url": "https://files.pythonhosted.org/packages/f3/65/a8b053f839eedc7590b52d13def0839c6c8a2824432522bd82bb2134b863/modelsummary-1.0.4-py3-none-any.whl" }, { "comment_text": "", "digests": { "md5": "bb9c4f6da3c4ac2c22affd5050b14f82", "sha256": "289183150c3588db2bba5aa33295c989fd9a15ad91f811b0646187b17419e504" }, "downloads": -1, "filename": "modelsummary-1.0.4.tar.gz", "has_sig": false, "md5_digest": "bb9c4f6da3c4ac2c22affd5050b14f82", "packagetype": "sdist", "python_version": "source", "requires_python": null, "size": 10257, "upload_time": "2019-03-04T14:25:21", "url": 
"https://files.pythonhosted.org/packages/fd/dd/d94ebbaea4cff981b52279751dd4572d0ff1278f40dc011f4b1f33a53e1f/modelsummary-1.0.4.tar.gz" } ], "1.0.5": [ { "comment_text": "", "digests": { "md5": "193c4e2059b92d381b326e4cab7bef8a", "sha256": "28145fdc896fd9c425c7a3bbaea8fe22ddcfa2bfcd7279eb6a1486bede186fd4" }, "downloads": -1, "filename": "modelsummary-1.0.5-py3-none-any.whl", "has_sig": false, "md5_digest": "193c4e2059b92d381b326e4cab7bef8a", "packagetype": "bdist_wheel", "python_version": "py3", "requires_python": null, "size": 7712, "upload_time": "2019-03-04T14:28:04", "url": "https://files.pythonhosted.org/packages/db/bb/34bd27e92bb1df161f3158de837d36e3964072b1fe57c0a13a820374f08b/modelsummary-1.0.5-py3-none-any.whl" }, { "comment_text": "", "digests": { "md5": "800265253d4a6f17889cda6181b5a624", "sha256": "b1eb8c678bf8df1390f12bd7447a60c45001e20827b33d0634a42cc3d5b1ca39" }, "downloads": -1, "filename": "modelsummary-1.0.5.tar.gz", "has_sig": false, "md5_digest": "800265253d4a6f17889cda6181b5a624", "packagetype": "sdist", "python_version": "source", "requires_python": null, "size": 10573, "upload_time": "2019-03-04T14:28:05", "url": "https://files.pythonhosted.org/packages/41/0c/86c161d25d44e4e9fdc42868903ee9d6e9c6adac5fdd97bee4d5290ae501/modelsummary-1.0.5.tar.gz" } ], "1.0.6": [ { "comment_text": "", "digests": { "md5": "89f807bcf3b8bdd0be08548b60da3b51", "sha256": "d37c18d5be18313b5dc83db6618e7a884c18582a163a580eebdad55b30037d55" }, "downloads": -1, "filename": "modelsummary-1.0.6-py3-none-any.whl", "has_sig": false, "md5_digest": "89f807bcf3b8bdd0be08548b60da3b51", "packagetype": "bdist_wheel", "python_version": "py3", "requires_python": null, "size": 7769, "upload_time": "2019-03-04T14:30:08", "url": "https://files.pythonhosted.org/packages/98/52/42b374ba742d54258d2566187cbf3f526f4fed7adc7f65189d571c5c68a0/modelsummary-1.0.6-py3-none-any.whl" }, { "comment_text": "", "digests": { "md5": "ab984ce033806957378b401518714bea", "sha256": 
"41eca683a567a67462b284825cc3917163ab70d584e5ac4e93e3385f7c2bd0bd" }, "downloads": -1, "filename": "modelsummary-1.0.6.tar.gz", "has_sig": false, "md5_digest": "ab984ce033806957378b401518714bea", "packagetype": "sdist", "python_version": "source", "requires_python": null, "size": 10171, "upload_time": "2019-03-04T14:30:09", "url": "https://files.pythonhosted.org/packages/61/44/8b820ede0f870d2d52683bd4d31793a005f35af76eb490a93a6a31f6eecf/modelsummary-1.0.6.tar.gz" } ], "1.0.7": [ { "comment_text": "", "digests": { "md5": "2e6de982212b78297d46a66a41634565", "sha256": "238e14eae606a338f76b1b238247b6581eaa1d20283217f7ac52685c7f9f5a4c" }, "downloads": -1, "filename": "modelsummary-1.0.7.tar.gz", "has_sig": false, "md5_digest": "2e6de982212b78297d46a66a41634565", "packagetype": "sdist", "python_version": "source", "requires_python": null, "size": 10177, "upload_time": "2019-03-04T14:33:41", "url": "https://files.pythonhosted.org/packages/ea/33/eb76f741c22aded23d9b4da414d0675fda01c9568234c3d9405831e12b53/modelsummary-1.0.7.tar.gz" } ], "1.0.8": [ { "comment_text": "", "digests": { "md5": "8548e93356aa41c7861b424eb8873f9c", "sha256": "5cf9f1d4e57cde6cf0a4bed21656b1ead6cfe98a193354d9c3d18da7d4fa223e" }, "downloads": -1, "filename": "modelsummary-1.0.8-py3.6.egg", "has_sig": false, "md5_digest": "8548e93356aa41c7861b424eb8873f9c", "packagetype": "bdist_egg", "python_version": "3.6", "requires_python": null, "size": 4111, "upload_time": "2019-03-04T14:35:46", "url": "https://files.pythonhosted.org/packages/f0/8e/0abd272fec784ab29bd1c1ebb13b50a73c067abffaa0cc27309e9fd25303/modelsummary-1.0.8-py3.6.egg" }, { "comment_text": "", "digests": { "md5": "bbef5300b9a989ea7215a1b04cfd2259", "sha256": "8854b16a6bd31985fe6e0cc7181d5d8fb564726f558afb5200017e7ac5408e9b" }, "downloads": -1, "filename": "modelsummary-1.0.8.tar.gz", "has_sig": false, "md5_digest": "bbef5300b9a989ea7215a1b04cfd2259", "packagetype": "sdist", "python_version": "source", "requires_python": null, "size": 10166, 
"upload_time": "2019-03-04T14:35:47", "url": "https://files.pythonhosted.org/packages/cc/9d/6a3817a807718804e357eb7a0b77fb138077139dd6343eaf511bf2e30975/modelsummary-1.0.8.tar.gz" } ], "1.0.9": [ { "comment_text": "", "digests": { "md5": "9ec9e15182956e31c4176d7bc5994834", "sha256": "59dd8b36f81df3820c62033cf1bedd7a15337730bc923c8d1cbab6c9554f845b" }, "downloads": -1, "filename": "modelsummary-1.0.9-py3.6.egg", "has_sig": false, "md5_digest": "9ec9e15182956e31c4176d7bc5994834", "packagetype": "bdist_egg", "python_version": "3.6", "requires_python": null, "size": 4111, "upload_time": "2019-03-04T14:37:08", "url": "https://files.pythonhosted.org/packages/9d/d1/1bd20ea598c452787ed93ae7990d788156e66acaf81d25f23f4d109cf5ec/modelsummary-1.0.9-py3.6.egg" }, { "comment_text": "", "digests": { "md5": "0c4cebec9e66dc7913a949d6661f647f", "sha256": "2e940e3e5fe1b99edaa889943fe8b7ef7bf7c12c4f5e11fcdc5c874b8439220b" }, "downloads": -1, "filename": "modelsummary-1.0.9-py3-none-any.whl", "has_sig": false, "md5_digest": "0c4cebec9e66dc7913a949d6661f647f", "packagetype": "bdist_wheel", "python_version": "py3", "requires_python": null, "size": 7769, "upload_time": "2019-03-04T14:37:07", "url": "https://files.pythonhosted.org/packages/2f/de/360c78e48cff6e10d5870b153d490e90f2fc2a70bc53868d33719c058605/modelsummary-1.0.9-py3-none-any.whl" }, { "comment_text": "", "digests": { "md5": "a53c1265527c8af3dba1b20ef2c42c80", "sha256": "28f44ad1b8a677414ddf57cd1c9277a8c92862cf8e31d2e939576930cc5cdffc" }, "downloads": -1, "filename": "modelsummary-1.0.9.tar.gz", "has_sig": false, "md5_digest": "a53c1265527c8af3dba1b20ef2c42c80", "packagetype": "sdist", "python_version": "source", "requires_python": null, "size": 10179, "upload_time": "2019-03-04T14:37:10", "url": "https://files.pythonhosted.org/packages/07/fd/29339f611f7b749c95052883f79c6b9e4ee5067ede8bb617b72ce6b35bff/modelsummary-1.0.9.tar.gz" } ], "1.1.0": [ { "comment_text": "", "digests": { "md5": "30cdaac11e29ca6e3a89650176866c13", 
"sha256": "51367e8a5fc24422d33f68c078d962611085670365f69938625d87b49c2a24ac" }, "downloads": -1, "filename": "modelsummary-1.1.0.tar.gz", "has_sig": false, "md5_digest": "30cdaac11e29ca6e3a89650176866c13", "packagetype": "sdist", "python_version": "source", "requires_python": null, "size": 10185, "upload_time": "2019-03-04T14:43:44", "url": "https://files.pythonhosted.org/packages/64/9f/821f50b6ba374241f0eefbba0d7384300dcd86c5008e0a397871dfd98f07/modelsummary-1.1.0.tar.gz" } ], "1.1.1": [ { "comment_text": "", "digests": { "md5": "9a21d714538ea70d5ec0d8edec5edd74", "sha256": "0f51121fa89bb458efe8c50053f5bdf22e338bc1b4619adf3c979137301538f1" }, "downloads": -1, "filename": "modelsummary-1.1.1.tar.gz", "has_sig": false, "md5_digest": "9a21d714538ea70d5ec0d8edec5edd74", "packagetype": "sdist", "python_version": "source", "requires_python": null, "size": 11652, "upload_time": "2019-03-05T05:14:07", "url": "https://files.pythonhosted.org/packages/68/ed/7c7b5a6518515053d7b42cdaf138acdaa8b8b17a17e649f4c943069bb816/modelsummary-1.1.1.tar.gz" } ], "1.1.2": [ { "comment_text": "", "digests": { "md5": "187fea419acac306290b172c9bf6ea3e", "sha256": "44b1b128d5ec5a43d8083efe7557c1eee4161994c816d4dfaac15d941defb7cc" }, "downloads": -1, "filename": "modelsummary-1.1.2-py3.6.egg", "has_sig": false, "md5_digest": "187fea419acac306290b172c9bf6ea3e", "packagetype": "bdist_egg", "python_version": "3.6", "requires_python": null, "size": 10109, "upload_time": "2019-03-05T06:33:36", "url": "https://files.pythonhosted.org/packages/ff/92/be95201691cacd2174ed25cfb2b28fddf5189d71234630bc0d923d5075b4/modelsummary-1.1.2-py3.6.egg" }, { "comment_text": "", "digests": { "md5": "87dfc75b2273a83d6155ec73a528a9b2", "sha256": "30174c523da1b74438d5f1853b7d9b67c3e894f3d7c6e0870ba67f6fc1cf1f1d" }, "downloads": -1, "filename": "modelsummary-1.1.2.tar.gz", "has_sig": false, "md5_digest": "87dfc75b2273a83d6155ec73a528a9b2", "packagetype": "sdist", "python_version": "source", "requires_python": null, 
"size": 11632, "upload_time": "2019-03-05T06:33:38", "url": "https://files.pythonhosted.org/packages/78/b8/6b5234a3fc036b555b7992b4e1f8569f6108126602cf5a85b799bc0625e1/modelsummary-1.1.2.tar.gz" } ], "1.1.3": [ { "comment_text": "", "digests": { "md5": "0561e6ef49599988c8f7b1d6735da94c", "sha256": "c94d6e411eaa508d07cc14f23b8342c557974b064e5ca79fd449a91faa31fe26" }, "downloads": -1, "filename": "modelsummary-1.1.3-py3.6.egg", "has_sig": false, "md5_digest": "0561e6ef49599988c8f7b1d6735da94c", "packagetype": "bdist_egg", "python_version": "3.6", "requires_python": null, "size": 10108, "upload_time": "2019-03-05T06:42:01", "url": "https://files.pythonhosted.org/packages/44/9c/d336a9e5af85da81d89b179cf74346bea3c2e75fa74f2579f572d53296b5/modelsummary-1.1.3-py3.6.egg" }, { "comment_text": "", "digests": { "md5": "07753daa6ef6b07fe0dfba94240a47de", "sha256": "0764d7a2ff861b8a5d6398bf7a76f1b7a679f9cb2f1bc56f93d47afa7e404ca4" }, "downloads": -1, "filename": "modelsummary-1.1.3.tar.gz", "has_sig": false, "md5_digest": "07753daa6ef6b07fe0dfba94240a47de", "packagetype": "sdist", "python_version": "source", "requires_python": null, "size": 11609, "upload_time": "2019-03-05T06:42:02", "url": "https://files.pythonhosted.org/packages/50/8b/931ad4f247ce5ce6ed0e67d214cc0552e60ee69de13098c74fb073ee5632/modelsummary-1.1.3.tar.gz" } ], "1.1.4": [ { "comment_text": "", "digests": { "md5": "42f7d39f83784dd2c746560f9d41938a", "sha256": "353158c1a9010e1828c713aa6c9767213066b91ca542b82d74d353ab5434ef51" }, "downloads": -1, "filename": "modelsummary-1.1.4-py3.6.egg", "has_sig": false, "md5_digest": "42f7d39f83784dd2c746560f9d41938a", "packagetype": "bdist_egg", "python_version": "3.6", "requires_python": null, "size": 10137, "upload_time": "2019-03-05T06:53:45", "url": "https://files.pythonhosted.org/packages/e2/19/5680f28cf6a49fcbe2bfe9e04162c53b06fef07fdc475b1c527af9357d08/modelsummary-1.1.4-py3.6.egg" }, { "comment_text": "", "digests": { "md5": "5404a466fce3f2f999a14f6a31eca242", 
"sha256": "3d567b8c988a482696e6fe22a53d870b03da9bdbf56aa48d5bc204b26dfabf5c" }, "downloads": -1, "filename": "modelsummary-1.1.4.tar.gz", "has_sig": false, "md5_digest": "5404a466fce3f2f999a14f6a31eca242", "packagetype": "sdist", "python_version": "source", "requires_python": null, "size": 11637, "upload_time": "2019-03-05T06:53:46", "url": "https://files.pythonhosted.org/packages/03/91/45f49318945921782afb1139249597efce794bd100119960d635f9e24275/modelsummary-1.1.4.tar.gz" } ], "1.1.5": [ { "comment_text": "", "digests": { "md5": "84e197a35cd5a2b9478529026681e88c", "sha256": "5015a711405f3bdf0e31855465ca45945069596a38050a8eb08aaf6e0899b1fb" }, "downloads": -1, "filename": "modelsummary-1.1.5-py3.6.egg", "has_sig": false, "md5_digest": "84e197a35cd5a2b9478529026681e88c", "packagetype": "bdist_egg", "python_version": "3.6", "requires_python": null, "size": 10137, "upload_time": "2019-03-05T07:00:27", "url": "https://files.pythonhosted.org/packages/b3/27/05fd6537383ddfa8a19fa2ded3a50bba59ed886b4a96147b4c960d8a5ee8/modelsummary-1.1.5-py3.6.egg" }, { "comment_text": "", "digests": { "md5": "a9958bff231dc7420733341ba0ea03c3", "sha256": "fee4cb121e138bca567c677c3a14de3c6866b85583834f06a4b90f8dcfac8390" }, "downloads": -1, "filename": "modelsummary-1.1.5.tar.gz", "has_sig": false, "md5_digest": "a9958bff231dc7420733341ba0ea03c3", "packagetype": "sdist", "python_version": "source", "requires_python": null, "size": 11627, "upload_time": "2019-03-05T07:00:28", "url": "https://files.pythonhosted.org/packages/07/85/1d7b54f5e7bb15d3f6f6f8a348fc51f5e68a6fdc932441da9111451708f6/modelsummary-1.1.5.tar.gz" } ], "1.1.6": [ { "comment_text": "", "digests": { "md5": "c499cae27a7e7ccad1217ee8b32551af", "sha256": "53ffebf545e8554c1ae1d9536796e0bc13d82647f181201f523c92376c367122" }, "downloads": -1, "filename": "modelsummary-1.1.6-py3.6.egg", "has_sig": false, "md5_digest": "c499cae27a7e7ccad1217ee8b32551af", "packagetype": "bdist_egg", "python_version": "3.6", "requires_python": null, 
"size": 10139, "upload_time": "2019-03-05T07:59:29", "url": "https://files.pythonhosted.org/packages/9f/ab/617a7b96cae70626ff85b4be845caa62e2ca4debdd1b3fef269b243db87e/modelsummary-1.1.6-py3.6.egg" }, { "comment_text": "", "digests": { "md5": "51be6c852fb0048968de0c2578d5884b", "sha256": "ec38ead159a20974196181ae611236a9cd9e5110c47001572779b783beae257c" }, "downloads": -1, "filename": "modelsummary-1.1.6.tar.gz", "has_sig": false, "md5_digest": "51be6c852fb0048968de0c2578d5884b", "packagetype": "sdist", "python_version": "source", "requires_python": null, "size": 11647, "upload_time": "2019-03-05T07:59:31", "url": "https://files.pythonhosted.org/packages/63/8f/4563fafa31c228a258ee92d727db598299397b7f9b546b92ea05cf38a9cc/modelsummary-1.1.6.tar.gz" } ], "1.1.7": [ { "comment_text": "", "digests": { "md5": "539330330b9dc916a103d8e54e96f536", "sha256": "b5af8d6684395629b16287bb22dc54cf522f2e4a704115a67b20caec7e44299b" }, "downloads": -1, "filename": "modelsummary-1.1.7.tar.gz", "has_sig": false, "md5_digest": "539330330b9dc916a103d8e54e96f536", "packagetype": "sdist", "python_version": "source", "requires_python": null, "size": 11799, "upload_time": "2019-04-24T09:05:07", "url": "https://files.pythonhosted.org/packages/36/98/08d46b021de6aebad12ac368d03afe41d399285b2629cff2a2c076822981/modelsummary-1.1.7.tar.gz" } ] }, "urls": [ { "comment_text": "", "digests": { "md5": "539330330b9dc916a103d8e54e96f536", "sha256": "b5af8d6684395629b16287bb22dc54cf522f2e4a704115a67b20caec7e44299b" }, "downloads": -1, "filename": "modelsummary-1.1.7.tar.gz", "has_sig": false, "md5_digest": "539330330b9dc916a103d8e54e96f536", "packagetype": "sdist", "python_version": "source", "requires_python": null, "size": 11799, "upload_time": "2019-04-24T09:05:07", "url": "https://files.pythonhosted.org/packages/36/98/08d46b021de6aebad12ac368d03afe41d399285b2629cff2a2c076822981/modelsummary-1.1.7.tar.gz" } ] }