{ "info": { "author": "Praveen Patil", "author_email": "pspatil@usc.edu", "bugtrack_url": null, "classifiers": [ "License :: OSI Approved :: MIT License", "Operating System :: OS Independent", "Programming Language :: Python :: 3", "Programming Language :: Python :: 3.4", "Programming Language :: Python :: 3.5", "Programming Language :: Python :: 3.6", "Programming Language :: Python :: 3.7" ], "description": "\ufeff\n\n\n# ntap: Neural Text Analysis Pipeline\n\n`ntap` is a python package built on top of `tensorflow`, `sklearn`, `pandas`, `gensim`, `nltk`, and other libraries to facilitate the core functionalities of text analysis using modern methods from NLP. \n\n## Data loading and Text featurization\n\nAll `ntap` functionalities use the Dataset object class, which is responsible for loading datasets from file, cleaning text, transforming text into features, and saving results to file. \n\n## ntap.data.Dataset\n```\nDataset(source, tokenizer=\"wordpunct\", vocab_size=5000, embed=\"glove\",\n\t\tmin_token=5, stopwords=None, stem=False, lower=True, max_len=100,\n\t\tinclude_nums=False, include_symbols=False, num_topics=100, \n\t\tlda_max_iter=500)\n```\n### Parameters\n\n* `source`: _str_, path to single data file. Supported formats: newline-delimited `.json`, `.csv`, `.tsv`, saved Pandas DataFrame as `.pkl` file\n* `tokenizer`: _str_, select which tokenizer to use. if `None`, will tokenize based on white-space. Options are based on `nltk` word tokenizers: \"wordpunct\", ... (others not currently supported)\n* `vocab_size`: _int_, keep the top `vocab_size` types, by frequency. Used in bag-of-words features, as well as neural methods. If `None`, use all of vocabulary.\n* `embed`: _str_, select which word embedding to use for initialization of embedding layer. Currently only `glove` is supported\n* `min_token`: _int_, indicates the minimum size, by number of tokens, for a document to be included after calling `clean`. \n* `stopwords`: _iterable_ or _str_, set of words to exclude. Default is `None`, which excludes no words. Options include lists/sets, as well as strings indicating the use of a saved list: `nltk` is the only currently supported option, and indicates the default `nltk` English list\n* `stem`: _bool_ or _str_, if `False` then do not stem/lemmatize, otherwise follow the stemming procedure named by `stem`. Options are `snowball`\n* `lower`: _bool_, if `True` then cast all alpha characters to lowercase\n* `max_len`: _int_, maximum length, by number of valid tokens, for a document to be included during modeling. `None` will result in the maximum length being calculated by the existing document set\n* `include_nums`: _bool_, if `True`, then do not discard tokens which contain numeric characters. Examples of this include dates, figures, and other numeric datatypes.\n* `include_symbols`: _bool_, if `True`, then do not discard tokens which contain non-alphanumeric symbols\n* `num_topics`: _int_, sets default number of topics to use if `lda` method is called at a later point. \n* `lda_max_iter`: _int_, sets default number of iterations of Gibbs sampling to run during LDA model fitting\n\n### Methods\n\nThe Dataset class has a number of methods for control over the internal functionality of the class, which are called by Method objects. The most important stand-alone methods are the following:\n\n* `Dataset.set_params(**kwargs)`:\n\t* Can be called at any time to reset a subset of the parameters in `Dataset`\n\t* TODO: call specific refitting (i.e. 
\n\n### Methods\n\nThe Dataset class has a number of methods that control its internal functionality; these are called by Method objects. The most important stand-alone methods are the following:\n\n* `Dataset.set_params(**kwargs)`:\n\t* Can be called at any time to reset a subset of the parameters in `Dataset`\n\t* TODO: call specific refitting (i.e. `__learn_vocab`)\n* `Dataset.clean(column, remove=[\"hashtags\", \"mentions\", \"links\"], mode=\"remove\")`:\n\t* Removes any tokens (before calling the tokenizer) matching the descriptions in the `remove` list, then tokenizes the documents in `column`, defines the vocabulary, and prunes documents from the Dataset instance that do not meet the length criteria. All of these steps are governed by the parameters stored in Dataset\n\t* `column`: _str_, the name of the text column in the data file\n\t* `remove`: _list_ of _str_, each item indicates a type of token to remove. If `None` or empty, no tokens are removed\n\t* `mode`: _str_, later versions may store hashtags or links instead of discarding them. Currently the only option is `remove`\n\nThe Dataset object supports a number of feature methods (e.g. LDA, TFIDF), which can be called directly by the user or implicitly during Method construction (see the Method documentation).\n\n* `Dataset.lda(column, method=\"mallet\", save_model=None, load_model=None)`:\n\t* Uses the `gensim` wrapper of the `Mallet` Java application. Currently only this is supported, though other implementations of LDA can be added. `save_model` and `load_model` are currently unsupported\n\t* `column`: _str_, text column\n\t* `method`: only \"mallet\" is supported\n\t* `save_model`: _str_, path at which to save the trained topic model. Not yet implemented\n\t* `load_model`: _str_, path from which to load a trained topic model. Not yet implemented\n* `Dataset.ddr(column, dictionary, **kwargs)`:\n\t* Currently the only feature method which must be called in advance (later versions will store the dictionary internally)\n\t* `column`: column in Dataset containing text. Does not have to be tokenized.\n\t* `dictionary`: _str_, path to a dictionary file. Currently supported types are `.json` and `.csv`; `.dic` will be added in a later version\n\t* possible `kwargs` include `embed`, which can be used to set the embedding source (e.g. `embed=\"word2vec\"`; this feature has not yet been added)\n* `Dataset.tfidf(column)`:\n\t* Uses the `gensim` TFIDF implementation. If the vocabulary has been learned previously, it is reused; otherwise it is relearned and the document-term matrix is computed\n\t* `column`: _str_, text column\n* Later versions will include BERT and GloVe embedding averages\n\n### Examples\n\nBelow is a set of use cases for the Dataset object. Methods like `SVM` are covered elsewhere and are included here only for illustrative purposes.\n\n```\nfrom ntap.data import Dataset\nfrom ntap.models import RNN, SVM\n\ngab_data = Dataset(\"./my_data/gab.tsv\")\nother_gab_data = Dataset(\"./my_data/gab.tsv\", vocab_size=20000, stem=\"snowball\", max_len=1000)\ngab_data.clean(\"text\")\nother_gab_data.clean(\"text\") # using stored parameters\nother_gab_data.set_params(include_nums=True) # reset parameter\nother_gab_data.clean(\"text\") # rerun using updated parameters\n\ngab_data.set_params(num_topics=50, lda_max_iter=100)\nbase_gab = SVM(\"hate ~ lda(text)\", data=gab_data)\nbase_gab2 = SVM(\"hate ~ lda(text)\", data=other_gab_data)\n```\n\n# Base Models\n\nFor supervised learning tasks, `ntap` currently provides two baseline methods, `SVM` and `LM`. `SVM` uses `sklearn`'s implementation of a Support Vector Machine classifier, while `LM` uses either `ElasticNet` (regularized linear regression) or `LinearRegression` from `sklearn`. Both models support the same core modeling functions: `CV`, `train`, and `predict`, with `CV` optionally supporting grid search.\n\nAll methods are created using an `R`-like formula syntax. Base models like `SVM` and `LM` only support single-target models, while other models support multiple targets.
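\n\nFor illustration, here are a few formula strings in this syntax, using the `hate`, `moral`, and `text` columns that appear in the examples throughout this document:\n\n```\n\"hate ~ tfidf(text)\"        # TFIDF features built from the text column\n\"hate ~ lda(text)\"          # LDA topic proportions as features\n\"hate ~ seq(text)\"          # token sequences, for neural models such as RNN\n\"hate + moral ~ seq(text)\"  # multiple targets (multi-task); neural models only\n```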
\n\n## ntap.models.SVM\n\n```\nSVM(formula, data, C=1.0, class_weight=None, dual=False, penalty='l2', loss='squared_hinge', tol=0.0001, max_iter=1000, random_state=None)\n\nLM(formula, data, alpha=0.0, l1_ratio=0.5, max_iter=1000, tol=0.001, random_state=None)\n```\n\n### Parameters\n* `formula`: _str_, contains a single `~` symbol separating the left-hand side (the target/dependent variable) from the right-hand side (a series of `+`-delimited terms). Each right-hand-side term is either a column in the Dataset object given to the constructor, or a feature call such as `tfidf(text)` or `lda(text)`\n* `data`: _Dataset_, an existing Dataset instance\n* `tol`: _float_, stopping criterion (difference in loss between epochs)\n* `max_iter`: _int_, maximum iterations during training\n* `random_state`: _int_\n\nSVM:\n* `C`: _float_, corresponds to the `sklearn` \"C\" parameter in the SVM classifier\n* `dual`: _bool_, corresponds to the `sklearn` \"dual\" parameter in the SVM classifier\n* `penalty`: _string_, regularization function to use, corresponds to the `sklearn` \"penalty\" parameter\n* `loss`: _string_, loss function to use, corresponds to the `sklearn` \"loss\" parameter\n\nLM:\n* `alpha`: _float_, controls regularization. `alpha=0.0` corresponds to Least Squares regression; `alpha=1.0` is the default ElasticNet setting\n* `l1_ratio`: _float_, trade-off between L1 and L2 regularization. `l1_ratio=1.0` is LASSO; `l1_ratio=0.0` is Ridge\n\n### Functions\n\nA number of functions are common to both `LM` and `SVM`:\n\n* `set_params(**kwargs)`\n* `CV`:\n\t* Cross-validation that implicitly supports grid search. If a list of parameter values (instead of a single value) is given, `CV` runs a grid search over all possible combinations of parameters\n\t* `LM`: `CV(data, num_folds=10, metric=\"r2\", random_state=None)`\n\t* `SVM`: `CV(data, num_epochs, num_folds=10, stratified=True, metric=\"accuracy\")`\n\t\t* `num_epochs`: number of epochs/iterations to train. This should be revised\n\t\t* `num_folds`: number of cross-validation folds\n\t\t* `stratified`: if `True`, split data using stratified folds (balanced with respect to the target variable)\n\t\t* `metric`: metric on which to compare CV results from different parameter grids (if no grid search is specified, no comparison is done and `metric` is disregarded)\n\t* Returns: an instance of class `CV_Results`\n\t\t* Contains all applicable classification (or regression) metrics, for each CV fold and the mean across folds\n\t\t* Contains the saved parameter set\n* `train`\n\t* Not currently advised for user application. Use `CV` instead\n* `predict`\n\t* Not currently advised for user application. Use `CV` instead\n\n### Examples\n\n```\nfrom ntap.data import Dataset\nfrom ntap.models import SVM\n\ndata = Dataset(\"./my_data.csv\")\nmodel = SVM(\"hate ~ tfidf(text)\", data=data)\nbasic_cv_results = model.CV(num_folds=5)\nbasic_cv_results.summary()\nmodel.set_params(C=[1., .8, .5, .2, .01]) # a list of values enables grid search\ngrid_searched = model.CV(num_folds=5)\ngrid_searched.summary()\ngrid_searched.params\n```
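\n\n`LM` is used the same way; below is a minimal sketch, assuming a continuous target column named `score` (hypothetical) in the same file:\n\n```\nfrom ntap.data import Dataset\nfrom ntap.models import LM\n\ndata = Dataset(\"./my_data.csv\")\nols = LM(\"score ~ tfidf(text)\", data=data)  # alpha=0.0: ordinary least squares\nenet = LM(\"score ~ tfidf(text)\", data=data, alpha=1.0, l1_ratio=0.5)  # ElasticNet\nenet_results = enet.CV(num_folds=10)  # as with SVM, lists of parameter values trigger grid search\nenet_results.summary()\n```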
\n\n# Models\n\nOne basic model has been implemented for `ntap`: `RNN`. Later models will include `CNN` and other neural variants. All model classes (`CNN`, `RNN`, etc.) have the following methods: `CV`, `predict`, and `train`.\n\nModel formulas that use text in a neural architecture should use the following syntax:\n`\"target ~ seq(text_column)\"`\n\n## `ntap.models.RNN`\n\n```\nRNN(formula, data, hidden_size=128, cell=\"biLSTM\", rnn_dropout=0.5, embedding_dropout=None,\n\toptimizer='adam', learning_rate=0.001, rnn_pooling='last', embedding_source='glove',\n\trandom_state=None)\n```\n\n### Parameters\n\n* `formula`\n\t* similar to the base methods, but supports multiple targets (multi-task learning). The format for this is: `\"hate + moral ~ seq(text)\"`\n* `data`: _Dataset_ object\n* `hidden_size`: _int_, number of hidden units in the single-layer RNN-type model\n* `cell`: _str_, type of RNN cell. Default is a bidirectional Long Short-Term Memory (LSTM) cell. Options are `biLSTM`, `LSTM`, `GRU`, and `biGRU` (bidirectional Gated Recurrent Unit)\n* `rnn_dropout`: _float_, proportion of units to randomly zero out in a dropout layer applied to the outputs of the RNN. If `None`, no dropout is applied (not advised)\n* `embedding_dropout`: _str_, not implemented\n* `optimizer`: _str_, optimizer to use during training. Options are `adam`, `sgd`, `momentum`, and `rmsprop`\n* `learning_rate`: learning rate during training\n* `rnn_pooling`: _str_ or _int_. If _int_, the model has self-attention, and a feed-forward layer of size `rnn_pooling` is applied to the outputs of the RNN layer in order to produce the attention alphas. If a string, the options are `last` (default RNN behavior, where the last hidden vector is taken as the sentence representation and prior states are discarded), `mean` (average hidden states across the entire sequence), and `max` (take the maximum over hidden vectors); the options are illustrated below\n* `embedding_source`: _str_, currently only `glove` (others not implemented)\n* `random_state`: _int_\n\n### Functions\n\n* `CV(data, num_folds, num_epochs, comp='accuracy', model_dir=None)`\n\t* Automatically performs grid search if multiple values are given for a particular parameter\n\t* `data`: _Dataset_ on which to perform CV\n\t* `num_folds`: _int_\n\t* `comp`: _str_, metric on which to compare different parameter grids (does not apply if there is no grid search)\n\t* `model_dir`: if `None`, trained models are saved in a temp directory and discarded after the script exits. Otherwise, `CV` attempts to save each model in the path given by `model_dir`\n\t* Returns: _CV_results_ instance with the best model's stats (if grid search) and the best parameters (not yet supported)\n* `train(data, num_epochs=30, batch_size=256, indices=None, model_path=None)`\n\t* Method called by `CV`; it can also be called independently. Can train on all data (`indices=None`) or a specified subset. If `model_path` is `None`, the model is not saved; otherwise `train` attempts to save the model at `model_path`\n\t* `indices`: either `None` (train on all data) or a list of _int_, where each value is an index in the range `(0, len(data) - 1)`\n* `predict(data, model_path, indices=None, batch_size=256, retrieve=list())`\n\t* Predicts on new data. Requires a saved model to exist at `model_path`\n\t* `indices`: either `None` (predict on all data) or a list of _int_, where each value is an index in the range `(0, len(data) - 1)`\n\t* `retrieve`: list of strings indicating which model variables to retrieve during prediction. Options include `rnn_alpha` (attention models only) and `hidden_states` (any model)\n\t* Returns: a dictionary of {variable_name: value_list}. Contents are predicted values for each target variable and any model variables named in `retrieve`.
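\n\nAs a quick illustration of the `rnn_pooling` options described above, the corresponding constructor calls might look like the following sketch (reusing the `hate` and `text` columns from the example below):\n\n```\nRNN(\"hate ~ seq(text)\", data=data, rnn_pooling='last')  # last hidden state (default)\nRNN(\"hate ~ seq(text)\", data=data, rnn_pooling='mean')  # average of hidden states\nRNN(\"hate ~ seq(text)\", data=data, rnn_pooling='max')   # max over hidden states\nRNN(\"hate ~ seq(text)\", data=data, rnn_pooling=100)     # self-attention with a 100-unit layer\n```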
\n\n```\nfrom ntap.data import Dataset\nfrom ntap.models import RNN\n\ndata = Dataset(\"./my_data.csv\")\nbase_lstm = RNN(\"hate ~ seq(text)\", data=data)\nattention_lstm = RNN(\"hate ~ seq(text)\", data=data, rnn_pooling=100) # attention\ncontext_lstm = RNN(\"hate ~ seq(text) + speaker_party\", data=data) # categorical variable\nbase_lstm.set_params(hidden_size=[200, 50], learning_rate=[0.01, 0.05]) # enable grid search during CV\n\n# Grid search and print results from best parameters\nbase_results = base_lstm.CV()\nbase_results.summary()\n\n# Train model and save. Predict for 6 specific instances and get alphas\nattention_lstm.train(data, model_path=\"./trained_model\")\npredictions = attention_lstm.predict(data, model_path=\"./trained_model\",\n\t\t\t\t\t\t\tindices=[0,1,2,3,4,5], retrieve=[\"rnn_alphas\"])\nfor alphas in predictions[\"rnn_alphas\"]:\n\tprint(alphas) # prints a list of floats, each the weight of a word in the ith document\n```\n\n# Coming soon...\n\n`MIL(formula, data, ...)`\n- not implemented\n\n`HAN(formula, data, ...)`\n- not implemented\n\n`CNN()`\n- not implemented\n\n## `NTAP.data.Tagme`\nNot implemented\n`Tagme(token=\"system\", p=0.15, tweet=False)`\n* `token` (`str`): Personal `Tagme` token. Users can retrieve a token by [creating an account](https://sobigdata.d4science.org/home?p_p_id=58&p_p_lifecycle=0&p_p_state=maximized&p_p_mode=view&p_p_col_id=column-1&p_p_col_count=2&saveLastPath=false&_58_struts_action=%2Flogin%2Fcreate_account). The default behavior (\"system\") assumes the `Tagme` token has been set during installation of `NTAP`.\nMembers:\n* `get_tags(list-like of strings)`\n\t* Stores `abstracts` and `categories` as member variables\n* `reset()`\n* `abstracts`: dictionary of {`entity_id`: `abstract text ...`}\n* `categories`: dictionary of {`entity_id`: `[category1, category2, ...]`}\n```\ndata = Dataset(\"path.csv\")\ndata.tokenize(tokenizer='tweettokenize')\nabstracts, categories = data.get_tagme(tagme_token=ntap.tagme_token, p=0.15, tweet=False)\n# tagme saved as data object at data.entities\ndata.background_features(method='pointwise-mi', ...) # assumes data.tagme is set; creates features\n# saves features at data.background\n\nbackground_mod = RNN(\"purity ~ seq(words) + background\", data=data)\nbackground_mod.CV(kfolds=10)\n```\n## `NTAP.data.TACIT`\nNot implemented. 
Wrapper around TACIT instance\n`TACIT(path_to_tacit_directory, params to create tacit session)`\n\n\n", "description_content_type": "text/markdown", "docs_url": null, "download_url": "", "downloads": { "last_day": -1, "last_month": -1, "last_week": -1 }, "home_page": "https://github.com/USC-CSSL/NTAP", "keywords": "", "license": "", "maintainer": "", "maintainer_email": "", "name": "ntap", "package_url": "https://pypi.org/project/ntap/", "platform": "", "project_url": "https://pypi.org/project/ntap/", "project_urls": { "Homepage": "https://github.com/USC-CSSL/NTAP" }, "release_url": "https://pypi.org/project/ntap/1.0.8/", "requires_dist": [ "absl-py (==0.7.1)", "astor (==0.8.0)", "boto (==2.49.0)", "boto3 (==1.9.199)", "botocore (==1.12.199)", "certifi (==2019.6.16)", "chardet (==3.0.4)", "docutils (==0.14)", "gast (==0.2.2)", "gensim (==3.8.0)", "google-pasta (==0.1.7)", "grpcio (==1.22.0)", "h5py (==2.9.0)", "idna (==2.8)", "jmespath (==0.9.4)", "joblib (==0.13.2)", "Keras-Applications (==1.0.8)", "Keras-Preprocessing (==1.1.0)", "Markdown (==3.1.1)", "nltk (==3.4.4)", "numpy (==1.17.0)", "pandas (==0.25.0)", "protobuf (==3.9.0)", "python-dateutil (==2.8.0)", "pytz (==2019.2)", "requests (==2.22.0)", "s3transfer (==0.2.1)", "scikit-learn (==0.21.3)", "scipy (==1.3.0)", "six (==1.12.0)", "sklearn (==0.0)", "smart-open (==1.8.4)", "tensorboard (==1.14.0)", "tensorflow (==1.14.0)", "tensorflow-estimator (==1.14.0)", "termcolor (==1.1.0)", "urllib3 (==1.25.3)", "Werkzeug (==0.15.5)", "wrapt (==1.11.2)" ], "requires_python": "", "summary": "NTAP - CSSL", "version": "1.0.8" }, "last_serial": 5673274, "releases": { "1.0.1": [ { "comment_text": "", "digests": { "md5": "74afd19a2a6d0cf89fb30a0a487a3341", "sha256": "460173a3282538bd31b986d4ab82cb79d4b95e46a06d61da238719b9afa089c0" }, "downloads": -1, "filename": "ntap-1.0.1-py3-none-any.whl", "has_sig": false, "md5_digest": "74afd19a2a6d0cf89fb30a0a487a3341", "packagetype": "bdist_wheel", "python_version": "py3", "requires_python": null, "size": 15957, "upload_time": "2019-07-25T04:53:44", "url": "https://files.pythonhosted.org/packages/70/12/04e921c9c117189c5cba25e053ec33a7644e2fbec32b246d14ec0d85b391/ntap-1.0.1-py3-none-any.whl" }, { "comment_text": "", "digests": { "md5": "12a7fa20a0a8208f26a789189a03ae96", "sha256": "97d485c2ffd5925d50de2c7ad5b1daf3f4d1c2a6776486d0434161f9b75a3e02" }, "downloads": -1, "filename": "ntap-1.0.1.tar.gz", "has_sig": false, "md5_digest": "12a7fa20a0a8208f26a789189a03ae96", "packagetype": "sdist", "python_version": "source", "requires_python": null, "size": 17206, "upload_time": "2019-07-25T04:53:46", "url": "https://files.pythonhosted.org/packages/7d/3e/26e4c6392c457432050a4e75092ba71251d3435c005fa6de79f14cdd4a2e/ntap-1.0.1.tar.gz" } ], "1.0.2": [ { "comment_text": "", "digests": { "md5": "5c3cd063a3ed70b53d0eada63269d71b", "sha256": "c982334a94784a969e8b7959e8e084fa81e723c86c3dcc1d5ed88d1d9e48930d" }, "downloads": -1, "filename": "NTAP-1.0.2-py3-none-any.whl", "has_sig": false, "md5_digest": "5c3cd063a3ed70b53d0eada63269d71b", "packagetype": "bdist_wheel", "python_version": "py3", "requires_python": null, "size": 15939, "upload_time": "2019-07-23T04:13:33", "url": "https://files.pythonhosted.org/packages/59/a1/9d8934f6d019068a7c147e101d9cd3e023480699565f11e361ed0babe9a6/NTAP-1.0.2-py3-none-any.whl" }, { "comment_text": "", "digests": { "md5": "174fa8689b9b34914c65bc5e3acb2e2f", "sha256": "41418c7400d97d46d7b2e10e690de7e782cefe9fad635c421832240a7e13f226" }, "downloads": -1, "filename": "NTAP-1.0.2.tar.gz", 
"has_sig": false, "md5_digest": "174fa8689b9b34914c65bc5e3acb2e2f", "packagetype": "sdist", "python_version": "source", "requires_python": null, "size": 14908, "upload_time": "2019-07-23T04:13:34", "url": "https://files.pythonhosted.org/packages/25/e4/251be6ec86601248355d2dcc52ad5e4c1d12f720548f8a53937b0422e25e/NTAP-1.0.2.tar.gz" } ], "1.0.3": [ { "comment_text": "", "digests": { "md5": "51fabd4480bdfcc903276e1002d5e5da", "sha256": "7fa2aabd8060c836c419bdf912cb6ae17eeef31ea048cf5f2d02638537a4a99b" }, "downloads": -1, "filename": "ntap-1.0.3-py3-none-any.whl", "has_sig": false, "md5_digest": "51fabd4480bdfcc903276e1002d5e5da", "packagetype": "bdist_wheel", "python_version": "py3", "requires_python": null, "size": 15958, "upload_time": "2019-07-25T04:58:48", "url": "https://files.pythonhosted.org/packages/b9/8a/656fa435feda6d982e0ac4813d500392cd76f820465017dbcd9b4cb0896b/ntap-1.0.3-py3-none-any.whl" }, { "comment_text": "", "digests": { "md5": "ff1d4c19fb996037483f8cb2dee7935e", "sha256": "3aa2f8e1893137a224f5674f0be2dccc716d55bbdcea2503473cea502e8b901c" }, "downloads": -1, "filename": "ntap-1.0.3.tar.gz", "has_sig": false, "md5_digest": "ff1d4c19fb996037483f8cb2dee7935e", "packagetype": "sdist", "python_version": "source", "requires_python": null, "size": 17199, "upload_time": "2019-07-25T04:58:50", "url": "https://files.pythonhosted.org/packages/a3/85/e72b92089fd321e817c434c7b3ea27c4781185cf7b34dc7fb3a7b998691a/ntap-1.0.3.tar.gz" } ], "1.0.4": [ { "comment_text": "", "digests": { "md5": "5d27a06714a5a2ca550b315ca1064ccd", "sha256": "53dab8029d48c889afef832b6291c97ec57820dffc4666f2cb2e96592875fb0b" }, "downloads": -1, "filename": "ntap-1.0.4-py3-none-any.whl", "has_sig": false, "md5_digest": "5d27a06714a5a2ca550b315ca1064ccd", "packagetype": "bdist_wheel", "python_version": "py3", "requires_python": null, "size": 15961, "upload_time": "2019-07-30T22:38:37", "url": "https://files.pythonhosted.org/packages/a4/8a/4265c49802e53d977ca048b4378055f64885c654d1df6b0acee463e1c731/ntap-1.0.4-py3-none-any.whl" }, { "comment_text": "", "digests": { "md5": "377a44142939ff2a4539b54cc505cb54", "sha256": "3dfe073ac8a91eb384cce5f592335f34f1f7936c5e8155fc69d3743c2d9387bc" }, "downloads": -1, "filename": "ntap-1.0.4.tar.gz", "has_sig": false, "md5_digest": "377a44142939ff2a4539b54cc505cb54", "packagetype": "sdist", "python_version": "source", "requires_python": null, "size": 17188, "upload_time": "2019-07-30T22:38:39", "url": "https://files.pythonhosted.org/packages/26/ce/084697a44506b00459bd7763eaa6a342229ad80bbc8776581635dfc96be5/ntap-1.0.4.tar.gz" } ], "1.0.5": [ { "comment_text": "", "digests": { "md5": "8bd34dd54ca4cca08023fa1c92067da6", "sha256": "b52b0575622e5c80d46a42d1e861527a4c98def14f11eca443f1bda8abd2bfae" }, "downloads": -1, "filename": "ntap-1.0.5-py3-none-any.whl", "has_sig": false, "md5_digest": "8bd34dd54ca4cca08023fa1c92067da6", "packagetype": "bdist_wheel", "python_version": "py3", "requires_python": null, "size": 15956, "upload_time": "2019-07-30T23:02:48", "url": "https://files.pythonhosted.org/packages/2f/88/fcd923fc4cb76fe2e601af4a84dbe69854d9508bb79e646aa81190178d8c/ntap-1.0.5-py3-none-any.whl" }, { "comment_text": "", "digests": { "md5": "418fab4f30b30d2d8f1be7ab7a5f5973", "sha256": "4f1f97dd958cb28baf6f8ea73565738dc8b8fc6b1b66430e1de7ff8cf22f5469" }, "downloads": -1, "filename": "ntap-1.0.5.tar.gz", "has_sig": false, "md5_digest": "418fab4f30b30d2d8f1be7ab7a5f5973", "packagetype": "sdist", "python_version": "source", "requires_python": null, "size": 17180, "upload_time": 
"2019-07-30T23:02:49", "url": "https://files.pythonhosted.org/packages/df/11/41cc8ef01c57acd06269747e588b2a31517565c945038eeaa8ac2724fc7f/ntap-1.0.5.tar.gz" } ], "1.0.6": [ { "comment_text": "", "digests": { "md5": "de0ca911b1bbcc640d27f5e487fe4bb1", "sha256": "9f18629d0c642173733a54f46281f2dc749ee291f87f1cb0c1214bc8fb1c045f" }, "downloads": -1, "filename": "ntap-1.0.6-py3-none-any.whl", "has_sig": false, "md5_digest": "de0ca911b1bbcc640d27f5e487fe4bb1", "packagetype": "bdist_wheel", "python_version": "py3", "requires_python": null, "size": 20457, "upload_time": "2019-08-01T22:27:51", "url": "https://files.pythonhosted.org/packages/71/a9/105c37d3269e452dfa2d7df1b7a64e2694944dbaa910f7af9e2e94997963/ntap-1.0.6-py3-none-any.whl" }, { "comment_text": "", "digests": { "md5": "b6dd720ea7fcf1ccfbaae1da31fc2379", "sha256": "faf79b46d427c624b08ccbc39b4b66f19598d7b4f6a4dea0cb33ba30a35774cd" }, "downloads": -1, "filename": "ntap-1.0.6.tar.gz", "has_sig": false, "md5_digest": "b6dd720ea7fcf1ccfbaae1da31fc2379", "packagetype": "sdist", "python_version": "source", "requires_python": null, "size": 26179, "upload_time": "2019-08-01T22:27:53", "url": "https://files.pythonhosted.org/packages/17/b2/7f801d7b01744f208d4f5fbcc8fc7ac22849c8e90102f1523368a085e9a2/ntap-1.0.6.tar.gz" } ], "1.0.7": [ { "comment_text": "", "digests": { "md5": "5d07615ebcaf3728baa96517718f84cc", "sha256": "9334b4d6f8213cec570a48c0097de914d8da0ce97b62d6cd5baa93c1e1df1756" }, "downloads": -1, "filename": "ntap-1.0.7-py3-none-any.whl", "has_sig": false, "md5_digest": "5d07615ebcaf3728baa96517718f84cc", "packagetype": "bdist_wheel", "python_version": "py3", "requires_python": null, "size": 20457, "upload_time": "2019-08-13T18:41:48", "url": "https://files.pythonhosted.org/packages/a5/54/8dcb973fc29ba3ca25980b9a2b6ad53f71da322b5e44146014c4fb79014f/ntap-1.0.7-py3-none-any.whl" }, { "comment_text": "", "digests": { "md5": "19482867078cb73940bb8958fe280317", "sha256": "47603895a1248337cae8c3af9b1b576773884f99ec001e2fbc8e9ec99dedfb84" }, "downloads": -1, "filename": "ntap-1.0.7.tar.gz", "has_sig": false, "md5_digest": "19482867078cb73940bb8958fe280317", "packagetype": "sdist", "python_version": "source", "requires_python": null, "size": 26200, "upload_time": "2019-08-13T18:41:50", "url": "https://files.pythonhosted.org/packages/47/1a/54d2219fc29e93b95b97fc5407efca0be3e1d7ccf606416bdffa6d921faa/ntap-1.0.7.tar.gz" } ], "1.0.8": [ { "comment_text": "", "digests": { "md5": "e21d2469f2340ee6b187ca6502fb5d27", "sha256": "891262373b09771ab1939889eb863350a1d305bbab369017c487c4f9583dd150" }, "downloads": -1, "filename": "ntap-1.0.8-py3-none-any.whl", "has_sig": false, "md5_digest": "e21d2469f2340ee6b187ca6502fb5d27", "packagetype": "bdist_wheel", "python_version": "py3", "requires_python": null, "size": 20458, "upload_time": "2019-08-13T19:33:38", "url": "https://files.pythonhosted.org/packages/96/da/108a00102ddfd17c31cc87a8bdf106c5f3171139c970b844ca7fff984a15/ntap-1.0.8-py3-none-any.whl" }, { "comment_text": "", "digests": { "md5": "b02f31b48de705369084323b0942c259", "sha256": "6a87f74776e96f718f0a5c653c5d134364cf697d0f193083d56b7bdcdeff6de4" }, "downloads": -1, "filename": "ntap-1.0.8.tar.gz", "has_sig": false, "md5_digest": "b02f31b48de705369084323b0942c259", "packagetype": "sdist", "python_version": "source", "requires_python": null, "size": 26196, "upload_time": "2019-08-13T19:33:39", "url": "https://files.pythonhosted.org/packages/e7/de/88363ae26006646fc35a9f5b0414726a6551722e4ca4f4890a9d08e8a5af/ntap-1.0.8.tar.gz" } ] }, "urls": [ { 
"comment_text": "", "digests": { "md5": "e21d2469f2340ee6b187ca6502fb5d27", "sha256": "891262373b09771ab1939889eb863350a1d305bbab369017c487c4f9583dd150" }, "downloads": -1, "filename": "ntap-1.0.8-py3-none-any.whl", "has_sig": false, "md5_digest": "e21d2469f2340ee6b187ca6502fb5d27", "packagetype": "bdist_wheel", "python_version": "py3", "requires_python": null, "size": 20458, "upload_time": "2019-08-13T19:33:38", "url": "https://files.pythonhosted.org/packages/96/da/108a00102ddfd17c31cc87a8bdf106c5f3171139c970b844ca7fff984a15/ntap-1.0.8-py3-none-any.whl" }, { "comment_text": "", "digests": { "md5": "b02f31b48de705369084323b0942c259", "sha256": "6a87f74776e96f718f0a5c653c5d134364cf697d0f193083d56b7bdcdeff6de4" }, "downloads": -1, "filename": "ntap-1.0.8.tar.gz", "has_sig": false, "md5_digest": "b02f31b48de705369084323b0942c259", "packagetype": "sdist", "python_version": "source", "requires_python": null, "size": 26196, "upload_time": "2019-08-13T19:33:39", "url": "https://files.pythonhosted.org/packages/e7/de/88363ae26006646fc35a9f5b0414726a6551722e4ca4f4890a9d08e8a5af/ntap-1.0.8.tar.gz" } ] }