{ "info": { "author": "Steve Schmerler", "author_email": "git@elcorto.com", "bugtrack_url": null, "classifiers": [], "description": "=====================================================\npsweep -- loop like a pro, make parameter studies fun\n=====================================================\n\nAbout\n=====\n\nThis package helps you to set up and run parameter studies.\n\nMostly, you'll start with a script and a for-loop and ask \"why do I need a\npackage for that\"? Well, soon you'll want housekeeping tools and a database for\nyour runs and results. This package exists because sooner or later, everyone\ndoing parameter scans arrives at roughly the same workflow and tools.\n\nThis package deals with commonly encountered boilerplate tasks:\n\n* write a database of parameters and results automatically\n* make a backup of the database and all results when you repeat or extend the\n study\n* append new rows to the database when extending the study\n* simulate a parameter scan\n\nBeyond that, the main goal is to not constrain your flexibility by building a\ncomplicated framework -- we provide only very basic building blocks. All data\nstructures are really simple (dicts), as are the provided functions. The\ndatabase is a normal pandas DataFrame.\n\n\nGetting started\n===============\n\nA trivial example: Loop over two parameters 'a' and 'b':\n\n.. 
code-block:: python\n\n #!/usr/bin/env python3\n\n import random\n from itertools import product\n from psweep import psweep as ps\n\n\n def func(pset):\n return {'result': random.random() * pset['a'] * pset['b']}\n\n\n if __name__ == '__main__':\n a = ps.seq2dicts('a', [1,2,3,4])\n b = ps.seq2dicts('b', [8,9])\n params = ps.loops2params(product(a,b))\n df = ps.run(func, params)\n print(df)\n\nThis produces a list ``params`` of parameter sets (dicts ``{'a': ..., 'b': ...}``) to loop\nover::\n\n [{'a': 1, 'b': 8},\n {'a': 1, 'b': 9},\n {'a': 2, 'b': 8},\n {'a': 2, 'b': 9},\n {'a': 3, 'b': 8},\n {'a': 3, 'b': 9},\n {'a': 4, 'b': 8},\n {'a': 4, 'b': 9}]\n\n\nand a database of results (pandas DataFrame ``df``, pickled file ``calc/results.pk``\nby default)::\n\n _calc_dir _pset_id \\\n 2018-07-22 20:06:07.401398 calc 99a0f636-10b3-438c-ab43-c583fda806e8\n 2018-07-22 20:06:07.406902 calc 6ec59d2b-7562-4262-b8d6-8f898a95f521\n 2018-07-22 20:06:07.410227 calc d3c22d7d-bc6d-4297-afc3-285482e624b5\n 2018-07-22 20:06:07.412210 calc f2b2269b-86e3-4b15-aeb7-92848ae25f7b\n 2018-07-22 20:06:07.414637 calc 8e1db575-1be2-4561-a835-c88739dc0440\n 2018-07-22 20:06:07.416465 calc 674f8a2c-bc21-40f4-b01f-3702e0338ae8\n 2018-07-22 20:06:07.418866 calc b4d3d11b-0f22-4c73-a895-7363c635c0c6\n 2018-07-22 20:06:07.420706 calc a265ca2f-3a9f-4323-b494-4b6763c46929\n\n _run_id \\\n 2018-07-22 20:06:07.401398 3e09daf8-c3a7-49cb-8aa3-f2c040c70e8f\n 2018-07-22 20:06:07.406902 3e09daf8-c3a7-49cb-8aa3-f2c040c70e8f\n 2018-07-22 20:06:07.410227 3e09daf8-c3a7-49cb-8aa3-f2c040c70e8f\n 2018-07-22 20:06:07.412210 3e09daf8-c3a7-49cb-8aa3-f2c040c70e8f\n 2018-07-22 20:06:07.414637 3e09daf8-c3a7-49cb-8aa3-f2c040c70e8f\n 2018-07-22 20:06:07.416465 3e09daf8-c3a7-49cb-8aa3-f2c040c70e8f\n 2018-07-22 20:06:07.418866 3e09daf8-c3a7-49cb-8aa3-f2c040c70e8f\n 2018-07-22 20:06:07.420706 3e09daf8-c3a7-49cb-8aa3-f2c040c70e8f\n\n _time_utc a b result\n 2018-07-22 20:06:07.401398 2018-07-22 20:06:07.401398 1 8 2.288036\n 
2018-07-22 20:06:07.406902 2018-07-22 20:06:07.406902 1 9 7.944922\n 2018-07-22 20:06:07.410227 2018-07-22 20:06:07.410227 2 8 14.480190\n 2018-07-22 20:06:07.412210 2018-07-22 20:06:07.412210 2 9 3.532110\n 2018-07-22 20:06:07.414637 2018-07-22 20:06:07.414637 3 8 9.019944\n 2018-07-22 20:06:07.416465 2018-07-22 20:06:07.416465 3 9 4.382123\n 2018-07-22 20:06:07.418866 2018-07-22 20:06:07.418866 4 8 2.713900\n 2018-07-22 20:06:07.420706 2018-07-22 20:06:07.420706 4 9 27.358240\n\nYou see the columns 'a' and 'b', the column 'result' (returned by ``func``) and\na number of reserved fields for book-keeping such as\n\n::\n\n _run_id\n _pset_id\n _calc_dir\n _time_utc\n\nand a timestamped index.\n\nObserve that one call ``ps.run(func, params)`` creates one ``_run_id`` -- a\nUUID identifying this run. Inside that, each call ``func(pset)`` creates a\nunique ``_pset_id``, a timestamp and a new row in the DataFrame (the database).\n\nConcepts\n========\n\nThe basic data structure for a param study is a list ``params`` of dicts\n(called \"parameter sets\" or `pset` for short).\n\n.. code-block:: python\n\n params = [{'a': 1, 'b': 'lala'}, # pset 1\n {'a': 2, 'b': 'zzz'}, # pset 2\n ... # ...\n ]\n\nEach `pset` contains values of parameters ('a' and 'b') which are varied\nduring the parameter study.\n\nYou need to define a callback function ``func``, which takes exactly one `pset`\nsuch as::\n\n {'a': 1, 'b': 'lala'}\n\nand runs the workload for that `pset`. ``func`` must return a dict, for example::\n\n {'result': 1.234}\n\nor an updated `pset`::\n\n {'a': 1, 'b': 'lala', 'result': 1.234}\n\nWe always merge (``dict.update``) the result of ``func`` with the `pset`,\nwhich gives you flexibility in what to return from ``func``.\n\nThe `psets` form the rows of a pandas ``DataFrame``, which we use to store\nthe `pset` and the result from each ``func(pset)``.\n\nThe idea is now to run ``func`` in a loop over all `psets` in ``params``. 
You\ndo this using the ``ps.run`` helper function. The function adds some special\ncolumns such as ``_run_id`` (once per ``ps.run`` call) or ``_pset_id`` (once\nper `pset`). Using ``ps.run(... poolsize=...)`` runs ``func`` in parallel on\n``params`` using ``multiprocessing.Pool``.\n\nThis package offers some very simple helper functions which assist in creating\n``params``. Basically, we define the to-be-varied parameters ('a' and 'b')\nand then use something like ``itertools.product`` to loop over them to create\n``params``, which is passed to ``ps.run`` to actually perform the loop over all\n`psets`.\n\n.. code-block:: python\n\n >>> from itertools import product\n >>> from psweep import psweep as ps\n >>> x=ps.seq2dicts('x', [1,2,3])\n >>> y=ps.seq2dicts('y', ['xx','yy','zz'])\n >>> x\n [{'x': 1}, {'x': 2}, {'x': 3}]\n >>> y\n [{'y': 'xx'}, {'y': 'yy'}, {'y': 'zz'}]\n >>> ps.loops2params(product(x,y))\n [{'x': 1, 'y': 'xx'},\n {'x': 1, 'y': 'yy'},\n {'x': 1, 'y': 'zz'},\n {'x': 2, 'y': 'xx'},\n {'x': 2, 'y': 'yy'},\n {'x': 2, 'y': 'zz'},\n {'x': 3, 'y': 'xx'},\n {'x': 3, 'y': 'yy'},\n {'x': 3, 'y': 'zz'}]\n\nThe logic of the param study is entirely contained in the creation of ``params``.\nE.g., if parameters shall be varied together (say x and y), then instead of\n\n.. code-block:: python\n\n >>> product(x,y,z)\n\nuse\n\n.. code-block:: python\n\n >>> product(zip(x,y), z)\n\nThe nesting from ``zip()`` is flattened in ``loops2params()``.\n\n.. code-block:: python\n\n >>> z=ps.seq2dicts('z', [None, 1.2, 'X'])\n >>> ps.loops2params(product(zip(x,y),z))\n [{'x': 1, 'y': 'xx', 'z': None},\n {'x': 1, 'y': 'xx', 'z': 1.2},\n {'x': 1, 'y': 'xx', 'z': 'X'},\n {'x': 2, 'y': 'yy', 'z': None},\n {'x': 2, 'y': 'yy', 'z': 1.2},\n {'x': 2, 'y': 'yy', 'z': 'X'},\n {'x': 3, 'y': 'zz', 'z': None},\n {'x': 3, 'y': 'zz', 'z': 1.2},\n {'x': 3, 'y': 'zz', 'z': 'X'}]\n\nIf you want a parameter which is constant, use a list of length one:\n\n.. 
code-block:: python\n\n >>> c=ps.seq2dicts('c', ['const'])\n >>> ps.loops2params(product(zip(x,y),z,c))\n [{'x': 1, 'c': 'const', 'y': 'xx', 'z': None},\n {'x': 1, 'c': 'const', 'y': 'xx', 'z': 1.2},\n {'x': 1, 'c': 'const', 'y': 'xx', 'z': 'X'},\n {'x': 2, 'c': 'const', 'y': 'yy', 'z': None},\n {'x': 2, 'c': 'const', 'y': 'yy', 'z': 1.2},\n {'x': 2, 'c': 'const', 'y': 'yy', 'z': 'X'},\n {'x': 3, 'c': 'const', 'y': 'zz', 'z': None},\n {'x': 3, 'c': 'const', 'y': 'zz', 'z': 1.2},\n {'x': 3, 'c': 'const', 'y': 'zz', 'z': 'X'}]\n\nSo, as you can see, the general idea is that we do all the loops *before*\nrunning any workload, i.e. we assemble the parameter grid to be sampled before\nthe actual calculations. This has proven to be very practical as it helps\ndetect errors early.\n\nYou are, by the way, of course not restricted to using ``itertools.product``. You\ncan use any complicated manual loop you can come up with. The point is: you\ngenerate ``params``, we run the study.\n\n\n_pset_id, _run_id and repeated runs\n-----------------------------------\n\nSee ``examples/vary_2_params_repeat.py``.\n\nIt is important to understand the difference between the two special fields\n``_run_id`` and ``_pset_id``, the most important one being ``_pset_id``.\n\nBoth are random UUIDs, used to uniquely identify a single run (``_run_id``) and\na single `pset` (``_pset_id``).\n\nBy default, ``ps.run()`` writes a database ``calc/results.pk`` (a pickled\nDataFrame) with the default ``calc_dir='calc'``. If you run ``ps.run()``\nagain\n\n.. code-block:: python\n\n df = ps.run(func, params)\n df = ps.run(func, other_params)\n\nit will read and append to that file. The same happens in an interactive\nsession when you pass in ``df`` again:\n\n.. code-block:: python\n\n df = ps.run(func, params) # default is df=None -> create empty df\n df = ps.run(func, other_params, df=df)\n\n\nOnce per ``ps.run`` call, a ``_run_id`` is created. 
This means that when you\ncall ``ps.run`` multiple times *using the same database* as just shown, you\nwill see multiple (in this case two) ``_run_id`` values.\n\n::\n\n _run_id _pset_id\n afa03dab-071e-472d-a396-37096580bfee 21d2185d-b900-44b3-a98d-4b8866776a77\n afa03dab-071e-472d-a396-37096580bfee 3f63742b-6457-46c2-8ed3-9513fe166562\n afa03dab-071e-472d-a396-37096580bfee 1a812d67-0ffc-4ab1-b4bb-ad9454f91050\n afa03dab-071e-472d-a396-37096580bfee 995f5b0b-f9a6-45ee-b4d1-5784a25be4c6\n e813db52-7fb9-4777-a4c8-2ce0dddc283c 7b5d8f76-926c-44e2-a0e3-2e68deb86abb\n e813db52-7fb9-4777-a4c8-2ce0dddc283c f46bb714-4677-4a11-b371-dd2d41a83d19\n e813db52-7fb9-4777-a4c8-2ce0dddc283c 5fdcc88b-d467-4117-aa03-fd256656299b\n e813db52-7fb9-4777-a4c8-2ce0dddc283c 8c5c07ca-3862-4726-a7d0-15d60e281407\n\nEach ``ps.run`` call in turn calls ``func(pset)`` for each `pset` in\n``params``. Each ``func`` invocation creates a unique ``_pset_id``. Thus, we\nhave a very simple, yet powerful one-to-one mapping and a way to refer to a\nspecific `pset`.\n\n\nBest practices\n==============\n\nThe following workflows and practices come from experience. They are, if you\nwill, the \"framework\" for how to do things. However, we decided to not codify\nany of these ideas but to only provide tools to make them happen easily,\nbecause you will probably have quite different requirements and workflows.\n\nPlease also have a look at the ``examples/`` dir where we document these and\nmore common workflows.\n\nSave data on disk, use UUIDs\n----------------------------\n\nSee ``examples/save_data_on_disk.py``.\n\nAssume that you need to save results from a run not only in the returned dict\nfrom ``func`` (or even not at all!) but on disk, for instance when you call an\nexternal program which saves data on disk. Consider this example:\n\n.. 
code-block:: python\n\n import os\n import subprocess\n from psweep import psweep as ps\n\n\n def func(pset):\n fn = os.path.join(pset['_calc_dir'],\n pset['_pset_id'],\n 'output.txt')\n cmd = \"mkdir -p $(dirname {fn}); echo {a} > {fn}\".format(a=pset['a'],\n fn=fn)\n pset['cmd'] = cmd\n subprocess.run(cmd, shell=True)\n return pset\n\n\nIn this case, you call an external program (here a dummy shell command) which\nsaves its output on disk. Note that we don't return any output from the\nexternal command from ``func``. We only update ``pset`` with the shell ``cmd``\nwe call in order to have that in the database.\n\nAlso note how we use the special fields ``_pset_id`` and ``_calc_dir``, which\nare added in ``ps.run`` to ``pset`` *before* ``func`` is called.\n\nAfter the run, we have four dirs, one for each `pset`, each simply named with\n``_pset_id``::\n\n calc\n \u251c\u2500\u2500 63b5daae-1b37-47e9-a11c-463fb4934d14\n \u2502\u00a0\u00a0 \u2514\u2500\u2500 output.txt\n \u251c\u2500\u2500 657cb9f9-8720-4d4c-8ff1-d7ddc7897700\n \u2502\u00a0\u00a0 \u2514\u2500\u2500 output.txt\n \u251c\u2500\u2500 d7849792-622d-4479-aec6-329ed8bedd9b\n \u2502\u00a0\u00a0 \u2514\u2500\u2500 output.txt\n \u251c\u2500\u2500 de8ac159-b5d0-4df6-9e4b-22ebf78bf9b0\n \u2502\u00a0\u00a0 \u2514\u2500\u2500 output.txt\n \u2514\u2500\u2500 results.pk\n\nThis is a useful pattern. History has shown that in the end, most naming\nconventions start simple but turn out to be inflexible and hard to adapt later\non. I have seen people write scripts which create things\nlike::\n\n calc/param_a=1.2_param_b=66.77\n calc/param_a=3.4_param_b=88.99\n\ni.e. encode the parameter values in path names, because they don't have a\ndatabase. Good luck parsing that. I don't say this cannot be done -- sure it\ncan (in fact the example above is easy to parse). It is just not fun -- and there\nis no need to. What if you need to add a \"column\" for parameter 'c' later?\nImpossible (well, painful at least). 
This approach makes sense for very quick\nthrow-away test runs, but gets out of hand quickly.\n\nSince we have a database, we can simply drop all data in ``calc/<_pset_id>``\nand be done with it. Each parameter set is identified by a UUID that will never\nchange. This is the only kind of naming convention which makes sense in the\nlong run.\n\n\nIterative extension of a parameter study\n----------------------------------------\n\nSee ``examples/{10,20}multiple_1d_scans_with_backup.py``.\n\nWe recommend always using `backup_calc_dir`:\n\n.. code-block:: python\n\n df = ps.run(func, params, backup_calc_dir=True)\n\n`backup_calc_dir` will save a copy of the old\n`calc_dir` to ``calc_<timestamp>``, i.e. something like\n``calc_2018-09-06T20:22:27.845008Z`` before doing anything else. That way, you\ncan track old states of the overall study, and recover from mistakes.\n\nFor any non-trivial work, you won't use an interactive session.\nInstead, you will have a driver script which defines ``params`` and starts\n``ps.run()``. Also, in a common workflow, you won't define ``params`` and run a\nstudy just once. Instead, you will first have an idea about which parameter values to\nscan. You will start with a coarse grid of parameters and then inspect the\nresults and identify regions where you need more data (e.g. denser\nsampling). Then you will modify ``params`` and run the study again. You will\nmodify the driver script multiple times, as you refine your study. To save the\nold states of that script, use `backup_script`:\n\n.. code-block:: python\n\n df = ps.run(func, params, backup_calc_dir=True, backup_script=__file__)\n\n`backup_script` will save a copy of the script which you use to drive your study\nto ``calc/backup_script/<_run_id>.py``. 
Since each ``ps.run()`` will create a new\n``_run_id``, you will have a backup of the code which produced your results for\nthis ``_run_id`` (without putting everything in a git repo, which may be\nunpleasant if your study produces large amounts of data).\n\nSimulate / Dry-Run: look before you leap\n----------------------------------------\n\nSee ``examples/vary_1_param_simulate.py``.\n\nWhen you fiddle with finding the next good ``params``, and even when using\n`backup_calc_dir`, appending to the old database might be a hassle if you find\nthat you made a mistake when setting up ``params``. You need to abort the\ncurrent run, delete\n`calc_dir` and copy the last backup back:\n\n.. code-block:: sh\n\n $ rm -r calc\n $ mv calc_2018-09-06T20:22:27.845008Z calc\n\nInstead, while you tinker with ``params``, use another `calc_dir`, e.g.\n\n.. code-block:: python\n\n df = ps.run(func, params, calc_dir='calc_test')\n\nBut what's even better: keep everything as it is and just set ``simulate=True``:\n\n.. code-block:: python\n\n df = ps.run(func, params, backup_calc_dir=True, backup_script=__file__,\n simulate=True)\n\nThis will copy only the database (not all the possibly large data in\n``calc/``) to ``calc.simulate/`` and run the study there, but without actually\ncalling ``func()``. So you still append to your old database as in a real run,\nbut in a safe separate dir which you can delete later.\n\n\nGive runs names for easy post-processing\n----------------------------------------\n\nSee ``examples/vary_1_param_study_column.py``.\n\nPost-processing is outside the scope of this package. The database is a DataFrame\nand that's it. You can query it and use your full pandas Ninja skills here,\ne.g. \"give me all psets where parameter 'a' was between 10 and 100, while 'b'\nwas constant, which were run last week and the result was not < 0\" ... 
you get\nthe idea.\n\nTo ease post-processing, it is useful practice to add a constant parameter\nnamed \"study\" or \"scan\" to label a certain range of runs. If you, for\ninstance, have 5 runs where you scan values for parameter 'a' while keeping\nparameters 'b' and 'c' constant, you'll have 5 ``_run_id`` values. When\nquerying the database later, you could limit by ``_run_id`` if you know the\nvalues:\n\n.. code-block:: python\n\n >>> df = df[(df._run_id=='afa03dab-071e-472d-a396-37096580bfee') |\n (df._run_id=='e813db52-7fb9-4777-a4c8-2ce0dddc283c') |\n ...\n ]\n\nThis doesn't look like fun. It shows that the UUIDs (``_run_id`` and\n``_pset_id``) are rarely meant to be used directly. Instead, you should (in this\nexample) limit by the constant values of the other parameters:\n\n.. code-block:: python\n\n >>> df = df[(df.b==10) & (df.c=='foo')]\n\nMuch better! This is what most post-processing scripts will do.\n\nBut when you have a column \"study\" which has the value ``'a'`` all the time, it\nis just\n\n.. code-block:: python\n\n >>> df = df[df.study=='a']\n\nYou can do more powerful things with this approach. For instance, say you vary\nparameters 'a' and 'b', then you could name the \"study\" field 'scan=a:b'\nand encode which parameters (thus column names) you have varied. Later in the\npost-processing\n\n.. code-block:: python\n\n >>> study = 'scan=a:b'\n # cols = ['a', 'b']\n >>> cols = study.split('=')[1].split(':')\n >>> values = df[cols].values\n\nSo in this case, a naming convention *is* useful in order to bypass possibly\ncomplex database queries. 
But it is still flexible -- you can change the\n\"study\" column at any time, or delete it again.\n\nPro tip: You can manipulate the database at any later point and add the \"study\"\ncolumn after all runs have been done.\n\nSuper Pro tip: Make a backup of the database first!\n\n\nInstall\n=======\n\n::\n\n $ pip3 install psweep\n\n\nDev install of this repo::\n\n $ pip3 install -e .\n\nSee also https://github.com/elcorto/samplepkg.\n\nTests\n=====\n\n::\n\n # apt-get install python3-nose\n $ nosetests3\n\n\n", "description_content_type": "", "docs_url": null, "download_url": "", "downloads": { "last_day": -1, "last_month": -1, "last_week": -1 }, "home_page": "https://github.com/elcorto/psweep", "keywords": "parameter study sweep scan database pandas", "license": "BSD 3-Clause", "maintainer": "", "maintainer_email": "", "name": "psweep", "package_url": "https://pypi.org/project/psweep/", "platform": "", "project_url": "https://pypi.org/project/psweep/", "project_urls": { "Homepage": "https://github.com/elcorto/psweep" }, "release_url": "https://pypi.org/project/psweep/0.3.1/", "requires_dist": [ "docopt", "pandas (>=0.19.2)", "tabulate (>=0.8.2)" ], "requires_python": "", "summary": "loop like a pro, make parameter studies fun: set up and run a parameter study/sweep/scan, save a database", "version": "0.3.1" }, "last_serial": 5736144, "releases": { "0.2.1": [ { "comment_text": "", "digests": { "md5": "c7bf5b2dd91e4ce6e0b630d64d67f4d6", "sha256": "c35de839b64ef732fea88542d378645f7a03770a8666eee782499bc3c03c29b5" }, "downloads": -1, "filename": "psweep-0.2.1-py3-none-any.whl", "has_sig": false, "md5_digest": "c7bf5b2dd91e4ce6e0b630d64d67f4d6", "packagetype": "bdist_wheel", "python_version": "py3", "requires_python": null, "size": 5509, "upload_time": "2018-08-26T20:00:48", "url": "https://files.pythonhosted.org/packages/f3/9f/f83be8f219ea6e663e6c8d8d61a2934c80401c8c78a717563662b121fa36/psweep-0.2.1-py3-none-any.whl" }, { "comment_text": "", "digests": { "md5": 
"aa820b31a164285872f5c32bd1d071b1", "sha256": "8cf3be1baf2099ec597b22ca22dd14730948996707d59cbfeec2956e6e42895c" }, "downloads": -1, "filename": "psweep-0.2.1.tar.gz", "has_sig": false, "md5_digest": "aa820b31a164285872f5c32bd1d071b1", "packagetype": "sdist", "python_version": "source", "requires_python": null, "size": 7076, "upload_time": "2018-08-26T20:00:50", "url": "https://files.pythonhosted.org/packages/f3/3c/ade75985db5c8bb467ecfee9a37dd4df817af7afc951176e1ac38d072535/psweep-0.2.1.tar.gz" } ], "0.3.0": [ { "comment_text": "", "digests": { "md5": "39102bbdaabd67342013d535277d9c4d", "sha256": "155944d4bb80278acd960e4651b1a716195a775457854c030a95f616417a2356" }, "downloads": -1, "filename": "psweep-0.3.0-py3-none-any.whl", "has_sig": false, "md5_digest": "39102bbdaabd67342013d535277d9c4d", "packagetype": "bdist_wheel", "python_version": "py3", "requires_python": null, "size": 13350, "upload_time": "2018-10-06T18:07:01", "url": "https://files.pythonhosted.org/packages/9d/59/2ce9b6fa8eaa82153573255e3832ce4d92c0df8d3f854da04f28a2c372e6/psweep-0.3.0-py3-none-any.whl" }, { "comment_text": "", "digests": { "md5": "6a730c9d8cfc6290c8bffaf9d7ca3236", "sha256": "d78a87eaf610b1bff4fc2b1fc1e15fb33b2704610a10a8f1bf0e6bcd6fe5502f" }, "downloads": -1, "filename": "psweep-0.3.0.tar.gz", "has_sig": false, "md5_digest": "6a730c9d8cfc6290c8bffaf9d7ca3236", "packagetype": "sdist", "python_version": "source", "requires_python": null, "size": 16575, "upload_time": "2018-10-06T18:07:03", "url": "https://files.pythonhosted.org/packages/11/9a/0d264fc01434c3aab13f4e0f18ecacf071faec1ae645b3c2f44fdeb9b4e0/psweep-0.3.0.tar.gz" } ], "0.3.1": [ { "comment_text": "", "digests": { "md5": "d7d0e43c7b75ca7bf3ea452db17a7bf7", "sha256": "b9483d61782a6a93aa26237655f715587053901f0975a97d3a302a87d5662c98" }, "downloads": -1, "filename": "psweep-0.3.1-py3-none-any.whl", "has_sig": false, "md5_digest": "d7d0e43c7b75ca7bf3ea452db17a7bf7", "packagetype": "bdist_wheel", "python_version": "py3", 
"requires_python": null, "size": 14313, "upload_time": "2019-08-27T10:54:42", "url": "https://files.pythonhosted.org/packages/ca/14/b6e9aa752f2eed25b86e97ae875ef533d5b01c056ee692c7612352dd4a05/psweep-0.3.1-py3-none-any.whl" }, { "comment_text": "", "digests": { "md5": "367a6a8191aaccbae638fe07d7b5ae36", "sha256": "49d3009d940be351aab9fffe309427fe260d47ad8f75dcc20b4b5ed56ad2221f" }, "downloads": -1, "filename": "psweep-0.3.1.tar.gz", "has_sig": false, "md5_digest": "367a6a8191aaccbae638fe07d7b5ae36", "packagetype": "sdist", "python_version": "source", "requires_python": null, "size": 21132, "upload_time": "2019-08-27T10:54:45", "url": "https://files.pythonhosted.org/packages/5f/41/8442eda3223317fe8cf4f1eb9222f4a3b29d4b531215a68e5f200d2dda37/psweep-0.3.1.tar.gz" } ] }, "urls": [ { "comment_text": "", "digests": { "md5": "d7d0e43c7b75ca7bf3ea452db17a7bf7", "sha256": "b9483d61782a6a93aa26237655f715587053901f0975a97d3a302a87d5662c98" }, "downloads": -1, "filename": "psweep-0.3.1-py3-none-any.whl", "has_sig": false, "md5_digest": "d7d0e43c7b75ca7bf3ea452db17a7bf7", "packagetype": "bdist_wheel", "python_version": "py3", "requires_python": null, "size": 14313, "upload_time": "2019-08-27T10:54:42", "url": "https://files.pythonhosted.org/packages/ca/14/b6e9aa752f2eed25b86e97ae875ef533d5b01c056ee692c7612352dd4a05/psweep-0.3.1-py3-none-any.whl" }, { "comment_text": "", "digests": { "md5": "367a6a8191aaccbae638fe07d7b5ae36", "sha256": "49d3009d940be351aab9fffe309427fe260d47ad8f75dcc20b4b5ed56ad2221f" }, "downloads": -1, "filename": "psweep-0.3.1.tar.gz", "has_sig": false, "md5_digest": "367a6a8191aaccbae638fe07d7b5ae36", "packagetype": "sdist", "python_version": "source", "requires_python": null, "size": 21132, "upload_time": "2019-08-27T10:54:45", "url": "https://files.pythonhosted.org/packages/5f/41/8442eda3223317fe8cf4f1eb9222f4a3b29d4b531215a68e5f200d2dda37/psweep-0.3.1.tar.gz" } ] }