{ "info": { "author": "Dummy Name", "author_email": "dd@dummysolutions.com", "bugtrack_url": null, "classifiers": [ "License :: OSI Approved :: GNU General Public License v3 (GPLv3)", "Operating System :: OS Independent", "Programming Language :: Python :: 3 :: Only" ], "description": "# python3-template [![Build Status](https://travis-ci.org/andres-fr/python3-template.svg?branch=master)](https://travis-ci.org/andres-fr/python3-template) [![PyPI version](https://badge.fury.io/py/dummypackage-dummyname.svg)](https://badge.fury.io/py/dummypackage-dummyname) [![Documentation Status](https://readthedocs.org/projects/python3-template/badge/?version=latest)](https://python3-template.readthedocs.io/en/latest/?badge=latest)\n\n\nDummy Python3 project providing structure for development, static type checking, unit testing, runtime/memory benchmarking, PEP8 check, [autodocumentation](https://python3-template.readthedocs.io), and deployment to [PyPI](https://pypi.org/project/dummypackage-dummyname) and [GitHub Releases](https://github.com/andres-fr/python3-template/releases), automated via [Travis CI](https://travis-ci.org/andres-fr/python3-template) (online and locally).\n\n\n* The actual code is to be developed in the `dummypackage` library, and used in an application like `dummyapp.py`, which can be run with `python dummyapp.py`. **To ensure proper function of the tools, all subdirectories must include an `__init__.py` file**.\n\n* The unit tests are developed in the `dummypackage_utest` directory. 
They can also be arbitrarily nested, but **must also include an `__init__.py` file in each test directory**.\n\n* Reports for runtime and memory benchmarks can be generated into `timebenchmark` and `memorybenchmark`, respectively.\n\n* Autodocs are also generated using `sphinx` into the `docs` directory, in PDF as well as HTML (which can be deployed [online](https://python3-template.readthedocs.io)).\n\n* A `setup.py` script to create an OS-agnostic package into `dist` is provided, as well as functionality for keeping track of versioning and for deploying to GitHub Releases and PyPI.\n\n* All these tasks can be performed manually as explained later on, and are also automated via Travis CI, as can be seen in the `.travis.yml` file and the `ci_scripts` directory, for maximal efficiency and efficacy.\n\n\nThis README has been developed for Ubuntu-like systems, but with the intention of being as OS-agnostic as possible (all Python examples should transfer well to other systems). Also note that this has been tested for Python 3 only; make sure that your `python` command invokes a Python 3 binary (or adapt the examples presented here).\n\n\n# Dependencies:\n\nSome general dependencies are listed under the `addons` tag in the `.travis.yml` file. See `requirements.txt` for the Python-specific dependencies. 
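The Python-specific dependencies can then be installed with `pip`; a minimal sketch, assuming a Python 3 `pip` on the PATH (`conda` users would install into their environment instead):\n\n```bash\npip install --user -r requirements.txt\n```\n\n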
Also, if not using `conda`, add this line to your `.bashrc` to allow executing binaries installed with `pip --user`:\n\n```bash\nexport PATH=${PATH}:$HOME/.local/bin\n```\n\n# Unit Testing:\n\nAll unit test files must fulfill the following requirements (see the examples for more details):\n\n* They have to be in the `dummypackage_utest` directory\n* Their name has to end in `_test.py` (although this is customizable)\n* They import functionality from the package like this: `from dummypackage.foo_module import Foo`\n* They import functionality from each other like this: `from .foo_test import FooTestCaseCpu`\n* They import `unittest` and extend `unittest.TestCase` classes\n* All test method names from the extended unittest classes begin with `test` (also customizable)\n* Nested directories are allowed. However, **each test directory has to contain an empty `__init__.py` file**\n\nIf these conditions are met, all the tests can be run from the CLI or within Python.\n\n**Note**: the `dummypackage_utest` directory includes a special test case, `tautology.py`, which should always pass and can be used to ensure that the testing facilities work correctly. Its name doesn't end in `_test.py`, so it doesn't get included in the ordinary tests and has to be called explicitly.\n\n### Run from CLI:\n\n```bash\n# run all existing tests\npython -m unittest discover -s dummypackage_utest -t . 
-p \"*_test.py\" -v\n# Run individual test modules (from the repo root dir)\npython -m unittest dummypackage_utest/foo_test.py -v\npython -m unittest dummypackage_utest/bar_test.py -v\npython -m unittest dummypackage_utest/nestedtests/nested_test.py -v\n# Run individual classes\npython -m unittest dummypackage_utest.foo_test.FooTestCaseCpu -v\npython -m unittest dummypackage_utest.nestedtests.nested_test.QuackTestCaseCpu -v\n# Run individual methods\npython -m unittest dummypackage_utest.bar_test.BarTestCaseCpu.test_inheritance -v\n```\n\n### Run within Python:\n\n```python\n\n# run all existing tests and collect results\nimport dummypackage_utest\nresults = dummypackage_utest.run_all_tests()\nprint(results)\n\n# run all tests for a given module and collect results\nimport dummypackage_utest.foo_test\nresults = dummypackage_utest.run_module(dummypackage_utest.foo_test)\nprint(results)\n\n# run for several modules:\nimport dummypackage_utest\nimport dummypackage_utest.foo_test as f\nimport dummypackage_utest.nestedtests.nested_test as n\nresults = dummypackage_utest.run_modules([f, n])\nprint(results)\n\n# run for a single testcase\nimport dummypackage_utest\nfrom dummypackage_utest.foo_test import FooTestCaseCpu\nresults = dummypackage_utest.run_testcase(FooTestCaseCpu)\nprint(results)\n\n# run for several testcases\nimport dummypackage_utest\nfrom dummypackage_utest.foo_test import FooTestCaseCpu\nfrom dummypackage_utest.bar_test import BarTestCaseCpu\nresults = dummypackage_utest.run_testcases([FooTestCaseCpu, BarTestCaseCpu])\nprint(results)\n\n# run for a single method\nimport dummypackage_utest\nresults = dummypackage_utest.run_testmethod(\"dummypackage_utest.foo_test.FooTestCaseCpu.test_loop\")\nprint(results)\n\n# run for several methods\nimport dummypackage_utest\nresults = dummypackage_utest.run_testmethods([\"dummypackage_utest.foo_test.FooTestCaseCpu.test_loop\",\n                                             \"dummypackage_utest.bar_test.BarTestCaseCpu.test_loop\"])\nprint(results)\n```\n\n\n# Static 
Typing:\n\nPython 3 allows you to statically annotate the type of a variable with so-called [type hints](https://mypy.readthedocs.io/en/latest/cheat_sheet_py3.html), e.g.:\n\n```python\nfrom typing import Dict\ntest: str = \"hello\"\nd: Dict[int, int] = {1: 2, 3: 4}\ndef test_fn(a: str, b: int) -> str:\n    return a + str(b)\n```\n\nBut the interpreter itself won't check for static errors. For that, the `mypy` package can be used as follows:\n\n```bash\npython -m mypy dummyapp.py\npython -m mypy dummypackage\npython -m mypy dummypackage_utest\n```\n\nThis will return all errors found **on annotated code** within `dummypackage` (type errors in non-annotated functions will be ignored!). Programmatic use within Python is probably not supported or encouraged, in the words of the ex-BDFL himself: https://github.com/python/mypy/issues/2369#issuecomment-256984061 but if you are reading this and know an easy way, please do let us know.\n\n\n# Code Coverage:\n\nCode coverage determines how much of the implemented code the unit tests go through. Although not particularly sound, it can be used as a way to check that the unit testing relates closely to the developed code. Plus, it is easy to automate.\n\n### From CLI:\n\nThe usage is very similar to running with `python`, but prepending `coverage run` instead, and adding a `--source` directory, which contains the actual code pool being inspected:\n\n```bash\n# define the location for the coverage file:\nexport COVERAGE_FILE=codecoverage/coverage`date \"+%Y%m%d_%H%M%S\"`\n\n# Run whatever you want to run but prepending 'coverage run'\n# the --branch flag activates branch coverage (as opposed to statement coverage)\n# the -a flag will accumulate the reports\ncoverage run --source dummypackage --branch -m unittest discover -s dummypackage_utest -t . 
-p \"*_test.py\" -v\n\n# Print report on terminal\ncoverage report -m\n\n# more elaborate XML report:\ncoverage xml -o $COVERAGE_FILE.xml\n\n# interactive HTML report\ncoverage html -d $COVERAGE_FILE\\_html\nfirefox $COVERAGE_FILE\\_html/index.html\n```\n\n### Within Python:\n\nThis sample script performs unit testing AND test coverage without creating any files (see the script `ci_scripts/utest_with_coverage.py`):\n\n```python\n\nimport coverage\nimport dummypackage_utest\n\n# same as with the CLI. A suffix will be automatically added\nCOVERAGE_FILE = \"codecoverage/coverage\"\n\n# wrap the testing with the coverage analyzer:\nc = coverage.Coverage(data_file=COVERAGE_FILE, data_suffix=True, branch=True,\n                      source=[\"dummypackage\"])\nc.start()\ntest_results = dummypackage_utest.run_all_tests()\nc.stop()\n\n# at this point c.save() and c.html_report(outfile=etc) would generate files.\n# This instead handles the data within Python, as coverage.CoverageData\npercentage = c.report()\n\nprint(\"This script ran\", test_results.testsRun, \"tests in total.\")\nprint(\"No. of test errors:\", len(test_results.errors))\nprint(\"No. 
of test failures:\", len(test_results.failures))\nprint(\"Code coverage of tests (percentage):\", percentage)\n```\n\n\n\n\n\n# CPU Runtime Benchmarking:\n\n\n### Run and print on terminal, sorted by total time:\n\n```bash\npython -m cProfile -s tottime dummyapp.py\n```\n\nIt can be sorted by any of these:\n```\ncalls (call count)\ncumulative (cumulative time)\ncumtime (cumulative time)\nfile (file name)\nfilename (file name)\nmodule (file name)\nncalls (call count)\npcalls (primitive call count)\nline (line number)\nname (function name)\nnfl (name/file/line)\nstdname (standard name)\ntime (internal time)\ntottime (internal time)\n```\n\nSample output:\n\n```\n ncalls tottime percall cumtime percall filename:lineno(function)\n 100 1.496 0.015 1.496 0.015 {method 'index' of 'list' objects}\n 1 0.023 0.023 0.023 0.023 bar_module.py:15(__init__)\n 93/1 0.016 0.000 1.590 1.590 {built-in method builtins.exec}\n 1 0.010 0.010 1.590 1.590 dummyapp.py:6(<module>)\n 47 0.005 0.000 0.005 0.000 {built-in method marshal.loads}\n 229/227 0.004 0.000 0.011 0.000 {built-in method builtins.__build_class__}\n```\n\n\n### Save to file and explore interactively with web browser using `snakeviz`:\n\n```bash\npython -m cProfile -o timebenchmark/dummyapp.`date \"+%Y%m%d_%H%M%S\"` dummyapp.py\n# open results in browser using the snakeviz server\nsnakeviz timebenchmark/dummyapp.20190112_195256\n```\n\n### Within Python:\n\nThis small snippet prints the results to the terminal and returns some relevant figures (see the script `ci_scripts/runtime_benchmark.py`):\n\n```python\nimport cProfile\nimport pstats\nfrom io import StringIO\n\ndef get_total_time_and_calls(fn, sort_by=\"time\"):\n    \"\"\"\n    This function calls the given parameterless functor while benchmarking\n    it via cProfile. A report is printed to the terminal and the tuple\n    (n_seconds, n_calls) is returned.\n    \"\"\"\n    pr = cProfile.Profile()\n    pr.enable()\n    fn()\n    pr.disable()\n    #\n    s = StringIO()\n    ps = pstats.Stats(pr, stream=s)\n    ps.strip_dirs().sort_stats(sort_by)\n    ps.print_stats()\n    print(s.getvalue())\n    #\n    return ps.total_tt, ps.total_calls\n\nget_total_time_and_calls(lambda: 2 + 2)\n```\n\n\n# Memory Benchmarking:\n\nThe [memory-profiler](https://pypi.org/project/memory-profiler/) package reports memory usage of the Python process, line-by-line or as a function of time. As with the runtime profiler, there are several ways to run this functionality.\n\n\n### Decorators:\n\nThis method requires importing and using the `@profile` decorator in the desired `def` declarations of your Python files. Unlike the other methods, it has the drawback of requiring you to alter or duplicate the source code. To perform the profiling, run the decorated files (see e.g. the `memorybenchmark` directory) with the Python interpreter. Sample output:\n\n\n```\nFilename: memorybenchmark/loop_benchmark.py\nLine Mem usage Increment Line Contents\n================================================\n 21 14.6 MiB 14.6 MiB @profile # decorator for (https://pypi.org/project/memory-profiler/)\n 22 def do_loop(clss, memsize, loopsize):\n 23 \"\"\"\n 24 Create a Foo instance and loop it.\n 25 \"\"\"\n 26 14.6 MiB 0.0 MiB x = clss(memsize)\n 27 14.6 MiB 0.0 MiB x.loop(loopsize)\n\n 21 14.6 MiB 14.6 MiB @profile # decorator for (https://pypi.org/project/memory-profiler/)\n 22 def do_loop(clss, memsize, loopsize):\n 23 \"\"\"\n 24 Create a Foo instance and loop it.\n 25 \"\"\"\n 26 53.3 MiB 38.7 MiB x = clss(memsize)\n 27 53.3 MiB 0.0 MiB x.loop(loopsize)\n```\n\n\n\n### Time-based memory usage:\n\nThe module provides the `mprof` binary to perform time-based analysis. 
Calling `mprof run loop_benchmark.py` will generate a `mprofile_TIMESTAMP.dat` file, which can be visualized with `mprof plot` (if pylab/matplotlib is installed) and looks like this:\n\n```\nFUNC __main__.do_loop 14.4766 1547322141.5961 14.4766 1547322141.5963\nFUNC __main__.do_loop 14.4766 1547322141.5964 14.8281 1547322143.1454\nCMDLINE /usr/bin/python dummyapp.py\nMEM 1.511719 1547322141.5060\nMEM 31.281250 1547322141.6064\nMEM 53.195312 1547322141.7067\nMEM 53.195312 1547322141.8071\nMEM 53.195312 1547322141.9074\nMEM 53.195312 1547322142.0078\nMEM 53.195312 1547322142.1081\nMEM 53.195312 1547322142.2085\n```\n\nThe available `mprof` commands are:\n```bash\nmprof run: running an executable, recording memory usage\nmprof plot: plotting one of the recorded memory usage files (by default, the last one)\nmprof list: listing all recorded memory usage files in a user-friendly way.\nmprof clean: removing all recorded memory usage files.\nmprof rm: removing specific recorded memory usage files\n```\n\nFurther information can be found on the package homepage.\n\n### Within Python:\n\n\nThis small snippet returns memory usage as a function of time (see the script `ci_scripts/runtime_benchmark.py`). Line-by-line analysis using the `LineProfiler` class doesn't seem to be supported by this method; see the first and second approaches for alternatives:\n\n```python\nfrom memory_profiler import memory_usage\n\ndef fn(times=1000, message=\"hello!\"):\n    l = []\n    for i in range(times):\n        l.append(message)\n    for i in range(times**2):\n        l.append(message)\n\nusage = memory_usage((fn, (1000, \"goodbye\")), interval=0.05)\nprint(usage)\n```\n\n\n# Codestyle Check:\n\nUsing `flake8` is pretty straightforward: simply call `flake8` or `python -m flake8` in the repository root directory. If any errors are present, the command will print them and exit with an error status.\n\nNote that some \"errors\" are actually design decisions. 
These can be bypassed with the `noqa` directive, like this: `import environment # noqa: F401`. The `noqa` directives themselves can be ignored by running `flake8 --disable-noqa`.\n\n\n\n\n# Autodoc:\n\nThis project provides a script that generates the package's autodocs as HTML and PDF from scratch, using `sphinx` (and `LaTeX` for the PDF). Usage example:\n\n```bash\npython ci_scripts/make_sphinx_docs.py -n dummypackage -a \"Dummy Dumson\" -f .bumpversion.cfg -o docs -l\n```\n\nThe docs will be generated into `docs/_build` (they are also uploaded to the repository, as they aren't filtered by `.gitignore`).\n\n\nOptionally, you can deploy your docs to https://readthedocs.org/ by synchronizing it with your GitHub account. Importing the repository should be straightforward: the page will automatically find your `conf.py` and generate the docs. The docs homepage of the project should provide a badge like the one at the top of this README and a link to the online docs. Note that the advertisement can be removed in the \"Admin\" tab. This repo's docs are being deployed to https://python3-template.readthedocs.io\n\nBy default, it will provide two versions of the docs: `latest` and `stable`. 
See [this page](https://docs.readthedocs.io/en/stable/versions.html) for more about versioning in readthedocs.\n\n\n### From CLI:\n\nThe provided Python script closely follows what a CLI user would do with something like the following bash script:\n\n\n```bash\n# usage example: ./make_sphinx_docs \"compuglobalhypermeganet\" \"Homer Simpson\"\n# ONLY CALLABLE FROM REPO ROOT\n\nPACKAGE_NAME=$1\nAUTHOR=$2\nVERSION=`grep \"current_version\" .bumpversion.cfg | cut -d'=' -f2 | xargs`\nCONF_PY_PATH=\"docs/conf.py\"\nrm -rf docs/*\n\nsphinx-quickstart -q -p \"$PACKAGE_NAME\" -a \"$AUTHOR\" --makefile --batchfile --ext-autodoc --ext-imgmath --ext-viewcode --ext-githubpages -d version=\"$VERSION\" -d release=\"$VERSION\" docs/\n\n# the following is needed to change the html_theme flag?\nsed -i '/html_theme/d' \"$CONF_PY_PATH\" # remove the html_theme line\nsed -i '1r ci_scripts/sphinx_doc_config.txt' \"$CONF_PY_PATH\" # add the desired config after line 1\necho -e \"\\nlatex_elements = {'extraclassoptions': 'openany,oneside'}\" >> \"$CONF_PY_PATH\" # override latex config at end of file to minimize blank pages\n\n# cleanest way to allow apidoc to edit index.rst without altering conf.py?\nrm docs/index.rst\nsphinx-apidoc -F \"$PACKAGE_NAME\" -o docs/\n\nmake -C docs clean && make -C docs latexpdf && make -C docs html\n```\n\n\n\n\n# Versioning, Build and Deploy:\n\n\n### Tagging:\n\nOnce we reach a given milestone, we can label that specific commit with a version tag (like `v1.0.13`). The basics are discussed here: `https://git-scm.com/book/en/v2/Git-Basics-Tagging`. In particular, **note that tags have to be explicitly included in the push** (`git push --follow-tags`); otherwise, they won't be sent to the server. 
If you want this to be the default behaviour (as assumed in this project), call `git config --global push.followTags true`; after that, every regular `git push` will also push all the valid existing tags.\n\nThere are different ways of handling the tag system, the `git` CLI and `gitpython` being very popular among them. Since we want very specific functionality here, we use the `bump2version` third-party library: it allows you to automatically increment and set the version tag following semantic versioning (see `https://semver.org/`), which is highly encouraged. It works as follows:\n\n* The `.bumpversion.cfg` file holds the current version and the configuration.\n* Calling `bump2version <part>` will automatically update the version in the `.bumpversion.cfg` and other specified files, and add the corresponding tag to the git repo. The `<part>` parameter can be either **major**, **minor**, or **patch**.\n\nSo we just need to start our repo by setting the desired version in the `.bumpversion.cfg` file (usually `0.1.0`), and then commit normally. The process of adding a tag will be something like:\n\n1. Once a milestone is reached, **UPDATE THE `CHANGELOG.md` FILE** with the changes since the last version. To maintain an optimal changelog, see `https://keepachangelog.com`.\n * Ideally, you have kept an `Unreleased` section at the top that you can easily rename to the upcoming release.\n * The main tokens to list are `added, changed, deprecated, removed, fixed, and security`.\n * State clearly and extensively the deprecations and things that will break backward compatibility.\n * Follow semantic versioning for each tag, and add a date to it.\n2. Make sure you have saved all files and committed, to avoid the `Git working directory is not clean` error. Also make sure that all files specified in the `.bumpversion.cfg` file exist (e.g. `docs/conf.py`).\n3. Call `bump2version