{ "info": { "author": "Ross Patterson", "author_email": "me@rpatterson.net", "bugtrack_url": null, "classifiers": [ "Programming Language :: Python", "Topic :: Software Development :: Libraries :: Python Modules" ], "description": ".. -*-doctest-*-\n\n===================\ncollective.funkload\n===================\n\nComplex functional load testing and benchmarking\n\n``collective.funkload`` provides some extensions of `Funkload\n`_, a web performance testing and\nreporting tool. These extensions provide flexible yet simple ways to:\n\n - run benchmarks of multiple test scenarios\n - run these benchmarks against multiple setups\n - generate comparisons between those setups\n\nAll of the console scripts provided by collective.funkload provide a\n\"--help\" option which documents the command and its options.\n\n.. contents:: Table of Contents\n\nThe collective.funkload Workflow\n================================\n\n1. Develop test scenarios in Python eggs as with unittest\n---------------------------------------------------------\n\nFunkload test cases can be generated using the `funkload test recorder\n`_ and then placed in a\nPython egg's \"tests\" package as with normal `unittest\n`_ test cases.\n\nThese test cases should be developed which reflect all of the\napplication's supported usage patterns. Take care to balance\nbetween separating tests by usage scenario (anonymous visitors,\nread access, write access, etc.) and keeping the number of tests\nlow enough to scan results.\n\n2. Benchmark the baseline setup\n-------------------------------\n\nIf there is no single baseline, such as when comparing multiple\nsetups to each other, then the term baseline here might be\nslightly inaccurate. 
The important part, however, is to\nestablish that the test cases successfully cover the usage\nscenarios.\n\nUse the fl-run-bench script provided by collective.funkload, using\n`zope.testing.testrunner\n`_ semantics to\nspecify the tests to run, and the \"--label\" option to specify a label\nindicating the benchmark is the baseline (or some other label if\nbaseline isn't appropriate)::\n\n $ fl-run-bench -s foo.loadtests --label=baseline\n Running zope.testing.testrunner.layer.UnitTests tests:\n Set up zope.testing.testrunner.layer.UnitTests in 0.000 seconds.\n ========================================================================\n Benching FooTestCase.test_FooTest\n ========================================================================\n ...\n Bench status: **SUCCESS**\n Ran # tests with 0 failures and 0 errors in # minutes #.### seconds.\n Tearing down left over layers:\n Tear down zope.testing.testrunner.layer.UnitTests in 0.000 seconds.\n\nUse \"fl-build-report --html\" to build an HTML report from the XML\ngenerated by running the benchmark above::\n\n $ fl-build-report --html FooTest-bench-YYYYMMDDThhmmss.xml\n Creating html report ...done:\n file:///.../test_ReadOnly-YYYYMMDDThhmmss-baseline/index.html\n\nExamine the report details. If the test cases don't sufficiently\ncover the application's supported usage patterns, repeat steps 1\nand 2 until the test cases provide sufficient coverage.\n\n3. Benchmark the other setups\n-----------------------------\n\nIn turn, deploy each of the setups. This procedure will be\ndictated by the application. 
If using different buildout\nconfigurations, for example, deploy each configuration::\n\n $ ...\n\nThen use the same fl-run-bench command as before (or adjusted as\nneeded for the setup) giving a different \"--label\" option\ndesignating the setup::\n\n $ fl-run-bench -s foo.loadtests --label=foo-setup\n Running zope.testing.testrunner.layer.UnitTests tests:\n Set up zope.testing.testrunner.layer.UnitTests in 0.000 seconds.\n ========================================================================\n Benching FooTestCase.test_FooTest\n ========================================================================\n ...\n Bench status: **SUCCESS**\n Ran # tests with 0 failures and 0 errors in # minutes #.### seconds.\n Tearing down left over layers:\n Tear down zope.testing.testrunner.layer.UnitTests in 0.000 seconds.\n\nRepeat this step for each setup.\n\n4. Build the HTML and differential reports and the matrix index\n---------------------------------------------------------------\n\nUse the \"fl-build-label-reports\" command with the \"--x-label\" and\n\"--y-label\" options to automatically build all the HTML reports, the\ndifferential reports based on the labels, and an index matrix to the\nHTML and differential reports. The \"fl-build-label-reports\" script\nwill use a default title and sub-title based on the labels, but these may be\nspecified using the \"--title\" and \"--sub-title\" options. 
Arbitrary\ntext or HTML may also be included on stdin or using the \"--input\"\noption::\n\n $ echo \"When deciding which setup to use...\" | \\\n fl-build-label-reports --x-label=baseline --y-label=foo-setup \\\n --y-label=bar-setup --title=\"Setup Comparison\" \\\n --sub-title=\"Compare setups foo and bar against baseline\"\n Creating html report ...done:\n file:///.../test_ReadOnly-YYYYMMDDThhmmss-baseline/index.html\n Creating html report ...done:\n file:///.../test_ReadOnly-YYYYMMDDThhmmss-foo-label/index.html\n Creating html report ...done:\n file:///.../test_ReadOnly-YYYYMMDDThhmmss-bar-label/index.html\n Creating diff report ...done:\n /.../diff_ReadOnly-YYYYMMDDT_hhmmss-foo-label_vs_hhmmss-baseline/index.html\n Creating diff report ...done:\n /.../diff_ReadOnly-YYYYMMDDT_hhmmss-bar-label_vs_hhmmss-baseline/index.html\n Creating report index ...done:\n file:///.../index.html\n\nBoth the \"--x-label\" and \"--y-label\" options may be given multiple\ntimes or may use `Python regular expressions\n`_ to create an MxN matrix\nof differential reports. See the \"fl-build-label-reports --help\"\ndocumentation for more details.\n\n5. Examine the results using the matrix index\n---------------------------------------------\n\nOpen the index.html generated by the last command to survey the\nHTML reports and differential reports.\n\n6. Repeat as changes are made\n-----------------------------\n\nAs changes are made in your application or setups or to test new\nsetups, repeat steps 3 and 4. When step 4 is repeated by running\n\"fl-build-label-reports\" adjusting the \"--x-label\" and\n\"--y-label\" options as appropriate, new HTML and differential\nreports will be generated for the new load test\nbenchmark results and the matrix index will be updated.\n\nfl-run-bench\n============\n\nThe scripts that Funkload installs generally require that they be\nexecuted from the directory where the test modules live. 
While this\nis appropriate for generating test cases with the Funkload recorder,\nit's often not the desirable behavior when running load test\nbenchmarks. Additionally, the argument handling for the benchmark\nrunner doesn't allow for specifying which tests to benchmark using the\nzope.testing.testrunner semantics, such as specifying modules and\npackages with dotted paths, as one is often wont to do when working\nwith setuptools and eggs.\n\nTo accommodate this usage pattern, the collective.funkload package\nprovides a wrapper around the Funkload benchmark runner that handles\ndotted path arguments gracefully. Specifically, rather than pass\n``*.py`` file and TestCase.test_method arguments, the\n\"fl-run-bench\" provided by collective.funkload supports\nzope.testing.testrunner semantics for finding tests with \"-s\", \"-m\"\nand \"-t\".\n\n >>> from collective.funkload import bench\n >>> bench.run(defaults, (\n ... 'test.py -s foo -t test_foo '\n ... '--cycles 1 --url http://bar.com').split())\n t...\n Benching FooTestCase.test_foo...\n * Server: http://bar.com...\n * Cycles: [1]...\n\nfl-build-label-reports\n======================\n\nThe fl-build-label-reports script builds HTML (fl-build-report --html)\nand differential (fl-build-report --diff) reports for multiple bench\nresults at once based on the bench result labels. Labels are selected\nfor the X and Y axes to be compared against each other using the\n\"--x-label\" and \"--y-label\" options. These options accept the same\nregular expression filters as the zope.testing.testrunner --module and\n--test options and, like those options, may be given multiple times.\n\nThe direction or polarity of the differential reports, which report is\nthe reference and which report is the challenger, is determined by\nsorting the labels involved. This avoids confusion that could occur\nif differential reports of both directions are included in the same\nmatrix, one showing green and the other red. 
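The sort-based polarity rule can be sketched in a few lines (an illustration only; the function name is hypothetical and this is not the package's actual implementation):

```python
# Hypothetical sketch of sort-determined diff polarity: the label that
# sorts first becomes the reference report, the other the challenger.
def diff_polarity(label_a, label_b, reverse=False):
    """Return (reference, challenger) for a differential report."""
    reference, challenger = sorted((label_a, label_b), reverse=reverse)
    return reference, challenger

# "baseline" sorts before "foo-setup", so it is the reference by default;
# reverse=True flips only the polarity, not the axis ordering.
print(diff_polarity("foo-setup", "baseline"))                # ('baseline', 'foo-setup')
print(diff_polarity("foo-setup", "baseline", reverse=True))  # ('foo-setup', 'baseline')
```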
As such, labels should\nbe specified such that their sort order will reflect the desired\ndifferential polarity. The \"--reverse\" option can also be used to\nreverse the sort order for polarity only without affecting the sort\norder used on the axes. In other words, when the polarity of the differentials\nshould be the reverse of the order of the axes, use \"--reverse\".\n\nThe title and sub-title rendered on the matrix index may be specified\nusing the \"--title\" and \"--sub-title\" options. If not specified, a\ndefault title will be used and a sub-title will be generated based on\nthe labels on the X and Y axes. Arbitrary text or HTML may also be\nincluded on stdin or using the \"--input\" option. If provided, it will\nbe rendered beneath the sub-title and above the matrix.\n\nIn the examples below, three different load tests have been run to measure\nread, write, and add performance under Python 2.4, 2.5, and 2.6.\nLabels are used to designate which Python version the load tests have\nbeen run under. 
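The regular-expression label filters described above can be pictured with a small sketch (assumed semantics modeled on zope.testing-style filters, where a leading "!" excludes matching labels; `select_labels` is a hypothetical name, not part of the package):

```python
import re

# Hypothetical sketch of zope.testing-style label filters: plain patterns
# select matching labels, and a leading "!" excludes matches instead.
def select_labels(labels, patterns):
    positive = [p for p in patterns if not p.startswith("!")]
    negative = [p[1:] for p in patterns if p.startswith("!")]
    return [label for label in labels
            if (not positive or any(re.search(p, label) for p in positive))
            and not any(re.search(p, label) for p in negative)]

labels = ["python-2.4", "python-2.5", "python-2.6"]
print(select_labels(labels, ["python-2.4"]))  # ['python-2.4']
print(select_labels(labels, ["!.*-2.5"]))     # ['python-2.4', 'python-2.6']
```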
Thus fl-build-label-reports can be used to quickly\ngenerate reports for evaluating any performance trade-offs\nthe various Python versions might have for the application being\ntested.\n\nStart with some bench result XML files.\n\n >>> import os\n >>> from collective.funkload import testing\n >>> testing.setUpReports(reports_dir)\n >>> sorted(os.listdir(reports_dir), reverse=True)\n ['write-bench-20081211T071242.xml',\n 'write-bench-20081211T071242.log',\n 'read-bench-20081211T071242.xml',\n 'read-bench-20081211T071242.log',\n 'read-bench-20081211T071241.xml',\n 'read-bench-20081211T071241.log',\n 'read-bench-20081210T071243.xml',\n 'read-bench-20081210T071243.log',\n 'read-bench-20081210T071241.xml',\n 'read-bench-20081210T071241.log',\n 'add-bench-20081211T071243.xml',\n 'add-bench-20081211T071243.log',\n 'add-bench-20081211T071242.xml',\n 'add-bench-20081211T071242.log']\n\nThese bench results cover multiple tests and have multiple labels.\nSome labels are applied to bench results for multiple tests.\n\n >>> import pprint\n >>> pprint.pprint(testing.listReports(reports_dir))\n [(u'python-2.4',\n [(u'test_add',\n [(u'2008-12-11T07:12:43.000000',\n Bench(path='add-bench-20081211T071243.xml', diffs={}))]),\n (u'test_read',\n [(u'2008-12-11T07:12:42.000000',\n Bench(path='read-bench-20081211T071242.xml', diffs={})),\n (u'2008-12-10T07:12:43.000000',\n Bench(path='read-bench-20081210T071243.xml', diffs={}))])]),\n (u'python-2.5',\n [(u'test_read',\n [(u'2008-12-10T07:12:41.000000',\n Bench(path='read-bench-20081210T071241.xml', diffs={}))])]),\n (u'python-2.6',\n [(u'test_add',\n [(u'2008-12-11T07:12:42.000000',\n Bench(path='add-bench-20081211T071242.xml', diffs={}))]),\n (u'test_read',\n [(u'2008-12-11T07:12:41.000000',\n Bench(path='read-bench-20081211T071241.xml', diffs={}))])])]\n\nWhen labels are specified for the X or Y axes, HTML reports are\ngenerated for the latest bench result XML file for each combination of\nthe specified label and each 
test for which there are bench results\navailable. Then differential reports are generated between the X and\nY axes forming a grid of reports. Finally, an index.html file is\ngenerated providing clear and easy access to the generated reports.\nGenerate reports and comparisons for python-2.4 vs python-2.6. Also\nspecify the \"--reverse\" option so that the differential polarity will\nbe the reverse of the axes label order.\n\n >>> from collective.funkload import label\n >>> input_ = os.path.join(reports_dir, 'input.html')\n >>> open(input_, 'w').write('foo')\n >>> args = (\n ... '-o %s --x-label python-2.4 --y-label !.*-2.5 --reverse'\n ... % reports_dir).split() + [\n ... '--title', 'Python 2.6 vs Python 2.4',\n ... '--sub-title', 'Comparing Python versions',\n ... '--input', input_]\n >>> options, _ = label.parser.parse_args(args=args)\n >>> label.run(options)\n Creating html report ...done: \n .../reports/test_add-20081211T071242-python-2.6/index.html\n Creating html report ...done: \n .../reports/test_read-20081211T071241-python-2.6/index.html\n Creating html report ...done: \n .../reports/test_add-20081211T071243-python-2.4/index.html\n Creating diff report ...done: \n .../reports/diff_add-20081211T_071242-python-2.6_vs_071243-python-2.4/index.html\n Creating html report ...done: \n .../reports/test_read-20081211T071242-python-2.4/index.html\n Creating diff report ...done: \n .../reports/diff_read-20081211T_071241-python-2.6_vs_071242-python-2.4/index.html\n Creating report index ...done:\n .../reports/index.html\n '.../reports/index.html'\n\nThe report index renders a table with links to the HTML reports on the\nX and Y axes and links to the differential reports in the table cells.\nIn this case there's only one HTML report on the X axis and four\nreports on the Y axis. 
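The bench result filenames above (for example ``read-bench-20081211T071242.xml``) follow a predictable pattern; pulling the pieces back out might look like this (a hypothetical helper for illustration, not part of collective.funkload):

```python
import re
from datetime import datetime

# Bench result files are named "<test>-bench-<YYYYMMDDThhmmss>.xml";
# this hypothetical helper recovers the test name and run timestamp.
BENCH_XML = re.compile(r"^(?P<test>.+)-bench-(?P<stamp>\d{8}T\d{6})\.xml$")

def parse_bench_filename(filename):
    match = BENCH_XML.match(filename)
    if match is None:
        return None
    stamp = datetime.strptime(match.group("stamp"), "%Y%m%dT%H%M%S")
    return match.group("test"), stamp

print(parse_bench_filename("read-bench-20081211T071242.xml"))
```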
Note that report links aren't included in the\ncolumn headers for the X axis to conserve space and avoid duplication.\nWhen using only one label for the X axis, it may be useful to include\nit in the Y axis even though the differential report cells will be\nempty in order to include the links to the non-differential test\nreports for each test.\n\n >>> print open(os.path.join(reports_dir, 'index.html')).read()\n <...\n Python 2.6 vs Python 2.4...\n
 Comparing Python versions...
 foo...
 Label       Test       diff
 python-2.4  test_add
 python-2.4  test_read
 python-2.6  test_add   python-2.6 vs python-2.4
 python-2.6  test_read  python-2.6 vs python-2.4
...\n\nIf no labels are specified for the X or Y axes then all labels are\nselected for both the X and Y axes for a full NxN comparison. Both\nHTML and differential reports are only generated if they haven't been\nalready. IOW, existing reports will be re-used. Reports or results\nwithout labels will be ignored. Since the HTML report contains the\nbench run XML results file, the original is removed and any\ncorresponding log file is moved into the HTML report directory.\n\n >>> open(input_, 'w').write('')\n >>> args = ('-o %s' % reports_dir).split()+['--input', input_]\n >>> options, _ = label.parser.parse_args(args=args)\n >>> label.run(options)\n Creating html report ...done: \n .../reports/test_read-20081210T071241-python-2.5/index.html\n Creating diff report ...done: \n .../reports/diff_read_20081211T071242-python-2.4_vs_20081210T071241-python-2.5/index.html\n Creating diff report ...done: \n .../reports/diff_read_20081210T071241-python-2.5_vs_20081211T071241-python-2.6/index.html\n Creating report index ...done:\n .../reports/index.html\n '.../reports/index.html'\n\n >>> pprint.pprint(sorted(os.listdir(reports_dir), reverse=True))\n ['write-bench-20081211T071242.xml',\n 'write-bench-20081211T071242.log',\n 'test_read-20081211T071242-python-2.4',\n 'test_read-20081211T071241-python-2.6',\n 'test_read-20081210T071241-python-2.5',\n 'test_add-20081211T071243-python-2.4',\n 'test_add-20081211T071242-python-2.6',\n 'read-bench-20081210T071243.xml',\n 'read-bench-20081210T071243.log',\n 'input.html',\n 'index.html',\n 'diff_read_20081211T071242-python-2.4_vs_20081210T071241-python-2.5',\n 'diff_read_20081210T071241-python-2.5_vs_20081211T071241-python-2.6',\n 'diff_read-20081211T_071242-python-2.4_vs_071241-python-2.6',\n 'diff_read-20081211T_071241-python-2.6_vs_071242-python-2.4',\n 'diff_add-20081211T_071243-python-2.4_vs_071242-python-2.6',\n 'diff_add-20081211T_071242-python-2.6_vs_071243-python-2.4']\n >>> os.path.isfile(os.path.join(\n ... 
reports_dir, 'test_read-20081211T071242-python-2.4',\n ... 'funkload.log'))\n True\n >>> os.path.isfile(os.path.join(\n ... reports_dir, 'test_read-20081211T071242-python-2.4',\n ... 'funkload.xml'))\n True\n \nThe HTML report index will be updated to reflect the newly included\nresults and reports.\n\n >>> print open(os.path.join(reports_dir, 'index.html')).read()\n <...\n \n collective.funkload label matrix report\n ...\n
 python-2.4, python-2.5, python-2.6 vs python-2.4, python-2.5, python-2.6
 Label       Test       diffs
 python-2.4  test_add   python-2.4 vs python-2.6
 python-2.4  test_read  python-2.4 vs python-2.5, python-2.4 vs python-2.6
 python-2.5  test_read  python-2.4 vs python-2.5, python-2.5 vs python-2.6
 python-2.6  test_add   python-2.4 vs python-2.6
 python-2.6  test_read  python-2.4 vs python-2.6, python-2.5 vs python-2.6
...\n\nThe \"fl-list\" script prints out the labeled XML bench result files,\nHTML report directories, and differential report directories that meet\nthe criteria of the given options. The \"--old\" option lists\neverything for which there is an equivalent with a newer time stamp.\n\n >>> from collective.funkload import report\n >>> options, _ = report.list_parser.parse_args(\n ... args=('-o %s --old' % reports_dir).split())\n >>> list(report.run(**options.__dict__))\n ['read-bench-20081210T071243.xml']\n\nChangelog\n=========\n\n0.4 (2014-06-20)\n----------------\n\n* make flake8 happy again\n [giacomos]\n\n* add method for creating dexterity content types\n [giacomos]\n\n* Fix label.py and tests.py not working with current zope.testing\n [joka]\n\n* Fix imports. zope.testing.testrunner now lives in the zope.testrunner\n package.\n [nueces]\n\n0.3 - 2010-05-04\n----------------\n\n* add new TestCase class: PloneFLTestCase with two helper methods:\n plone_login and addContent. Please check collective.recipe.funkload\n for example usage\n [amleczko]\n\n0.2.1 - 2010-04-16\n------------------\n\n* fix small typo in recorder\n [amleczko]\n\n0.2 - 2010-04-16\n----------------\n\n* Add custom version of RecorderProgram to make usage\n of our custom Script tpl\n [amleczko]\n\n0.1.1 - 2009-08-09\n------------------\n\n* Only run funkload tests when invoking bench with -m, -s, -t\n [evilbungle]\n\n0.1 - 2009-08-09\n----------------\n\n* Initial release, mainly a snapshot from trunk to complement the release of\n collective.recipe.funkload", "description_content_type": null, "docs_url": null, "download_url": "UNKNOWN", "downloads": { "last_day": -1, "last_month": -1, "last_week": -1 }, "home_page": "https://github.com/collective/collective.funkload", "keywords": "", "license": "GPL", "maintainer": null, "maintainer_email": null, "name": "collective.funkload", "package_url": "https://pypi.org/project/collective.funkload/", "platform": "UNKNOWN", "project_url": 
"https://pypi.org/project/collective.funkload/", "project_urls": { "Download": "UNKNOWN", "Homepage": "https://github.com/collective/collective.funkload" }, "release_url": "https://pypi.org/project/collective.funkload/0.4/", "requires_dist": null, "requires_python": null, "summary": "Zope and Plone focussed extensions to funkload", "version": "0.4" }, "last_serial": 1131720, "releases": { "0.1": [ { "comment_text": "", "digests": { "md5": "5e0d5d291adad1fc88366e7787640e01", "sha256": "712fcf593df426b01e8dd8a8e569bebbc945336fbfbc20f7eadc89034786cd04" }, "downloads": -1, "filename": "collective.funkload-0.1.zip", "has_sig": false, "md5_digest": "5e0d5d291adad1fc88366e7787640e01", "packagetype": "sdist", "python_version": "source", "requires_python": null, "size": 18676, "upload_time": "2009-08-09T13:00:49", "url": "https://files.pythonhosted.org/packages/da/63/187adc7f6232727eb7b5613500c5349fd670ccd0d49c36244e82ef63e391/collective.funkload-0.1.zip" } ], "0.1.1": [ { "comment_text": "", "digests": { "md5": "55a7a74ece7760b2868a9875035570d7", "sha256": "17a36d3adba6def5b148fcbb734803692a490ab43e9d7ac81c86c843cb002e49" }, "downloads": -1, "filename": "collective.funkload-0.1.1.zip", "has_sig": false, "md5_digest": "55a7a74ece7760b2868a9875035570d7", "packagetype": "sdist", "python_version": "source", "requires_python": null, "size": 18983, "upload_time": "2009-08-09T13:32:51", "url": "https://files.pythonhosted.org/packages/3b/a5/55838473a4693d1b20fb7f5996de57409cc16449c467273e1536c9d4e494/collective.funkload-0.1.1.zip" } ], "0.3": [ { "comment_text": "", "digests": { "md5": "02b57a729b3dca6dc558d5e17e02d349", "sha256": "abfa9099f8090cd5c6d4a300eafa7e10fd6bd1cacdbc79ba3e42d8e498f2e65d" }, "downloads": -1, "filename": "collective.funkload-0.3.tar.gz", "has_sig": false, "md5_digest": "02b57a729b3dca6dc558d5e17e02d349", "packagetype": "sdist", "python_version": "source", "requires_python": null, "size": 32285, "upload_time": "2010-05-04T15:08:41", "url": 
"https://files.pythonhosted.org/packages/43/31/bc587570e82fb15b57384133d452fdbcdb7ac80a2c1ccb4b04b982ba77f4/collective.funkload-0.3.tar.gz" } ], "0.4": [ { "comment_text": "", "digests": { "md5": "6fb59551a2ed66e4477aac80c5180bb1", "sha256": "a277f7d2dd007e7868473ef916b96dc7e5150342f3cb8eaaa396f697f8b2e14a" }, "downloads": -1, "filename": "collective.funkload-0.4.zip", "has_sig": false, "md5_digest": "6fb59551a2ed66e4477aac80c5180bb1", "packagetype": "sdist", "python_version": "source", "requires_python": null, "size": 45896, "upload_time": "2014-06-20T11:58:55", "url": "https://files.pythonhosted.org/packages/3b/5b/6363d70407f771d670ec2a59359dcaed6fe4c6b825d422105c300ec7cf47/collective.funkload-0.4.zip" } ] }, "urls": [ { "comment_text": "", "digests": { "md5": "6fb59551a2ed66e4477aac80c5180bb1", "sha256": "a277f7d2dd007e7868473ef916b96dc7e5150342f3cb8eaaa396f697f8b2e14a" }, "downloads": -1, "filename": "collective.funkload-0.4.zip", "has_sig": false, "md5_digest": "6fb59551a2ed66e4477aac80c5180bb1", "packagetype": "sdist", "python_version": "source", "requires_python": null, "size": 45896, "upload_time": "2014-06-20T11:58:55", "url": "https://files.pythonhosted.org/packages/3b/5b/6363d70407f771d670ec2a59359dcaed6fe4c6b825d422105c300ec7cf47/collective.funkload-0.4.zip" } ] }