{ "info": { "author": "Julian Coyne and Martin Aspeli", "author_email": "optilude@gmail.com", "bugtrack_url": null, "classifiers": [], "description": "The Bench Master 3000\r\n======================\r\n\r\nThe purpose of this package is to help you run huge, parallelised load tests\r\nusing `FunkLoad`_ across multiple servers and/or processors. If you require\r\nonly moderately grunty load tests you will probably be better off using\r\nFunkLoad on its own.\r\n\r\nThis document may still provide some helpful tips and insights, even if you\r\nend up using \"plain\" FunkLoad, but please read the FunkLoad documentation as\r\nwell. It is a very powerful and flexible tool. You certainly need to be\r\nfamiliar with its various scripts and services to get the most out of Bench\r\nMaster.\r\n\r\n.. contents:: Table of contents\r\n\r\nTheory of operation\r\n-------------------\r\n\r\nBench Master facilities the deployment of load tests to a number of servers\r\n(or a number of aliases for a single server, thus allowing better use of\r\nmulti-core or multi-processor systems), each with its own load test runner.\r\nWhen all load test runners have completed their benchmarks, the results are\r\ncollated back to the master, which will produce a consolidated report.\r\n\r\nThere is exactly one \"bench master\", and one or more \"bench nodes\". The master\r\ncommunicates with the nodes over SSH, using the `pssh`_ library to parallelise\r\noperations. The nodes can be different physical machines, although it is\r\npossible to use the same machine more than once (in which case multiple\r\nconcurrent SSH connections will be made). It is also possible for the same\r\nmachine - even the same installation of Bench Master - to serve as master and\r\none or more nodes. (In this case, the master will in effect establish one or\r\nmore SSH connections to ``localhost``.)\r\n\r\nLoad tests are created/recorded using FunkLoad as normal. 
This will result in\r\ntwo files, which must be kept in the same directory:\r\n\r\n* ``<TestName>.conf``, which is used to configure the load test runner,\r\n the FunkLoad monitoring server, etc.\r\n* ``test_<TestName>.py``, which contains the actual test as a subclass of\r\n ``FunkLoadTestCase``, with a single test method ``test_<TestName>``.\r\n\r\nThere is one extension available when using Bench Master: You can get the node\r\nnumber (an integer greater than or equal to 0, represented in the config as a\r\nstring) by doing::\r\n\r\n node_num = self.conf_get('bench', 'node_num', '0')\r\n\r\nThis is useful for disambiguating the nodes: Since all nodes execute the\r\ntest run in parallel, you need to be careful that two tests don't lay claim\r\nto the same resource (say, a login name) at the same time.\r\n\r\nNote that although it is possible to put FunkLoad tests in a package, FunkLoad\r\nonly really considers modules. It is perfectly feasible to dump your test\r\nand .conf file pair into any directory, even without an ``__init__.py``.\r\n\r\nPre-requisites\r\n--------------\r\n\r\nFirst, some requirements:\r\n\r\n* Bench Master only works on Unix-type operating systems.\r\n* You need at least FunkLoad 1.12 and PSSH 2.1.1 installed. These will get\r\n installed as dependencies of the ``benchmaster`` package.\r\n* You must have ``ssh`` and ``scp`` installed on the bench master.\r\n* You must also have ``gnuplot`` installed on the bench master, so that it can\r\n generate its reports. It is normally best to install this using your\r\n operating system's package manager.\r\n* All bench nodes must have the SSH daemon running, accepting connections from\r\n the bench master.\r\n\r\n **Hint:** The SSH daemon on your bench node(s) may accept a limited\r\n number of inbound connections. 
If you are using many \"virtual\" nodes, you\r\nneed to be careful that the later connections will not be rejected.\r\nIf you are using OpenSSH, you should set the ``MaxStartups`` and\r\n``MaxSessions`` options in the ``sshd_config`` file to a sufficiently high\r\nnumber.\r\n\r\n* The bench master must be able to SSH to the node without entering a password\r\n or being prompted to accept the remote host's signature. In practice, that\r\n means that the user the bench master is running as must have its public key\r\n listed in the ``authorized_keys`` file of the user on the remote bench\r\n node(s), and that each remote node's signature must have been recorded in\r\n the ``known_hosts`` file on the bench master.\r\n* All bench nodes should be set up identically. In particular, the installation\r\n and working directories for the bench node runner should be in the same\r\n path on all hosts.\r\n\r\nOptionally, you can also configure the SSH daemon on the bench node(s) to\r\naccept various environment variables that will be sent by the bench master.\r\nThese are not required for normal operation, but can be useful in writing\r\nand debugging load tests. The variables are:\r\n\r\n``PSSH_NODENUM``\r\n The node's number. The first node will have number 0.\r\n``PSSH_BENCH_RUN``\r\n The name of the current bench run. You can set this when invoking the\r\n bench master. It defaults to a unique name based on the current date\r\n and time and the test name.\r\n``PSSH_BENCH_WORKING_DIR``\r\n The top level bench node working directory.\r\n\r\nTo use these, you need to add the following to the ``sshd_config`` file on\r\neach node::\r\n\r\n AcceptEnv PSSH_*\r\n\r\nInstallation\r\n------------\r\n\r\nBench Master can be installed using ``zc.buildout``, ``easy_install`` or\r\n``pip``. 
It will install two console scripts, ``bench-master`` and\r\n``bench-node``, which control its execution.\r\n\r\nHere is a ``buildout.cfg`` file that will create an isolated environment\r\nthat can be used as a bench master and/or node::\r\n \r\n [buildout]\r\n parts = bench-tools\r\n versions = versions\r\n \r\n # Note: Change versions as required or remove versions block to get\r\n # latest versions always\r\n \r\n [versions]\r\n benchmaster = 1.0b1\r\n funkload = 1.12.0\r\n pssh = 2.1.1\r\n \r\n [bench-tools]\r\n recipe = zc.recipe.egg\r\n eggs = \r\n funkload\r\n benchmaster\r\n\r\n**Note:** It is good practice to \"pin\" versions as shown above. However, you\r\nshould check which versions are appropriate at the time you read this.\r\nAlternatively, you can skip the ``[versions]`` block entirely to get the\r\nlatest versions of everything, but bear in mind that this could cause\r\nproblems in the future if an incompatible version of some dependency is\r\nreleased to PyPI.\r\n\r\nTo keep things simple, you are recommended to create identical installations\r\non all servers involved in the bench run. For example, you could put the\r\n``buildout.cfg`` file above in ``/opt/bench``, and then run::\r\n\r\n $ buildout bootstrap --distribute # or download a bootstrap.py and run that\r\n $ bin/buildout\r\n\r\n**Hint:** You should ensure that all bench node installations are owned by\r\nan appropriate user - usually the one that the bench master will use to log\r\nin remotely over SSH.\r\n\r\nYou should now have the scripts ``bench-master``, ``bench-node`` and the\r\nvarious ``fl-*`` scripts from FunkLoad in the ``bin`` directory.\r\n\r\nIf you prefer to use ``pip`` or ``easy_install``, you can just install the\r\n``benchmaster`` egg. 
You are recommended to do so in a `virtualenv`_, however::\r\n\r\n $ virtualenv --no-site-packages bench\r\n $ cd bench\r\n $ bin/easy_install benchmaster\r\n\r\nRecording tests\r\n---------------\r\n\r\nTo record tests, you can use the ``fl-record`` script that's installed with\r\nFunkLoad. To use it, you also need to install `TCPWatch`_. The ``tcpwatch``\r\nbinary needs to be in the system path so that ``fl-record`` can find it.\r\nAlternatively, you can set the ``TCPWATCH`` environment variable to point to\r\nthe binary itself.\r\n\r\nFor example, you can use a buildout like this::\r\n\r\n [buildout]\r\n parts = bench-tools\r\n versions = versions\r\n \r\n # Note: Change versions as required or remove versions block to get\r\n # latest versions always\r\n \r\n [versions]\r\n benchmaster = 1.0b1\r\n funkload = 1.12.0\r\n pssh = 2.1.1\r\n tcpwatch = 1.3.1\r\n \r\n [bench-tools]\r\n recipe = zc.recipe.egg:scripts\r\n eggs = \r\n docutils\r\n funkload\r\n tcpwatch\r\n benchmaster\r\n initialization =\r\n import os\r\n os.environ['TCPWATCH'] = \"${buildout:bin-directory}/tcpwatch\"\r\n\r\nOnce TCPWatch is installed, you can start the recorder with::\r\n\r\n $ bin/fl-record TestName\r\n\r\n**Note:** You should use \"CamelCase\" for the test name. This ensures the\r\ngenerated code follows the conventions expected by the ``bench-master``\r\nprogram. If you use another naming convention, you may have to specify the\r\ntest name as an optional command line argument to the ``bench-master`` command\r\n- see below.\r\n\r\nNow open a web browser, and configure it to use an HTTP proxy of ``127.0.0.1``\r\nport ``8090`` (see the ``fl-record`` output and documentation for details).\r\n\r\nAt this point, you can visit the website you want to test and perform the\r\nactions you want to load test. 
It helps to plan your steps in advance, since\r\n\"stray\" clicks or reloads can make the test less repeatable or useful.\r\n\r\nWhen you're done, go back to the terminal running ``fl-record`` and press\r\n``Ctrl+C``. It should generate two files: ``test_TestName.py`` and\r\n``TestName.conf``.\r\n\r\nYou may want to move these out of the current directory, e.g.::\r\n\r\n $ mkdir bench-tests\r\n $ mv test_* bench-tests/\r\n $ mv *.conf bench-tests/\r\n \r\nAt this point, you should edit the two generated files to make sure they will\r\nbe valid and useful as load tests. The ``.conf`` file contains comments where\r\nyou may want to fill in details, such as a more descriptive name.\r\n\r\nYou also need to consider what will happen if your test case is run over and\r\nover, in parallel, from multiple bench nodes. If it is simply fetching a few\r\npages, you will probably be fine with the recorded tests. However, if you are\r\ndoing anything user-specific (such as logging in or submitting details)\r\nyou need to ensure that the parallel tests will not interfere with one\r\nanother.\r\n\r\nTo disambiguate between tests, you can use the ``node_num`` configuration\r\nvariable. You can get this in a ``FunkLoadTestCase`` with::\r\n\r\n node_num = self.conf_get('bench', 'node_num', '0')\r\n\r\nNote also that FunkLoad comes with a credentials server (see the\r\n``fl-credential-ctl`` script), which you can use to maintain a single database\r\nof credentials across multiple threads and multiple test runs.\r\n\r\nExecuting tests\r\n---------------\r\n\r\nYou can use the ``fl-run-test`` script to execute all the tests in a given\r\nmodule. This is not so useful for load testing, but very useful when writing\r\nand debugging tests. It can also serve as a simple functional testing tool::\r\n\r\n $ bin/fl-run-test test_TestName.py\r\n\r\nThis runs the test in the given module. Note that the test and the\r\ncorresponding ``.conf`` file need to be in the current directory. 
If you've\r\nmoved the tests to a dedicated directory, e.g. ``bench-tests/`` as suggested\r\nabove, you can do::\r\n\r\n $ cd bench-tests\r\n $ ../bin/fl-run-test test_TestName.py\r\n\r\nSee the `FunkLoad`_ documentation for more details.\r\n\r\nTo run a simple bench test on one node, you can use the ``fl-run-bench``\r\nscript. (This is ultimately what the Bench Master does on each node)::\r\n\r\n $ bin/fl-run-bench test_TestName.py TestName.test_TestName\r\n\r\nNote that we have to specify the test case class (``TestName``) and method\r\n(``test_TestName``), since the bench runner will only run a single test, even\r\nif you've put more than one test in the module.\r\n\r\nAt this point, it's useful to start understanding the parameters that control\r\nthe bench runner.\r\n\r\nCycles\r\n The bench runner will run multiple cycles of the given test. Each cycle is\r\n run at a particular level of concurrency, which translates to a number of\r\n threads being used in that cycle. This is effectively a simulation of\r\n having multiple concurrent users executing the load test scenario.\r\n \r\n You can specify cycles with the ``-c`` command line argument. For example,\r\n ``-c 10:20:30`` will execute three cycles with 10, 20 and 30 threads,\r\n respectively. The default cycles are configured in the ``.conf`` file for\r\n the load test.\r\n \r\n To get useful load tests, you typically want to run multiple cycles with\r\n a fixed step increase. This makes the outputs from the test easier to\r\n compare.\r\nDuration\r\n The bench runner will run each cycle for a fixed duration, given in\r\n seconds. Again, the default is found in the ``.conf`` file, but can be\r\n overridden with the ``-D`` command line argument, e.g. ``-D 300`` to run\r\n each cycle for 5 minutes.\r\n \r\n Each thread in a given cycle will try to execute the test as many times as\r\n it can until the test comes to an end (i.e. ``duration`` seconds have\r\n passed). 
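To get a feel for how these two parameters interact, you can roughly estimate how many complete test executions one cycle will produce. The sketch below is only an approximation: it assumes a uniform average test time (a hypothetical figure you would measure yourself, e.g. by timing ``fl-run-test``) and ignores sleep times and thread start-up delays.

```python
# Rough model of one bench cycle: each thread repeats the test for the whole
# cycle duration, and only *complete* executions count towards the results.
def estimated_tests_per_cycle(threads: int, duration_s: float,
                              avg_test_s: float) -> int:
    # Complete executions a single thread can fit into the cycle duration.
    per_thread = int(duration_s // avg_test_s)
    return threads * per_thread

# e.g. 10 threads, a 300-second cycle, and a (hypothetical) 2.5s test:
print(estimated_tests_per_cycle(10, 300, 2.5))  # -> 1200
```

In practice the real figure will be lower, since sleep times and server slowdown under load stretch each execution.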
Incomplete tests are aborted, and not counted towards the load\r\n test results.\r\n\r\nIn addition, you can set a min and max sleep time between requests (the actual\r\nsleep time is a random value between the two), a sleep time between test\r\nexecutions, and a thread startup delay. These help simulate some degree of\r\nrandomness and delays, and can sometimes be a useful way to allow the server\r\nto reach a \"steady state\" before the real tests kick in. See the output of the\r\n``-h`` option to the ``fl-run-bench`` command, and the comments in the\r\n``.conf`` file, as well as the FunkLoad documentation for details.\r\n\r\n **Hint:** Experiment with different duration and concurrency settings.\r\n Since each thread will execute the test repeatedly within the test\r\n duration, you can increase the number of tests being executed by\r\n increasing either parameter. High concurrency with a low duration\r\n simulates short bursts of high load. Longer duration simulates sustained\r\n load. The latter is often more revealing, since a short duration may\r\n cause FunkLoad to omit tests that hadn't completed within the duration,\r\n and which would in fact never have completed due to a gateway timeout\r\n on the server.\r\n\r\nOnce you have tested that your bench test works in \"plain\" FunkLoad, you can\r\ndeploy it using Bench Master.\r\n\r\nTo illustrate such a deployment, we'll assume that:\r\n\r\n* The pre-requisites above are installed.\r\n* You have installed Bench Master on two machines, ``192.168.0.10`` and\r\n ``192.168.0.11``, using a buildout like the one suggested above, in\r\n ``/opt/bench``. Hence, the binaries are in ``/opt/bench/bin/``.\r\n* The bench master is running as the user ``bench``. 
This user exists on\r\n both hosts, and has ownership over ``/opt/bench`` and all children of\r\n this directory.\r\n* An SSH key has been set up so that ``bench`` can SSH from the master to\r\n the node and from the master to itself (we'll use ``bm.example.com`` as\r\n a bench node as well as a master) without requiring a password.\r\n \r\n **Hint:** Test this manually first, and ensure that the remote host is\r\n known, i.e. gets recorded in the ``known_hosts`` file.\r\n* The test is found in ``/opt/bench/bench-tests/test_TestName.py``, with\r\n the corresponding ``TestName.conf`` in the same directory.\r\n\r\nFirst, we need to create a nodes configuration file. This tells the bench\r\nmaster which nodes to deploy the tests to. We'll call this ``nodes.pssh``,\r\nand keep it in the top level ``/opt/bench`` directory. It should contain\r\nentries like::\r\n \r\n load-test-00 bench\r\n load-test-01 bench\r\n load-test-02 bench\r\n load-test-03 bench\r\n\r\nThe first token is the host name. The second is the username to use. Note\r\nthat we are using \"pseudo\" host names here. This is because we want to use\r\nthe same physical server more than once. These can be resolved to actual\r\nIP addresses in the ``/etc/hosts`` file, for example with::\r\n\r\n 192.168.0.10 load-test-00\r\n 192.168.0.11 load-test-01\r\n 192.168.0.10 load-test-02\r\n 192.168.0.11 load-test-03\r\n\r\nHere, we have opted to run two bench nodes on each physical machine. This type\r\nof setup is appropriate for dual core/processor machines, where one Python\r\nprocess can make full use of each core.\r\n\r\nWe can now execute the test::\r\n \r\n $ mkdir bench-output\r\n $ bin/bench-master -s /opt/bench/bin/bench-node -w bench-output -c 10:20:30 -D 300 nodes.pssh bench-tests/test_TestName.py\r\n\r\nIn this command:\r\n\r\n* The ``-s`` option gives the path to the ``bench-node`` script *on the remote\r\n bench node*. 
(It does not need to exist on the master itself, unless the\r\n master is also being used as a node.) It can be omitted if\r\n ``bench-node`` is in the ``PATH`` for the given user on the remote server.\r\n This is used in an ``ssh`` invocation to execute the ``bench-node`` script\r\n on each host.\r\n* The ``-w`` option is the working directory for the bench master. This will\r\n eventually contain the final bench report, as well as information recorded\r\n during the bench run. The information specific to each bench run is collated\r\n under an intermediary directory with the name of the bench run. The default\r\n is to use the current date/time and the test name.\r\n* The ``-c`` option overrides the concurrency setting from the ``.conf`` file\r\n for the bench test *on each node*. Here, we will run three cycles, with 10,\r\n 20 and 30 concurrent threads *on each node*, respectively. This means that,\r\n with four nodes, the true concurrency is 40, 80, and 120 threads,\r\n respectively.\r\n* The ``-D`` option overrides the duration in a similar manner. Hence, in\r\n each cycle, each thread on each node will attempt to execute the test as\r\n many times as it can for 5 minutes (300 seconds). Since the nodes run in\r\n parallel, this means that the overall duration for each cycle is 5 minutes.\r\n* The two positional arguments give the path to the nodes file and the test\r\n module. In this case, it is OK to give a relative or absolute path - it does\r\n not have to be in the current directory.\r\n\r\nIn addition to these options, you can specify:\r\n\r\n* ``-n``, to give a name to the bench run, which is used as the directory name\r\n under the bench master working directory. The default is to use a name built\r\n from the current date and time, plus the test name. 
Note that if you use\r\n the same name in more than one invocation, temporary files may be\r\n overwritten, but all final reports will be kept.\r\n* ``--num-nodes``, to limit the number of nodes being used. By default, all\r\n nodes in the nodes file are used. If you want to use only the first 2 nodes,\r\n say, you can specify ``--num-nodes=2``.\r\n* ``-x``, to indicate the path to the working directory on each node. The\r\n default is to use ``/tmp/bench-node``, which will be created if it does not\r\n exist.\r\n* ``-u``, to override the base URL for the test, found in the test ``.conf``\r\n file.\r\n* ``-v``, to see more output.\r\n\r\nFinally, you may need to give the test name (i.e. the name given to\r\n``fl-record`` when recording the test) as a third positional argument, after\r\nthe test module ``.py`` file. This is necessary if you did not use \"CamelCase\"\r\n(or all lower-case) when recording the test. For example, if the test method\r\nin a module ``test_FooBar.py`` is ``test_foo_bar()``, you can use::\r\n\r\n $ bin/bench-master -s /opt/bench/bin/bench-node -w bench-output -c 10:20:30 -D 300 nodes.pssh bench-tests/test_FooBar.py foo_bar\r\n\r\nSee the output of ``bin/bench-master -h`` for more details.\r\n\r\nObserving execution\r\n~~~~~~~~~~~~~~~~~~~\r\n\r\nWhile the bench master is running, it will:\r\n\r\n* Create a workspace on each node and remotely copy into it the test, plus a\r\n ``.conf`` file tailored to each node. The parent directory for the\r\n workspace on each node can be set with the ``-x`` command line option.\r\n Inside this directory, a subdirectory is created for each labelled test\r\n run.\r\n \r\n Note: You can set the label with the ``-n`` option. This is useful for\r\n grouping tests together. The default is to create a unique label based on\r\n the current date/time and the test name.\r\n \r\n Inside each test run subdirectory, a subdirectory is created specifically\r\n for each node. This allows multiple \"virtual\" nodes (i.e. 
multiple node\r\n processes) in the same installation on the same physical machine.\r\n* Execute the ``bench-node`` script on each node. This in turn runs the\r\n FunkLoad bench test runner.\r\n* Monitor the standard output and standard error on each node. These can\r\n be found in the bench master's workspace in the ``out/`` and ``err/``\r\n directories, in text files corresponding to each node. You can ``tail``\r\n these files to get near-realtime output from each node.\r\n \r\n The ``err/`` directory will help you understand if something has gone wrong\r\n on the remote node. The ``out/`` directory is useful for checking on the\r\n progress of each node.\r\n\r\nWhen the tests are running, you will see various progress indicators under\r\nthree headings: \"Starting threads\", \"Logging for <duration> seconds\", and\r\n\"Stopping threads\". For each, you may see a number of \".\", \"E\" or \"F\"\r\ncharacters. A \".\" means success, an \"E\" an error, and an \"F\" a failure.\r\n\r\nFailures in starting and stopping threads may indicate that FunkLoad can't\r\nlaunch enough threads to get the concurrency indicated in the cycle. Whether\r\nthis is a problem depends on what concurrency you really need to generate an\r\neffective load test. FunkLoad will continue with the test regardless.\r\n\r\nThe \"Logging for <duration> seconds\" line is the most important. This is the\r\ntime when FunkLoad is actually executing tests. An error here indicates a\r\nproblem in the test code itself, and should be rare. An \"F\" means that a page\r\ncould not be fetched, or that a test assertion failed. In load test scenarios,\r\nthis can mean that your server has stopped responding or is timing out. 
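When tailing the files in ``out/``, it can be handy to tally those progress characters mechanically. A small sketch (the progress line shown is invented for illustration):

```python
# Count FunkLoad progress characters in a captured progress line:
# "." = success, "E" = error, "F" = failure.
def tally_progress(line: str) -> dict:
    return {symbol: line.count(symbol) for symbol in ".EF"}

# An invented example of what a "Logging" progress line might contain:
print(tally_progress("...........E...F...."))  # -> {'.': 18, 'E': 1, 'F': 1}
```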
The\r\nreport (see below) will give you more details about what types of failures\r\nand errors were reported.\r\n\r\nViewing the report\r\n~~~~~~~~~~~~~~~~~~\r\n\r\nOnce the remote nodes have all completed, the bench master will:\r\n\r\n* Collate FunkLoad's output XML files from each node into the ``results/``\r\n subdirectory of the bench master workspace.\r\n* Merge these files into a single results file.\r\n* Build an HTML report from this result.\r\n\r\nThe final HTML report will appear in the ``report/`` directory of the bench\r\nmaster workspace. If this is on a remote server, you may want to serve this\r\nfrom a web server capable of serving static HTML files, to make it easy to\r\nview reports.\r\n\r\nFor example, you could start a simple HTTP server on port 8000 with::\r\n\r\n $ cd bench-output\r\n $ python -m SimpleHTTPServer 8000\r\n\r\nOf course, you could use Apache, nginx or any other web server as well, and\r\nyou can use the ``-w`` option to the ``bench-master`` script to put all\r\nresults in a directory already configured to be served as web content.\r\n\r\n **Hint:** If you are running several related load tests, e.g. testing the\r\n effect of a series of planned changes to your infrastructure, it can be\r\n useful to group these reports together. If you run several bench runs\r\n with the same name, as given by the ``-n`` option, Bench Master will\r\n group all reports together under a single top-level directory\r\n (``<name>/reports/*``). Note that if you do this, the files in the\r\n ``out/`` and ``err/`` directories will be overwritten each time.\r\n\r\nThe first time you see a FunkLoad report, you may be a bit overwhelmed by\r\nthe amount of detail. Some of the more important things to look for are:\r\n\r\n* Look for any errors or failures. 
These are reported against each cycle,\r\n as well as for individual pages, and summarised at the end of the report.\r\n Bear in mind that an error - particularly an HTTP error like 503 or 504 -\r\n at very high concurrency can simply indicate that you have \"maxed out\" your\r\n server. If this occurs at a concurrency that is much higher than what you\r\n would expect in practice, it may not be a problem.\r\n* Look at the overall throughput for the whole bench run, and see how it\r\n changes with the cycle concurrency. FunkLoad presents this in three metrics:\r\n \r\n Successful tests per second\r\n This is the number of complete tests that could be executed divided by\r\n the duration, aggregated across all cycles.\r\n \r\n If your test simulates a user completing a particular process (e.g.\r\n purchasing a product or creating some content) over a number of steps,\r\n this can be a useful \"business\" metric (e.g. \"we could sell X products in\r\n 10 minutes\").\r\n Successful pages per second\r\n This is the number of complete web pages that could be downloaded,\r\n including resources such as stylesheets and images, per second.\r\n \r\n Note that FunkLoad will cache resources in the same way as web browsers,\r\n so a given thread may only fetch certain resources once in a given cycle.\r\n \r\n This is a useful metric to understand how many \"page impressions\" your\r\n visitors could receive when your site is operating under load. It is most\r\n meaningful if your test only fetches one page, or fetches pages that are\r\n roughly equivalent in terms of server processing time.\r\n Successful requests per second\r\n This is a measure of raw throughput, i.e. how many requests were\r\n completed per second.\r\n \r\n This metric does not necessarily relate to how visitors experience your\r\n website, since it counts a web page and its dependent resources as\r\n separate requests.\r\n* Look for overall trends. 
How does performance degrade as concurrency is\r\n ramped up? Are there any \"hard\" limits where performance markedly drops\r\n off, or the site stops responding altogether? If so, try to think of what\r\n this may be, e.g. operating system limits on the number of threads,\r\n processes or open file descriptors.\r\n* Look at the individual pages being fetched. Are any pages particularly slow?\r\n How do the different pages degrade in performance as concurrency is\r\n increased?\r\n\r\nPlease refer to the `FunkLoad`_ documentation for more details.\r\n\r\nServer monitoring\r\n~~~~~~~~~~~~~~~~~\r\n\r\nIt is important to be able to understand what is going on during a load test\r\nrun. If the results are not what you expected or hoped for, you need to know\r\nwhere to start tweaking.\r\n\r\nSome important considerations include:\r\n\r\n* What is the CPU, memory and disk I/O load on the server(s) hosting the\r\n application? Where is the bottleneck?\r\n* What is the capacity of the networking infrastructure between the bench\r\n nodes and the server being tested? Are you constrained by bandwidth? Are\r\n there any firewalls or other equipment performing intensive scanning of\r\n requests, or even actively throttling throughput?\r\n* What is happening on the bench nodes? Are they failing to generate enough\r\n load, or failing to cope with the volume of data coming back from the\r\n server?\r\n\r\nIn all cases, you need monitoring. 
This can include:\r\n\r\n* Running ``top``, ``free`` and other \"point-in-time\" monitoring tools on the\r\n server.\r\n* Using tools like `monit`_ to observe multiple processes in real time.\r\n* Looking at log files from the various services involved in your application,\r\n both in real time as the tests are running, and after the test has\r\n completed.\r\n* Using FunkLoad's built-in monitoring tools.\r\n\r\n **Note:** The FunkLoad monitoring server only works on Linux hosts.\r\n\r\nThe FunkLoad monitor is useful not least because it is incorporated in the\r\nfinal report. You can see what happened to the load average, memory and\r\nnetwork throughput on any number of servers as the load test progressed.\r\n\r\nFunkLoad monitoring requires that a monitoring server is running on each host\r\nto be monitored. The monitor control program, ``fl-monitor-ctl``, is installed\r\nwith FunkLoad. You can use the Bench Master buildout shown above to install\r\nit, or simply install FunkLoad in a ``virtualenv`` or buildout on each host.\r\n\r\nOnce you have installed ``fl-monitor-ctl``, you will need a configuration file\r\non each host. 
For example::\r\n\r\n [server]\r\n # configuration used by monitord\r\n host = localhost\r\n port = 8008\r\n\r\n # sleeptime between monitoring in second\r\n # note that load average is updated by the system only every 5s\r\n interval = .5\r\n\r\n # network interface to monitor lo, eth0\r\n interface = eth0\r\n\r\n [client]\r\n # configuration used by monitorctl\r\n host = localhost\r\n port = 8008\r\n\r\nYou should adjust the host name, network interface and port as required.\r\n\r\nYou can then start the monitor on each host with::\r\n\r\n $ bin/fl-monitor-ctl monitor.conf startd\r\n\r\nWith the monitors in place, you need to tell FunkLoad which hosts to monitor.\r\nThis is done in the ``.conf`` for the test, in the ``[monitor]`` section.\r\nFor example::\r\n\r\n [monitor]\r\n hosts = bench-master.example.com bench-node.example.com www.example.com\r\n\r\n [bench-master.example.com]\r\n description = Bench master\r\n port = 8008\r\n\r\n [bench-node.example.com]\r\n description = Bench node\r\n port = 8008\r\n \r\n [www.example.com]\r\n description = Web server\r\n port = 8008\r\n\r\nIn this example, we have opted to monitor the bench master and nodes, as well\r\nas the web server being tested. Monitoring the bench nodes is important, since\r\nit helps identify whether they are generating load as normal.\r\n\r\nWith this configuration in place, you should get monitoring in your FunkLoad\r\nreports.\r\n\r\n **Note:** If you are using ``bench-master`` to run the tests, it will\r\n ensure that only the first node collects monitoring statistics. It does\r\n not make sense to simultaneously monitor a number of hosts from all nodes.\r\n\r\nManually generating reports\r\n----------------------------\r\n\r\nWhen FunkLoad executes a bench run, it will write an XML file with all\r\nrelevant test results and statistics. Bench Master collects these XML files\r\nfrom each node and collates them into a single file. 
(The individual files are\r\nall kept in the ``results/`` directory under the bench run directory inside\r\nthe ``bench-master`` working directory.) It then uses the FunkLoad report\r\ngenerator to create an HTML report.\r\n\r\nIf you want to create a report manually, you can use the ``fl-build-report``\r\nscript.\r\n\r\nFor example, to get a plain-text version of the report, you can use::\r\n \r\n $ bin/fl-build-report bench-output/<name>/results/*.xml\r\n\r\nTo get an HTML version of the same, you can use the ``--html`` option::\r\n\r\n $ bin/fl-build-report --html bench-output/<name>/results/*.xml\r\n\r\nThe report will be placed in the current directory. Use the ``-o`` flag to\r\nindicate a different output directory.\r\n\r\nDifferential reporting\r\n~~~~~~~~~~~~~~~~~~~~~~\r\n\r\nYou can use load testing to understand the performance impact of your changes.\r\nTo make it easier to compare load test runs, FunkLoad provides differential\r\nreporting.\r\n\r\nFor example, let's say you have created a \"baseline\" report in\r\n``bench-output/baseline/reports/test_foo-20100101``. You may then add a new\r\nserver, and re-run the same test in ``bench-master``, this time with the\r\nname \"new-server\". The resulting report may be\r\n``bench-output/new-server/reports/test_foo-20100201``. You can create a\r\ndifferential between the two with::\r\n\r\n $ bin/fl-build-report --diff bench-output/baseline/reports/test_foo-20100101 bench-output/new-server/reports/test_foo-20100201\r\n\r\nOf course, this works on reports generated with \"plain\" FunkLoad as well.\r\n\r\n **Hint:** For differential reports to be meaningful, you need to have the\r\n same test, executing with the same number of cycles, the same cycle\r\n concurrency, and the same duration in both reports.\r\n\r\nCredits\r\n-------\r\n\r\nOriginally created by Julian Coyne. Refactored\r\ninto a separate package by Martin Aspeli.\r\n\r\n.. _FunkLoad: http://pypi.python.org/pypi/funkload\r\n.. 
_pssh: http://pypi.python.org/pypi/pssh\r\n.. _virtualenv: http://pypi.python.org/pypi/virtualenv\r\n.. _TCPWatch: http://pypi.python.org/pypi/tcpwatch\r\n.. _monit: http://mmonit.com/monit/\r\n\r\n\r\nChangelog\r\n=========\r\n\r\n1.0b1 - 2010-07-26\r\n------------------\r\n\r\n* Initial release", "description_content_type": null, "docs_url": null, "download_url": "UNKNOWN", "downloads": { "last_day": -1, "last_month": -1, "last_week": -1 }, "home_page": "http://pypi.python.org/pypi/funkload-bench", "keywords": "funkload clustered load testing", "license": "GPL2", "maintainer": "", "maintainer_email": "", "name": "benchmaster", "package_url": "https://pypi.org/project/benchmaster/", "platform": "UNKNOWN", "project_url": "https://pypi.org/project/benchmaster/", "project_urls": { "Download": "UNKNOWN", "Homepage": "http://pypi.python.org/pypi/funkload-bench" }, "release_url": "https://pypi.org/project/benchmaster/1.0b1/", "requires_dist": null, "requires_python": null, "summary": "Tools for running and automating distributed loadtests with Funkload", "version": "1.0b1" }, "last_serial": 786797, "releases": { "1.0b1": [ { "comment_text": "", "digests": { "md5": "8f0248c5b15cbe3e0defd1e0e39cd270", "sha256": "1614164bccd0ca6c6e4f6357640fa205cf052ab5f2b15252d5f1f593eeb58333" }, "downloads": -1, "filename": "benchmaster-1.0b1.tar.gz", "has_sig": false, "md5_digest": "8f0248c5b15cbe3e0defd1e0e39cd270", "packagetype": "sdist", "python_version": "source", "requires_python": null, "size": 37057, "upload_time": "2010-07-26T10:31:02", "url": "https://files.pythonhosted.org/packages/9c/31/3a569157db980c378b8c6d3ce648ef79e89da94f6ec6afed395da6edf1de/benchmaster-1.0b1.tar.gz" } ] }, "urls": [ { "comment_text": "", "digests": { "md5": "8f0248c5b15cbe3e0defd1e0e39cd270", "sha256": "1614164bccd0ca6c6e4f6357640fa205cf052ab5f2b15252d5f1f593eeb58333" }, "downloads": -1, "filename": "benchmaster-1.0b1.tar.gz", "has_sig": false, "md5_digest": "8f0248c5b15cbe3e0defd1e0e39cd270", 
"packagetype": "sdist", "python_version": "source", "requires_python": null, "size": 37057, "upload_time": "2010-07-26T10:31:02", "url": "https://files.pythonhosted.org/packages/9c/31/3a569157db980c378b8c6d3ce648ef79e89da94f6ec6afed395da6edf1de/benchmaster-1.0b1.tar.gz" } ] }