{ "info": { "author": "idle-man", "author_email": "i@idleman.club", "bugtrack_url": null, "classifiers": [ "Development Status :: 3 - Alpha", "License :: OSI Approved :: MIT License", "Operating System :: OS Independent", "Programming Language :: Python :: 2.7", "Programming Language :: Python :: 3" ], "description": "# Parrot - Automated test solution for http requests based on recording and playback\n\n## 1. Design Idea: Test == Traffic Playback\n\nClassic definition of software testing: \n\n> The process of using a **manual** or **automated** means to run or measure a software system, the purpose of which is to verify that it meets **specified requirements** or to clarify the difference between **expected** and **actual** results.\n>\n> -- *Software engineering terminology from IEEE in 1983*\n\nA simplified definition: \n> The process of running a system or application in accordance with defined requirements/steps, obtaining actual result, and comparing with the expected result\n\nLook at the process of traffic playback:\n\n- Recording: Get/Define specified requirement and **expected result**\n- Playback: Perform the recorded script to get the **actual result**\n- Verify: Compare the **expected** and **actual** results\n\n**Traffic playback** is a way to automate the realization of the **original definition of the test**.\n\n**This project is based on this idea to automatically test api, in Chapter 3 there will be a specific disassembly.**\n\n## 2. Instruction for use\n\n### 2.0 Install\n\nThe iParrot project has been submitted to PyPI, installation way:\n\n1. Run `pip install iParrot` command.\n2. 
Or download the source code package and run the `python setup.py install` command.\n\nAfter installation, the `parrot` executable is generated; you can try `parrot help`.\n\n### 2.1 Usage\n#### View commands supported by Parrot: `parrot help`\n\nAmong them, the two core commands are **record** and **replay**.\n\n```\n$ parrot help\nAutomated test solution for http requests based on recording and playback\nVersion: 1.0.0\n\nUsage: parrot [-h] [-v] [command] []\n\ncommand:\n record - parse source file and generate test cases\n see detail usage: parrot help record\n replay - run the recorded test cases and do validations\n see detail usage: parrot help replay\n\noptional arguments:\n -h, --help show this help message and exit\n -v, -V, --version show version\n```\n\n#### View the usage of `record` command: `parrot help record`\n\nThe purpose of this step is to parse the user-specified source file (currently .har) into a standardized set of use cases.\n\n```\n$ parrot help record\nAutomated test solution for http requests based on recording and playback\nVersion: 1.0.1\n\nUsage: parrot record []\n\nArguments:\n -s, --source SOURCE source file with path, *.har [required]\n -t, --target TARGET target output path, 'ParrotProject' as default\n -i, --include INCLUDE include filter on url, separated by ',' if multiple\n -e, --exclude EXCLUDE exclude filter on url, separated by ',' if multiple\n -vi, --validation-include V_INCLUDE\n include filter on response validation, separated by ',' if multiple\n -ve, --validation-exclude V_EXCLUDE \n exclude filter on response validation, separated by ',' if multiple\n\n --log-level LOG_LEVEL log level: debug, info, warn, error, info as default\n --log-mode LOG_MODE log mode : 1-on screen, 2-in log file, 3-1&2, 1 as default\n --log-path LOG_PATH log path : as default\n --log-name LOG_NAME log name : parrot.log as default\n\n```\n\n#### View the usage of `replay` command: `parrot help replay`\n\nThis step is to execute the specified set of test 
cases and generate a test report.\n\n```\n$ parrot help replay\nAutomated test solution for http requests based on recording and playback\nVersion: 1.0.1\n\nUsage: parrot replay []\n\nArguments:\n -s, --suite, -c, --case SUITE_OR_CASE\n test suite or case with path, *.yml or folder [required]\n -o, --output OUTPUT output path for report and log, 'ParrotProject' as default\n -i, --interval INTERVAL\n interval time(ms) between each step, use the recorded interval as default\n -env, --environment ENVIRONMENT\n environment tag, defined in project/environments/*.yml\n -reset, --reset-after-case\n reset the runtime environment after each case\n\n --fail-stop FAIL_STOP stop or not when a test step failed on validation, False as default\n --fail-retry-times FAIL_RETRY_TIMES\n max retry times when a test step failed on validation, 0 as default\n --fail-retry-interval FAIL_RETRY_INTERVAL \n retry interval(ms) when a test step failed on validation, 100 as default\n\n --log-level LOG_LEVEL log level: debug, info, warn, error, info as default\n --log-mode LOG_MODE log mode : 1-on screen, 2-in log file, 3-1&2, 1 as default\n --log-path LOG_PATH log path : as default\n --log-name LOG_NAME log name : parrot.log as default\n\n```\n\n### 2.2 Framework Structure\n```\nparrot/\n \u251c\u2500\u2500 modules\n \u2502\u00a0\u00a0 \u251c\u2500\u2500 helper.py : A collection of commonly used methods whose functions can be used in other modules and as ${{function(params)}} in the cases.\n \u2502\u00a0\u00a0 \u251c\u2500\u2500 request.py : Execute HTTP(S) request based on `requests` and get the result\n \u2502\u00a0\u00a0 \u251c\u2500\u2500 validator.py : The verification engine for request's response information, which supports multiple verification rules, as detailed in Validator.UNIFORM_COMPARATOR\n \u2502\u00a0\u00a0 \u251c\u2500\u2500 logger.py : Formatted log printing, supporting output to screen or log files\n \u2502\u00a0\u00a0 \u2514\u2500\u2500 
reportor.py : Standardized report printing, supporting views of summary results and use case execution details\n \u251c\u2500\u2500 extension\n \u2502\u00a0\u00a0 \u2514\u2500\u2500 helper.py : A collection of common methods that can be customized by the user, whose functions can be used as ${{function(params)}} in the cases\n \u251c\u2500\u2500 parser.py : Parse the source file and automatically generate formatted use cases; parse the specified use case set and load it into memory\n \u251c\u2500\u2500 player.py : Play back the specified set of use cases, execute them level by level, and finally generate a test report\n \u2514\u2500\u2500 parrot.py : The main script; run `python parrot.py help` to see the specific usage\n\n```\n\n## 3. Specific design ideas\n\n### 3.1.1 Recording - How to define the requirement/steps\n***\n\n#### Mode One (recommended): Automatic generation from HAR (HTTP Archive) files exported by packet capture tools\n\nHAR is a common standardized format for storing HTTP requests and responses:\n\n- Versatility: it can be exported in a consistent format from Charles, Fiddler, Chrome, etc.\n- Standardization: JSON format and UTF-8 encoding\n\nFrom the capture source file, Parrot can automatically parse and generate test cases in a standardized format, aligned with the following Mode Two.\n\n> In daily project testing and regression, everyone has the opportunity to \"save\" capture records, which contain complete, real user scenarios and interface calls; these are better than manually \"drawn up\" use cases. \n> \n> The files processed by Parrot in the first phase are Charles trace and Fiddler txt. 
Their formats are quite different, and parsing plain text is cumbersome.\n\n\n#### Mode Two: Customize according to uniform specifications\n\nAutomated use cases are layered and standardized for easy automatic generation, manual editing, and flexible assembly (partially referencing Postman and HttpRunner, mentioned later)\n\n```\nproject/\n \u251c\u2500\u2500 environments\n \u2502\u00a0\u00a0 \u2514\u2500\u2500 *env*.yml: Project-level environment variable configuration; multiple sets of common variables can be configured and easily switched in step, case, and suite, reducing modification cost\n \u251c\u2500\u2500 test_steps\n \u2502\u00a0\u00a0 \u2514\u2500\u2500 *step_name*.yml: The minimum execution unit; an HTTP request is a step, which can be independently configured with variables, pre-steps, post-steps, etc.\n \u251c\u2500\u2500 test_cases\n \u2502\u00a0\u00a0 \u2514\u2500\u2500 *case_name*.yml: An independent closed-loop unit, consisting of one or more steps, which can be independently configured with variables, pre-steps, post-steps, etc.\n \u2514\u2500\u2500 test_suites\n \u00a0\u00a0 \u2514\u2500\u2500 *suite_name*.yml: A test case set, consisting of one or more cases with no strong dependencies between them; variables, pre-steps, post-steps, etc. can be configured independently\n```\nThe above use case organization structure can be constructed automatically by the HAR file parsing of Mode One, or built and edited by hand in strict accordance with the standardized format.\n\n**Specific format:**\n\n- **environment**\n\n\t```yaml\n\tglobal: {}\n\tproduction: {}\n\tdevelopment: {}\n\ttest: {}\n\t```\n- **test_step**\n\n\t```yaml\n\tconfig:\n\t environment: \n\t import: \n\t name: step name\n\t variables:\n\t p1: ABC\n\t p2: 123\n\trequest:\n\t method: POST\n\t protocol: http\n\t host: x.x.x.x:8000\n\t url: /path/of/api\n\t params: {}\n\t data:\n\t param1: ${p1}\n\t param2: ${p2}\n\t headers:\n\t Content-Type: application/json; charset=UTF-8\n\t cookies: {}\n\t time.start: 
1568757525027\n\tresponse:\n\t extract: {}\n\tsetup_hooks: []\n\tteardown_hooks: []\n\tvalidations:\n\t- eq:\n\t status.code: 200\n\t- exists:\n\t headers.token\n\t- is_json:\n\t content\n\t- eq:\n\t content.code: 100\n\t```\n- **test_case**\n\n\t```yaml\n\tconfig:\n\t environment: \n\t import: \n\t name: case name\n\t variables: {}\n\tsetup_hooks: []\n\tteardown_hooks: []\n\ttest_steps:\n\t - \n\t - \n\t```\n- **test_suite**\n\n\t```yaml\n\tconfig:\n\t environment: \n\t import: \n\t name: suite name\n\t variables: {}\n\tsetup_hooks: []\n\tteardown_hooks: []\n\ttest_cases: \n\t - \n\t - \n\t```\n\n#### Mode Three: Based on standardized production logs\nThe logs should contain sufficient information and be defined in a uniform format.\n\n> Limited by the log specification differences and information completeness of specific projects, this project will not consider this mode for the time being.\n> \n> Interested users can refer to the format definition of Mode Two and implement the script development of **recording** themselves.\n\n### 3.1.2 Recording - How to define the expected result\n***\n\n#### Mode One (recommended): Automatic generation from HAR (HTTP Archive) files exported by packet capture tools\n\nA basic premise of traffic playback is that the recorded traffic is reliable, so the response information captured at that time can serve as an important reference for our expected results.\n\nThe important information we can use:\n\n```\n- Status Code: The most basic availability verification, usually placed in the first step\n- Content text: The core verification part, usually in JSON format, which can be further broken down into detailed key/value checks\n- Headers: Some projects return custom keys in the headers, which require separate verification\n- Timing: The duration of the request; it does not need strict verification but can be used for certain comparisons\n```\n\nDuring the recording phase of Parrot, the default response information 
is extracted from the recorded samples as the expected result (supporting filtering via `--include` / `--exclude`). \n\nFor the format, see the definition of Mode Two below.\n\n#### Mode Two: Customize according to uniform specifications\n\nIn Mode Two of Chapter 3.1.1, there are examples of validations in the definition of test_step, which users can customize with reference to this format:\n\n```\nvalidations:\n- <comparator>:\n    <check>: <expected result>\n```\n\nThe `check`: According to the important information in Mode One above, the unified format is: `<PREFIX>.<KEYS>`\n\n- Available PREFIX: `status`, `content`, `headers`, `cookies`, in lower case\n- KEYS in `status`: `code`\n- KEYS in `headers` and `cookies`: Currently only the outer keys are extracted\n- KEYS in `content` (json format): e.g. `content.a.b[1].c`\n\nThe `expected result`: The value in the recorded sample is used by default when automatically generated, and can be edited manually.\n\nThe `comparator`: The default is `eq` when automatically generated, and can be edited manually. 
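\n\nAs a rough illustration (a sketch with hypothetical names, not Parrot's actual implementation), a validation such as `eq: {content.code: 100}` boils down to resolving the dotted `check` path against the response and applying the named comparator:

```python
# Illustrative sketch only -- not Parrot's actual code.
COMPARATORS = {
    'eq': lambda actual, expected: actual == expected,
    'neq': lambda actual, expected: actual != expected,
    'len_eq': lambda actual, expected: len(actual) == expected,
    'contains': lambda actual, expected: expected in actual,
}

def resolve(response, check):
    # Walk a dotted path like 'content.data[0].id' through nested dicts/lists
    value = response
    for part in check.split('.'):
        if part.endswith(']'):
            key, idx = part[:-1].split('[')
            if key:
                value = value[key]
            value = value[int(idx)]
        else:
            value = value[part]
    return value

def validate(response, validations):
    # validations: a list of {comparator: {check: expected}} mappings
    for rule in validations:
        for comparator, pairs in rule.items():
            for check, expected in pairs.items():
                if not COMPARATORS[comparator](resolve(response, check), expected):
                    return False
    return True

response = {'status': {'code': 200}, 'content': {'code': 100, 'data': [{'id': 7}]}}
checks = [{'eq': {'status.code': 200}}, {'eq': {'content.data[0].id': 7}}]
print(validate(response, checks))  # prints True
```

Unary comparators such as `exists` or `is_json` take a slightly different shape; the sketch covers only the binary `check: expected` form.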
\n\nCurrently, Parrot supports the comparators below:\n\n- **eq(equals)**\n\t- Example: `1 eq 1`, `'a' eq 'a'`, `[1, 2] eq [1, 2]`, `{'a': 1 } eq {'a': 1}`, `status.code eq 200`\n\t- Related comparators: `neq`, `lt`, `gt`, `le`, `ge`\n- **len_eq(length equals)**\n\t- Example: `'ab' len_eq 2`, `[1, 2] len_eq 2`, `{'a': 1} len_eq 1`\n\t- Related comparators: `len_neq`, `len_lt`, `len_gt`\n- **contains**\n\t- Example: `'abc' contains 'ab'`, `['a', 'b'] contains 'a'`, `{'a': 1, 'b': 2} contains {'a': 1}`\n\t- Related comparators: `not_contains`\n- **in**\n\t- Example: `'a' in 'ab'`, `'a' in ['a', 'b']`, `'a' in {'a': 1, 'b': 2}`\n\t- Related comparators: `not_in`\n- **is_false**\n\t- Example: `0 is_false`, `'' is_false`, `[] is_false`, `{} is_false`\n\t- Related comparators: `is_true`, `exists`, `is_instance`, `is_json`\n- **re(regex)**\n\t- Example: `'1900-01-01' re r'\\d+-\\d+-\\d+'`\n\t- Related comparators: `not_re`\n\nFor more comparators, please refer to `iparrot.modules.validator.Validator.UNIFORM_COMPARATOR`\n\n### 3.2.1 Playback - The order of execution\n***\nParrot's request execution is based on the `requests` module and currently only supports HTTP(S) requests.\n\n#### Execution order: cases and steps are executed in the order defined in *test_suite*.yml / *test_case*.yml; currently only serial execution is supported\n\n> When the use case is automatically generated, the order of the steps defaults to the order of appearance in the recorded sample, which can be edited manually.\n\nThe detailed execution process:\n\n```\ntest_suite1\n |-> suite1.setup\n |-> test_case1\n |-> case1.setup\n |-> test_step1\n |-> step1.setup\n |-> request\n |-> validation\n |-> extract\n |-> step1.teardown\n |-> test_step2\n ...\n |-> case1.teardown\n |-> test_case2\n ...\n |-> suite1.teardown\ntest_suite2\n ...\n```\n\n#### Execution interval: The `interval` argument takes precedence; otherwise the `time.start` values in the step definitions are used\n\nIf the playback parameter `interval` is 
specified, playback follows that interval; otherwise, if `time.start` is defined in the steps, playback follows the interval between each step's `time.start`; otherwise, the steps are executed one by one\n\n> When the use case is automatically generated, the actual execution time is recorded as `time.start` in the step definition.\n\n### 3.2.2 Playback - How to support real-time parameters\n***\n\n#### Some parameters need to be generated in real time\n\nTake a query request as an example: the interface is required to query tomorrow's data. The recorded parameters hold static values, so if the script is run the next day, it no longer meets the requirement. In this case, the parameter needs to be generated in real time.\n\nThe Parrot solution is: use `${{function(params)}}` to generate a real-time value, where `function` is provided by iparrot.modules.helper or user-defined in iparrot.extension.helper.\n\nExample:\n\n```yaml\nconfig:\n ...\n variables:\n p1: ABC\n p2: 123\n p3: ${{days_later(1)}}\nrequest:\n method: GET\n ...\n params:\n param1: ${p1}\n param2: ${p2}\n date: ${p3}\n ...\n```\n\n#### Some parameters depend on the response of a preceding request\n\nTaking order interfaces as an example: the order id used by the order detail interface depends on the real-time response of the order creation interface. 
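\n\nAs a rough illustration (hypothetical helper names, not Parrot's internals), this dependency can be modeled as extracting a value from the first response and substituting it into the next request's parameters:

```python
# Illustrative sketch only -- not Parrot's actual code.
import re

def extract(response, rules):
    # rules like {'oid': 'content.data.orderId'} -> {'oid': <value>}
    out = {}
    for name, path in rules.items():
        value = response
        for key in path.split('.'):
            value = value[key]
        out[name] = value
    return out

def substitute(params, variables):
    # Replace ${name} placeholders with previously extracted values
    pattern = re.compile(r'\$\{(\w+)\}')
    return {k: pattern.sub(lambda m: str(variables[m.group(1)]), str(v))
            for k, v in params.items()}

creation_response = {'content': {'data': {'orderId': 'A1001'}}}
variables = extract(creation_response, {'oid': 'content.data.orderId'})
detail_params = substitute({'orderId': '${oid}'}, variables)
print(detail_params)  # prints {'orderId': 'A1001'}
```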
\n\nThe Parrot solution is: Configure `extract` in the `response` definition of the order creation step to extract the specific order id, and use the `${variable}` format to reference the order id in the order detail step.\n\nExample:\n\nDefinition of the order creation step:\n\n```yaml\nconfig:\n ...\nrequest:\n ...\nresponse:\n extract:\n oid: content.data.orderId\n...\n```\n\nDefinition of the order detail step:\n\n```yaml\nconfig:\n ...\n variables:\n p1: ABC\n p2: 123\nrequest:\n method: GET\n ...\n params:\n param1: ${p1}\n param2: ${p2}\n orderId: ${oid}\n ...\n```\n\n\n### 3.3.1 Validation - How to compare the expected and actual results\n***\n\nThe definition of the `expected result` is covered in chapter 3.1.2, including the `check` object, the `comparator` method, and the `expected result` value.\n\nIn the request playback process of chapter 3.2.1, the `actual result` is obtained in real time, and the value of each `check` object is tested against its `comparator` rule. **If any check fails, the entire step fails**\n\nAfter a single step fails, Parrot does not terminate playback by default, but the user can intervene via runtime parameters:\n\n- --fail-stop: If specified, execution terminates after a step fails validation\n- --fail-retry-times: The number of retries after a step fails, 0 as default\n- --fail-retry-interval: The retry interval(ms) after a step failure, 100 as default\n\n\n## 4. 
External references and thanks\n### 4.1 [Postman](https://learning.getpostman.com/)\n\n#### 4.1.1 Environments management\nThis mechanism is referenced by the `environment` part of the Parrot use case structure.\n\n```\nA project can be configured with multiple sets of environments to hold some common environment variables.\n\nVariable names are consistent between different environments, and values can vary.\n\nIn the use case, you can refer to a variable by means of ${variable}, reducing manual modification.\n\nThe operating environment can be switched in the replay phase via the --env parameter.\n```\n\n#### 4.1.2 Use case layering mode\n - Collection => test_suite\n - Folder => test_case\n - Request => test_step\n\n#### 4.1.3 Pre and post actions\n - Pre-request Script => setup_hooks\n - Tests => teardown_hooks & validations\n\n### 4.2 [HttpRunner](https://github.com/httprunner/httprunner)\n\n#### 4.2.1 [HAR2Case](https://github.com/HttpRunner/har2case)\n\nThe files processed by Parrot in the first phase were Charles trace and Fiddler txt. Their formats are quite different, and parsing plain text is cumbersome.\n\nInspired by HttpRunner's ideas, I rebuilt the record part around HAR and updated some parameters.\n\nFor details, see `parrot help record` and `iparrot.parser`\n\n#### 4.2.2 Use case layering mode\n\nThe use case layering mode of HttpRunner, TestSuite>TestCase>TestStep, is clear and a good reference.\n\nWhen Parrot automatically generates use cases, it directly implements this layering in the directory structure, with some changes to the specific use case structure.\n\n#### 4.2.3 setup hooks & teardown hooks\n\nParrot reuses this naming scheme, which supports `set variable`, `call function`, and `exec code`.\n\n#### 4.2.4 extract variable\n\nIn its first phase, Parrot used a `store` and `replace` mode, intended to keep all changes in a configuration file without touching the use cases at all. \n\nIn actual use, this proved less usable and slightly cumbersome to configure.\n\nFollowing HttpRunner, the initiative is returned to the user: variables can be extracted according to the `extract` definition and used as `${variable}`.\n\n#### 4.2.5 comparator\n\nThe first version of Parrot diffed results against a configuration file and supported only `eq` and simple `re`; the method set was limited.\n\nNow, following HttpRunner, an `eq` comparator is generated automatically during recording, and a variety of comparators can be customized.\n\nParrot's comparators combine the common verification methods of HttpRunner and Postman, with some additions.\n\n#### 4.2.6 report\n\nParrot's test report template directly reuses HttpRunner's report style.\n\n\n\n", "description_content_type": "text/markdown", "docs_url": null, "download_url": "", "downloads": { "last_day": -1, "last_month": -1, "last_week": -1 }, "home_page": "https://github.com/idle-man/iParrot", "keywords": "record replay playback parrot automation http", "license": "MIT", "maintainer": "", "maintainer_email": "", "name": 
"iParrot", "package_url": "https://pypi.org/project/iParrot/", "platform": "", "project_url": "https://pypi.org/project/iParrot/", "project_urls": { "Bug Tracker": "https://github.com/idle-man/iParrot/issues", "Documentation": "https://github.com/idle-man/iParrot/blob/master/README.md", "Homepage": "https://github.com/idle-man/iParrot", "Source Code": "https://github.com/idle-man/iParrot" }, "release_url": "https://pypi.org/project/iParrot/1.0.2/", "requires_dist": [ "requests", "PyYAML" ], "requires_python": "", "summary": "Automated test solution for http requests based on recording and playback", "version": "1.0.2" }, "last_serial": 5901374, "releases": { "1.0.1": [ { "comment_text": "", "digests": { "md5": "56382ae0240c3b8203b72892fa8714f5", "sha256": "a98e6746d5f2ccb18593cbd6b0bb7fff006d4111b6227734ce7be4ec216cf917" }, "downloads": -1, "filename": "iParrot-1.0.1-py3-none-any.whl", "has_sig": false, "md5_digest": "56382ae0240c3b8203b72892fa8714f5", "packagetype": "bdist_wheel", "python_version": "py3", "requires_python": null, "size": 34096, "upload_time": "2019-09-27T07:27:41", "url": "https://files.pythonhosted.org/packages/54/7a/576d6bf4cb07331982861f8b599f6ec0fc0f51bf1e4654555d17b4c242dc/iParrot-1.0.1-py3-none-any.whl" }, { "comment_text": "", "digests": { "md5": "c9f8f040aa7be41b2724116cb2c262e9", "sha256": "927d036bced8a549b6bd782cf0a64ad4f2b29b567e46e379e9ba2dfd63e52581" }, "downloads": -1, "filename": "iParrot-1.0.1.tar.gz", "has_sig": false, "md5_digest": "c9f8f040aa7be41b2724116cb2c262e9", "packagetype": "sdist", "python_version": "source", "requires_python": null, "size": 32992, "upload_time": "2019-09-27T07:27:44", "url": "https://files.pythonhosted.org/packages/66/26/c31d9e71141fe9db8cba5a5e534547d994b7a20809a975283e54d773d59e/iParrot-1.0.1.tar.gz" } ], "1.0.2": [ { "comment_text": "", "digests": { "md5": "df10268736fc0551d2a33e044a79a841", "sha256": "3fe1835eb667c9ef25258724f6cd1068685a4cf53d5fc9ea893a8f76ee3ef968" }, "downloads": -1, "filename": 
"iParrot-1.0.2-py3-none-any.whl", "has_sig": false, "md5_digest": "df10268736fc0551d2a33e044a79a841", "packagetype": "bdist_wheel", "python_version": "py3", "requires_python": null, "size": 34129, "upload_time": "2019-09-29T03:32:00", "url": "https://files.pythonhosted.org/packages/e5/9b/d4697d8712749d45c4d23d4132d990cc5aeffe2e73fc9e7e45359da33d47/iParrot-1.0.2-py3-none-any.whl" }, { "comment_text": "", "digests": { "md5": "3db41069979cef4087583fbbc7c1a1b6", "sha256": "c8f344219b6fd78f1fee284f17abf09570feb97d85c9350b7c8539fc663ea51a" }, "downloads": -1, "filename": "iParrot-1.0.2.tar.gz", "has_sig": false, "md5_digest": "3db41069979cef4087583fbbc7c1a1b6", "packagetype": "sdist", "python_version": "source", "requires_python": null, "size": 32954, "upload_time": "2019-09-29T03:32:02", "url": "https://files.pythonhosted.org/packages/ac/ad/9aa9554100feccbd3c05b62365af4188fb7ff2a1fa6d054af84d8c94e823/iParrot-1.0.2.tar.gz" } ] }, "urls": [ { "comment_text": "", "digests": { "md5": "df10268736fc0551d2a33e044a79a841", "sha256": "3fe1835eb667c9ef25258724f6cd1068685a4cf53d5fc9ea893a8f76ee3ef968" }, "downloads": -1, "filename": "iParrot-1.0.2-py3-none-any.whl", "has_sig": false, "md5_digest": "df10268736fc0551d2a33e044a79a841", "packagetype": "bdist_wheel", "python_version": "py3", "requires_python": null, "size": 34129, "upload_time": "2019-09-29T03:32:00", "url": "https://files.pythonhosted.org/packages/e5/9b/d4697d8712749d45c4d23d4132d990cc5aeffe2e73fc9e7e45359da33d47/iParrot-1.0.2-py3-none-any.whl" }, { "comment_text": "", "digests": { "md5": "3db41069979cef4087583fbbc7c1a1b6", "sha256": "c8f344219b6fd78f1fee284f17abf09570feb97d85c9350b7c8539fc663ea51a" }, "downloads": -1, "filename": "iParrot-1.0.2.tar.gz", "has_sig": false, "md5_digest": "3db41069979cef4087583fbbc7c1a1b6", "packagetype": "sdist", "python_version": "source", "requires_python": null, "size": 32954, "upload_time": "2019-09-29T03:32:02", "url": 
"https://files.pythonhosted.org/packages/ac/ad/9aa9554100feccbd3c05b62365af4188fb7ff2a1fa6d054af84d8c94e823/iParrot-1.0.2.tar.gz" } ] }