{ "info": { "author": "Brian Sutherland", "author_email": "brian@vanguardistas.net", "bugtrack_url": null, "classifiers": [ "Development Status :: 4 - Beta", "Intended Audience :: Developers", "License :: OSI Approved :: BSD License", "Operating System :: OS Independent", "Programming Language :: Python", "Programming Language :: Python :: 2.6", "Programming Language :: Python :: 2.7", "Programming Language :: Python :: 3.3", "Programming Language :: Python :: 3.4", "Programming Language :: Python :: 3.5", "Topic :: Internet :: WWW/HTTP", "Topic :: Internet :: WWW/HTTP :: WSGI", "Topic :: Internet :: WWW/HTTP :: WSGI :: Middleware" ], "description": "`wesgi` implements an ESI Processor as a WSGI middeware. It is primarily aimed\nat development environments to simulate the production ESI Processor. Under\ncertain conditions it may be used in production as well.\n\nCompleteness\n============\n\nThis implementation currently only implements ```` and\n```` comments. The relevant specifications and documents are:\n\n- http://www.w3.org/TR/esi-lang\n- http://www.akamai.com/dl/technical_publications/esi_faq.pdf\n\nPerformance\n===========\n\nAn ESI processor generally makes a lot of network calls to other services in\nthe process of putting together a page. So, in general, to reach very high\nlevels of performance it should be asynchronous. 
Standard Python and WSGI are\nsynchronous, placing an upper limit on performance which depends on the\nfollowing:\n\n- How many threads are used\n- How many ESI includes are used per page\n- The speed of the servers serving the ESI includes\n- Whether `wesgi` uses a cache and if the ESI includes come with Cache-Control\n  headers\n\nDepending on the situation, `wesgi` may be performant enough for you.\n\nThere are also a number of ways to run WSGI applications asynchronously, with\nvarying definitions of \"asynchronous\".\n\nUsage\n=====\n\nConfiguration via Python\n------------------------\n\n >>> from wesgi import MiddleWare\n >>> from wsgiref.simple_server import demo_app\n\nTo use it in its default configuration for a development server:\n\n >>> app = MiddleWare(demo_app)\n\nTo simulate an Akamai production environment:\n\n >>> from wesgi import AkamaiPolicy\n >>> policy = AkamaiPolicy()\n >>> app = MiddleWare(demo_app, policy=policy)\n\nTo simulate an Akamai production environment with \"chase redirect\" turned on:\n\n >>> policy.chase_redirect = True\n >>> app = MiddleWare(demo_app, policy=policy)\n\nIf you wish to use it for a production server, it's advisable to turn debug\nmode off and enable some kind of cache:\n\n >>> from wesgi import LRUCache\n >>> from wesgi import Policy\n >>> policy = Policy()\n >>> policy.cache = LRUCache()\n >>> app = MiddleWare(demo_app, debug=False, policy=policy)\n\nThe ``LRUCache`` is a memory-based cache using an approximation of the LRU\nalgorithm. The good parts of it were inspired by Raymond Hettinger's\n``lru_cache`` recipe.\n\nOther available caches that can be easily integrated are ``httplib2``'s\n``FileCache`` or ``memcache``. See the ``httplib2`` documentation for details.\n\nConfiguration via paste.ini\n---------------------------\n\nThe ``wesgi.filter_app_factory`` function lets you configure ``wesgi`` in your\npaste.ini file. 
For example::\n\n [filter-app:wesgi]\n paste.filter_app_factory = wesgi:filter_app_factory\n cache=lru_memory\n cache_maxsize=10\n policy=akamai\n policy_chase_redirect=True\n next = myapp\n\nDevelopment\n===========\n\nDevelopment on `wesgi` is centered around this GitHub branch:\n\n https://github.com/jinty/wesgi\n\nCHANGES\n=======\n\n0.12 (2016-10-06)\n-----------------\n\nFixes\n+++++\n\n- Fix \"dictionary changed size during iteration\" errors on Python 3.\n\n0.11 (2016-05-25)\n-----------------\n\nFeatures\n++++++++\n\n- Configuration via paste, rescued from missing 0.9 release.\n\n0.10 (2016-05-25)\n-----------------\n\nFeatures\n++++++++\n\n- Python 3 support, drop Python 2.5 support.\n- Request header forwarding by default.\n- Turn relative links in