{ "info": { "author": "Roger Ineichen, Projekt01 GmbH", "author_email": "dev@projekt01.ch", "bugtrack_url": null, "classifiers": [ "Development Status :: 4 - Beta", "Environment :: Web Environment", "Framework :: Zope3", "Intended Audience :: Developers", "License :: OSI Approved :: Zope Public License", "Natural Language :: English", "Operating System :: OS Independent", "Programming Language :: Python", "Topic :: Internet :: WWW/HTTP" ], "description": "This package provides an elasticsearch client for Zope3.\n\n\n======\nREADME\n======\n\nThis package provides an elasticsearch client. Note we use a different port\nwithin our elasticsearch server stub (45299 instead of 9200). See\nelasticsearch/config for more info:\n\n >>> from pprint import pprint\n >>> from p01.elasticsearch import interfaces\n >>> from p01.elasticsearch.pool import ServerPool\n >>> from p01.elasticsearch.pool import ElasticSearchConnectionPool\n\n >>> servers = ['localhost:45299']\n >>> serverPool = ServerPool(servers, retryDelay=10, timeout=5)\n\n >>> import p01.elasticsearch.testing\n >>> statusRENormalizer = p01.elasticsearch.testing.statusRENormalizer\n\n\nElasticSearchConnectionPool\n---------------------------\n\nWe need to setup a elasticsearch connection pool:\n\n >>> connectionPool = ElasticSearchConnectionPool(serverPool)\n\nThe connection pool stores the connection in threading local. You can set the\nre-connection time which is by default set to 60 seconds:\n\n >>> connectionPool\n \n\n >>> connectionPool.reConnectIntervall\n 60\n\n >>> connectionPool.reConnectIntervall = 30\n >>> connectionPool.reConnectIntervall\n 30\n\n\nElasticSearchConnection\n-----------------------\n\nNow we are able to get a connection which is persistent and observed by a \nthread local from the pool:\n\n >>> conn = connectionPool.connection\n >>> conn\n \n\nSuch a connection provides a server pool which de connection can choose from.\nIf a server goes down, another server get used. 
The connection is also\nbalancing HTTP connections between all servers:\n\n >>> conn.serverPool\n \n\n >>> conn.serverPool.info\n 'localhost:45299'\n\nAlso a maxRetries value is provided. If the default None is given, the\nconnection will use the number of alive servers as the maximum retry count,\ne.g. len(self.serverPool.aliveServers):\n\n >>> conn.maxRetries is None\n True\n\nAnother property called autoRefresh is responsible for calling refresh\nimplicitly if a previous connection call changed the search index, e.g. as the\nindex call would do:\n\n >>> conn.autoRefresh\n False\n\nAnd there is a marker for the bulk size, used if we set the bulk marker which\nsome methods provide. The bulkMaxSize value makes sure that no more than the\ngiven amount of items get cached in the connection before they are sent to the\nserver:\n\n >>> conn.bulkMaxSize\n 400\n\n\nMapping Configuration\n---------------------\n\nOur test setup uses a predefined mapping configuration. This, I guess, is the\ncommon use case in most projects. I'm not really a friend of dynamic mapping,\nat least when it comes to migration and legacy data handling. But of course for\nsome use cases dynamic mapping is a nice feature, at least if you have to index\ncrawled data and offer a search over all (_all) fields. Let's test our\npredefined mappings:\n\nUp to elasticsearch version 19.1, this would return {}, but now it returns\nstatus 404, so our code raises an exception. This will be fixed in\nelasticsearch 19.5.\n\n >>> conn.getMapping()\n {}\n\nAs you can see, we don't get a default mapping yet. First we need to index at\nleast one item. Let's index a first job:\n\n >>> job = {'title': u'Wir suchen einen Marketingplaner',\n ... 
'description': u'Wir bieten eine gute Anstellung'}\n\n >>> pprint(conn.index(job, 'testing', 'job', 1))\n {u'_id': u'1',\n u'_index': u'testing',\n u'_type': u'job',\n u'_version': 1,\n u'ok': True}\n\n >>> statusRENormalizer.pprint(conn.getMapping())\n {u'testing': {u'job': {u'_all': {u'store': u'yes'},\n u'_id': {u'store': u'yes'},\n u'_index': {u'enabled': True},\n u'_type': {u'store': u'yes'},\n u'properties': {u'__name__': {u'boost': 2.0,\n u'include_in_all': False,\n u'null_value': u'na',\n u'type': u'string'},\n u'contact': {u'include_in_all': False,\n u'properties': {u'firstname': {u'include_in_all': False,\n u'type': u'string'},\n u'lastname': {u'include_in_all': False,\n u'type': u'string'}}},\n u'description': {u'include_in_all': True,\n u'null_value': u'na',\n u'type': u'string'},\n u'location': {u'geohash': True,\n u'lat_lon': True,\n u'type': u'geo_point'},\n u'published': {u'format': u'date_optional_time',\n u'type': u'date'},\n u'requirements': {u'properties': {u'description': {u'type': u'string'},\n u'name': {u'type': u'string'}}},\n u'tags': {u'index_name': u'tag',\n u'type': u'string'},\n u'title': {u'boost': 2.0,\n u'include_in_all': True,\n u'null_value': u'na',\n u'type': u'string'}}}}}\n\nLet's define another item with more data and index them:\n\n >>> import datetime\n >>> job = {'title': u'Wir suchen einen Buchhalter',\n ... 'description': u'Wir bieten Ihnen eine gute Anstellung',\n ... 'requirements': [\n ... {'name': u'MBA', 'description': u'MBA Abschluss'}\n ... ],\n ... 'tags': [u'MBA', u'certified'],\n ... 'published': datetime.datetime(2011, 02, 24, 12, 0, 0),\n ... 'contact': {\n ... 'firstname': u'Jessy',\n ... 'lastname': u'Ineichen',\n ... },\n ... 'location': [-71.34, 41.12]}\n >>> pprint(conn.index(job, 'testing', 'job', 2))\n {u'_id': u'2',\n u'_index': u'testing',\n u'_type': u'job',\n u'_version': 1,\n u'ok': True}\n\n\n >>> import time\n >>> time.sleep(1)\n\nget\n---\n\nNow let's get the job from our index by it's id. 
But first refresh our index:\n\n >>> statusRENormalizer.pprint(conn.get(2, \"testing\", \"job\"))\n {u'_id': u'2',\n u'_index': u'testing',\n u'_source': {u'contact': {u'firstname': u'Jessy', u'lastname': u'Ineichen'},\n u'description': u'Wir bieten Ihnen eine gute Anstellung',\n u'location': [..., ...],\n u'published': datetime.datetime(2011, 2, 24, 12, 0),\n u'requirements': [{u'description': u'MBA Abschluss',\n u'name': u'MBA'}],\n u'tags': [u'MBA', u'certified'],\n u'title': u'Wir suchen einen Buchhalter'},\n u'_type': u'job',\n u'_version': 1,\n u'exists': True}\n\nsearch\n------\n\nNow also let's try to search:\n\n >>> response = conn.search(\"title:Buchhalter\", 'testing', 'job')\n >>> response\n \n\n >>> statusRENormalizer.pprint(response.data)\n {u'_shards': {u'failed': 0, u'successful': 5, u'total': 5},\n u'hits': {u'hits': [{u'_id': u'2',\n u'_index': u'testing',\n u'_score': ...,\n u'_source': {u'contact': {u'firstname': u'Jessy',\n u'lastname': u'Ineichen'},\n u'description': u'Wir bieten Ihnen eine gute Anstellung',\n u'location': [..., ...],\n u'published': datetime.datetime(2011, 2, 24, 12, 0),\n u'requirements': [{u'description': u'MBA Abschluss',\n u'name': u'MBA'}],\n u'tags': [u'MBA', u'certified'],\n u'title': u'Wir suchen einen Buchhalter'},\n u'_type': u'job'}],\n u'max_score': ...,\n u'total': 1},\n u'timed_out': False,\n u'took': ...}\n\nAs you can see, our search response wrapper knows about some important\nvalues:\n\n >>> response.start\n 0\n\n >>> response.size\n 0\n\n >>> response.total\n 1\n\n >>> response.pages\n 1\n\n >>> pprint(response.hits)\n [{u'_id': u'2',\n u'_index': u'testing',\n u'_score': ...,\n u'_source': {u'contact': {u'firstname': u'Jessy',\n u'lastname': u'Ineichen'},\n u'description': u'Wir bieten Ihnen eine gute Anstellung',\n u'location': [..., ...],\n u'published': datetime.datetime(2011, 2, 24, 12, 0),\n u'requirements': [{u'description': u'MBA Abschluss',\n u'name': u'MBA'}],\n u'tags': [u'MBA', u'certified'],\n 
u'title': u'Wir suchen einen Buchhalter'},\n u'_type': u'job'}]\n\nNow let's search for more than one job:\n\n >>> response = conn.search(\"Anstellung\", 'testing', 'job')\n >>> pprint(response.data)\n {u'_shards': {u'failed': 0, u'successful': 5, u'total': 5},\n u'hits': {u'hits': [{u'_id': u'1',\n u'_index': u'testing',\n u'_score': ...,\n u'_source': {u'description': u'Wir bieten eine gute Anstellung',\n u'title': u'Wir suchen einen Marketingplaner'},\n u'_type': u'job'},\n {u'_id': u'2',\n u'_index': u'testing',\n u'_score': ...,\n u'_source': {u'contact': {u'firstname': u'Jessy',\n u'lastname': u'Ineichen'},\n u'description': u'Wir bieten Ihnen eine gute Anstellung',\n u'location': [..., ...],\n u'published': datetime.datetime(2011, 2, 24, 12, 0),\n u'requirements': [{u'description': u'MBA Abschluss',\n u'name': u'MBA'}],\n u'tags': [u'MBA', u'certified'],\n u'title': u'Wir suchen einen Buchhalter'},\n u'_type': u'job'}],\n u'max_score': ...,\n u'total': 2},\n u'timed_out': False,\n u'took': ...}\n\nNow try to limit the search result using from and size parameters:\n\n >>> params = {'from': 0, 'size': 1}\n >>> response = conn.search(\"Anstellung\", 'testing', 'job', **params)\n >>> pprint(response.data)\n {u'_shards': {u'failed': 0, u'successful': 5, u'total': 5},\n u'hits': {u'hits': [{u'_id': u'1',\n u'_index': u'testing',\n u'_score': ...,\n u'_source': {u'description': u'Wir bieten eine gute Anstellung',\n u'title': u'Wir suchen einen Marketingplaner'},\n u'_type': u'job'}],\n u'max_score': ...,\n u'total': 2},\n u'timed_out': False,\n u'took': ...}\n\n >>> response.start\n 0\n\n >>> response.size\n 1\n\n >>> response.total\n 2\n\n >>> response.pages\n 2\n\n >>> params = {'from': 1, 'size': 1}\n >>> response = conn.search(\"Anstellung\", 'testing', 'job', **params)\n >>> pprint(response.data)\n {u'_shards': {u'failed': 0, u'successful': 5, u'total': 5},\n u'hits': {u'hits': [{u'_id': u'2',\n u'_index': u'testing',\n u'_score': ...,\n u'_source': 
{u'contact': {u'firstname': u'Jessy',\n u'lastname': u'Ineichen'},\n u'description': u'Wir bieten Ihnen eine gute Anstellung',\n u'location': [..., ...],\n u'published': datetime.datetime(2011, 2, 24, 12, 0),\n u'requirements': [{u'description': u'MBA Abschluss',\n u'name': u'MBA'}],\n u'tags': [u'MBA', u'certified'],\n u'title': u'Wir suchen einen Buchhalter'},\n u'_type': u'job'}],\n u'max_score': ...,\n u'total': 2},\n u'timed_out': False,\n u'took': ...}\n\n >>> response.start\n 1\n\n >>> response.size\n 1\n\n >>> response.total\n 2\n\n >>> response.pages\n 2\n\nAs you can see in the above sample, we got only one hit in each query\nbecause of our size=1 parameter, and both search results show the total of 2\nwhich we would get from the server without using size and from.\n\n\n=====\nIndex\n=====\n\nThis test will set up some sample data in our test setup method. After that a\nnew elasticsearch instance in another sandbox is started for this test. Check\nthe p01/elasticsearch/test.py file for more info about the sample data and\nelasticsearch server setup.\n\nWe will test if we can delete an existing index and create it with the same\nmapping again:\n\n >>> import json\n >>> from pprint import pprint\n >>> import p01.elasticsearch.testing\n >>> statusRENormalizer = p01.elasticsearch.testing.statusRENormalizer\n\nNow let's define a new elasticsearch connection based on our server pool:\n\n >>> conn = p01.elasticsearch.testing.getTestConnection()\n\nNow we are ready to access the elasticsearch server. 
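The status() call used below is conceptually just an HTTP GET against one of the pool's servers (pre-1.0 elasticsearch releases expose index status under /_status). The helper names below are hypothetical, for illustration only, not part of the package API:

```python
import json
import urllib.request

def statusURL(server, index=None):
    """Build the URL for an elasticsearch status request (sketch).

    With no index the cluster-wide /_status endpoint is used, otherwise
    /<index>/_status.
    """
    path = '/_status' if index is None else '/%s/_status' % index
    return 'http://%s%s' % (server, path)

def status(server, index=None):
    """Fetch and decode the status document (requires a running server)."""
    with urllib.request.urlopen(statusURL(server, index)) as response:
        return json.loads(response.read().decode('utf-8'))
```

The real client additionally routes the request through the server pool, so a dead server is skipped transparently.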
Check the status:\n\n >>> statusRENormalizer.pprint(conn.status())\n {u'_shards': {u'failed': 0, u'successful': 1, u'total': 1},\n u'indices': {u'companies': {u'docs': {u'deleted_docs': 0,\n u'max_doc': ...,\n u'num_docs': ...},\n u'flush': {u'total': 0,\n u'total_time': u'...',\n u'total_time_in_millis': ...},\n u'index': {u'primary_size': u'...',\n u'primary_size_in_bytes': ...,\n u'size': u'...',\n u'size_in_bytes': ...},\n u'merges': {u'current': 0,\n u'current_docs': 0,\n u'current_size': u'0b',\n u'current_size_in_bytes': 0,\n u'total': 0,\n u'total_docs': 0,\n u'total_size': u'0b',\n u'total_size_in_bytes': 0,\n u'total_time': u'...',\n u'total_time_in_millis': ...},\n u'refresh': {u'total': ...,\n u'total_time': u'...',\n u'total_time_in_millis': ...},\n u'shards': {u'0': [{u'docs': {u'deleted_docs': 0,\n u'max_doc': ...,\n u'num_docs': ...},\n u'flush': {u'total': 0,\n u'total_time': u'...',\n u'total_time_in_millis': ...},\n u'index': {u'size': u'...',\n u'size_in_bytes': ...},\n u'merges': {u'current': 0,\n u'current_docs': 0,\n u'current_size': u'0b',\n u'current_size_in_bytes': 0,\n u'total': 0,\n u'total_docs': 0,\n u'total_size': u'0b',\n u'total_size_in_bytes': 0,\n u'total_time': u'...',\n u'total_time_in_millis': ...},\n u'refresh': {u'total': ...,\n u'total_time': u'...',\n u'total_time_in_millis': ...},\n u'routing': {u'index': u'companies',\n u'node': u'...',\n u'primary': True,\n u'relocating_node': None,\n u'shard': 0,\n u'state': u'STARTED'},\n u'state': u'STARTED',\n u'translog': {u'id': ...,\n u'operations': 0}}]},\n u'translog': {u'operations': 0}}},\n u'ok': True}\n\n\nAs you can see, we can test our sample data created mapping:\n\n >>> pprint(conn.getMapping('companies', 'company'))\n {u'company': {u'properties': {u'__name__': {u'type': u'string'},\n u'city': {u'type': u'string'},\n u'number': {u'ignore_malformed': False,\n u'type': u'long'},\n u'street': {u'type': u'string'},\n u'text': {u'type': u'string'},\n u'zip': {u'type': 
u'string'}}}}\n\nAnd search for the sample data which we added with our sample data generator\nin our test setup:\n\n >>> pprint(conn.search('street').total)\n 100\n\n\ndeleteIndex\n-----------\n\nNow we will delete the index:\n\n >>> conn.deleteIndex('companies')\n {u'acknowledged': True, u'ok': True}\n\nAs you can see, there is no index anymore:\n\n >>> statusRENormalizer.pprint(conn.status())\n {u'_shards': {u'failed': 0, u'successful': 0, u'total': 0},\n u'indices': {},\n u'ok': True}\n\n\ncreateIndex\n-----------\n\nNow we can create the index again. Let's get our sample data mapping:\n\n >>> import os.path\n >>> import json\n >>> import p01.elasticsearch\n >>> mFile = os.path.join(os.path.dirname(p01.elasticsearch.__file__),\n ... 'sample', 'config', 'companies', 'company.json')\n\n >>> f = open(mFile)\n >>> data = f.read()\n >>> f.close()\n >>> mappings = json.loads(data)\n >>> pprint(mappings)\n {u'company': {u'_all': {u'enabled': True, u'store': u'yes'},\n u'_id': {u'store': u'yes'},\n u'_index': {u'enabled': True},\n u'_source': {u'enabled': False},\n u'_type': {u'store': u'yes'},\n u'properties': {u'__name__': {u'include_in_all': False,\n u'index': u'not_analyzed',\n u'store': u'yes',\n u'type': u'string'},\n u'_id': {u'include_in_all': False,\n u'index': u'no',\n u'store': u'yes',\n u'type': u'string'},\n u'city': {u'boost': 1.0,\n u'include_in_all': True,\n u'index': u'not_analyzed',\n u'null_value': u'na',\n u'store': u'yes',\n u'type': u'string'},\n u'street': {u'boost': 1.0,\n u'include_in_all': True,\n u'index': u'not_analyzed',\n u'null_value': u'na',\n u'store': u'yes',\n u'type': u'string'},\n u'text': {u'boost': 1.0,\n u'include_in_all': True,\n u'index': u'not_analyzed',\n u'null_value': u'na',\n u'store': u'yes',\n u'type': u'string'},\n u'zip': {u'boost': 1.0,\n u'include_in_all': True,\n u'index': u'not_analyzed',\n u'null_value': u'na',\n u'store': u'yes',\n u'type': u'string'}}}}\n\nNow we can create a new index with the given 
mapping:\n\n >>> conn.createIndex('companies', mappings=mappings)\n {u'acknowledged': True, u'ok': True}\n\nAs you can see, our index and mapping is back again:\n\n >>> statusRENormalizer.pprint(conn.status())\n {u'_shards': {u'failed': 0, u'successful': 1, u'total': 1},\n u'indices': {u'companies': {u'docs': {u'deleted_docs': 0,\n u'max_doc': ...,\n u'num_docs': ...},\n u'flush': {u'total': 0,\n u'total_time': u'...',\n u'total_time_in_millis': ...},\n u'index': {u'primary_size': u'...',\n u'primary_size_in_bytes': ...,\n u'size': u'...',\n u'size_in_bytes': ...},\n u'merges': {u'current': 0,\n u'current_docs': 0,\n u'current_size': u'0b',\n u'current_size_in_bytes': 0,\n u'total': 0,\n u'total_docs': 0,\n u'total_size': u'0b',\n u'total_size_in_bytes': 0,\n u'total_time': u'...',\n u'total_time_in_millis': ...},\n u'refresh': {u'total': ...,\n u'total_time': u'...',\n u'total_time_in_millis': ...},\n u'shards': {u'0': [{u'docs': {u'deleted_docs': 0,\n u'max_doc': ...,\n u'num_docs': ...},\n u'flush': {u'total': 0,\n u'total_time': u'...',\n u'total_time_in_millis': ...},\n u'index': {u'size': u'...',\n u'size_in_bytes': ...},\n u'merges': {u'current': 0,\n u'current_docs': 0,\n u'current_size': u'0b',\n u'current_size_in_bytes': 0,\n u'total': 0,\n u'total_docs': 0,\n u'total_size': u'0b',\n u'total_size_in_bytes': 0,\n u'total_time': u'...',\n u'total_time_in_millis': ...},\n u'refresh': {u'total': ...,\n u'total_time': u'...',\n u'total_time_in_millis': ...},\n u'routing': {u'index': u'companies',\n u'node': u'...',\n u'primary': True,\n u'relocating_node': None,\n u'shard': 0,\n u'state': u'STARTED'},\n u'state': u'STARTED',\n u'translog': {u'id': ...,\n u'operations': 0}}]},\n u'translog': {u'operations': 0}}},\n u'ok': True}\n\n >>> pprint(conn.getMapping('companies', 'company'))\n {u'company': {u'_all': {u'store': u'yes'},\n u'_id': {u'store': u'yes'},\n u'_index': {u'enabled': True},\n u'_source': {u'enabled': False},\n u'_type': {u'store': u'yes'},\n 
u'properties': {u'__name__': {u'include_in_all': False,\n u'index': u'not_analyzed',\n u'store': u'yes',\n u'type': u'string'},\n u'city': {u'include_in_all': True,\n u'index': u'not_analyzed',\n u'null_value': u'na',\n u'store': u'yes',\n u'type': u'string'},\n u'street': {u'include_in_all': True,\n u'index': u'not_analyzed',\n u'null_value': u'na',\n u'store': u'yes',\n u'type': u'string'},\n u'text': {u'include_in_all': True,\n u'index': u'not_analyzed',\n u'null_value': u'na',\n u'store': u'yes',\n u'type': u'string'},\n u'zip': {u'include_in_all': True,\n u'index': u'not_analyzed',\n u'null_value': u'na',\n u'store': u'yes',\n u'type': u'string'}}}}\n\nAs you can see, the index is empty:\n\n >>> pprint(conn.search('street').total)\n 0\n\n\n=======\nMapping\n=======\n\nNote: this test will start and run an elasticsearch server on port 45299!\n\nThis test experiments with some mapping configurations. Since the elasticsearch\ndocumentation is not very clear to me, I try to find out how the mapping part\nhas to be done here.\n\n >>> from pprint import pprint\n >>> from p01.elasticsearch import interfaces\n >>> from p01.elasticsearch.pool import ServerPool\n >>> from p01.elasticsearch.pool import ElasticSearchConnectionPool\n\nSet up a connection:\n\n >>> servers = ['localhost:45299']\n >>> serverPool = ServerPool(servers)\n >>> connectionPool = ElasticSearchConnectionPool(serverPool)\n >>> conn = connectionPool.connection\n\nLet's set up a mapping definition:\n\n >>> mapping = {\n ... 'item': {\n ... 'properties': {\n ... 'boolean': {\n ... 'type': 'boolean'\n ... },\n ... 'date': {\n ... 'type': 'date'\n ... },\n ... 'datetime': {\n ... 'type': 'date'\n ... },\n ... 'double': {\n ... 'type': 'double'\n ... },\n ... 'float': {\n ... 'type': 'float'\n ... },\n ... 'integer': {\n ... 'type': 'integer'\n ... },\n ... 'long': {\n ... 'type': 'long'\n ... },\n ... 'string': {\n ... 'type': 'string',\n ... 'null_value' : 'nada'\n ... },\n ... }\n ... }\n ... 
}\n\nNow let's add the mapping using our putMapping method and call refresh:\n\n >>> conn.putMapping(mapping, 'test-mapping', 'item')\n Traceback (most recent call last):\n ...\n IndexMissingException: [test-mapping] missing\n\nAs you can see, there was an exception because our index doesn't exist yet.\nLet's add our test-mapping index and try again:\n\n >>> conn.createIndex('test-mapping')\n {u'acknowledged': True, u'ok': True}\n\n >>> pprint(conn.refresh('test-mapping', 4))\n {u'_shards': {u'failed': 0, u'successful': ..., u'total': 10}, u'ok': True}\n\n >>> conn.putMapping(mapping, 'test-mapping', 'item')\n {u'acknowledged': True, u'ok': True}\n\n >>> pprint(conn.refresh('test-mapping', 4))\n {u'_shards': {u'failed': 0, u'successful': ..., u'total': 10}, u'ok': True}\n\nAnd get our mapping:\n\n >>> pprint(conn.getMapping('test-mapping', 'item'), width=60)\n {u'item': {u'properties': {u'boolean': {u'type': u'boolean'},\n u'date': {u'format': u'dateOptionalTime',\n u'type': u'date'},\n u'datetime': {u'format': u'dateOptionalTime',\n u'type': u'date'},\n u'double': {u'type': u'double'},\n u'float': {u'type': u'float'},\n u'integer': {u'type': u'integer'},\n u'long': {u'type': u'long'},\n u'string': {u'null_value': u'nada',\n u'type': u'string'}}}}\n\nNow let's index a new item:\n\n >>> import datetime\n >>> doc = {'boolean': True,\n ... 'datetime': datetime.datetime(2011, 02, 24, 12, 0, 0),\n ... 'date': datetime.date(2011, 02, 24),\n ... 'float': float(42),\n ... 'integer': int(42),\n ... 'long': long(42*10000000000000000),\n ... 
'string': 'string'}\n >>> conn.index(doc, 'test-mapping', 'item', 1)\n {u'_type': u'item', u'_id': u'1', u'ok': True, u'_version': 1, u'_index': u'test-mapping'}\n\nrefresh index:\n\n >>> pprint(conn.refresh('test-mapping', 4))\n {u'_shards': {u'failed': 0, u'successful': 5, u'total': 10}, u'ok': True}\n\nand search for our index items:\n\n >>> response = conn.search('string', 'test-mapping', 'item')\n >>> data = response.data\n >>> pprint(data)\n {u'_shards': {u'failed': 0, u'successful': 5, u'total': 5},\n u'hits': {u'hits': [{u'_id': u'1',\n u'_index': u'test-mapping',\n u'_score': ...,\n u'_source': {u'boolean': True,\n u'date': datetime.datetime(2011, 2, 24, 0, 0),\n u'datetime': datetime.datetime(2011, 2, 24, 12, 0),\n u'float': 42.0,\n u'integer': 42,\n u'long': 420000000000000000L,\n u'string': u'string'},\n u'_type': u'item'}],\n u'max_score': ...,\n u'total': 1},\n u'timed_out': False,\n u'took': ...}\n\nNow check our values:\n\n >>> source = data['hits']['hits'][0]['_source']\n >>> pprint(source)\n {u'boolean': True,\n u'date': datetime.datetime(2011, 2, 24, 0, 0),\n u'datetime': datetime.datetime(2011, 2, 24, 12, 0),\n u'float': 42.0,\n u'integer': 42,\n u'long': 420000000000000000L,\n u'string': u'string'}\n\n >>> isinstance(source['boolean'], bool)\n True\n\n >>> isinstance(source['datetime'], datetime.datetime)\n True\n\n >>> isinstance(source['date'], datetime.date)\n True\n\n >>> isinstance(source['float'], float)\n True\n\n >>> isinstance(source['integer'], int)\n True\n\n >>> isinstance(source['long'], long)\n True\n\n >>> isinstance(source['string'], basestring)\n True\n\n >>> isinstance(source['string'], unicode)\n True\n\nNote, the datetime and date are also datetime and date items:\n\n >>> isinstance(source['date'], datetime.datetime)\n True\n\n >>> isinstance(source['datetime'], datetime.date)\n True\n\n\n================\nScan Search Type\n================\n\nNote: this test will start and run an elasticsearch server on port 45299!\n\nLet's 
just do some simple tests without using a connection pool.\n\n >>> from pprint import pprint\n >>> from p01.elasticsearch.connection import ElasticSearchConnection\n >>> from p01.elasticsearch.exceptions import ElasticSearchServerException\n >>> from p01.elasticsearch.pool import ServerPool\n\n >>> servers = ['localhost:45299']\n >>> serverPool = ServerPool(servers)\n\nNow we are able to get a connection which is persistent and observed by a\nthread local.\n\n >>> conn = ElasticSearchConnection(serverPool)\n\nSet up a test mapping and add a few documents:\n\n >>> conn.createIndex('scanning')\n {u'acknowledged': True, u'ok': True}\n\n >>> for i in range(1000):\n ... _id = unicode(i)\n ... doc = {'_id': _id, 'dummy': u'dummy'}\n ... ignored = conn.index(doc, 'scanning', 'doc')\n\n >>> conn.refresh('scanning')\n {u'ok': True, u'_shards': {u'successful': 5, u'failed': 0, u'total': 10}}\n\nLet's show how we can batch large search results with our scan method.\n\n >>> pprint(conn.search('dummy', 'scanning').total)\n 1000\n\n >>> result = list(conn.scan('dummy', 'scanning'))\n >>> len(result)\n 1000\n\n >>> pprint(sorted(result)[:5])\n [{u'_id': u'0',\n u'_index': u'scanning',\n u'_score': 0.0,\n u'_source': {u'_id': u'0', u'dummy': u'dummy'},\n u'_type': u'doc'},\n {u'_id': u'1',\n u'_index': u'scanning',\n u'_score': 0.0,\n u'_source': {u'_id': u'1', u'dummy': u'dummy'},\n u'_type': u'doc'},\n {u'_id': u'10',\n u'_index': u'scanning',\n u'_score': 0.0,\n u'_source': {u'_id': u'10', u'dummy': u'dummy'},\n u'_type': u'doc'},\n {u'_id': u'100',\n u'_index': u'scanning',\n u'_score': 0.0,\n u'_source': {u'_id': u'100', u'dummy': u'dummy'},\n u'_type': u'doc'},\n {u'_id': u'101',\n u'_index': u'scanning',\n u'_score': 0.0,\n u'_source': {u'_id': u'101', u'dummy': u'dummy'},\n u'_type': u'doc'}]\n\n\n====\nBulk\n====\n\nNote: this test will start and run an elasticsearch server on port 45299!\n\nThis test shows how to index items using the bulk concept.\n\n >>> from pprint 
import pprint\n >>> from p01.elasticsearch import interfaces\n >>> from p01.elasticsearch.pool import ServerPool\n >>> from p01.elasticsearch.pool import ElasticSearchConnectionPool\n\n >>> servers = ['localhost:45299']\n >>> serverPool = ServerPool(servers)\n\nNow we are able to get a connection which is persistent and observed by a\nthread local from the pool:\n\n >>> connectionPool = ElasticSearchConnectionPool(serverPool)\n >>> conn = connectionPool.connection\n >>> conn\n \n\nLet's set the bulkMaxSize to 5. This means if we index 5 items, the index\nmethod will implicitly send an index request to the server:\n\n >>> conn.bulkMaxSize = 5\n\n >>> conn.bulkMaxSize\n 5\n\nLet's bulk index some items:\n\n >>> doc = {'title': u'Wir suchen einen Marketingplaner',\n ... 'description': u'Wir bieten Ihnen eine gute Anstellung'}\n >>> conn.bulkIndex(doc, 'testing', 'job', 1)\n\n >>> doc = {'title': u'Wir suchen einen Buchhalter',\n ... 'description': u'Wir bieten Ihnen eine gute Anstellung'}\n >>> conn.bulkIndex(doc, 'testing', 'job', 2)\n\nNow commit our bulk data, even if we didn't index the full amount of\nbulkMaxSize:\n\n >>> pprint(conn.bulkCommit())\n {u'items': [{u'index': {u'_id': u'1',\n u'_index': u'testing',\n u'_type': u'job',\n u'_version': 1,\n u'ok': True}},\n {u'index': {u'_id': u'2',\n u'_index': u'testing',\n u'_type': u'job',\n u'_version': 1,\n u'ok': True}}],\n u'took': ...}\n\n >>> conn.bulkCounter\n 0\n\nNow we search the items:\n\n >>> response = conn.search(\"Anstellung\", 'testing', 'job')\n >>> pprint(response.data)\n {u'_shards': {u'failed': 0, u'successful': 5, u'total': 5},\n u'hits': {u'hits': [], u'max_score': None, u'total': 0},\n u'timed_out': False,\n u'took': ...}\n\nAs you can see, the data is not searchable yet because we didn't use the refresh\nparameter. 
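The buffering behaviour shown above can be modelled roughly as follows. This is a simplified sketch with a hypothetical BulkBuffer class, not the package API; the real client posts the buffered items to the elasticsearch _bulk endpoint on commit:

```python
class BulkBuffer:
    """Collects index operations and commits automatically once
    bulkMaxSize items are queued (illustrative sketch only)."""

    def __init__(self, bulkMaxSize=400):
        self.bulkMaxSize = bulkMaxSize
        self.bulkItems = []

    @property
    def bulkCounter(self):
        # number of items waiting in the buffer
        return len(self.bulkItems)

    def bulkIndex(self, doc, index, docType, docId):
        action = {'index': {'_index': index, '_type': docType, '_id': docId}}
        self.bulkItems.append((action, doc))
        if len(self.bulkItems) >= self.bulkMaxSize:
            # implicit commit once the buffer is full
            return self.bulkCommit()
        return None  # nothing sent yet

    def bulkCommit(self):
        items, self.bulkItems = self.bulkItems, []
        # a real implementation would POST these items to /_bulk here
        return {'items': [action for action, doc in items]}
```

With bulkMaxSize=5 this reproduces the behaviour in the session above: the first four bulkIndex calls only grow the buffer, the fifth triggers the commit and resets bulkCounter to 0.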
Let's call refresh now:\n\n >>> conn.refresh('testing')\n {u'ok': True, u'_shards': {u'successful': 5, u'failed': 0, u'total': 10}}\n\nand search again:\n >>> response = conn.search(\"Anstellung\", 'testing', 'job')\n >>> pprint(response.data)\n {u'_shards': {u'failed': 0, u'successful': 5, u'total': 5},\n u'hits': {u'hits': [{u'_id': u'1',\n u'_index': u'testing',\n u'_score': ...,\n u'_source': {u'description': u'Wir bieten Ihnen eine gute Anstellung',\n u'title': u'Wir suchen einen Marketingplaner'},\n u'_type': u'job'},\n {u'_id': u'2',\n u'_index': u'testing',\n u'_score': ...,\n u'_source': {u'description': u'Wir bieten Ihnen eine gute Anstellung',\n u'title': u'Wir suchen einen Buchhalter'},\n u'_type': u'job'}],\n u'max_score': ...,\n u'total': 2},\n u'timed_out': False,\n u'took': ...}\n\nLet's index more items till we reach the bulkMaxSize:\n\n >>> len(conn.bulkItems)\n 0\n\n >>> doc = {'title': u'Wir suchen einen Koch',\n ... 'description': u'Wir bieten Ihnen eine gute Anstellung'}\n >>> conn.bulkIndex(doc, 'testing', 'job', 3)\n\n >>> conn.bulkCounter\n 1\n\n >>> doc = {'title': u'Wir suchen einen Sachbearbeiter',\n ... 'description': u'Wir bieten Ihnen eine gute Anstellung'}\n >>> conn.bulkIndex(doc, 'testing', 'job', 4)\n\n >>> conn.bulkCounter\n 2\n\n >>> doc = {'title': u'Wir suchen einen Mechaniker',\n ... 'description': u'Wir bieten Ihnen eine gute Anstellung'}\n >>> conn.bulkIndex(doc, 'testing', 'job', 5)\n\n >>> conn.bulkCounter\n 3\n\n >>> doc = {'title': u'Wir suchen einen Exportfachmann',\n ... 'description': u'Wir bieten Ihnen eine gute Anstellung'}\n >>> conn.bulkIndex(doc, 'testing', 'job', 6)\n\n >>> conn.bulkCounter\n 4\n\nNow, our bulkMaxSize forces to commit data:\n\n >>> doc = {'title': u'Wir suchen einen Entwickler',\n ... 
'description': u'Wir bieten Ihnen eine gute Anstellung'}\n >>> pprint(conn.bulkIndex(doc, 'testing', 'job', 7))\n {u'items': [{u'index': {u'_id': u'3',\n u'_index': u'testing',\n u'_type': u'job',\n u'_version': 1,\n u'ok': True}},\n {u'index': {u'_id': u'4',\n u'_index': u'testing',\n u'_type': u'job',\n u'_version': 1,\n u'ok': True}},\n {u'index': {u'_id': u'5',\n u'_index': u'testing',\n u'_type': u'job',\n u'_version': 1,\n u'ok': True}},\n {u'index': {u'_id': u'6',\n u'_index': u'testing',\n u'_type': u'job',\n u'_version': 1,\n u'ok': True}},\n {u'index': {u'_id': u'7',\n u'_index': u'testing',\n u'_type': u'job',\n u'_version': 1,\n u'ok': True}}],\n u'took': ...}\n\njust wait till the server calls refresh by itself every second by default:\n\n >>> import time\n >>> time.sleep(1)\n\n >>> len(conn.bulkItems)\n 0\n\nAs you can see, we have all 7 items indexed:\n\n >>> response = conn.search(\"Anstellung\", 'testing', 'job')\n >>> pprint(response.data)\n {u'_shards': {u'failed': 0, u'successful': 5, u'total': 5},\n u'hits': {u'hits': [{u'_id': u'1',\n u'_index': u'testing',\n u'_score': ...,\n u'_source': {u'description': u'Wir bieten Ihnen eine gute Anstellung',\n u'title': u'Wir suchen einen Marketingplaner'},\n u'_type': u'job'},\n {u'_id': u'6',\n u'_index': u'testing',\n u'_score': ...,\n u'_source': {u'description': u'Wir bieten Ihnen eine gute Anstellung',\n u'title': u'Wir suchen einen Exportfachmann'},\n u'_type': u'job'},\n {u'_id': u'2',\n u'_index': u'testing',\n u'_score': ...,\n u'_source': {u'description': u'Wir bieten Ihnen eine gute Anstellung',\n u'title': u'Wir suchen einen Buchhalter'},\n u'_type': u'job'},\n {u'_id': u'7',\n u'_index': u'testing',\n u'_score': ...,\n u'_source': {u'description': u'Wir bieten Ihnen eine gute Anstellung',\n u'title': u'Wir suchen einen Entwickler'},\n u'_type': u'job'},\n {u'_id': u'4',\n u'_index': u'testing',\n u'_score': ...,\n u'_source': {u'description': u'Wir bieten Ihnen eine gute Anstellung',\n 
u'title': u'Wir suchen einen Sachbearbeiter'},\n u'_type': u'job'},\n {u'_id': u'5',\n u'_index': u'testing',\n u'_score': ...,\n u'_source': {u'description': u'Wir bieten Ihnen eine gute Anstellung',\n u'title': u'Wir suchen einen Mechaniker'},\n u'_type': u'job'},\n {u'_id': u'3',\n u'_index': u'testing',\n u'_score': ...,\n u'_source': {u'description': u'Wir bieten Ihnen eine gute Anstellung',\n u'title': u'Wir suchen einen Koch'},\n u'_type': u'job'}],\n u'max_score': ...,\n u'total': 7},\n u'timed_out': False,\n u'took': ...}\n\n\n==========================\nSimple indexing and search\n==========================\n\nNote: this test will start and run an elasticsearch server on port 45299!\n\nThis test just uses non-predefined mappings. Let's just do some simple tests\nwithout using a connection pool.\n\n >>> from pprint import pprint\n >>> from p01.elasticsearch.connection import ElasticSearchConnection\n >>> from p01.elasticsearch.exceptions import ElasticSearchServerException\n >>> from p01.elasticsearch.pool import ServerPool\n\n >>> import p01.elasticsearch.testing\n >>> statusRENormalizer = p01.elasticsearch.testing.statusRENormalizer\n\n >>> servers = ['localhost:45299']\n >>> serverPool = ServerPool(servers)\n\nNow we are able to get a connection which is persistent and observed by a\nthread local.\n\n >>> conn = ElasticSearchConnection(serverPool)\n\nAdd a few documents:\n\n >>> pprint(conn.index({\"name\":\"Document One\"}, \"testdocs\", \"doc\", 1))\n {u'_id': u'1',\n u'_index': u'testdocs',\n u'_type': u'doc',\n u'_version': 1,\n u'ok': True}\n\n >>> pprint(conn.index({\"name\":\"Document Two\"}, \"testdocs\", \"doc\", 2))\n {u'_id': u'2',\n u'_index': u'testdocs',\n u'_type': u'doc',\n u'_version': 1,\n u'ok': True}\n\nNote, we call refresh here which will ensure that the documents get indexed at\nthe server side. Normally this should not be done explicitly in a production\nsetup. 
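The refresh behaviour discussed here can be mimicked with a toy index that only makes writes searchable on refresh. This is purely illustrative of elasticsearch's near-real-time model, not the package API:

```python
class ToyIndex:
    """Writes go to an in-memory buffer and only become searchable
    after refresh(), mimicking elasticsearch's near-real-time model."""

    def __init__(self):
        self._pending = []     # indexed, but not yet searchable
        self._searchable = []

    def index(self, doc):
        self._pending.append(doc)

    def refresh(self):
        # make all pending documents visible to search
        self._searchable.extend(self._pending)
        self._pending = []

    def search(self, word):
        # naive substring match over all field values
        return [d for d in self._searchable
                if word in ' '.join(str(v) for v in d.values())]
```

A document indexed into ToyIndex is invisible to search() until refresh() runs, which is exactly why the doctests above call refresh explicitly instead of waiting for the server's periodic refresh.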
The elasticsearch server is configured by default to run a refresh on the\nserver side every second:\n\n >>> pprint(conn.refresh(\"testdocs\"))\n {u'_shards': {u'failed': 0, u'successful': 5, u'total': 10}, u'ok': True}\n\nGet one:\n\n >>> pprint(conn.get(1, \"testdocs\", \"doc\"))\n {u'_id': u'1',\n u'_index': u'testdocs',\n u'_source': {u'name': u'Document One'},\n u'_type': u'doc',\n u'_version': 1,\n u'exists': True}\n\nCount the documents:\n\n >>> pprint(conn.count(\"name:Document One\"))\n {u'_shards': {u'failed': 0, u'successful': 5, u'total': 5}, u'count': 2}\n\n >>> pprint(conn.count(\"name:Document\"))\n {u'_shards': {u'failed': 0, u'successful': 5, u'total': 5}, u'count': 2}\n\nSearch for a document:\n\n >>> response = conn.search(\"name:Document One\")\n >>> pprint(response.data)\n {u'_shards': {u'failed': 0, u'successful': 5, u'total': 5},\n u'hits': {u'hits': [{u'_id': u'1',\n u'_index': u'testdocs',\n u'_score': 0.2712221,\n u'_source': {u'name': u'Document One'},\n u'_type': u'doc'},\n {u'_id': u'2',\n u'_index': u'testdocs',\n u'_score': 0.028130025,\n u'_source': {u'name': u'Document Two'},\n u'_type': u'doc'}],\n u'max_score': 0.2712221,\n u'total': 2},\n u'timed_out': False,\n u'took': ...}\n\n\nMore like this:\n\n >>> pprint(conn.index({\"name\":\"Document Three\"}, \"testdocs\", \"doc\", 3))\n {u'_id': u'3',\n u'_index': u'testdocs',\n u'_type': u'doc',\n u'_version': 1,\n u'ok': True}\n\n >>> pprint(conn.refresh(\"testdocs\"))\n {u'_shards': {u'failed': 0, u'successful': 5, u'total': 10}, u'ok': True}\n\n >>> pprint(conn.moreLikeThis(1, \"testdocs\", \"doc\",\n ... 
fields='name', min_term_freq=1, min_doc_freq=1))\n {u'_shards': {u'failed': 0, u'successful': 5, u'total': 5},\n u'hits': {u'hits': [{u'_id': u'2',\n u'_index': u'testdocs',\n u'_score': 0.19178301,\n u'_source': {u'name': u'Document Two'},\n u'_type': u'doc'},\n {u'_id': u'3',\n u'_index': u'testdocs',\n u'_score': 0.19178301,\n u'_source': {u'name': u'Document Three'},\n u'_type': u'doc'}],\n u'max_score': 0.19178301,\n u'total': 2},\n u'timed_out': False,\n u'took': ...}\n\nDelete Document Two:\n\n >>> pprint(conn.delete('2', \"testdocs\", \"doc\"))\n {u'_id': u'2',\n u'_index': u'testdocs',\n u'_type': u'doc',\n u'_version': 2,\n u'found': True,\n u'ok': True}\n\nDelete Document Three:\n\n >>> pprint(conn.delete('3', \"testdocs\", \"doc\"))\n {u'_id': u'3',\n u'_index': u'testdocs',\n u'_type': u'doc',\n u'_version': 2,\n u'found': True,\n u'ok': True}\n\nDelete the index:\n\n >>> pprint(conn.deleteIndex(\"testdocs\"))\n {u'acknowledged': True, u'ok': True}\n\nCreate a new index:\n\n >>> pprint(conn.createIndex(\"testdocs\"))\n {u'acknowledged': True, u'ok': True}\n\nTry to create the index again, which will fail:\n\n >>> conn.createIndex(\"testdocs\")\n Traceback (most recent call last):\n ...\n IndexAlreadyExistsException: Already exists\n\nAs you can see, the error provides an error message:\n\n >>> try:\n ... conn.createIndex(\"testdocs\")\n ... except ElasticSearchServerException, e:\n ... e.args[0]\n 'Already exists'\n\nAdd a new mapping:\n\n >>> mapping = {\"doc\" : {\"properties\" :\n ... 
{\"name\" : {\"type\" : \"string\", \"store\" : \"yes\"}}}}\n >>> pprint(conn.putMapping(mapping, 'testdocs', 'doc'))\n {u'acknowledged': True, u'ok': True}\n\nGet the status:\n\n >>> statusRENormalizer.pprint(conn.status(\"testdocs\"))\n {u'_shards': {u'failed': 0, u'successful': 5, u'total': 10},\n u'indices': {u'testdocs': {u'docs': {u'deleted_docs': 0,\n u'max_doc': ...,\n u'num_docs': ...},\n u'flush': {u'total': 0,\n u'total_time': u'...',\n u'total_time_in_millis': ...},\n u'index': {u'primary_size': u'...',\n u'primary_size_in_bytes': ...,\n u'size': u'...',\n u'size_in_bytes': ...},\n u'merges': {u'current': 0,\n u'current_docs': 0,\n u'current_size': u'0b',\n u'current_size_in_bytes': 0,\n u'total': 0,\n u'total_docs': 0,\n u'total_size': u'0b',\n u'total_size_in_bytes': 0,\n u'total_time': u'...',\n u'total_time_in_millis': ...},\n u'refresh': {u'total': ...,\n u'total_time': u'...',\n u'total_time_in_millis': ...},\n u'shards': {u'0': [{u'docs': {u'deleted_docs': 0,\n u'max_doc': ...,\n u'num_docs': ...},\n u'flush': {u'total': 0,\n u'total_time': u'...',\n u'total_time_in_millis': ...},\n u'index': {u'size': u'...',\n u'size_in_bytes': ...},\n u'merges': {u'current': 0,\n u'current_docs': 0,\n u'current_size': u'0b',\n u'current_size_in_bytes': 0,\n u'total': 0,\n u'total_docs': 0,\n u'total_size': u'0b',\n u'total_size_in_bytes': 0,\n u'total_time': u'...',\n u'total_time_in_millis': ...},\n u'refresh': {u'total': ...,\n u'total_time': u'...',\n u'total_time_in_millis': ...},\n u'routing': {u'index': u'testdocs',\n u'node': u'...',\n u'primary': True,\n u'relocating_node': None,\n u'shard': 0,\n u'state': u'STARTED'},\n u'state': u'STARTED',\n u'translog': {u'id': ...,\n u'operations': 0}}],\n u'1': [{u'docs': {u'deleted_docs': 0,\n u'max_doc': ...,\n u'num_docs': ...},\n u'flush': {u'total': 0,\n u'total_time': u'...',\n u'total_time_in_millis': ...},\n u'index': {u'size': u'...',\n u'size_in_bytes': ...},\n u'merges': {u'current': 0,\n 
u'current_docs': 0,\n u'current_size': u'0b',\n u'current_size_in_bytes': 0,\n u'total': 0,\n u'total_docs': 0,\n u'total_size': u'0b',\n u'total_size_in_bytes': 0,\n u'total_time': u'...',\n u'total_time_in_millis': ...},\n u'refresh': {u'total': ...,\n u'total_time': u'...',\n u'total_time_in_millis': ...},\n u'routing': {u'index': u'testdocs',\n u'node': u'...',\n u'primary': True,\n u'relocating_node': None,\n u'shard': 1,\n u'state': u'STARTED'},\n u'state': u'STARTED',\n u'translog': {u'id': ...,\n u'operations': 0}}],\n u'2': [{u'docs': {u'deleted_docs': 0,\n u'max_doc': ...,\n u'num_docs': ...},\n u'flush': {u'total': 0,\n u'total_time': u'...',\n u'total_time_in_millis': ...},\n u'index': {u'size': u'...',\n u'size_in_bytes': ...},\n u'merges': {u'current': 0,\n u'current_docs': 0,\n u'current_size': u'0b',\n u'current_size_in_bytes': 0,\n u'total': 0,\n u'total_docs': 0,\n u'total_size': u'0b',\n u'total_size_in_bytes': 0,\n u'total_time': u'...',\n u'total_time_in_millis': ...},\n u'refresh': {u'total': ...,\n u'total_time': u'...',\n u'total_time_in_millis': ...},\n u'routing': {u'index': u'testdocs',\n u'node': u'...',\n u'primary': True,\n u'relocating_node': None,\n u'shard': 2,\n u'state': u'STARTED'},\n u'state': u'STARTED',\n u'translog': {u'id': ...,\n u'operations': 0}}],\n u'3': [{u'docs': {u'deleted_docs': 0,\n u'max_doc': ...,\n u'num_docs': ...},\n u'flush': {u'total': 0,\n u'total_time': u'...',\n u'total_time_in_millis': ...},\n u'index': {u'size': u'...',\n u'size_in_bytes': ...},\n u'merges': {u'current': 0,\n u'current_docs': 0,\n u'current_size': u'0b',\n u'current_size_in_bytes': 0,\n u'total': 0,\n u'total_docs': 0,\n u'total_size': u'0b',\n u'total_size_in_bytes': 0,\n u'total_time': u'...',\n u'total_time_in_millis': ...},\n u'refresh': {u'total': ...,\n u'total_time': u'...',\n u'total_time_in_millis': ...},\n u'routing': {u'index': u'testdocs',\n u'node': u'...',\n u'primary': True,\n u'relocating_node': None,\n u'shard': 3,\n 
u'state': u'STARTED'},\n u'state': u'STARTED',\n u'translog': {u'id': ...,\n u'operations': 0}}],\n u'4': [{u'docs': {u'deleted_docs': 0,\n u'max_doc': ...,\n u'num_docs': ...},\n u'flush': {u'total': 0,\n u'total_time': u'...',\n u'total_time_in_millis': ...},\n u'index': {u'size': u'...',\n u'size_in_bytes': ...},\n u'merges': {u'current': 0,\n u'current_docs': 0,\n u'current_size': u'0b',\n u'current_size_in_bytes': 0,\n u'total': 0,\n u'total_docs': 0,\n u'total_size': u'0b',\n u'total_size_in_bytes': 0,\n u'total_time': u'...',\n u'total_time_in_millis': ...},\n u'refresh': {u'total': ...,\n u'total_time': u'...',\n u'total_time_in_millis': ...},\n u'routing': {u'index': u'testdocs',\n u'node': u'...',\n u'primary': True,\n u'relocating_node': None,\n u'shard': 4,\n u'state': u'STARTED'},\n u'state': u'STARTED',\n u'translog': {u'id': ...,\n u'operations': 0}}]},\n u'translog': {u'operations': 0}}},\n u'ok': True}\n\nTest adding with automatic id generation:\n\n >>> pprint(conn.index({\"name\":\"Document Four\"}, \"testdocs\", \"doc\"))\n Traceback (most recent call last):\n ...\n ValueError: You must explicit define id=None without doc['_id']\n\nAs you can see, this requires that we explicitly set id=None:\n\n >>> pprint(conn.index({\"name\":\"Document Four\"}, \"testdocs\", \"doc\", id=None))\n {u'_id': u'...',\n u'_index': u'testdocs',\n u'_type': u'doc',\n u'_version': 1,\n u'ok': True}\n\n\nThe reason for requiring an explicit id=None is that we also support doc['_id']\nas the id:\n\n >>> pprint(conn.index({\"name\":\"Document Five\", \"_id\":\"5\"}, \"testdocs\", \"doc\"))\n {u'_id': u'...',\n u'_index': u'testdocs',\n u'_type': u'doc',\n u'_version': 1,\n u'ok': True}\n\n\n=======\nCHANGES\n=======\n\n0.6.0 (2014-03-24)\n------------------\n\n- feature: implemented putTemplate method using a PUT request at the _template\n endpoint\n\n\n0.5.2 (2013-06-28)\n------------------\n\n- bugfix: improve error handling. 
Use json response string if no error message\n is given.\n\n\n0.5.1 (2012-12-22)\n------------------\n\n- implemented put settings (putSettings) method\n\n- fix tests based on changed elasticsearch 0.20.1 output\n\n- switch to p01.recipe.setup:importchecker\n\n\n0.5.0 (2012-11-18)\n------------------\n\n- initial release", "description_content_type": null, "docs_url": null, "download_url": "UNKNOWN", "downloads": { "last_day": -1, "last_month": -1, "last_week": -1 }, "home_page": "http://pypi.python.org/pypi/p01.elasticsearch", "keywords": "Zope3 z3c p01 elasticsearch client", "license": "ZPL 2.1", "maintainer": null, "maintainer_email": null, "name": "p01.elasticsearch", "package_url": "https://pypi.org/project/p01.elasticsearch/", "platform": "UNKNOWN", "project_url": "https://pypi.org/project/p01.elasticsearch/", "project_urls": { "Download": "UNKNOWN", "Homepage": "http://pypi.python.org/pypi/p01.elasticsearch" }, "release_url": "https://pypi.org/project/p01.elasticsearch/0.6.0/", "requires_dist": null, "requires_python": null, "summary": "Elasticsearch client for Zope3", "version": "0.6.0" }, "last_serial": 1039177, "releases": { "0.5.0": [ { "comment_text": "", "digests": { "md5": "36bb1d2de0e2acc25b7f33864015d066", "sha256": "f162df110ca02925b6d45d1060715875913f366b715e419ec80f17f76f17da68" }, "downloads": -1, "filename": "p01.elasticsearch-0.5.0.zip", "has_sig": false, "md5_digest": "36bb1d2de0e2acc25b7f33864015d066", "packagetype": "sdist", "python_version": "source", "requires_python": null, "size": 60287, "upload_time": "2012-11-18T23:00:41", "url": "https://files.pythonhosted.org/packages/df/85/cc474954871972354ca1e0557b6160e9431772f78e7b2ca11a6a0214ea88/p01.elasticsearch-0.5.0.zip" } ], "0.5.1": [ { "comment_text": "", "digests": { "md5": "313bba6757bcd21e372cb286d2d9ecc1", "sha256": "e87aa9398bc8dd770a141d4b125ce3db02e7d82baa8643894e0f29253391805d" }, "downloads": -1, "filename": "p01.elasticsearch-0.5.1.zip", "has_sig": false, "md5_digest": 
"313bba6757bcd21e372cb286d2d9ecc1", "packagetype": "sdist", "python_version": "source", "requires_python": null, "size": 60595, "upload_time": "2012-12-22T07:38:42", "url": "https://files.pythonhosted.org/packages/c9/02/08caf088da58dbc1ad66c46e95ef3a4a18c6ccf330a0633946b8628c8f55/p01.elasticsearch-0.5.1.zip" } ], "0.5.2": [ { "comment_text": "", "digests": { "md5": "c17863b4771dec0da4b8ce3e604c3838", "sha256": "dc179ce9f7f7f4b37776488b5bbac8dea2f7b55ae348f82685a106683d7232ed" }, "downloads": -1, "filename": "p01.elasticsearch-0.5.2.zip", "has_sig": false, "md5_digest": "c17863b4771dec0da4b8ce3e604c3838", "packagetype": "sdist", "python_version": "source", "requires_python": null, "size": 76464, "upload_time": "2013-06-28T16:00:01", "url": "https://files.pythonhosted.org/packages/cc/ff/f99a9acf63176717f863818c7e3c73379b08781cf1d7b30a0c8881ebc93d/p01.elasticsearch-0.5.2.zip" } ], "0.6.0": [ { "comment_text": "", "digests": { "md5": "33140050e4bd8a4c3a18fd554dd4c442", "sha256": "70198462a8b1fe2ec212bfeb214fe1bff8b166c35ae35095dd0ed148415b9915" }, "downloads": -1, "filename": "p01.elasticsearch-0.6.0.zip", "has_sig": false, "md5_digest": "33140050e4bd8a4c3a18fd554dd4c442", "packagetype": "sdist", "python_version": "source", "requires_python": null, "size": 76679, "upload_time": "2014-03-24T10:08:59", "url": "https://files.pythonhosted.org/packages/6b/21/541459ab93f9efe72b2b7471f23c3f07e8d9ab315884e90940d148fbf27c/p01.elasticsearch-0.6.0.zip" } ] }, "urls": [ { "comment_text": "", "digests": { "md5": "33140050e4bd8a4c3a18fd554dd4c442", "sha256": "70198462a8b1fe2ec212bfeb214fe1bff8b166c35ae35095dd0ed148415b9915" }, "downloads": -1, "filename": "p01.elasticsearch-0.6.0.zip", "has_sig": false, "md5_digest": "33140050e4bd8a4c3a18fd554dd4c442", "packagetype": "sdist", "python_version": "source", "requires_python": null, "size": 76679, "upload_time": "2014-03-24T10:08:59", "url": 
"https://files.pythonhosted.org/packages/6b/21/541459ab93f9efe72b2b7471f23c3f07e8d9ab315884e90940d148fbf27c/p01.elasticsearch-0.6.0.zip" } ] }