EGG-INFO/PKG-INFO

Metadata-Version: 1.0
Name: pymongo
Version: 0.15
Summary: Python driver for MongoDB
Home-page: http://github.com/mongodb/mongo-python-driver
Author: 10gen
Author-email: mongodb-user@googlegroups.com
License: Apache License, Version 2.0
Description:

=======
PyMongo
=======
:Info: See `the mongo site <http://www.mongodb.org>`_ for more information.
       See `github <http://github.com/mongodb/mongo-python-driver>`_ for the
       latest source.
:Author: Mike Dirolf

About
=====
The PyMongo distribution contains tools for interacting with the Mongo
database from Python. The ``pymongo`` package is a native Python driver for
the Mongo database. The ``gridfs`` package is a `gridfs `_ implementation on
top of ``pymongo``.

Installation
============
If you have `setuptools `_ installed you should be able to do
**easy_install pymongo** to install PyMongo. Otherwise you can download the
project source and do **python setup.py install** to install.

Dependencies
============
The PyMongo distribution has been tested on Python 2.x, where x >= 3. On
Python 2.3 the optional C extension will not be built. This will negatively
affect performance, but everything should still work. Additional dependencies
are:

- `ElementTree `_ (this is included with Python >= 2.5)
- (to generate documentation) `epydoc `_
- (to auto-discover tests) `nose `_

Examples
========
Here's a basic example (for more see the *examples/* directory):

>>> from pymongo.connection import Connection
>>> connection = Connection("localhost", 27017)
>>> db = connection.test
>>> db.name()
u'test'
>>> db.my_collection
Collection(Database(Connection('localhost', 27017), u'test'), u'my_collection')
>>> db.my_collection.save({"x": 10})
ObjectId('D\x87\xdd\xe8\xd6\x0f\x89\xfc\xab\x06\xac\x8e')
>>> db.my_collection.save({"x": 8})
ObjectId('\xde\x0b\xec^\xdc\x11`\x12\xf8\xeb/\xcf')
>>> db.my_collection.save({"x": 11})
ObjectId('\t6\xc6\x07\xb3\xfc\x87\xc4\x82\x04\x0f\\')
>>> db.my_collection.find_one()
{u'x': 10, u'_id': ObjectId('D\x87\xdd\xe8\xd6\x0f\x89\xfc\xab\x06\xac\x8e')}
>>> for item in db.my_collection.find():
...     print item["x"]
...
10
8
11
>>> from pymongo import ASCENDING
>>> db.my_collection.create_index("x", ASCENDING)
u'x_1'
>>> for item in db.my_collection.find().sort("x", ASCENDING):
...     print item["x"]
...
8
10
11
>>> [item["x"] for item in db.my_collection.find().limit(2).skip(1)]
[8, 11]

Documentation
=============
You will need `epydoc `_ installed to generate the documentation.
Documentation can be generated by running **python setup.py doc**. Generated
documentation can be found in the *doc/* directory.

Testing
=======
The easiest way to run the tests is to install `nose `_
(**easy_install nose**) and run **nosetests** or **python setup.py test** in
the root of the distribution. Tests are located in the *test/* directory.
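GridFS Example
==============
The ``gridfs`` package exposes a file-like interface on top of ``pymongo``.
The sketch below is illustrative only: it assumes a MongoDB server running on
localhost:27017, the filename is made up, and the output shown is indicative
rather than guaranteed:

>>> from pymongo.connection import Connection
>>> from gridfs import GridFS
>>> db = Connection("localhost", 27017).test
>>> fs = GridFS(db)
>>> f = fs.open("hello.txt", "w")
>>> f.write("hello gridfs")
>>> f.close()
>>> g = fs.open("hello.txt")
>>> g.read()
'hello gridfs'
>>> g.close()
>>> fs.list()
[u'hello.txt']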
Credits
=======
Thanks to the following (in no particular order; if you belong here and are
missing, please let us know):

- moe at mbox dot bz: turn off nagle
- Michael Stephens (mikejs): seek and tell for read mode GridFile

Platform: UNKNOWN
Classifier: Development Status :: 4 - Beta
Classifier: Intended Audience :: Developers
Classifier: License :: OSI Approved :: Apache Software License
Classifier: Operating System :: MacOS :: MacOS X
Classifier: Operating System :: Microsoft :: Windows
Classifier: Operating System :: POSIX
Classifier: Programming Language :: Python
Classifier: Topic :: Database

EGG-INFO/requires.txt

elementtree

EGG-INFO/SOURCES.txt

LICENSE MANIFEST.in README.rst epydoc-config ez_setup.py setup.py
examples/auto_reference.py examples/custom_type.py examples/gridfs_demo.py
examples/quick_tour.py examples/simple_demo.py
gridfs/__init__.py gridfs/errors.py gridfs/grid_file.py
pymongo/__init__.py pymongo/_cbsonmodule.c pymongo/binary.py pymongo/bson.py
pymongo/code.py pymongo/collection.py pymongo/connection.py pymongo/cursor.py
pymongo/cursor_manager.py pymongo/database.py pymongo/dbref.py
pymongo/errors.py pymongo/master_slave_connection.py pymongo/objectid.py
pymongo/son.py pymongo/son_manipulator.py pymongo/thread_util.py
pymongo.egg-info/PKG-INFO pymongo.egg-info/SOURCES.txt
pymongo.egg-info/dependency_links.txt pymongo.egg-info/requires.txt
pymongo.egg-info/top_level.txt
test/__init__.py test/autoreconnect.py test/gridfs15.py test/gridfs16.py
test/qcheck.py test/test_binary.py test/test_bson.py test/test_code.py
test/test_collection.py test/test_connection.py test/test_cursor.py
test/test_database.py test/test_dbref.py test/test_grid_file.py
test/test_gridfs.py test/test_master_slave_connection.py
test/test_objectid.py test/test_paired.py test/test_pooling.py
test/test_pymongo.py test/test_son.py test/test_son_manipulator.py
test/test_thread_util.py test/test_threads.py
tools/README.rst tools/auto_reconnect_test.py tools/benchmark.py
tools/bson_benchmark.py tools/clean.py tools/driver_tests.py
tools/fail_if_no_c.py tools/mongodb_benchmark_tools.py tools/validate
tools/validate.py

EGG-INFO/top_level.txt

pymongo gridfs

gridfs/__init__.py

# Copyright 2009 10gen, Inc.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
#     http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.

"""GridFS is a specification for storing large objects in Mongo.

The `gridfs` package is an implementation of GridFS on top of `pymongo`,
exposing a file-like interface.
"""

import types

from grid_file import GridFile
from pymongo.database import Database
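# Illustrative note (not part of the original module), summarizing the storage
# layout the code in this package works against. With the default root
# collection "fs", GridFS data lives in two collections:
#
#   fs.files  - one metadata document per file (filename, length, chunkSize,
#               uploadDate, md5, _id, ...)
#   fs.chunks - the file contents, split into fixed-size chunks and keyed by
#               (files_id, n)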
class GridFS(object):
    """An instance of GridFS on top of a single `pymongo.database.Database`.
    """

    def __init__(self, database):
        """Create a new instance of GridFS.

        Raises TypeError if database is not an instance of
        `pymongo.database.Database`.

        :Parameters:
          - `database`: database to use
        """
        if not isinstance(database, Database):
            raise TypeError("database must be an instance of Database")
        self.__database = database

    def open(self, filename, mode="r", collection="fs"):
        """Open a GridFile for reading or writing.

        Shorthand method for creating / opening a GridFile from a filename.
        mode must be a mode supported by `gridfs.grid_file.GridFile`.

        Only a single opened GridFile instance may exist for a file in gridfs
        at any time. Care must be taken to close GridFile instances when done
        using them. GridFiles support the context manager protocol (the "with"
        statement).

        :Parameters:
          - `filename`: name of the GridFile to open
          - `mode` (optional): mode to open the file in
          - `collection` (optional): root collection to use for this file
        """
        return GridFile({"filename": filename}, self.__database, mode,
                        collection)

    def remove(self, filename_or_spec, collection="fs"):
        """Remove one or more GridFile(s).

        Can remove by filename, or by an entire file spec (see
        `gridfs.grid_file.GridFile` for documentation on valid fields).
        Deletes all GridFiles that match filename_or_spec. Raises TypeError if
        filename_or_spec is not an instance of (str, unicode, dict, SON) or
        collection is not an instance of (str, unicode).

        :Parameters:
          - `filename_or_spec`: identifier of file(s) to remove
          - `collection` (optional): root collection where this file is
            located
        """
        spec = filename_or_spec
        if isinstance(filename_or_spec, types.StringTypes):
            spec = {"filename": filename_or_spec}
        if not isinstance(collection, types.StringTypes):
            raise TypeError("collection must be an instance of (str, unicode)")

        # convert to _id's so we can uniquely create GridFile instances
        ids = []
        for grid_file in self.__database[collection].files.find(spec):
            ids.append(grid_file["_id"])

        # open for writing to remove the chunks for these files
        for file_id in ids:
            f = GridFile({"_id": file_id}, self.__database, "w", collection)
            f.close()

        self.__database[collection].files.remove(spec)

    def list(self, collection="fs"):
        """List the names of all GridFiles stored in this instance of GridFS.

        Raises TypeError if collection is not an instance of (str, unicode).

        :Parameters:
          - `collection` (optional): root collection to list files from
        """
        if not isinstance(collection, types.StringTypes):
            raise TypeError("collection must be an instance of (str, unicode)")
        names = []
        for grid_file in self.__database[collection].files.find():
            names.append(grid_file["filename"])
        return names
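# Usage sketch for the GridFS class above (not part of the distribution). It
# assumes a reachable mongod on the default port; the "logs" root collection
# and the filename are hypothetical.
#
#   from pymongo.connection import Connection
#   from gridfs import GridFS
#
#   fs = GridFS(Connection("localhost", 27017).test)
#   f = fs.open("2009-06-01.log", "w", collection="logs")
#   f.write("server started\n")
#   f.close()
#   print fs.list(collection="logs")                 # names under "logs"
#   fs.remove("2009-06-01.log", collection="logs")   # by name ...
#   fs.remove({"filename": "2009-06-01.log"}, collection="logs")  # ... or spec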
gridfs/errors.py

# Copyright 2009 10gen, Inc.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
#     http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.

"""Exceptions raised by the `gridfs` package"""


class CorruptGridFile(Exception):
    """Raised when a GridFS "file" is malformed.
    """
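# Brief sketch of where the exception above surfaces (illustrative only; `fs`
# is a gridfs.GridFS instance as in the sketch earlier, and the filename is
# hypothetical).
#
#   from gridfs.errors import CorruptGridFile
#
#   try:
#       data = fs.open("maybe-damaged.bin").read()
#   except CorruptGridFile:
#       data = None   # a chunk was missing or malformed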
gridfs/grid_file.py

# Copyright 2009 10gen, Inc.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
#     http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.

"""File-like object used for reading from and writing to GridFS"""

import types
import datetime
import math
import os
from threading import Condition

from pymongo import _SEEK_SET
from pymongo import _SEEK_CUR
from pymongo import _SEEK_END
from pymongo.son import SON
from pymongo.database import Database
from pymongo.objectid import ObjectId
from pymongo.dbref import DBRef
from pymongo.binary import Binary
from errors import CorruptGridFile
from pymongo import ASCENDING

# TODO we should use per-file reader-writer locks here instead,
# for performance. Unfortunately they aren't in the Python standard library.
_files_lock = Condition()
_open_files = {}


class GridFile(object):
    """A "file" stored in GridFS.
    """

    # TODO should be able to create a GridFile given a Collection object
    # instead of a database and collection name?
    # TODO this whole file_spec thing is over-engineered. ought to be just
    # filename.
    def __init__(self, file_spec, database, mode="r", collection="fs"):
        """Open a "file" in GridFS.

        Application developers should generally not need to instantiate this
        class directly - instead see the `gridfs.GridFS.open` method.

        Only a single opened GridFile instance may exist for a file in gridfs
        at any time. Care must be taken to close GridFile instances when done
        using them. GridFiles support the context manager protocol (the "with"
        statement).

        Raises TypeError if file_spec is not an instance of dict, database is
        not an instance of `pymongo.database.Database`, or collection is not
        an instance of (str, unicode).

        The file_spec argument must be a SON query specifier for the file to
        open. The *first* file matching the specifier will be opened. If no
        such files exist, a new file is created using the metadata in
        file_spec.
        The valid fields in a file_spec are as follows:

        - "_id": unique ID for this file
          * default: `pymongo.objectid.ObjectId()`
        - "filename": human name for the file
        - "contentType": valid mime-type for the file
        - "length": size of the file, in bytes
          * only used for querying, automatically set for inserts
        - "chunkSize": size of each of the chunks, in bytes
          * default: 256 kb
        - "uploadDate": date when the object was first stored
          * only used for querying, automatically set for inserts
        - "aliases": array of alias strings
        - "metadata": a SON document containing arbitrary data

        :Parameters:
          - `file_spec`: query specifier as described above
          - `database`: the database to store/retrieve this file in
          - `mode` (optional): the mode to open this file with, one of
            ("r", "w")
          - `collection` (optional): the collection in which to store/retrieve
            this file
        """
        if not isinstance(file_spec, types.DictType):
            raise TypeError("file_spec must be an instance of (dict, SON)")
        if not isinstance(database, Database):
            raise TypeError("database must be an instance of database")
        if not isinstance(collection, types.StringTypes):
            raise TypeError("collection must be an instance of (str, unicode)")
        if not isinstance(mode, types.StringTypes):
            raise TypeError("mode must be an instance of (str, unicode)")
        if mode not in ("r", "w"):
            raise ValueError("mode must be one of ('r', 'w')")

        self.__collection = database[collection]
        self.__collection.chunks.ensure_index([("files_id", ASCENDING),
                                               ("n", ASCENDING)])

        _files_lock.acquire()
        grid_file = self.__collection.files.find_one(file_spec)
        if grid_file:
            self.__id = grid_file["_id"]
        else:
            if mode == "r":
                _files_lock.release()
                raise IOError("No such file: %r" % file_spec)
            file_spec["length"] = 0
            file_spec["uploadDate"] = datetime.datetime.utcnow()
            file_spec.setdefault("chunkSize", 256000)
            self.__id = self.__collection.files.insert(file_spec)

        # we use repr(self.__id) here because we need it to be string and
        # filename gets tricky with renaming. this is a hack.
        while repr(self.__id) in _open_files:
            _files_lock.wait()
        _open_files[repr(self.__id)] = True
        _files_lock.release()

        self.__mode = mode
        if mode == "w":
            self.__erase()
        self.__buffer = ""
        self.__position = 0
        self.__chunk_number = 0
        self.__closed = False

    def __erase(self):
        """Erase all of the data stored in this GridFile.
        """
        grid_file = self.__collection.files.find_one({"_id": self.__id})
        grid_file["next"] = None
        grid_file["length"] = 0
        self.__collection.files.save(grid_file)

        self.__collection.chunks.remove({"files_id": self.__id})

    def closed(self):
        return self.__closed
    closed = property(closed)

    def mode(self):
        return self.__mode
    mode = property(mode)

    def __create_property(field_name, read_only=False):
        def getter(self):
            return self.__collection.files.find_one(
                {"_id": self.__id}).get(field_name, None)

        def setter(self, value):
            grid_file = self.__collection.files.find_one({"_id": self.__id})
            grid_file[field_name] = value
            self.__collection.files.save(grid_file)

        if not read_only:
            return property(getter, setter)
        return property(getter)

    name = __create_property("filename", True)
    content_type = __create_property("contentType")
    length = __create_property("length", True)
    chunk_size = __create_property("chunkSize", True)
    upload_date = __create_property("uploadDate", True)
    aliases = __create_property("aliases")
    metadata = __create_property("metadata")
    md5 = __create_property("md5", True)
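    # Illustrative sketch (not part of the original source): how the generated
    # properties above behave on an open GridFile `f`. The names used are
    # hypothetical. Read-only attributes such as f.length and f.md5 raise
    # AttributeError on assignment because no setter is defined for them.
    #
    #   f = GridFile({"filename": "notes.txt"}, db, "w")
    #   f.content_type = "text/plain"     # settable property
    #   print f.name, f.chunk_size        # read-only properties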
    def rename(self, filename):
        """Rename this GridFile.

        Due to buffering, the rename might not actually occur until `flush()`
        or `close()` is called.

        :Parameters:
          - `filename`: the new name for this GridFile
        """
        grid_file = self.__collection.files.find_one({"_id": self.__id})
        grid_file["filename"] = filename
        self.__collection.files.save(grid_file)

    def __max_chunk(self):
        return self.__collection.chunks.find_one({"files_id": self.__id,
                                                  "n": self.__chunk_number})

    def __new_chunk(self, n):
        chunk = {"files_id": self.__id, "n": n, "data": ""}
        self.__collection.chunks.insert(chunk)
        return chunk

    def __write_buffer_to_chunks(self):
        """Write the buffer contents out to chunks.
        """
        while len(self.__buffer):
            max_chunk = self.__max_chunk()
            if not max_chunk:
                max_chunk = self.__new_chunk(self.__chunk_number)

            space = (self.__chunk_number + 1) * self.chunk_size - self.__position
            if not space:
                self.__chunk_number += 1
                max_chunk = self.__new_chunk(self.__chunk_number)
                space = self.chunk_size

            to_write = len(self.__buffer) > space and space or len(self.__buffer)
            max_chunk["data"] = Binary(max_chunk["data"] +
                                       self.__buffer[:to_write])
            self.__collection.chunks.save(max_chunk)
            self.__buffer = self.__buffer[to_write:]
            self.__position += to_write

    def flush(self):
        """Flush the GridFile to the database.
        """
        self.__assert_open()
        if self.mode != "w":
            return

        self.__write_buffer_to_chunks()
        md5 = self.__collection.database()._command(
            SON([("filemd5", self.__id),
                 ("root", self.__collection.name())]))["md5"]
        grid_file = self.__collection.files.find_one({"_id": self.__id})
        grid_file["md5"] = md5
        grid_file["length"] = self.__position + len(self.__buffer)
        self.__collection.files.save(grid_file)

    def close(self):
        """Close the GridFile.

        A closed GridFile cannot be read or written any more. Calling
        `close()` more than once is allowed.
        """
        if not self.__closed:
            self.flush()
            self.__closed = True
            _files_lock.acquire()
            if repr(self.__id) in _open_files:
                del _open_files[repr(self.__id)]
                _files_lock.notifyAll()
            _files_lock.release()

    def __assert_open(self, mode=None):
        if mode and self.mode != mode:
            raise ValueError("file must be open in mode %r" % mode)
        if self.closed:
            raise ValueError("operation cannot be performed on a closed "
                             "GridFile")

    def read(self, size=-1):
        """Read at most size bytes from the file (less if there isn't enough
        data).

        The bytes are returned as a string object. If size is negative or
        omitted all data is read. Raises ValueError if this GridFile is
        already closed.

        :Parameters:
          - `size` (optional): the number of bytes to read
        """
        self.__assert_open("r")
        if size == 0:
            return ""

        remainder = int(self.length) - self.__position
        if size < 0 or size > remainder:
            size = remainder

        bytes = self.__buffer
        chunk_number = math.floor(self.__position / self.chunk_size)
        while len(bytes) < size:
            chunk = self.__collection.chunks.find_one({"files_id": self.__id,
                                                       "n": chunk_number})
            if not chunk:
                raise CorruptGridFile("no chunk for n = " + chunk_number)
            if not bytes:
                bytes += chunk["data"][self.__position % self.chunk_size:]
            else:
                bytes += chunk["data"]
            chunk_number += 1

        self.__position += size
        to_return = bytes[:size]
        self.__buffer = bytes[size:]
        return to_return
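    # Illustrative sketch (not part of the original source): streaming a large
    # stored file through read() in fixed-size pieces instead of one call.
    # `db`, the filename, and process() are placeholders.
    #
    #   f = GridFile({"filename": "big.bin"}, db)
    #   while True:
    #       piece = f.read(256000)   # at most one default-size chunk at a time
    #       if not piece:
    #           break
    #       process(piece)
    #   f.close()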
    # TODO should support writing unicode to a file. this means that files
    # will need to have an encoding attribute.
    def write(self, str):
        """Write a string to the GridFile. There is no return value.

        Due to buffering, the string may not actually show up in the database
        until the `flush()` or `close()` method is called. Raises ValueError
        if this GridFile is already closed. Raises TypeError if str is not an
        instance of str.

        :Parameters:
          - `str`: string of bytes to be written to the file
        """
        self.__assert_open("w")
        if not isinstance(str, types.StringType):
            raise TypeError("can only write strings")
        if not len(str):
            return

        self.__buffer += str

    def tell(self):
        """Return the GridFile's current position (read-mode files only).
        """
        self.__assert_open("r")
        return self.__position

    def seek(self, pos, whence=_SEEK_SET):
        """Set the current position of the GridFile (read-mode files only).

        :Parameters:
          - `pos`: the position (or offset if using relative positioning) to
            seek to
          - `whence` (optional): where to seek from. os.SEEK_SET (0) for
            absolute file positioning, os.SEEK_CUR (1) to seek relative to the
            current position, os.SEEK_END (2) to seek relative to the file's
            end.
        """
        self.__assert_open("r")
        if whence == _SEEK_SET:
            new_pos = pos
        elif whence == _SEEK_CUR:
            new_pos = self.__position + pos
        elif whence == _SEEK_END:
            new_pos = int(self.length) + pos
        else:
            raise IOError(22, "Invalid argument")

        if new_pos < 0:
            raise IOError(22, "Invalid argument")

        self.__position = new_pos
        self.__buffer = ""

    def writelines(self, sequence):
        """Write a sequence of strings to the file.

        Does not add separators.
        """
        for line in sequence:
            self.write(line)

    def __enter__(self):
        """Support for the context manager protocol.
        """
        return self

    def __exit__(self, exc_type, exc_val, exc_tb):
        """Support for the context manager protocol.

        Close the file and allow exceptions to propagate.
        """
        self.close()
        return False  # propagate exceptions
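# Usage sketch for GridFile above (not part of the distribution). It assumes a
# `pymongo.database.Database` instance `db` and a made-up filename; the output
# comments show what the code above would produce for this input.
#
#   from gridfs.grid_file import GridFile
#
#   with GridFile({"filename": "hello.txt"}, db, "w") as f:
#       f.writelines(["hello ", "world"])
#
#   with GridFile({"filename": "hello.txt"}, db) as f:
#       f.seek(6)
#       print f.read()    # -> "world"
#       print f.tell()    # -> 11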
pymongo/__init__.py

# Copyright 2009 10gen, Inc.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
#     http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.

"""A Mongo driver for Python."""

import types
import sys
import os

from pymongo.connection import Connection as PyMongo_Connection
from pymongo.son import SON

ASCENDING = 1
"""Ascending sort order."""
DESCENDING = -1
"""Descending sort order."""

OFF = 0
"""Turn off database profiling."""
SLOW_ONLY = 1
"""Only profile slow operations."""
ALL = 2
"""Profile all operations."""

version = "0.15"
"""Current version of PyMongo."""

Connection = PyMongo_Connection
"""Alias for pymongo.connection.Connection."""

try:
    _SEEK_SET = os.SEEK_SET
    _SEEK_CUR = os.SEEK_CUR
    _SEEK_END = os.SEEK_END
except AttributeError:  # before 2.5
    _SEEK_SET = 0
    _SEEK_CUR = 1
    _SEEK_END = 2


def _index_list(key_or_list, direction):
    """Helper to generate a list of (key, direction) pairs.

    Takes such a list, or a single key and direction.
    """
    if direction is not None:
        return [(key_or_list, direction)]
    else:
        if isinstance(key_or_list, types.StringTypes):
            raise TypeError("must specify a direction if using a string key")
        return key_or_list


def _index_document(index_list):
    """Helper to generate an index specifying document.

    Takes a list of (key, direction) pairs.
    """
    if not isinstance(index_list, types.ListType):
        raise TypeError("if no direction is specified, key_or_list must be "
                        "an instance of list")
    if not len(index_list):
        raise ValueError("key_or_list must not be the empty list")

    index = SON()
    for (key, value) in index_list:
        if not isinstance(key, types.StringTypes):
            raise TypeError("first item in each key pair must be a string")
        if not isinstance(value, types.IntType):
            raise TypeError("second item in each key pair must be ASCENDING "
                            "or DESCENDING")
        index[key] = value
    return index


def _reversed(l):
    """A version of the `reversed()` built-in for Python 2.3.
    """
    i = len(l)
    while i > 0:
        i -= 1
        yield l[i]

if sys.version_info[:3] >= (2, 4, 0):
    _reversed = reversed

pymongo/binary.py

# Copyright 2009 10gen, Inc.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
#     http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.

"""Representation of binary data to be stored in or retrieved from Mongo.

This is necessary because we want to store normal strings as the Mongo string
type. We need to wrap binary so we can tell the difference between what should
be considered binary and what should be considered a string.
"""

import types


class Binary(str):
    """Binary data stored in or retrieved from Mongo.
    """

    def __new__(cls, data, subtype=2):
        """Initialize a new binary object.

        `subtype` is a binary subtype for this data. For more information on
        subtypes, see the Mongo wiki_.

        .. _wiki: http://www.mongodb.org/display/DOCS/BSON#BSON-noteondatabinary

        Raises TypeError if `data` is not an instance of str or `subtype` is
        not an instance of int. Raises ValueError if `subtype` not in
        [0, 256).

        :Parameters:
          - `data`: the binary data to represent
          - `subtype` (optional): the binary subtype to use
        """
        if not isinstance(data, types.StringType):
            raise TypeError("data must be an instance of str")
        if not isinstance(subtype, types.IntType):
            raise TypeError("subtype must be an instance of int")
        if subtype >= 256 or subtype < 0:
            raise ValueError("subtype must be contained in [0, 256)")
        self = str.__new__(cls, data)
        self.__subtype = subtype
        return self

    def subtype(self):
        """Get the subtype of this binary data.
        """
        return self.__subtype
    subtype = property(subtype)

    def __eq__(self, other):
        if isinstance(other, Binary):
            return (self.__subtype, str(self)) == (other.__subtype, str(other))
        return NotImplemented

    def __repr__(self):
        return "Binary(%s, %s)" % (str.__repr__(self), self.__subtype)

pymongo/bson.py

# Copyright 2009 10gen, Inc.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. # See the License for the specific language governing permissions and # limitations under the License. """Tools for dealing with Mongo's BSON data representation. Generally not needed to be used by application developers.""" import types import struct import random import re import datetime import calendar from binary import Binary from code import Code from objectid import ObjectId from dbref import DBRef from son import SON from errors import InvalidBSON, InvalidDocument, UnsupportedTag, InvalidName try: import _cbson _use_c = True except ImportError: _use_c = False def _get_int(data): try: value = struct.unpack("= 8 return data[8:] def _validate_string(data): (length, data) = _get_int(data) assert len(data) >= length assert data[length - 1] == "\x00" return data[length:] def _validate_object(data): return _validate_document(data, None) _valid_array_name = re.compile("^\d+$") def _validate_array(data): return _validate_document(data, _valid_array_name) def _validate_binary(data): (length, data) = _get_int(data) # + 1 for the subtype byte assert len(data) >= length + 1 return data[length + 1:] def _validate_undefined(data): return data _OID_SIZE = 12 def _validate_oid(data): assert len(data) >= _OID_SIZE return data[_OID_SIZE:] def _validate_boolean(data): assert len(data) >= 1 return data[1:] _DATE_SIZE = 8 def _validate_date(data): assert len(data) >= _DATE_SIZE return data[_DATE_SIZE:] _validate_null = _validate_undefined def _validate_regex(data): (regex, data) = _get_c_string(data) (options, data) = _get_c_string(data) return data def _validate_ref(data): data = _validate_string(data) return _validate_oid(data) _validate_code = _validate_string def _validate_code_w_scope(data): (length, data) = _get_int(data) assert len(data) >= length + 1 return data[length + 1:] _validate_symbol = _validate_string def _validate_number_int(data): assert len(data) >= 4 return data[4:] def _validate_timestamp(data): assert len(data) >= 8 return data[8:] def _validate_number_long(data): assert len(data) >= 8 return data[8:] _element_validator = { "\x01": _validate_number, "\x02": _validate_string, "\x03": _validate_object, "\x04": _validate_array, "\x05": _validate_binary, "\x06": _validate_undefined, "\x07": _validate_oid, "\x08": _validate_boolean, "\x09": _validate_date, "\x0A": _validate_null, "\x0B": _validate_regex, "\x0C": _validate_ref, "\x0D": _validate_code, "\x0E": _validate_symbol, "\x0F": _validate_code_w_scope, "\x10": _validate_number_int, "\x11": _validate_timestamp, "\x12": _validate_number_long} def _validate_element_data(type, data): try: return _element_validator[type](data) except KeyError: raise InvalidBSON("unrecognized type: %s" % type) def _validate_element(data, valid_name): element_type = data[0] (element_name, data) = _get_c_string(data[1:]) if valid_name: assert valid_name.match(element_name), "name is invalid" return _validate_element_data(element_type, data) def _validate_elements(data, valid_name): while data: data = _validate_element(data, valid_name) def _validate_document(data, valid_name=None): try: obj_size = struct.unpack(" 2**64 / 2 - 1 or value < -2**64 / 2: raise OverflowError("MongoDB can only handle up to 8-byte ints") if value > 2**32 / 2 - 1 or value < -2**32 
/ 2: return "\x12" + name + struct.pack("<q", value)

pymongo/code.py

# Copyright 2009 10gen, Inc.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
#     http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.

"""Representation of JavaScript code to be evaluated by MongoDB."""

import types


class Code(str):
    """JavaScript code to be evaluated by MongoDB.
    """

    def __new__(cls, code, scope=None):
        """Initialize a new code object.

        `code` is a string containing JavaScript code. `scope` is a dictionary
        representing the scope in which `code` should be evaluated. It should
        be a mapping from identifiers (as strings) to values.

        Raises TypeError if `code` is not an instance of (str, unicode) or
        `scope` is not an instance of dict.

        :Parameters:
          - `code`: JavaScript code to be evaluated
          - `scope` (optional): dictionary representing the scope for
            evaluation
        """
        if not isinstance(code, types.StringTypes):
            raise TypeError("code must be an instance of (str, unicode)")
        if scope is None:
            scope = {}
        if not isinstance(scope, types.DictType):
            raise TypeError("scope must be an instance of dict")
        self = str.__new__(cls, code)
        self.__scope = scope
        return self

    def scope(self):
        """Get the scope dictionary.
        """
        return self.__scope
    scope = property(scope)

    def __repr__(self):
        return "Code(%s, %r)" % (str.__repr__(self), self.__scope)

pymongo/collection.py

# Copyright 2009 10gen, Inc.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
Raises TypeError if options is not an instance of dict. If options is non-empty a create command will be sent to the database. Otherwise the collection will be created implicitly on first use. :Parameters: - `database`: the database to get a collection from - `name`: the name of the collection to get - `options`: dictionary of collection options. see `pymongo.database.Database.create_collection` for details. """ if not isinstance(name, types.StringTypes): raise TypeError("name must be an instance of (str, unicode)") if not isinstance(options, (types.DictType, types.NoneType)): raise TypeError("options must be an instance of dict") if not name or ".." in name: raise InvalidName("collection names cannot be empty") if "$" in name and not (name in ["$cmd"] or name.startswith("$cmd")): raise InvalidName("collection names must not contain '$'") if name[0] == "." or name[-1] == ".": raise InvalidName("collecion names must not start or end with '.'") self.__database = database self.__collection_name = unicode(name) if options is not None: self.__create(options) def __create(self, options): """Sends a create command with the given options. """ # Send size as a float, not an int/long. BSON can only handle 32-bit # ints which conflicts w/ max collection size of 10000000000. if "size" in options: options["size"] = float(options["size"]) command = SON({"create": self.__collection_name}) command.update(options) self.__database._command(command) def __getattr__(self, name): """Get a sub-collection of this collection by name. Raises InvalidName if an invalid collection name is used. :Parameters: - `name`: the name of the collection to get """ return Collection(self.__database, u"%s.%s" % (self.__collection_name, name)) def __getitem__(self, name): return self.__getattr__(name) def __repr__(self): return "Collection(%r, %r)" % (self.__database, self.__collection_name) def __cmp__(self, other): if isinstance(other, Collection): return cmp((self.__database, self.__collection_name), (other.__database, other.__collection_name)) return NotImplemented def full_name(self): """Get the full name of this collection. The full name is of the form database_name.collection_name. """ return u"%s.%s" % (self.__database.name(), self.__collection_name) def name(self): """Get the name of this collection. """ return self.__collection_name def _send_message(self, operation, data): """Wrap up a message and send it. """ # reserved int, full collection name, message data message = _ZERO message += bson._make_c_string(self.full_name()) message += data return self.__database.connection()._send_message(operation, message) def database(self): """Get the database that this collection is a part of. """ return self.__database def save(self, to_save, manipulate=True, safe=False): """Save a document in this collection. If `to_save` already has an '_id' then an update (upsert) operation is performed and any existing document with that _id is overwritten. Otherwise an insert operation is performed. Returns the _id of the saved document. Raises TypeError if to_save is not an instance of dict. If `safe` is True then the save will be checked for errors, raising OperationFailure if one occurred. Checking for safety requires an extra round-trip to the database. Returns the _id of the saved document. :Parameters: - `to_save`: the SON object to be saved - `manipulate` (optional): manipulate the SON object before saving it - `safe` (optional): check that the save succeeded? 
""" if not isinstance(to_save, types.DictType): raise TypeError("cannot save object of type %s" % type(to_save)) if "_id" not in to_save: return self.insert(to_save, manipulate, safe) else: self.update({"_id": to_save["_id"]}, to_save, True, manipulate, safe) return to_save.get("_id", None) def insert(self, doc_or_docs, manipulate=True, safe=False, check_keys=True): """Insert a document(s) into this collection. If manipulate is set the document(s) are manipulated using any SONManipulators that have been added to this database. Returns the _id of the inserted document or a list of _ids of the inserted documents. If `safe` is True then the insert will be checked for errors, raising OperationFailure if one occurred. Checking for safety requires an extra round-trip to the database. :Parameters: - `doc_or_docs`: a SON object or list of SON objects to be inserted - `manipulate` (optional): manipulate the documents before inserting? - `safe` (optional): check that the insert succeeded? - `check_keys` (optional): check if keys start with '$' or contain '.', raising `pymongo.errors.InvalidName` in either case """ docs = doc_or_docs if isinstance(docs, types.DictType): docs = [docs] if not isinstance(docs, types.ListType): raise TypeError("insert takes a document or list of documents") if manipulate: docs = [self.__database._fix_incoming(doc, self) for doc in docs] data = [bson.BSON.from_dict(doc, check_keys) for doc in docs] self._send_message(2002, "".join(data)) if safe: error = self.__database.error() if error: raise OperationFailure("insert failed: " + error["err"]) ids = [doc.get("_id", None) for doc in docs] return len(ids) == 1 and ids[0] or ids def update(self, spec, document, upsert=False, manipulate=False, safe=False): """Update a document(s) in this collection. Raises TypeError if either spec or document isn't an instance of dict or upsert isn't an instance of bool. If `safe` is True then the update will be checked for errors, raising OperationFailure if one occurred. Checking for safety requires an extra round-trip to the database. :Parameters: - `spec`: a SON object specifying elements which must be present for a document to be updated - `document`: a SON object specifying the fields to be changed in the selected document(s), or (in the case of an upsert) the document to be inserted - `upsert` (optional): perform an upsert operation - `manipulate` (optional): manipulate the document before updating? - `safe` (optional): check that the update succeeded? """ if not isinstance(spec, types.DictType): raise TypeError("spec must be an instance of dict") if not isinstance(document, types.DictType): raise TypeError("document must be an instance of dict") if not isinstance(upsert, types.BooleanType): raise TypeError("upsert must be an instance of bool") if upsert and manipulate: document = self.__database._fix_incoming(document, self) message = upsert and _ONE or _ZERO message += bson.BSON.from_dict(spec) message += bson.BSON.from_dict(document) self._send_message(2001, message) if safe: error = self.__database.error() if error: raise OperationFailure("update failed: " + error["err"]) def remove(self, spec_or_object_id): """Remove an object(s) from this collection. Raises TypeEror if the argument is not an instance of (dict, ObjectId). 
:Parameters: - `spec_or_object_id` (optional): a SON object specifying elements which must be present for a document to be removed OR an instance of ObjectId to be used as the value for an _id element """ spec = spec_or_object_id if isinstance(spec, ObjectId): spec = SON({"_id": spec}) if not isinstance(spec, types.DictType): raise TypeError("spec must be an instance of dict, not %s" % type(spec)) self._send_message(2006, _ZERO + bson.BSON.from_dict(spec)) def find_one(self, spec_or_object_id=None, fields=None, slave_okay=None, _sock=None): """Get a single object from the database. Raises TypeError if the argument is of an improper type. Returns a single SON object, or None if no result is found. :Parameters: - `spec_or_object_id` (optional): a SON object specifying elements which must be present for a document to be returned OR an instance of ObjectId to be used as the value for an _id query - `fields` (optional): a list of field names that should be included in the returned document ("_id" will always be included) - `slave_okay` (optional): if True, this query should be allowed to execute on a slave (by default, certain queries are not allowed to execute on mongod instances running in slave mode). If slave_okay is set to None the Connection level default will be used - see the slave_okay parameter to `pymongo.Connection.__init__`. """ spec = spec_or_object_id if spec is None: spec = SON() if isinstance(spec, ObjectId): spec = SON({"_id": spec}) for result in self.find(spec, limit=-1, fields=fields, slave_okay=slave_okay, _sock=_sock): return result return None def _fields_list_to_dict(self, fields): """Takes a list of field names and returns a matching dictionary. ["a", "b"] becomes {"a": 1, "b": 1} and ["a.b.c", "d", "a.c"] becomes {"a.b.c": 1, "d": 1, "a.c": 1} """ as_dict = {} for field in fields: if not isinstance(field, types.StringTypes): raise TypeError("fields must be a list of key names as " "(string, unicode)") as_dict[field] = 1 return as_dict def find(self, spec=None, fields=None, skip=0, limit=0, slave_okay=None, timeout=True, snapshot=False, _sock=None): """Query the database. The `spec` argument is a prototype document that all results must match. For example: >>> db.test.find({"hello": "world"}) only matches documents that have a key "hello" with value "world". Matches can have other keys *in addition* to "hello". The `fields` argument is used to specify a subset of fields that should be included in the result documents. By limiting results to a certain subset of fields you can cut down on network traffic and decoding time. Raises TypeError if any of the arguments are of improper type. Returns an instance of Cursor corresponding to this query. :Parameters: - `spec` (optional): a SON object specifying elements which must be present for a document to be included in the result set - `fields` (optional): a list of field names that should be returned in the result set ("_id" will always be included) - `skip` (optional): the number of documents to omit (from the start of the result set) when returning the results - `limit` (optional): the maximum number of results to return in the first reply message, or 0 for the default return size - `slave_okay` (optional): if True, this query should be allowed to execute on a slave (by default, certain queries are not allowed to execute on mongod instances running in slave mode). If slave_okay is set to None the Connection level default will be used - see the slave_okay parameter to `pymongo.Connection.__init__`. 
- `timeout` (optional): if True, any returned cursor will be subject to the normal timeout behavior of the mongod process. Otherwise, the returned cursor will never timeout at the server. Care should be taken to ensure that cursors with timeout turned off are properly closed. - `snapshot` (optional): if True, snapshot mode will be used for this query. Snapshot mode assures no duplicates are returned, or objects missed, which were present at both the start and end of the query's execution. For details, see the wiki_ .. _wiki: http://www.mongodb.org/display/DOCS/How+to+do+Snapshotting+in+the+Mongo+Database """ if spec is None: spec = SON() if slave_okay is None: slave_okay = self.__database.connection().slave_okay if not isinstance(spec, types.DictType): raise TypeError("spec must be an instance of dict") if not isinstance(fields, (types.ListType, types.NoneType)): raise TypeError("fields must be an instance of list") if not isinstance(skip, types.IntType): raise TypeError("skip must be an instance of int") if not isinstance(limit, types.IntType): raise TypeError("limit must be an instance of int") if not isinstance(slave_okay, types.BooleanType): raise TypeError("slave_okay must be an instance of bool") if not isinstance(timeout, types.BooleanType): raise TypeError("timeout must be an instance of bool") if not isinstance(snapshot, types.BooleanType): raise TypeError("snapshot must be an instance of bool") if fields is not None: if not fields: fields = ["_id"] fields = self._fields_list_to_dict(fields) return Cursor(self, spec, fields, skip, limit, slave_okay, timeout, snapshot, _sock=_sock) def count(self): """Get the number of documents in this collection. """ return self.find().count() def _gen_index_name(self, keys): """Generate an index name from the set of fields it is over. """ return u"_".join([u"%s_%s" % item for item in keys]) def create_index(self, key_or_list, direction=None, unique=False, ttl=300): """Creates an index on this collection. Takes either a single key and a direction, or a list of (key, direction) pairs. The key(s) must be an instance of (str, unicode), and the direction(s) must be one of (`pymongo.ASCENDING`, `pymongo.DESCENDING`). Returns the name of the created index. :Parameters: - `key_or_list`: a single key or a list of (key, direction) pairs specifying the index to create - `direction` (optional): must be included if key_or_list is a single key, otherwise must be None - `unique` (optional): should this index guarantee uniqueness? - `ttl` (optional): time window (in seconds) during which this index will be recognized by subsequent calls to `ensure_index` - see documentation for `ensure_index` for details """ to_save = SON() keys = pymongo._index_list(key_or_list, direction) name = self._gen_index_name(keys) to_save["name"] = name to_save["ns"] = self.full_name() to_save["key"] = pymongo._index_document(keys) to_save["unique"] = unique self.database().connection()._cache_index(self.__database.name(), self.name(), name, ttl) self.database().system.indexes.insert(to_save, manipulate=False, check_keys=False) return to_save["name"] def ensure_index(self, key_or_list, direction=None, unique=False, ttl=300): """Ensures that an index exists on this collection. Takes either a single key and a direction, or a list of (key, direction) pairs. The key(s) must be an instance of (str, unicode), and the direction(s) must be one of (`pymongo.ASCENDING`, `pymongo.DESCENDING`). 
Unlike `create_index`, which attempts to create an index unconditionally, `ensure_index` takes advantage of some caching within the driver such that it only attempts to create indexes that might not already exist. When an index is created (or ensured) by PyMongo it is "remembered" for `ttl` seconds. Repeated calls to `ensure_index` within that time limit will be lightweight - they will not attempt to actually create the index. Care must be taken when the database is being accessed through multiple connections at once. If an index is created using PyMongo and then deleted using another connection any call to `ensure_index` within the cache window will fail to re-create the missing index. Returns the name of the created index if an index is actually created. Returns None if the index already exists. :Parameters: - `key_or_list`: a single key or a list of (key, direction) pairs specifying the index to ensure - `direction` (optional): must be included if key_or_list is a single key, otherwise must be None - `unique` (optional): should this index guarantee uniqueness? - `ttl` (optional): time window (in seconds) during which this index will be recognized by subsequent calls to `ensure_index` """ keys = pymongo._index_list(key_or_list, direction) name = self._gen_index_name(keys) if self.database().connection()._cache_index(self.__database.name(), self.name(), name, ttl): return self.create_index(key_or_list, direction, unique, ttl) return None def drop_indexes(self): """Drops all indexes on this collection. Can be used on non-existant collections or collections with no indexes. Raises OperationFailure on an error. """ self.database().connection()._purge_index(self.database().name(), self.name()) self.drop_index(u"*") def drop_index(self, index_or_name): """Drops the specified index on this collection. Can be used on non-existant collections or collections with no indexes. Raises OperationFailure on an error. `index_or_name` can be either an index name (as returned by `create_index`), or an index specifier (as passed to `create_index`). Raises TypeError if index is not an instance of (str, unicode, list). :Parameters: - `index_or_name`: index (or name of index) to drop """ name = index_or_name if isinstance(index_or_name, types.ListType): name = self._gen_index_name(index_or_name) if not isinstance(name, types.StringTypes): raise TypeError("index_or_name must be an index name or list") self.database().connection()._purge_index(self.database().name(), self.name(), name) self.__database._command(SON([("deleteIndexes", self.__collection_name), ("index", name)]), ["ns not found"]) def index_information(self): """Get information on this collection's indexes. Returns a dictionary where the keys are index names (as returned by create_index()) and the values are lists of (key, direction) pairs specifying the index (as passed to create_index()). """ raw = self.__database.system.indexes.find({"ns": self.full_name()}) info = {} for index in raw: info[index["name"]] = index["key"].items() return info def options(self): """Get the options set on this collection. Returns a dictionary of options and their values - see `pymongo.database.Database.create_collection` for more information on the options dictionary. Returns an empty dictionary if the collection has not been created yet. 
""" result = self.__database.system.namespaces.find_one( {"name": self.full_name()}) if not result: return {} options = result.get("options", {}) if "create" in options: del options["create"] return options def group(self, keys, condition, initial, reduce, command=False): """Perform a query similar to an SQL group by operation. Returns an array of grouped items. :Parameters: - `keys`: list of fields to group by - `condition`: specification of rows to be considered (as a `find` query specification) - `initial`: initial value of the aggregation counter object - `reduce`: aggregation function as a JavaScript string - `command` (optional): if True, run the group as a command instead of in an eval - it is likely that this option will eventually be deprecated and all groups will be run as commands """ if command: if not isinstance(reduce, Code): reduce = Code(reduce) return self.__database._command({"group": {"ns": self.__collection_name, "$reduce": reduce, "key": self._fields_list_to_dict(keys), "cond": condition, "initial": initial}})["retval"] scope = {} if isinstance(reduce, Code): scope = reduce.scope scope.update({"ns": self.__collection_name, "keys": keys, "condition": condition, "initial": initial}) group_function = """function () { var c = db[ns].find(condition); var map = new Map(); var reduce_function = %s; while (c.hasNext()) { var obj = c.next(); var key = {}; for (var i = 0; i < keys.length; i++) { var k = keys[i]; key[k] = obj[k]; } var aggObj = map.get(key); if (aggObj == null) { var newObj = Object.extend({}, key); aggObj = Object.extend(newObj, initial); map.put(key, aggObj); } reduce_function(obj, aggObj); } return {"result": map.values()}; }""" % reduce return self.__database.eval(Code(group_function, scope))["result"] def rename(self, new_name): """Rename this collection. If operating in auth mode, client must be authorized as an admin to perform this operation. Raises TypeError if new_name is not an instance of (str, unicode). Raises InvalidName if new_name is not a valid collection name. :Parameters: - `new_name`: new name for this collection """ if not isinstance(new_name, types.StringTypes): raise TypeError("new_name must be an instance of (str, unicode)") if not new_name or ".." in new_name: raise InvalidName("collection names cannot be empty") if "$" in new_name: raise InvalidName("collection names must not contain '$'") if new_name[0] == "." or new_name[-1] == ".": raise InvalidName("collecion names must not start or end with '.'") rename_command = SON([("renameCollection", self.full_name()), ("to", "%s.%s" % (self.__database.name(), new_name))]) self.__database.connection().admin._command(rename_command) def __iter__(self): return self def next(self): raise TypeError("'Collection' object is not iterable") def __call__(self, *args, **kwargs): """This is only here so that some API misusages are easier to debug. """ if "." not in self.__collection_name: raise TypeError("'Collection' object is not callable. If you " "meant to call the '%s' method on a 'Database' " "object it is failing because no such method " "exists." % self.__collection_name) raise TypeError("'Collection' object is not callable. If you meant to " "call the '%s' method on a 'Collection' object it is " "failing because no such method exists." 
% self.__collection_name.split(".")[-1]) PK°s;Cû³ xxpymongo/collection.pyc;ò _D•Jc@sŽdZdkZdkZdkZdklZdklZdkl Z dk l Z l Z dk lZdZdZd efd „ƒYZdS( s%Collection level utilities for Mongo.N(sObjectId(sCursor(sSON(s InvalidNamesOperationFailure(sCodesss Collectionc BsptZdZed„Zd„Zd„Zd„Zd„Zd„Z d„Z d„Z d „Z d „Z eed „Zeeed „Zeeed „Zd„Zeeeed„Zd„Zeeddeeeed„Zd„Zd„Zeedd„Zeedd„Zd„Zd„Zd„Zd„Zed„Zd„Z d„Z!d„Z"d „Z#RS(!sA Mongo collection. cCs t|tiƒ otdƒ‚nt|titifƒ otdƒ‚n| p d|jotdƒ‚nd|jo|dgjp |i dƒ otdƒ‚n|dd jp|d d jotd ƒ‚n||_ t |ƒ|_|tj o|i|ƒnd S( s›Get / create a Mongo collection. Raises TypeError if name is not an instance of (str, unicode). Raises InvalidName if name is not a valid collection name. Raises TypeError if options is not an instance of dict. If options is non-empty a create command will be sent to the database. Otherwise the collection will be created implicitly on first use. :Parameters: - `database`: the database to get a collection from - `name`: the name of the collection to get - `options`: dictionary of collection options. see `pymongo.database.Database.create_collection` for details. s*name must be an instance of (str, unicode)s#options must be an instance of dicts..s collection names cannot be emptys$s$cmds%collection names must not contain '$'is.iÿÿÿÿs.collecion names must not start or end with '.'N(s isinstancesnamestypess StringTypess TypeErrorsoptionssDictTypesNoneTypes InvalidNames startswithsdatabasesselfs_Collection__databasesunicodes_Collection__collection_namesNones_Collection__create(sselfsdatabasesnamesoptions((s7build/bdist.darwin-9.8.0-i386/egg/pymongo/collection.pys__init__#s."  cCs^d|jot|dƒ|d>> db.test.find({"hello": "world"}) only matches documents that have a key "hello" with value "world". Matches can have other keys *in addition* to "hello". The `fields` argument is used to specify a subset of fields that should be included in the result documents. By limiting results to a certain subset of fields you can cut down on network traffic and decoding time. Raises TypeError if any of the arguments are of improper type. Returns an instance of Cursor corresponding to this query. :Parameters: - `spec` (optional): a SON object specifying elements which must be present for a document to be included in the result set - `fields` (optional): a list of field names that should be returned in the result set ("_id" will always be included) - `skip` (optional): the number of documents to omit (from the start of the result set) when returning the results - `limit` (optional): the maximum number of results to return in the first reply message, or 0 for the default return size - `slave_okay` (optional): if True, this query should be allowed to execute on a slave (by default, certain queries are not allowed to execute on mongod instances running in slave mode). If slave_okay is set to None the Connection level default will be used - see the slave_okay parameter to `pymongo.Connection.__init__`. - `timeout` (optional): if True, any returned cursor will be subject to the normal timeout behavior of the mongod process. Otherwise, the returned cursor will never timeout at the server. Care should be taken to ensure that cursors with timeout turned off are properly closed. - `snapshot` (optional): if True, snapshot mode will be used for this query. Snapshot mode assures no duplicates are returned, or objects missed, which were present at both the start and end of the query's execution. For details, see the wiki_ .. 
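# Sketch of Collection.group() from the source above: counting documents per
# value of "x"; the database/collection names and query are illustrative
# assumptions.
from pymongo.connection import Connection

db = Connection("localhost", 27017).test
results = db.things.group(keys=["x"],
                          condition={"x": {"$gt": 0}},
                          initial={"count": 0},
                          reduce="function (obj, prev) { prev.count++; }")
# `results` is a list of grouped documents, e.g. [{u'x': 8.0, u'count': 2.0}, ...]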
_wiki: http://www.mongodb.org/display/DOCS/How+to+do+Snapshotting+in+the+Mongo+Database s spec must be an instance of dicts"fields must be an instance of listsskip must be an instance of ints limit must be an instance of ints&slave_okay must be an instance of bools#timeout must be an instance of bools$snapshot must be an instance of bools_ids_sockN(sspecsNonesSONs slave_okaysselfs_Collection__databases connections isinstancestypessDictTypes TypeErrorsfieldssListTypesNoneTypesskipsIntTypeslimits BooleanTypestimeoutssnapshots_fields_list_to_dictsCursors_sock( sselfsspecsfieldssskipslimits slave_okaystimeoutssnapshots_sock((s7build/bdist.darwin-9.8.0-i386/egg/pymongo/collection.pysfind3s2,     cCs|iƒiƒSdS(s8Get the number of documents in this collection. N(sselfsfindscount(sself((s7build/bdist.darwin-9.8.0-i386/egg/pymongo/collection.pyscount{scCs6digi}|D]}|d|ƒq~ƒSdS(sBGenerate an index name from the set of fields it is over. u_u%s_%sN(sjoinsappends_[1]skeyssitem(sselfskeyss_[1]sitem((s7build/bdist.darwin-9.8.0-i386/egg/pymongo/collection.pys_gen_index_name€si,cCsÃtƒ}ti||ƒ}|i|ƒ}||d<|i ƒ|d  !       ')  !H  ")   :   (s__doc__stypesspymongosbsonsobjectidsObjectIdscursorsCursorssonsSONserrorss InvalidNamesOperationFailurescodesCodes_ZEROs_ONEsobjects Collection( sCodesObjectIds_ONEs InvalidNames_ZEROs CollectionsSONsCursorsbsonsOperationFailurespymongostypes((s7build/bdist.darwin-9.8.0-i386/egg/pymongo/collection.pys?s       PK€R;5²Céeéepymongo/connection.py# Copyright 2009 10gen, Inc. # # Licensed under the Apache License, Version 2.0 (the "License"); # you may not use this file except in compliance with the License. # You may obtain a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. # See the License for the specific language governing permissions and # limitations under the License. """Low level connection to Mongo.""" import sys import socket import struct import types import logging import threading import random import errno import datetime from errors import ConnectionFailure from errors import OperationFailure, ConfigurationError from errors import AutoReconnect from database import Database from cursor_manager import CursorManager from thread_util import TimeoutableLock _logger = logging.getLogger("pymongo.connection") _logger.addHandler(logging.StreamHandler()) _logger.setLevel(logging.INFO) _CONNECT_TIMEOUT = 20.0 class Connection(object): # TODO support auth for pooling """A connection to Mongo. """ # TODO consider deprecating these. or at least find a way for # the default args to __init__ and paired to be properly documented. HOST = "localhost" PORT = 27017 POOL_SIZE = 1 AUTO_START_REQUEST = True TIMEOUT = 1.0 def __init__(self, host=None, port=None, pool_size=None, auto_start_request=None, timeout=None, slave_okay=False, _connect=True): """Open a new connection to a Mongo instance at host:port. The resultant connection object has connection-pooling built in. It also performs auto-reconnection when necessary. If an operation fails because of a connection error, `pymongo.errors.ConnectionFailure` is raised. If auto-reconnection will be performed, `pymongo.errors.AutoReconnect` will be raised. Application code should handle this exception (recognizing that the operation failed) and then continue to execute. 
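# Sketch of the connection / auto-reconnect behaviour described above; the
# pool size and the retried operation are illustrative assumptions.
from pymongo.connection import Connection
from pymongo.errors import AutoReconnect, ConnectionFailure

try:
    connection = Connection("localhost", 27017, pool_size=4)
except ConnectionFailure:
    raise   # the server could not be reached at all

try:
    doc = connection.test.things.find_one()
except AutoReconnect:
    # the driver reconnects behind the scenes; the failed operation must be
    # handled by application code - here a simple retry is sketched
    doc = connection.test.things.find_one()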
Raises TypeError if host is not an instance of string or port is not an instance of int. Raises ConnectionFailure if the connection cannot be made. Raises TypeError if `pool_size` is not an instance of int. Raises ValueError if `pool_size` is not greater than or equal to one. NOTE: Connection pooling is not compatible with auth (yet). Please do not set the "pool_size" to anything other than 1 if auth is in use. :Parameters: - `host` (optional): hostname or IPv4 address of the instance to connect to - `port` (optional): port number on which to connect - `pool_size` (optional): maximum size of the built in connection-pool - `auto_start_request` (optional): automatically start a request on every operation - see documentation for `start_request` - `slave_okay` (optional): is it okay to connect directly to and perform queries on a slave instance - `timeout` (optional): max time to wait when attempting to acquire a connection from the connection pool before raising an exception - can be set to -1 to wait indefinitely """ if host is None: host = self.HOST if port is None: port = self.PORT if pool_size is None: pool_size = self.POOL_SIZE if auto_start_request is None: auto_start_request = self.AUTO_START_REQUEST if timeout is None: timeout = self.TIMEOUT if timeout == -1: timeout = None if not isinstance(host, types.StringTypes): raise TypeError("host must be an instance of (str, unicode)") if not isinstance(port, types.IntType): raise TypeError("port must be an instance of int") if not isinstance(pool_size, types.IntType): raise TypeError("pool_size must be an instance of int") if pool_size <= 0: raise ValueError("pool_size must be positive") self.__host = None self.__port = None self.__nodes = [(host, port)] self.__slave_okay = slave_okay # current request_id self.__id = 1 self.__id_lock = threading.Lock() self.__cursor_manager = CursorManager(self) self.__pool_size = pool_size self.__auto_start_request = auto_start_request # map from threads to sockets self.__thread_map = {} # count of how many threads are mapped to each socket self.__thread_count = [0 for _ in range(self.__pool_size)] self.__acquire_timeout = timeout self.__locks = [TimeoutableLock() for _ in range(self.__pool_size)] self.__sockets = [None for _ in range(self.__pool_size)] self.__currently_resetting = False # cache of existing indexes used by ensure_index ops self.__index_cache = {} if _connect: self.__find_master() def __pair_with(self, host, port): """Pair this connection with a Mongo instance running on host:port. Raises TypeError if host is not an instance of string or port is not an instance of int. Raises ConnectionFailure if the connection cannot be made. :Parameters: - `host`: the hostname or IPv4 address of the instance to pair with - `port`: the port number on which to connect """ if not isinstance(host, types.StringType): raise TypeError("host must be an instance of str") if not isinstance(port, types.IntType): raise TypeError("port must be an instance of int") self.__nodes.append((host, port)) self.__find_master() def paired(cls, left, right=None, pool_size=None, auto_start_request=None): """Open a new paired connection to Mongo. Raises TypeError if either `left` or `right` is not a tuple of the form (host, port). Raises ConnectionFailure if the connection cannot be made. 
:Parameters: - `left`: (host, port) pair for the left Mongo instance - `right` (optional): (host, port) pair for the right Mongo instance - `pool_size` (optional): same as argument to `__init__` - `auto_start_request` (optional): same as argument to `__init__` """ if right is None: right = (cls.HOST, cls.PORT) if pool_size is None: pool_size = cls.POOL_SIZE if auto_start_request is None: auto_start_request = cls.AUTO_START_REQUEST connection = cls(left[0], left[1], pool_size, auto_start_request, _connect=False) connection.__pair_with(*right) return connection paired = classmethod(paired) def __increment_id(self): self.__id_lock.acquire() result = self.__id self.__id += 1 self.__id_lock.release() return result def __master(self, sock): """Get the hostname and port of the master Mongo instance. Return a tuple (host, port). """ result = self["admin"]._command({"ismaster": 1}, sock=sock) if result["ismaster"] == 1: return True else: if "remote" not in result: return False strings = result["remote"].split(":", 1) if len(strings) == 1: port = self.PORT else: port = int(strings[1]) return (strings[0], port) def _cache_index(self, database, collection, index, ttl): """Add an index to the index cache for ensure_index operations. Return True if the index has been newly cached or if the index had expired and is being re-cached. Return False if the index exists and is valid. """ now = datetime.datetime.utcnow() expire = datetime.timedelta(seconds=ttl) + now if database not in self.__index_cache: self.__index_cache[database] = {} self.__index_cache[database][collection] = {} self.__index_cache[database][collection][index] = expire return True if collection not in self.__index_cache[database]: self.__index_cache[database][collection] = {} self.__index_cache[database][collection][index] = expire return True if index in self.__index_cache[database][collection]: if now < self.__index_cache[database][collection][index]: return False self.__index_cache[database][collection][index] = expire return True def _purge_index(self, database_name, collection_name=None, index_name=None): """Purge an index from the index cache. If `index_name` is None purge an entire collection. If `collection_name` is None purge an entire database. """ if not database_name in self.__index_cache: return if collection_name is None: del self.__index_cache[database_name] return if not collection_name in self.__index_cache[database_name]: return if index_name is None: del self.__index_cache[database_name][collection_name] return if index_name in self.__index_cache[database_name][collection_name]: del self.__index_cache[database_name][collection_name][index_name] def host(self): """Get the connection's current host. """ return self.__host def port(self): """Get the connection's current port. """ return self.__port def slave_okay(self): """Is it okay for this connection to connect directly to a slave? """ return self.__slave_okay slave_okay = property(slave_okay) def __find_master(self): """Create a new socket and use it to figure out who the master is. Sets __host and __port so that `host()` and `port()` will return the address of the master. 
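# Sketch of Connection.paired() as documented above; the host names are
# illustrative assumptions.
from pymongo.connection import Connection

connection = Connection.paired(("db-left.example.com", 27017),
                               ("db-right.example.com", 27017))
# The driver determines which member is currently master and uses it;
# host() and port() report the master's address.
master_address = (connection.host(), connection.port())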
""" _logger.debug("finding master") self.__host = None self.__port = None sock = None for (host, port) in self.__nodes: _logger.debug("trying %r:%r" % (host, port)) try: try: sock = socket.socket() sock.setsockopt(socket.IPPROTO_TCP, socket.TCP_NODELAY, 1) sock.settimeout(_CONNECT_TIMEOUT) sock.connect((host, port)) sock.settimeout(None) master = self.__master(sock) if master is True: self.__host = host self.__port = port _logger.debug("found master") return if not master: if self.__slave_okay: self.__host = host self.__port = port _logger.debug("connecting to slave (slave_okay mode)") return raise ConfigurationError("trying to connect directly to" " slave %s:%r - must specify " "slave_okay to connect to " "slaves" % (host, port)) if master not in self.__nodes: raise ConfigurationError( "%r claims master is %r, " "but that's not configured" % ((host, port), master)) _logger.debug("not master, master is (%r, %r)" % master) except socket.error, e: exctype, value = sys.exc_info()[:2] _logger.debug("could not connect, got: %s %s" % (exctype, value)) if len(self.__nodes) == 1: raise ConnectionFailure(e) continue finally: if sock is not None: sock.close() raise AutoReconnect("could not find master") def __connect(self, socket_number): """(Re-)connect to Mongo. Connect to the master if this is a paired connection. """ if self.host() is None or self.port() is None: self.__find_master() _logger.debug("connecting socket %s..." % socket_number) assert self.__sockets[socket_number] is None try: self.__sockets[socket_number] = socket.socket() self.__sockets[socket_number].setsockopt(socket.IPPROTO_TCP, socket.TCP_NODELAY, 1) sock = self.__sockets[socket_number] sock.settimeout(_CONNECT_TIMEOUT) sock.connect((self.host(), self.port())) sock.settimeout(None) _logger.debug("connected") return except socket.error: raise ConnectionFailure("could not connect to %r" % self.__nodes) def _reset(self): """Reset everything and start connecting again. Closes all open sockets and resets them to None. Re-finds the master. This should be done in case of a connection failure or a "not master" error. """ if self.__currently_resetting: return self.__currently_resetting = True for i in range(self.__pool_size): # prevent all operations during the reset if not self.__locks[i].acquire(timeout=self.__acquire_timeout): raise ConnectionFailure("timed out before acquiring " "a connection from the pool") if self.__sockets[i] is not None: self.__sockets[i].close() self.__sockets[i] = None try: self.__find_master() finally: self.__currently_resetting = False for i in range(self.__pool_size): self.__locks[i].release() def set_cursor_manager(self, manager_class): """Set this connection's cursor manager. Raises TypeError if manager_class is not a subclass of CursorManager. A cursor manager handles closing cursors. Different managers can implement different policies in terms of when to actually kill a cursor that has been closed. :Parameters: - `manager_class`: cursor manager to use """ manager = manager_class(self) if not isinstance(manager, CursorManager): raise TypeError("manager_class must be a subclass of " "CursorManager") self.__cursor_manager = manager def __pick_and_acquire_socket(self): """Acquire a socket to use for synchronous send and receive operations. 
""" choices = range(self.__pool_size) random.shuffle(choices) choices.sort(lambda x, y: cmp(self.__thread_count[x], self.__thread_count[y])) for choice in choices: if self.__locks[choice].acquire(False): return choice if not self.__locks[choices[0]].acquire(timeout= self.__acquire_timeout): raise ConnectionFailure("timed out before acquiring " "a connection from the pool") return choices[0] def __get_socket(self): thread = threading.currentThread() if self.__thread_map.get(thread, -1) >= 0: sock = self.__thread_map[thread] if not self.__locks[sock].acquire(timeout=self.__acquire_timeout): raise ConnectionFailure("timed out before acquiring " "a connection from the pool") else: sock = self.__pick_and_acquire_socket() if self.__auto_start_request or thread in self.__thread_map: self.__thread_map[thread] = sock self.__thread_count[sock] += 1 try: if not self.__sockets[sock]: self.__connect(sock) except ConnectionFailure, e: self.__locks[sock].release() self._reset() raise AutoReconnect(str(e)) return sock def __send_message_on_socket(self, operation, data, sock): # header request_id = self.__increment_id() to_send = struct.pack("= 0: sock_number = self.__thread_map.pop(thread) self.__thread_count[sock_number] -= 1 def __cmp__(self, other): if isinstance(other, Connection): return cmp((self.__host, self.__port), (other.__host, other.__port)) return NotImplemented def __repr__(self): if len(self.__nodes) == 1: return "Connection(%r, %r)" % (self.__host, self.__port) elif len(self.__nodes) == 2: return ("Connection.paired((%r, %r), (%r, %r))" % (self.__nodes[0][0], self.__nodes[0][1], self.__nodes[1][0], self.__nodes[1][1])) def __getattr__(self, name): """Get a database by name. Raises InvalidName if an invalid database name is used. :Parameters: - `name`: the name of the database to get """ return Database(self, name) def __getitem__(self, name): """Get a database by name. Raises InvalidName if an invalid database name is used. :Parameters: - `name`: the name of the database to get """ return self.__getattr__(name) def close_cursor(self, cursor_id): """Close a single database cursor. Raises TypeError if cursor_id is not an instance of (int, long). What closing the cursor actually means depends on this connection's cursor manager. :Parameters: - `cursor_id`: cursor id to close """ if not isinstance(cursor_id, (types.IntType, types.LongType)): raise TypeError("cursor_id must be an instance of (int, long)") self.__cursor_manager.close(cursor_id) def kill_cursors(self, cursor_ids): """Kill database cursors with the given ids. Raises TypeError if cursor_ids is not an instance of list. 
:Parameters: - `cursor_ids`: list of cursor ids to kill """ if not isinstance(cursor_ids, types.ListType): raise TypeError("cursor_ids must be a list") message = "\x00\x00\x00\x00" message += struct.pack("œs istimeouts5timed out before acquiring a connection from the poolN( srangesselfs_Connection__pool_sizeschoicessrandomsshufflessortschoices_Connection__lockssacquiresFalses_Connection__acquire_timeoutsConnectionFailure(sselfschoiceschoices((sselfs7build/bdist.darwin-9.8.0-i386/egg/pymongo/connection.pys__pick_and_acquire_socket—s  "cCs"tiƒ}|ii|dƒdjo?|i|}|i|id|i ƒ ot dƒ‚q±nK|i ƒ}|i p ||ijo$||i|<|i |cd7 self.__retrieved: limit = self.__limit - self.__retrieved else: self.__killed = True return 0 message += struct.pack(" self.__max_dying_cursors: self.__connection.kill_cursors(self.__dying_cursors) self.__dying_cursors = [] PK°s;¼=·pymongo/cursor_manager.pyc;ò ¨v)Jc@s?dZdkZdefd„ƒYZdefd„ƒYZdS(sõDifferent managers to handle when cursors are killed after they are closed. New cursor managers should be defined as subclasses of CursorManager and can be installed on a connection by calling `pymongo.connection.Connection.set_cursor_manager`.Ns CursorManagercBs tZdZd„Zd„ZRS(sfThe default cursor manager. This manager will kill cursors one at a time as they are closed. cCs ||_dS(sdInstantiate the manager. :Parameters: - `connection`: a Mongo Connection N(s connectionsselfs_CursorManager__connection(sselfs connection((s;build/bdist.darwin-9.8.0-i386/egg/pymongo/cursor_manager.pys__init__scCsDt|titifƒ otdƒ‚n|ii|gƒdS(s¾Close a cursor by killing it immediately. Raises TypeError if cursor_id is not an instance of (int, long). :Parameters: - `cursor_id`: cursor id to close s,cursor_id must be an instance of (int, long)N( s isinstances cursor_idstypessIntTypesLongTypes TypeErrorsselfs_CursorManager__connections kill_cursors(sselfs cursor_id((s;build/bdist.darwin-9.8.0-i386/egg/pymongo/cursor_manager.pysclose&s(s__name__s __module__s__doc__s__init__sclose(((s;build/bdist.darwin-9.8.0-i386/egg/pymongo/cursor_manager.pys CursorManagers  sBatchCursorManagercBs)tZdZd„Zd„Zd„ZRS(s4A cursor manager that kills cursors in batches. cCs/g|_d|_||_ti||ƒdS(sdInstantiate the manager. :Parameters: - `connection`: a Mongo Connection iN(sselfs"_BatchCursorManager__dying_cursorss&_BatchCursorManager__max_dying_cursorss connections_BatchCursorManager__connections CursorManagers__init__(sselfs connection((s;build/bdist.darwin-9.8.0-i386/egg/pymongo/cursor_manager.pys__init__8s    cCs|ii|iƒdS(s;Cleanup - be sure to kill any outstanding cursors. N(sselfs_BatchCursorManager__connections kill_cursorss"_BatchCursorManager__dying_cursors(sself((s;build/bdist.darwin-9.8.0-i386/egg/pymongo/cursor_manager.pys__del__DscCszt|titifƒ otdƒ‚n|ii|ƒt |iƒ|i jo |i i |iƒg|_ndS(s½Close a cursor by killing it in a batch. Raises TypeError if cursor_id is not an instance of (int, long). 
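# Sketch of a custom cursor manager, following the subclassing pattern the
# cursor_manager module above describes; the counting behaviour added here is
# an assumption, not part of the original package.
from pymongo.connection import Connection
from pymongo.cursor_manager import CursorManager

class CountingCursorManager(CursorManager):
    """Kill cursors one at a time, keeping a count of how many were closed."""

    def __init__(self, connection):
        CursorManager.__init__(self, connection)
        self.killed = 0

    def close(self, cursor_id):
        self.killed += 1
        CursorManager.close(self, cursor_id)

connection = Connection("localhost", 27017)
connection.set_cursor_manager(CountingCursorManager)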
:Parameters: - `cursor_id`: cursor id to close s,cursor_id must be an instance of (int, long)N( s isinstances cursor_idstypessIntTypesLongTypes TypeErrorsselfs"_BatchCursorManager__dying_cursorssappendslens&_BatchCursorManager__max_dying_cursorss_BatchCursorManager__connections kill_cursors(sselfs cursor_id((s;build/bdist.darwin-9.8.0-i386/egg/pymongo/cursor_manager.pyscloseIs(s__name__s __module__s__doc__s__init__s__del__sclose(((s;build/bdist.darwin-9.8.0-i386/egg/pymongo/cursor_manager.pysBatchCursorManager4s  (s__doc__stypessobjects CursorManagersBatchCursorManager(sBatchCursorManagers CursorManagerstypes((s;build/bdist.darwin-9.8.0-i386/egg/pymongo/cursor_manager.pys?s PKN^Ö:B刨Ž9Ž9pymongo/database.py# Copyright 2009 10gen, Inc. # # Licensed under the Apache License, Version 2.0 (the "License"); # you may not use this file except in compliance with the License. # You may obtain a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. # See the License for the specific language governing permissions and # limitations under the License. """Database level operations.""" import types try: import hashlib _md5func = hashlib.md5 except: # for Python < 2.5 import md5 _md5func = md5.new from son import SON from dbref import DBRef from son_manipulator import ObjectIdInjector, ObjectIdShuffler from collection import Collection from errors import InvalidName, CollectionInvalid, OperationFailure from code import Code import pymongo class Database(object): """A Mongo database. """ def __init__(self, connection, name): """Get a database by connection and name. Raises TypeError if name is not an instance of (str, unicode). Raises InvalidName if name is not a valid database name. :Parameters: - `connection`: a connection to Mongo - `name`: database name """ if not isinstance(name, types.StringTypes): raise TypeError("name must be an instance of (str, unicode)") self.__check_name(name) self.__name = unicode(name) self.__connection = connection self.__manipulators = [] self.__copying_manipulators = [] self.add_son_manipulator(ObjectIdInjector()) self.add_son_manipulator(ObjectIdShuffler()) def __check_name(self, name): for invalid_char in [" ", ".", "$", "/", "\\"]: if invalid_char in name: raise InvalidName("database names cannot contain the " "character %r" % invalid_char) if not name: raise InvalidName("database name cannot be the empty string") def add_son_manipulator(self, manipulator): """Add a new son manipulator to this database. Newly added manipulators will be applied before existing ones. :Parameters: - `manipulator`: the manipulator to add """ if manipulator.will_copy(): self.__copying_manipulators.insert(0, manipulator) else: self.__manipulators.insert(0, manipulator) def connection(self): """Get the database connection. """ return self.__connection def name(self): """Get the database name. """ return self.__name def __cmp__(self, other): if isinstance(other, Database): return cmp((self.__connection, self.__name), (other.__connection, other.__name)) return NotImplemented def __repr__(self): return "Database(%r, %r)" % (self.__connection, self.__name) def __getattr__(self, name): """Get a collection of this database by name. Raises InvalidName if an invalid collection name is used. 
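# Sketch of a SON manipulator registered with add_son_manipulator() as
# described above. Only the three methods the Database code calls
# (will_copy, transform_incoming, transform_outgoing) are implemented, and
# the "stored_at" field name is an illustrative assumption.
import datetime
from pymongo.connection import Connection

class TimestampInjector(object):
    """Stamp every incoming document with the time it was stored."""

    def will_copy(self):
        return False

    def transform_incoming(self, son, collection):
        son["stored_at"] = datetime.datetime.utcnow()
        return son

    def transform_outgoing(self, son, collection):
        return son

db = Connection("localhost", 27017).test
db.add_son_manipulator(TimestampInjector())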
:Parameters: - `name`: the name of the collection to get """ return Collection(self, name) def __getitem__(self, name): """Get a collection of this database by name. Raises InvalidName if an invalid collection name is used. :Parameters: - `name`: the name of the collection to get """ return self.__getattr__(name) def create_collection(self, name, options={}): """Create a new collection in this database. Normally collection creation is automatic. This method should only if you want to specify options on creation. CollectionInvalid is raised if the collection already exists. Options should be a dictionary, with any of the following options: - "size": desired initial size for the collection (in bytes). must be less than or equal to 10000000000. For capped collections this size is the max size of the collection. - "capped": if True, this is a capped collection - "max": maximum number of objects if capped (optional) :Parameters: - `name`: the name of the collection to create - `options` (optional): options to use on the new collection """ if name in self.collection_names(): raise CollectionInvalid("collection %s already exists" % name) return Collection(self, name, options) def _fix_incoming(self, son, collection): """Apply manipulators to an incoming SON object before it gets stored. :Parameters: - `son`: the son object going into the database - `collection`: the collection the son object is being saved in """ for manipulator in self.__manipulators: son = manipulator.transform_incoming(son, collection) for manipulator in self.__copying_manipulators: son = manipulator.transform_incoming(son, collection) return son def _fix_outgoing(self, son, collection): """Apply manipulators to a SON object as it comes out of the database. :Parameters: - `son`: the son object coming out of the database - `collection`: the collection the son object was saved in """ for manipulator in pymongo._reversed(self.__manipulators): son = manipulator.transform_outgoing(son, collection) for manipulator in pymongo._reversed(self.__copying_manipulators): son = manipulator.transform_outgoing(son, collection) return son def _command(self, command, allowable_errors=[], check=True, sock=None): """Issue a DB command. """ result = self["$cmd"].find_one(command, _sock=sock) if check and result["ok"] != 1: if result["errmsg"] in allowable_errors: return result raise OperationFailure("command %r failed: %s" % (command, result["errmsg"])) return result def collection_names(self): """Get a list of all the collection names in this database. """ results = self["system.namespaces"].find() names = [r["name"] for r in results] names = [n[len(self.__name) + 1:] for n in names if n.startswith(self.__name + ".")] names = [n for n in names if "$" not in n] return names def drop_collection(self, name_or_collection): """Drop a collection. :Parameters: - `name_or_collection`: the name of a collection to drop or the collection object itself """ name = name_or_collection if isinstance(name, Collection): name = name.name() if not isinstance(name, types.StringTypes): raise TypeError("name_or_collection must be an instance of " "(Collection, str, unicode)") self.connection()._purge_index(self.name(), name) if name not in self.collection_names(): return self._command({"drop": unicode(name)}) def validate_collection(self, name_or_collection): """Validate a collection. Returns a string of validation info. Raises CollectionInvalid if validation fails. 
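# Sketch of create_collection() with explicit options, per the docstring
# above; the collection name and sizes are illustrative assumptions.
from pymongo.connection import Connection

db = Connection("localhost", 27017).test
capped_log = db.create_collection("log", {"capped": True,
                                          "size": 100000,
                                          "max": 1000})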
""" name = name_or_collection if isinstance(name, Collection): name = name.name() if not isinstance(name, types.StringTypes): raise TypeError("name_or_collection must be an instance of " "(Collection, str, unicode)") result = self._command({"validate": unicode(name)}) info = result["result"] if info.find("exception") != -1 or info.find("corrupt") != -1: raise CollectionInvalid("%s invalid: %s" % (name, info)) return info def profiling_level(self): """Get the database's current profiling level. Returns one of (`pymongo.OFF`, `pymongo.SLOW_ONLY`, `pymongo.ALL`). """ result = self._command({"profile": -1}) assert result["was"] >= 0 and result["was"] <= 2 return result["was"] def set_profiling_level(self, level): """Set the database's profiling level. Raises ValueError if level is not one of (`pymongo.OFF`, `pymongo.SLOW_ONLY`, `pymongo.ALL`). :Parameters: - `level`: the profiling level to use """ if not isinstance(level, types.IntType) or level < 0 or level > 2: raise ValueError("level must be one of (OFF, SLOW_ONLY, ALL)") self._command({"profile": level}) def profiling_info(self): """Returns a list containing current profiling information. """ return list(self["system.profile"].find()) def error(self): """Get a database error if one occured on the last operation. Return None if the last operation was error-free. Otherwise return the error that occurred. """ error = self._command({"getlasterror": 1}) if error.get("err", 0) is None: return None if error["err"] == "not master": self.__connection._reset() return error def last_status(self): """Get status information from the last operation. Returns a SON object with status information. """ return self._command({"getlasterror": 1}) def previous_error(self): """Get the most recent error to have occurred on this database. Only returns errors that have occurred since the last call to `Database.reset_error_history`. Returns None if no such errors have occurred. """ error = self._command({"getpreverror": 1}) if error.get("err", 0) is None: return None return error def reset_error_history(self): """Reset the error history of this database. Calls to `Database.previous_error` will only return errors that have occurred since the most recent call to this method. """ self._command({"reseterror": 1}) def __iter__(self): return self def next(self): raise TypeError("'Database' object is not iterable") def _password_digest(self, username, password): """Get a password digest to use for authentication. """ if not isinstance(password, types.StringTypes): raise TypeError("password must be an instance of (str, unicode)") if not isinstance(username, types.StringTypes): raise TypeError("username must be an instance of (str, unicode)") md5hash = _md5func() md5hash.update(username + ":mongo:" + password) return unicode(md5hash.hexdigest()) def authenticate(self, name, password): """Authenticate to use this database. Once authenticated, the user has full read and write access to this database. Raises TypeError if either name or password is not an instance of (str, unicode). Authentication lasts for the life of the database connection, or until `Database.logout` is called. The "admin" database is special. Authenticating on "admin" gives access to *all* databases. Effectively, "admin" access means root access to the database. 
:Parameters: - `name`: the name of the user to authenticate - `password`: the password of the user to authenticate """ if not isinstance(name, types.StringTypes): raise TypeError("name must be an instance of (str, unicode)") if not isinstance(password, types.StringTypes): raise TypeError("password must be an instance of (str, unicode)") result = self._command({"getnonce": 1}) nonce = result["nonce"] digest = self._password_digest(name, password) md5hash = _md5func() md5hash.update("%s%s%s" % (nonce, unicode(name), digest)) key = unicode(md5hash.hexdigest()) try: result = self._command(SON([("authenticate", 1), ("user", unicode(name)), ("nonce", nonce), ("key", key)])) return True except OperationFailure: return False def logout(self): """Deauthorize use of this database for this connection. Note that other databases may still be authorized. """ self._command({"logout": 1}) def dereference(self, dbref): """Dereference a DBRef, getting the SON object it points to. Raises TypeError if dbref is not an instance of DBRef. Returns a SON object or None if the reference does not point to a valid object. :Parameters: - `dbref`: the reference """ if not isinstance(dbref, DBRef): raise TypeError("cannot dereference a %s" % type(dbref)) return self[dbref.collection].find_one({"_id": dbref.id}) def eval(self, code, *args): """Evaluate a JavaScript expression on the Mongo server. Useful if you need to touch a lot of data lightly; in such a scenario the network transfer of the data could be a bottleneck. The `code` argument must be a JavaScript function. Additional positional arguments will be passed to that function when it is run on the server. Raises TypeError if `code` is not an instance of (str, unicode, `Code`). Raises OperationFailure if the eval fails. Returns the result of the evaluation. :Parameters: - `code`: string representation of JavaScript code to be evaluated - `args` (optional): additional positional arguments are passed to the `code` being evaluated """ if not isinstance(code, Code): code = Code(code) command = SON([("$eval", code), ("args", list(args))]) result = self._command(command) return result.get("retval", None) def __call__(self, *args, **kwargs): """This is only here so that some API misusages are easier to debug. """ raise TypeError("'Database' object is not callable. If you meant to " "call the '%s' method on a 'Collection' object it is " "failing because no such method exists." % self.__name) PK°s;νƒæœMœMpymongo/database.pyc;ò D¨?Jc@sÄdZdkZydkZeiZWndkZeiZnXdklZdkl Z dk l Z l Z dk lZdklZlZlZdklZdkZdefd „ƒYZdS( sDatabase level operations.N(sSON(sDBRef(sObjectIdInjectorsObjectIdShuffler(s Collection(s InvalidNamesCollectionInvalidsOperationFailure(sCodesDatabasecBs1tZdZd„Zd„Zd„Zd„Zd„Zd„Zd„Z d„Z d „Z hd „Z d „Z d „Zgeed „Zd„Zd„Zd„Zd„Zd„Zd„Zd„Zd„Zd„Zd„Zd„Zd„Zd„Zd„Zd„Z d„Z!d„Z"d„Z#RS( sA Mongo database. cCst|tiƒ otdƒ‚n|i|ƒt|ƒ|_||_ g|_ g|_ |i tƒƒ|i tƒƒdS(s Get a database by connection and name. Raises TypeError if name is not an instance of (str, unicode). Raises InvalidName if name is not a valid database name. 
:Parameters: - `dbref`: the reference scannot dereference a %ss_idN( s isinstancesdbrefsDBRefs TypeErrorstypesselfs collectionsfind_onesid(sselfsdbref((s5build/bdist.darwin-9.8.0-i386/egg/pymongo/database.pys dereferencelscGsht|tƒ ot|ƒ}ntd|fdt|ƒfgƒ}|i|ƒ}|i dt ƒSdS(sEvaluate a JavaScript expression on the Mongo server. Useful if you need to touch a lot of data lightly; in such a scenario the network transfer of the data could be a bottleneck. The `code` argument must be a JavaScript function. Additional positional arguments will be passed to that function when it is run on the server. Raises TypeError if `code` is not an instance of (str, unicode, `Code`). Raises OperationFailure if the eval fails. Returns the result of the evaluation. :Parameters: - `code`: string representation of JavaScript code to be evaluated - `args` (optional): additional positional arguments are passed to the `code` being evaluated s$evalsargssretvalN( s isinstancescodesCodesSONslistsargsscommandsselfs_commandsresultsgetsNone(sselfscodesargsscommandsresult((s5build/bdist.darwin-9.8.0-i386/egg/pymongo/database.pysevalys $cOstd|iƒ‚dS(sJThis is only here so that some API misusages are easier to debug. s'Database' object is not callable. If you meant to call the '%s' method on a 'Collection' object it is failing because no such method exists.N(s TypeErrorsselfs_Database__name(sselfsargsskwargs((s5build/bdist.darwin-9.8.0-i386/egg/pymongo/database.pys__call__’s($s__name__s __module__s__doc__s__init__s_Database__check_namesadd_son_manipulators connectionsnames__cmp__s__repr__s __getattr__s __getitem__screate_collections _fix_incomings _fix_outgoingsTruesNones_commandscollection_namessdrop_collectionsvalidate_collectionsprofiling_levelsset_profiling_levelsprofiling_infoserrors last_statussprevious_errorsreset_error_historys__iter__snexts_password_digests authenticateslogouts dereferencesevals__call__(((s5build/bdist.darwin-9.8.0-i386/egg/pymongo/database.pysDatabase"s@                  $  (s__doc__stypesshashlibsmd5s_md5funcsnewssonsSONsdbrefsDBRefsson_manipulatorsObjectIdInjectorsObjectIdShufflers collections Collectionserrorss InvalidNamesCollectionInvalidsOperationFailurescodesCodespymongosobjectsDatabase(shashlibsCodesDBRefsDatabases InvalidNames CollectionsSONsObjectIdShufflersObjectIdInjectors_md5funcsCollectionInvalidsOperationFailurespymongostypessmd5((s5build/bdist.darwin-9.8.0-i386/egg/pymongo/database.pys?s          PK­cÒ:Ùbhÿóópymongo/dbref.py# Copyright 2009 10gen, Inc. # # Licensed under the Apache License, Version 2.0 (the "License"); # you may not use this file except in compliance with the License. # You may obtain a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. # See the License for the specific language governing permissions and # limitations under the License. """Tools for manipulating DBRefs (references to Mongo objects).""" import types from objectid import ObjectId class DBRef(object): """A reference to an object stored in a Mongo database. """ def __init__(self, collection, id): """Initialize a new DBRef. Raises TypeError if collection is not an instance of (str, unicode) or id is not an instance of ObjectId. 
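# Sketch of creating and dereferencing a DBRef, combining the DBRef class
# above with Database.dereference(); the collection names and document
# contents are illustrative assumptions.
from pymongo.connection import Connection
from pymongo.dbref import DBRef

db = Connection("localhost", 27017).test
owner_id = db.owners.save({"name": "Alice"})
db.things.save({"x": 1, "owner": DBRef("owners", owner_id)})

thing = db.things.find_one()
owner = db.dereference(thing["owner"])   # looks the document up by _id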
:Parameters: - `collection`: the collection the object is stored in - `id`: the value of the object's _id field """ if not isinstance(collection, types.StringTypes): raise TypeError("collection must be an instance of (str, unicode)") if isinstance(collection, types.StringType): collection = unicode(collection, "utf-8") self.__collection = collection self.__id = id def collection(self): """Get this DBRef's collection as unicode. """ return self.__collection collection = property(collection) def id(self): """Get this DBRef's _id as an ObjectId. """ return self.__id id = property(id) def __repr__(self): return "DBRef(" + repr(self.collection) + ", " + repr(self.id) + ")" def __cmp__(self, other): if isinstance(other, DBRef): return cmp([self.__collection, self.__id], [other.__collection, other.__id]) return NotImplemented PK°s;ÊÙ*  pymongo/dbref.pyc;ò fk:Jc@s6dZdkZdklZdefd„ƒYZdS(s<Tools for manipulating DBRefs (references to Mongo objects).N(sObjectIdsDBRefcBsStZdZd„Zd„ZeeƒZd„ZeeƒZd„Zd„ZRS(s9A reference to an object stored in a Mongo database. cCs`t|tiƒ otdƒ‚nt|tiƒot|dƒ}n||_||_ dS(s(Initialize a new DBRef. Raises TypeError if collection is not an instance of (str, unicode) or id is not an instance of ObjectId. :Parameters: - `collection`: the collection the object is stored in - `id`: the value of the object's _id field s0collection must be an instance of (str, unicode)sutf-8N( s isinstances collectionstypess StringTypess TypeErrors StringTypesunicodesselfs_DBRef__collectionsids _DBRef__id(sselfs collectionsid((s2build/bdist.darwin-9.8.0-i386/egg/pymongo/dbref.pys__init__s  cCs |iSdS(s0Get this DBRef's collection as unicode. N(sselfs_DBRef__collection(sself((s2build/bdist.darwin-9.8.0-i386/egg/pymongo/dbref.pys collection-scCs |iSdS(s-Get this DBRef's _id as an ObjectId. N(sselfs _DBRef__id(sself((s2build/bdist.darwin-9.8.0-i386/egg/pymongo/dbref.pysid3scCs*dt|iƒdt|iƒdSdS(NsDBRef(s, s)(sreprsselfs collectionsid(sself((s2build/bdist.darwin-9.8.0-i386/egg/pymongo/dbref.pys__repr__9scCsAt|tƒo)t|i|ig|i|igƒSntSdS(N(s isinstancesothersDBRefscmpsselfs_DBRef__collections _DBRef__idsNotImplemented(sselfsother((s2build/bdist.darwin-9.8.0-i386/egg/pymongo/dbref.pys__cmp__<s( s__name__s __module__s__doc__s__init__s collectionspropertysids__repr__s__cmp__(((s2build/bdist.darwin-9.8.0-i386/egg/pymongo/dbref.pysDBRefs       (s__doc__stypessobjectidsObjectIdsobjectsDBRef(sDBRefsObjectIdstypes((s2build/bdist.darwin-9.8.0-i386/egg/pymongo/dbref.pys?s  PK£v ;žY…--pymongo/errors.py# Copyright 2009 10gen, Inc. # # Licensed under the Apache License, Version 2.0 (the "License"); # you may not use this file except in compliance with the License. # You may obtain a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. # See the License for the specific language governing permissions and # limitations under the License. """Exceptions raised by the Mongo driver.""" class ConnectionFailure(IOError): """Raised when a connection to the database cannot be made or is lost. """ class AutoReconnect(ConnectionFailure): """Raised when a connection to the database is lost and an attempt to auto-reconnect will be made. """ class ConfigurationError(Exception): """Raised when something is incorrectly configured. """ class OperationFailure(Exception): """Raised when a database operation fails. 
""" class InvalidOperation(Exception): """Raised when a client attempts to perform an invalid operation. """ class CollectionInvalid(Exception): """Raised when collection validation fails. """ class InvalidName(ValueError): """Raised when an invalid name is used. """ class InvalidBSON(ValueError): """Raised when trying to create a BSON object from invalid data. """ class InvalidDocument(ValueError): """Raised when trying to create a BSON object from an invalid document. """ class UnsupportedTag(ValueError): """Raised when trying to parse an unsupported tag in an XML document. """ class InvalidId(ValueError): """Raised when trying to create an ObjectId from invalid data. """ PK°s;Y¶©llpymongo/errors.pyc;ò a„Jc@südZdefd„ƒYZdefd„ƒYZdefd„ƒYZdefd„ƒYZd efd „ƒYZd efd „ƒYZd e fd„ƒYZ de fd„ƒYZ de fd„ƒYZ de fd„ƒYZ de fd„ƒYZdS(s&Exceptions raised by the Mongo driver.sConnectionFailurecBstZdZRS(sHRaised when a connection to the database cannot be made or is lost. (s__name__s __module__s__doc__(((s3build/bdist.darwin-9.8.0-i386/egg/pymongo/errors.pysConnectionFailures s AutoReconnectcBstZdZRS(shRaised when a connection to the database is lost and an attempt to auto-reconnect will be made. (s__name__s __module__s__doc__(((s3build/bdist.darwin-9.8.0-i386/egg/pymongo/errors.pys AutoReconnects sConfigurationErrorcBstZdZRS(s5Raised when something is incorrectly configured. (s__name__s __module__s__doc__(((s3build/bdist.darwin-9.8.0-i386/egg/pymongo/errors.pysConfigurationErrors sOperationFailurecBstZdZRS(s,Raised when a database operation fails. (s__name__s __module__s__doc__(((s3build/bdist.darwin-9.8.0-i386/egg/pymongo/errors.pysOperationFailure"s sInvalidOperationcBstZdZRS(sCRaised when a client attempts to perform an invalid operation. (s__name__s __module__s__doc__(((s3build/bdist.darwin-9.8.0-i386/egg/pymongo/errors.pysInvalidOperation's sCollectionInvalidcBstZdZRS(s-Raised when collection validation fails. (s__name__s __module__s__doc__(((s3build/bdist.darwin-9.8.0-i386/egg/pymongo/errors.pysCollectionInvalid,s s InvalidNamecBstZdZRS(s)Raised when an invalid name is used. (s__name__s __module__s__doc__(((s3build/bdist.darwin-9.8.0-i386/egg/pymongo/errors.pys InvalidName1s s InvalidBSONcBstZdZRS(sBRaised when trying to create a BSON object from invalid data. (s__name__s __module__s__doc__(((s3build/bdist.darwin-9.8.0-i386/egg/pymongo/errors.pys InvalidBSON6s sInvalidDocumentcBstZdZRS(sIRaised when trying to create a BSON object from an invalid document. (s__name__s __module__s__doc__(((s3build/bdist.darwin-9.8.0-i386/egg/pymongo/errors.pysInvalidDocument;s sUnsupportedTagcBstZdZRS(sGRaised when trying to parse an unsupported tag in an XML document. (s__name__s __module__s__doc__(((s3build/bdist.darwin-9.8.0-i386/egg/pymongo/errors.pysUnsupportedTag@s s InvalidIdcBstZdZRS(s@Raised when trying to create an ObjectId from invalid data. (s__name__s __module__s__doc__(((s3build/bdist.darwin-9.8.0-i386/egg/pymongo/errors.pys InvalidIdEs N(s__doc__sIOErrorsConnectionFailures AutoReconnects ExceptionsConfigurationErrorsOperationFailuresInvalidOperationsCollectionInvalids ValueErrors InvalidNames InvalidBSONsInvalidDocumentsUnsupportedTags InvalidId( s InvalidIds InvalidNamesConfigurationErrorsOperationFailures InvalidBSONsCollectionInvalidsInvalidDocuments AutoReconnectsInvalidOperationsUnsupportedTagsConnectionFailure((s3build/bdist.darwin-9.8.0-i386/egg/pymongo/errors.pys?sPKòŠô:½ócHg g "pymongo/master_slave_connection.py# Copyright 2009 10gen, Inc. 
# # Licensed under the Apache License, Version 2.0 (the "License"); # you may not use this file except in compliance with the License. # You may obtain a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. # See the License for the specific language governing permissions and # limitations under the License. """Master-Slave connection to Mongo. Performs all writes to Master instance and distributes reads among all instances.""" import types import random from database import Database from connection import Connection class MasterSlaveConnection(object): """A master-slave connection to Mongo. """ def __init__(self, master, slaves=[]): """Create a new Master-Slave connection. The resultant connection should be interacted with using the same mechanisms as a regular `Connection`. The `Connection` instances used to create this `MasterSlaveConnection` can themselves make use of connection pooling, etc. 'Connection' instances used as slaves should be created with the slave_okay option set to True. If connection pooling is being used the connections should be created with "auto_start_request" mode set to False. All request functionality that is needed should be initiated by calling `start_request` on the `MasterSlaveConnection` instance. Raises TypeError if `master` is not an instance of `Connection` or slaves is not a list of at least one `Connection` instances. :Parameters: - `master`: `Connection` instance for the writable Master - `slaves` (optional): list of `Connection` instances for the read-only slaves """ if not isinstance(master, Connection): raise TypeError("master must be a Connection instance") if not isinstance(slaves, types.ListType) or len(slaves) == 0: raise TypeError("slaves must be a list of length >= 1") for slave in slaves: if not isinstance(slave, Connection): raise TypeError("slave %r is not an instance of Connection" % slave) self.__in_request = False self.__master = master self.__slaves = slaves def master(self): return self.__master master = property(master) def slaves(self): return self.__slaves slaves = property(slaves) def set_cursor_manager(self, manager_class): """Set the cursor manager for this connection. Helper to set cursor manager for each individual `Connection` instance that make up this `MasterSlaveConnection`. """ self.__master.set_cursor_manager(manager_class) for slave in self.__slaves: slave.set_cursor_manager(manager_class) # _connection_to_use is a hack that we need to include to make sure # that killcursor operations can be sent to the same instance on which # the cursor actually resides... def _send_message(self, operation, data, _connection_to_use=None): """Say something to Mongo. Sends a message on the Master connection. This is used for inserts, updates, and deletes. Raises ConnectionFailure if the message cannot be sent. Returns the request id of the sent message. :Parameters: - `operation`: opcode of the message - `data`: data to send """ if _connection_to_use is None or _connection_to_use == -1: return self.__master._send_message(operation, data) return self.__slaves[_connection_to_use]._send_message(operation, data) # _connection_to_use is a hack that we need to include to make sure # that getmore operations can be sent to the same instance on which # the cursor actually resides... 
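# Sketch of wiring up a MasterSlaveConnection as described above; the host
# names are illustrative assumptions. Slaves are created with slave_okay=True
# as the docstring recommends.
from pymongo.connection import Connection
from pymongo.master_slave_connection import MasterSlaveConnection

master = Connection("master.example.com", 27017)
slaves = [Connection("slave1.example.com", 27017, slave_okay=True),
          Connection("slave2.example.com", 27017, slave_okay=True)]
connection = MasterSlaveConnection(master, slaves)

db = connection.test
db.things.save({"x": 1})      # writes always go to the master
doc = db.things.find_one()    # reads are distributed among the slaves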
    def _receive_message(self, operation, data,
                         _sock=None, _connection_to_use=None):
        """Receive a message from Mongo.

        Sends the given message and returns a (connection_id, response) pair.

        :Parameters:
          - `operation`: opcode of the message to send
          - `data`: data to send
        """
        if _connection_to_use is not None:
            if _connection_to_use == -1:
                return (-1,
                        self.__master._receive_message(operation, data, _sock))
            else:
                return (_connection_to_use,
                        self.__slaves[_connection_to_use]
                        ._receive_message(operation, data, _sock))

        # for now just load-balance randomly among slaves only...
        connection_id = random.randrange(0, len(self.__slaves))

        if self.__in_request or connection_id == -1:
            return (-1, self.__master._receive_message(operation, data, _sock))

        return (connection_id,
                self.__slaves[connection_id]._receive_message(operation,
                                                              data, _sock))

    def start_request(self):
        """Start a "request".

        See documentation for `Connection.start_request`. Note that all
        operations performed within a request will be sent using the Master
        connection.
        """
        self.__in_request = True
        self.__master.start_request()

    def end_request(self):
        """End the current "request".

        See documentation for `Connection.end_request`.
        """
        self.__in_request = False
        self.__master.end_request()

    def __cmp__(self, other):
        if isinstance(other, MasterSlaveConnection):
            return cmp((self.__master, self.__slaves),
                       (other.__master, other.__slaves))
        return NotImplemented

    def __repr__(self):
        return "MasterSlaveConnection(%r, %r)" % (self.__master, self.__slaves)

    def __getattr__(self, name):
        """Get a database by name.

        Raises InvalidName if an invalid database name is used.

        :Parameters:
          - `name`: the name of the database to get
        """
        return Database(self, name)

    def __getitem__(self, name):
        """Get a database by name.

        Raises InvalidName if an invalid database name is used.

        :Parameters:
          - `name`: the name of the database to get
        """
        return self.__getattr__(name)

    def close_cursor(self, cursor_id, connection_id):
        """Close a single database cursor.

        Raises TypeError if cursor_id is not an instance of (int, long). What
        closing the cursor actually means depends on this connection's cursor
        manager.

        :Parameters:
          - `cursor_id`: cursor id to close
          - `connection_id`: id of the `Connection` instance where the cursor
            was opened
        """
        if connection_id == -1:
            return self.__master.close_cursor(cursor_id)
        return self.__slaves[connection_id].close_cursor(cursor_id)

    def database_names(self):
        """Get a list of all database names.
        """
        return self.__master.database_names()

    def drop_database(self, name_or_database):
        """Drop a database.

        :Parameters:
          - `name_or_database`: the name of a database to drop or the object
            itself
        """
        return self.__master.drop_database(name_or_database)

    def __iter__(self):
        return self

    def next(self):
        raise TypeError("'MasterSlaveConnection' object is not iterable")

    def _cache_index(self, database_name, collection_name, index_name, ttl):
        return self.__master._cache_index(database_name, collection_name,
                                          index_name, ttl)

    def _purge_index(self, database_name,
                     collection_name=None, index_name=None):
        return self.__master._purge_index(database_name, collection_name,
                                          index_name)
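A sketch of how MasterSlaveConnection might be wired up. The host names are placeholders and assume one writable master and two slave mongod instances are running; the slave_okay constructor option is taken from the class docstring above and is assumed, not shown, to be accepted by Connection in this version.

from pymongo.connection import Connection
from pymongo.master_slave_connection import MasterSlaveConnection

master = Connection("master.example.com", 27017)
# slave_okay per the docstring above; assumed to be a Connection constructor option
slaves = [Connection("slave1.example.com", 27017, slave_okay=True),
          Connection("slave2.example.com", 27017, slave_okay=True)]

connection = MasterSlaveConnection(master, slaves)
db = connection.test

db.things.save({"x": 1})      # writes always go through the master
doc = db.things.find_one()    # reads are load-balanced randomly over the slaves

connection.start_request()    # within a request everything uses the master
db.things.save({"x": 2})
print db.things.find_one({"x": 2})
connection.end_request()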
pymongo/objectid.py

# Copyright 2009 10gen, Inc.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.

"""Representation of an ObjectId for Mongo."""

import threading
import random
import types
import time
import socket
import os
import struct
try:
    import hashlib
    _md5func = hashlib.md5
except:  # for Python < 2.5
    import md5
    _md5func = md5.new

from errors import InvalidId


class ObjectId(object):
    """A Mongo ObjectId.
    """

    _inc = 0
    _inc_lock = threading.Lock()

    def __init__(self, id=None):
        """Initialize a new ObjectId.

        If no value of id is given, create a new (unique) ObjectId. If given
        id is an instance of (string, ObjectId) validate it and use that.
        Otherwise, a TypeError is raised. If given an invalid id, InvalidId is
        raised.

        :Parameters:
          - `id` (optional): a valid ObjectId
        """
        if id is None:
            self.__generate()
        else:
            self.__validate(id)

    def __generate(self):
        """Generate a new value for this ObjectId.
""" oid = "" # 4 bytes current time oid += struct.pack(">i", int(time.time())) # 3 bytes machine machine_hash = _md5func() machine_hash.update(socket.gethostname()) oid += machine_hash.digest()[0:3] # 2 bytes pid oid += struct.pack(">H", os.getpid() % 0xFFFF) # 3 bytes inc ObjectId._inc_lock.acquire() oid += struct.pack(">i", ObjectId._inc)[1:4] ObjectId._inc = (ObjectId._inc + 1) % 0xFFFFFF ObjectId._inc_lock.release() self.__id = oid def __validate(self, oid): """Validate and use the given id for this ObjectId. Raises TypeError if id is not an instance of (str, ObjectId) and InvalidId if it is not a valid ObjectId. :Parameters: - `oid`: a valid ObjectId """ if isinstance(oid, ObjectId): self.__id = oid.__id elif isinstance(oid, types.StringType): if len(oid) == 12: self.__id = oid else: raise InvalidId("%s is not a valid ObjectId" % oid) else: raise TypeError("id must be an instance of (str, ObjectId), " "not %s" % type(oid)) def url_encode(self, legacy=False): """Get a string representation of this ObjectId safe for use in a url. The `legacy` parameter is for backwards compatibility only and should almost always be kept False. It might eventually be removed. The reverse can be achieved using `url_decode()`. :Parameters: - `legacy` (optional): use the legacy byte ordering to represent the ObjectId. if you aren't positive you need this it is probably best left as False. """ if legacy: return self.legacy_str().encode("hex") else: return self.__id.encode("hex") def url_decode(cls, encoded_oid, legacy=False): """Create an ObjectId from an encoded hex string. The `legacy` parameter is for backwards compatibility only and should almost always be kept False. It might eventually be removed. The reverse can be achieved using `url_encode()`. :Parameters: - `encoded_oid`: string encoding of an ObjectId (as created by `url_encode()`) - `legacy` (optional): use the legacy byte ordering to represent the ObjectId. if you aren't positive you need this it is probably best left as False. """ if legacy: oid = encoded_oid.decode("hex") return cls.from_legacy_str(oid) else: return cls(encoded_oid.decode("hex")) url_decode = classmethod(url_decode) def legacy_str(self): return self.__id[7::-1] + self.__id[:7:-1] def from_legacy_str(cls, legacy_str): return cls(legacy_str[7::-1] + legacy_str[:7:-1]) from_legacy_str = classmethod(from_legacy_str) def __str__(self): return self.__id def __repr__(self): return "ObjectId(%r)" % self.__id def __cmp__(self, other): if isinstance(other, ObjectId): return cmp(self.__id, other.__id) return NotImplemented PK°s;×vñffpymongo/objectid.pyc;ò ®V”Jc@sždZdkZdkZdkZdkZdkZdkZdkZydkZei Z Wndk Z e i Z nXdk l Z defd„ƒYZdS(s(Representation of an ObjectId for Mongo.N(s InvalidIdsObjectIdcBs›tZdZdZeiƒZed„Zd„Z d„Z e d„Z e d„Z ee ƒZ d„Zd„ZeeƒZd „Zd „Zd „ZRS( sA Mongo ObjectId. icCs,|tjo|iƒn|i|ƒdS(s^Initialize a new ObjectId. If no value of id is given, create a new (unique) ObjectId. If given id is an instance of (string, ObjectId) validate it and use that. Otherwise, a TypeError is raised. If given an invalid id, InvalidId is raised. :Parameters: - `id` (optional): a valid ObjectId N(sidsNonesselfs_ObjectId__generates_ObjectId__validate(sselfsid((s5build/bdist.darwin-9.8.0-i386/egg/pymongo/objectid.pys__init__)s  cCsÖd}|tidttiƒƒƒ7}tƒ}|iti ƒƒ||i ƒdd!7}|tidt i ƒdƒ7}t iiƒ|tidt iƒdd!7}t idd t _t iiƒ||_d S( s0Generate a new value for this ObjectId. 
pymongo/son.py

# Copyright 2009 10gen, Inc.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.

"""Tools for creating and manipulating SON, the Serialized Ocument Notation.

Regular dictionaries can be used instead of SON objects, but not when the
order of keys is important. A SON object can be used just like a normal
Python dictionary."""

import datetime
import re
import binascii
import base64
import types
try:
    import xml.etree.ElementTree as ET
except ImportError:
    import elementtree.ElementTree as ET

from code import Code
from binary import Binary
from objectid import ObjectId
from dbref import DBRef
from errors import UnsupportedTag


class SON(dict):
    """SON data.

    A subclass of dict that maintains ordering of keys and provides a few
    extra niceties for dealing with SON. SON objects can be saved and
    retrieved from Mongo.

    The mapping from Python types to Mongo types is as follows:

    =================================== ============= ===================
    Python Type                         Mongo Type    Supported Direction
    =================================== ============= ===================
    None                                null          both
    bool                                boolean       both
    int                                 number (int)  both
    float                               number (real) both
    string                              string        py -> mongo
    unicode                             string        both
    list                                array         both
    dict / `SON`                        object        both
    datetime.datetime [#dt]_ [#dt2]_    date          both
    compiled re                         regex         both
    `pymongo.binary.Binary`             binary        both
    `pymongo.objectid.ObjectId`         oid           both
    `pymongo.dbref.DBRef`               dbref         both
    None                                undefined     mongo -> py
    unicode                             code          mongo -> py
    `pymongo.code.Code`                 code          py -> mongo
    unicode                             symbol        mongo -> py
    =================================== ============= ===================

    Note that to save binary data it must be wrapped as an instance of
    `pymongo.binary.Binary`. Otherwise it will be saved as a Mongo string and
    retrieved as unicode.

    .. [#dt] datetime.datetime instances will be rounded to the nearest
       millisecond when saved
    .. [#dt2] all datetime.datetime instances are treated as *naive*. clients
       should always use UTC.
    """

    def __init__(self, data=None, **kwargs):
        self.__keys = []
        dict.__init__(self)
        self.update(data)
        self.update(kwargs)

    def __repr__(self):
        result = []
        for key in self.__keys:
            result.append("(%r, %r)" % (key, self[key]))
        return "SON([%s])" % ", ".join(result)

    def __setitem__(self, key, value):
        if key not in self:
            self.__keys.append(key)
        dict.__setitem__(self, key, value)

    def __delitem__(self, key):
        self.__keys.remove(key)
        dict.__delitem__(self, key)

    def keys(self):
        return list(self.__keys)

    def copy(self):
        other = SON()
        other.update(self)
        return other

    # TODO this is all from UserDict.DictMixin. it could probably be made more
    # efficient.
    # second level definitions support higher levels
    def __iter__(self):
        for k in self.keys():
            yield k

    def has_key(self, key):
        return key in self.keys()

    def __contains__(self, key):
        return key in self.keys()

    # third level takes advantage of second level definitions
    def iteritems(self):
        for k in self:
            yield (k, self[k])

    def iterkeys(self):
        return self.__iter__()

    # fourth level uses definitions from lower levels
    def itervalues(self):
        for _, v in self.iteritems():
            yield v

    def values(self):
        return [v for _, v in self.iteritems()]

    def items(self):
        return list(self.iteritems())

    def clear(self):
        for key in self.keys():
            del self[key]

    def setdefault(self, key, default=None):
        try:
            return self[key]
        except KeyError:
            self[key] = default
        return default

    def pop(self, key, *args):
        if len(args) > 1:
            raise TypeError("pop expected at most 2 arguments, got "\
                            + repr(1 + len(args)))
        try:
            value = self[key]
        except KeyError:
            if args:
                return args[0]
            raise
        del self[key]
        return value

    def popitem(self):
        try:
            k, v = self.iteritems().next()
        except StopIteration:
            raise KeyError('container is empty')
        del self[k]
        return (k, v)

    def update(self, other=None, **kwargs):
        # Make progressively weaker assumptions about "other"
        if other is None:
            pass
        elif hasattr(other, 'iteritems'):  # iteritems saves memory and lookups
            for k, v in other.iteritems():
                self[k] = v
        elif hasattr(other, 'keys'):
            for k in other.keys():
                self[k] = other[k]
        else:
            for k, v in other:
                self[k] = v
        if kwargs:
            self.update(kwargs)

    def get(self, key, default=None):
        try:
            return self[key]
        except KeyError:
            return default

    def __cmp__(self, other):
        if isinstance(other, SON):
            return cmp((dict(self.iteritems()), self.keys()),
                       (dict(other.iteritems()), other.keys()))
        return cmp(dict(self.iteritems()), other)

    def __len__(self):
        return len(self.keys())

    # Thanks to Jeff Jenkins for the idea and original implementation
    def to_dict(self):
        """Convert a SON document to a normal Python dictionary instance.

        This is trickier than just *dict(...)* because it needs to be
        recursive.
        """

        def transform_value(value):
            if isinstance(value, types.ListType):
                return [transform_value(v) for v in value]
            if isinstance(value, SON):
                value = dict(value)
            if isinstance(value, types.DictType):
                for k, v in value.iteritems():
                    value[k] = transform_value(v)
            return value

        return transform_value(dict(self))

    def from_xml(cls, xml):
        """Create an instance of SON from an xml document.
""" def pad(list, index): while index >= len(list): list.append(None) def make_array(array): doc = make_doc(array) array = [] for (key, value) in doc.items(): index = int(key) pad(array, index) array[index] = value return array def make_string(string): return string.text is not None and unicode(string.text) or u"" def make_code(code): return code.text is not None and Code(code.text) or Code("") def make_binary(binary): if binary.text is not None: return Binary(base64.decodestring(binary.text)) return Binary("") def make_boolean(bool): return bool.text == "true" def make_date(date): return datetime.datetime.utcfromtimestamp(float(date.text) / 1000.0) def make_ref(dbref): return DBRef(make_elem(dbref[0]), make_elem(dbref[1])) def make_oid(oid): return ObjectId(binascii.unhexlify(oid.text)) def make_int(data): return int(data.text) def make_null(null): return None def make_number(number): return float(number.text) def make_regex(regex): return re.compile(make_elem(regex[0]), make_elem(regex[1])) def make_options(data): options = 0 if not data.text: return options if "i" in data.text: options |= re.IGNORECASE if "l" in data.text: options |= re.LOCALE if "m" in data.text: options |= re.MULTILINE if "s" in data.text: options |= re.DOTALL if "u" in data.text: options |= re.UNICODE if "x" in data.text: options |= re.VERBOSE return options def make_elem(elem): try: return {"array": make_array, "doc": make_doc, "string": make_string, "binary": make_binary, "boolean": make_boolean, "code": make_code, "date": make_date, "ref": make_ref, "ns": make_string, "oid": make_oid, "int": make_int, "null": make_null, "number": make_number, "regex": make_regex, "pattern": make_string, "options": make_options, }[elem.tag](elem) except KeyError: raise UnsupportedTag("cannot parse tag: %s" % elem.tag) def make_doc(doc): son = SON() for elem in doc: son[elem.attrib["name"]] = make_elem(elem) return son tree = ET.XML(xml) doc = tree[1] return make_doc(doc) from_xml = classmethod(from_xml) PK°s;¦‘Э@­@pymongo/son.pyc;ò s})Jc@sÃdZdkZdkZdkZdkZdkZydkiiZ Wne j odk iZ nXdk l Z dklZdklZdklZdklZdefd„ƒYZdS( sïTools for creating and manipulating SON, the Serialized Ocument Notation. Regular dictionaries can be used instead of SON objects, but not when the order of keys is important. A SON object can be used just like a normal Python dictionary.N(sCode(sBinary(sObjectId(sDBRef(sUnsupportedTagsSONcBsþtZdZed„Zd„Zd„Zd„Zd„Zd„Z d„Z d„Z d „Z d „Z d „Zd „Zd „Zd„Zd„Zed„Zd„Zd„Zed„Zed„Zd„Zd„Zd„Zd„ZeeƒZRS(sÞSON data. A subclass of dict that maintains ordering of keys and provides a few extra niceties for dealing with SON. SON objects can be saved and retrieved from Mongo. The mapping from Python types to Mongo types is as follows: =================================== ============= =================== Python Type Mongo Type Supported Direction =================================== ============= =================== None null both bool boolean both int number (int) both float number (real) both string string py -> mongo unicode string both list array both dict / `SON` object both datetime.datetime [#dt]_ [#dt2]_ date both compiled re regex both `pymongo.binary.Binary` binary both `pymongo.objectid.ObjectId` oid both `pymongo.dbref.DBRef` dbref both None undefined mongo -> py unicode code mongo -> py `pymongo.code.Code` code py -> mongo unicode symbol mongo -> py =================================== ============= =================== Note that to save binary data it must be wrapped as an instance of `pymongo.binary.Binary`. 
pymongo/thread_util.py

                if timeout is not None and \
                        time.time() > start_blocking + timeout:
                    break
        self.__lock.release()
        return did_acquire

    def release(self):
        """Release the lock.

        When the lock is locked, reset it to unlocked and return. If any
        other threads are blocked waiting for the lock to become unlocked,
        allow exactly one of them to proceed.

        Do not call this method when the lock is unlocked - a RuntimeError
        will be raised.

        There is no return value.
        """
        self.__lock.acquire()
        if self.__unlocked.isSet():
            self.__lock.release()
            raise RuntimeError("trying to release an unlocked TimeoutableLock")
        self.__unlocked.set()
        self.__lock.release()
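A hedged sketch of how TimeoutableLock is presumably meant to be used. Only a fragment of the class survives above, so the acquire(blocking=True, timeout=None) signature and its True/False return value are assumptions inferred from that fragment, not shown in full here.

from pymongo.thread_util import TimeoutableLock

lock = TimeoutableLock()

# assumed signature: acquire(blocking=True, timeout=None) -> bool
if lock.acquire(timeout=2.0):
    try:
        pass                  # ...critical section...
    finally:
        lock.release()        # releasing while unlocked raises RuntimeError
else:
    print "could not acquire the lock within 2 seconds"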