==> EGG-INFO/dependency_links.txt <==

==> EGG-INFO/PKG-INFO <==
Metadata-Version: 1.0
Name: ZestyParser
Version: 0.8.1
Summary: Write less parsing code. Write nicer parsing code. Have fun with it.
Home-page: http://adamatlas.org/2006/12/ZestyParser/
Author: Adam Atlas
Author-email: adam@atlas.st
License: MIT
Description: ZestyParser is a small parsing toolkit for Python. It doesn't use
        the traditional separated lexer/parser approach, nor does it make you
        learn a new, ugly syntax for specifying grammars. Grammars are built
        from pure Python objects; its flow is very simple, but it can handle
        nearly any parsing problem you throw at it.
Platform: UNKNOWN
Classifier: Development Status :: 4 - Beta
Classifier: Intended Audience :: Developers
Classifier: License :: OSI Approved :: MIT License
Classifier: Natural Language :: English
Classifier: Programming Language :: Python
Classifier: Topic :: Software Development :: Libraries :: Python Modules
Classifier: Topic :: Text Processing

==> EGG-INFO/SOURCES.txt <==
CHANGES.txt LICENSE.txt MANIFEST MANIFEST.in setup.py Docs/epydoc.css Docs/index.html Docs/private/ZestyParser-module.html Docs/private/ZestyParser.AHT-module.html Docs/private/ZestyParser.AHT.Env-class.html Docs/private/ZestyParser.AHT._AHTFactory-class.html Docs/private/ZestyParser.DebuggingParser-module.html Docs/private/ZestyParser.DebuggingParser.DebuggingParser-class.html Docs/private/ZestyParser.Helpers-module.html Docs/private/ZestyParser.Helpers.EncloseHelper-class.html Docs/private/ZestyParser.Helpers.EscapeHelper-class.html Docs/private/ZestyParser.Helpers.IndentHelper-class.html Docs/private/ZestyParser.Helpers.IndentationLevel-class.html Docs/private/ZestyParser.Helpers.Oper-class.html Docs/private/ZestyParser.Parser-module.html Docs/private/ZestyParser.Parser.Error-class.html Docs/private/ZestyParser.Parser.NotMatched-class.html Docs/private/ZestyParser.Parser.ParseError-class.html Docs/private/ZestyParser.Parser.ZestyParser-class.html
Docs/private/ZestyParser.Tags-module.html Docs/private/ZestyParser.Tags._Env-class.html Docs/private/ZestyParser.Tags._Tag-class.html Docs/private/ZestyParser.Tokens-module.html Docs/private/ZestyParser.Tokens.AbstractToken-class.html Docs/private/ZestyParser.Tokens.CompositeToken-class.html Docs/private/ZestyParser.Tokens.Default-class.html Docs/private/ZestyParser.Tokens.Defer-class.html Docs/private/ZestyParser.Tokens.ListReplacing-class.html Docs/private/ZestyParser.Tokens.Lookahead-class.html Docs/private/ZestyParser.Tokens.Negative-class.html Docs/private/ZestyParser.Tokens.Omit-class.html Docs/private/ZestyParser.Tokens.Only-class.html Docs/private/ZestyParser.Tokens.Placeholder-class.html Docs/private/ZestyParser.Tokens.RE-class.html Docs/private/ZestyParser.Tokens.Raw-class.html Docs/private/ZestyParser.Tokens.SingleReplacing-class.html Docs/private/ZestyParser.Tokens.Skip-class.html Docs/private/ZestyParser.Tokens.TakeToken-class.html Docs/private/ZestyParser.Tokens.TokenSequence-class.html Docs/private/ZestyParser.Tokens.TokenSeries-class.html Docs/private/ZestyParser.Tokens.TokenWrapper-class.html Docs/private/ZestyParser.Tokens._EOF-class.html Docs/private/ZestyParser.Tokens._Whitespace-class.html Docs/private/__builtin__.object-class.html Docs/private/__builtin__.type-class.html Docs/private/epydoc.css Docs/private/exceptions.Exception-class.html Docs/private/frames.html Docs/private/help.html Docs/private/index.html Docs/private/indices.html Docs/private/toc-ZestyParser-module.html Docs/private/toc-ZestyParser.AHT-module.html Docs/private/toc-ZestyParser.DebuggingParser-module.html Docs/private/toc-ZestyParser.Helpers-module.html Docs/private/toc-ZestyParser.Parser-module.html Docs/private/toc-ZestyParser.Tags-module.html Docs/private/toc-ZestyParser.Tokens-module.html Docs/private/toc-everything.html Docs/private/toc.html Docs/private/trees.html Docs/public/ZestyParser-module.html Docs/public/ZestyParser.AHT-module.html 
Docs/public/ZestyParser.AHT.Env-class.html Docs/public/ZestyParser.DebuggingParser-module.html Docs/public/ZestyParser.DebuggingParser.DebuggingParser-class.html Docs/public/ZestyParser.Helpers-module.html Docs/public/ZestyParser.Helpers.EncloseHelper-class.html Docs/public/ZestyParser.Helpers.EscapeHelper-class.html Docs/public/ZestyParser.Helpers.IndentHelper-class.html Docs/public/ZestyParser.Parser-module.html Docs/public/ZestyParser.Parser.NotMatched-class.html Docs/public/ZestyParser.Parser.ParseError-class.html Docs/public/ZestyParser.Parser.ZestyParser-class.html Docs/public/ZestyParser.Tags-module.html Docs/public/ZestyParser.Tokens-module.html Docs/public/ZestyParser.Tokens.AbstractToken-class.html Docs/public/ZestyParser.Tokens.CompositeToken-class.html Docs/public/ZestyParser.Tokens.Default-class.html Docs/public/ZestyParser.Tokens.Defer-class.html Docs/public/ZestyParser.Tokens.Lookahead-class.html Docs/public/ZestyParser.Tokens.Negative-class.html Docs/public/ZestyParser.Tokens.Omit-class.html Docs/public/ZestyParser.Tokens.Only-class.html Docs/public/ZestyParser.Tokens.Placeholder-class.html Docs/public/ZestyParser.Tokens.RE-class.html Docs/public/ZestyParser.Tokens.Raw-class.html Docs/public/ZestyParser.Tokens.Skip-class.html Docs/public/ZestyParser.Tokens.TakeToken-class.html Docs/public/ZestyParser.Tokens.TokenSequence-class.html Docs/public/ZestyParser.Tokens.TokenSeries-class.html Docs/public/ZestyParser.Tokens.TokenWrapper-class.html Docs/public/__builtin__.object-class.html Docs/public/__builtin__.type-class.html Docs/public/epydoc.css Docs/public/exceptions.Exception-class.html Docs/public/frames.html Docs/public/help.html Docs/public/index.html Docs/public/indices.html Docs/public/toc-ZestyParser-module.html Docs/public/toc-ZestyParser.AHT-module.html Docs/public/toc-ZestyParser.DebuggingParser-module.html Docs/public/toc-ZestyParser.Helpers-module.html Docs/public/toc-ZestyParser.Parser-module.html 
Docs/public/toc-ZestyParser.Tags-module.html Docs/public/toc-ZestyParser.Tokens-module.html Docs/public/toc-everything.html Docs/public/toc.html Docs/public/trees.html ZestyParser/AHT.py ZestyParser/DebuggingParser.py ZestyParser/Helpers.py ZestyParser/Parser.py ZestyParser/Tags.py ZestyParser/Tokens.py ZestyParser/__init__.py ZestyParser.egg-info/PKG-INFO ZestyParser.egg-info/SOURCES.txt ZestyParser.egg-info/dependency_links.txt ZestyParser.egg-info/top_level.txt examples/bdecode.py examples/calcy.py examples/elements.py examples/mork.py examples/n3.py examples/n3bench.py examples/n3rdflib.py examples/phpserialize.py examples/plist.py examples/sexp-bench.py examples/sexp.py examples/testy.py examples/unittests.py

==> EGG-INFO/top_level.txt <==
ZestyParser

==> EGG-INFO/zip-safe <==

==> ZestyParser/__init__.py <==
# ZestyParser 0.8.1 -- Parses in Python zestily
# Copyright (C) 2006-2007 Adam Atlas
#
# Permission is hereby granted, free of charge, to any person obtaining a copy
# of this software and associated documentation files (the "Software"), to deal
# in the Software without restriction, including without limitation the rights
# to use, copy, modify, merge, publish, distribute, sublicense, and/or sell
# copies of the Software, and to permit persons to whom the Software is
# furnished to do so, subject to the following conditions:
#
# The above copyright notice and this permission notice shall be included in
# all copies or substantial portions of the Software.
#
# THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
# IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
# FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE
# AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER
# LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM,
# OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN
# THE SOFTWARE.
'''
@version: 0.8.1
@author: Adam Atlas
@copyright: Copyright 2006-2007 Adam Atlas. Released under the MIT license (see LICENSE.txt).
@contact: adam@atlas.st
@group Parsing: Parser,Tokens
@group Utilities: AHT,DebuggingParser
'''

from Parser import *
from Tokens import *
from AHT import *
from DebuggingParser import *
from Tags import *
import Helpers

==> ZestyParser/AHT.py <==
# ZestyParser 0.8.1 -- Parses in Python zestily
# Copyright (C) 2006-2007 Adam Atlas
# (MIT license header omitted; identical to the one in ZestyParser/__init__.py)
'''
@version: 0.8.1
@author: Adam Atlas
@copyright: Copyright 2006-2007 Adam Atlas. Released under the MIT license (see LICENSE.txt).
@contact: adam@atlas.st

This module is now deprecated in favour of L{Tags}. The documentation and
classes are still available to assist in converting code that uses AHT, but
this module may be removed in a future version. In general, given some AHT
environment E, code can be converted to use Tags by changing
C{callback=/to=/as=/>> E.x} to C{callback=/to=/>> Tags.x} and
C{isinstance(y, E.x)} to C{y in Tags.x}.

AHT (Ad Hoc Types) is a utility module providing an easy way to generate
"labels" for objects in abstract parse trees without defining a class for each
one. To use it, create an instance of L{Env}. You can then access any
attribute on it and get a unique type for that name. The first time such a
type is called, it becomes a subclass of the type of whatever it is passed.
For example, C{EnvInstance.SomeEntity("hi")} marks C{SomeEntity} as a subclass
of C{str}, and returns an instance of itself initialized with C{"hi"}. You can
then check at any time how a piece of data was instantiated, with nothing more
than C{isinstance(something, EnvInstance.SomeEntity)}.

Ad Hoc Types are primarily intended to be used in conjunction with
L{AbstractToken} types, where you should set one as the C{as} parameter or, if
it is more convenient (e.g. when you must use C{>>}), as the callback.
'''

__all__ = ('Env',)

class Env:
    '''
    @see: L{AHT}
    '''
    _aht_types = {}

    def __getattr__(self, attr):
        if attr in self._aht_types:
            return self._aht_types[attr]
        else:
            return _AHTFactory(self, attr)

class _AHTFactory:
    def __init__(self, env, name):
        self.env, self.name = env, name

    def __call__(self, arg):
        if self.name not in self.env._aht_types:
            self.env._aht_types[self.name] = type(self.name, (arg.__class__,), {})
        return self.env._aht_types[self.name](arg)
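The dynamic-subclassing trick that Env relies on can be shown in isolation. The following is a minimal Python 3 re-implementation sketch, written only to illustrate the intended usage described in the docstring; it is not the library's own (Python 2-era) code:

```python
class Env:
    """Each attribute access yields a lazily created 'ad hoc type'."""
    def __init__(self):
        self._aht_types = {}

    def __getattr__(self, attr):
        # Called only for attributes not found normally; either return the
        # already-created type or a factory that will create it on first call.
        if attr in self._aht_types:
            return self._aht_types[attr]
        return _AHTFactory(self, attr)

class _AHTFactory:
    def __init__(self, env, name):
        self.env, self.name = env, name

    def __call__(self, arg):
        # The first call fixes the base class to the argument's own type.
        if self.name not in self.env._aht_types:
            self.env._aht_types[self.name] = type(self.name, (arg.__class__,), {})
        return self.env._aht_types[self.name](arg)

E = Env()
s = E.SomeEntity("hi")   # SomeEntity becomes a subclass of str
assert isinstance(s, str)
assert isinstance(s, E.SomeEntity)
assert not isinstance("plain", E.SomeEntity)
```

The payoff is the last two assertions: ordinary strings and "labelled" strings can be told apart in a parse tree without defining any class by hand.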
==> ZestyParser/CompilingTokens.py <==
# [beginning of file lost in extraction; the surviving text resumes inside the
#  docstring of a commented-out TakeToken class]
#    Raises L{NotMatched} if not enough characters are left.
#    '''
#
#    def __init__(self, length, callback=None, as=None, name=None):
#        AbstractToken.__init__(self, length, callback, as, name)
#
#    def __call__(self, parser, start):
#        end = start + self.desc
#        if parser.len < end: raise NotMatched
#        parser.cursor = end
#        return parser.data[start:end]
#
#class TokenSeries (AbstractToken):
#    '''
#    A particularly versatile class whose instances match one token multiple times.
#
#    The properties L{skip}, L{prefix}, L{postfix}, and L{delimiter} are optional tokens which add structure to the series. It can be represented, approximately in the idioms of L{TokenSequence}, as follows::
#
#        [Skip(skip) + Omit(prefix) + desc + Omit(postfix)] + [Skip(skip) + Omit(delimiter) + Skip(skip) + Omit(prefix) + desc + Omit(postfix)] + ... + Skip(skip)
#
#    Or, if there is no delimiter::
#
#        [Skip(skip) + Omit(prefix) + desc + Omit(postfix)] + ... + Skip(skip)
#
#    @ivar desc: The token to match.
#    @type desc: token
#    @ivar min: The minimum number of times L{desc} must match.
#    @type min: int
#    @ivar max: The maximum number of times L{desc} will try to match.
#    @type max: int
#    @ivar skip: An optional token to skip between matches.
#    @type skip: token
#    @ivar prefix: An optional token to require (but omit from the return value) before each instance of L{token}.
#    @type prefix: token
#    @ivar postfix: An optional token to require (but omit from the return value) after each instance of L{token}.
#    @type postfix: token
#    @ivar delimiter: An optional token to require (but omit from the return value) between each instance of L{token}.
#    @type delimiter: token
#    @ivar until: An optional 2-tuple whose first item is a token, and whose second item is either a message or False. The presence of this property indicates that the token in C{until[0]} must match at the end of the series. If this fails, then if C{until[1]} is a message, a ParseError will be raised with that message; if it is False, NotMatched will be raised.
#    '''
#    def __init__(self, token, min=0, max=False, skip=EmptyToken, prefix=EmptyToken, postfix=EmptyToken, delimiter=None, until=None, includeDelimiter=False, callback=None, as=None, name=None):
#        AbstractToken.__init__(self, token, callback, as, name)
#        self.min, self.max, self.skip, self.prefix, self.postfix, self.delimiter, self.until, self.includeDelimiter = min, max, skip, prefix, postfix, delimiter, until, includeDelimiter
#
#    def __call__(self, parser, origCursor):
#        o = []
#        i = 0
#        done = False
#        while (self.max is False or i != self.max):
#            if self.until and parser.skip(self.until[0]): done = True; break
#            parser.skip(self.skip)
#
#            c = parser.cursor
#            if i != 0 and self.delimiter is not None:
#                d = parser.scan(self.delimiter)
#                if parser.last is None: parser.cursor = c; break
#                parser.skip(self.skip)
#            if not parser.skip(self.prefix): parser.cursor = c; break
#            t = parser.scan(self.desc)
#            if parser.last is None: parser.cursor = c; break
#            if not parser.skip(self.postfix): parser.cursor = c; break
#
#            if i != 0 and self.includeDelimiter: o.append(d)
#            o.append(t)
#            i += 1
#        if not done and self.until:
#            if self.until[1]: raise ParseError(parser, self.until[1])
#            else: raise NotMatched
#        if len(o) >= self.min:
#            return self.preprocessResult(parser, o, origCursor)
#        else:
#            raise NotMatched
#
#class Defer (AbstractToken):
#    '''
#    A token which takes a callable (generally a lambda) which takes no arguments and itself returns a token.
#    A Defer instance, upon being called, will call this function, scan for the returned token, and return its result. This is primarily intended to allow you to define tokens recursively; if you need to refer to a token that hasn't been defined yet, simply use C{Defer(lambda: T_SOME_TOKEN)}, where C{T_SOME_TOKEN} is the token's eventual name.
#    '''
#
#    def __init__(self, func, callback=None, as=None, name=None):
#        AbstractToken.__init__(self, func, callback, as, name)
#
#    def __call__(self, parser, origCursor):
#        t = parser.scan(self.desc())
#        if parser.last is None: raise NotMatched
#        return t
#
#class Skip (AbstractToken):
#    '''
#    See L{TokenSequence}.
#    '''
#    def __call__(self, parser, origCursor):
#        parser.skip(self.desc)
#
#class Omit (AbstractToken):
#    '''
#    See L{TokenSequence}.
#    '''
#    def __call__(self, parser, origCursor):
#        if not parser.skip(self.desc): raise NotMatched
#
#class _EOF (AbstractToken):
#    def __call__(self, parser, origCursor):
#        if parser.cursor != parser.len: raise NotMatched
#EOF = _EOF(None)
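The core of the TokenSeries loop above is: optionally require a delimiter before every item after the first, and backtrack to the iteration's starting cursor if any part of the iteration fails. The following standalone sketch illustrates that shape with plain regexes; `series` is a hypothetical helper written for this document, not part of the library:

```python
import re

def series(data, item, delimiter, pos=0):
    """Match item (delimiter item)* starting at pos; return (values, new_pos)."""
    out = []
    while True:
        start = pos  # remember where this iteration began, to backtrack
        if out:  # after the first item, require a delimiter
            m = re.compile(delimiter).match(data, pos)
            if not m:
                break
            pos = m.end()
        m = re.compile(item).match(data, pos)
        if not m:
            pos = start  # backtrack past a dangling delimiter
            break
        out.append(m.group(0))
        pos = m.end()
    return out, pos

values, end = series("1, 2, 3; rest", item=r"\d+", delimiter=r",\s*")
assert values == ["1", "2", "3"]
assert end == 7  # stopped just after "3", before "; rest"
```

TokenSeries generalizes this by also threading `skip`, `prefix`, `postfix`, and the `until` terminator through each iteration.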
==> ZestyParser/DebuggingParser.py <==
# ZestyParser 0.8.1 -- Parses in Python zestily
# Copyright (C) 2006-2007 Adam Atlas
# (MIT license header omitted; identical to the one in ZestyParser/__init__.py)

'''
@version: 0.8.1
@author: Adam Atlas
@copyright: Copyright 2006-2007 Adam Atlas. Released under the MIT license (see LICENSE.txt).
@contact: adam@atlas.st
'''

import Parser, sys

#todo - configurable levels of verbosity?
#use `logging` stdlib module?

class DebuggingParser (Parser.ZestyParser):
    '''
    A L{Parser.ZestyParser} subclass which is useful for debugging parsers. It parses as usual, but it also prints a comprehensive trace to stderr.
    '''
    depth = -1

    def __init__(self, *a, **k):
        self.dest = k.pop('dest', sys.stderr)
        Parser.ZestyParser.__init__(self, *a, **k)

    def scan(self, token):
        self.depth += 1
        ind = ' | ' * self.depth
        self.dest.write('%sBeginning to scan for %r at position %i\n' % (ind, token, self.cursor))
        r = Parser.ZestyParser.scan(self, token)
        if self.last:
            self.dest.write('%sGot %r -- now at %i\n' % (ind, r, self.cursor))
        else:
            self.dest.write("%sDidn't match\n" % ind)
        self.depth -= 1
        return r

    def skip(self, token):
        self.depth += 1
        ind = ' | ' * self.depth
        self.dest.write('%sBeginning to skip %r at position %i\n' % (ind, token, self.cursor))
        r = Parser.ZestyParser.skip(self, token)
        if r:
            self.dest.write('%sMatched -- now at %i\n' % (ind, self.cursor))
        else:
            self.dest.write("%sDidn't match\n" % ind)
        self.depth -= 1
        return r

    def iter(self, token, *args, **kwargs):
        self.depth += 1
        ind = ' | ' * self.depth
        self.dest.write('%sBeginning to iterate %r at position %i\n' % (ind, token, self.cursor))
        i = Parser.ZestyParser.iter(self, token, *args, **kwargs)
        while 1:
            self.dest.write('%sIterating\n' % ind)
            yield i.next()
        self.dest.write('%sDone iterating -- now at %i\n' % (ind, self.cursor))
        self.depth -= 1
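The indentation scheme — `depth` starts at -1 and each nested scan adds one `' | '` to the prefix — is easy to see in a toy reproduction. This is a hypothetical standalone sketch of the pattern, not the library's API; `Tracer.scan` just recurses over a hand-written call tree instead of real tokens:

```python
import io

class Tracer:
    """Minimal sketch of DebuggingParser's indented-trace scheme."""
    depth = -1

    def __init__(self, dest):
        self.dest = dest

    def scan(self, name, subcalls=()):
        self.depth += 1
        ind = ' | ' * self.depth
        self.dest.write('%sBeginning to scan for %s\n' % (ind, name))
        for sub in subcalls:
            self.scan(*sub)  # nested scans print one level deeper
        self.dest.write('%sGot %s\n' % (ind, name))
        self.depth -= 1

buf = io.StringIO()
t = Tracer(buf)
t.scan('sequence', [('number',), ('operator',)])
# The trace written to buf:
#   Beginning to scan for sequence
#    | Beginning to scan for number
#    | Got number
#    | Beginning to scan for operator
#    | Got operator
#   Got sequence
```

Because the depth counter is restored on the way out, the trace lines pair up like nested brackets, which makes it easy to see where a grammar backtracked.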
==> ZestyParser/Helpers.py <==
# ZestyParser 0.8.1 -- Parses in Python zestily
# Copyright (C) 2006-2007 Adam Atlas
# (MIT license header omitted; identical to the one in ZestyParser/__init__.py)

'''
@version: 0.8.1
@author: Adam Atlas
@copyright: Copyright 2006-2007 Adam Atlas. Released under the MIT license (see LICENSE.txt).
@contact: adam@atlas.st
'''

from Tokens import *
import re, Tokens, Parser

__all__ = ['_next_', '_this_', '_top_', '_sp_', 'UNARY', 'BINARY', 'LEFT', 'CENTER', 'RIGHT', 'oper', 'Float', 'Int', 'ExpressionHelper', 'QuoteHelper', 'EscapeHelper', 'FromOct', 'FromHex', 'PyEsc', 'SameChar', 'EscCh', 'IndentHelper', 'EncloseHelper']

_ = Placeholder()
_next_ = _('next')
_this_ = _('this')
_top_ = _('top')
_sp_ = _('sp')

UNARY = 1
BINARY = 2
LEFT = -1
CENTER = 0
RIGHT = 1

def _funcmap(f):
    def newfunc(arg):
        return f(*arg)
    return newfunc

class Oper (TokenSequence): pass

def oper(symbol, operation=None, ops=BINARY, pos=CENTER):
    if isinstance(symbol, basestring):
        symtok = Omit(Raw(symbol))
    else:
        symtok = symbol
    if ops == BINARY:
        if pos == LEFT: tok = Oper([symtok, _next_, _this_])
        elif pos == CENTER: tok = Oper([_next_, symtok, _this_])
        elif pos == RIGHT: tok = Oper([_next_, _this_, symtok])
    elif ops == UNARY:
        if pos in (LEFT, CENTER): tok = Oper([symtok, _next_])
        elif pos == RIGHT: tok = Oper([_next_, symtok])
    tok = Tokens._pad(_sp_, tok)
    if operation: tok.callback = _funcmap(operation)
    tok.name = '' % symbol
    return tok

Float = RE('[0-9]*\.[0-9]+', group=0, to=float)
Int = RE('[0-9]+', group=0, to=int)

def ExpressionHelper(toks, space=Whitespace):
    toks = [toks[0]] + [(t|_next_) for t in toks]
    for i in range(1, len(toks)):
        toks[i] %= dict(next=toks[i-1], this=toks[i], top=toks[-1])
    if space:
        for i, tok in enumerate(toks):
            if not isinstance(tok, Oper):
                toks[i] = Tokens._pad(_sp_, tok)
        toks[-1] %= {_sp_: space}
    return toks[-1]

# Quote helper

def QuoteHelper(esc='\\', quotes=('"', "'"), allowed='.', *a, **k):
    r = []
    for q in quotes:
        if len(q) == 2:
            left, right = q[0], q[1]
        else:
            left = right = q[0]
        if isinstance(left, basestring): left = Raw(left)
        if isinstance(right, basestring): right = Raw(right) ^ True
        o = left + Only(RE(r'(?:%s%s|[^%s])*' % (re.escape(esc), allowed, re.escape(right.desc)), group=0)) + right
        r.append(o)
    default_callback = EscapeHelper((EscCh(symbol=esc, anything=True), SameChar))
    return CompositeToken(r, callback=default_callback, *a, **k)

# Escape helper

class EscapeHelper:
    def __init__(self, *escapes):
        self.escapes = [(re.compile(t[0]), t[1]) for t in escapes]

    def __call__(self, val):
        if hasattr(val, 'group'): val = val.group(0)
        for (regex, replace) in self.escapes:
            val = regex.sub(replace, val)
        return val

def FromOct(m): return chr(int(m.group(1), 8))
def FromHex(m): return chr(int(m.group(1), 16))
def PyEsc(m): return eval('"\\%s"' % m.group(1))
def SameChar(m): return m.group(1)

def EscCh(chars='', symbol='\\', anything=False):
    symbol = re.escape(symbol)
    chars = [re.escape(c) for c in chars]
    return (anything and ('%s(.)') or (r'%%s(%s)' % '|'.join(chars))) % symbol

# Indent helper

class IndentationLevel (AbstractToken):
    def __init__(self, depth, space, tabwidth=8, *a, **k):
        AbstractToken.__init__(self, depth, *a, **k)
        self.space = space
        self.tabwidth = tabwidth

    def __call__(self, parser, origCursor):
        if not parser.skip(self.space): raise NotMatched
        i = parser.data.rfind('\n', 0, parser.cursor) + 1
        level = len(parser.data[i:parser.cursor].expandtabs(self.tabwidth))
        if level < self.desc:
            raise Parser.NotMatched
        elif level > self.desc:
            parser.cursor = origCursor + self.desc
        return True

class IndentHelper (Tokens.SingleReplacing, AbstractToken):
    def __init__(self, desc, space='[ \t]*', tabwidth=8, skip=None, *a, **k):
        AbstractToken.__init__(self, desc, *a, **k)
        if isinstance(space, basestring): space = RE(space, group=0)
        self.space = space
        self.tabwidth = tabwidth
        self.skip = skip
        self.key = id(IndentHelper)

    def __call__(self, parser, origCursor):
        prev_level = parser.context.setdefault(self.key, -1)
        parser.scan(self.space)
        last_nl = parser.data.rfind('\n', 0, parser.cursor) + 1
        new_level = len(parser.data[last_nl:parser.cursor].expandtabs(self.tabwidth))
        if prev_level >= new_level: raise Parser.NotMatched
        parser.context[self.key] = new_level
        parser.cursor = origCursor
        out = []
        t_line = IndentationLevel(new_level, self.space, self.tabwidth) + Only(self.desc)
        while 1:
            if self.skip: parser.skip(self.skip)
            line = parser.scan(t_line)
            if not parser.last: break
            out.append(line)
        if self.skip: parser.skip(self.skip)
        parser.context[self.key] = prev_level
        if not len(out): raise Parser.NotMatched
        return out

# Enclose helper

class EncloseHelper (Tokens.SingleReplacing, AbstractToken):
    def __init__(self, pairs, content, space=Whitespace, *a, **k):
        AbstractToken.__init__(self, content, *a, **k)
        self.ptoks = []
        self.space = space
        if not isinstance(pairs, list): pairs = [pairs]
        for start, end in pairs:
            if isinstance(start, basestring): start = Raw(start)
            if isinstance(end, basestring): end = Raw(end)
            self.ptoks.append((start, end ^ True))

    def __call__(self, parser, origCursor):
        for start, end in self.ptoks:
            if not parser.skip(start): continue
            if self.space is Whitespace: parser.skip(parser.whitespace)
            elif self.space: parser.skip(self.space)
            v = parser.scan(self.desc)
            if not parser.last: continue
            if self.space is Whitespace: parser.skip(parser.whitespace)
            elif self.space: parser.skip(self.space)
            parser.skip(end)
            return v
        raise Parser.NotMatched
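EscapeHelper's mechanism — run a sequence of (pattern, replacement-function) pairs over matched text with `re.sub` — works on its own, outside the library. This is a standalone Python 3 sketch; the `escapes` list below is hypothetical (the library would build its patterns with `EscCh`), but the replacement functions mirror `FromHex` and `SameChar`:

```python
import re

# (pattern, replacement-function) pairs, applied in order.
escapes = [
    (re.compile(r'\\x([0-9a-fA-F]{2})'), lambda m: chr(int(m.group(1), 16))),  # like FromHex
    (re.compile(r'\\(.)'), lambda m: m.group(1)),                              # like SameChar
]

def unescape(val):
    """Apply each escape rule to val in turn, as EscapeHelper.__call__ does."""
    for regex, replace in escapes:
        val = regex.sub(replace, val)
    return val

assert unescape(r'say \"hi\"') == 'say "hi"'
assert unescape(r'\x41\x42') == 'AB'
```

Order matters: the specific `\xNN` rule must run before the catch-all `\c` rule, otherwise `\x41` would collapse to `x41`.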
# ZestyParser/Parser.py
# (The top of this file was mangled in extraction; the module docstring, the
# Error/NotMatched/ParseError classes, and CallbackFor are reconstructed from
# the docstrings embedded in Parser.pyc. Class bases are a reconstruction.)

'''
@version: 0.8.1
@author: Adam Atlas
@copyright: Copyright 2006-2007 Adam Atlas. Released under the MIT license (see LICENSE.txt).
@contact: adam@atlas.st
'''

__all__ = ('ZestyParser', 'NotMatched', 'ParseError', 'CallbackFor', 'DebugNote')

class Error (Exception):
    pass

class NotMatched (Error):
    '''Raised by a token if it has failed to match at the parser's current cursor.'''

class ParseError (Error):
    '''Raised by a token to indicate that a parse error has occurred.'''
    def __init__(self, parser, message):
        '''
        @param parser: The parser instance that encountered the error.
        @type parser: ZestyParser
        @param message: A message explaining the error.
        @type message: str
        '''
        self.parser, self.message, self.coord = parser, message, parser.coord()

    def __str__(self):
        '''Prints the error message and the row and column corresponding to the parser's cursor.'''
        return '%s at line %i column %i' % (self.message, self.coord[0], self.coord[1])

def CallbackFor(token):
    '''
    Function decorator indicating that the function should be set as the callback of the given token; returns the token instead of the function.

    Example::

        @CallbackFor(Token('([0-9]+)'))
        def T_INT(r): print r

    This is equivalent to::

        def T_INT(r): print r
        T_INT = Token('([0-9]+)', callback=T_INT)
    '''
    def newFunc(func):
        return token >> func
    return newFunc

def DebugNote(note):
    '''
    Factory which creates a logging function usable as a token callback. The logging function prints L{note}, the parser's coordinates, and the value matched. It then returns the value unmodified.

    Example::

        T_FOO = TokenSeries(RawToken('foo')) >> DebugNote('foo_series')
    '''
    def func(parser, val, cursor):
        coord = parser.coord()
        print '%s(%i, %i): %s' % (note, coord[0], coord[1], val)
        return val
    return func

class ZestyParser:
    '''
    Parses one stream of data, by means of L{tokens}.

    @ivar context: A dictionary which can be used for storing any necessary state information.
    @type context: dict
    @ivar data: The sequence being parsed (probably a string).
    @type data: sequence
    @ivar cursor: The current position of the parser in L{data}.
    @type cursor: int
    @ivar last: The last matched token.
    @type last: L{token}
    '''
    context = {}
    data = None
    cursor = 0
    len = 0
    last = None
    whitespace = None

    def __init__(self, data=None):
        '''Initializes the parser, optionally calling L{useData}'''
        if data:
            self.useData(data)
        from Tokens import RE
        self.whitespace = RE('\s+')

    def useData(self, data):
        '''
        Begin parsing a stream of data

        @param data: The data to parse.
        @type data: sequence
        '''
        self.data = data
        self.cursor = 0
        self.len = len(data)

    def scan(self, token):
        '''
        Scan for one token.

        @param token: The token to scan for.
        @return: The return value of the matching token, or None if the token raised NotMatched.
        @rtype: object
        @raise ParseError: If a token fails to match and it has a failMessage parameter.
        '''
        oldCursor = self.cursor
        try:
            r = getattr(token, 'parse', token)(self, oldCursor)
            self.last = token
            return r
        except NotMatched:
            self.cursor = oldCursor
            self.last = None
            return None

    def skip(self, token):
        '''
        A convenience method that skips one token and returns whether it matched.

        @param token: The token to scan for.
        @type token: token
        @return: Whether or not the token matched.
        @rtype: bool
        '''
        oldCursor = self.cursor
        try:
            getattr(token, 'parse', token)(self, oldCursor)
            return token
        except NotMatched:
            self.cursor = oldCursor

    def iter(self, token, skip=None, until=None):
        '''
        Returns a generator iterator which scans for L{token} every time it is invoked.

        @param token: The token to scan for.
        @param skip: An optional token to L{skip} before each L{scan} for L{token}.
        @type skip: token
        @param until: An optional 2-tuple. If defined, the iterator will scan for L{token} until it reaches the token C{until[0]}; if L{scan} returns C{None} before the iterator encounters this token, it raises a L{ParseError} with the message given in C{until[1]}.
        @type until: tuple
        @rtype: iterator
        '''
        while 1:
            if skip:
                self.skip(skip)
            if until and self.skip(until[0]):
                break
            r = self.scan(token)
            if self.last:
                yield r
            elif until:
                if until[1] is True:
                    raise ParseError(self, 'Expected %s' % str(until[0]))
                else:
                    raise ParseError(self, until[1])
            else:
                break

    def coord(self, loc=None):
        '''
        Returns row/column coordinates for a given point in the input stream, or L{cursor} by default. Counting starts at C{(1, 1)}.

        @param loc: An index of L{data}.
        @type loc: int
        @return: A 2-tuple representing (row, column).
        @rtype: tuple
        '''
        if loc is None:
            loc = self.cursor
        row = self.data.count('\n', 0, loc) + 1
        col = loc - self.data.rfind('\n', 0, loc)
        return (row, col)
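All backtracking in the library runs through `scan` and `skip`: a token callable either advances `parser.cursor` or raises `NotMatched`, in which case the saved cursor is restored. A minimal, self-contained sketch of that contract in modern Python (not ZestyParser itself; `MiniParser` and `literal` are illustrative names):

```python
class NotMatched(Exception):
    pass

class MiniParser:
    """Sketch of ZestyParser.scan's save/restore-cursor contract."""
    def __init__(self, data):
        self.data, self.cursor, self.last = data, 0, None

    def scan(self, token):
        old = self.cursor
        try:
            r = token(self, old)
            self.last = token          # record the token that matched
            return r
        except NotMatched:
            self.cursor = old          # failure never moves the cursor
            self.last = None
            return None

def literal(s):
    """A token: match the exact string s at the current position."""
    def token(parser, start):
        end = start + len(s)
        if parser.data[start:end] != s:
            raise NotMatched
        parser.cursor = end
        return s
    return token

p = MiniParser('foobar')
assert p.scan(literal('xyz')) is None and p.cursor == 0  # failed: cursor restored
assert p.scan(literal('foo')) == 'foo' and p.cursor == 3  # matched: cursor advanced
```

Because failure is signalled by an exception rather than a sentinel return value, `scan` can distinguish "matched and returned None" (check `parser.last`) from "did not match".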
# ZestyParser/Tags.py
# ZestyParser 0.8.1 -- Parses in Python zestily
# Copyright (C) 2006-2007 Adam Atlas
#
# Permission is hereby granted, free of charge, to any person obtaining a copy
# of this software and associated documentation files (the "Software"), to deal
# in the Software without restriction, including without limitation the rights
# to use, copy, modify, merge, publish, distribute, sublicense, and/or sell
# copies of the Software, and to permit persons to whom the Software is
# furnished to do so, subject to the following conditions:
#
# The above copyright notice and this permission notice shall be included in
# all copies or substantial portions of the Software.
#
# THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
# IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
# FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT.
# IN NO EVENT SHALL THE
# AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER
# LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM,
# OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN
# THE SOFTWARE.

'''
@version: 0.8.1
@author: Adam Atlas
@copyright: Copyright 2006-2007 Adam Atlas. Released under the MIT license (see LICENSE.txt).
@contact: adam@atlas.st

Tags is a utility module providing an easy way to label objects in abstract parse trees without defining a class for each one. It supersedes the L{AHT} module.

This module provides a global "Tags" object; you create a tag by accessing any attribute on it. A tag is a callable object, suitable for use as an L{AbstractToken} C{to} parameter, or, if it is more convenient (e.g. when you must use C{>>}), a callback. Later, you can check if a given tag has been applied to an object by checking for membership with C{in}. For example::

    >>> l = [1, 2, 3]
    >>> l in Tags.thing
    False
    >>> Tags.thing(l)
    [1, 2, 3]
    >>> l in Tags.thing
    True
'''

__all__ = ('Tags',)

class _Env (object):
    '''
    @see: L{Tags}
    '''
    _tagobjs = {}

    def __getattr__(self, attr):
        if attr not in self._tagobjs:
            self._tagobjs[attr] = _Tag(attr)
        return self._tagobjs[attr]

Tags = _Env()

class _Tag (object):
    def __init__(self, name):
        self.name = name
        self.objects = []

    def __call__(self, arg):
        self.objects.append(arg)
        return arg

    def __contains__(self, arg):
        return (arg in self.objects)

    def __repr__(self):
        return '<%s>' % self.name  # format string reconstructed; the angle-bracketed text was lost in extraction
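The create-on-access trick in Tags hinges on `__getattr__`, which Python only calls for attributes that are *not* found normally, so each tag is created once and then cached. A runnable modern-Python condensation of the same pattern (mirrors the module above but caches per instance rather than at class level, and is not a drop-in replacement):

```python
class _Tag:
    """A named label: call it to tag an object, use `in` to test membership."""
    def __init__(self, name):
        self.name = name
        self.objects = []

    def __call__(self, arg):
        self.objects.append(arg)
        return arg                    # tagging returns the object unchanged

    def __contains__(self, arg):
        return arg in self.objects

class _Env:
    def __init__(self):
        self._tagobjs = {}

    def __getattr__(self, attr):
        # only reached for attributes not already present -> create-on-access
        return self._tagobjs.setdefault(attr, _Tag(attr))

Tags = _Env()
l = [1, 2, 3]
assert (l in Tags.thing) is False
assert Tags.thing(l) is l
assert l in Tags.thing
```

Note that membership uses list `in`, i.e. equality, so an equal-but-distinct list would also test positive; the original module shares this behaviour.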
# ZestyParser/Tokens.py
# ZestyParser 0.8.1 -- Parses in Python zestily
# Copyright (C) 2006-2007 Adam Atlas
# Released under the MIT license; see LICENSE.txt and the identical header in Tags.py.
'''
@group Basic Tokens: Raw,RE,RawToken,Token,TakeToken
@group Complex Tokens: CompositeToken,TokenSequence,TokenSeries
@group Special Tokens: Defer,Default,Lookahead,Negative
@group TokenSequence Flags: Omit,Skip,Only
@version: 0.8.1
@author: Adam Atlas
@copyright: Copyright 2006-2007 Adam Atlas. Released under the MIT license (see LICENSE.txt).
@contact: adam@atlas.st
@var EmptyToken: A L{Default} instance initialized with the empty string.
@var EOF: A token which matches (and returns C{None}) if the parser is at the end of its L{data} sequence.

In ZestyParser, a token object must, at minimum, be a callable taking a L{ZestyParser} instance and its current L{cursor} as parameters. It can do whatever it needs with the parser's L{data} and L{cursor} properties before returning. It may raise L{NotMatched} to indicate to the L{ZestyParser} instance that it failed to match; it may also raise L{ParseError} to indicate, for instance, that it began matching successfully but encountered an unrecoverable error.

The L{Tokens} module contains a variety of predefined token classes (instances of which are callable) and other valid token objects which should cover most parsing situations.
'''

import re, copy, types, warnings
from Parser import NotMatched, ParseError

__all__ = ('Placeholder', 'AbstractToken', 'TokenWrapper', 'RE', 'Raw', 'Token', 'RawToken', 'CompositeToken', 'TokenSequence', 'TakeToken', 'TokenSeries', 'EmptyToken', 'Default', 'Skip', 'Omit', 'Only', 'Defer', 'Lookahead', 'Negative', 'EOF', 'Whitespace', 'Const', 'Inf')

rstack = []
replstack = []
Inf = -1

def count_args(callable):
    t = type(callable)
    if t is types.FunctionType:
        return callable.func_code.co_argcount
    elif t is types.ClassType:
        return callable.__init__.im_func.func_code.co_argcount - 1
    elif t is types.InstanceType:
        return callable.__call__.im_func.func_code.co_argcount - 1
    elif t is types.MethodType:
        return callable.im_func.func_code.co_argcount - 1
    # assume it's some builtin that only takes the data itself as a parameter
    return 1

class Placeholder:
    def __init__(self, key=None):
        self.key = key

    def __eq__(self, key):
        return self.key == key

    def __hash__(self):
        return hash(self.key)

    def __call__(self, key=None):
        return Placeholder(key)

    def _single_replace(cls, subj, vals, kwvals):
        if isinstance(subj, cls):
            if vals and subj == None:
                return vals.pop(0)
            elif subj in kwvals and subj != None:
                return kwvals[subj]
        elif isinstance(subj, AbstractToken):
            subj._replace(vals, kwvals)
        return subj
    _single_replace = classmethod(_single_replace)

    def _list_replace(cls, subj, vals, kwvals):
        for i, v in enumerate(subj):
            subj[i] = cls._single_replace(v, vals, kwvals)
    _list_replace = classmethod(_list_replace)

    def __repr__(self):
        return '<%s>' % self.key  # format string reconstructed; the angle-bracketed text was lost in extraction

class ListReplacing:
    def _replace(self, vals, kwvals):
        if self not in replstack:
            replstack.append(self)
            Placeholder._list_replace(self.desc, vals, kwvals)
            replstack.pop()

class SingleReplacing:
    def _replace(self, vals, kwvals):
        if self not in replstack:
            replstack.append(self)
            self.desc = Placeholder._single_replace(self.desc, vals, kwvals)
            replstack.pop()

class AbstractToken (object):
    '''
    Base class from which most tokens defined in this module
    derive. Subclassing this is not required for writing tokens, since they can be any callable with certain semantics, but this class provides several useful services for creating reusable token classes, such as callback support and convenient operator overloading.

    @ivar desc: The generic "description" variable which stores the "essence" of any given instance. Subclasses use this as needed.
    @ivar callback: An optional callable which, if not None, will be called whenever an instance matches successfully. It may take one, two, or three parameters, depending on its needs. If one, it will be passed whatever data the token matched (i.e. whatever it would normally have returned upon being called). If two, it will be passed the L{ZestyParser} instance and the data. If three, it will be passed the parser, the data, and what the parser's cursor was when this token started matching. Callbacks may raise L{NotMatched} or L{ParseError} with the usual behaviour. They should also return a value, which will be returned to the calling L{ZestyParser} instance.
    @ivar to: An optional callable which, if not None, will be called in the same manner as a callback (after any callback and before returning to the parser instance), but will be passed only one argument: the data matched (or returned by the callback, if any). Its main purpose is to allow you to concisely do things like C{Token('[0-9]+', group=0, to=int)} -- the builtin callable C{int} will be passed the text matched by the regex, so the token will ultimately return an integer instead of a string or a regex match object. You can also use this property with L{AHT} types, for more complex multi-stage parsing. See the C{n3.py} and C{n3rdflib.py} examples for a demonstration of this. (In previous versions, this was passed to the initializer as C{as}, but this is deprecated because C{as} will become a reserved word in Python 2.6. Change your code to use L{to}.)
    '''
    name = None
    failMessage = None
    callback = None
    to = None  # 'as' is deprecated in favor of 'to' since it's becoming a reserved word

    def __init__(self, desc, callback=None, to=None, as=None, name=None):
        self.desc = desc
        self.callback = callback
        self.to = to or as
        if as:
            warnings.warn('`as` argument is deprecated; use `to`', DeprecationWarning, stacklevel=2)
        self.name = name

    def __repr__(self):
        return '%s %s' % (self.__class__.__name__, (self.name or str(self)))

    def __str__(self):
        return ''

    def _make_callbackrun(self, func, callback):
        argcount = count_args(callback)
        if argcount == 1:
            def f(parser, origCursor):
                return callback(func(parser, origCursor))
        elif argcount == 2:
            def f(parser, origCursor):
                return callback(parser, func(parser, origCursor))
        elif argcount == 3:
            def f(parser, origCursor):
                return callback(parser, func(parser, origCursor), origCursor)
        return f

    def _make_torun(self, func):
        def f(parser, origCursor):
            return self.to(func(parser, origCursor))
        return f

    def _make_failcheck(self, func):
        def f(parser, origCursor):
            try:
                data = func(parser, origCursor)
                return data
            except NotMatched:
                if self.failMessage is True:
                    raise ParseError(parser, 'Expected %s' % str(self))
                elif self.failMessage:
                    raise ParseError(parser, self.failMessage)
                else:
                    raise
        return f

    def _poke(self):
        c = self.__call__
        if self.callback:
            c = self._make_callbackrun(c, self.callback)
        if self.to:
            c = self._make_torun(c)
        if self.failMessage:
            c = self._make_failcheck(c)
        if c is self.__call__ and not isinstance(self, Defer):
            self.parse = None
            del self.parse
        else:
            self.parse = c

    def __copy__(self):
        n = self.__class__.__new__(self.__class__)
        n.__dict__.update(self.__dict__)
        n._poke()
        n.desc = copy.copy(self.desc)
        return n

    def __setattr__(self, name, value):
        super(AbstractToken, self).__setattr__(name, value)
        if name in ('callback', 'failMessage', 'to'):
            self._poke()

    def __add__(self, other):
        '''Allows you to construct L{TokenSequence}s with the + operator.'''
        return TokenSequence([self, other])

    def __sub__(self, other):
        '''Allows you to construct L{TokenSequence}s with the - operator, automatically padded with L{Whitespace}. I realize it's a bit weird to use the - operator for this, but the main motivation is giving it the same precedence as +. Still, you can read it as a sort of "blank" (which is what the left and right tokens are being joined by), instead of "minus".'''
        return TokenSequence([self, Whitespace, other])

    def __or__(self, other):
        '''Allows you to construct L{CompositeToken}s with the | operator.'''
        return CompositeToken([self, other])

    def __mul__(self, val):
        '''Allows you to construct L{TokenSeries} with the * operator. Operand can be:
         - int (a series of exactly this many)
         - (int, ) (a series of at least this many)
         - (x:int, y:int) (a series of x to y)

        The constant Inf can be used in some of these -- C{* Inf} yields a 0--infinity series, and C{* (x, Inf)} yields an x--infinity series.
        '''
        t = TokenSeries(self)
        if isinstance(val, int):
            t.min = t.max = val
        elif isinstance(val, tuple):
            if len(val) == 2:
                t.min, t.max = val
            elif len(val) == 1:
                t.min = val[0]
        return t
    __rmul__ = __mul__

    def __rshift__(self, callback):
        '''
        Convenience overloading for setting the L{callback} of a token whose initializer you do not call directly, such as the result of combining tokens with L{+<__add__>} or L{|<__or__>}.

        @param callback: An L{AbstractToken}-compatible callback.
        @type callback: callable
        @return: A copy of C{self} with the L{callback} property set to C{callback}.
        '''
        new = copy.copy(self)
        new.callback = callback
        return new

    def __xor__(self, message):
        '''
        Overloading for setting the L{failMessage} of a token.

        @param message: The message to be raised with L{ParseError} if this token fails to match.
        @type message: str
        @return: A copy of C{self} with the L{failMessage} property set to C{message}.
        '''
        new = copy.copy(self)
        new.failMessage = message
        return new

    def __invert__(self):
        return Negative(self)

    def _replace(self, vals, kwvals):
        pass

    def __imod__(self, val):
        if isinstance(val, (tuple, list)):
            self._replace(val, {})
        elif isinstance(val, dict):
            self._replace([], val)
        else:
            self._replace([val], {})
        return self

    def __mod__(self, val):
        new = copy.copy(self)
        new %= val
        return new

class TokenWrapper (AbstractToken):
    '''If you write your own token type in a way other than subclassing AbstractToken, e.g. by simply writing a function, you can use this as a decorator to automatically let it take advantage of AbstractToken's magic.'''
    def __call__(self, parser, origCursor):
        t = parser.scan(self.desc)
        if parser.last:
            return t
        else:
            raise NotMatched

    def __str__(self):
        return repr(self.desc)

class RE (AbstractToken):
    '''
    A class whose instances match Python regular expressions.

    @ivar group: If defined, L{__call__} returns that group of the regular expression match instead of the whole match object.
    @type group: int
    '''
    def __init__(self, regex, group=None, **kwargs):
        '''
        @param regex: Either a compiled regex object or a string regex.
        @param group: To be set as the object's L{group} property.
        @type group: int
        '''
        if not hasattr(regex, 'match'):
            regex = re.compile(regex, re.DOTALL)
        super(RE, self).__init__(regex, **kwargs)
        if group is not None:
            try:
                group = int(group)
            except ValueError:
                raise ValueError('got non-numeric value for `group`: ' + group)
        self.group = group

    def __call__(self, parser, origCursor):
        matches = self.desc.match(parser.data, origCursor)
        if matches is None:
            raise NotMatched
        parser.cursor = matches.end()
        if self.group is not None:
            matches = matches.group(self.group)
        return matches

    def __str__(self):
        return repr(self.desc.pattern)

Token = RE

class Raw (AbstractToken):
    '''
    A class whose instances match only a particular string. Returns that string.

    @ivar caseInsensitive: If true, ignores case.
    @type caseInsensitive: bool
    '''
    def __init__(self, string, caseInsensitive=False, **kwargs):
        '''
        @param string: The string to match.
        @type string: str
        @param caseInsensitive: To be set as the object's L{caseInsensitive} property.
        @type caseInsensitive: bool
        '''
        super(Raw, self).__init__(string, **kwargs)
        self.len = len(string)
        self.caseInsensitive = caseInsensitive
        if caseInsensitive:
            self.desc = self.desc.lower()

    def __call__(self, parser, origCursor):
        end = origCursor + self.len
        d = parser.data[origCursor:end]
        if (not self.caseInsensitive and d == self.desc) or (self.caseInsensitive and d.lower() == self.desc):
            parser.cursor = end
            return d
        else:
            raise NotMatched

    def __str__(self):
        return repr(self.desc)

RawToken = Raw

class Default (AbstractToken):
    '''
    A class whose instances always return L{desc} and do not advance the parser's cursor.
    '''
    def __call__(self, parser, origCursor):
        return self.desc

    def __str__(self):
        return repr(self.desc)

EmptyToken = Default('')

class CompositeToken (ListReplacing, AbstractToken):
    '''
    A class whose instances match any of a number of tokens.

    @ivar desc: A list of token objects.
    @type desc: list
    '''
    def __call__(self, parser, origCursor):
        for t in self.desc:
            r = parser.scan(t)
            if parser.last:
                return r
        raise NotMatched

    def __str__(self):
        if self in rstack:
            return '...'
        else:
            rstack.append(self)
            d = '(' + ' | '.join([repr(t) for t in self.desc]) + ')'
            rstack.pop()
            return d

    def __or__(self, other):
        if hasattr(other, '__iter__'):
            return CompositeToken(self.desc + list(other))
        else:
            return CompositeToken(self.desc + [other])

    def __ior__(self, other):
        if hasattr(other, '__iter__'):
            self.desc += list(other)
        else:
            self.desc.append(other)
        return self

class TokenSequence (ListReplacing, AbstractToken):
    '''
    A class whose instances match a sequence of tokens. Returns a corresponding list of return values from L{ZestyParser.scan}.

    Some special types, L{Skip}, L{Omit}, and L{Only}, are allowed in the sequence.
    These are wrappers for other token objects adding special behaviours. If it encounters a L{Skip} token, it will process it with L{ZestyParser.skip}, ignore whether it matched, and not include it in the list. If it encounters an L{Omit} token, it will still require that it match (the default behaviour), but it will not be included in the list. If the sequence contains an L{Only} token, its result will be returned instead of the usual list, though it still requires that subsequent tokens match. Multiple L{Only} tokens are meaningless and L{TokenSequence}'s behavior in that case is undefined.

    @ivar desc: A list of token objects.
    @type desc: list
    '''
    def __call__(self, parser, origCursor):
        o = []
        only = False
        onlyVal = None
        for g in self.desc:
            if g is Whitespace:
                parser.skip(parser.whitespace)
            r = parser.scan(g)
            if parser.last is None:
                raise NotMatched
            if isinstance(g, Only):
                only = True
                onlyVal = r
                continue
            if not isinstance(g, (Skip, Omit, _Whitespace)) and not only:
                o.append(r)
        if only:  # heh
            return onlyVal
        else:
            return o

    def __str__(self):
        if self in rstack:
            return '...'
        else:
            rstack.append(self)
            d = '(' + ' + '.join([repr(t) for t in self.desc]) + ')'
            rstack.pop()
            return d

    def __add__(self, other):
        if hasattr(other, '__iter__'):
            return TokenSequence(self.desc + list(other))
        else:
            return TokenSequence(self.desc + [other])

    def __sub__(self, other):
        if hasattr(other, '__iter__'):
            return TokenSequence(self.desc + [Whitespace] + list(other))
        else:
            return TokenSequence(self.desc + [Whitespace, other])

    def __iadd__(self, other):
        if hasattr(other, '__iter__'):
            self.desc += list(other)
        else:
            self.desc.append(other)
        return self

class TakeToken (AbstractToken):
    '''
    A class whose instances match and return a given number of characters from the parser's L{data}. Raises L{NotMatched} if not enough characters are left.
    '''
    def __init__(self, length, **kwargs):
        super(TakeToken, self).__init__(length, **kwargs)

    def __call__(self, parser, start):
        end = start + self.desc
        if parser.len < end:
            raise NotMatched
        parser.cursor = end
        return parser.data[start:end]

class TokenSeries (SingleReplacing, AbstractToken):
    '''
    A particularly versatile class whose instances match one token multiple times (with a great degree of customizability).

    The properties L{skip}, L{prefix}, L{postfix}, and L{delimiter} are optional tokens which add structure to the series. It can be represented, approximately in the idioms of L{TokenSequence}, as follows::

        [Skip(skip) + Omit(prefix) + desc + Omit(postfix)] + [Skip(skip) + Omit(delimiter) + Skip(skip) + Omit(prefix) + desc + Omit(postfix)] + ... + Skip(skip)

    Or, if there is no delimiter::

        [Skip(skip) + Omit(prefix) + desc + Omit(postfix)] + ... + Skip(skip)

    @ivar desc: The token to match.
    @type desc: token
    @ivar min: The minimum number of times L{desc} must match.
    @type min: int
    @ivar max: The maximum number of times L{desc} will try to match.
    @type max: int
    @ivar skip: An optional token to skip between matches.
    @type skip: token
    @ivar prefix: An optional token to require (but omit from the return value) before each instance of L{token}.
    @type prefix: token
    @ivar postfix: An optional token to require (but omit from the return value) after each instance of L{token}.
    @type postfix: token
    @ivar delimiter: An optional token to require (but omit from the return value) between each instance of L{token}.
    @type delimiter: token
    @ivar until: An optional 2-tuple whose first item is a token, and whose second item is either a message or False. The presence of this property indicates that the token in C{until[0]} must match at the end of the series. If this fails, then if C{until[1]} is a message, a ParseError will be raised with that message; if it is False, NotMatched will be raised.
    '''
    def __init__(self, token, min=0, max=-1, skip=None, prefix=None, postfix=None, delimiter=None, until=None, includeDelimiter=False, **kwargs):
        super(TokenSeries, self).__init__(token, **kwargs)
        self.min, self.max, self.skip, self.prefix, self.postfix, self.delimiter, self.until, self.includeDelimiter = min, max, skip, prefix, postfix, delimiter, until, includeDelimiter

    def __call__(self, parser, origCursor):
        o = []
        i = 0
        done = False
        while i != self.max:
            c = parser.cursor
            if self.until and parser.skip(self.until[0]):
                done = True
                break
            if self.skip:
                parser.skip(self.skip)
            c = parser.cursor
            if i != 0 and self.delimiter is not None:
                d = parser.scan(self.delimiter)
                if parser.last is None:
                    break
                if self.skip:
                    parser.skip(self.skip)
            if self.prefix and not parser.skip(self.prefix):
                break
            t = parser.scan(self.desc)
            if parser.last is None:
                break
            if self.postfix and not parser.skip(self.postfix):
                break
            if i != 0 and self.includeDelimiter:
                o.append(d)
            o.append(t)
            i += 1
        if self.until and not done:
            parser.cursor = c
            if self.until[1]:
                if self.until[1] is True:
                    raise ParseError(parser, 'Expected %s' % self.until[0])
                else:
                    raise ParseError(parser, self.until[1])
            else:
                raise NotMatched
        if i >= self.min:
            return o
        else:
            raise NotMatched

    def __str__(self):
        return repr(self.desc)

class Defer (AbstractToken):
    '''
    A token which takes a callable (generally a lambda) which takes no arguments and itself returns a token. A Defer instance, upon being called, will call this function, scan for the returned token, and return that return value.

    This is primarily intended to allow you to define tokens recursively; if you need to refer to a token that hasn't been defined yet, simply use C{Defer(lambda: T_SOME_TOKEN)}, where C{T_SOME_TOKEN} is the token's eventual name.
    '''
    def __call__(self, parser, origCursor):
        self.__call__ = self.desc()
        self._poke()
        return self.parse(parser, origCursor)

def _pad(p, subj, outer=True):
    if isinstance(subj, TokenSequence):
        t = copy.copy(subj)
        t.desc = list(t.desc)
        start = (not outer) and 1 or 0
        stop = 2*(len(t.desc) + (outer and 1 or -1))
        for i in range(start, stop, 2):
            t.desc.insert(i, p)
        return t
    else:
        return TokenSequence([p, Only(subj), p])

class Skip (SingleReplacing, AbstractToken):
    ''' See L{TokenSequence}. '''
    def __call__(self, parser, origCursor):
        parser.skip(self.desc)

    def pad(self, tok, outer=True):
        '''
        Takes a TokenSequence and returns a copy with this L{Skip} token separating every element therein. Alternately, takes any other token and returns a TokenSequence equivalent to (self + Only(tok) + self).

        @param tok: The token sequence to operate on.
        @type tok: L{TokenSequence} or other token
        @param outer: If operating on a L{TokenSequence}, whether to also include self on either side of the sequence, in addition to between each element.
        @type outer: bool
        @return: L{TokenSequence}
        '''
        return _pad(self, tok, outer)

class Omit (SingleReplacing, AbstractToken):
    ''' See L{TokenSequence}. '''
    def __call__(self, parser, origCursor):
        if not parser.skip(self.desc):
            raise NotMatched

class Only (SingleReplacing, TokenWrapper):
    ''' See L{TokenSequence}. '''

class Lookahead (SingleReplacing, AbstractToken):
    '''
    Scans for another token and returns its result as usual, but doesn't actually advance the parser's cursor.
    '''
    def __call__(self, parser, origCursor):
        t = parser.scan(self.desc)
        parser.cursor = origCursor
        if parser.last is None:
            raise NotMatched
        return t

class Negative (SingleReplacing, AbstractToken):
    '''
    Scans for another token, and only matches (returning True) if that token did not match.
    '''
    def __call__(self, parser, origCursor):
        parser.scan(self.desc)
        parser.cursor = origCursor
        if parser.last:
            raise NotMatched
        return True

class _EOF (AbstractToken):
    '''
    Matches returning None if the parser is at the end of its input.
    '''
    def __call__(self, parser, origCursor):
        if parser.cursor != parser.len:
            raise NotMatched

EOF = _EOF(None)

class _Whitespace (AbstractToken):
    def __call__(self, parser, origCursor):
        parser.skip(parser.whitespace)

Whitespace = _Whitespace(None)

def Const(value):
    '''
    This is a factory function used for when you want an L{AbstractToken}-compatible callback that returns a constant. Example::

        RawToken('foo') >> Const('bar')

    This token matches 'foo' as usual, but always returns 'bar'.
    '''
    def f(r): return value
    return f
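A minimal standalone sketch of the C{Const} pattern above, in plain Python and independent of ZestyParser (names here are illustrative): the factory's returned one-argument callable ignores whatever matched data it is handed and always yields the fixed value.

```python
# Standalone sketch of the Const pattern (illustrative, not the ZestyParser API):
# a factory whose returned one-argument callable ignores its input entirely.
def const(value):
    def f(result):      # `result` would be the token's matched data...
        return value    # ...but the constant always wins
    return f

to_bar = const('bar')
print(to_bar('foo'))    # prints 'bar', whatever was matched
```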
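The C{Defer} idiom above can be sketched standalone, hypothetically and apart from ZestyParser's own implementation: wrapping a zero-argument callable lets a token mention a name that is only bound later, which is what makes recursive grammar definitions possible.

```python
# Hypothetical standalone sketch of the Defer idiom (not ZestyParser's code):
# wrap a zero-argument callable so a token can refer to a later-defined name.
class Defer(object):
    def __init__(self, thunk):
        self.thunk = thunk          # zero-argument callable returning a token
    def __call__(self, *args):
        return self.thunk()(*args)  # resolve the real token, then invoke it

# T_ITEM can mention T_LIST before T_LIST is bound:
T_ITEM = Defer(lambda: T_LIST)
T_LIST = lambda text: text.upper()
print(T_ITEM('abc'))    # prints 'ABC'
```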
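The core loop of C{TokenSeries} — repeatedly match an item, separated by an optional delimiter, rolling back a dangling delimiter and enforcing a minimum count — can be sketched standalone with regexes. This is an illustrative assumption-laden toy (`series`, `min_count` are invented names), not the ZestyParser implementation.

```python
import re

# Hypothetical standalone sketch of the TokenSeries matching loop: collect
# matches of `desc` separated by `delimiter`, stopping when either fails.
def series(data, desc, delimiter, min_count=0):
    out, pos, first = [], 0, True
    while True:
        if not first:
            d = re.compile(delimiter).match(data, pos)
            if d is None:
                break
            pos = d.end()
        m = re.compile(desc).match(data, pos)
        if m is None:
            if not first:
                pos = d.start()  # roll back the delimiter with no item after it
            break
        out.append(m.group())
        pos = m.end()
        first = False
    if len(out) < min_count:
        raise ValueError('NotMatched')
    return out, pos

print(series('1,2,3', r'\d+', ','))  # prints (['1', '2', '3'], 5)
```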