EGG-INFO/PKG-INFO
-----------------
Metadata-Version: 1.0
Name: spark-parser
Version: 1.6.0
Summary: An Earley-Algorithm Context-free grammar Parser Toolkit
Home-page: https://github.com/rocky/python-spark/
Author: Rocky Bernstein
Author-email: rb@dustyfeet.com
License: MIT
Description: |Supported Python Versions|

        SPARK
        =====

        SPARK stands for Scanning, Parsing, and Rewriting Kit. It uses Jay
        Earley's algorithm for parsing context-free grammars, and comes
        with some generic Abstract Syntax Tree routines. There is also a
        prototype scanner which does its job by combining Python regular
        expressions.

        The original version of this was written by John Aycock for his
        Ph.D. thesis and was described in his 1998 paper "Compiling Little
        Languages in Python" at the 7th International Python Conference.
        The current incarnation of this code is maintained (or not) by
        Rocky Bernstein.

        Note: Earley-algorithm parsers are almost linear when given an LR
        grammar; such grammars, unlike LL grammars, may be left-recursive.

        Installation
        ------------

        This uses `setup.py`, so it follows the standard Python routine:

        ::

            python setup.py install # may need sudo
            # or if you have pyenv:
            python setup.py develop

        Example
        -------

        The github example directory_ has worked-out examples; the
        uncompyle6_ package uses this and contains a much larger example.

        See Also
        --------

        * features_
        * http://pages.cpsc.ucalgary.ca/~aycock/spark/ (old and not very well maintained)
        * https://pypi.python.org/pypi/uncompyle6/

        .. _features: https://github.com/rocky/python-spark/blob/master/NEW-FEATURES.rst
        .. _directory: https://github.com/rocky/python-spark/tree/master/example
        .. _uncompyle6: https://pypi.python.org/pypi/uncompyle6/
        .. |downloads| image:: https://img.shields.io/pypi/dd/spark.svg
        .. |Supported Python Versions| image:: https://img.shields.io/pypi/pyversions/spark_parser.svg
Platform: UNKNOWN
Classifier: Development Status :: 5 - Production/Stable
Classifier: Intended Audience :: Developers
Classifier: Operating System :: OS Independent
Classifier: Programming Language :: Python
Classifier: Programming Language :: Python :: 2.4
Classifier: Programming Language :: Python :: 2.5
Classifier: Programming Language :: Python :: 2.6
Classifier: Programming Language :: Python :: 2.7
Classifier: Programming Language :: Python :: 3.3
Classifier: Programming Language :: Python :: 3.4
Classifier: Programming Language :: Python :: 3.5
Classifier: Programming Language :: Python :: 3.6
Classifier: Topic :: Software Development :: Code Generators
Classifier: Topic :: Software Development :: Libraries :: Python Modules
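As a concrete illustration of the grammar convention the README assumes
(rules live in ``p_*`` docstrings; the method body is the semantic action),
here is a minimal, hypothetical calculator. Only ``GenericParser`` and
``GenericToken`` come from the package; the class, method, and token names
are invented for this sketch::

    from spark_parser import GenericParser, GenericToken

    class ExprEvaluator(GenericParser):
        """Each p_* docstring holds one grammar rule; the method body is
        the semantic action, receiving the values of the RHS symbols."""
        def __init__(self):
            GenericParser.__init__(self, 'expr')

        def p_expr_add(self, args):
            ''' expr ::= expr ADD_OP term '''
            if args[1].attr == '+':
                return args[0] + args[2]
            return args[0] - args[2]

        def p_expr_term(self, args):
            ''' expr ::= term '''
            return args[0]

        def p_term(self, args):
            ''' term ::= INTEGER '''
            return int(args[0].attr)

    tokens = [GenericToken('INTEGER', '1'),
              GenericToken('ADD_OP', '+'),
              GenericToken('INTEGER', '2')]
    print(ExprEvaluator().parse(tokens))   # prints: 3

Each entry of ``args`` is either a token (for a terminal) or the value
returned by the semantic action for the corresponding nonterminal.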
EGG-INFO/top_level.txt
----------------------
spark_parser

EGG-INFO/SOURCES.txt
--------------------
ChangeLog
LICENSE
MANIFEST.in
Makefile
NEW-FEATURES.rst
README.rst
TODO.rst
setup.py
bin/spark-parser-coverage
example/README.md
example/expr/README.md
example/expr/expr.py
example/expr/expr1.txt
example/expr/expr2.txt
example/expr2/README.md
example/expr2/__init__.py
example/expr2/eval.py
example/expr2/parser.py
example/expr2/parser.pyc
example/expr2/scanner.py
example/expr2/scanner.pyc
example/expr2/__pycache__/parser.cpython-33.pyc
example/expr2/__pycache__/scanner.cpython-33.pyc
example/python2/.gitignore
example/python2/Makefile
example/python2/README.md
example/python2/__init__.py
example/python2/fn.py
example/python2/fn.py~
example/python2/py2_format.py
example/python2/py2_format.pyc
example/python2/py2_parser.py
example/python2/py2_parser.pyc
example/python2/py2_scan.py
example/python2/py2_scan.pyc
example/python2/py2_token.py
example/python2/py2_token.pyc
example/python2/py2_token.py~
example/python2/python26.gr
example/python2/reflow.py
example/python2/__pycache__/py2_format.cpython-33.pyc
example/python2/__pycache__/py2_format.cpython-34.pyc
example/python2/__pycache__/py2_format.cpython-35.pyc
example/python2/__pycache__/py2_format.cpython-36.pyc
example/python2/__pycache__/py2_parser.cpython-33.pyc
example/python2/__pycache__/py2_parser.cpython-34.pyc
example/python2/__pycache__/py2_parser.cpython-35.pyc
example/python2/__pycache__/py2_parser.cpython-36.pyc
example/python2/__pycache__/py2_scan.cpython-33.pyc
example/python2/__pycache__/py2_scan.cpython-34.pyc
example/python2/__pycache__/py2_scan.cpython-35.pyc
example/python2/__pycache__/py2_scan.cpython-36.pyc
example/python2/__pycache__/py2_token.cpython-33.pyc
example/python2/__pycache__/py2_token.cpython-34.pyc
example/python2/__pycache__/py2_token.cpython-35.pyc
example/python2/__pycache__/py2_token.cpython-36.pyc
example/python2/test/Makefile
example/python2/test/helper.py
example/python2/test/helper.pyc
example/python2/test/test_class.py
example/python2/test/test_class.py~
example/python2/test/test_format.py
example/python2/test/test_format_inline.py
example/python2/test/test_parse.py
example/python2/test/test_parse_inline.py
example/python2/test/test_scan.py
example/python2/test/test_scan_inline.py
example/python2/test/__pycache__/helper.cpython-33.pyc
example/python2/test/__pycache__/helper.cpython-34.pyc
example/python2/test/__pycache__/helper.cpython-35.pyc
example/python2/test/__pycache__/helper.cpython-36.pyc
example/python2/test/format/assert.py
example/python2/test/format/assert.right
example/python2/test/format/def-bug.py-notyet
example/python2/test/format/def.py
example/python2/test/format/def.right
example/python2/test/format/exec.py
example/python2/test/format/exec.right
example/python2/test/format/expr.py
example/python2/test/format/expr.right
example/python2/test/format/global.py
example/python2/test/format/global.right
example/python2/test/format/if.py
example/python2/test/format/if.right
example/python2/test/format/imports.py
example/python2/test/format/imports.right
example/python2/test/format/while.py
example/python2/test/format/while.right
example/python2/test/format/with-bug.py-notyet
example/python2/test/format/with.py
example/python2/test/format/with.py.~reduce-checks~
example/python2/test/format/with.right
example/python2/test/parse/assert.py
example/python2/test/parse/assert.right
example/python2/test/parse/def.py
example/python2/test/parse/def.right
example/python2/test/parse/exec.py
example/python2/test/parse/exec.right
example/python2/test/parse/global.py
example/python2/test/parse/global.right
example/python2/test/parse/if.py
example/python2/test/parse/if.right
example/python2/test/parse/imports.py
example/python2/test/parse/imports.right
example/python2/test/parse/while.py
example/python2/test/parse/while.right
example/python2/test/scan/.gitignore
example/python2/test/scan/expr1.py
example/python2/test/scan/expr1.right
example/python2/test/scan/indent.right
example/python2/test/scan/indent1.py
example/python2/test/scan/indent1.right
example/python2/test/scan/syms.py
example/python2/test/scan/syms.right
spark_parser/__init__.py
spark_parser/ast.py
spark_parser/scanner.py
spark_parser/spark.py
spark_parser/version.py
spark_parser.egg-info/PKG-INFO
spark_parser.egg-info/SOURCES.txt
spark_parser.egg-info/dependency_links.txt
spark_parser.egg-info/top_level.txt
spark_parser.egg-info/zip-safe
test/test_checker.py
test/test_checker.pyc
test/test_grammar.py
test/test_grammar.pyc
test/test_grammar.py~
test/test_misc.py
test/test_misc.pyc
test/test_spark.py
test/test_spark.pyc
test/__pycache__/test_checker.cpython-33.pyc
test/__pycache__/test_checker.cpython-34.pyc
test/__pycache__/test_checker.cpython-35.pyc
test/__pycache__/test_checker.cpython-36.pyc
test/__pycache__/test_grammar.cpython-33.pyc
test/__pycache__/test_grammar.cpython-34.pyc
test/__pycache__/test_grammar.cpython-35.pyc
test/__pycache__/test_misc.cpython-33.pyc
test/__pycache__/test_misc.cpython-34.pyc
test/__pycache__/test_misc.cpython-35.pyc
test/__pycache__/test_misc.cpython-36.pyc
test/__pycache__/test_spark.cpython-33.pyc
test/__pycache__/test_spark.cpython-34.pyc
test/__pycache__/test_spark.cpython-35.pyc
test/__pycache__/test_spark.cpython-36.pyc

EGG-INFO/dependency_links.txt
-----------------------------

EGG-INFO/zip-safe
-----------------

EGG-INFO/scripts/spark-parser-coverage
--------------------------------------
#!/home/rocky/.pyenv/versions/2.3.7/bin/python
"""Print grammar reduce statistics for a series of spark-parser parses
"""
from spark_parser.version import VERSION

import getopt, os, sys
import pickle

def sort_profile_info(path, max_count=1000):
    profile_info = pickle.load(open(path, "rb"))

    items = sorted(profile_info.items(),
                   key=lambda kv: kv[1],
                   reverse=False)
    return [item for item in items if item[1] <= max_count]

program, ext = os.path.splitext(os.path.basename(__file__))

DEFAULT_COVERAGE_FILE = "/tmp/spark-grammar.cover"
DEFAULT_COUNT = 100

def run():
    Usage_short = """usage: %s --path coverage-file...
Type -h for full help.""" % program

    try:
        opts, files = getopt.getopt(sys.argv[1:], 'hVp:m:',
                                    ['help', 'version', 'path=',
                                     'max-count='])
    except getopt.GetoptError as e:
        sys.stderr.write('%s: %s\n' %
                         (os.path.basename(sys.argv[0]), e))
        sys.exit(-1)

    max_count = DEFAULT_COUNT
    path = DEFAULT_COVERAGE_FILE
    for opt, val in opts:
        if opt in ('-h', '--help'):
            print(__doc__)
            sys.exit(1)
        elif opt in ('-V', '--version'):
            print("%s %s" % (program, VERSION))
            sys.exit(0)
        elif opt in ('-p', '--path'):
            path = val
        elif opt in ('-m', '--max-count'):
            max_count = int(val)
        else:
            print(opt)
            sys.stderr.write(Usage_short)
            sys.exit(1)

    for rule, count in sort_profile_info(path, max_count):
        print("%d: %s" % (count, rule))
        pass
    return

if __name__ == '__main__':
    run()
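The script above reads the pickled rule-count dictionary that
``GenericParser`` writes when constructed with a ``coverage_path`` (see
``spark_parser/spark.py`` below). A hypothetical round trip, with an
invented class and grammar::

    from spark_parser import GenericParser, GenericToken

    class ProfiledParser(GenericParser):
        """Counts how often each grammar rule is reduced."""
        def __init__(self):
            # parse() pickles the per-rule counts to coverage_path
            GenericParser.__init__(self, 'expr',
                                   coverage_path='/tmp/spark-grammar.cover')

        def p_expr(self, args):
            ''' expr ::= INTEGER '''
            return args[0]

    ProfiledParser().parse([GenericToken('INTEGER', '42')])

Afterwards ``spark-parser-coverage --path /tmp/spark-grammar.cover``
prints the rules reduced at most ``--max-count`` times, least-used first.
``GenericASTBuilder`` subclasses can get the same effect by setting the
``SPARK_PARSER_COVERAGE`` environment variable to a file path.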
spark_parser/scanner.py
-----------------------
"""
Scanning and Token classes that might be useful in creating specific
scanners.
"""

import re

def _namelist(instance):
    namelist, namedict, classlist = [], {}, [instance.__class__]
    for c in classlist:
        for b in c.__bases__:
            classlist.append(b)
        for name in list(c.__dict__.keys()):
            if name not in namedict:
                namelist.append(name)
                namedict[name] = 1
    return namelist

class GenericToken:
    """A sample Token class that can be used in scanning"""
    def __init__(self, kind, attr=None):
        self.type = kind
        self.attr = attr

    def __eq__(self, o):
        """'==', but it's okay if offsets and linestarts are different"""
        if isinstance(o, GenericToken):
            return (self.type == o.type) and (self.attr == o.attr)
        else:
            return self.type == o

    def __str__(self):
        if self.attr:
            return 'type: %s, value: %r' % (self.type, self.attr)
        else:
            return "type: %s" % self.type

    def __repr__(self):
        return self.attr or self.type

    # Used in generic table-driven semantics routines
    def __hash__(self):
        return hash(self.attr)

    # Used in generic table-driven semantics routines
    def __getitem__(self, i):
        raise IndexError

class GenericScanner:
    """A base class from which to subclass specific scanners.

    Subclass methods whose names begin with t_ are introspected: each
    method's documentation string is used as the regular expression for
    that token pattern. For example:

        def t_add_op(self, s):
            r'[+-]'
            t = GenericToken(kind='ADD_OP', attr=s)
            self.rv.append(t)
    """
    def __init__(self):
        pattern = self.reflect()
        self.re = re.compile(pattern, re.VERBOSE)

        self.index2func = {}
        for name, number in self.re.groupindex.items():
            self.index2func[number-1] = getattr(self, 't_' + name)

    def makeRE(self, name):
        doc = getattr(self, name).__doc__
        rv = '(?P<%s>%s)' % (name[2:], doc)
        return rv

    def reflect(self):
        rv = []
        for name in list(_namelist(self)):
            if name[:2] == 't_' and name != 't_default':
                rv.append(self.makeRE(name))
        rv.append(self.makeRE('t_default'))
        return '|'.join(rv)

    def error(self, s, pos):
        """Simple-minded error handler. See py2_scan for another
        possibility."""
        print("Lexical error in %s at position %s" % (s, pos))
        raise SystemExit

    def tokenize(self, s):
        pos = 0
        n = len(s)
        while pos < n:
            m = self.re.match(s, pos)
            if m is None:
                self.error(s, pos)

            groups = m.groups()
            for i in range(len(groups)):
                if groups[i] and i in self.index2func:
                    self.index2func[i](groups[i])
            pos = m.end()

    def t_default(self, s):
        r'( \n )+'
        pass
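Following the t_ convention in the docstring above, a small, hypothetical
scanner subclass might look like this (the class and token names are
invented for illustration)::

    from spark_parser.scanner import GenericScanner, GenericToken

    class SimpleMathScanner(GenericScanner):
        """The t_* docstrings below are the token regular expressions."""
        def tokenize(self, s):
            self.rv = []
            GenericScanner.tokenize(self, s)
            return self.rv

        def t_whitespace(self, s):
            r'\s+'
            pass   # skip blanks

        def t_number(self, s):
            r'\d+'
            self.rv.append(GenericToken('INTEGER', s))

        def t_add_op(self, s):
            r'[+-]'
            self.rv.append(GenericToken('ADD_OP', s))

    print(SimpleMathScanner().tokenize("1 + 22"))
    # GenericToken.__repr__ shows the attrs: ['1', '+', '22']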
""" import os, pickle, re, sys if sys.version[0:3] <= '2.3': from sets import Set as set def sorted(iterable): temp = [x for x in iterable] temp.sort() return temp def _namelist(instance): namelist, namedict, classlist = [], {}, [instance.__class__] for c in classlist: for b in c.__bases__: classlist.append(b) for name in list(c.__dict__.keys()): if name not in namedict: namelist.append(name) namedict[name] = 1 return namelist def rule2str(rule): return ("%s ::= %s" % (rule[0], ' '.join(rule[1]))).rstrip() class _State: ''' Extracted from GenericParser and made global so that [un]picking works. ''' def __init__(self, stateno, items): self.T, self.complete, self.items = [], [], items self.stateno = stateno # DEFAULT_DEBUG = {'rules': True, 'transition': True, 'reduce' : True, # 'errorstack': 'full', 'dups': False } # DEFAULT_DEBUG = {'rules': False, 'transition': False, 'reduce' : True, # 'errorstack': 'plain', 'dups': False } DEFAULT_DEBUG = {'rules': False, 'transition': False, 'reduce': False, 'errorstack': None, 'context': True, 'dups': False} class GenericParser(object): ''' An Earley parser, as per J. Earley, "An Efficient Context-Free Parsing Algorithm", CACM 13(2), pp. 94-102. Also J. C. Earley, "An Efficient Context-Free Parsing Algorithm", Ph.D. thesis, Carnegie-Mellon University, August 1968. New formulation of the parser according to J. Aycock, "Practical Earley Parsing and the SPARK Toolkit", Ph.D. thesis, University of Victoria, 2001, and J. Aycock and R. N. Horspool, "Practical Earley Parsing", unpublished paper, 2001. ''' def __init__(self, start, debug=DEFAULT_DEBUG, coverage_path=None): """_start_ : grammar start symbol; _debug_ : produce optional parsing debug information _profile_ : if not None should be a file path to open with where to store profile is stored """ self.rules = {} self.rule2func = {} self.rule2name = {} # grammar coverage information self.coverage_path = coverage_path if coverage_path: self.profile_info = {} if isinstance(coverage_path, str): if os.path.exists(coverage_path): self.profile_info = pickle.load(open(coverage_path, "rb")) else: self.profile_info = None # When set, shows additional debug output self.debug = debug self.collectRules() if start not in self.rules: raise TypeError('Start symbol "%s" is not in LHS of any rule' % start) self.augment(start) self.ruleschanged = True # The key is an LHS non-terminal string. The value # should be AST if you want to pass an AST to the routine # to do the checking. The routine called is # self.reduce_is_invalid and is passed the rule, # the list of tokens, the current state item, # and index of the next last token index and # the first token index for the reduction. self.check_reduce = {} _NULLABLE = '\e_' _START = 'START' _BOF = '|-' # # When pickling, take the time to generate the full state machine; # some information is then extraneous, too. Unfortunately we # can't save the rule2func map. # def __getstate__(self): if self.ruleschanged: # # XXX - duplicated from parse() # self.computeNull() self.newrules = {} self.new2old = {} self.makeNewRules() self.ruleschanged = False self.edges, self.cores = {}, {} self.states = {0: self.makeState0()} self.makeState(0, self._BOF) # # XXX - should find a better way to do this.. 
        changes = True
        while changes:
            changes = False
            for k, v in list(self.edges.items()):
                if v is None:
                    state, sym = k
                    if state in self.states:
                        self.goto(state, sym)
                        changes = True
        rv = self.__dict__.copy()
        for s in list(self.states.values()):
            del s.items
        del rv['rule2func']
        del rv['nullable']
        del rv['cores']
        return rv

    def __setstate__(self, D):
        self.rules = {}
        self.rule2func = {}
        self.rule2name = {}
        self.collectRules()
        start = D['rules'][self._START][0][1][1]  # Blech.
        self.augment(start)
        D['rule2func'] = self.rule2func
        D['makeSet'] = self.makeSet_fast
        self.__dict__ = D

    #
    # A hook for GenericASTBuilder and GenericASTMatcher. Mess
    # thee not with this; nor shall thee toucheth the _preprocess
    # argument to addRule.
    #
    def preprocess(self, rule, func):
        return rule, func

    def addRule(self, doc, func, _preprocess=True):
        """Add grammar rules to _self.rules_, _self.rule2func_,
        and _self.rule2name_.

        Comment lines, i.e. lines starting with #, and blank lines are
        stripped from doc. We also allow a limited form of *, +, and ?
        when the RHS has a single item, e.g.
          stmts ::= stmt+
        """
        fn = func

        # remove blank lines and comment lines, e.g. lines starting with "#"
        doc = os.linesep.join([s for s in doc.splitlines()
                               if s and not re.match("^\s*#", s)])

        rules = doc.split()

        index = []
        for i in range(len(rules)):
            if rules[i] == '::=':
                index.append(i-1)
        index.append(len(rules))

        for i in range(len(index)-1):
            lhs = rules[index[i]]
            rhs = rules[index[i]+2:index[i+1]]
            rule = (lhs, tuple(rhs))

            if _preprocess:
                rule, fn = self.preprocess(rule, func)

            # Handle a stripped-down form of *, +, and ?:
            #   allow only one nonterminal on the right-hand side
            if len(rule[1]) == 1:
                if rule[1][0] == rule[0]:
                    raise TypeError("Complete recursive rule %s" % rule2str(rule))

                if rule[1][-1][-1] in ('*', '+', '?'):
                    repeat = rule[1][-1][-1]
                    nt = rule[1][-1][:-1]
                    if repeat == '?':
                        new_rule_pair = [rule[0], list((nt,))]
                    else:
                        new_rule_pair = [rule[0], [rule[0]] + list((nt,))]
                    new_rule = rule2str(new_rule_pair)
                    self.addRule(new_rule, func, _preprocess)
                    if repeat == '+':
                        second_rule_pair = (lhs, (nt,))
                    else:
                        second_rule_pair = (lhs, tuple())
                    new_rule = rule2str(second_rule_pair)
                    self.addRule(new_rule, func, _preprocess)
                    continue

            if lhs in self.rules:
                if rule in self.rules[lhs]:
                    if 'dups' in self.debug and self.debug['dups']:
                        self.duplicate_rule(rule)
                    continue
                self.rules[lhs].append(rule)
            else:
                self.rules[lhs] = [ rule ]
            self.rule2func[rule] = fn
            self.rule2name[rule] = func.__name__[2:]
            self.ruleschanged = True

            if self.profile_info is not None:
                rule_str = self.reduce_string(rule)
                if rule_str not in self.profile_info:
                    self.profile_info[rule_str] = 0
                pass
        return

    def remove_rule(self, doc):
        """Remove a grammar rule from _self.rules_, _self.rule2func_,
        and _self.rule2name_.
        """
        rules = doc.split()

        index = []
        for i in range(len(rules)):
            if rules[i] == '::=':
                index.append(i-1)
        index.append(len(rules))

        for i in range(len(index)-1):
            lhs = rules[index[i]]
            rhs = rules[index[i]+2:index[i+1]]
            rule = (lhs, tuple(rhs))
            if lhs not in self.rules:
                return
            self.rules[lhs].remove(rule)
            del self.rule2func[rule]
            del self.rule2name[rule]
        self.ruleschanged = True
        return

    def collectRules(self):
        for name in _namelist(self):
            if name[:2] == 'p_':
                func = getattr(self, name)
                doc = func.__doc__
                self.addRule(doc, func)

    def augment(self, start):
        rule = '%s ::= %s %s' % (self._START, self._BOF, start)
        self.addRule(rule, lambda args: args[1], 0)

    def computeNull(self):
        self.nullable = {}
        tbd = []

        for rulelist in list(self.rules.values()):
            lhs = rulelist[0][0]
            self.nullable[lhs] = 0
            for rule in rulelist:
                rhs = rule[1]
                if len(rhs) == 0:
                    self.nullable[lhs] = 1
                    continue
                #
                # We only need to consider rules which
                # consist entirely of nonterminal symbols.
                # This should be a savings on typical
                # grammars.
                #
                for sym in rhs:
                    if sym not in self.rules:
                        break
                else:
                    tbd.append(rule)
        changes = 1
        while changes:
            changes = 0
            for lhs, rhs in tbd:
                if self.nullable[lhs]:
                    continue
                for sym in rhs:
                    if not self.nullable[sym]:
                        break
                else:
                    self.nullable[lhs] = 1
                    changes = 1

    def makeState0(self):
        s0 = _State(0, [])
        for rule in self.newrules[self._START]:
            s0.items.append((rule, 0))
        return s0

    def finalState(self, tokens):
        #
        # Yuck.
        #
        if len(self.newrules[self._START]) == 2 and len(tokens) == 0:
            return 1
        start = self.rules[self._START][0][1][1]
        return self.goto(1, start)

    def makeNewRules(self):
        worklist = []
        for rulelist in list(self.rules.values()):
            for rule in rulelist:
                worklist.append((rule, 0, 1, rule))

        for rule, i, candidate, oldrule in worklist:
            lhs, rhs = rule
            n = len(rhs)
            while i < n:
                sym = rhs[i]
                if (sym not in self.rules or
                        not self.nullable[sym]):
                    candidate = 0
                    i = i + 1
                    continue

                newrhs = list(rhs)
                newrhs[i] = self._NULLABLE + sym
                newrule = (lhs, tuple(newrhs))
                worklist.append((newrule, i+1,
                                 candidate, oldrule))
                candidate = 0
                i = i + 1
            else:
                if candidate:
                    lhs = self._NULLABLE + lhs
                    rule = (lhs, rhs)
                if lhs in self.newrules:
                    self.newrules[lhs].append(rule)
                else:
                    self.newrules[lhs] = [rule]
                self.new2old[rule] = oldrule

    def typestring(self, token):
        return None

    def duplicate_rule(self, rule):
        print("Duplicate rule:\n\t%s" % rule2str(rule))

    def error(self, tokens, index):
        print("Syntax error at or near token %d: `%s'" % (index, tokens[index]))
        if 'context' in self.debug and self.debug['context']:
            if index - 2 >= 0:
                start = index - 2
            else:
                start = 0
            tokens = [str(tokens[i]) for i in range(start, index+1)]
            print("Token context:\n\t%s" % ("\n\t".join(tokens)))
        raise SystemExit

    def errorstack(self, tokens, i, full=False):
        """Show the stacks of completed symbols.
        We get this by inspecting the current transitions possible and
        from that extracting the set of states we are in, and from there
        we look at the set of symbols before the "dot". If full is True,
        we show the entire rule with the dot placement. Otherwise just
        the rule up to the dot.
        """
        print("\n-- Stacks of completed symbols:")
        states = [s for s in self.edges.values() if s]
        # States now has the set of states we are in
        state_stack = set()
        for state in states:
            # Find rules which can follow, but keep only
            # the part before the dot
            for rule, dot in self.states[state].items:
                lhs, rhs = rule
                if dot > 0:
                    if full:
                        state_stack.add(
                            ' '.join(rhs[:dot]) +
                            ' . ' + ' '.join(rhs[dot:]))
                    else:
                        state_stack.add(' '.join(rhs[:dot]))
                    pass
                pass
            pass
        for stack in sorted(state_stack):
            print(stack)

    def parse(self, tokens, debug=None):
        """This is the main entry point from outside. Passing in a
        debug dictionary changes the default debug setting.
        """
""" if debug: self.debug = debug sets = [ [(1, 0), (2, 0)] ] self.links = {} if self.ruleschanged: self.computeNull() self.newrules = {} self.new2old = {} self.makeNewRules() self.ruleschanged = False self.edges, self.cores = {}, {} self.states = { 0: self.makeState0() } self.makeState(0, self._BOF) for i in range(len(tokens)): sets.append([]) if sets[i] == []: break self.makeSet(tokens, sets, i) else: sets.append([]) self.makeSet(None, sets, len(tokens)) finalitem = (self.finalState(tokens), 0) if finalitem not in sets[-2]: if len(tokens) > 0: if self.debug['errorstack']: self.errorstack(tokens, i-1, str(self.debug['errorstack']) == 'full') self.error(tokens, i-1) else: self.error(None, None) if self.profile_info is not None: self.dump_profile_info() return self.buildTree(self._START, finalitem, tokens, len(sets)-2) def isnullable(self, sym): # For symbols in G_e only. return sym.startswith(self._NULLABLE) def skip(self, xxx_todo_changeme, pos=0): (lhs, rhs) = xxx_todo_changeme n = len(rhs) while pos < n: if not self.isnullable(rhs[pos]): break pos = pos + 1 return pos def makeState(self, state, sym): assert sym is not None # print(sym) # debug # # Compute \epsilon-kernel state's core and see if # it exists already. # kitems = [] for rule, pos in self.states[state].items: lhs, rhs = rule if rhs[pos:pos+1] == (sym,): kitems.append((rule, self.skip(rule, pos+1))) tcore = tuple(sorted(kitems)) if tcore in self.cores: return self.cores[tcore] # # Nope, doesn't exist. Compute it and the associated # \epsilon-nonkernel state together; we'll need it right away. # k = self.cores[tcore] = len(self.states) K, NK = _State(k, kitems), _State(k+1, []) self.states[k] = K predicted = {} edges = self.edges rules = self.newrules for X in K, NK: worklist = X.items for item in worklist: rule, pos = item lhs, rhs = rule if pos == len(rhs): X.complete.append(rule) continue nextSym = rhs[pos] key = (X.stateno, nextSym) if nextSym not in rules: if key not in edges: edges[key] = None X.T.append(nextSym) else: edges[key] = None if nextSym not in predicted: predicted[nextSym] = 1 for prule in rules[nextSym]: ppos = self.skip(prule) new = (prule, ppos) NK.items.append(new) # # Problem: we know K needs generating, but we # don't yet know about NK. Can't commit anything # regarding NK to self.edges until we're sure. Should # we delay committing on both K and NK to avoid this # hacky code? This creates other problems.. # if X is K: edges = {} if NK.items == []: return k # # Check for \epsilon-nonkernel's core. Unfortunately we # need to know the entire set of predicted nonterminals # to do this without accidentally duplicating states. # tcore = tuple(sorted(predicted.keys())) if tcore in self.cores: self.edges[(k, None)] = self.cores[tcore] return k nk = self.cores[tcore] = self.edges[(k, None)] = NK.stateno self.edges.update(edges) self.states[nk] = NK return k def goto(self, state, sym): key = (state, sym) if key not in self.edges: # # No transitions from state on sym. # return None rv = self.edges[key] if rv is None: # # Target state isn't generated yet. Remedy this. 
            #
            rv = self.makeState(state, sym)
            self.edges[key] = rv
        return rv

    def gotoT(self, state, t):
        if self.debug['rules']:
            print("Terminal", t, state)
        return [self.goto(state, t)]

    def gotoST(self, state, st):
        if self.debug['transition']:
            print("GotoST", st, state)
        rv = []
        for t in self.states[state].T:
            if st == t:
                rv.append(self.goto(state, t))
        return rv

    def add(self, set, item, i=None, predecessor=None, causal=None):
        if predecessor is None:
            if item not in set:
                set.append(item)
        else:
            key = (item, i)
            if item not in set:
                self.links[key] = []
                set.append(item)
            self.links[key].append((predecessor, causal))

    def makeSet(self, tokens, sets, i):
        cur, next = sets[i], sets[i+1]

        if tokens is not None:
            token = tokens[i]
            ttype = self.typestring(token)
        else:
            ttype = None
            token = None
        if ttype is not None:
            fn, arg = self.gotoT, ttype
        else:
            fn, arg = self.gotoST, token

        for item in cur:
            ptr = (item, i)
            state, parent = item
            add = fn(state, arg)
            for k in add:
                if k is not None:
                    self.add(next, (k, parent), i+1, ptr)
                    nk = self.goto(k, None)
                    if nk is not None:
                        self.add(next, (nk, i+1))

            if parent == i:
                continue

            for rule in self.states[state].complete:
                lhs, rhs = rule
                if self.debug['reduce']:
                    self.debug_reduce(rule, tokens, parent, i)
                if self.profile_info is not None:
                    self.profile_rule(rule)
                if lhs in self.check_reduce and tokens:
                    if self.check_reduce[lhs] == 'AST':
                        ast = self.reduce_ast(rule, tokens, item, i, sets)
                    else:
                        ast = None
                    invalid = self.reduce_is_invalid(rule, ast,
                                                     tokens, parent, i)
                    if ast:
                        del ast
                    if invalid:
                        if self.debug['reduce']:
                            print("Reduce %s invalid by check" % lhs)
                        continue
                    pass
                for pitem in sets[parent]:
                    pstate, pparent = pitem
                    k = self.goto(pstate, lhs)
                    if k is not None:
                        why = (item, i, rule)
                        pptr = (pitem, parent)
                        self.add(cur, (k, pparent),
                                 i, pptr, why)
                        nk = self.goto(k, None)
                        if nk is not None:
                            self.add(cur, (nk, i))

    def makeSet_fast(self, token, sets, i):
        #
        # Call *only* when the entire state machine has been built!
        # It relies on self.edges being filled in completely, and
        # then duplicates and inlines code to boost speed at the
        # cost of extreme ugliness.
        #
        cur, next = sets[i], sets[i+1]
        ttype = token is not None and self.typestring(token) or None

        for item in cur:
            ptr = (item, i)
            state, parent = item
            if ttype is not None:
                k = self.edges.get((state, ttype), None)
                if k is not None:
                    # self.add(next, (k, parent), i+1, ptr)
                    # INLINED --------v
                    new = (k, parent)
                    key = (new, i+1)
                    if new not in next:
                        self.links[key] = []
                        next.append(new)
                    self.links[key].append((ptr, None))
                    # INLINED --------^
                    # nk = self.goto(k, None)
                    nk = self.edges.get((k, None), None)
                    if nk is not None:
                        # self.add(next, (nk, i+1))
                        # INLINED -------------v
                        new = (nk, i+1)
                        if new not in next:
                            next.append(new)
                        # INLINED ---------------^
            else:
                add = self.gotoST(state, token)
                for k in add:
                    if k is not None:
                        self.add(next, (k, parent), i+1, ptr)
                        # nk = self.goto(k, None)
                        nk = self.edges.get((k, None), None)
                        if nk is not None:
                            self.add(next, (nk, i+1))

            if parent == i:
                continue

            for rule in self.states[state].complete:
                lhs, rhs = rule
                for pitem in sets[parent]:
                    pstate, pparent = pitem
                    # k = self.goto(pstate, lhs)
                    k = self.edges.get((pstate, lhs), None)
                    if k is not None:
                        why = (item, i, rule)
                        pptr = (pitem, parent)
                        # self.add(cur, (k, pparent), i, pptr, why)
                        # INLINED ---------v
                        new = (k, pparent)
                        key = (new, i)
                        if new not in cur:
                            self.links[key] = []
                            cur.append(new)
                        self.links[key].append((pptr, why))
                        # INLINED ----------^
                        # nk = self.goto(k, None)
                        nk = self.edges.get((k, None), None)
                        if nk is not None:
                            # self.add(cur, (nk, i))
                            # INLINED ---------v
                            new = (nk, i)
                            if new not in cur:
                                cur.append(new)
                            # INLINED ----------^

    def predecessor(self, key, causal):
        for p, c in self.links[key]:
            if c == causal:
                return p
        assert 0

    def causal(self, key):
        links = self.links[key]
        if len(links) == 1:
            return links[0][1]
        choices = []
        rule2cause = {}
        for p, c in links:
            rule = c[2]
            choices.append(rule)
            rule2cause[rule] = c
        return rule2cause[self.ambiguity(choices)]

    def deriveEpsilon(self, nt):
        if len(self.newrules[nt]) > 1:
            rule = self.ambiguity(self.newrules[nt])
        else:
            rule = self.newrules[nt][0]
        # print(rule) # debug

        rhs = rule[1]
        attr = [None] * len(rhs)

        for i in range(len(rhs)-1, -1, -1):
            attr[i] = self.deriveEpsilon(rhs[i])
        return self.rule2func[self.new2old[rule]](attr)

    def buildTree(self, nt, item, tokens, k):
        if self.debug['rules']:
            print("NT", nt)
        state, parent = item

        choices = []
        for rule in self.states[state].complete:
            if rule[0] == nt:
                choices.append(rule)
        rule = choices[0]
        if len(choices) > 1:
            rule = self.ambiguity(choices)
        # print(rule) # debug

        rhs = rule[1]
        attr = [None] * len(rhs)

        for i in range(len(rhs)-1, -1, -1):
            sym = rhs[i]
            if sym not in self.newrules:
                if sym != self._BOF:
                    attr[i] = tokens[k-1]
                    key = (item, k)
                    item, k = self.predecessor(key, None)
            # elif self.isnullable(sym):
            elif self._NULLABLE == sym[0:len(self._NULLABLE)]:
                attr[i] = self.deriveEpsilon(sym)
            else:
                key = (item, k)
                why = self.causal(key)
                attr[i] = self.buildTree(sym, why[0],
                                         tokens, why[1])
                item, k = self.predecessor(key, why)
        return self.rule2func[self.new2old[rule]](attr)

    def ambiguity(self, rules):
        #
        # XXX - problem here and in collectRules() if the same rule
        # appears in >1 method. Also undefined results if rules
        # causing the ambiguity appear in the same method.
        #
        sortlist = []
        name2index = {}
        for i in range(len(rules)):
            lhs, rhs = rule = rules[i]
            name = self.rule2name[self.new2old[rule]]
            sortlist.append((len(rhs), name))
            name2index[name] = i
        sortlist.sort()
        list = [a_b[1] for a_b in sortlist]
        return rules[name2index[self.resolve(list)]]

    def resolve(self, list):
        '''
        Resolve ambiguity in favor of the shortest RHS.
        Since we walk the tree from the top down, this
        should effectively resolve in favor of a "shift".
        '''
        return list[0]

    def dumpGrammar(self, out=sys.stdout):
        """
        Print grammar rules
        """
        for rule in sorted(self.rule2name.items()):
            out.write("%s\n" % rule2str(rule[0]))
        return

    def checkGrammar(self, out=sys.stderr):
        '''
        Check grammar
        '''
        lhs, rhs, tokens, right_recursive = self.checkSets()
        if len(lhs) > 0:
            out.write("LHS symbols not used on the RHS:\n")
            out.write("  %s\n" % sorted(lhs))
        if len(rhs) > 0:
            out.write("RHS symbols not used on the LHS:\n")
            out.write("  %s\n" % sorted(rhs))
        if len(right_recursive) > 0:
            out.write("Right recursive rules:\n")
            for rule in right_recursive:
                out.write("  %s ::= %s\n" % (rule[0], ' '.join(rule[1])))
            pass
        pass

    def checkSets(self):
        '''
        Check grammar
        '''
        lhs_set = set()
        rhs_set = set()
        token_set = set()
        right_recursive = []
        for lhs in self.rules:
            rules_for_lhs = self.rules[lhs]
            lhs_set.add(lhs)
            for rule in rules_for_lhs:
                rhs = rule[1]
                for sym in rhs:
                    # We assume any symbol starting with an uppercase letter is
                    # terminal, and anything else is a nonterminal
                    if re.match("^[A-Z]", sym):
                        token_set.add(sym)
                    else:
                        rhs_set.add(sym)
                if len(rhs) > 0 and lhs == rhs[-1]:
                    right_recursive.append([lhs, rhs])
                pass
            pass

        lhs_set.remove(self._START)
        rhs_set.remove(self._BOF)
        missing_lhs = lhs_set - rhs_set
        missing_rhs = rhs_set - lhs_set
        return (missing_lhs, missing_rhs, token_set, right_recursive)

    def reduce_string(self, rule):
        return "%s ::= %s" % (rule[0], ' '.join(rule[1]))

    # Note: the unused parameters here are used in subclassed
    # routines that need more information
    def debug_reduce(self, rule, tokens, parent, i):
        print(self.reduce_string(rule))

    def profile_rule(self, rule):
        """Bump the count of the number of times _rule_ was used"""
        rule_str = self.reduce_string(rule)
        if rule_str not in self.profile_info:
            self.profile_info[rule_str] = 1
        else:
            self.profile_info[rule_str] += 1

    def get_profile_info(self):
        """Show the accumulated results of how many times each rule was used"""
        return sorted(self.profile_info.items(),
                      key=lambda kv: kv[1],
                      reverse=False)

    def dump_profile_info(self):
        if isinstance(self.coverage_path, str):
            fp = open(self.coverage_path, 'wb')
            pickle.dump(self.profile_info, fp)
            fp.close()
        else:
            for rule, count in self.get_profile_info():
                self.coverage_path.write("%s -- %d\n" % (rule, count))
                pass
            self.coverage_path.write("-" * 40 + "\n")

    def reduce_ast(self, rule, tokens, item, k, sets):
        rhs = rule[1]
        ast = [None] * len(rhs)
        for i in range(len(rhs)-1, -1, -1):
            sym = rhs[i]
            if sym not in self.newrules:
                if sym != self._BOF:
                    ast[i] = tokens[k-1]
                    key = (item, k)
                    item, k = self.predecessor(key, None)
            elif self._NULLABLE == sym[0:len(self._NULLABLE)]:
                ast[i] = self.deriveEpsilon(sym)
            else:
                key = (item, k)
                why = self.causal(key)
                ast[i] = self.buildTree(sym, why[0],
                                        tokens, why[1])
                item, k = self.predecessor(key, why)
            pass
        return ast
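The ``check_reduce`` machinery set up in ``__init__()`` and consulted in
``makeSet()`` above lets a subclass veto individual reductions. A
hypothetical sketch (the grammar, class name, and validity policy are
invented; the ``stmt+`` RHS also exercises the ``+`` shorthand handled by
``addRule()``)::

    from spark_parser import GenericParser, GenericToken

    class CheckedParser(GenericParser):
        def __init__(self):
            GenericParser.__init__(self, 'stmts')
            # Ask for a validity check on every reduction to 'stmt'.
            # Use the value 'AST' instead of True to also receive a
            # reduce_ast()-built tree as the _ast_ argument.
            self.check_reduce['stmt'] = True

        def p_grammar(self, args):
            '''
            stmts ::= stmt+
            stmt  ::= NAME EQUAL NAME
            '''

        def reduce_is_invalid(self, rule, ast, tokens, first, last):
            # first and last bracket the token span being reduced.
            # Veto self-assignments like "a = a" (an invented policy
            # for this sketch, not part of the API).
            return tokens[first].attr == tokens[last-1].attr

    tokens = [GenericToken('NAME', 'a'), GenericToken('EQUAL', '='),
              GenericToken('NAME', 'b')] * 2
    CheckedParser().parse(tokens)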
#
# GenericASTBuilder automagically constructs a concrete/abstract syntax tree
# for a given input. The extra argument is a class (not an instance!)
# which supports the "__setslice__" and "__len__" methods.
#
# XXX - silently overrides any user code in methods.
#

class GenericASTBuilder(GenericParser):
    def __init__(self, AST, start, debug=DEFAULT_DEBUG):
        if 'SPARK_PARSER_COVERAGE' in os.environ:
            coverage_path = os.environ['SPARK_PARSER_COVERAGE']
        else:
            coverage_path = None
        GenericParser.__init__(self, start, debug=debug,
                               coverage_path=coverage_path)
        self.AST = AST

    def preprocess(self, rule, func):
        rebind = lambda lhs, self=self: \
                     lambda args, lhs=lhs, self=self: \
                         self.buildASTNode(args, lhs)
        lhs, rhs = rule
        return rule, rebind(lhs)

    def buildASTNode(self, args, lhs):
        children = []
        for arg in args:
            if isinstance(arg, self.AST):
                children.append(arg)
            else:
                children.append(self.terminal(arg))
        return self.nonterminal(lhs, children)

    def terminal(self, token):
        return token

    def nonterminal(self, type, args):
        rv = self.AST(type)
        rv[:len(args)] = args
        return rv
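Where the calculator sketch near the top of this document ran semantic
actions directly, ``GenericASTBuilder`` rebinds every rule to build a tree
node instead; the ``p_*`` method bodies are ignored. A hypothetical sketch
using the same invented grammar, with the ``AST`` class from
``spark_parser/ast.py`` below::

    from spark_parser import GenericASTBuilder, GenericToken
    from spark_parser.ast import AST

    class ExprTreeBuilder(GenericASTBuilder):
        def __init__(self):
            GenericASTBuilder.__init__(self, AST, 'expr')

        def p_grammar(self, args):
            '''
            expr ::= expr ADD_OP term
            expr ::= term
            term ::= INTEGER
            '''

    tokens = [GenericToken('INTEGER', '1'), GenericToken('ADD_OP', '+'),
              GenericToken('INTEGER', '2')]
    tree = ExprTreeBuilder().parse(tokens)
    print(tree)   # AST.__repr__ dumps the tree, one node per line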
spark_parser/ast.py
-------------------
import sys

PYTHON3 = (sys.version_info >= (3, 0))

if PYTHON3:
    intern = sys.intern
    from collections import UserList
else:
    from UserList import UserList

class AST(UserList):
    def __init__(self, kind, kids=[]):
        self.type = intern(kind)
        UserList.__init__(self, kids)

    def __getslice__(self, low, high):
        return self.data[low:high]

    def __eq__(self, o):
        if isinstance(o, AST):
            return (self.type == o.type and
                    UserList.__eq__(self, o))
        else:
            return self.type == o

    def __hash__(self):
        return hash(self.type)

    def __repr__(self, indent=''):
        return self.__repr1__(indent, None)

    def __repr1__(self, indent, sibNum=None):
        rv = str(self.type)
        if sibNum is not None:
            rv = "%d. %s" % (sibNum, rv)
        enumerate_children = False
        if len(self) > 1:
            rv += " (%d)" % (len(self))
            enumerate_children = True
        rv = indent + rv
        indent += '    '
        i = 0
        for node in self:
            if hasattr(node, '__repr1__'):
                if enumerate_children:
                    child = node.__repr1__(indent, i)
                else:
                    child = node.__repr1__(indent, None)
            else:
                if enumerate_children:
                    child = indent + "%d. %s" % (i, str(node))
                else:
                    child = indent + str(node)
                pass
            rv += "\n" + child
            i += 1
        return rv

class GenericASTTraversalPruningException(Exception):
    pass

class GenericASTTraversal:
    '''
    GenericASTTraversal is a Visitor pattern according to Design
    Patterns. For each node it attempts to invoke the method
    n_<node type>, falling back onto the default() method if the n_*
    can't be found. The preorder traversal also looks for an exit hook
    named n_<node type>_exit (no default routine is called if it's not
    found). To prematurely halt traversal of a subtree, call the prune()
    method -- this only makes sense for a preorder traversal. Node type
    is determined via the typestring() method.
    '''
    def __init__(self, ast):
        self.ast = ast

    def typestring(self, node):
        return node.type

    def prune(self):
        raise GenericASTTraversalPruningException

    def preorder(self, node=None):
        """Walk the tree in preorder.
        For each node with typestring name *name*, if the node has a
        method called n_*name*, call that before walking children. If
        there is no such method defined, call self.default(node)
        instead. Subclasses of GenericASTTraversal will probably want
        to override this method.

        If the node has a method called *name*_exit, that is called
        after all children have been called.

        So in this sense this function is both preorder and postorder
        combined.
        """
        if node is None:
            node = self.ast

        try:
            name = 'n_' + self.typestring(node)
            if hasattr(self, name):
                func = getattr(self, name)
                func(node)
            else:
                self.default(node)
        except GenericASTTraversalPruningException:
            return

        for kid in node:
            self.preorder(kid)

        name = name + '_exit'
        if hasattr(self, name):
            func = getattr(self, name)
            func(node)

    def postorder(self, node=None):
        """Walk the tree in postorder.
        For each node with typestring name *name*, if the node has a
        method called n_*name*, call that after walking children. If
        there is no such method defined, call self.default(node)
        instead. Subclasses of GenericASTTraversal will probably want
        to override this method.

        If the node has a method called *name*_exit, that is called
        after all children have been called.

        So in this sense this function is both preorder and postorder
        combined.
        """
        if node is None:
            node = self.ast

        try:
            first = iter(node)
        except TypeError:
            first = None

        if first:
            for kid in node:
                self.postorder(kid)

        try:
            name = 'n_' + self.typestring(node)
            if hasattr(self, name):
                func = getattr(self, name)
                func(node)
            else:
                self.default(node)
        except GenericASTTraversalPruningException:
            return

        name = name + '_exit'
        if hasattr(self, name):
            func = getattr(self, name)
            func(node)

    def default(self, node):
        """Default action to take on an ASTNode. Our default is to do
        nothing. Subclasses will probably want to define this for other
        behavior."""
        pass
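A hypothetical visitor over the tree produced by the ``ExprTreeBuilder``
sketch above; the class and method names are invented, and only the
``n_<type>``/``default()`` dispatch comes from ``GenericASTTraversal``::

    from spark_parser import GenericASTTraversal

    class IntegerLister(GenericASTTraversal):
        """Collect the INTEGER leaves of an expression tree."""
        def __init__(self, ast):
            GenericASTTraversal.__init__(self, ast)
            self.found = []
            self.preorder()

        def n_term(self, node):
            # 'term' nodes wrap a single INTEGER token; record its text
            self.found.append(node[0].attr)

        def default(self, node):
            # called for node types without an n_<type> method
            pass

    tree = ExprTreeBuilder().parse(tokens)  # from the sketch above
    print(IntegerLister(tree).found)        # ['1', '2']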
spark_parser/version.py
-----------------------
# This file is suitable for sourcing inside bash as
# well as importing into Python
VERSION='1.6.0'

spark_parser/__init__.py
------------------------
import sys

from spark_parser.version import VERSION
__version__ = 'SPARK-%s Python2 and Python3 compatible' % VERSION
__docformat__ = 'restructuredtext'

PYTHON3 = (sys.version_info >= (3, 0))

from spark_parser.ast import AST as AST
from spark_parser.ast import GenericASTTraversal as GenericASTTraversal
from spark_parser.ast import GenericASTTraversalPruningException as GenericASTTraversalPruningException

from spark_parser.spark import DEFAULT_DEBUG
from spark_parser.spark import GenericParser as GenericParser
from spark_parser.spark import GenericASTBuilder as GenericASTBuilder

from spark_parser.scanner import GenericScanner as GenericScanner
from spark_parser.scanner import GenericToken as GenericToken
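These are the package's public names. To see the parser at work, a debug
dictionary built from ``DEFAULT_DEBUG`` (whose keys are defined in
``spark_parser/spark.py`` above) can be passed to ``parse()``; this
sketch reuses the hypothetical ``ExprTreeBuilder`` and ``tokens`` from
earlier::

    from spark_parser import DEFAULT_DEBUG

    debug = dict(DEFAULT_DEBUG)
    debug['reduce'] = True        # print each rule as it is reduced
    debug['errorstack'] = 'full'  # show dotted rules on a syntax error
    ExprTreeBuilder().parse(tokens, debug=debug)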
s"Lexical error in %s at position %sN(sssposs SystemExit(sselfssspos((s4build/bdist.linux-x86_64/egg/spark_parser/scanner.pyserrorSscCsÊd}t|ƒ}x±||jo£|ii||ƒ}|tjo|i ||ƒn|i ƒ}xNt t|ƒƒD]:}||o ||i jo|i |||ƒqwqwW|iƒ}qWdS(Ni(sposslensssnsselfsresmatchsmsNoneserrorsgroupssrangesis index2funcsend(sselfsssismspossnsgroups((s4build/bdist.linux-x86_64/egg/spark_parser/scanner.pystokenizeZs    cCsdS(s( \n )+N((sselfss((s4build/bdist.linux-x86_64/egg/spark_parser/scanner.pys t_defaulths( s__name__s __module__s__doc__s__init__smakeREsreflectserrorstokenizes t_default(((s4build/bdist.linux-x86_64/egg/spark_parser/scanner.pysGenericScanner1s      (s__doc__sres _namelists GenericTokensGenericScanner(sres GenericTokensGenericScanners _namelist((s4build/bdist.linux-x86_64/egg/spark_parser/scanner.pys?s  PK×›?JN…ÝB  spark_parser/ast.pyc;ò š(‘Xc@sŒdkZeiddfjZeoeiZdklZndklZdefd„ƒYZdfd„ƒYZdfd „ƒYZdS( Nii(sUserListsASTcBsGtZgd„Zd„Zd„Zd„Zdd„Zed„ZRS(NcCs#t|ƒ|_ti||ƒdS(N(sinternskindsselfstypesUserLists__init__skids(sselfskindskids((s0build/bdist.linux-x86_64/egg/spark_parser/ast.pys__init__ scCs|i||!SdS(N(sselfsdataslowshigh(sselfslowshigh((s0build/bdist.linux-x86_64/egg/spark_parser/ast.pys __getslice__scCsHt|tƒo'|i|ijoti||ƒSn|i|jSdS(N(s isinstancesosASTsselfstypesUserLists__eq__(sselfso((s0build/bdist.linux-x86_64/egg/spark_parser/ast.pys__eq__s'cCst|iƒSdS(N(shashsselfstype(sself((s0build/bdist.linux-x86_64/egg/spark_parser/ast.pys__hash__sscCs|i|tƒSdS(N(sselfs __repr1__sindentsNone(sselfsindent((s0build/bdist.linux-x86_64/egg/spark_parser/ast.pys__repr__scCs*t|iƒ}|tj od||f}nt}t|ƒdjo|dt|ƒ7}t }n||}|d7}d}xž|D]–}t |dƒo3|o|i||ƒ}q|i|tƒ}n6|o|d|t|ƒf}n|t|ƒ}|d|7}|d7}qˆW|SdS(Ns%d. %sis (%d)s is __repr1__s (sstrsselfstypesrvssibNumsNonesFalsesenumerate_childrenslensTruesindentsisnodeshasattrs __repr1__schild(sselfsindentssibNumsnodesisrvschildsenumerate_children((s0build/bdist.linux-x86_64/egg/spark_parser/ast.pys __repr1__!s.    ( s__name__s __module__s__init__s __getslice__s__eq__s__hash__s__repr__sNones __repr1__(((s0build/bdist.linux-x86_64/egg/spark_parser/ast.pysAST s      s#GenericASTTraversalPruningExceptioncBstZRS(N(s__name__s __module__(((s0build/bdist.linux-x86_64/egg/spark_parser/ast.pys#GenericASTTraversalPruningException<ssGenericASTTraversalcBsJtZdZd„Zd„Zd„Zed„Zed„Zd„Z RS(s GenericASTTraversal is a Visitor pattern according to Design Patterns. For each node it attempts to invoke the method n_, falling back onto the default() method if the n_* can't be found. The preorder traversal also looks for an exit hook named n__exit (no default routine is called if it's not found). To prematurely halt traversal of a subtree, call the prune() method -- this only makes sense for a preorder traversal. Node type is determined via the typestring() method. cCs ||_dS(N(sastsself(sselfsast((s0build/bdist.linux-x86_64/egg/spark_parser/ast.pys__init__IscCs |iSdS(N(snodestype(sselfsnode((s0build/bdist.linux-x86_64/egg/spark_parser/ast.pys typestringLscCs t‚dS(N(s#GenericASTTraversalPruningException(sself((s0build/bdist.linux-x86_64/egg/spark_parser/ast.pyspruneOscCsÞ|tjo |i}nyQd|i|ƒ}t||ƒot||ƒ}||ƒn|i |ƒWnt j o dSnXx|D]}|i |ƒqŒW|d}t||ƒot||ƒ}||ƒndS(sWalk the tree in preorder. For each node with typestring name *name* if the node has a method called n_*name*, call that before walking children. If there is no method define, call a self.default(node) instead. Subclasses of GenericASTTtraversal ill probably want to override this method. If the node has a method called *name*_exit, that is called after all children have been called. 