PK<|I|»´dŒ Œ EGG-INFO/PKG-INFOMetadata-Version: 1.0 Name: spark-parser Version: 1.5.1 Summary: An Early-Algorithm Context-free grammar Parser Toolkit Home-page: https://github.com/rocky/python-spark/ Author: Rocky Bernstein Author-email: rb@dustyfeet.com License: MIT Description: |Supported Python Versions| SPARK ===== SPARK stands for Scanning, Parsing, and Rewriting Kit. It uses Jay Early's algorithm for parsing context free grammars, and comes with some generic Abstract Syntax Tree routines. There is also a prototype scanner which does its job by combining Python regular expressions. The original version of this was written by John Aycock for his Ph.d thesis and was described in his 1998 paper: "Compiling Little Languages in Python" at the 7th International Python Conference. The current incarnation of this code is maintained (or not) by Rocky Bernstein. Note: Early algorithm parsers are almost linear when given an LR grammar. These are grammars which are left-recursive. Installation ------------ This uses `setup.py`, so it follows the standard Python routine: :: python setup.py install # may need sudo # or if you have pyenv: python setup.py develop Example ------- The github `example` directory_ has a worked-out examples; Package uncompyle6_ uses this and contains a much larger example. See Also -------- * features_ * http://pages.cpsc.ucalgary.ca/~aycock/spark/ (Old and not very well maintained) * https://pypi.python.org/pypi/uncompyle6/ .. _features: https://github.com/rocky/python-spark/blob/master/NEW-FEATURES.rst .. _directory: https://github.com/rocky/python-spark/tree/master/example .. _uncompyle6: https://pypi.python.org/pypi/uncompyle6/ .. |downloads| image:: https://img.shields.io/pypi/dd/spark.svg .. |Supported Python Versions| image:: https://img.shields.io/pypi/pyversions/spark_parser.svg Platform: UNKNOWN Classifier: Development Status :: 5 - Production/Stable Classifier: Intended Audience :: Developers Classifier: Operating System :: OS Independent Classifier: Programming Language :: Python Classifier: Programming Language :: Python :: 2.3 Classifier: Programming Language :: Python :: 2.4 Classifier: Programming Language :: Python :: 2.5 Classifier: Programming Language :: Python :: 2.6 Classifier: Programming Language :: Python :: 2.7 Classifier: Programming Language :: Python :: 3.3 Classifier: Programming Language :: Python :: 3.4 Classifier: Programming Language :: Python :: 3.5 Classifier: Topic :: Software Development :: Code Generators Classifier: Topic :: Software Development :: Libraries :: Python Modules PK<|IÚ~N{ EGG-INFO/top_level.txtspark_parser PK<|IŽý–¢ääEGG-INFO/SOURCES.txtChangeLog LICENSE MANIFEST.in Makefile NEW-FEATURES.rst README.rst TODO.rst setup.py example/README.md example/expr/README.md example/expr/expr.py example/expr/expr1.txt example/expr/expr2.txt example/expr2/README.md example/expr2/__init__.py example/expr2/eval.py example/expr2/parser.py example/expr2/parser.pyc example/expr2/scanner.py example/expr2/scanner.pyc example/expr2/__pycache__/parser.cpython-33.pyc example/expr2/__pycache__/scanner.cpython-33.pyc example/python2/.gitignore example/python2/Makefile example/python2/README.md example/python2/__init__.py example/python2/fn.py example/python2/fn.py~ example/python2/py2_format.py example/python2/py2_format.pyc example/python2/py2_parser.py example/python2/py2_parser.pyc example/python2/py2_scan.py example/python2/py2_scan.pyc example/python2/py2_token.py example/python2/py2_token.pyc example/python2/py2_token.py~ example/python2/python26.gr example/python2/reflow.py example/python2/__pycache__/py2_format.cpython-33.pyc example/python2/__pycache__/py2_format.cpython-34.pyc example/python2/__pycache__/py2_format.cpython-35.pyc example/python2/__pycache__/py2_parser.cpython-33.pyc example/python2/__pycache__/py2_parser.cpython-34.pyc example/python2/__pycache__/py2_parser.cpython-35.pyc example/python2/__pycache__/py2_scan.cpython-33.pyc example/python2/__pycache__/py2_scan.cpython-34.pyc example/python2/__pycache__/py2_scan.cpython-35.pyc example/python2/__pycache__/py2_token.cpython-33.pyc example/python2/__pycache__/py2_token.cpython-34.pyc example/python2/__pycache__/py2_token.cpython-35.pyc example/python2/test/Makefile example/python2/test/helper.py example/python2/test/helper.pyc example/python2/test/test_class.py example/python2/test/test_class.py~ example/python2/test/test_format.py example/python2/test/test_format_inline.py example/python2/test/test_parse.py example/python2/test/test_parse_inline.py example/python2/test/test_scan.py example/python2/test/test_scan_inline.py example/python2/test/__pycache__/helper.cpython-33.pyc example/python2/test/__pycache__/helper.cpython-34.pyc example/python2/test/__pycache__/helper.cpython-35.pyc example/python2/test/format/assert.py example/python2/test/format/assert.right example/python2/test/format/def-bug.py-notyet example/python2/test/format/def.py example/python2/test/format/def.right example/python2/test/format/exec.py example/python2/test/format/exec.right example/python2/test/format/expr.py example/python2/test/format/expr.right example/python2/test/format/global.py example/python2/test/format/global.right example/python2/test/format/if.py example/python2/test/format/if.right example/python2/test/format/imports.py example/python2/test/format/imports.right example/python2/test/format/while.py example/python2/test/format/while.right example/python2/test/format/with-bug.py-notyet example/python2/test/format/with.py example/python2/test/format/with.py.~reduce-checks~ example/python2/test/format/with.right example/python2/test/parse/assert.py example/python2/test/parse/assert.right example/python2/test/parse/def.py example/python2/test/parse/def.right example/python2/test/parse/exec.py example/python2/test/parse/exec.right example/python2/test/parse/global.py example/python2/test/parse/global.right example/python2/test/parse/if.py example/python2/test/parse/if.right example/python2/test/parse/imports.py example/python2/test/parse/imports.right example/python2/test/parse/while.py example/python2/test/parse/while.right example/python2/test/scan/.gitignore example/python2/test/scan/expr1.py example/python2/test/scan/expr1.right example/python2/test/scan/indent.right example/python2/test/scan/indent1.py example/python2/test/scan/indent1.right example/python2/test/scan/syms.py example/python2/test/scan/syms.right spark_parser/__init__.py spark_parser/ast.py spark_parser/scanner.py spark_parser/spark.py spark_parser/version.py spark_parser.egg-info/PKG-INFO spark_parser.egg-info/SOURCES.txt spark_parser.egg-info/dependency_links.txt spark_parser.egg-info/top_level.txt spark_parser.egg-info/zip-safe test/test_checker.py test/test_checker.pyc test/test_misc.py test/test_misc.pyc test/test_spark.py test/test_spark.pyc test/__pycache__/test_checker.cpython-33.pyc test/__pycache__/test_checker.cpython-34.pyc test/__pycache__/test_checker.cpython-35.pyc test/__pycache__/test_misc.cpython-33.pyc test/__pycache__/test_misc.cpython-34.pyc test/__pycache__/test_misc.cpython-35.pyc test/__pycache__/test_spark.cpython-33.pyc test/__pycache__/test_spark.cpython-34.pyc test/__pycache__/test_spark.cpython-35.pycPK<|I“×2EGG-INFO/dependency_links.txt PKsXÊH“×2EGG-INFO/zip-safe PK\AÆHqôÿg spark_parser/scanner.py""" Scanning and Token classes that might be useful in creating specific scanners. """ import re def _namelist(instance): namelist, namedict, classlist = [], {}, [instance.__class__] for c in classlist: for b in c.__bases__: classlist.append(b) for name in list(c.__dict__.keys()): if name not in namedict: namelist.append(name) namedict[name] = 1 return namelist class GenericToken: """A sample Token class that can be used in scanning""" def __init__(self, kind, attr=None): self.type = kind self.attr = attr def __eq__(self, o): """ '==', but it's okay if offsets and linestarts are different""" if isinstance(o, GenericToken): return (self.type == o.type) and (self.attr == o.attr) else: return self.type == o def __str__(self): if self.attr: return 'type: %s, value: %r' % (self.type, self.attr) else: return "type: %s" % self.type def __repr__(self): return self.attr or self.type # Used in generic table-driven semantics routines def __hash__(self): return hash(self.attr) # Used in generic table-driven semantics routines def __getitem__(self, i): raise IndexError class GenericScanner: """A class which can be used subclass off of to make specific sets of scanners. Scanner methods that are subclassed off of this that begin with t_ will be introspected in their documentation string and uses as a regular expression in a token pattern. For example: def t_add_op(self, s): r'[+-]' t = GenericToken(type='ADD_OP', attr=s) self.rv.append(t) """ def __init__(self): pattern = self.reflect() self.re = re.compile(pattern, re.VERBOSE) self.index2func = {} for name, number in self.re.groupindex.items(): self.index2func[number-1] = getattr(self, 't_' + name) def makeRE(self, name): doc = getattr(self, name).__doc__ rv = '(?P<%s>%s)' % (name[2:], doc) return rv def reflect(self): rv = [] for name in list(_namelist(self)): if name[:2] == 't_' and name != 't_default': rv.append(self.makeRE(name)) rv.append(self.makeRE('t_default')) return '|'.join(rv) def error(self, s, pos): """Simple-minded error handler. see py2_scan for another possibility.' """ print("Lexical error in %s at position %s" % (s, pos)) raise SystemExit def tokenize(self, s): pos = 0 n = len(s) while pos < n: m = self.re.match(s, pos) if m is None: self.error(s, pos) groups = m.groups() for i in range(len(groups)): if groups[i] and i in self.index2func: self.index2func[i](groups[i]) pos = m.end() def t_default(self, s): r'( \n )+' pass PK<|IŸè?LLspark_parser/__init__.pyc;ò —\Wc@sdkZdklZdeZdZeiddfjZeoldklZdkl Z dkl Z d k l Z d k l Z d k lZd klZd klZnidklZdkl Z dkl Z d kl Z d kl Z d klZd klZd klZdS(N(sVERSIONs'SPARK-%s Python2 and Python3 compatiblesrestructuredtextii(sAST(sGenericASTTraversal(s#GenericASTTraversalPruningException(s DEFAULT_DEBUG(s GenericParser(sGenericASTBuilder(sGenericScanner(s GenericToken(ssyssspark_parser.versionsVERSIONs __version__s __docformat__s version_infosPYTHON3sspark_parser.astsASTsGenericASTTraversals#GenericASTTraversalPruningExceptionsspark_parser.sparks DEFAULT_DEBUGs GenericParsersGenericASTBuildersspark_parser.scannersGenericScanners GenericTokensastssparksscanner( sGenericScanners#GenericASTTraversalPruningExceptions GenericParsers DEFAULT_DEBUGsASTs GenericTokens __docformat__ssyssGenericASTTraversalsVERSIONsGenericASTBuilders __version__sPYTHON3((s5build/bdist.linux-x86_64/egg/spark_parser/__init__.pys?s*                 PK<|Iÿ?a¬¬spark_parser/version.pyc;ò ¸#= 0: start = index - 2 else: start = 0 tokens = [str(tokens[i]) for i in range(start, index+1)] print("Token context:\n\t%s" % ("\n\t".join(tokens))) raise SystemExit def errorstack(self, tokens, i, full=False): """Show the stacks of completed symbols. We get this by inspecting the current transitions possible and from that extracting the set of states we are in, and from there we look at the set of symbols before the "dot". If full is True, we show the entire rule with the dot placement. Otherwise just the rule up to the dot. """ print("\n-- Stacks of completed symbols:") states = [s for s in self.edges.values() if s] # States now has the set of states we are in state_stack = set() for state in states: # Find rules which can follow, but keep only # the part before the dot for rule, dot in self.states[state].items: lhs, rhs = rule if dot > 0: if full: state_stack.add(' '.join(rhs[:dot]) + ' . ' + ' '.join(rhs[dot:])) else: state_stack.add(' '.join(rhs[:dot])) pass pass pass for stack in sorted(state_stack): print(stack) def parse(self, tokens, debug=None): """This is the main entry point from outside. Passing in a debug dictionary changes the default debug setting. """ if debug: self.debug = debug sets = [ [(1, 0), (2, 0)] ] self.links = {} if self.ruleschanged: self.computeNull() self.newrules = {} self.new2old = {} self.makeNewRules() self.ruleschanged = False self.edges, self.cores = {}, {} self.states = { 0: self.makeState0() } self.makeState(0, self._BOF) for i in range(len(tokens)): sets.append([]) if sets[i] == []: break self.makeSet(tokens, sets, i) else: sets.append([]) self.makeSet(None, sets, len(tokens)) finalitem = (self.finalState(tokens), 0) if finalitem not in sets[-2]: if len(tokens) > 0: if self.debug['errorstack']: self.errorstack(tokens, i-1, str(self.debug['errorstack']) == 'full') self.error(tokens, i-1) else: self.error(None, None) return self.buildTree(self._START, finalitem, tokens, len(sets)-2) def isnullable(self, sym): # For symbols in G_e only. return sym.startswith(self._NULLABLE) def skip(self, xxx_todo_changeme, pos=0): (lhs, rhs) = xxx_todo_changeme n = len(rhs) while pos < n: if not self.isnullable(rhs[pos]): break pos = pos + 1 return pos def makeState(self, state, sym): assert sym is not None # print(sym) # debug # # Compute \epsilon-kernel state's core and see if # it exists already. # kitems = [] for rule, pos in self.states[state].items: lhs, rhs = rule if rhs[pos:pos+1] == (sym,): kitems.append((rule, self.skip(rule, pos+1))) tcore = tuple(sorted(kitems)) if tcore in self.cores: return self.cores[tcore] # # Nope, doesn't exist. Compute it and the associated # \epsilon-nonkernel state together; we'll need it right away. # k = self.cores[tcore] = len(self.states) K, NK = _State(k, kitems), _State(k+1, []) self.states[k] = K predicted = {} edges = self.edges rules = self.newrules for X in K, NK: worklist = X.items for item in worklist: rule, pos = item lhs, rhs = rule if pos == len(rhs): X.complete.append(rule) continue nextSym = rhs[pos] key = (X.stateno, nextSym) if nextSym not in rules: if key not in edges: edges[key] = None X.T.append(nextSym) else: edges[key] = None if nextSym not in predicted: predicted[nextSym] = 1 for prule in rules[nextSym]: ppos = self.skip(prule) new = (prule, ppos) NK.items.append(new) # # Problem: we know K needs generating, but we # don't yet know about NK. Can't commit anything # regarding NK to self.edges until we're sure. Should # we delay committing on both K and NK to avoid this # hacky code? This creates other problems.. # if X is K: edges = {} if NK.items == []: return k # # Check for \epsilon-nonkernel's core. Unfortunately we # need to know the entire set of predicted nonterminals # to do this without accidentally duplicating states. # tcore = tuple(sorted(predicted.keys())) if tcore in self.cores: self.edges[(k, None)] = self.cores[tcore] return k nk = self.cores[tcore] = self.edges[(k, None)] = NK.stateno self.edges.update(edges) self.states[nk] = NK return k def goto(self, state, sym): key = (state, sym) if key not in self.edges: # # No transitions from state on sym. # return None rv = self.edges[key] if rv is None: # # Target state isn't generated yet. Remedy this. # rv = self.makeState(state, sym) self.edges[key] = rv return rv def gotoT(self, state, t): if self.debug['rules']: print("Terminal", t, state) return [self.goto(state, t)] def gotoST(self, state, st): if self.debug['transition']: print("GotoST", st, state) rv = [] for t in self.states[state].T: if st == t: rv.append(self.goto(state, t)) return rv def add(self, set, item, i=None, predecessor=None, causal=None): if predecessor is None: if item not in set: set.append(item) else: key = (item, i) if item not in set: self.links[key] = [] set.append(item) self.links[key].append((predecessor, causal)) def makeSet(self, tokens, sets, i): cur, next = sets[i], sets[i+1] if tokens is not None: token = tokens[i] ttype = self.typestring(token) else: ttype = None token = None if ttype is not None: fn, arg = self.gotoT, ttype else: fn, arg = self.gotoST, token for item in cur: ptr = (item, i) state, parent = item add = fn(state, arg) for k in add: if k is not None: self.add(next, (k, parent), i+1, ptr) nk = self.goto(k, None) if nk is not None: self.add(next, (nk, i+1)) if parent == i: continue for rule in self.states[state].complete: lhs, rhs = rule if self.debug['reduce']: self.debug_reduce(rule, tokens, parent, i) if lhs in self.check_reduce and tokens: if self.check_reduce[lhs] == 'AST': ast = self.reduce_ast(rule, tokens, item, i, sets) else: ast = None invalid = self.reduce_is_invalid(rule, ast, tokens, parent, i) if ast: del ast if invalid: if self.debug['reduce']: print("Reduce %s invalid by check" % lhs) continue pass for pitem in sets[parent]: pstate, pparent = pitem k = self.goto(pstate, lhs) if k is not None: why = (item, i, rule) pptr = (pitem, parent) self.add(cur, (k, pparent), i, pptr, why) nk = self.goto(k, None) if nk is not None: self.add(cur, (nk, i)) def makeSet_fast(self, token, sets, i): # # Call *only* when the entire state machine has been built! # It relies on self.edges being filled in completely, and # then duplicates and inlines code to boost speed at the # cost of extreme ugliness. # cur, next = sets[i], sets[i+1] ttype = token is not None and self.typestring(token) or None for item in cur: ptr = (item, i) state, parent = item if ttype is not None: k = self.edges.get((state, ttype), None) if k is not None: # self.add(next, (k, parent), i+1, ptr) # INLINED --------v new = (k, parent) key = (new, i+1) if new not in next: self.links[key] = [] next.append(new) self.links[key].append((ptr, None)) # INLINED --------^ # nk = self.goto(k, None) nk = self.edges.get((k, None), None) if nk is not None: # self.add(next, (nk, i+1)) # INLINED -------------v new = (nk, i+1) if new not in next: next.append(new) # INLINED ---------------^ else: add = self.gotoST(state, token) for k in add: if k is not None: self.add(next, (k, parent), i+1, ptr) # nk = self.goto(k, None) nk = self.edges.get((k, None), None) if nk is not None: self.add(next, (nk, i+1)) if parent == i: continue for rule in self.states[state].complete: lhs, rhs = rule for pitem in sets[parent]: pstate, pparent = pitem # k = self.goto(pstate, lhs) k = self.edges.get((pstate, lhs), None) if k is not None: why = (item, i, rule) pptr = (pitem, parent) # self.add(cur, (k, pparent), i, pptr, why) # INLINED ---------v new = (k, pparent) key = (new, i) if new not in cur: self.links[key] = [] cur.append(new) self.links[key].append((pptr, why)) # INLINED ----------^ # nk = self.goto(k, None) nk = self.edges.get((k, None), None) if nk is not None: # self.add(cur, (nk, i)) # INLINED ---------v new = (nk, i) if new not in cur: cur.append(new) # INLINED ----------^ def predecessor(self, key, causal): for p, c in self.links[key]: if c == causal: return p assert 0 def causal(self, key): links = self.links[key] if len(links) == 1: return links[0][1] choices = [] rule2cause = {} for p, c in links: rule = c[2] choices.append(rule) rule2cause[rule] = c return rule2cause[self.ambiguity(choices)] def deriveEpsilon(self, nt): if len(self.newrules[nt]) > 1: rule = self.ambiguity(self.newrules[nt]) else: rule = self.newrules[nt][0] # print(rule) # debug rhs = rule[1] attr = [None] * len(rhs) for i in range(len(rhs)-1, -1, -1): attr[i] = self.deriveEpsilon(rhs[i]) return self.rule2func[self.new2old[rule]](attr) def buildTree(self, nt, item, tokens, k): if self.debug['rules']: print("NT", nt) state, parent = item choices = [] for rule in self.states[state].complete: if rule[0] == nt: choices.append(rule) rule = choices[0] if len(choices) > 1: rule = self.ambiguity(choices) # print(rule) # debug rhs = rule[1] attr = [None] * len(rhs) for i in range(len(rhs)-1, -1, -1): sym = rhs[i] if sym not in self.newrules: if sym != self._BOF: attr[i] = tokens[k-1] key = (item, k) item, k = self.predecessor(key, None) # elif self.isnullable(sym): elif self._NULLABLE == sym[0:len(self._NULLABLE)]: attr[i] = self.deriveEpsilon(sym) else: key = (item, k) why = self.causal(key) attr[i] = self.buildTree(sym, why[0], tokens, why[1]) item, k = self.predecessor(key, why) return self.rule2func[self.new2old[rule]](attr) def ambiguity(self, rules): # # XXX - problem here and in collectRules() if the same rule # appears in >1 method. Also undefined results if rules # causing the ambiguity appear in the same method. # sortlist = [] name2index = {} for i in range(len(rules)): lhs, rhs = rule = rules[i] name = self.rule2name[self.new2old[rule]] sortlist.append((len(rhs), name)) name2index[name] = i sortlist.sort() list = [a_b[1] for a_b in sortlist] return rules[name2index[self.resolve(list)]] def resolve(self, list): ''' Resolve ambiguity in favor of the shortest RHS. Since we walk the tree from the top down, this should effectively resolve in favor of a "shift". ''' return list[0] def dumpGrammar(self): """ Print grammar rules """ for rule in sorted(self.rule2name.items()): print("%s" % rule2str(rule)) return def checkGrammar(self): ''' Check grammar ''' lhs, rhs, tokens, right_recursive = self.checkSets() if len(lhs) > 0: print("LHS symbols not used on the RHS:") print(sorted(lhs)) if len(rhs) > 0: print("RHS symbols not used on the LHS:") print(sorted(rhs)) if len(right_recursive) > 0: print("Right recursive rules:") for rule in right_recursive: print("%s ::= %s" % (rule[0], ' '.join(rule[1]))) pass pass def checkSets(self): ''' Check grammar ''' lhs_set = set() rhs_set = set() token_set = set() right_recursive = [] for lhs in self.rules: rules_for_lhs = self.rules[lhs] lhs_set.add(lhs) for rule in rules_for_lhs: rhs = rule[1] for sym in rhs: # We assume any symbol starting with an uppercase letter is # terminal, and anything else is a nonterminal if re.match("^[A-Z]", sym): token_set.add(sym) else: rhs_set.add(sym) if len(rhs) > 0 and lhs == rhs[-1]: right_recursive.append([lhs, rhs]) pass pass lhs_set.remove(self._START) rhs_set.remove(self._BOF) missing_lhs = lhs_set - rhs_set missing_rhs = rhs_set - lhs_set return (missing_lhs, missing_rhs, token_set, right_recursive) def debug_reduce(self, rule, tokens, parent, i): print("%s ::= %s" % (rule[0], ' '.join(rule[1]))) def reduce_ast(self, rule, tokens, item, k, sets): rhs = rule[1] ast = [None] * len(rhs) for i in range(len(rhs)-1, -1, -1): sym = rhs[i] if sym not in self.newrules: if sym != self._BOF: ast[i] = tokens[k-1] key = (item, k) item, k = self.predecessor(key, None) elif self._NULLABLE == sym[0:len(self._NULLABLE)]: ast[i] = self.deriveEpsilon(sym) else: key = (item, k) why = self.causal(key) ast[i] = self.buildTree(sym, why[0], tokens, why[1]) item, k = self.predecessor(key, why) pass pass return ast # # # GenericASTBuilder automagically constructs a concrete/abstract syntax tree # for a given input. The extra argument is a class (not an instance!) # which supports the "__setslice__" and "__len__" methods. # # XXX - silently overrides any user code in methods. # class GenericASTBuilder(GenericParser): def __init__(self, AST, start, debug=DEFAULT_DEBUG): GenericParser.__init__(self, start, debug=debug) self.AST = AST def preprocess(self, rule, func): rebind = lambda lhs, self=self: \ lambda args, lhs=lhs, self=self: \ self.buildASTNode(args, lhs) lhs, rhs = rule return rule, rebind(lhs) def buildASTNode(self, args, lhs): children = [] for arg in args: if isinstance(arg, self.AST): children.append(arg) else: children.append(self.terminal(arg)) return self.nonterminal(lhs, children) def terminal(self, token): return token def nonterminal(self, type, args): rv = self.AST(type) rv[:len(args)] = args return rv PKô;|IHÒÒ1<<spark_parser/ast.pyimport sys PYTHON3 = (sys.version_info >= (3, 0)) if PYTHON3: intern = sys.intern from collections import UserList else: from UserList import UserList class AST(UserList): def __init__(self, kind, kids=[]): self.type = intern(kind) UserList.__init__(self, kids) def __getslice__(self, low, high): return self.data[low:high] def __eq__(self, o): if isinstance(o, AST): return (self.type == o.type and UserList.__eq__(self, o)) else: return self.type == o def __hash__(self): return hash(self.type) def __repr__(self, indent=''): return self.__repr1__(indent, None) def __repr1__(self, indent, sibNum=None): rv = str(self.type) if sibNum is not None: rv = "%d. %s" % (sibNum, rv) enumerate_children = False if len(self) > 1: rv += " (%d)" % (len(self)) enumerate_children = True rv = indent + rv indent += ' ' i = 0 for node in self: if hasattr(node, '__repr1__'): if enumerate_children: child = node.__repr1__(indent, i) else: child = node.__repr1__(indent, None) else: if enumerate_children: child = indent + "%d. %s" % (i, str(node)) else: child = indent + str(node) pass rv += "\n" + child i += 1 return rv class GenericASTTraversalPruningException: pass class GenericASTTraversal: ''' GenericASTTraversal is a Visitor pattern according to Design Patterns. For each node it attempts to invoke the method n_, falling back onto the default() method if the n_* can't be found. The preorder traversal also looks for an exit hook named n__exit (no default routine is called if it's not found). To prematurely halt traversal of a subtree, call the prune() method -- this only makes sense for a preorder traversal. Node type is determined via the typestring() method. ''' def __init__(self, ast): self.ast = ast def typestring(self, node): return node.type def prune(self): raise GenericASTTraversalPruningException def preorder(self, node=None): """Walk the tree in preorder. For each node with typestring name *name* if the node has a method called n_*name*, call that before walking children. If there is no method define, call a self.default(node) instead. Subclasses of GenericASTTtraversal ill probably want to override this method. If the node has a method called *name*_exit, that is called after all children have been called. So in this sense this function is both preorder and postorder combined. """ if node is None: node = self.ast try: name = 'n_' + self.typestring(node) if hasattr(self, name): func = getattr(self, name) func(node) else: self.default(node) except GenericASTTraversalPruningException: return for kid in node: self.preorder(kid) name = name + '_exit' if hasattr(self, name): func = getattr(self, name) func(node) def postorder(self, node=None): """Walk the tree in postorder. For each node with typestring name *name* if the node has a method called n_*name*, call that before walking children. If there is no method define, call a self.default(node) instead. Subclasses of GenericASTTtraversal ill probably want to override this method. If the node has a method called *name*_exit, that is called after all children have been called. So in this sense this function is both preorder and postorder combined. """ if node is None: node = self.ast try: first = iter(node) except TypeError: first = None if first: for kid in node: self.postorder(kid) try: name = 'n_' + self.typestring(node) if hasattr(self, name): func = getattr(self, name) func(node) else: self.default(node) except GenericASTTraversalPruningException: return name = name + '_exit' if hasattr(self, name): func = getattr(self, name) func(node) def default(self, node): """Default acttion to take on an ASTNode. Our defualt is to do nothing. Subclasses will probably want to define this for other behavior.""" pass PK<|I©2Q!Y|Y|spark_parser/spark.pyc;ò ­#ísi(sselfs_STARTs_BOFsstartsrulesaddRule(sselfsstartsrule((s2build/bdist.linux-x86_64/egg/spark_parser/spark.pysaugmentëscCsBh|_g}x­t|iiƒƒD]–}|dd}d|i|        cCsHtdgƒ}x.|i|iD]}|ii|dfƒq W|SdS(Ni(s_Statess0sselfsnewruless_STARTsrulesitemssappend(sselfss0srule((s2build/bdist.linux-x86_64/egg/spark_parser/spark.pys makeState0s cCsht|i|iƒdjot|ƒdjodSn|i|iddd}|id|ƒSdS(Niii(slensselfsnewruless_STARTstokenssrulessstartsgoto(sselfstokenssstart((s2build/bdist.linux-x86_64/egg/spark_parser/spark.pys finalStates0c Cs£g} xGt|iiƒƒD]0} x'| D]}| i|dd|fƒq)WqWxL| D]D\}}}}|\}} t | ƒ}x||jo | |} | |ijp |i|  od}|d}q„nt| ƒ}|i| ||<|t|ƒf}| i||d||fƒd}|d}q„W|o|i|}|| f}n||ijo|i|i|ƒn|g|i|<||i|cCs dGHgi}|iiƒD]}|o||ƒqq~} tƒ} x | D]˜}x|i|i D]}\}} |\} } | djoX|o3| idi| | ƒddi| | ƒƒqä| idi| | ƒƒqgqgWqPWxt| ƒD] }|GHqùWdS(svShow the stacks of completed symbols. We get this by inspecting the current transitions possible and from that extracting the set of states we are in, and from there we look at the set of symbols before the "dot". If full is True, we show the entire rule with the dot placement. Otherwise just the rule up to the dot. s -- Stacks of completed symbols:is s . N(sappends_[1]sselfsedgessvaluessssstatesssets state_stacksstatesitemssrulesdotslhssrhssfullsaddsjoinssortedsstack(sselfstokenssisfulls_[1]sstatesssrulesstacks state_stacksstatesslhssrhssdot((s2build/bdist.linux-x86_64/egg/spark_parser/spark.pys errorstackVs&;    3 cCsù|o ||_nddfddfgg}h|_|iov|iƒh|_h|_|iƒt |_hhf\|_ |_ hd|i ƒ<|_ |id|iƒnxvtt|ƒƒD]<}|igƒ||gjoPn|i|||ƒqËW|igƒ|it|t|ƒƒ|i|ƒdf}||djo{t|ƒdjoT|ido.|i||dt|idƒdjƒn|i||dƒqÒ|ittƒn|i|i||t|ƒdƒSdS(s…This is the main entry point from outside. Passing in a debug dictionary changes the default debug setting. iiiiþÿÿÿs errorstacksfullN(sdebugsselfssetsslinkss ruleschangeds computeNullsnewrulessnew2olds makeNewRulessFalsesedgesscoress makeState0sstatess makeStates_BOFsrangeslenstokenssisappendsmakeSetsNones finalStates finalitems errorstacksstrserrors buildTrees_START(sselfstokenssdebugsissetss finalitem((s2build/bdist.linux-x86_64/egg/spark_parser/spark.pysparsess>          .cCs|i|iƒSdS(N(ssyms startswithsselfs _NULLABLE(sselfssym((s2build/bdist.linux-x86_64/egg/spark_parser/spark.pys isnullable sicCsY|\}}t|ƒ}x6||jo(|i||ƒ oPn|d}qW|SdS(Ni(sxxx_todo_changemeslhssrhsslensnspossselfs isnullable(sselfsxxx_todo_changemespossrhssnslhs((s2build/bdist.linux-x86_64/egg/spark_parser/spark.pysskip¤s   cCs|tj pt‚g}xl|i|iD]Z\}}|\}} | ||d!|fjo'|i ||i ||dƒfƒq+q+Wtt|ƒƒ}||ijo|i|Snt|iƒ}|i|qÊqÊW||joq–nx©|i| iD]—}|\}}|ido|i||||ƒn||ijo|o|i|djo|i|||||ƒ}nt}|i |||||ƒ} |o~n| o"|ido d|GHqgqgqGnx´||D]¨}|\}}|i||ƒ}|tj ow|||f}||f}|i| ||f|||ƒ|i|tƒ}|tj o|i| ||fƒqúqRqRWqgWq–WdS(NisreducesASTsReduce %s invalid by check('ssetssiscursnextstokenssNonestokensselfs typestringsttypesgotoTsfnsargsgotoSTsitemsptrsstatesparentsaddsksgotosnksstatesscompletesruleslhssrhssdebugs debug_reduces check_reduces reduce_astsastsreduce_is_invalidsinvalidspitemspstatespparentswhyspptr(sselfstokensssetssispitemspparentsargsptrsnkscursaddsinvalidsnextsstatesttypespptrsparentsastsrhsswhysfnspstatesksrulesitemstokenslhs((s2build/bdist.linux-x86_64/egg/spark_parser/spark.pysmakeSet%sh        &         cCsV||||df\}} |tj o |i|ƒpt}x|D]}||f}|\} }|tj oë|i i| |ftƒ}|tj o¿||f} | |df}| | jog|i|<| i| ƒn|i|i|tfƒ|i i|tftƒ}|tj o2||df} | | jo| i| ƒqVqZqøn›|i| |ƒ} x…| D]}}|tj oj|i| ||f|d|ƒ|i i|tftƒ}|tj o|i| ||dfƒqôqwqwW||joqHnx?|i| iD]-}|\}}x||D] }|\}}|i i||ftƒ}|tj oÒ|||f}||f} ||f} | |f}| |jog|i|<|i| ƒn|i|i| |fƒ|i i|tftƒ}|tj o.||f} | |jo|i| ƒqBqFq:q:WqWqHWdS(Ni(!ssetssiscursnextstokensNonesselfs typestringsttypesitemsptrsstatesparentsedgessgetsksnewskeyslinkssappendsnksgotoSTsaddsstatesscompletesruleslhssrhsspitemspstatespparentswhyspptr(sselfstokenssetssispitemspparentsptrsnkscursaddsnextsstatesnewspptrsparentsrhsskeysttypeswhyspstatesksrulesitemslhs((s2build/bdist.linux-x86_64/egg/spark_parser/spark.pys makeSet_fast^sf#            &             cCsEx0|i|D]!\}}||jo|SqqWdpt‚dS(Ni(sselfslinksskeyspscscausalsAssertionError(sselfskeyscausalspsc((s2build/bdist.linux-x86_64/egg/spark_parser/spark.pys predecessor©s    cCs‰|i|}t|ƒdjo|ddSng}h}x5|D]-\}}|d}|i |ƒ|||sc CsB|d} tgt| ƒ}xtt| ƒdddƒD]ü}| |} | |i joN| |i jo:||d||<||f} |i| tƒ\}}q6q:|i| dt|iƒ!jo|i| ƒ||gs(slhssself(slhssself((s2build/bdist.linux-x86_64/egg/spark_parser/spark.pysfs(sselfsrebindsruleslhssrhs(sselfsrulesfuncsrebindslhssrhs((s2build/bdist.linux-x86_64/egg/spark_parser/spark.pys preprocesses  cCseg}xH|D]@}t||iƒo|i|ƒq |i|i|ƒƒq W|i||ƒSdS(N( schildrensargssargs isinstancesselfsASTsappendsterminals nonterminalslhs(sselfsargsslhssargschildren((s2build/bdist.linux-x86_64/egg/spark_parser/spark.pys buildASTNodelscCs|SdS(N(stoken(sselfstoken((s2build/bdist.linux-x86_64/egg/spark_parser/spark.pysterminaluscCs'|i|ƒ}||t|ƒ*|SdS(N(sselfsASTstypesrvsargsslen(sselfstypesargssrv((s2build/bdist.linux-x86_64/egg/spark_parser/spark.pys nonterminalxs(s__name__s __module__s DEFAULT_DEBUGs__init__s preprocesss buildASTNodesterminals nonterminal(((s2build/bdist.linux-x86_64/egg/spark_parser/spark.pysGenericASTBuilder`s    (s__doc__sossressyssversionssetssSetssetssorteds _namelistsrule2strs_StatesFalsesNonesTrues DEFAULT_DEBUGsobjects GenericParsersGenericASTBuilder( ssets DEFAULT_DEBUGsrule2strs_StatesGenericASTBuilderssyssres _namelistssorteds GenericParsersos((s2build/bdist.linux-x86_64/egg/spark_parser/spark.pys?s    <ÿÿÿ#PKú;|I÷Û`ýddspark_parser/version.py# This file is suitable for sourcing inside bash as # well as importing into Python VERSION='1.5.1' PKÇAËHWîK¿¿spark_parser/__init__.pyimport sys from spark_parser.version import VERSION __version__ = 'SPARK-%s Python2 and Python3 compatible' % VERSION __docformat__ = 'restructuredtext' PYTHON3 = (sys.version_info >= (3, 0)) if PYTHON3: from spark_parser.ast import AST as AST from spark_parser.ast import GenericASTTraversal as GenericASTTraversal from spark_parser.ast import GenericASTTraversalPruningException as GenericASTTraversalPruningException from spark_parser.spark import DEFAULT_DEBUG from spark_parser.spark import GenericParser as GenericParser from spark_parser.spark import GenericASTBuilder as GenericASTBuilder from spark_parser.scanner import GenericScanner as GenericScanner from spark_parser.scanner import GenericToken as GenericToken else: from ast import AST as AST from ast import GenericASTTraversal as GenericASTTraversal from ast import GenericASTTraversalPruningException as GenericASTTraversalPruningException from spark import DEFAULT_DEBUG from spark import GenericParser as GenericParser from spark import GenericASTBuilder as GenericASTBuilder from scanner import GenericScanner as GenericScanner from scanner import GenericToken as GenericToken PK<|I·µ spark_parser/scanner.pyc;ò QhUWc@sBdZdkZd„Zdfd„ƒYZdfd„ƒYZdS(sP Scanning and Token classes that might be useful in creating specific scanners. NcCs gh|igf\}}}xw|D]o}x|iD]}|i|ƒq5WxEt |i i ƒƒD].}||jo|i|ƒd|||iiiƒD]*\}}t |d|ƒ|i|ds    cCs.t||ƒi}d|d|f}|SdS(Ns (?P<%s>%s)i(sgetattrsselfsnames__doc__sdocsrv(sselfsnamesdocsrv((s4build/bdist.linux-x86_64/egg/spark_parser/scanner.pysmakeREFscCs‚g}xRtt|ƒƒD]>}|d djo |djo|i|i|ƒƒqqW|i|idƒƒdi|ƒSdS(Nist_s t_defaults|(srvslists _namelistsselfsnamesappendsmakeREsjoin(sselfsrvsname((s4build/bdist.linux-x86_64/egg/spark_parser/scanner.pysreflectKscCsd||fGHt‚dS(sTSimple-minded error handler. see py2_scan for another possibility.' s"Lexical error in %s at position %sN(sssposs SystemExit(sselfssspos((s4build/bdist.linux-x86_64/egg/spark_parser/scanner.pyserrorSscCsÊd}t|ƒ}x±||jo£|ii||ƒ}|tjo|i ||ƒn|i ƒ}xNt t|ƒƒD]:}||o ||i jo|i |||ƒqwqwW|iƒ}qWdS(Ni(sposslensssnsselfsresmatchsmsNoneserrorsgroupssrangesis index2funcsend(sselfsssismspossnsgroups((s4build/bdist.linux-x86_64/egg/spark_parser/scanner.pystokenizeZs    cCsdS(s( \n )+N((sselfss((s4build/bdist.linux-x86_64/egg/spark_parser/scanner.pys t_defaulths( s__name__s __module__s__doc__s__init__smakeREsreflectserrorstokenizes t_default(((s4build/bdist.linux-x86_64/egg/spark_parser/scanner.pysGenericScanner1s      (s__doc__sres _namelists GenericTokensGenericScanner(sres GenericTokensGenericScanners _namelist((s4build/bdist.linux-x86_64/egg/spark_parser/scanner.pys?s  PK<|IBŽB  spark_parser/ast.pyc;ò ­#, falling back onto the default() method if the n_* can't be found. The preorder traversal also looks for an exit hook named n__exit (no default routine is called if it's not found). To prematurely halt traversal of a subtree, call the prune() method -- this only makes sense for a preorder traversal. Node type is determined via the typestring() method. cCs ||_dS(N(sastsself(sselfsast((s0build/bdist.linux-x86_64/egg/spark_parser/ast.pys__init__IscCs |iSdS(N(snodestype(sselfsnode((s0build/bdist.linux-x86_64/egg/spark_parser/ast.pys typestringLscCs t‚dS(N(s#GenericASTTraversalPruningException(sself((s0build/bdist.linux-x86_64/egg/spark_parser/ast.pyspruneOscCsÞ|tjo |i}nyQd|i|ƒ}t||ƒot||ƒ}||ƒn|i |ƒWnt j o dSnXx|D]}|i |ƒqŒW|d}t||ƒot||ƒ}||ƒndS(sWalk the tree in preorder. For each node with typestring name *name* if the node has a method called n_*name*, call that before walking children. If there is no method define, call a self.default(node) instead. Subclasses of GenericASTTtraversal ill probably want to override this method. If the node has a method called *name*_exit, that is called after all children have been called. So in this sense this function is both preorder and postorder combined. sn_Ns_exit( snodesNonesselfsasts typestringsnameshasattrsgetattrsfuncsdefaults#GenericASTTraversalPruningExceptionskidspreorder(sselfsnodesnamesfuncskid((s0build/bdist.linux-x86_64/egg/spark_parser/ast.pyspreorderRs$     cCs|tjo |i}nyt|ƒ}Wntj o t}nX|o"x|D]}|i|ƒqTWnyQd|i |ƒ}t ||ƒot ||ƒ}||ƒn|i|ƒWntj o dSnX|d}t ||ƒot ||ƒ}||ƒndS(sWalk the tree in postorder. For each node with typestring name *name* if the node has a method called n_*name*, call that before walking children. If there is no method define, call a self.default(node) instead. Subclasses of GenericASTTtraversal ill probably want to override this method. If the node has a method called *name*_exit, that is called after all children have been called. So in this sense this function is both preorder and postorder combined. sn_Ns_exit(snodesNonesselfsastsitersfirsts TypeErrorskids postorders typestringsnameshasattrsgetattrsfuncsdefaults#GenericASTTraversalPruningException(sselfsnodesnamesfuncskidsfirst((s0build/bdist.linux-x86_64/egg/spark_parser/ast.pys postorderss.      cCsdS(sDefault acttion to take on an ASTNode. Our defualt is to do nothing. Subclasses will probably want to define this for other behavior.N((sselfsnode((s0build/bdist.linux-x86_64/egg/spark_parser/ast.pysdefaultšs( s__name__s __module__s__doc__s__init__s typestringsprunesNonespreorders postordersdefault(((s0build/bdist.linux-x86_64/egg/spark_parser/ast.pysGenericASTTraversal?s     ! '( ssyss version_infosPYTHON3sinterns collectionssUserListsASTs#GenericASTTraversalPruningExceptionsGenericASTTraversal(s#GenericASTTraversalPruningExceptionsASTsGenericASTTraversalssyssUserListsinternsPYTHON3((s0build/bdist.linux-x86_64/egg/spark_parser/ast.pys?s   0PK<|I|»´dŒ Œ ´EGG-INFO/PKG-INFOPK<|IÚ~N{ ´» EGG-INFO/top_level.txtPK<|IŽý–¢ää´ü EGG-INFO/SOURCES.txtPK<|I“×2´EGG-INFO/dependency_links.txtPKsXÊH“×2´NEGG-INFO/zip-safePK\AÆHqôÿg ´~spark_parser/scanner.pyPK<|IŸè?LL´À*spark_parser/__init__.pycPK<|Iÿ?a¬¬´C0spark_parser/version.pycPKô;|I¸élW5x5x´%1spark_parser/spark.pyPKô;|IHÒÒ1<<´©spark_parser/ast.pyPK<|I©2Q!Y|Y|´ú¼spark_parser/spark.pycPKú;|I÷Û`ýdd´‡9spark_parser/version.pyPKÇAËHWîK¿¿´ :spark_parser/__init__.pyPK<|I·µ ´?spark_parser/scanner.pycPK<|IBŽB  ´]Uspark_parser/ast.pycPKüœq