Welcome to Tokenparser’s documentation!

Contents:

Tokenparser

class Tokenparser.Tokenparser

Usage:

import Tokenparser

p = Tokenparser.Tokenparser()
p.upTo('FIELD', 'A')          # captures "DDD"
p.skip('A')                   # skips "A"
p.fromTo('FIELD2', 'C', 'D')  # captures "CEEED"
p.parse("DDDACEEED")
print(p.matches())  # -> {"FIELD": "DDD", "FIELD2" : "CEEED"}
print(p.matches())  # -> {"FIELD": "DDD", "FIELD2" : "CEEED"}
p.clearMatches()
print(p.matches())  # -> {}
clearMatches()

Clear the result of parsing.

fromTo(field, fr, to)
Parameters:
  • field – string
  • fr – char
  • to – char

Capture all characters from fr to to (including to) into field.

matches() → dict

Return a dict with fields as keys and captured tokens as values. Does not clear the result.

multilinesParse(iterable_input) → list
Parameters:iterable_input – an iterable of strings (tuple, list, etc.)

Return a list of dicts, as returned by matches(), but only for strings that are parsed fully.
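The behavior can be sketched in plain Python with a regular expression standing in for a rule chain. This is illustrative only — `pattern` and `multilines_parse` are hypothetical names, not part of Tokenparser:

```python
import re

# Hypothetical stand-in for a rule chain like:
#   upTo('KEY', '='), skip('='), upTo('VALUE', end-of-line)
pattern = re.compile(r"(?P<KEY>[^=]*)=(?P<VALUE>.*)")

def multilines_parse(lines):
    """Sketch of multilinesParse: keep a match dict only for lines
    that the rules consume fully (i.e. parse() would return True)."""
    results = []
    for line in lines:
        m = pattern.fullmatch(line)  # full compliance required
        if m:
            results.append(m.groupdict())
    return results

print(multilines_parse(["a=1", "bad line", "b=2"]))
# -> [{'KEY': 'a', 'VALUE': '1'}, {'KEY': 'b', 'VALUE': '2'}]
```

Lines that do not match the rules completely ("bad line" above) are silently dropped from the result list.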

parse(input) → bool
Parameters:input – input string to parse

Parse the given input string. Return True if the input fully complies with the rules, otherwise False.

You can retrieve the result with Tokenparser.matches()
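To make these semantics concrete, here is a minimal pure-Python sketch that reproduces the documented behavior. `MiniTokenparser` is an illustrative stand-in, not the library's actual implementation:

```python
class MiniTokenparser:
    """Illustrative sketch of the documented Tokenparser semantics.
    NOT the real implementation -- for explanation only."""

    def __init__(self):
        self._rules = []    # rules in declaration order
        self._matches = {}

    def upTo(self, field, char):
        # Capture characters up to char (excluding char) into field.
        self._rules.append(("upTo", field, char))

    def skip(self, char):
        # Skip exactly one occurrence of char.
        self._rules.append(("skip", char))

    def skipTo(self, char):
        # Skip characters up to char (excluding char).
        self._rules.append(("skipTo", char))

    def fromTo(self, field, fr, to):
        # Capture characters from fr to to (including to) into field.
        self._rules.append(("fromTo", field, fr, to))

    def parse(self, text):
        self._matches = {}  # in this sketch, parse() overwrites old matches
        i = 0
        for rule in self._rules:
            kind = rule[0]
            if kind == "upTo":
                _, field, char = rule
                j = text.find(char, i)
                if j < 0:
                    return False
                self._matches[field] = text[i:j]
                i = j
            elif kind == "skip":
                _, char = rule
                if i >= len(text) or text[i] != char:
                    return False
                i += 1
            elif kind == "skipTo":
                _, char = rule
                j = text.find(char, i)
                if j < 0:
                    return False
                i = j
            else:  # fromTo
                _, field, fr, to = rule
                if i >= len(text) or text[i] != fr:
                    return False
                j = text.find(to, i + 1)
                if j < 0:
                    return False
                self._matches[field] = text[i:j + 1]
                i = j + 1
        return i == len(text)  # True only on full compliance

    def matches(self):
        return dict(self._matches)

    def clearMatches(self):
        self._matches = {}


p = MiniTokenparser()
p.upTo('FIELD', 'A')
p.skip('A')
p.fromTo('FIELD2', 'C', 'D')
print(p.parse("DDDACEEED"))  # -> True
print(p.matches())           # -> {'FIELD': 'DDD', 'FIELD2': 'CEEED'}
```

Note the final `i == len(text)` check: an input with trailing unmatched characters makes parse() return False, which is what "full compliance with the rules" means above.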

skip(char)
Parameters:char – character to skip

Skip the given character.

skipTo(char)
Parameters:char – character up to which input is skipped

Skip all characters until the occurrence of char (excluding char).
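As a plain-string illustration of this rule (not library code), skipTo('B') advances past the leading characters and stops at the first 'B' without consuming it:

```python
text = "xxxB123"
# Position of the first 'B'; parsing would resume here,
# with 'B' itself still unconsumed.
i = text.find("B")
print(text[i:])  # -> 'B123'
```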

upTo(field, char)
Parameters:
  • field – field’s name
  • char – character up to which input is captured

Capture all characters until the occurrence of char (excluding char) into field.
