nltk.tokenize.treebank module¶
Penn Treebank Tokenizer
The Treebank tokenizer uses regular expressions to tokenize text as in Penn Treebank. This implementation is a port of the tokenizer sed script written by Robert McIntyre and available at http://www.cis.upenn.edu/~treebank/tokenizer.sed.
- class nltk.tokenize.treebank.TreebankWordDetokenizer[source]¶
Bases:
TokenizerI
The Treebank detokenizer uses the reverse regex operations corresponding to the Treebank tokenizer’s regexes.
Note:
- Additional assumptions are made when undoing the padding of the [;@#$%&] punctuation symbols, assumptions that are not presupposed by the TreebankWordTokenizer.
- Additional regexes are used to reverse the parentheses tokenization, such as r'([\]\)\}\>])\s([:;,.])', which removes the extra right padding added to a closing parenthesis preceding [:;,.].
- It is not possible to restore the original whitespace, because there is no explicit record of where '\n', '\t' or '\s' was removed by the text.split() operation.
>>> from nltk.tokenize.treebank import TreebankWordTokenizer, TreebankWordDetokenizer
>>> s = '''Good muffins cost $3.88\nin New York. Please buy me\ntwo of them.\nThanks.'''
>>> d = TreebankWordDetokenizer()
>>> t = TreebankWordTokenizer()
>>> toks = t.tokenize(s)
>>> d.detokenize(toks)
'Good muffins cost $3.88 in New York. Please buy me two of them. Thanks.'
The MXPOST parentheses substitution can be undone using the convert_parentheses parameter:

>>> s = '''Good muffins cost $3.88\nin New (York). Please (buy) me\ntwo of them.\n(Thanks).'''
>>> expected_tokens = ['Good', 'muffins', 'cost', '$', '3.88', 'in',
...     'New', '-LRB-', 'York', '-RRB-', '.', 'Please', '-LRB-', 'buy',
...     '-RRB-', 'me', 'two', 'of', 'them.', '-LRB-', 'Thanks', '-RRB-', '.']
>>> expected_tokens == t.tokenize(s, convert_parentheses=True)
True
>>> expected_detoken = 'Good muffins cost $3.88 in New (York). Please (buy) me two of them. (Thanks).'
>>> expected_detoken == d.detokenize(t.tokenize(s, convert_parentheses=True), convert_parentheses=True)
True
During tokenization it is safe to add extra spaces, but during detokenization simply undoing the padding is not enough, so some rules are reversed asymmetrically:

- During tokenization, [!?] is padded on both sides; when detokenizing, only the left padding of [!?] needs to be removed. Thus (re.compile(r'\s([?!])'), r'\g<1>').
- During tokenization, [:,] is padded on both sides; when detokenizing, only a left shift is necessary, and the right padding after a comma/colon is kept if what follows is a non-digit (see the standalone illustration after the examples below). Thus (re.compile(r'\s([:,])\s([^\d])'), r'\1 \2').
>>> from nltk.tokenize.treebank import TreebankWordDetokenizer
>>> toks = ['hello', ',', 'i', 'ca', "n't", 'feel', 'my', 'feet', '!', 'Help', '!', '!']
>>> twd = TreebankWordDetokenizer()
>>> twd.detokenize(toks)
"hello, i can't feel my feet! Help!!"

>>> toks = ['hello', ',', 'i', "can't", 'feel', ';', 'my', 'feet', '!',
...     'Help', '!', '!', 'He', 'said', ':', 'Help', ',', 'help', '?', '!']
>>> twd.detokenize(toks)
"hello, i can't feel; my feet! Help!! He said: Help, help?!"
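To see the comma/colon rule in isolation, the substitution from the second bullet above can be run directly with re (an illustration only, not part of the class API): the left padding of a comma is removed, but nothing changes when a digit follows.

>>> import re
>>> re.sub(r'\s([:,])\s([^\d])', r'\1 \2', 'hello , world')
'hello, world'
>>> re.sub(r'\s([:,])\s([^\d])', r'\1 \2', '3 , 14')  # digit after the comma: unchanged
'3 , 14'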
- CONTRACTIONS2 = [re.compile('(?i)\\b(can)\\s(not)\\b', re.IGNORECASE), re.compile("(?i)\\b(d)\\s('ye)\\b", re.IGNORECASE), re.compile('(?i)\\b(gim)\\s(me)\\b', re.IGNORECASE), re.compile('(?i)\\b(gon)\\s(na)\\b', re.IGNORECASE), re.compile('(?i)\\b(got)\\s(ta)\\b', re.IGNORECASE), re.compile('(?i)\\b(lem)\\s(me)\\b', re.IGNORECASE), re.compile("(?i)\\b(more)\\s('n)\\b", re.IGNORECASE), re.compile('(?i)\\b(wan)\\s(na)(?=\\s)', re.IGNORECASE)]¶
- CONTRACTIONS3 = [re.compile("(?i) ('t)\\s(is)\\b", re.IGNORECASE), re.compile("(?i) ('t)\\s(was)\\b", re.IGNORECASE)]¶
- CONVERT_PARENTHESES = [(re.compile('-LRB-'), '('), (re.compile('-RRB-'), ')'), (re.compile('-LSB-'), '['), (re.compile('-RSB-'), ']'), (re.compile('-LCB-'), '{'), (re.compile('-RCB-'), '}')]¶
- DOUBLE_DASHES = (re.compile(' -- '), '--')¶
- ENDING_QUOTES = [(re.compile("([^' ])\\s('ll|'LL|'re|'RE|'ve|'VE|n't|N'T) "), '\\1\\2 '), (re.compile("([^' ])\\s('[sS]|'[mM]|'[dD]|') "), '\\1\\2 '), (re.compile("(\\S)\\s(\\'\\')"), '\\1\\2'), (re.compile("(\\'\\')\\s([.,:)\\]>};%])"), '\\1\\2'), (re.compile("''"), '"')]¶
- PARENS_BRACKETS = [(re.compile('([\\[\\(\\{\\<])\\s'), '\\g<1>'), (re.compile('\\s([\\]\\)\\}\\>])'), '\\g<1>'), (re.compile('([\\]\\)\\}\\>])\\s([:;,.])'), '\\1\\2')]¶
- PUNCTUATION = [(re.compile("([^'])\\s'\\s"), "\\1' "), (re.compile('\\s([?!])'), '\\g<1>'), (re.compile('([^\\.])\\s(\\.)([\\]\\)}>"\\\']*)\\s*$'), '\\1\\2\\3'), (re.compile('([#$])\\s'), '\\g<1>'), (re.compile('\\s([;%])'), '\\g<1>'), (re.compile('\\s\\.\\.\\.\\s'), '...'), (re.compile('\\s([:,])'), '\\1')]¶
- STARTING_QUOTES = [(re.compile('([ (\\[{<])\\s``'), '\\1``'), (re.compile('(``)\\s'), '\\1'), (re.compile('``'), '"')]¶
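These attributes are lists of (compiled pattern, replacement) pairs. As a rough sketch of how such a list is consumed (the real detokenize() applies its rule lists in a specific internal order), each pair can be applied with successive re.sub calls:

>>> import re
>>> pairs = [(re.compile(r'([\[\(\{\<])\s'), r'\g<1>'),  # drop the pad after an opening bracket
...     (re.compile(r'\s([\]\)\}\>])'), r'\g<1>')]  # drop the pad before a closing bracket
>>> text = '( York )'
>>> for regexp, replacement in pairs:
...     text = regexp.sub(replacement, text)
>>> text
'(York)'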
- detokenize(tokens: List[str], convert_parentheses: bool = False) → str [source]¶
Duck-typing the abstract tokenize().
- Parameters
tokens (List[str]) –
convert_parentheses (bool) –
- Return type
str
- tokenize(tokens: List[str], convert_parentheses: bool = False) → str [source]¶
Treebank detokenizer, created by undoing the regexes from the TreebankWordTokenizer.tokenize.
- Parameters
tokens (List[str]) – A list of strings, i.e. tokenized text.
convert_parentheses (bool, optional) – if True, replace PTB symbols with parentheses, e.g. -LRB- to (. Defaults to False.
- Returns
The detokenized string.
- Return type
str
- class nltk.tokenize.treebank.TreebankWordTokenizer[source]¶
Bases:
TokenizerI
The Treebank tokenizer uses regular expressions to tokenize text as in Penn Treebank.
This tokenizer performs the following steps:
- split standard contractions, e.g. don't -> do n't and they'll -> they 'll (illustrated in isolation after the example below)
- treat most punctuation characters as separate tokens
- split off commas and single quotes, when followed by whitespace
- separate periods that appear at the end of line
>>> from nltk.tokenize import TreebankWordTokenizer
>>> s = '''Good muffins cost $3.88\nin New York. Please buy me\ntwo of them.\nThanks.'''
>>> TreebankWordTokenizer().tokenize(s)
['Good', 'muffins', 'cost', '$', '3.88', 'in', 'New', 'York.', 'Please', 'buy', 'me', 'two', 'of', 'them.', 'Thanks', '.']
>>> s = "They'll save and invest more."
>>> TreebankWordTokenizer().tokenize(s)
['They', "'ll", 'save', 'and', 'invest', 'more', '.']
>>> s = "hi, my name can't hello,"
>>> TreebankWordTokenizer().tokenize(s)
['hi', ',', 'my', 'name', 'ca', "n't", 'hello', ',']
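As a standalone illustration of the contraction splitting (not part of the class API), the first CONTRACTIONS2 pattern listed below splits cannot into can not; the extra spaces in the substitution are harmless because the final token list comes from a whitespace split:

>>> import re
>>> regexp = re.compile(r'(?i)\b(can)(?#X)(not)\b')
>>> regexp.sub(r' \1 \2 ', 'I cannot do it').split()
['I', 'can', 'not', 'do', 'it']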
- CONTRACTIONS2 = [re.compile('(?i)\\b(can)(?#X)(not)\\b', re.IGNORECASE), re.compile("(?i)\\b(d)(?#X)('ye)\\b", re.IGNORECASE), re.compile('(?i)\\b(gim)(?#X)(me)\\b', re.IGNORECASE), re.compile('(?i)\\b(gon)(?#X)(na)\\b', re.IGNORECASE), re.compile('(?i)\\b(got)(?#X)(ta)\\b', re.IGNORECASE), re.compile('(?i)\\b(lem)(?#X)(me)\\b', re.IGNORECASE), re.compile("(?i)\\b(more)(?#X)('n)\\b", re.IGNORECASE), re.compile('(?i)\\b(wan)(?#X)(na)(?=\\s)', re.IGNORECASE)]¶
- CONTRACTIONS3 = [re.compile("(?i) ('t)(?#X)(is)\\b", re.IGNORECASE), re.compile("(?i) ('t)(?#X)(was)\\b", re.IGNORECASE)]¶
- CONVERT_PARENTHESES = [(re.compile('\\('), '-LRB-'), (re.compile('\\)'), '-RRB-'), (re.compile('\\['), '-LSB-'), (re.compile('\\]'), '-RSB-'), (re.compile('\\{'), '-LCB-'), (re.compile('\\}'), '-RCB-')]¶
- DOUBLE_DASHES = (re.compile('--'), ' -- ')¶
- ENDING_QUOTES = [(re.compile("''"), " '' "), (re.compile('"'), " '' "), (re.compile("([^' ])('[sS]|'[mM]|'[dD]|') "), '\\1 \\2 '), (re.compile("([^' ])('ll|'LL|'re|'RE|'ve|'VE|n't|N'T) "), '\\1 \\2 ')]¶
- PARENS_BRACKETS = (re.compile('[\\]\\[\\(\\)\\{\\}\\<\\>]'), ' \\g<0> ')¶
- PUNCTUATION = [(re.compile('([:,])([^\\d])'), ' \\1 \\2'), (re.compile('([:,])$'), ' \\1 '), (re.compile('\\.\\.\\.'), ' ... '), (re.compile('[;@#$%&]'), ' \\g<0> '), (re.compile('([^\\.])(\\.)([\\]\\)}>"\\\']*)\\s*$'), '\\1 \\2\\3 '), (re.compile('[?!]'), ' \\g<0> '), (re.compile("([^'])' "), "\\1 ' ")]¶
- STARTING_QUOTES = [(re.compile('^\\"'), '``'), (re.compile('(``)'), ' \\1 '), (re.compile('([ \\(\\[{<])(\\"|\\\'{2})'), '\\1 `` ')]¶
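For instance, the [;@#$%&] rule in PUNCTUATION pads each of those symbols with spaces. Run in isolation (illustration only), the doubled spaces it may produce are fine because tokenization ends with a whitespace split:

>>> import re
>>> re.sub(r'[;@#$%&]', r' \g<0> ', 'cost$3.88; cheap').split()
['cost', '$', '3.88', ';', 'cheap']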
- span_tokenize(text: str) → Iterator[Tuple[int, int]] [source]¶
Returns the spans of the tokens in text. Uses the post-hoc nltk.tokenize.util.align_tokens to return the offset spans (a standalone example follows this method's parameters).

>>> from nltk.tokenize import TreebankWordTokenizer
>>> s = '''Good muffins cost $3.88\nin New (York).  Please (buy) me\ntwo of them.\n(Thanks).'''
>>> expected = [(0, 4), (5, 12), (13, 17), (18, 19), (19, 23),
...     (24, 26), (27, 30), (31, 32), (32, 36), (36, 37), (37, 38),
...     (40, 46), (47, 48), (48, 51), (51, 52), (53, 55), (56, 59),
...     (60, 62), (63, 68), (69, 70), (70, 76), (76, 77), (77, 78)]
>>> list(TreebankWordTokenizer().span_tokenize(s)) == expected
True
>>> expected = ['Good', 'muffins', 'cost', '$', '3.88', 'in',
...     'New', '(', 'York', ')', '.', 'Please', '(', 'buy', ')',
...     'me', 'two', 'of', 'them.', '(', 'Thanks', ')', '.']
>>> [s[start:end] for start, end in TreebankWordTokenizer().span_tokenize(s)] == expected
True
- Parameters
text (str) – A string with a sentence or sentences.
- Yield
Tuple[int, int]
- Return type
Iterator[Tuple[int, int]]
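In isolation, align_tokens (from nltk.tokenize.util) scans the original string for each token in order and reports character offsets; a minimal example:

>>> from nltk.tokenize.util import align_tokens
>>> align_tokens(['Hello', ',', 'world'], 'Hello, world')
[(0, 5), (5, 6), (7, 12)]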
- tokenize(text: str, convert_parentheses: bool = False, return_str: bool = False) → List[str] [source]¶
Return a tokenized copy of text.
>>> from nltk.tokenize import TreebankWordTokenizer
>>> s = '''Good muffins cost $3.88 (roughly 3,36 euros)\nin New York. Please buy me\ntwo of them.\nThanks.'''
>>> TreebankWordTokenizer().tokenize(s)
['Good', 'muffins', 'cost', '$', '3.88', '(', 'roughly', '3,36', 'euros', ')', 'in', 'New', 'York.', 'Please', 'buy', 'me', 'two', 'of', 'them.', 'Thanks', '.']
>>> TreebankWordTokenizer().tokenize(s, convert_parentheses=True)
['Good', 'muffins', 'cost', '$', '3.88', '-LRB-', 'roughly', '3,36', 'euros', '-RRB-', 'in', 'New', 'York.', 'Please', 'buy', 'me', 'two', 'of', 'them.', 'Thanks', '.']
- Parameters
text (str) – A string with a sentence or sentences.
convert_parentheses (bool, optional) – If True, replace parentheses with their PTB symbols, e.g. ( to -LRB-. Defaults to False.
return_str (bool, optional) – If True, return the tokens as a single space-separated string. Defaults to False.
- Returns
List of tokens from text.
- Return type
List[str]
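A minimal illustration of return_str, assuming that return_str=True simply skips the final whitespace split and returns the padded string, so splitting the result recovers the token list:

>>> TreebankWordTokenizer().tokenize("Hello, world.", return_str=True).split()
['Hello', ',', 'world', '.']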