nltk.tokenize.destructive module
- class nltk.tokenize.destructive.MacIntyreContractions[source]
Bases: object
List of contractions adapted from Robert MacIntyre’s tokenizer.
- CONTRACTIONS2 = ['(?i)\\b(can)(?#X)(not)\\b', "(?i)\\b(d)(?#X)('ye)\\b", '(?i)\\b(gim)(?#X)(me)\\b', '(?i)\\b(gon)(?#X)(na)\\b', '(?i)\\b(got)(?#X)(ta)\\b', '(?i)\\b(lem)(?#X)(me)\\b', "(?i)\\b(more)(?#X)('n)\\b", '(?i)\\b(wan)(?#X)(na)(?=\\s)']
- CONTRACTIONS3 = ["(?i) ('t)(?#X)(is)\\b", "(?i) ('t)(?#X)(was)\\b"]
- CONTRACTIONS4 = ['(?i)\\b(whad)(dd)(ya)\\b', '(?i)\\b(wha)(t)(cha)\\b']
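Each entry is a raw regex string whose capture groups surround the split point. A minimal sketch of how a tokenizer might apply the CONTRACTIONS2 patterns, replacing each match with its space-separated capture groups (the substitution mirrors what NLTKWordTokenizer does, but this bare loop is an illustration, not the tokenizer's full pipeline):
>>> import re
>>> from nltk.tokenize.destructive import MacIntyreContractions
>>> patterns = [re.compile(p) for p in MacIntyreContractions().CONTRACTIONS2]
>>> text = "I cannot believe you wanna gimme that."
>>> for pattern in patterns:
...     text = pattern.sub(r" \1 \2 ", text)
...
>>> text.split()
['I', 'can', 'not', 'believe', 'you', 'wan', 'na', 'gim', 'me', 'that.']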
- class nltk.tokenize.destructive.NLTKWordTokenizer[source]
Bases: TokenizerI
The NLTK tokenizer that improves upon the TreebankWordTokenizer.
This is the tokenizer invoked by word_tokenize(). It assumes that the text has already been segmented into sentences, e.g. using sent_tokenize().
The tokenizer is “destructive” in that the regexes applied will munge the input string to a state beyond reconstruction. It is possible to apply TreebankWordDetokenizer.detokenize to the tokenized output of NLTKWordTokenizer.tokenize, but there is no guarantee of recovering the original string.
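For example, the contraction splits are among the irreversible rewrites; detokenizing afterwards only approximates the input (the outputs below illustrate current NLTK behavior and may vary across versions):
>>> from nltk.tokenize import NLTKWordTokenizer
>>> from nltk.tokenize.treebank import TreebankWordDetokenizer
>>> tokens = NLTKWordTokenizer().tokenize("I'm gonna buy them tomorrow.")
>>> tokens
['I', "'m", 'gon', 'na', 'buy', 'them', 'tomorrow', '.']
>>> TreebankWordDetokenizer().detokenize(tokens)
"I'm gon na buy them tomorrow."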
- CONTRACTIONS2 = [re.compile('(?i)\\b(can)(?#X)(not)\\b', re.IGNORECASE), re.compile("(?i)\\b(d)(?#X)('ye)\\b", re.IGNORECASE), re.compile('(?i)\\b(gim)(?#X)(me)\\b', re.IGNORECASE), re.compile('(?i)\\b(gon)(?#X)(na)\\b', re.IGNORECASE), re.compile('(?i)\\b(got)(?#X)(ta)\\b', re.IGNORECASE), re.compile('(?i)\\b(lem)(?#X)(me)\\b', re.IGNORECASE), re.compile("(?i)\\b(more)(?#X)('n)\\b", re.IGNORECASE), re.compile('(?i)\\b(wan)(?#X)(na)(?=\\s)', re.IGNORECASE)]
- CONTRACTIONS3 = [re.compile("(?i) ('t)(?#X)(is)\\b", re.IGNORECASE), re.compile("(?i) ('t)(?#X)(was)\\b", re.IGNORECASE)]
- CONVERT_PARENTHESES = [(re.compile('\\('), '-LRB-'), (re.compile('\\)'), '-RRB-'), (re.compile('\\['), '-LSB-'), (re.compile('\\]'), '-RSB-'), (re.compile('\\{'), '-LCB-'), (re.compile('\\}'), '-RCB-')]
- DOUBLE_DASHES = (re.compile('--'), ' -- ')
- ENDING_QUOTES = [(re.compile('([»”’])'), ' \\1 '), (re.compile("''"), " '' "), (re.compile('"'), " '' "), (re.compile("([^' ])('[sS]|'[mM]|'[dD]|') "), '\\1 \\2 '), (re.compile("([^' ])('ll|'LL|'re|'RE|'ve|'VE|n't|N'T) "), '\\1 \\2 ')]
- PARENS_BRACKETS = (re.compile('[\\]\\[\\(\\)\\{\\}\\<\\>]'), ' \\g<0> ')
- PUNCTUATION = [(re.compile('([^\\.])(\\.)([\\]\\)}>"\\\'»”’ ]*)\\s*$'), '\\1 \\2 \\3 '), (re.compile('([:,])([^\\d])'), ' \\1 \\2'), (re.compile('([:,])$'), ' \\1 '), (re.compile('\\.{2,}'), ' \\g<0> '), (re.compile('[;@#$%&]'), ' \\g<0> '), (re.compile('([^\\.])(\\.)([\\]\\)}>"\\\']*)\\s*$'), '\\1 \\2\\3 '), (re.compile('[?!]'), ' \\g<0> '), (re.compile("([^'])' "), "\\1 ' "), (re.compile('[*]'), ' \\g<0> ')]
- STARTING_QUOTES = [(re.compile('([«“‘„]|[`]+)'), ' \\1 '), (re.compile('^\\"'), '``'), (re.compile('(``)'), ' \\1 '), (re.compile('([ \\(\\[{<])(\\"|\\\'{2})'), '\\1 `` '), (re.compile("(?i)(\\')(?!re|ve|ll|m|t|s|d|n)(\\w)\\b", re.IGNORECASE), '\\1 \\2')]
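The attributes above are compiled patterns, or (pattern, replacement) pairs, that tokenize applies in sequence: each rule inserts spaces around the matched material, and the result is finally split on whitespace. A stripped-down sketch of that mechanism using a hypothetical subset of three rules (the real tokenizer applies many more, in a specific order):
>>> import re
>>> rules = [
...     (re.compile(r'--'), ' -- '),                      # DOUBLE_DASHES
...     (re.compile(r'[\]\[\(\)\{\}\<\>]'), r' \g<0> '),  # PARENS_BRACKETS
...     (re.compile(r'[?!]'), r' \g<0> '),                # one PUNCTUATION rule
... ]
>>> text = 'Wait--what (really)?!'
>>> for pattern, replacement in rules:
...     text = pattern.sub(replacement, text)
...
>>> text.split()
['Wait', '--', 'what', '(', 'really', ')', '?', '!']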
- span_tokenize(text: str) → Iterator[Tuple[int, int]] [source]
Returns the spans of the tokens in text. Uses the post-hoc nltk.tokenize.util.align_tokens to return the offset spans.
>>> from nltk.tokenize import NLTKWordTokenizer
>>> s = '''Good muffins cost $3.88\nin New (York).  Please (buy) me\ntwo of them.\n(Thanks).'''
>>> expected = [(0, 4), (5, 12), (13, 17), (18, 19), (19, 23),
... (24, 26), (27, 30), (31, 32), (32, 36), (36, 37), (37, 38),
... (40, 46), (47, 48), (48, 51), (51, 52), (53, 55), (56, 59),
... (60, 62), (63, 68), (69, 70), (70, 76), (76, 77), (77, 78)]
>>> list(NLTKWordTokenizer().span_tokenize(s)) == expected
True
>>> expected = ['Good', 'muffins', 'cost', '$', '3.88', 'in',
... 'New', '(', 'York', ')', '.', 'Please', '(', 'buy', ')',
... 'me', 'two', 'of', 'them.', '(', 'Thanks', ')', '.']
>>> [s[start:end] for start, end in NLTKWordTokenizer().span_tokenize(s)] == expected
True
- Parameters
text (str) – A string with a sentence or sentences.
- Yields
Tuple[int, int]
- Return type
Iterator[Tuple[int, int]]
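The alignment is post-hoc: the sentence is tokenized first, then each token is located in the raw string by a left-to-right search. A sketch calling align_tokens directly (a quote-free input is used here because the quote normalization performed by tokenize, e.g. " becoming `` or '', would otherwise prevent a literal match; span_tokenize handles that case itself):
>>> from nltk.tokenize import NLTKWordTokenizer
>>> from nltk.tokenize.util import align_tokens
>>> s = 'Good muffins cost $3.88.'
>>> tokens = NLTKWordTokenizer().tokenize(s)
>>> list(zip(tokens, align_tokens(tokens, s)))
[('Good', (0, 4)), ('muffins', (5, 12)), ('cost', (13, 17)), ('$', (18, 19)), ('3.88', (19, 23)), ('.', (23, 24))]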
- tokenize(text: str, convert_parentheses: bool = False, return_str: bool = False) → List[str] [source]
Return a tokenized copy of text.
>>> from nltk.tokenize import NLTKWordTokenizer
>>> s = '''Good muffins cost $3.88 (roughly 3,36 euros)\nin New York. Please buy me\ntwo of them.\nThanks.'''
>>> NLTKWordTokenizer().tokenize(s)
['Good', 'muffins', 'cost', '$', '3.88', '(', 'roughly', '3,36', 'euros', ')', 'in', 'New', 'York.', 'Please', 'buy', 'me', 'two', 'of', 'them.', 'Thanks', '.']
>>> NLTKWordTokenizer().tokenize(s, convert_parentheses=True)
['Good', 'muffins', 'cost', '$', '3.88', '-LRB-', 'roughly', '3,36', 'euros', '-RRB-', 'in', 'New', 'York.', 'Please', 'buy', 'me', 'two', 'of', 'them.', 'Thanks', '.']
- Parameters
text (str) – A string with a sentence or sentences.
convert_parentheses (bool, optional) – if True, replace parentheses with PTB symbols, e.g. ( becomes -LRB-. Defaults to False.
return_str (bool, optional) – if True, return the tokens as a space-separated string. Defaults to False.
- Returns
List of tokens from text.
- Return type
List[str]
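As noted in the class description, this is the tokenizer that word_tokenize() invokes on each sentence. A minimal sketch of that composition (assuming the punkt sentence model is installed; word_tokenize_sketch is a hypothetical helper, not NLTK's exact implementation):
>>> from nltk.tokenize import NLTKWordTokenizer, sent_tokenize
>>> def word_tokenize_sketch(text):
...     tokenizer = NLTKWordTokenizer()
...     return [tok for sent in sent_tokenize(text)
...             for tok in tokenizer.tokenize(sent)]
...
>>> word_tokenize_sketch('Good muffins cost $3.88. Please buy me two.')
['Good', 'muffins', 'cost', '$', '3.88', '.', 'Please', 'buy', 'me', 'two', '.']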