nltk.corpus.reader.util module
- class nltk.corpus.reader.util.ConcatenatedCorpusView[source]
Bases: AbstractLazySequence
A ‘view’ of a corpus file that joins together one or more StreamBackedCorpusViews. At most one file handle is left open at any time.
- class nltk.corpus.reader.util.PickleCorpusView[source]
Bases: StreamBackedCorpusView
A stream backed corpus view for corpus files that consist of sequences of serialized Python objects (serialized using pickle.dump). One use case for this class is to store the result of running feature detection on a corpus to disk. This can be useful when performing feature detection is expensive (so we don’t want to repeat it), but the corpus is too large to store in memory. The following example illustrates this technique:

>>> from nltk.corpus.reader.util import PickleCorpusView
>>> from nltk.util import LazyMap
>>> feature_corpus = LazyMap(detect_features, corpus)
>>> PickleCorpusView.write(feature_corpus, some_fileid)
>>> pcv = PickleCorpusView(some_fileid)
- BLOCK_SIZE = 100
- PROTOCOL = -1
- __init__(fileid, delete_on_gc=False)[source]
Create a new corpus view that reads the pickle corpus fileid.
- Parameters
delete_on_gc – If true, then fileid will be deleted whenever this object gets garbage-collected.
- classmethod cache_to_tempfile(sequence, delete_on_gc=True)[source]
Write the given sequence to a temporary file as a pickle corpus, and then return a PickleCorpusView for that temporary corpus file.
- Parameters
delete_on_gc – If true, then the temporary file will be deleted whenever this object gets garbage-collected.
- class nltk.corpus.reader.util.StreamBackedCorpusView[source]
Bases: AbstractLazySequence
A ‘view’ of a corpus file, which acts like a sequence of tokens: it can be accessed by index, iterated over, etc. However, the tokens are only constructed as-needed – the entire corpus is never stored in memory at once.
The constructor to StreamBackedCorpusView takes two arguments: a corpus fileid (specified as a string or as a PathPointer); and a block reader. A “block reader” is a function that reads zero or more tokens from a stream, and returns them as a list. A very simple example of a block reader is:

>>> def simple_block_reader(stream):
...     return stream.readline().split()
This simple block reader reads a single line at a time, and returns a single token (consisting of a string) for each whitespace-separated substring on the line.
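As a quick illustration of how such a block reader plugs into the view (the file path below is hypothetical, written just for this sketch), the resulting view can be indexed and measured like an ordinary sequence:

>>> from nltk.corpus.reader.util import StreamBackedCorpusView
>>> with open("/tmp/example.txt", "w") as f:          # hypothetical sample file
...     _ = f.write("the quick brown fox\njumps over the lazy dog\n")
>>> view = StreamBackedCorpusView("/tmp/example.txt", simple_block_reader)
>>> view[0], view[5]
('the', 'over')
>>> len(view)
9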
When deciding how to define the block reader for a given corpus, careful consideration should be given to the size of blocks handled by the block reader. Smaller block sizes will increase the memory requirements of the corpus view’s internal data structures (by 2 integers per block). On the other hand, larger block sizes may decrease performance for random access to the corpus. (But note that larger block sizes will not decrease performance for iteration.)
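For instance, a block reader along the following lines (the choice of 20 lines per block is arbitrary, purely for illustration) produces fewer, larger blocks than simple_block_reader, at the cost of coarser random access:

>>> def multi_line_block_reader(stream):
...     # Read up to 20 lines per call, returning all of their
...     # whitespace-separated words as a single block of tokens.
...     tokens = []
...     for _ in range(20):
...         line = stream.readline()
...         if not line:          # end of file
...             break
...         tokens.extend(line.split())
...     return tokens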
Internally, CorpusView maintains a partial mapping from token index to file position, with one entry per block. When a token with a given index i is requested, the CorpusView constructs it as follows:
- First, it searches the toknum/filepos mapping for the token index closest to (but less than or equal to) i.
- Then, starting at the file position corresponding to that index, it reads one block at a time using the block reader until it reaches the requested token.
The toknum/filepos mapping is created lazily: it is initially empty, but every time a new block is read, the block’s initial token is added to the mapping. (Thus, the toknum/filepos map has one entry per block.)
In order to increase efficiency for random access patterns that have high degrees of locality, the corpus view may cache one or more blocks.
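The first lookup step amounts to a binary search over the block index. Here is a minimal stand-alone sketch of that step (the names locate_block, toknum, and filepos are illustrative, not the actual CorpusView internals):

>>> from bisect import bisect_right
>>> def locate_block(toknum, filepos, i):
...     # Find the last block whose first token index is <= i; reading
...     # resumes from that block's file position.
...     block = bisect_right(toknum, i) - 1
...     return toknum[block], filepos[block]
>>> locate_block([0, 4, 9], [0, 55, 130], 6)   # token 6 lives in the second block
(4, 55)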
- Note
Each CorpusView object internally maintains an open file object for its underlying corpus file. This file should be automatically closed when the CorpusView is garbage collected, but if you wish to close it manually, use the close() method. If you access a CorpusView’s items after it has been closed, the file object will be automatically re-opened.
- Warning
If the contents of the file are modified during the lifetime of the CorpusView, then the CorpusView’s behavior is undefined.
- Warning
If a unicode encoding is specified when constructing a CorpusView, then the block reader may only call stream.seek() with offsets that have been returned by stream.tell(); in particular, calling stream.seek() with relative offsets, or with offsets based on string lengths, may lead to incorrect behavior.
- Variables
_block_reader – The function used to read a single block from the underlying file stream.
_toknum – A list containing the token index of each block that has been processed. In particular, _toknum[i] is the token index of the first token in block i. Together with _filepos, this forms a partial mapping between token indices and file positions.
_filepos – A list containing the file position of each block that has been processed. In particular, _filepos[i] is the file position of the first character in block i. Together with _toknum, this forms a partial mapping between token indices and file positions.
_stream – The stream used to access the underlying corpus file.
_len – The total number of tokens in the corpus, if known; or None, if the number of tokens is not yet known.
_eofpos – The character position of the last character in the file. This is calculated when the corpus view is initialized, and is used to decide when the end of file has been reached.
_cache – A cache of the most recently read block. It is encoded as a tuple (start_toknum, end_toknum, tokens), where start_toknum is the token index of the first token in the block; end_toknum is the token index of the first token not in the block; and tokens is a list of the tokens in the block.
- __init__(fileid, block_reader=None, startpos=0, encoding='utf8')[source]
Create a new corpus view, based on the file fileid, and read with block_reader. See the class documentation for more information.
- Parameters
fileid – The path to the file that is read by this corpus view. fileid can either be a string or a PathPointer.
startpos – The file position at which the view will start reading. This can be used to skip over preface sections.
encoding – The unicode encoding that should be used to read the file’s contents. If no encoding is specified, then the file’s contents will be read as a non-unicode string (i.e., a str).
- close()[source]
Close the file stream associated with this corpus view. This can be useful if you are worried about running out of file handles (although the stream should automatically be closed upon garbage collection of the corpus view). If the corpus view is accessed after it is closed, it will be automatically re-opened.
- property fileid
The fileid of the file that is accessed by this view.
- Type
str or PathPointer
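Continuing the hypothetical session from the class description above, closing the view releases its file handle, and a later access transparently re-opens it:

>>> view.close()           # release the file handle
>>> view[-1]               # accessing the view re-opens the file automatically
'dog'
>>> view.fileid
'/tmp/example.txt'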
- nltk.corpus.reader.util.concat(docs)[source]
Concatenate together the contents of multiple documents from a single corpus, using an appropriate concatenation function. This utility function is used by corpus readers when the user requests more than one document at a time.
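As a rough illustration with plain built-in types (corpus readers normally pass corpus views or element trees instead): in the current implementation, plain strings are joined and plain lists are appended end to end:

>>> from nltk.corpus.reader.util import concat
>>> concat(["first document. ", "second document."])
'first document. second document.'
>>> concat([["w1", "w2"], ["w3"]])
['w1', 'w2', 'w3']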
- nltk.corpus.reader.util.read_regexp_block(stream, start_re, end_re=None)[source]
Read a sequence of tokens from a stream, where tokens begin with lines that match start_re. If end_re is specified, then tokens end with lines that match end_re; otherwise, tokens end whenever the next line matching start_re or EOF is found.
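For example, given a stream whose records each begin with a ‘Chapter’ heading (the sample text below is invented for illustration), each call returns the next record as a single string:

>>> from io import StringIO
>>> from nltk.corpus.reader.util import read_regexp_block
>>> stream = StringIO("Chapter 1\nfirst line\nChapter 2\nsecond line\n")
>>> read_regexp_block(stream, r"Chapter")
['Chapter 1\nfirst line\n']
>>> read_regexp_block(stream, r"Chapter")
['Chapter 2\nsecond line\n']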
- nltk.corpus.reader.util.read_sexpr_block(stream, block_size=16384, comment_char=None)[source]
Read a sequence of s-expressions from the stream, and leave the stream’s file position at the end of the last complete s-expression read. This function will always return at least one s-expression, unless there are no more s-expressions in the file.
If the file ends in the middle of an s-expression, then that incomplete s-expression is returned when the end of the file is reached.
- Parameters
block_size – The default block size for reading. If an s-expression is longer than one block, then more than one block will be read.
comment_char – A character that marks comments. Any lines that begin with this character will be stripped out. (If spaces or tabs precede the comment character, then the line will not be stripped.)
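For example (the sample input below is invented for illustration), comment lines marked with ‘;’ are stripped before the s-expressions are split apart:

>>> from io import StringIO
>>> from nltk.corpus.reader.util import read_sexpr_block
>>> stream = StringIO("(a (b c))\n; a comment line\n(d e) atom\n")
>>> read_sexpr_block(stream, comment_char=";")
['(a (b c))', '(d e)', 'atom']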