nltk.sentiment.util module¶
Utility methods for Sentiment Analysis.
- nltk.sentiment.util.demo_liu_hu_lexicon(sentence, plot=False)[source]¶
Basic example of sentiment classification using the Liu and Hu opinion lexicon. This function simply counts the number of positive, negative and neutral words in the sentence and classifies it according to which polarity is more represented. Words that do not appear in the lexicon are considered neutral.
- Parameters
sentence – a sentence whose polarity has to be classified.
plot – if True, plot a visual representation of the sentence polarity.
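A minimal usage sketch (the sentence is an arbitrary example; the predicted polarity is printed to standard output):
>>> # assumes the opinion_lexicon corpus is available, e.g. nltk.download('opinion_lexicon')
>>> from nltk.sentiment.util import demo_liu_hu_lexicon
>>> demo_liu_hu_lexicon('This movie was actually neither that funny, nor super witty.')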
- nltk.sentiment.util.demo_movie_reviews(trainer, n_instances=None, output=None)[source]¶
Train a classifier on all instances of the Movie Reviews dataset. The corpus has been preprocessed using the default sentence tokenizer and WordPunctTokenizer. Features are composed of:
most frequent unigrams
- Parameters
trainer – train method of a classifier.
n_instances – the number of total reviews that have to be used for training and testing. Reviews will be equally split between positive and negative.
output – the output file where results have to be reported.
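For example, the train method of NaiveBayesClassifier can be passed as the trainer (a sketch; n_instances is kept small here only to shorten the run):
>>> # assumes the movie_reviews corpus is available, e.g. nltk.download('movie_reviews')
>>> from nltk.classify import NaiveBayesClassifier
>>> from nltk.sentiment.util import demo_movie_reviews
>>> demo_movie_reviews(NaiveBayesClassifier.train, n_instances=100)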
- nltk.sentiment.util.demo_sent_subjectivity(text)[source]¶
Classify a single sentence as subjective or objective using a stored SentimentAnalyzer.
- Parameters
text – a sentence whose subjectivity has to be classified.
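A usage sketch (it relies on a stored SentimentAnalyzer, e.g. the sa_subjectivity.pickle produced by demo_subjectivity with save_analyzer=True; the sentence is an arbitrary example):
>>> # assumes a SentimentAnalyzer pickle is available in the working directory
>>> from nltk.sentiment.util import demo_sent_subjectivity
>>> demo_sent_subjectivity('The movie was full of colourful, breathtaking scenes.')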
- nltk.sentiment.util.demo_subjectivity(trainer, save_analyzer=False, n_instances=None, output=None)[source]¶
Train and test a classifier on instances of the Subjectivity Dataset by Pang and Lee. The dataset is made of 5000 subjective and 5000 objective sentences. All tokens (words and punctuation marks) are separated by whitespace, so we use the basic WhitespaceTokenizer to parse the data.
- Parameters
trainer – train method of a classifier.
save_analyzer – if True, store the SentimentAnalyzer in a pickle file.
n_instances – the number of total sentences that have to be used for training and testing. Sentences will be equally split between positive and negative.
output – the output file where results have to be reported.
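A usage sketch with NaiveBayesClassifier as the trainer (with save_analyzer=True the trained SentimentAnalyzer is pickled for later reuse, e.g. by demo_sent_subjectivity):
>>> # assumes the subjectivity corpus is available, e.g. nltk.download('subjectivity')
>>> from nltk.classify import NaiveBayesClassifier
>>> from nltk.sentiment.util import demo_subjectivity
>>> demo_subjectivity(NaiveBayesClassifier.train, save_analyzer=True, n_instances=2000)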
- nltk.sentiment.util.demo_tweets(trainer, n_instances=None, output=None)[source]¶
Train and test a classifier on 10000 tweets, tokenized using TweetTokenizer. Features are composed of:
1000 most frequent unigrams
100 top bigrams (using BigramAssocMeasures.pmi)
- Parameters
trainer – train method of a classifier.
n_instances – the number of total tweets that have to be used for training and testing. Tweets will be equally split between positive and negative.
output – the output file where results have to be reported.
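A usage sketch (n_instances is reduced here only to keep the demo fast):
>>> # assumes the twitter_samples corpus is available, e.g. nltk.download('twitter_samples')
>>> from nltk.classify import NaiveBayesClassifier
>>> from nltk.sentiment.util import demo_tweets
>>> demo_tweets(NaiveBayesClassifier.train, n_instances=1000)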
- nltk.sentiment.util.demo_vader_instance(text)[source]¶
Output polarity scores for a text using the VADER approach.
- Parameters
text – a text whose polarity has to be evaluated.
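A usage sketch (the text is an arbitrary example; the neg/neu/pos/compound scores are printed):
>>> # assumes the vader_lexicon resource is available, e.g. nltk.download('vader_lexicon')
>>> from nltk.sentiment.util import demo_vader_instance
>>> demo_vader_instance('VADER is smart, handsome, and funny!')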
- nltk.sentiment.util.demo_vader_tweets(n_instances=None, output=None)[source]¶
Classify 10000 positive and negative tweets using the VADER approach.
- Parameters
n_instances – the number of total tweets that have to be classified.
output – the output file where results have to be reported.
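A usage sketch (n_instances limits the run to a subset of the tweets):
>>> # assumes the twitter_samples and vader_lexicon resources have been downloaded
>>> from nltk.sentiment.util import demo_vader_tweets
>>> demo_vader_tweets(n_instances=1000)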
- nltk.sentiment.util.extract_bigram_feats(document, bigrams)[source]¶
Populate a dictionary of bigram features, reflecting the presence/absence in the document of each of the tokens in bigrams. This extractor function only considers contiguous bigrams obtained by nltk.bigrams.
- Parameters
document – a list of words/tokens.
bigrams – a list of bigrams whose presence/absence has to be checked in document.
- Returns
a dictionary of bigram features {bigram : boolean}.
>>> bigrams = [('global', 'warming'), ('police', 'prevented'), ('love', 'you')]
>>> document = 'ice is melting due to global warming'.split()
>>> sorted(extract_bigram_feats(document, bigrams).items())
[('contains(global - warming)', True), ('contains(love - you)', False), ('contains(police - prevented)', False)]
- nltk.sentiment.util.extract_unigram_feats(document, unigrams, handle_negation=False)[source]¶
Populate a dictionary of unigram features, reflecting the presence/absence in the document of each of the tokens in unigrams.
- Parameters
document – a list of words/tokens.
unigrams – a list of words/tokens whose presence/absence has to be checked in document.
handle_negation – if True, apply the mark_negation method to document before checking for unigram presence/absence.
- Returns
a dictionary of unigram features {unigram : boolean}.
>>> words = ['ice', 'police', 'riot']
>>> document = 'ice is melting due to global warming'.split()
>>> sorted(extract_unigram_feats(document, words).items())
[('contains(ice)', True), ('contains(police)', False), ('contains(riot)', False)]
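Both extractor functions are typically registered on a SentimentAnalyzer, which forwards the extra keyword arguments at feature-extraction time (a sketch; the unigram and bigram lists are arbitrary examples):
>>> from nltk.sentiment import SentimentAnalyzer
>>> from nltk.sentiment.util import extract_unigram_feats, extract_bigram_feats
>>> sentim_analyzer = SentimentAnalyzer()
>>> sentim_analyzer.add_feat_extractor(extract_unigram_feats, unigrams=['ice', 'police', 'riot'])
>>> sentim_analyzer.add_feat_extractor(extract_bigram_feats, bigrams=[('global', 'warming')])
>>> feats = sentim_analyzer.extract_features('ice is melting due to global warming'.split())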
- nltk.sentiment.util.json2csv_preprocess(json_file, outfile, fields, encoding='utf8', errors='replace', gzip_compress=False, skip_retweets=True, skip_tongue_tweets=True, skip_ambiguous_tweets=True, strip_off_emoticons=True, remove_duplicates=True, limit=None)[source]¶
Convert a json file to a csv file, preprocessing each row to obtain a dataset suitable for tweet Sentiment Analysis.
- Parameters
json_file – the original json file containing tweets.
outfile – the output csv filename.
fields – a list of fields that will be extracted from the json file and kept in the output csv file.
encoding – the encoding of the files.
errors – the error handling strategy for the output writer.
gzip_compress – if True, create a compressed GZIP file.
skip_retweets – if True, remove retweets.
skip_tongue_tweets – if True, remove tweets containing “:P” and “:-P” emoticons.
skip_ambiguous_tweets – if True, remove tweets containing both happy and sad emoticons.
strip_off_emoticons – if True, strip off emoticons from all tweets.
remove_duplicates – if True, remove tweets appearing more than once.
limit – an integer setting the number of tweets to convert; once the limit is reached the conversion stops. It can be useful for creating subsets of the original tweet data.
- nltk.sentiment.util.mark_negation(document, double_neg_flip=False, shallow=False)[source]¶
Append _NEG suffix to words that appear in the scope between a negation and a punctuation mark.
- Parameters
document – a list of words/tokens, or a tuple (words, label).
shallow – if True, the method will modify the original document in place.
double_neg_flip – if True, double negation is considered affirmation (we activate/deactivate negation scope every time we find a negation).
- Returns
if shallow is True, the method modifies the original document and returns it. If shallow is False, the method returns a modified copy of the document, leaving the original unmodified.
>>> sent = "I didn't like this movie . It was bad .".split() >>> mark_negation(sent) ['I', "didn't", 'like_NEG', 'this_NEG', 'movie_NEG', '.', 'It', 'was', 'bad', '.']
- nltk.sentiment.util.output_markdown(filename, **kwargs)[source]¶
Write the output of an analysis to a file.
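A usage sketch (the filename and the keyword arguments below are arbitrary examples; each keyword argument is written to the file as a Markdown field):
>>> from nltk.sentiment.util import output_markdown
>>> output_markdown('results.md', Dataset='movie_reviews', Classifier='NaiveBayes', Tokenizer='WordPunctTokenizer', Accuracy=0.75)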
- nltk.sentiment.util.parse_tweets_set(filename, label, word_tokenizer=None, sent_tokenizer=None, skip_header=True)[source]¶
Parse a csv file containing tweets and return the data as a list of (text, label) tuples.
- Parameters
filename – the input csv filename.
label – the label to be appended to each tweet contained in the csv file.
word_tokenizer – the tokenizer instance that will be used to tokenize each sentence into tokens (e.g. WordPunctTokenizer() or BlanklineTokenizer()). If no word_tokenizer is specified, tweets will not be tokenized.
sent_tokenizer – the tokenizer that will be used to split each tweet into sentences.
skip_header – if True, skip the first line of the csv file (which usually contains headers).
- Returns
a list of (text, label) tuples.
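A usage sketch ('positive_tweets.csv' is a hypothetical file, e.g. one produced by json2csv_preprocess; TweetTokenizer is used here so that each tweet is split into tokens):
>>> from nltk.tokenize import TweetTokenizer
>>> from nltk.sentiment.util import parse_tweets_set
>>> # 'positive_tweets.csv' is a hypothetical preprocessed csv of positive tweets
>>> tweets = parse_tweets_set('positive_tweets.csv', label='pos', word_tokenizer=TweetTokenizer())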
- nltk.sentiment.util.split_train_test(all_instances, n=None)[source]¶
Randomly split n instances of the dataset into train and test sets.
- Parameters
all_instances – a list of instances (e.g. documents) that will be split.
n – the number of instances to consider (in case we want to use only a subset).
- Returns
two lists of instances. Train set is 8/10 of the total and test set is 2/10 of the total.
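A small self-contained sketch (the toy document list is an arbitrary example; with 100 instances the documented 8/10 and 2/10 proportions give 80 training and 20 testing instances):
>>> from nltk.sentiment.util import split_train_test
>>> docs = [(['good', 'movie'], 'pos'), (['bad', 'movie'], 'neg')] * 50
>>> train_docs, test_docs = split_train_test(docs)
>>> len(train_docs), len(test_docs)
(80, 20)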