nltk.metrics.scores module¶
- nltk.metrics.scores.accuracy(reference, test)[source]¶
Given a list of reference values and a corresponding list of test values, return the fraction of corresponding values that are equal. In particular, return the fraction of indices 0 <= i < len(test) such that test[i] == reference[i].
- Parameters
reference (list) – An ordered list of reference values.
test (list) – A list of values to compare against the corresponding reference values.
- Raises
ValueError – If reference and test do not have the same length.
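A minimal usage sketch; the tag lists below are made up for illustration:

```python
from nltk.metrics.scores import accuracy

reference = ["NN", "VB", "DT", "NN"]
test = ["NN", "VB", "DT", "JJ"]

# 3 of the 4 corresponding positions match.
print(accuracy(reference, test))  # 0.75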
- nltk.metrics.scores.approxrand(a, b, **kwargs)[source]¶
Returns an approximate significance level between two lists of independently generated test values.
Approximate randomization calculates significance by randomly drawing from a sample of the possible permutations. In the limit of the number of possible permutations, the significance level is exact. The approximate significance level is the sample proportion of shuffles in which the statistic of the permuted lists varies from the actual statistic of the unpermuted argument lists.
- Returns
a tuple containing an approximate significance level, the count of the number of times the pseudo-statistic varied from the actual statistic, and the number of shuffles
- Return type
tuple
- Parameters
a (list) – a list of test values
b (list) – another list of independently generated test values
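A sketch with the default keyword arguments; the score lists are invented, and the exact output varies from run to run because the shuffles are random:

```python
import random

from nltk.metrics.scores import approxrand

# Seeding the standard-library RNG should make the run repeatable,
# assuming the implementation shuffles with random.shuffle.
random.seed(0)

# Two illustrative samples of independently generated test values.
a = [0.52, 0.61, 0.58, 0.49, 0.55]
b = [0.71, 0.68, 0.74, 0.66, 0.70]

significance, count, shuffles = approxrand(a, b)
print(significance, count, shuffles)
```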
- nltk.metrics.scores.f_measure(reference, test, alpha=0.5)[source]¶
Given a set of reference values and a set of test values, return the f-measure of the test values when compared against the reference values. The f-measure is the harmonic mean of the precision and recall, weighted by alpha. In particular, given the precision p and recall r defined by:
p = card(reference intersection test) / card(test)
r = card(reference intersection test) / card(reference)
the f-measure is:
1 / (alpha/p + (1-alpha)/r)
If either reference or test is empty, then f_measure returns None.
- Parameters
reference (set) – A set of reference values.
test (set) – A set of values to compare against the reference set.
- Return type
float or None
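A minimal usage sketch; the sets are illustrative:

```python
from nltk.metrics.scores import f_measure

reference = {"cat", "dog", "fish", "bird"}
test = {"cat", "dog", "mouse"}

# p = 2/3 and r = 2/4, so with the default alpha=0.5 the weighted
# harmonic mean is 1 / (0.5/(2/3) + 0.5/(1/2)) = 1/1.75.
print(f_measure(reference, test))             # 0.5714...
print(f_measure(reference, test, alpha=0.8))  # weight precision more heavily: 0.625
```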
- nltk.metrics.scores.log_likelihood(reference, test)[source]¶
Given a list of reference values and a corresponding list of test probability distributions, return the average log likelihood of the reference values, given the probability distributions.
- Parameters
reference (list) – A list of reference values
test (list(ProbDistI)) – A list of probability distributions over values to compare against the corresponding reference values.
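A sketch using DictionaryProbDist from nltk.probability as the distribution type; any ProbDistI implementation should work, and the distributions below are invented:

```python
from nltk.metrics.scores import log_likelihood
from nltk.probability import DictionaryProbDist

# One reference value per position, and one distribution over values per position.
reference = ["a", "b"]
test = [
    DictionaryProbDist({"a": 0.8, "b": 0.2}),
    DictionaryProbDist({"a": 0.4, "b": 0.6}),
]

# Average log probability that each distribution assigns to its reference value.
print(log_likelihood(reference, test))
```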
- nltk.metrics.scores.precision(reference, test)[source]¶
Given a set of reference values and a set of test values, return the fraction of test values that appear in the reference set. In particular, return card(reference intersection test) / card(test). If test is empty, then return None.
- Parameters
reference (set) – A set of reference values.
test (set) – A set of values to compare against the reference set.
- Return type
float or None
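A short example with illustrative sets:

```python
from nltk.metrics.scores import precision

reference = {"cat", "dog", "fish", "bird"}
test = {"cat", "dog", "mouse"}

# 2 of the 3 test values appear in the reference set.
print(precision(reference, test))   # 0.666...
print(precision(reference, set()))  # None: empty test set
```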
- nltk.metrics.scores.recall(reference, test)[source]¶
Given a set of reference values and a set of test values, return the fraction of reference values that appear in the test set. In particular, return card(reference intersection test) / card(reference). If reference is empty, then return None.
- Parameters
reference (set) – A set of reference values.
test (set) – A set of values to compare against the reference set.
- Return type
float or None
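A short example with illustrative sets:

```python
from nltk.metrics.scores import recall

reference = {"cat", "dog", "fish", "bird"}
test = {"cat", "dog", "mouse"}

# 2 of the 4 reference values appear in the test set.
print(recall(reference, test))  # 0.5
print(recall(set(), test))      # None: empty reference set
```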