mxnet.metric.Perplexity
class mxnet.metric.Perplexity(ignore_label, axis=-1, name='perplexity', output_names=None, label_names=None)

Computes perplexity.
Perplexity measures how well a probability distribution or model predicts a sample. A low perplexity indicates the model is good at predicting the sample.
The perplexity of a model q is defined as
\[b^{-\frac{1}{N} \sum_{i=1}^N \log_b q(x_i)} = \exp \big(-\frac{1}{N} \sum_{i=1}^N \log q(x_i)\big)\]

where we let \(b = e\).
\(q(x_i)\) is the model's predicted probability of the ground-truth label for sample \(x_i\).
For example, suppose we have three samples \(x_1, x_2, x_3\) with labels \([0, 1, 1]\), and our model predicts \(q(x_1) = p(y_1 = 0 \mid x_1) = 0.3\), \(q(x_2) = 1.0\), and \(q(x_3) = 0.6\). The perplexity of model q is then \(\exp\big(-(\log 0.3 + \log 1.0 + \log 0.6) / 3\big) = 1.77109762852\).
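As a quick sanity check, this value can be reproduced with plain Python (a minimal sketch using only the standard-library math module):

>>> import math
>>> q = [0.3, 1.0, 0.6]  # predicted probability of the true label for each sample
>>> math.exp(-sum(math.log(p) for p in q) / len(q))
1.7710976285155853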
Parameters

- ignore_label (int or None) – Index of the invalid label to ignore when counting. Commonly set to -1. If set to None, all entries are included.
- axis (int, default -1) – The axis of the prediction over which softmax was computed. By default, the last axis.
- name (str) – Name of this metric instance for display.
- output_names (list of str, or None) – Names of the predictions that should be used when updating with update_dict. By default, all predictions are included.
- label_names (list of str, or None) – Names of the labels that should be used when updating with update_dict. By default, all labels are included.
Examples
>>> predicts = [mx.nd.array([[0.3, 0.7], [0, 1.], [0.4, 0.6]])]
>>> labels = [mx.nd.array([0, 1, 1])]
>>> perp = mx.metric.Perplexity(ignore_label=None)
>>> perp.update(labels, predicts)
>>> print(perp.get())
('Perplexity', 1.7710976285155853)
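ignore_label is useful when labels contain padding. A minimal sketch with the same inputs: with ignore_label=0 the first sample (label 0) is excluded, so only \(q(x_2) = 1.0\) and \(q(x_3) = 0.6\) contribute and the expected value is \(\exp\big(-(\log 1.0 + \log 0.6) / 2\big) \approx 1.2910\):

>>> predicts = [mx.nd.array([[0.3, 0.7], [0, 1.], [0.4, 0.6]])]
>>> labels = [mx.nd.array([0, 1, 1])]
>>> perp = mx.metric.Perplexity(ignore_label=0)  # entries whose label is 0 are skipped
>>> perp.update(labels, predicts)
>>> print(perp.get())  # expect roughly ('Perplexity', 1.2910)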
__init__(ignore_label, axis=-1, name='perplexity', output_names=None, label_names=None)

Initialize self. See help(type(self)) for accurate signature.
Methods

__init__(ignore_label[, axis, name, …]) – Initialize self.
get() – Returns the current evaluation result.
get_config() – Saves configurations of the metric.
get_global() – Returns the current global evaluation result.
get_global_name_value() – Returns zipped name and value pairs for global results.
get_name_value() – Returns zipped name and value pairs.
reset() – Resets the internal evaluation result to initial state.
reset_local() – Resets the local portion of the internal evaluation results to initial state.
update(labels, preds) – Updates the internal evaluation result.
update_dict(label, pred) – Updates the internal evaluation result with named labels and predictions.
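For illustration, update_dict pairs predictions and labels by name, which is how output_names and label_names take effect. A minimal sketch, assuming the same data as above (the names 'softmax_output' and 'softmax_label' are arbitrary illustrative choices, not fixed by the API; note the label dict is the first argument):

>>> perp = mx.metric.Perplexity(ignore_label=None,
...                             output_names=['softmax_output'],
...                             label_names=['softmax_label'])
>>> perp.update_dict({'softmax_label': mx.nd.array([0, 1, 1])},
...                  {'softmax_output': mx.nd.array([[0.3, 0.7], [0, 1.], [0.4, 0.6]])})
>>> print(perp.get())  # matches the update() example above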