"description":"Computes the Brier's score, defined as the average (over test examples) of sumx(t(x)-p(x))^2, where x is a class, t(x) is 1 for the correct class and 0 for the others, and p(x) is the probability that the classifier assigned to the class x."
}
},
...
...
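The Brier-score description in the hunk above spells out the formula, so a minimal pure-Python sketch may help; the function name and the `probs`/`actual` argument convention are illustrative assumptions, not part of the module's API:

```python
def brier_score(probs, actual):
    """Average over test examples of sum_x (t(x) - p(x))^2, where t(x) is 1
    for the correct class and 0 for the others (illustrative sketch)."""
    total = 0.0
    for p, correct in zip(probs, actual):
        total += sum((1.0 - px) ** 2 if x == correct else px ** 2
                     for x, px in enumerate(p))
    return total / len(probs)

# Two test examples, three classes; correct classes are 0 and 2.
print(brier_score([[0.7, 0.2, 0.1], [0.1, 0.6, 0.3]], [0, 2]))  # 0.5
```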
@@ -878,7 +878,7 @@
"wsdl":"",
"interactive":false,
"has_progress_bar":false,
"order":1,
"order":3,
"description":"Computes classification accuracy, i.e. percentage of matches between predicted and actual class. The function returns a list of classification accuracies of all classifiers tested. If reportSE is set to true, the list will contain tuples with accuracies and standard errors."
}
},
...
...
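As a hedged illustration of the accuracy description above (the function name, the argument convention, and the sqrt(CA*(1-CA)/n) standard-error estimate are assumptions of mine, not taken from this file):

```python
from math import sqrt

def classification_accuracy(predicted, actual, report_se=False):
    """Share of examples whose predicted class equals the actual class;
    with report_se, also a standard error for that proportion
    (assumed here to be the usual sqrt(CA*(1-CA)/n) estimate)."""
    n = len(actual)
    ca = sum(p == a for p, a in zip(predicted, actual)) / n
    if report_se:
        return ca, sqrt(ca * (1.0 - ca) / n)
    return ca

print(classification_accuracy([0, 1, 1, 0], [0, 1, 0, 0], report_se=True))  # (0.75, ~0.217)
```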
@@ -952,7 +952,7 @@
"wsdl":"",
"interactive":false,
"has_progress_bar":false,
"order":1,
"order":4,
"description":""
}
},
...
...
@@ -1044,7 +1044,7 @@
"wsdl":"",
"interactive":false,
"has_progress_bar":false,
"order":1,
"order":5,
"description":"With the confusion matrix defined in terms of positive and negative classes, you can also compute the sensitivity [TP/(TP+FN)], specificity [TN/(TN+FP)], positive predictive value [TP/(TP+FP)] and negative predictive value [TN/(TN+FN)]. In information retrieval, positive predictive value is called precision (the ratio of the number of relevant records retrieved to the total number of irrelevant and relevant records retrieved), and sensitivity is called recall (the ratio of the number of relevant records retrieved to the total number of relevant records in the database). The harmonic mean of precision and recall is called an F-measure, where, depending on the ratio of the weight between precision and recall is implemented as F1 [2*precision*recall/(precision+recall)] or, for a general case, Falpha [(1+alpha)*precision*recall / (alpha*precision + recall)]."
}
},
...
...
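The sensitivity/specificity/PPV/NPV/F-measure formulas quoted above translate directly into a short sketch; the function name and the dict return type are illustrative only:

```python
def confusion_scores(tp, fp, tn, fn, alpha=1.0):
    """Binary confusion-matrix measures from the description above;
    with alpha=1, F reduces to F1 = 2*precision*recall/(precision+recall)."""
    sensitivity = tp / (tp + fn)   # recall
    specificity = tn / (tn + fp)
    ppv = tp / (tp + fp)           # precision
    npv = tn / (tn + fn)
    f_alpha = (1 + alpha) * ppv * sensitivity / (alpha * ppv + sensitivity)
    return {"sensitivity": sensitivity, "specificity": specificity,
            "PPV": ppv, "NPV": npv, "F": f_alpha}

print(confusion_scores(tp=40, fp=10, tn=45, fn=5))
```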
@@ -1222,7 +1222,7 @@
"wsdl":"",
"interactive":false,
"has_progress_bar":false,
"order":1,
"order":6,
"description":"Computes the average probability assigned to the correct class."
}
},
...
...
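A one-line sketch of "average probability assigned to the correct class", under the same illustrative `probs`/`actual` convention used above:

```python
def average_probability(probs, actual):
    """Mean of the probability the classifier assigned to the correct class."""
    return sum(p[c] for p, c in zip(probs, actual)) / len(probs)

print(average_probability([[0.7, 0.2, 0.1], [0.1, 0.6, 0.3]], [0, 2]))  # 0.5
```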
@@ -1296,7 +1296,7 @@
"wsdl":"",
"interactive":false,
"has_progress_bar":false,
"order":1,
"order":7,
"description":"Computes the information score as defined by Kononenko and Bratko (1991). "