|  | Prediction True | Prediction False |  |
|---|---|---|---|
| Actual True | a | b | a + b |
| Actual False | c | d | c + d |
|  | a + c | b + d |  |
a - TP (True Positive)
b - FN (False Negative)
c - FP (False Positive)
d - TN (True Negative)
a + b - Pos (total actual positives)
c + d - Neg (total actual negatives)
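As a quick check, sklearn's `confusion_matrix` returns this table with rows as actual and columns as predicted labels, so for binary labels [0, 1] the layout is [[TN, FP], [FN, TP]], i.e., (d, c, b, a) in the notation above. A minimal sketch with made-up `y_true`/`y_pred` arrays:

```python
import numpy as np
from sklearn.metrics import confusion_matrix

# toy labels, purely illustrative
y_true = np.array([1, 1, 1, 0, 0, 0, 1, 0])
y_pred = np.array([1, 0, 1, 0, 1, 0, 1, 0])

# rows = actual, columns = predicted; with labels [0, 1] the layout is
# [[TN, FP],
#  [FN, TP]]  -> ravel() yields tn, fp, fn, tp (d, c, b, a above)
tn, fp, fn, tp = confusion_matrix(y_true, y_pred, labels=[0, 1]).ravel()
print(tp, fn, fp, tn)  # a, b, c, d
```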
tpr (True Positive Rate) = TP/Pos : Sensitivity (= Recall)
tnr (True Negative Rate) = TN/Neg : Specificity
fpr (False Positive Rate) = FP/Neg = 1 - tnr
fnr (False Negative Rate) = FN/Pos = 1 - tpr
Accuracy = (TP + TN) / (Pos + Neg), i.e., the average of tpr and tnr weighted by class size
Error Rate = (FP + FN) / (Pos + Neg) = 1 - Accuracy, i.e., the average of fpr and fnr weighted by class size
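All of these rates follow directly from the four counts. A short sketch, starting from hypothetical tp, fn, fp, tn values:

```python
# assumed counts from a confusion matrix (hypothetical values)
tp, fn, fp, tn = 3, 1, 1, 3
pos, neg = tp + fn, fp + tn

tpr = tp / pos  # sensitivity
tnr = tn / neg  # specificity
fpr = fp / neg  # equals 1 - tnr
fnr = fn / pos  # equals 1 - tpr

# accuracy is the class-size-weighted average of tpr and tnr
accuracy = (tp + tn) / (pos + neg)
assert abs(accuracy - (tpr * pos + tnr * neg) / (pos + neg)) < 1e-12
error_rate = 1 - accuracy
print(tpr, tnr, fpr, fnr, accuracy, error_rate)
```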
from sklearn.metrics import precision_score, recall_score, f1_score

# sklearn metric functions take (y_true, y_pred) in that order
precision = precision_score(y_test, pred, average='micro')
recall = recall_score(y_test, pred, average='micro')
f1 = f1_score(y_test, pred, average='micro')
print(precision, recall, f1)
average=None : report the score for each class separately instead of aggregating (note: None, not the string 'None')
average='micro' : aggregate TP/FP/FN over all classes first; precision, recall, and f1 come out identical
average='macro' : simple (unweighted) mean of the per-class scores
average='weighted' : mean of the per-class scores weighted by class support
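To see how the options differ, here is a minimal multiclass sketch (the `y_test`/`pred` arrays are made up for illustration):

```python
import numpy as np
from sklearn.metrics import f1_score

# hypothetical 3-class labels, chosen so the averages come out different
y_test = np.array([0, 0, 0, 0, 1, 1, 2, 2])
pred = np.array([0, 0, 1, 2, 1, 1, 2, 0])

print(f1_score(y_test, pred, average=None))        # one f1 per class
print(f1_score(y_test, pred, average='micro'))     # global TP/FP/FN; equals accuracy here
print(f1_score(y_test, pred, average='macro'))     # unweighted mean of per-class f1
print(f1_score(y_test, pred, average='weighted'))  # mean weighted by class support
```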