I am running a Convolutional Neural Network. After it finishes running, I use some metrics to evaluate the performance of the model. Two of these metrics are `auc` and `roc_auc_score` from sklearn:

**AUC function**: https://scikit-learn.org/stable/modules/generated/sklearn.metrics.auc.html?highlight=auc#sklearn.metrics.auc

**AUROC function**: https://scikit-learn.org/stable/modules/generated/sklearn.metrics.roc_auc_score.html#sklearn.metrics.roc_auc_score

The code I am using is the following:

```python
from sklearn import metrics

print(pred)
fpr, tpr, thresholds = metrics.roc_curve(true_classes, pred, pos_label=1)
print("-----AUC-----")
print(metrics.auc(fpr, tpr))
print("----ROC AUC-----")
print(metrics.roc_auc_score(true_classes, pred))
```

Where `true_classes` is an array of the form `[0 1 0 1 1 0]`, in which 1 is the positive label and 0 the negative.

And `pred` contains the predictions of the model:

```python
prediction = classifier.predict(test_final)
prediction1 = []
predictions = []
for preds in prediction:
    prediction1.append(preds[0])  # keep only the first column of each prediction
pred = prediction1
```
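(As an aside, if `classifier.predict` returns a 2-D NumPy array, as Keras models do, the loop above just extracts the first column, and a single slice does the same thing. The array below is a made-up stand-in for the model's output, purely for illustration.)

```python
import numpy as np

# Made-up stand-in for classifier.predict(test_final): shape (n_samples, 1)
prediction = np.array([[0.1], [0.8], [0.3], [0.9], [0.7], [0.2]])

pred = prediction[:, 0]  # first column as a 1-D array, same result as the loop
print(pred)              # [0.1 0.8 0.3 0.9 0.7 0.2]
```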

However, I am getting the same AUC and ROC AUC value every time I run a test. To be clear: within a single test the two values are identical, although they do vary between tests. For example, in test 1 I get AUC = 0.987 and ROC_AUC = 0.987, while in test 2 I get AUC = 0.95 and ROC_AUC = 0.95. Am I doing something wrong, or is this normal?

## Answer

As per the documentation linked, `metrics.auc` is a general method that computes the area under a curve from the points of that curve.

`metrics.roc_auc_score` is a specific method that computes the area under the ROC curve.

You would not expect to see different results if you're using the same data to calculate both, as `metrics.roc_auc_score` does the same thing as `metrics.auc` and, most likely, uses `metrics.auc` itself under the hood (i.e. it applies the general method to the specific task of computing the area under the ROC curve).
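You can verify this on a small hand-made example (the labels and scores below are made up for illustration): building the ROC curve yourself and passing it to `metrics.auc` gives exactly the same number as calling `metrics.roc_auc_score` directly.

```python
from sklearn import metrics

# Made-up labels and scores, just to illustrate the equivalence
true_classes = [0, 1, 0, 1, 1, 0]
pred = [0.1, 0.8, 0.3, 0.9, 0.7, 0.2]

# General route: compute the ROC curve, then the area under it
fpr, tpr, thresholds = metrics.roc_curve(true_classes, pred, pos_label=1)
print(metrics.auc(fpr, tpr))                      # 1.0

# Specific route: area under the ROC curve in one call
print(metrics.roc_auc_score(true_classes, pred))  # 1.0
```

So getting identical values from the two calls on the same `true_classes` and `pred` is exactly the expected behaviour, not a bug in your code.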