
PerMetrics: A framework of PERformance METRICS for machine learning models


Quick notification

  • Added classification metrics in version 1.3.0
  • Added more metrics in version 1.2.2
  • Version 1.2.0 has a serious bug in calculating multiple metrics (OOP style); please update to version 1.2.1 or later as soon as possible.

Introduction

  • PerMetrics is a Python library of performance metrics for machine learning models.
  • The goals of this framework are to:
    • Combine all metrics for regression, classification, and clustering models
    • Help users in every field access these metrics as quickly as possible

Dependencies

  • Python (>= 3.6)
  • NumPy (>= 1.15.1)

User installation

Install the current PyPI release:

pip install permetrics==1.3.0

Or install the development version from GitHub:

pip install git+https://github.com/thieu1995/permetrics
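
To verify the installation, you can import the package and print its version (assuming the package exposes a __version__ attribute, as most PyPI packages do):

import permetrics
print(permetrics.__version__)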

Example

More complicated examples can be found in the examples folder.

The documentation includes more detailed installation instructions and explanations.

from numpy import array
from permetrics.regression import RegressionMetric
from permetrics.classification import ClassificationMetric

#### Regression problem ==============================

## For 1-D array
y_true = array([3, -0.5, 2, 7])
y_pred = array([2.5, 0.0, 2, 8])

evaluator = RegressionMetric(y_true, y_pred, decimal=5)
print(evaluator.RMSE())
print(evaluator.MSE())

## For arrays with more than one dimension
y_true = array([[0.5, 1], [-1, 1], [7, -6]])
y_pred = array([[0, 2], [-1, 2], [8, -5]])

evaluator = RegressionMetric(y_true, y_pred, decimal=5)
print(evaluator.RMSE(multi_output="raw_values", decimal=5))
print(evaluator.MAE(multi_output="raw_values", decimal=5))


#### Classification problem ============================


## For integer labels or categorical labels
y_true = [0, 1, 0, 0, 1, 0]
y_pred = [0, 1, 0, 0, 0, 1]

# y_true = ["cat", "ant", "cat", "cat", "ant", "bird", "bird", "bird"]
# y_pred = ["ant", "ant", "cat", "cat", "ant", "cat", "bird", "ant"]

evaluator = ClassificationMetric(y_true, y_pred, decimal=5)

## Call a specific metric on the evaluator object; each metric has 3 alias names, as shown below

print(evaluator.f1_score())
print(evaluator.F1S(average="micro"))
print(evaluator.f1s(average="macro"))
print(evaluator.f1s(average="weighted"))
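
To compute several metrics in one pass (the "multiple metrics (OOP style)" usage mentioned in the notification above), one simple pattern is to look each metric method up by name on the evaluator. A minimal sketch using only the metric names already shown:

from numpy import array
from permetrics.regression import RegressionMetric

y_true = array([3, -0.5, 2, 7])
y_pred = array([2.5, 0.0, 2, 8])

evaluator = RegressionMetric(y_true, y_pred, decimal=5)

# Look up each metric method by its short name and call it
for name in ("RMSE", "MSE", "MAE"):
    print(name, getattr(evaluator, name)())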

Changelog

  • See the ChangeLog.md for a history of notable changes to permetrics.

Metrics

| Problem | No. | Metric | Metric Fullname | Characteristics |
|---------|-----|--------|-----------------|-----------------|
| Regression | 1 | EVS | Explained Variance Score | Greater is better (Best = 1), Range = (-inf, 1.0] |
| | 2 | ME | Max Error | Smaller is better (Best = 0), Range = [0, +inf) |
| | 3 | MBE | Mean Bias Error | Best = 0, Range = (-inf, +inf) |
| | 4 | MAE | Mean Absolute Error | Smaller is better (Best = 0), Range = [0, +inf) |
| | 5 | MSE | Mean Squared Error | Smaller is better (Best = 0), Range = [0, +inf) |
| | 6 | RMSE | Root Mean Squared Error | Smaller is better (Best = 0), Range = [0, +inf) |
| | 7 | MSLE | Mean Squared Log Error | Smaller is better (Best = 0), Range = [0, +inf) |
| | 8 | MedAE | Median Absolute Error | Smaller is better (Best = 0), Range = [0, +inf) |
| | 9 | MRE / MRB | Mean Relative Error / Mean Relative Bias | Smaller is better (Best = 0), Range = [0, +inf) |
| | 10 | MPE | Mean Percentage Error | Best = 0, Range = (-inf, +inf) |
| | 11 | MAPE | Mean Absolute Percentage Error | Smaller is better (Best = 0), Range = [0, +inf) |
| | 12 | SMAPE | Symmetric Mean Absolute Percentage Error | Smaller is better (Best = 0), Range = [0, 1] |
| | 13 | MAAPE | Mean Arctangent Absolute Percentage Error | Smaller is better (Best = 0), Range = [0, +inf) |
| | 14 | MASE | Mean Absolute Scaled Error | Smaller is better (Best = 0), Range = [0, +inf) |
| | 15 | NSE | Nash-Sutcliffe Efficiency Coefficient | Greater is better (Best = 1), Range = (-inf, 1] |
| | 16 | NNSE | Normalized Nash-Sutcliffe Efficiency Coefficient | Greater is better (Best = 1), Range = [0, 1] |
| | 17 | WI | Willmott Index | Greater is better (Best = 1), Range = [0, 1] |
| | 18 | R / PCC | Pearson's Correlation Coefficient | Greater is better (Best = 1), Range = [-1, 1] |
| | 19 | AR / APCC | Absolute Pearson's Correlation Coefficient | Greater is better (Best = 1), Range = [-1, 1] |
| | 20 | R2s | (Pearson's Correlation Coefficient)^2 | Greater is better (Best = 1), Range = [0, 1] |
| | 21 | R2 / COD | Coefficient of Determination | Greater is better (Best = 1), Range = (-inf, 1] |
| | 22 | AR2 / ACOD | Adjusted Coefficient of Determination | Greater is better (Best = 1), Range = (-inf, 1] |
| | 23 | CI | Confidence Index | Greater is better (Best = 1), Range = (-inf, 1] |
| | 24 | DRV | Deviation of Runoff Volume | Smaller is better (Best = 1.0), Range = [1, +inf) |
| | 25 | KGE | Kling-Gupta Efficiency | Greater is better (Best = 1), Range = (-inf, 1] |
| | 26 | GINI | Gini Coefficient | Smaller is better (Best = 0), Range = [0, +inf) |
| | 27 | GINI_WIKI | Gini Coefficient (Wikipedia version) | Smaller is better (Best = 0), Range = [0, +inf) |
| | 28 | PCD | Prediction of Change in Direction | Greater is better (Best = 1.0), Range = [0, 1] |
| | 29 | CE | Cross Entropy | Range = (-inf, 0]; no simple better/worse direction |
| | 30 | KLD | Kullback-Leibler Divergence | Best = 0, Range = (-inf, +inf) |
| | 31 | JSD | Jensen-Shannon Divergence | Smaller is better (Best = 0), Range = [0, +inf) |
| | 32 | VAF | Variance Accounted For | Greater is better (Best = 100%), Range = (-inf, 100%] |
| | 33 | RAE | Relative Absolute Error | Smaller is better (Best = 0), Range = [0, +inf) |
| | 34 | A10 | A10 Index | Greater is better (Best = 1), Range = [0, 1] |
| | 35 | A20 | A20 Index | Greater is better (Best = 1), Range = [0, 1] |
| | 36 | A30 | A30 Index | Greater is better (Best = 1), Range = [0, 1] |
| | 37 | NRMSE | Normalized Root Mean Square Error | Smaller is better (Best = 0), Range = [0, +inf) |
| | 38 | RSE | Residual Standard Error | Smaller is better (Best = 0), Range = [0, +inf) |
| | 39 | RE / RB | Relative Error / Relative Bias | Best = 0, Range = (-inf, +inf) |
| | 40 | AE | Absolute Error | Best = 0, Range = (-inf, +inf) |
| | 41 | SE | Squared Error | Smaller is better (Best = 0), Range = [0, +inf) |
| | 42 | SLE | Squared Log Error | Smaller is better (Best = 0), Range = [0, +inf) |
| Classification | 1 | PS | Precision Score | Higher is better (Best = 1), Range = [0, 1] |
| | 2 | NPV | Negative Predictive Value | Higher is better (Best = 1), Range = [0, 1] |
| | 3 | RS | Recall Score | Higher is better (Best = 1), Range = [0, 1] |
| | 4 | AS | Accuracy Score | Higher is better (Best = 1), Range = [0, 1] |
| | 5 | F1S | F1 Score | Higher is better (Best = 1), Range = [0, 1] |
| | 6 | F2S | F2 Score | Higher is better (Best = 1), Range = [0, 1] |
| | 7 | FBS | F-Beta Score | Higher is better (Best = 1), Range = [0, 1] |
| | 8 | SS | Specificity Score | Higher is better (Best = 1), Range = [0, 1] |
| | 9 | MCC | Matthews Correlation Coefficient | Higher is better (Best = 1), Range = [-1, +1] |
| | 10 | HL | Hamming Loss | Smaller is better (Best = 0), Range = [0, 1] |
| | 11 | LS | Lift Score | Higher is better, Range = [0, +inf) |
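
For reference, the simplest of the regression metrics above can be reproduced directly with NumPy. A minimal sketch of MAE, RMSE, and MAPE following their standard textbook definitions (illustrative only, not the library's internal code):

import numpy as np

y_true = np.array([3.0, -0.5, 2.0, 7.0])
y_pred = np.array([2.5, 0.0, 2.0, 8.0])

# MAE: mean of the absolute errors
mae = np.mean(np.abs(y_true - y_pred))

# RMSE: square root of the mean of the squared errors
rmse = np.sqrt(np.mean((y_true - y_pred) ** 2))

# MAPE: mean of |error| / |true value| (assumes no zero targets)
mape = np.mean(np.abs((y_true - y_pred) / y_true))

print(mae, rmse, mape)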

Citation

  • If you use permetrics in your project, please cite the following works:
@software{thieu_nguyen_2020_3951205,
  author       = {Nguyen Van Thieu},
  title        = {Permetrics: A framework of performance metrics for artificial intelligence models},
  month        = jul,
  year         = 2020,
  publisher    = {Zenodo},
  doi          = {10.5281/zenodo.3951205},
  url          = {https://doi.org/10.5281/zenodo.3951205}
}

@article{nguyen2019efficient,
  title={Efficient Time-Series Forecasting Using Neural Network and Opposition-Based Coral Reefs Optimization},
  author={Nguyen, Thieu and Nguyen, Tu and Nguyen, Binh Minh and Nguyen, Giang},
  journal={International Journal of Computational Intelligence Systems},
  volume={12},
  number={2},
  pages={1144--1161},
  year={2019},
  publisher={Atlantis Press}
}

Future works

Classification

  • Calibration Error
  • Cohen Kappa
  • Coverage Error
  • Dice Score
  • Hinge Loss
  • Jaccard Index (see the sketch below)
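
As an illustration of one of the planned metrics above, the Jaccard Index for binary labels is the size of the intersection of the positive predictions and the positive ground truth, divided by the size of their union. A minimal NumPy sketch (illustrative only, not the eventual library implementation):

import numpy as np

def jaccard_index(y_true, y_pred):
    # Jaccard index for binary labels: |A intersect B| / |A union B|
    y_true, y_pred = np.asarray(y_true), np.asarray(y_pred)
    intersection = np.sum((y_true == 1) & (y_pred == 1))
    union = np.sum((y_true == 1) | (y_pred == 1))
    return intersection / union if union > 0 else 1.0

print(jaccard_index([0, 1, 0, 0, 1, 0], [0, 1, 0, 0, 0, 1]))  # 0.333...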

Clustering

  • Adjusted Mutual Information
  • Adjusted Rand Score
  • Calinski And Harabasz Score
  • Davies-Bouldin Score
  • Completeness Score
  • Contingency Matrix
  • Silhouette Coefficient (see the sketch below)
  • V-measure Score
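
Likewise, the Silhouette Coefficient of a single sample is (b - a) / max(a, b), where a is its mean distance to the other points in its own cluster and b is its mean distance to the points of the nearest other cluster. A minimal NumPy sketch (illustrative only, assuming every cluster has at least two samples):

import numpy as np

def silhouette_sample(X, labels, i):
    # Silhouette coefficient of sample i: (b - a) / max(a, b)
    X, labels = np.asarray(X, dtype=float), np.asarray(labels)
    dists = np.linalg.norm(X - X[i], axis=1)
    same = labels == labels[i]
    same[i] = False  # exclude the sample itself
    a = dists[same].mean()  # mean intra-cluster distance
    # mean distance to each other cluster; b is the smallest of them
    b = min(dists[labels == c].mean() for c in set(labels) if c != labels[i])
    return (b - a) / max(a, b)

X = [[0, 0], [0, 1], [5, 5], [5, 6]]
labels = [0, 0, 1, 1]
print(silhouette_sample(X, labels, 0))  # close to 1 -> well clustered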
