
Target Permutation Importances


Overview

This method aims to lower the feature attribution that is due to a feature's variance alone: if a feature still shows high importance to a model after the target vector is shuffled, that importance reflects a fit to noise.

Overall, this package:

  1. Fits the given model class $M$ times to get $M$ actual importances of feature $f$: $A_f = [a_{f_1}, a_{f_2}, \dots, a_{f_M}]$.
  2. Fits the given model class with shuffled targets $N$ times to get $N$ random importances of feature $f$: $R_f = [r_{f_1}, r_{f_2}, \dots, r_{f_N}]$.
  3. Computes the final importance of feature $f$ by various methods, such as:
    • $A_f - R_f$
    • $A_f / (R_f + 1)$
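The three steps above can be sketched as follows. This is a minimal illustration of the idea, not the package's internals; the model, dataset, and variable names here are illustrative assumptions.

```python
import numpy as np
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier

X, y = make_classification(n_samples=200, n_features=5, random_state=0)
M, N = 2, 5  # actual runs and shuffled-target runs
rng = np.random.default_rng(0)

def fit_importances(X, y, seed):
    model = RandomForestClassifier(n_estimators=10, random_state=seed)
    model.fit(X, y)
    return model.feature_importances_

# Step 1: M fits on the real target -> mean actual importance A_f per feature
A = np.mean([fit_importances(X, y, m) for m in range(M)], axis=0)

# Step 2: N fits on a shuffled target -> mean random importance R_f per feature
R = np.mean(
    [fit_importances(X, rng.permutation(y), n) for n in range(N)], axis=0
)

# Step 3: combine, e.g. A_f - R_f (higher = more signal relative to noise)
final_importances = A - R
print(final_importances.shape)  # one score per feature
```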

Not to be confused with sklearn.inspection.permutation_importance: that sklearn method permutes features, whereas this package permutes the target.
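For contrast, here is sklearn's feature-permutation method, which shuffles one feature column at a time and measures the score drop while leaving the target untouched (toy data used for illustration):

```python
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.inspection import permutation_importance

X, y = make_classification(n_samples=200, n_features=5, random_state=0)
model = RandomForestClassifier(n_estimators=10, random_state=0).fit(X, y)

# Shuffles each *feature* (not the target) n_repeats times and records
# how much the model's score degrades.
result = permutation_importance(model, X, y, n_repeats=5, random_state=0)
print(result.importances_mean.shape)  # one mean score drop per feature
```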

This method was originally proposed/implemented by:

Install

pip install target-permutation-importances

or

poetry add target-permutation-importances

Basic Usage

# Import the function
from target_permutation_importances import compute

# Prepare a dataset
import pandas as pd
from sklearn.datasets import load_breast_cancer

# Models
from catboost import CatBoostClassifier
from lightgbm import LGBMClassifier
from sklearn.ensemble import RandomForestClassifier
from xgboost import XGBClassifier

data = load_breast_cancer()

# Convert to a pandas dataframe
Xpd = pd.DataFrame(data.data, columns=data.feature_names)

# Compute permutation importances with default settings
result_df = compute(
    model_cls=RandomForestClassifier, # Or other models
    model_cls_params={ # The params for the model class construction
        "n_estimators": 1,
    },
    model_fit_params={}, # The params for model.fit
    X=Xpd,
    y=data.target,
    num_actual_runs=2,   # M: number of fits with the real target
    num_random_runs=10,  # N: number of fits with shuffled targets
)

You can find more detailed examples in the "Feature Selection Examples" section.
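A common follow-up is to filter features by the computed importances. The column names "feature" and "importance" below are assumptions about the returned frame's schema (check the actual output of compute for your version); the stand-in data is fabricated for illustration.

```python
import pandas as pd

# Stand-in for the DataFrame returned by compute(...)
result_df = pd.DataFrame({
    "feature": ["mean radius", "mean texture", "mean area"],
    "importance": [0.12, -0.01, 0.30],
})

# Keep features whose importance beats the shuffled-target baseline (> 0)
selected = result_df.loc[result_df["importance"] > 0, "feature"].tolist()
print(selected)
```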

Advanced Usage / Customization

This package exposes generic_compute to allow customization. Read target_permutation_importances.__init__.py for details.

Feature Selection Examples

TODO

Benchmarks

Benchmarks were run on tabular datasets from the Tabular data learning benchmark, which is also hosted on Hugging Face.

The following models with their default params are used in the benchmark:

  • sklearn.ensemble.RandomForestClassifier
  • sklearn.ensemble.RandomForestRegressor
  • xgboost.XGBClassifier
  • xgboost.XGBRegressor
  • catboost.CatBoostClassifier
  • catboost.CatBoostRegressor
  • lightgbm.LGBMClassifier
  • lightgbm.LGBMRegressor

For the binary classification task, sklearn.metrics.f1_score is used for evaluation; for the regression task, sklearn.metrics.mean_squared_error is used.
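The two metrics named above, shown on toy predictions (the inputs here are made up for illustration):

```python
from sklearn.metrics import f1_score, mean_squared_error

# Binary classification: harmonic mean of precision and recall
f1 = f1_score([0, 1, 1, 0], [0, 1, 0, 0])
# Regression: mean of squared residuals
mse = mean_squared_error([1.0, 2.0], [1.5, 2.5])
print(f1, mse)
```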

The downloaded datasets are split into three parts: train (50%), val (10%), test (40%). Feature importance is calculated on the train set, feature selection is done on the val set, and the final benchmark is evaluated on the test set. The test set is therefore unseen by both the feature-importance computation and the selection process.
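A 50/10/40 split like the one described can be produced with two calls to sklearn's train_test_split; the exact benchmark code may differ, this is just one way to get those proportions:

```python
import numpy as np
from sklearn.model_selection import train_test_split

X = np.arange(100).reshape(100, 1)
y = np.arange(100)

# First carve out the 40% test set...
X_rest, X_test, y_rest, y_test = train_test_split(
    X, y, test_size=0.4, random_state=0
)
# ...then split the remaining 60% into train/val; 10% of the total
# is 1/6 of the remainder.
X_train, X_val, y_train, y_val = train_test_split(
    X_rest, y_rest, test_size=1 / 6, random_state=0
)
print(len(X_train), len(X_val), len(X_test))  # 50 10 40
```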

Raw result data are in benchmarks/results/tabular_benchmark.csv.

Kaggle Competitions

Many top solutions in Kaggle competitions involve this method; here are some examples:

| Year | Competition | Medal | Link |
|------|-------------|-------|------|
| 2023 | Predict Student Performance from Game Play | Gold | 3rd place solution |
| 2019 | Elo Merchant Category Recommendation | Gold | 16th place solution |
| 2018 | Home Credit Default Risk | Gold | 10th place solution |

Development Setup and Contribution Guide

Python Version

You can find the suggested development Python version in .python-version. You might consider setting up Pyenv if you want to have multiple Python versions on your machine.

Python packages

This repository is set up with Poetry. If you are not familiar with Poetry, you can find the package requirements listed in pyproject.toml. Otherwise, you can set everything up with poetry install.

Run Benchmarks

To run the benchmark locally on your machine, run make run_tabular_benchmark or python -m benchmarks.run_tabular_benchmark.

Make Changes

Follow the Make Changes Guide from GitHub. Before committing or merging, please run the linters defined in make lint and the tests defined in make test.
