Statistical IV: J-Divergence Hypothesis Test for the Information Value (IV)

Project description

Statistical IV

Statistical_IV: a J-Divergence hypothesis test for the Information Value (IV). It calculates the Information Value with specific limits on predictive power.

Using OptimalBinning, we created a specific way to calculate the predictive power of each individual variable.

  1. Import package

    from statistical_iv import api
    
  2. Provide a DataFrame as Input:

    • Supply a DataFrame df containing your data for IV calculation.
  3. Specify Predictor Variables:

    • Provide a list of predictor variable names (variables_names) to analyze.
  4. Define the Target Variable:

    • Specify the name of the target variable (var_y) in your DataFrame.
  5. Indicate Variable Types:

    • Define the type of your predictor variables as 'categorical' or 'numerical' using the type_vars parameter.
  6. Optional: Set Maximum Bins:

    • Adjust the maximum number of bins for discretization (optional) using the max_bins parameter.
  7. Call the statistical_iv Function:

    • Calculate the IV by calling the statistical_iv function with the parameters above (binning is delegated to the OptimalBinning package).
    result_df = api.statistical_iv(df, variables_names, var_y, type_vars, max_bins)
    
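For intuition about what the package computes, the classic Information Value can also be calculated by hand from a binned variable: IV = sum over bins of (%good - %bad) * WoE, where WoE = ln(%good / %bad). The sketch below uses only pandas and NumPy on a hypothetical toy dataset (the column names and data are illustrative, not part of the statistical_iv API):

```python
import numpy as np
import pandas as pd

# Hypothetical binned predictor ("bucket") vs. a binary target ("y", 1 = bad)
df = pd.DataFrame({
    "bucket": ["A", "A", "B", "B", "B", "C", "C", "C", "C", "A"],
    "y":      [0,   1,   0,   0,   1,   1,   1,   0,   1,   0],
})

def information_value(frame, var, target):
    """Classic IV: sum over bins of (%good - %bad) * WoE."""
    tab = frame.groupby(var)[target].agg(bad="sum", total="count")
    tab["good"] = tab["total"] - tab["bad"]
    pct_good = tab["good"] / tab["good"].sum()
    pct_bad = tab["bad"] / tab["bad"].sum()
    woe = np.log(pct_good / pct_bad)          # Weight of Evidence per bin
    return float(((pct_good - pct_bad) * woe).sum())

iv = information_value(df, "bucket", "y")
```

Common rules of thumb read IV < 0.02 as not predictive and IV > 0.3 as strongly predictive; the statistical_iv package goes further by attaching a J-Divergence hypothesis test to this quantity rather than relying on fixed cutoffs alone.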

Example Result:

[Output image: example of the resulting IV DataFrame]


Download files

Download the file for your platform.

Source Distribution: statistical_iv-0.2.8.tar.gz (5.1 kB)

Built Distribution: statistical_iv-0.2.8-py3-none-any.whl (5.5 kB)
