Darkon: Performance hacking for your deep learning models

Darkon is an open source toolkit for improving and debugging deep learning models. Deep neural networks are often treated as black boxes: feed a large dataset to a learning algorithm and expect a well-performing model in return. In practice, however, trained models often fail in real-world use, and such failures are hard to fix precisely because of that black-box nature. We are developing Darkon to reduce the effort needed to improve the performance of deep learning models.

In this first release, we provide influence score calculation that is easily applicable to existing TensorFlow models (support for other frameworks will follow). Influence scores can be used to filter out training samples that hurt test performance, to prioritize potentially mislabeled examples for correction, and to debug distribution mismatch between training and test samples.
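
For context, the influence score follows the upweighting influence function of Koh and Liang [1]. In their notation, the influence of a training point z on the loss at a test point z_test is

    \mathcal{I}_{\mathrm{up,loss}}(z, z_{\mathrm{test}}) = -\nabla_\theta L(z_{\mathrm{test}}, \hat\theta)^{\top} H_{\hat\theta}^{-1} \nabla_\theta L(z, \hat\theta)

where \hat\theta denotes the trained parameters and H_{\hat\theta} = \frac{1}{n}\sum_{i=1}^{n} \nabla_\theta^2 L(z_i, \hat\theta) is the empirical Hessian of the training loss. The paper computes this via stochastic approximations of the Hessian-inverse-vector product rather than forming H explicitly.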

Darkon will gradually provide performance hacking methods that are easily applicable to existing projects, based on the following technologies:

  • Dataset inspection/filtering/management

  • Continual learning

  • Meta/transfer learning

  • Interpretable ML

  • Hyperparameter optimization

  • Network architecture search

More features will be released soon. Feedback and feature requests are always welcome and help us manage priorities. Please keep an eye on Darkon.

Dependencies

Installation

pip install darkon

Usage

import darkon

# Construct an inspector for an existing TensorFlow graph.
# workspace_path, loss_op_train, loss_op_test, x_placeholder, and
# y_placeholder refer to your own workspace directory, loss ops, and
# input/label placeholders; YourDataFeeder() supplies training/test data.
inspector = darkon.Influence(workspace_path,
                             YourDataFeeder(),
                             loss_op_train,
                             loss_op_test,
                             x_placeholder,
                             y_placeholder)

# Compute influence scores of training samples on the selected test samples,
# using an open TensorFlow session (sess) and batched approximation settings.
scores = inspector.upweighting_influence_batch(sess,
                                               test_indices,
                                               test_batch_size,
                                               approx_params,
                                               train_batch_size,
                                               train_iterations)
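
As a rough sketch of how the returned scores might be used downstream (this is not part of the darkon API; it assumes scores is array-like, ordered to match the training samples visited, and that larger positive values indicate samples that increase test loss; check the darkon documentation for the exact sign convention):

import numpy as np

# Hypothetical post-processing of the influence scores returned above.
# Assumption: scores[i] corresponds to the i-th training sample visited and
# a larger positive score means the sample hurts the selected test samples.
scores = np.asarray(scores)
ranking = np.argsort(scores)[::-1]   # most harmful samples first

num_to_inspect = 20                  # arbitrary review budget
suspects = ranking[:num_to_inspect]
print("Training samples to review for mislabeling or noise:", suspects)

# Example filter: drop samples whose influence is far above the typical range.
threshold = scores.mean() + 3 * scores.std()
keep_mask = scores <= threshold
print("Keeping %d of %d training samples" % (keep_mask.sum(), scores.size))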

Examples

API Documentation

Communication

Authors

Neosapience, Inc.

License

Apache License 2.0

References

[1] Pang Wei Koh and Percy Liang, "Understanding Black-box Predictions via Influence Functions," ICML 2017.
