Spiking neuron integration for PyTorch

Project description

PyTorchSpiking

PyTorchSpiking provides tools for training and running spiking neural networks directly within the PyTorch framework. The main feature is pytorch_spiking.SpikingActivation, which can be used to transform any activation function into a spiking equivalent. For example, we can translate a non-spiking model, such as

torch.nn.Sequential(
    torch.nn.Linear(5, 10),
    torch.nn.ReLU(),
)

into the spiking equivalent:

torch.nn.Sequential(
    torch.nn.Linear(5, 10),
    pytorch_spiking.SpikingActivation(torch.nn.ReLU()),
)

Models with SpikingActivation layers can be optimized and evaluated in the same way as any other PyTorch model. They will automatically take advantage of PyTorchSpiking’s “spiking aware training”: using the spiking activations on the forward pass and the non-spiking (differentiable) activation function on the backward pass.
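To illustrate the idea behind the spiking forward pass, here is a minimal pure-Python sketch (not the library's actual implementation) of the deterministic integrate-and-fire mechanism a layer like SpikingActivation uses: the activation value is interpreted as a firing rate, integrated at each timestep, and a spike is emitted whenever the accumulated "voltage" crosses a threshold, so that the average of the spike train approximates the underlying rate.

```python
def spike_train(rate, dt=0.001, n_steps=1000):
    """Generate a spike train approximating a firing rate (in Hz).

    Each step accumulates rate * dt of "voltage"; when the voltage
    crosses 1 a spike is emitted with amplitude 1/dt, so that the
    time-average of the spike train approximates the input rate.
    This is a conceptual sketch, not pytorch_spiking's own code.
    """
    voltage = 0.0
    spikes = []
    for _ in range(n_steps):
        voltage += rate * dt
        if voltage >= 1.0:
            voltage -= 1.0
            spikes.append(1.0 / dt)  # amplitude scaled so the mean ~ rate
        else:
            spikes.append(0.0)
    return spikes


# averaging over enough timesteps recovers the underlying rate
spikes = spike_train(rate=10.0, dt=0.001, n_steps=1000)
mean_rate = sum(spikes) / len(spikes)  # ~ 10.0 Hz
```

During spiking aware training, the gradient of this non-differentiable spike generation is replaced on the backward pass by the gradient of the smooth activation function, so standard PyTorch optimizers can be used unchanged.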

PyTorchSpiking also includes various tools to assist in the training of spiking models, such as filtering layers.
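As a sketch of what such a filtering layer does (a hypothetical standalone implementation, not the library's API), a first-order lowpass filter can smooth a sparse spike train into a continuous signal that downstream layers or loss functions can consume:

```python
def lowpass_filter(signal, tau=0.01, dt=0.001):
    """First-order lowpass filter over a sequence of values.

    Discretized as y[t] = y[t-1] + (dt / tau) * (x[t] - y[t-1]),
    this is the kind of smoothing a filtering layer applies so that
    spiky outputs become a continuous estimate of the firing rate.
    Conceptual sketch only; parameter names are illustrative.
    """
    alpha = dt / tau
    y = 0.0
    out = []
    for x in signal:
        y += alpha * (x - y)
        out.append(y)
    return out


# smoothing a 10 Hz spike train: the filtered output stays far below
# the raw spike amplitude and settles near the underlying rate
raw = [1000.0 if i % 100 == 0 else 0.0 for i in range(1000)]
filtered = lowpass_filter(raw)
```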

If you are interested in building and optimizing spiking neuron models, you may also be interested in NengoDL. See this page for a comparison of the different use cases supported by these two packages.

Documentation

Check out the documentation for more information on how to use PyTorchSpiking.

Release history

0.1.0 (September 9, 2020)

Initial release

Download files

Download the file for your platform. If you're not sure which to choose, learn more about installing packages.

Source Distribution

pytorch-spiking-0.1.0.tar.gz (30.0 kB)

Built Distribution

pytorch_spiking-0.1.0-py3-none-any.whl (10.8 kB)
