
Tools for making neural simulations using the Neural Engineering Framework

Project description


Nengo: Large-scale brain modelling in Python

[Figure: an illustration of the three principles of the NEF]

Installation

Nengo depends on NumPy, and we recommend that you install NumPy before installing Nengo. If you’re not sure how to do this, we recommend using Anaconda.

To install Nengo:

pip install nengo

If you have difficulty installing Nengo or NumPy, please read the more detailed Nengo installation instructions first.

If you’d like to install Nengo from source, please read the developer installation instructions.

Nengo is tested to work on Python 2.7 and 3.4+.

Examples

Here are six of many examples showing how Nengo enables the creation and simulation of large-scale neural models in a few lines of code; a minimal sketch of the first example follows the list.

  1. 100 LIF neurons representing a sine wave

  2. Computing the square across a neural connection

  3. Controlled oscillatory dynamics with a recurrent connection

  4. Learning a communication channel with the PES rule

  5. Simple question answering with the Semantic Pointer Architecture

  6. A summary of the principles underlying all of these examples
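
As a taste, here is a minimal sketch along the lines of the first example (100 LIF neurons representing a sine wave). The node function, probe synapse, and run length are illustrative choices, not the exact code from the example notebook.

    import numpy as np
    import nengo

    model = nengo.Network(label="Sine representation")
    with model:
        # A node supplying the sine wave input
        stim = nengo.Node(lambda t: np.sin(2 * np.pi * t))
        # 100 leaky integrate-and-fire neurons representing a 1-D value
        ens = nengo.Ensemble(n_neurons=100, dimensions=1, neuron_type=nengo.LIF())
        nengo.Connection(stim, ens)
        # Record the decoded (filtered) value represented by the ensemble
        probe = nengo.Probe(ens, synapse=0.01)

    with nengo.Simulator(model) as sim:
        sim.run(1.0)

    print(sim.trange().shape, sim.data[probe].shape)  # (1000,) and (1000, 1) with the default dt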

Documentation

Usage and API documentation can be found at https://pythonhosted.org/nengo/.

Development

Information for current or prospective developers can be found at https://pythonhosted.org/nengo/dev_guide.html.

Getting Help

Questions relating to Nengo, whether about its use or its development, should be asked on the Nengo forum at https://forum.nengo.ai.

Release History

2.4.0 (April 18, 2017)

Added

  • Added an optimizer that reduces simulation time for common types of models. The optimizer can be turned off by passing optimize=False to Simulator (see the sketch after this list). (#1035)

  • Added the option to not normalize encoders by setting Ensemble.normalize_encoders to False. (#1191, #1267)

  • Added the Samples distribution to allow raw NumPy arrays to be passed in situations where a distribution is required. (#1233)
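
As a brief illustration of the additions above, the following sketch disables the optimizer and wraps a raw NumPy array in Samples so it can be used where a distribution is required; the ensemble size and encoder values are arbitrary.

    import numpy as np
    import nengo
    from nengo.dists import Samples

    with nengo.Network() as model:
        # A raw array wrapped in Samples, used here to fix the encoders
        encoders = Samples(np.tile([[1.0], [-1.0]], (25, 1)))  # shape (50, 1)
        ens = nengo.Ensemble(50, dimensions=1, encoders=encoders)
        probe = nengo.Probe(ens)

    # The build-time optimizer is on by default; pass optimize=False to turn it off
    with nengo.Simulator(model, optimize=False) as sim:
        sim.run(0.1)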

Changed

  • We now raise an error when an ensemble is assigned a negative gain. This can occur when solving for gains with intercepts greater than 1. (#1212, #1231, #1248)

  • We now raise an error when a Node or Direct ensemble produces a non-finite value. (#1178, #1280, #1286)

  • We now enforce that the label of a network must be a string or None, and that the seed of a network must be an int or None. This helps avoid situations where the seed would mistakenly be passed as the label. (#1277, #1275)

  • It is now possible to pass NumPy arrays in the ens_kwargs argument of EnsembleArray. Arrays are wrapped in a Samples distribution internally. (#691, #766, #1233)

  • The default refractory period (tau_ref) for the Sigmoid neuron type has changed to 2.5 ms (from 2 ms) for better compatibility with the default maximum firing rates of 200-400 Hz. (#1248)

  • Inputs to the Product and CircularConvolution networks have been renamed from A and B to input_a and input_b for consistency. The old names are still available, but should be considered deprecated. (#887, #1296)
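
A minimal sketch of the renamed Product inputs; the neuron count and the constant inputs are arbitrary.

    import nengo

    with nengo.Network() as model:
        x = nengo.Node([0.5, -0.3])
        y = nengo.Node([0.8, 0.2])
        # Element-wise product of two 2-D signals
        prod = nengo.networks.Product(n_neurons=200, dimensions=2)
        # input_a and input_b are the new names; A and B still work but are deprecated
        nengo.Connection(x, prod.input_a)
        nengo.Connection(y, prod.input_b)
        probe = nengo.Probe(prod.output, synapse=0.01)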

Fixed

  • Properly handle non C-contiguous node outputs. (#1184, #1185)

Deprecated

  • The net argument to networks has been deprecated. This argument existed so that network components could be added to an existing network instead of constructing a new network. However, this feature is rarely used, and makes the code more complicated for complex networks. (#1296)

2.3.1 (February 18, 2017)

Added

  • Added documentation on config system quirks. (#1224)

  • Added nengo.utils.network.activate_direct_mode function to make it easier to activate direct mode in networks where some parts require neurons. (#1111, #1168)

Fixed

  • The matrix multiplication example will now work with matrices of any size and uses the product network for clarity. (#1159)

  • Fixed instances in which passing a callable class as a function could fail. (#1245)

  • Fixed an issue in which probing some attributes would be one timestep faster than other attributes. (#1234, #1245)

  • Fixed an issue in which SPA models could not be copied. (#1266, #1271)

  • Fixed an issue in which Nengo would crash if other programs had locks on Nengo cache files in Windows. (#1200, #1235)

Changed

  • Out-of-range integer indexing of Nengo objects now raises an IndexError, consistent with standard Python behaviour. (#1176, #1183)

  • Documentation that applies to all Nengo projects has been moved to https://nengo.github.io/. (#1251)

2.3.0 (November 30, 2016)

Added

  • It is now possible to probe scaled_encoders on ensembles. (#1167, #1117)

  • Added copy method to Nengo objects. Nengo objects can now be pickled (see the sketch after this list). (#977, #984)

  • A progress bar now tracks the build process in the terminal and Jupyter notebook. (#937, #1151)

  • Added nengo.dists.get_samples function for convenience when working with distributions or samples. (#1181, docs)
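
A minimal sketch of the copy and pickling support mentioned above; the network contents are arbitrary.

    import pickle
    import nengo

    with nengo.Network() as model:
        ens = nengo.Ensemble(10, dimensions=1)

    # Nengo objects now have a copy method...
    model_copy = model.copy()

    # ...and can be pickled and restored
    restored = pickle.loads(pickle.dumps(model))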

Changed

  • Access to probe data via nengo.Simulator.data is now cached, making repeated access much faster. (#1076, #1175)

Deprecated

  • Access to nengo.Simulator.model is deprecated. To access static data generated during the build, use nengo.Simulator.data. It provides access to everything that nengo.Simulator.model.params used to, and is the canonical way to access this data across different backends. (#1145, #1173)
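
For example (a minimal sketch; the ensemble and probe are arbitrary), both probe data and static build data are available through nengo.Simulator.data:

    import nengo

    with nengo.Network() as model:
        ens = nengo.Ensemble(20, dimensions=1)
        probe = nengo.Probe(ens, synapse=0.01)

    with nengo.Simulator(model) as sim:
        sim.run(0.1)

    decoded = sim.data[probe]                # probe data
    built_encoders = sim.data[ens].encoders  # static build data (previously in model.params)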

2.2.0 (September 12, 2016)

API changes

  • It is now possible to pass a NumPy array to the function argument of nengo.Connection. The values in the array are taken to be the targets in the decoder solving process, which means that eval_points must also be set on the connection (see the sketch after this list). (#1010)

  • nengo.utils.connection.target_function is now deprecated, and will be removed in Nengo 3.0. Instead, pass the targets directly to the connection through the function argument. (#1010)
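
A minimal sketch of passing targets directly through the function argument; the target mapping (x squared) and the number of evaluation points are arbitrary.

    import numpy as np
    import nengo

    # Evaluation points and the corresponding target values (here, x ** 2)
    eval_points = np.random.uniform(-1, 1, size=(200, 1))
    targets = eval_points ** 2

    with nengo.Network() as model:
        a = nengo.Ensemble(50, dimensions=1)
        b = nengo.Ensemble(50, dimensions=1)
        # Decoders are solved so that the connection a -> b approximates the target mapping
        nengo.Connection(a, b, eval_points=eval_points, function=targets)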

Behavioural changes

  • Dropped support for NumPy 1.6. Oldest supported NumPy version is now 1.7. (#1147)

Improvements

  • Added a nengo.backends entry point to make the reference simulator discoverable for other Python packages. In the future all backends should declare an entry point accordingly. (#1127)

  • Added ShapeParam to store array shapes. (#1045)

  • Added ThresholdingPreset to configure ensembles for thresholding. (#1058, #1077, #1148)

  • Tweaked rasterplot so that spikes from different neurons don’t overlap. (#1121)

Documentation

  • Added a page explaining the config system and preset configs. (#1150)

Bug fixes

  • Fixed some situations where the cache index becomes corrupt by writing the updated cache index atomically (in most cases). (#1097, #1107)

  • The synapse methods filt and filtfilt now support lists as input. (#1123)

  • Added a registry system so that only stable objects are cached. (#1054, #1068)

  • Nodes now support array views as input. (#1156, #1157)

2.1.2 (June 27, 2016)

Bug fixes

  • The DecoderCache is now more robust when used improperly, and no longer requires changes to backends in order to be used properly. (#1112)

2.1.1 (June 24, 2016)

Improvements

  • Improved the default LIF neuron model to spike at the same rate as the LIFRate neuron model for constant inputs. The older model has been moved to nengo_extras under the name FastLIF. (#975)

  • Added y0 attribute to WhiteSignal, which adjusts the phase of each dimension to begin with absolute value closest to y0 (see the sketch after this list). (#1064)

  • Allow the AssociativeMemory to accept Semantic Pointer expressions as input_keys and output_keys. (#982)
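
A minimal sketch of the y0 option; the period, cutoff frequency, and output size are arbitrary, and passing them all as keyword arguments is an assumption about the constructor rather than something stated above.

    import nengo

    with nengo.Network() as model:
        # Each dimension's phase is adjusted so the signal starts near y0
        proc = nengo.processes.WhiteSignal(period=10.0, high=5, rms=0.5, y0=0)
        stim = nengo.Node(proc, size_out=2)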

Bug fixes

  • The DecoderCache is used as context manager instead of relying on the __del__ method for cleanup. This should solve problems with the cache’s file lock not being removed. It might be necessary to manually remove the index.lock file in the cache directory after upgrading from an older Nengo version. (#1053, #1041, #1048)

  • If the cache index is corrupted, we now fail gracefully by invalidating the cache and continuing rather than raising an exception. (#1110, #1097)

  • The Nnls solver now works for weights. The NnlsL2 solver is improved since we clip values to be non-negative before forming the Gram system. (#1027, #1019)

  • Eliminate memory leak in the parameter system. (#1089, #1090)

  • Allow recurrence of the form a=b, b=a in basal ganglia SPA actions. (#1098, #1099)

  • Support a greater range of Jupyter notebook and ipywidgets versions with the ipynb extensions. (#1088, #1085)

2.1.0 (April 27, 2016)

API changes

  • A new class for representing stateful functions called Process has been added. Node objects are now process-aware, meaning that a process can be used as a node’s output. Unlike non-process callables, processes are properly reset when a simulator is reset. See the processes.ipynb example notebook, or the API documentation for more details. (#590, #652, #945, #955)

  • Spiking LIF neuron models now accept an additional argument, min_voltage. Voltages are clipped such that they do not drop below this value (previously, this was fixed at 0). (#666)

  • The PES learning rule no longer accepts a connection as an argument. Instead, error information is transmitted by making a connection to the learning rule object (e.g., nengo.Connection(error_ensemble, connection.learning_rule)); see the sketch after this list. (#344, #642)

  • The modulatory attribute has been removed from nengo.Connection. This was only used for learning rules to this point, and has been removed in favor of connecting directly to the learning rule. (#642)

  • Connection weights can now be probed with nengo.Probe(conn, 'weights'), and these are always the weights that will change with learning regardless of the type of connection. Previously, either decoders or transform may have changed depending on the type of connection; it is now no longer possible to probe decoders or transform. (#729)

  • A version of the AssociativeMemory SPA module is now available as a stand-alone network in nengo.networks. The AssociativeMemory SPA module also has an updated argument list. (#702)

  • The Product and InputGatedMemory networks no longer accept a config argument. (#814)

  • The EnsembleArray network’s neuron_nodes argument is deprecated. Instead, call the new add_neuron_input or add_neuron_output methods. (#868)

  • The nengo.log utility function now takes a string level parameter to specify any logging level, instead of the old binary debug parameter. Cache messages are logged at DEBUG instead of INFO level. (#883)

  • Reorganised the Associative Memory code, including removing many extra parameters from nengo.networks.assoc_mem.AssociativeMemory and modifying the defaults of others. (#797)

  • Added a close method to Simulator. Simulator can now be used as a context manager. (#857, #739, #859)

  • Most exceptions that Nengo can raise are now custom exception classes that can be found in the nengo.exceptions module. (#781)

  • All Nengo objects (Connection, Ensemble, Node, and Probe) now accept a label and seed argument if they didn’t previously. (#958)

  • In nengo.synapses, filt and filtfilt are deprecated. Every synapse type now has filt and filtfilt methods that filter using the synapse. (#945)

  • Connection objects can now accept a Distribution for the transform argument; the transform matrix will be sampled from that distribution when the model is built. (#979).
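
A minimal sketch tying several of these changes together: a Process as a Node output, min_voltage on LIF, the PES rule with error delivered by connecting to the learning rule object, a 'weights' probe, and the Simulator used as a context manager. The particular network, learning rate, and signal parameters are arbitrary.

    import nengo

    with nengo.Network() as model:
        # A Process as a Node output; it is reset properly when the simulator is reset
        stim = nengo.Node(nengo.processes.WhiteSignal(period=10.0, high=5), size_out=1)

        # min_voltage keeps the membrane voltage from dropping below the given value
        pre = nengo.Ensemble(60, dimensions=1, neuron_type=nengo.LIF(min_voltage=-1))
        post = nengo.Ensemble(60, dimensions=1)
        error = nengo.Ensemble(60, dimensions=1)

        nengo.Connection(stim, pre)
        conn = nengo.Connection(pre, post, learning_rule_type=nengo.PES(learning_rate=1e-4))

        # error = actual - target
        nengo.Connection(post, error)
        nengo.Connection(stim, error, transform=-1)
        # Error is transmitted by connecting to the learning rule object
        nengo.Connection(error, conn.learning_rule)

        # Probe the weights that change with learning
        weights_probe = nengo.Probe(conn, 'weights', sample_every=0.01)

    with nengo.Simulator(model) as sim:
        sim.run(1.0)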

Behavioural changes

  • The sign on the PES learning rule’s error has been flipped to conform with most learning rules, in which error is minimized. The error should be actual - target. (#642)

  • The PES rule’s learning rate is invariant to the number of neurons in the presynaptic population. The effective speed of learning should now be unaffected by changes in the size of the presynaptic population. Existing learning networks may need to be updated; to achieve identical behavior, scale the learning rate by pre.n_neurons / 100. (#643)

  • The probeable attribute of all Nengo objects is now implemented as a property, rather than a configurable parameter. (#671)

  • Node functions receive x as a copied NumPy array (instead of a readonly view). (#716, #722)

  • The SPA Compare module produces a scalar output (instead of a specific vector). (#775, #782)

  • Bias nodes in spa.Cortical, and gate ensembles and connections in spa.Thalamus are now stored in the target modules. (#894, #906)

  • The filt and filtfilt functions on Synapse now use the initial value of the input signal to initialize the filter output by default. This provides more accurate filtering at the beginning of the signal, for signals that do not start at zero. (#945)
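
A minimal sketch of the synapse filt and filtfilt methods and the new default initialization; the step signal is arbitrary.

    import numpy as np
    import nengo

    signal = np.ones(1000)  # a step signal that starts at 1 rather than 0

    # filt and filtfilt are now methods on the synapse object (the module-level
    # functions are deprecated); by default the filter output is initialized to
    # the first value of the signal, so there is no startup transient here
    lowpass = nengo.Lowpass(0.01)
    filtered = lowpass.filt(signal, dt=0.001)
    zero_phase = lowpass.filtfilt(signal, dt=0.001)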

Improvements

  • Added Ensemble.noise attribute, which injects noise directly into neurons according to a stochastic Process (see the sketch after this list). (#590)

  • Added a randomized_svd subsolver for the L2 solvers. This can be much quicker for large numbers of neurons or evaluation points. (#803)

  • Added PES.pre_tau attribute, which sets the time constant on a lowpass filter of the presynaptic activity. (#643)

  • EnsembleArray.add_output now accepts a list of functions to be computed by each ensemble. (#562, #580)

  • LinearFilter now has an analog argument which can be set through its constructor. Linear filters with digital coefficients can be specified by setting analog to False. (#819)

  • Added SqrtBeta distribution, which describes the distribution of semantic pointer elements. (#414, #430)

  • Added Triangle synapse, which filters with a triangular FIR filter. (#660)

  • Added utils.connection.eval_point_decoding function, which provides a connection’s static decoding of a list of evaluation points. (#700)

  • Resetting the Simulator now resets all Processes, meaning the injected random signals and noise are identical between runs, unless the seed is changed (which can be done through Simulator.reset). (#582, #616, #652)

  • An exception is raised if SPA modules are not properly assigned to an SPA attribute. (#730, #791)

  • The Product network is now more accurate. (#651)

  • NumPy arrays can now be used as indices for slicing objects. (#754)

  • Config.configures now accepts multiple classes rather than just one. (#842)

  • Added add method to spa.Actions, which allows actions to be added after the module has been initialized. (#861, #862)

  • Added a SPA wrapper for circular convolution networks, spa.Bind. (#849)

  • Added the Voja (Vector Oja) learning rule type, which updates an ensemble’s encoders to fire selectively for its inputs (see examples/learning/learn_associations.ipynb). (#727)

  • Added a clipped exponential distribution useful for thresholding, in particular in the AssociativeMemory. (#779)

  • Added a cosine similarity distribution, which is the distribution of the cosine of the angle between two random vectors. It is useful for setting intercepts, in particular when using the Voja learning rule. (#768)

  • nengo.synapses.LinearFilter now has an evaluate method to evaluate the filter response to sine waves of given frequencies. This can be used to create Bode plots, for example. (#945)

  • nengo.spa.Vocabulary objects now have a readonly attribute that can be used to disallow adding new semantic pointers. Vocabulary subsets are read-only by default. (#699)

  • Improved performance of the decoder cache by writing all decoders of a network into a single file. (#946)
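
A minimal sketch of two of these additions, neuron noise injected via a Process and per-ensemble output functions on EnsembleArray; the noise parameters and the chosen functions are arbitrary.

    import numpy as np
    import nengo
    from nengo.dists import Gaussian
    from nengo.processes import WhiteNoise

    with nengo.Network() as model:
        # Inject Gaussian white noise directly into the neurons of this ensemble
        ens = nengo.Ensemble(50, dimensions=1,
                             noise=WhiteNoise(dist=Gaussian(mean=0, std=0.1)))

        # add_output accepts one function per ensemble in the array
        ea = nengo.networks.EnsembleArray(50, n_ensembles=3)
        squared = ea.add_output('squared', [np.square, np.square, np.square])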

Bug fixes

  • Fixed issue where setting Connection.seed through the constructor had no effect. (#724)

  • Fixed issue in which learning connections could not be sliced. (#632)

  • Fixed issue when probing scalar transforms. (#667, #671)

  • Fix for SPA actions that route to a module with multiple inputs. (#714)

  • Corrected the rmses values in BuiltConnection.solver_info when using Nnls and NnlsL2 solvers, and the reg argument for NnlsL2. (#839)

  • spa.Vocabulary.create_pointer now respects the specified number of creation attempts, and returns the most dissimilar pointer if none can be found below the similarity threshold. (#817)

  • Probing a Connection’s output now returns the output of that individual Connection, rather than the input to the Connection’s post Ensemble. (#973, #974)

  • Fixed thread-safety of using networks and config in with statements. (#989)

  • The decoder cache will only be used when a seed is specified. (#946)

2.0.4 (April 27, 2016)

Bug fixes

  • Cache now fails gracefully if the legacy.txt file cannot be read. This can occur if a later version of Nengo is used.

2.0.3 (December 7, 2015)

API changes

  • The spa.State object replaces the old spa.Memory and spa.Buffer. These old modules are deprecated and will be removed in 2.2. (#796)

2.0.2 (October 13, 2015)

2.0.2 is a bug fix release to ensure that Nengo continues to work with more recent versions of Jupyter (formerly known as the IPython notebook).

Behavioural changes

  • The IPython notebook progress bar has to be activated with %load_ext nengo.ipynb. (#693)

Improvements

  • Added [progress] section to nengorc which allows setting progress_bar and updater. (#693)

Bug fixes

  • Fix compatibility issues with newer versions of IPython and Jupyter. (#693)

2.0.1 (January 27, 2015)

Behavioural changes

  • Node functions receive t as a float (instead of a NumPy scalar) and x as a readonly NumPy array (instead of a writeable array). (#626, #628)

Improvements

  • rasterplot works with 0 neurons, and generates much smaller PDFs. (#601)

Bug fixes

  • Fix compatibility with NumPy 1.6. (#627)

2.0.0 (January 15, 2015)

Initial release of Nengo 2.0! Supports Python 2.6+ and 3.3+. Thanks to all of the contributors for making this possible!

Download files

Source Distribution: nengo-2.4.0.tar.gz (381.9 kB)

Built Distribution: nengo-2.4.0-py2.py3-none-any.whl (353.6 kB)
