Open Source Brain Model validation

Project description

OSB Model Validation

Tools for automated model validation in Open Source Brain projects, which can also be used for testing model behaviour on many simulation engines, both:

  • on your local machine when developing models
  • on GitHub Actions, to ensure tests pass on every commit.

To see this framework in action, see the example OSB projects below, each of which runs OMV tests on GitHub Actions:

OSB project               Tests on GitHub Actions      Test workflow script
FitzHugh Nagumo           Continuous build using OMV   omv-ci.yml
Auditory cortex network   Continuous build using OMV   omv-ci.yml
SBML Showcase             Continuous build using OMV   omv-ci.yml

This framework has been used to test the 30+ NeuroML and PyNN models described in the Open Source Brain paper (Gleeson et al. 2019), and many more.

Installation

Quick system-wide install:

pip install osb-model-validation

Or you can install from a cloned repository (preferably in a virtual environment):

git clone https://github.com/OpenSourceBrain/osb-model-validation.git
cd osb-model-validation
pip install .

Instructions

Any Open Source Brain project can have automated testing incorporated. For an overview of the various Open Source Brain projects with OMV tests, see https://github.com/OpenSourceBrain/.github/blob/main/testsheet/README.md (note not all of these use OMV yet).

Setting up validation for a model and simulation written in NeuroML2/LEMS requires two additional steps (a sketch of a typical file layout follows this list):

  • write a Model Emergent Properties (MEP) file.
  • write the corresponding OSB Model Test (OMT) file.
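
The layout below is purely illustrative (the directory and file names are hypothetical), but it matches the relative paths used in the examples that follow: the OMT file points at a MEP file one directory up, and at a data file written next to the LEMS simulation file.

example-project/
    example.mep                 # expected values (Model Emergent Properties)
    NeuroML2/
        LEMS_Sim_example.xml    # LEMS simulation file (the "target" of the OMT file)
        .test.example.jnml.omt  # OMT file describing how to run and check the simulation
        example.dat             # data file written by the simulation run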

Writing MEP files

Depending on the size of your model, you can run validation on the full-fledged model, or you can create smaller, stripped-down versions that test particular aspects of the model. Here is an example LEMS simulation file for the FitzHugh-Nagumo model on Open Source Brain: LEMS_FitzHughNagumo.xml. Its corresponding MEP file looks like this:

# Script for running automated tests on OSBrain, see https://github.com/OpenSourceBrain/osb-model-validation

system: The Fitzhugh-Nagumo model, classical parameters

experiments:
  experiment 1, free run:
    expected:
      spike times: [2.24, 39.82, 76.53, 113.24, 149.94, 186.65, 223.36, 260.07, 296.78, 333.49, 370.2]

MEP files describe what output is expected from the simulation run. OMV runs the simulation based on the OMT files (which we'll see below) and compares the output to the values provided in the MEP files. A MEP file can contain multiple experiments, and each project can have multiple MEP files.

The MEP file for the FitzHugh-Nagumo model, shown above, includes a single experiment with the expected spike times that the simulation run should generate.
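
As a rough illustration of a MEP file describing more than one experiment, a sketch could look like the following; the system name, experiment names, and spike times here are entirely hypothetical:

# Hypothetical MEP file with two experiments; all values are illustrative
system: Example cell model

experiments:
  step current, low amplitude:
    expected:
      spike times: [10.5, 35.2, 60.1]
  step current, high amplitude:
    expected:
      spike times: [5.3, 15.8, 26.2, 36.7]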

Writing OMT files

OMT files specify how the model should be simulated for validation. The FitzHugh-Nagumo project on Open Source Brain includes multiple OMT files in its repository.

  • Each OMT file specifies a target file, which is the LEMS simulation file to be run.

  • Each OMT file specifies an engine that OMV supports. Engines are simulators that OMV can use to run the model; see the osb-model-validation repository for the current list. For example, the .test.fhn.jnml.omt file uses the jNeuroML engine, which implies that the model should be run using plain jNeuroML (and not any of the simulators that jNeuroML supports, like NEURON).

# Script for running automated tests on OSB, see https://github.com/OpenSourceBrain/osb-model-validation

target: LEMS_FitzHughNagumo.xml
engine: jNeuroML
mep: ../fhn.mep
experiments:
  experiment 1, free run:
    observables:
      spike times:
        file:
          path: ./fhn.dat
          columns: [0,1]
          scaling: [1000, 1]
        spike detection:
          method: derivative
        tolerance: 2.185696883946938e-16

  • Each OMT file specifies the MEP file that the output of its simulation run should be compared to. In this case, we use the same MEP file for all OMT files.

  • Finally, like MEP files, OMT files also include experiments. The names of the experiments in an OMT file must match those used in the MEP file, so that OMV knows which sections of the two files correspond to each other. In each experiment we specify the observables, which are compared to the values provided in the MEP file. Here we tell OMV that we are observing spike times, which the simulation run records in fhn.dat. We also tell OMV which columns of this file the data should be extracted from, and how each column should be scaled before it is compared to the values in the MEP file. A simulation can save spike times directly, and OMV will compare these to the MEP file; if the simulation instead records membrane potentials, OMV can be asked to detect spikes from this data using the spike detection section. The tolerance key tells OMV the acceptable difference between the expected and observed values. An annotated version of this observables block is sketched after this list.
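
As a rough guide, the observables block above can be read field by field as in the sketch below; the file name and values are hypothetical, and the comments simply restate the description given above:

# Hypothetical, annotated observables block (file name and values are illustrative)
observables:
  spike times:                    # the quantity compared against the MEP file
    file:
      path: ./example_output.dat  # data file written by the simulation run
      columns: [0, 1]             # columns of the file to read (e.g. time and the recorded variable)
      scaling: [1000, 1]          # per-column scaling applied before comparison with the MEP values
    spike detection:
      method: derivative          # detect spikes from a recorded trace, e.g. a membrane potential
    tolerance: 1e-3               # acceptable difference between expected and observed values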

This is the OMT file to validate the same model using the jNeuroML_NEURON engine: .test.fhn.jnmlnrn.omt. It uses the same MEP file, and observes the same recorded information. It only tells OMV to use a different simulation engine:

# Script for running automated tests on OSB, see https://github.com/OpenSourceBrain/osb-model-validation

target: LEMS_FitzHughNagumo.xml
engine: jNeuroML_NEURON
mep: ../fhn.mep
experiments:
  experiment 1, free run:
    observables:
      spike times:
        file:
          path: ./fhn.dat
          columns: [0,1]
          scaling: [1000, 1]
        spike detection:
          method: derivative
        tolerance: 0

Running validation tests locally

If you have installed OMV successfully, you can now run all the OMV tests locally by running this command:

omv all

Adding the -V flag prints more details on successful/failed runs. Learn more about the options that omv can take by running omv --help.

To see what engines are currently installed (and what their versions are) type:

omv list -V

Running validation tests locally lets you quickly check whether any changes you have made to the model alter its expected outcomes. Since you can run the validation with different engines to use different simulators, this also allows you to quickly verify that your model gives similar results across these different tools.

Running tests automatically on GitHub Actions

To use OMV with GitHub Actions, copy an existing workflow file, e.g. https://github.com/OpenSourceBrain/ACnet2/blob/master/.github/workflows/omv-ci.yml, and place it in the .github/workflows/ directory of the repository you want to test.
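
A minimal workflow along these lines might look like the sketch below. This is only an illustration assuming a single Python version; the actual omv-ci.yml files in OSB repositories may differ, for example by testing several Python versions or installing additional simulation engines:

name: Continuous build using OMV

on: [push, pull_request]

jobs:
  test:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4
      - uses: actions/setup-python@v5
        with:
          python-version: "3.10"
      - name: Install OMV
        run: pip install osb-model-validation
      - name: Run the OMV tests in this repository
        run: omv all -V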
