
zope.testrunner 4.1.0

Zope testrunner script.


This package provides a flexible test runner with layer support.

Getting started

zope.testrunner uses buildout. To start, run python bootstrap.py. It will create a number of directories and the bin/buildout script. Next, run bin/buildout. It will create a test script for you. Now, run bin/test to run the zope.testrunner test suite.

zope.testrunner Changelog

4.1.0 (2013-02-07)

  • Replaced deprecated zope.interface.implements usage with equivalent zope.interface.implementer decorator.
  • Dropped support for Python 2.4 and 2.5.
  • Made StartUpFailure compatible with unittest.TextTestRunner() (LP #1118344).

4.0.4 (2011-10-25)

  • Work around sporadic timing-related issues in the subprocess buffering tests. Thanks to Jonathan Ballet for the patch!

4.0.3 (2011-03-17)

  • Added back support for Python <= 2.6 which was broken in 4.0.2.

4.0.2 (2011-03-16)

  • Added back Python 3 support which was broken in 4.0.1.
  • Fixed Unexpected success support by implementing the whole concept.
  • Added support for the new __pycache__ directories in Python 3.2.

4.0.1 (2011-02-21)

  • LP #719369: An Unexpected success (concept introduced in Python 2.7) is no longer handled as success but as failure. This is a workaround. The whole unexpected success concept might be implemented later.

4.0.0 (2010-10-19)

  • Show more information about layers whose setup fails (LP #638153).

4.0.0b5 (2010-07-20)

  • Update fix for LP #221151 to a spelling compatible with Python 2.4.
  • Timestamps are now always included in subunit output (r114849).
  • LP #591309: fix a crash when subunit reports test failures containing UTF8-encoded data.

4.0.0b4 (2010-06-23)

  • Package as a zipfile to work around Python 2.4 distutils bug (no feature changes or bugfixes in zope.testrunner itself).

4.0.0b3 (2010-06-16)

  • LP #221151: keep unittest.TestCase.shortDescription happy by supplying a _testMethodDoc attribute.
  • LP #595052: keep the distribution installable under Python 2.4: its distutils appears to munge the empty __init__.py file in the foo.bar egg used for testing into a directory.
  • LP #580083: fix the bin/test script to run only tests from zope.testrunner.
  • LP #579019: When layers were run in parallel, their tearDown was not called. Additionally, the first layer, which was run in the main thread, did not have its tearDown called either.

4.0.0b2 (2010-05-03)

  • Having 'sampletests' in the MANIFEST.in gave warnings, but doesn't actually seem to include any more files, so I removed it.
  • Moved zope.testing.exceptions to zope.testrunner.exceptions. Now zope.testrunner no longer requires zope.testing except for when running its own tests.

4.0.0b1 (2010-04-29)

  • Initial release of the test runner from zope.testing as its own module.

Detailed Documentation

Test Runner

The testrunner module is used to run automated tests defined using the unittest framework. Its primary feature is that it finds tests by searching directory trees; it doesn't require the manual concatenation of specific test suites. It is highly customizable and should be usable with any project. In addition to finding and running tests, it provides the following features:

  • Test filtering using specifications of:

    o test packages within a larger tree

    o regular expression patterns for test modules

    o regular expression patterns for individual tests

  • Organization of tests into levels and layers

    Sometimes, tests take so long that you don't want to run them on every run of the test runner. Tests can be assigned levels, which are integers increasing from 1; individual tests or test suites specify their level via a 'level' attribute. The test runner can be configured to run only tests at or below a given level by default, and command-line options can set a minimum level for a specific run or request all tests.

    Most tests are unit tests. They don't depend on other facilities, or they set up whatever dependencies they have themselves. For larger applications, it's useful to specify common facilities that a large number of tests share. Making each test set up and tear down these facilities is both inefficient and inconvenient. For this reason, we've introduced the concept of layers, based on the idea of layered application architectures. Software built for a layer should be able to depend on the facilities of lower layers already being set up. For example, Zope defines a component architecture. Much Zope software depends on that architecture. We should be able to treat the component architecture as a layer that we set up once and reuse. Similarly, Zope application software should be able to depend on the Zope application server without having to set it up in each test.

    The test runner introduces test layers, which are objects that can set up environments for the tests within them to use. A layer is set up before the tests in it are run. Individual tests or test suites can specify a layer by setting a layer attribute to the layer object.

  • Reporting

    • progress meter
    • summaries of tests run
  • Analysis of test execution

    • post-mortem debugging of test failures
    • memory leaks
    • code coverage
    • source analysis using pychecker
    • memory errors
    • execution times
    • profiling
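The 'level' attribute described under "Organization of tests into levels and layers" above is ordinary class data on a test case. A minimal sketch (the class names here are illustrative, not from any real project):

```python
import unittest

class FastTest(unittest.TestCase):
    # No 'level' attribute: treated as level 1 and run by default.
    def test_quick(self):
        self.assertEqual(1 + 1, 2)

class SlowIntegrationTest(unittest.TestCase):
    # Level 2: skipped when the runner is configured to run only
    # level-1 tests; included when a higher minimum level is
    # requested on the command line.
    level = 2
    def test_expensive(self):
        self.assertEqual(2 * 2, 4)
```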

Simple Usage

The test runner is an importable module, used by providing scripts that import and invoke its run method. The testrunner module is controlled via command-line options. Test scripts supply base and default options by providing a list of default command-line options that are processed before the user-supplied command-line options.

Typically, a test script does two things:

  • Adds the directory containing the zope package to the Python path.

  • Calls the test runner with default arguments and arguments supplied to the script.

    Normally, it just passes default/setup arguments. The test runner uses sys.argv to get the user's input.
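Put together, a minimal test script performing these two steps might look like the following sketch; the src directory and the '^tests$' pattern are illustrative assumptions, not conventions required by zope.testrunner:

```python
import os.path
import sys

# Step 1: add the directory containing the packages under test to the
# Python path (a hypothetical 'src' directory next to this script).
here = os.path.dirname(os.path.abspath(sys.argv[0]))
src = os.path.join(here, 'src')
sys.path.insert(0, src)

# Base/default options, processed before the user's command-line options.
defaults = [
    '--path', src,
    '--tests-pattern', '^tests$',  # assumed module-naming convention
]

def main():
    # Step 2: invoke the runner; it reads the user's options from sys.argv.
    from zope import testrunner
    return testrunner.run_internal(defaults)
```

In practice such a script is generated for you, e.g. as bin/test by buildout, as in the Getting started section above.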

The testrunner-ex subdirectory contains a number of sample packages with tests. Let's run the tests found there. First, though, we'll set up our default options:

>>> import os.path
>>> directory_with_tests = os.path.join(this_directory, 'testrunner-ex')
>>> defaults = [
...     '--path', directory_with_tests,
...     '--tests-pattern', '^sampletestsf?$',
...     ]

The default options are used by a script to customize the test runner for a particular application. In this case, we use two options:

path
Set the path where the test runner should look for tests. This path is also added to the Python path.
tests-pattern
Tell the test runner how to recognize modules or packages containing tests.
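The tests-pattern value used in these defaults is an ordinary regular expression. For instance, '^sampletestsf?$' (the trailing f? makes the final letter optional) matches exactly the module names sampletests and sampletestsf:

```python
import re

# The pattern from the defaults above, anchored at both ends.
pattern = re.compile('^sampletestsf?$')

print(bool(pattern.match('sampletests')))    # True
print(bool(pattern.match('sampletestsf')))   # True
print(bool(pattern.match('sampletests2')))   # False: '$' rejects the trailing '2'
```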

Now, if we run the tests without any other options:

>>> from zope import testrunner
>>> import sys
>>> sys.argv = ['test']
>>> testrunner.run_internal(defaults)
Running samplelayers.Layer1 tests:
  Set up samplelayers.Layer1 in N.NNN seconds.
  Ran 9 tests with 0 failures and 0 errors in N.NNN seconds.
Running samplelayers.Layer11 tests:
  Set up samplelayers.Layer11 in N.NNN seconds.
  Ran 26 tests with 0 failures and 0 errors in N.NNN seconds.
Running samplelayers.Layer111 tests:
  Set up samplelayers.Layerx in N.NNN seconds.
  Set up samplelayers.Layer111 in N.NNN seconds.
  Ran 26 tests with 0 failures and 0 errors in N.NNN seconds.
Running samplelayers.Layer112 tests:
  Tear down samplelayers.Layer111 in N.NNN seconds.
  Set up samplelayers.Layer112 in N.NNN seconds.
  Ran 26 tests with 0 failures and 0 errors in N.NNN seconds.
Running samplelayers.Layer12 tests:
  Tear down samplelayers.Layer112 in N.NNN seconds.
  Tear down samplelayers.Layerx in N.NNN seconds.
  Tear down samplelayers.Layer11 in N.NNN seconds.
  Set up samplelayers.Layer12 in N.NNN seconds.
  Ran 26 tests with 0 failures and 0 errors in N.NNN seconds.
Running samplelayers.Layer121 tests:
  Set up samplelayers.Layer121 in N.NNN seconds.
  Ran 26 tests with 0 failures and 0 errors in N.NNN seconds.
Running samplelayers.Layer122 tests:
  Tear down samplelayers.Layer121 in N.NNN seconds.
  Set up samplelayers.Layer122 in N.NNN seconds.
  Ran 26 tests with 0 failures and 0 errors in N.NNN seconds.
Running zope.testrunner.layer.UnitTests tests:
  Tear down samplelayers.Layer122 in N.NNN seconds.
  Tear down samplelayers.Layer12 in N.NNN seconds.
  Tear down samplelayers.Layer1 in N.NNN seconds.
  Set up zope.testrunner.layer.UnitTests in N.NNN seconds.
  Ran 156 tests with 0 failures and 0 errors in N.NNN seconds.
Tearing down left over layers:
  Tear down zope.testrunner.layer.UnitTests in N.NNN seconds.
Total: 321 tests, 0 failures, 0 errors in N.NNN seconds.
False

we see the normal testrunner output, which summarizes the tests run for each layer. For each layer, we see which layers had to be torn down or set up in order to run it, and the number of tests run, with their results.

The test runner returns a boolean indicating whether there were errors. In this example, there were no errors, so it returned False.

(Of course, the times shown in these examples are just examples. Times will vary depending on system speed.)

Layers

A layer is an object providing setUp and tearDown methods, used to set up and tear down the environment the layer provides. It may also provide testSetUp and testTearDown methods, used to reset that environment between individual tests.

Layers are generally implemented as classes using class methods.

>>> class BaseLayer:
...     def setUp(cls):
...         log('BaseLayer.setUp')
...     setUp = classmethod(setUp)
...
...     def tearDown(cls):
...         log('BaseLayer.tearDown')
...     tearDown = classmethod(tearDown)
...
...     def testSetUp(cls):
...         log('BaseLayer.testSetUp')
...     testSetUp = classmethod(testSetUp)
...
...     def testTearDown(cls):
...         log('BaseLayer.testTearDown')
...     testTearDown = classmethod(testTearDown)
...

Layers can extend other layers. Note that they do not explicitly invoke the setUp and tearDown methods of the layers they extend - the test runner does this for us, in order to minimize the number of invocations.

>>> class TopLayer(BaseLayer):
...     def setUp(cls):
...         log('TopLayer.setUp')
...     setUp = classmethod(setUp)
...
...     def tearDown(cls):
...         log('TopLayer.tearDown')
...     tearDown = classmethod(tearDown)
...
...     def testSetUp(cls):
...         log('TopLayer.testSetUp')
...     testSetUp = classmethod(testSetUp)
...
...     def testTearDown(cls):
...         log('TopLayer.testTearDown')
...     testTearDown = classmethod(testTearDown)
...

Tests or test suites specify what layer they need by storing a reference in the 'layer' attribute.

>>> import unittest
>>> class TestSpecifyingBaseLayer(unittest.TestCase):
...     'This TestCase explicitly specifies its layer'
...     layer = BaseLayer
...     name = 'TestSpecifyingBaseLayer' # For testing only
...
...     def setUp(self):
...         log('TestSpecifyingBaseLayer.setUp')
...
...     def tearDown(self):
...         log('TestSpecifyingBaseLayer.tearDown')
...
...     def test1(self):
...         log('TestSpecifyingBaseLayer.test1')
...
...     def test2(self):
...         log('TestSpecifyingBaseLayer.test2')
...
>>> class TestSpecifyingNoLayer(unittest.TestCase):
...     'This TestCase specifies no layer'
...     name = 'TestSpecifyingNoLayer' # For testing only
...     def setUp(self):
...         log('TestSpecifyingNoLayer.setUp')
...
...     def tearDown(self):
...         log('TestSpecifyingNoLayer.tearDown')
...
...     def test1(self):
...         log('TestSpecifyingNoLayer.test')
...
...     def test2(self):
...         log('TestSpecifyingNoLayer.test')
...

Create a TestSuite containing two test suites, one for each of TestSpecifyingBaseLayer and TestSpecifyingNoLayer.

>>> umbrella_suite = unittest.TestSuite()
>>> umbrella_suite.addTest(unittest.makeSuite(TestSpecifyingBaseLayer))
>>> no_layer_suite = unittest.makeSuite(TestSpecifyingNoLayer)
>>> umbrella_suite.addTest(no_layer_suite)

Before we can run the tests, we need to set up some helpers.

>>> from zope.testrunner import options
>>> from zope.testing.loggingsupport import InstalledHandler
>>> import logging
>>> log_handler = InstalledHandler('zope.testrunner.tests')
>>> def log(msg):
...     logging.getLogger('zope.testrunner.tests').info(msg)
>>> def fresh_options():
...     opts = options.get_options(['--test-filter', '.*'])
...     opts.resume_layer = None
...     opts.resume_number = 0
...     return opts

Now we run the tests. Note that BaseLayer was set up before and torn down after the TestSpecifyingBaseLayer tests, but was not set up when the TestSpecifyingNoLayer tests were run.

>>> from zope.testrunner.runner import Runner
>>> runner = Runner(options=fresh_options(), args=[], found_suites=[umbrella_suite])
>>> succeeded = runner.run()
Running ...BaseLayer tests:
  Set up ...BaseLayer in N.NNN seconds.
  Ran 2 tests with 0 failures and 0 errors in N.NNN seconds.
Running zope.testrunner.layer.UnitTests tests:
  Tear down ...BaseLayer in N.NNN seconds.
  Set up zope.testrunner.layer.UnitTests in N.NNN seconds.
  Ran 2 tests with 0 failures and 0 errors in N.NNN seconds.
Tearing down left over layers:
  Tear down zope.testrunner.layer.UnitTests in N.NNN seconds.
Total: 4 tests, 0 failures, 0 errors in N.NNN seconds.

Now let's specify a layer on the suite containing TestSpecifyingNoLayer and run the tests again. This demonstrates the other method of specifying a layer, and is generally how you specify what layer doctests need.

>>> no_layer_suite.layer = BaseLayer
>>> runner = Runner(options=fresh_options(), args=[], found_suites=[umbrella_suite])
>>> succeeded = runner.run()
Running ...BaseLayer tests:
  Set up ...BaseLayer in N.NNN seconds.
  Ran 4 tests with 0 failures and 0 errors in N.NNN seconds.
Tearing down left over layers:
  Tear down ...BaseLayer in N.NNN seconds.

Clear our logged output, as we want to inspect it shortly.

>>> log_handler.clear()

Now let's also specify a layer on the TestSpecifyingNoLayer class and rerun the tests. This demonstrates that the most specific layer is used. It also shows the behavior of nested layers: because TopLayer extends BaseLayer, both the BaseLayer and TopLayer environments are set up when the TestSpecifyingNoLayer tests are run.

>>> TestSpecifyingNoLayer.layer = TopLayer
>>> runner = Runner(options=fresh_options(), args=[], found_suites=[umbrella_suite])
>>> succeeded = runner.run()
Running ...BaseLayer tests:
  Set up ...BaseLayer in N.NNN seconds.
  Ran 2 tests with 0 failures and 0 errors in N.NNN seconds.
Running ...TopLayer tests:
  Set up ...TopLayer in N.NNN seconds.
  Ran 2 tests with 0 failures and 0 errors in N.NNN seconds.
Tearing down left over layers:
  Tear down ...TopLayer in N.NNN seconds.
  Tear down ...BaseLayer in N.NNN seconds.
Total: 4 tests, 0 failures, 0 errors in N.NNN seconds.

If we inspect our trace of what methods got called in what order, we can see that the layer setup and teardown methods only got called once. We can also see that the layer's test setup and teardown methods got called for each test using that layer in the right order.

>>> def report():
...     print "Report:"
...     for record in log_handler.records:
...         print record.getMessage()
>>> report()
Report:
BaseLayer.setUp
BaseLayer.testSetUp
TestSpecifyingBaseLayer.setUp
TestSpecifyingBaseLayer.test1
TestSpecifyingBaseLayer.tearDown
BaseLayer.testTearDown
BaseLayer.testSetUp
TestSpecifyingBaseLayer.setUp
TestSpecifyingBaseLayer.test2
TestSpecifyingBaseLayer.tearDown
BaseLayer.testTearDown
TopLayer.setUp
BaseLayer.testSetUp
TopLayer.testSetUp
TestSpecifyingNoLayer.setUp
TestSpecifyingNoLayer.test
TestSpecifyingNoLayer.tearDown
TopLayer.testTearDown
BaseLayer.testTearDown
BaseLayer.testSetUp
TopLayer.testSetUp
TestSpecifyingNoLayer.setUp
TestSpecifyingNoLayer.test
TestSpecifyingNoLayer.tearDown
TopLayer.testTearDown
BaseLayer.testTearDown
TopLayer.tearDown
BaseLayer.tearDown

Now let's stack a few more layers to ensure that the setUp and tearDown methods are called in the correct order.

>>> from zope.testrunner.find import name_from_layer
>>> class A(object):
...     def setUp(cls):
...         log('%s.setUp' % name_from_layer(cls))
...     setUp = classmethod(setUp)
...
...     def tearDown(cls):
...         log('%s.tearDown' % name_from_layer(cls))
...     tearDown = classmethod(tearDown)
...
...     def testSetUp(cls):
...         log('%s.testSetUp' % name_from_layer(cls))
...     testSetUp = classmethod(testSetUp)
...
...     def testTearDown(cls):
...         log('%s.testTearDown' % name_from_layer(cls))
...     testTearDown = classmethod(testTearDown)
...
>>> class B(A): pass
>>> class C(B): pass
>>> class D(A): pass
>>> class E(D): pass
>>> class F(C,E): pass
>>> class DeepTest(unittest.TestCase):
...     layer = F
...     def test(self):
...         pass
>>> suite = unittest.makeSuite(DeepTest)
>>> log_handler.clear()
>>> runner = Runner(options=fresh_options(), args=[], found_suites=[suite])
>>> succeeded = runner.run() #doctest: +ELLIPSIS
Running ...F tests:
  Set up ...A in N.NNN seconds.
  Set up ...B in N.NNN seconds.
  Set up ...C in N.NNN seconds.
  Set up ...D in N.NNN seconds.
  Set up ...E in N.NNN seconds.
  Set up ...F in N.NNN seconds.
  Ran 1 tests with 0 failures and 0 errors in N.NNN seconds.
Tearing down left over layers:
  Tear down ...F in N.NNN seconds.
  Tear down ...E in N.NNN seconds.
  Tear down ...D in N.NNN seconds.
  Tear down ...C in N.NNN seconds.
  Tear down ...B in N.NNN seconds.
  Tear down ...A in N.NNN seconds.
>>> report() #doctest: +ELLIPSIS
Report:
...A.setUp
...B.setUp
...C.setUp
...D.setUp
...E.setUp
...F.setUp
...A.testSetUp
...B.testSetUp
...C.testSetUp
...D.testSetUp
...E.testSetUp
...F.testSetUp
...F.testTearDown
...E.testTearDown
...D.testTearDown
...C.testTearDown
...B.testTearDown
...A.testTearDown
...F.tearDown
...E.tearDown
...D.tearDown
...C.tearDown
...B.tearDown
...A.tearDown

Layer Selection

We can select which layers to run using the --layer option:

>>> import os.path, sys
>>> directory_with_tests = os.path.join(this_directory, 'testrunner-ex')
>>> defaults = [
...     '--path', directory_with_tests,
...     '--tests-pattern', '^sampletestsf?$',
...     ]
>>> sys.argv = 'test --layer 112 --layer Unit'.split()
>>> from zope import testrunner
>>> testrunner.run_internal(defaults)
Running samplelayers.Layer112 tests:
  Set up samplelayers.Layerx in N.NNN seconds.
  Set up samplelayers.Layer1 in N.NNN seconds.
  Set up samplelayers.Layer11 in N.NNN seconds.
  Set up samplelayers.Layer112 in N.NNN seconds.
  Ran 26 tests with 0 failures and 0 errors in N.NNN seconds.
Running zope.testrunner.layer.UnitTests tests:
  Tear down samplelayers.Layer112 in N.NNN seconds.
  Tear down samplelayers.Layerx in N.NNN seconds.
  Tear down samplelayers.Layer11 in N.NNN seconds.
  Tear down samplelayers.Layer1 in N.NNN seconds.
  Set up zope.testrunner.layer.UnitTests in N.NNN seconds.
  Ran 156 tests with 0 failures and 0 errors in N.NNN seconds.
Tearing down left over layers:
  Tear down zope.testrunner.layer.UnitTests in N.NNN seconds.
Total: 182 tests, 0 failures, 0 errors in N.NNN seconds.
False

We can also specify that we want to run only the unit tests:

>>> sys.argv = 'test -u'.split()
>>> testrunner.run_internal(defaults)
Running zope.testrunner.layer.UnitTests tests:
  Set up zope.testrunner.layer.UnitTests in N.NNN seconds.
  Ran 156 tests with 0 failures and 0 errors in N.NNN seconds.
Tearing down left over layers:
  Tear down zope.testrunner.layer.UnitTests in N.NNN seconds.
False

Or that we want to run all of the tests except for the unit tests:

>>> sys.argv = 'test -f'.split()
>>> testrunner.run_internal(defaults)
Running samplelayers.Layer1 tests:
  Set up samplelayers.Layer1 in N.NNN seconds.
  Ran 9 tests with 0 failures and 0 errors in N.NNN seconds.
Running samplelayers.Layer11 tests:
  Set up samplelayers.Layer11 in N.NNN seconds.
  Ran 26 tests with 0 failures and 0 errors in N.NNN seconds.
Running samplelayers.Layer111 tests:
  Set up samplelayers.Layerx in N.NNN seconds.
  Set up samplelayers.Layer111 in N.NNN seconds.
  Ran 26 tests with 0 failures and 0 errors in N.NNN seconds.
Running samplelayers.Layer112 tests:
  Tear down samplelayers.Layer111 in N.NNN seconds.
  Set up samplelayers.Layer112 in N.NNN seconds.
  Ran 26 tests with 0 failures and 0 errors in N.NNN seconds.
Running samplelayers.Layer12 tests:
  Tear down samplelayers.Layer112 in N.NNN seconds.
  Tear down samplelayers.Layerx in N.NNN seconds.
  Tear down samplelayers.Layer11 in N.NNN seconds.
  Set up samplelayers.Layer12 in N.NNN seconds.
  Ran 26 tests with 0 failures and 0 errors in N.NNN seconds.
Running samplelayers.Layer121 tests:
  Set up samplelayers.Layer121 in N.NNN seconds.
  Ran 26 tests with 0 failures and 0 errors in N.NNN seconds.
Running samplelayers.Layer122 tests:
  Tear down samplelayers.Layer121 in N.NNN seconds.
  Set up samplelayers.Layer122 in N.NNN seconds.
  Ran 26 tests with 0 failures and 0 errors in N.NNN seconds.
Tearing down left over layers:
  Tear down samplelayers.Layer122 in N.NNN seconds.
  Tear down samplelayers.Layer12 in N.NNN seconds.
  Tear down samplelayers.Layer1 in N.NNN seconds.
Total: 165 tests, 0 failures, 0 errors in N.NNN seconds.
False

Or we can explicitly say that we want both unit and non-unit tests.

>>> sys.argv = 'test -uf'.split()
>>> testrunner.run_internal(defaults)
Running samplelayers.Layer1 tests:
  Set up samplelayers.Layer1 in N.NNN seconds.
  Ran 9 tests with 0 failures and 0 errors in N.NNN seconds.
Running samplelayers.Layer11 tests:
  Set up samplelayers.Layer11 in N.NNN seconds.
  Ran 26 tests with 0 failures and 0 errors in N.NNN seconds.
Running samplelayers.Layer111 tests:
  Set up samplelayers.Layerx in N.NNN seconds.
  Set up samplelayers.Layer111 in N.NNN seconds.
  Ran 26 tests with 0 failures and 0 errors in N.NNN seconds.
Running samplelayers.Layer112 tests:
  Tear down samplelayers.Layer111 in N.NNN seconds.
  Set up samplelayers.Layer112 in N.NNN seconds.
  Ran 26 tests with 0 failures and 0 errors in N.NNN seconds.
Running samplelayers.Layer12 tests:
  Tear down samplelayers.Layer112 in N.NNN seconds.
  Tear down samplelayers.Layerx in N.NNN seconds.
  Tear down samplelayers.Layer11 in N.NNN seconds.
  Set up samplelayers.Layer12 in N.NNN seconds.
  Ran 26 tests with 0 failures and 0 errors in N.NNN seconds.
Running samplelayers.Layer121 tests:
  Set up samplelayers.Layer121 in N.NNN seconds.
  Ran 26 tests with 0 failures and 0 errors in N.NNN seconds.
Running samplelayers.Layer122 tests:
  Tear down samplelayers.Layer121 in N.NNN seconds.
  Set up samplelayers.Layer122 in N.NNN seconds.
  Ran 26 tests with 0 failures and 0 errors in N.NNN seconds.
Running zope.testrunner.layer.UnitTests tests:
  Tear down samplelayers.Layer122 in N.NNN seconds.
  Tear down samplelayers.Layer12 in N.NNN seconds.
  Tear down samplelayers.Layer1 in N.NNN seconds.
  Set up zope.testrunner.layer.UnitTests in N.NNN seconds.
  Ran 156 tests with 0 failures and 0 errors in N.NNN seconds.
Tearing down left over layers:
  Tear down zope.testrunner.layer.UnitTests in N.NNN seconds.
Total: 321 tests, 0 failures, 0 errors in N.NNN seconds.
False

It is possible to force the layers to run in subprocesses and parallelize them.

>>> sys.argv = [testrunner_script, '-j2']
>>> testrunner.run_internal(defaults)
Running samplelayers.Layer1 tests:
  Set up samplelayers.Layer1 in N.NNN seconds.
  Ran 9 tests with 0 failures and 0 errors in N.NNN seconds.
Running samplelayers.Layer11 tests:
  Running in a subprocess.
  Set up samplelayers.Layer1 in N.NNN seconds.
  Set up samplelayers.Layer11 in N.NNN seconds.
  Ran 26 tests with 0 failures and 0 errors in N.NNN seconds.
  Tear down samplelayers.Layer11 in N.NNN seconds.
  Tear down samplelayers.Layer1 in N.NNN seconds.
Running samplelayers.Layer111 tests:
  Running in a subprocess.
  Set up samplelayers.Layerx in N.NNN seconds.
  Set up samplelayers.Layer1 in N.NNN seconds.
  Set up samplelayers.Layer11 in N.NNN seconds.
  Set up samplelayers.Layer111 in N.NNN seconds.
  Ran 26 tests with 0 failures and 0 errors in N.NNN seconds.
  Tear down samplelayers.Layer111 in N.NNN seconds.
  Tear down samplelayers.Layerx in N.NNN seconds.
  Tear down samplelayers.Layer11 in N.NNN seconds.
  Tear down samplelayers.Layer1 in N.NNN seconds.
Running samplelayers.Layer112 tests:
  Running in a subprocess.
  Set up samplelayers.Layerx in N.NNN seconds.
  Set up samplelayers.Layer1 in N.NNN seconds.
  Set up samplelayers.Layer11 in N.NNN seconds.
  Set up samplelayers.Layer112 in N.NNN seconds.
  Ran 26 tests with 0 failures and 0 errors in N.NNN seconds.
  Tear down samplelayers.Layer112 in N.NNN seconds.
  Tear down samplelayers.Layerx in N.NNN seconds.
  Tear down samplelayers.Layer11 in N.NNN seconds.
  Tear down samplelayers.Layer1 in N.NNN seconds.
Running samplelayers.Layer12 tests:
  Running in a subprocess.
  Set up samplelayers.Layer1 in N.NNN seconds.
  Set up samplelayers.Layer12 in N.NNN seconds.
  Ran 26 tests with 0 failures and 0 errors in N.NNN seconds.
  Tear down samplelayers.Layer12 in N.NNN seconds.
  Tear down samplelayers.Layer1 in N.NNN seconds.
Running samplelayers.Layer121 tests:
  Running in a subprocess.
  Set up samplelayers.Layer1 in N.NNN seconds.
  Set up samplelayers.Layer12 in N.NNN seconds.
  Set up samplelayers.Layer121 in N.NNN seconds.
  Ran 26 tests with 0 failures and 0 errors in N.NNN seconds.
  Tear down samplelayers.Layer121 in N.NNN seconds.
  Tear down samplelayers.Layer12 in N.NNN seconds.
  Tear down samplelayers.Layer1 in N.NNN seconds.
Running samplelayers.Layer122 tests:
  Running in a subprocess.
  Set up samplelayers.Layer1 in N.NNN seconds.
  Set up samplelayers.Layer12 in N.NNN seconds.
  Set up samplelayers.Layer122 in N.NNN seconds.
  Ran 26 tests with 0 failures and 0 errors in N.NNN seconds.
  Tear down samplelayers.Layer122 in N.NNN seconds.
  Tear down samplelayers.Layer12 in N.NNN seconds.
  Tear down samplelayers.Layer1 in N.NNN seconds.
Running zope.testrunner.layer.UnitTests tests:
  Running in a subprocess.
  Set up zope.testrunner.layer.UnitTests in N.NNN seconds.
  Ran 156 tests with 0 failures and 0 errors in N.NNN seconds.
  Tear down zope.testrunner.layer.UnitTests in N.NNN seconds.
Tearing down left over layers:
  Tear down samplelayers.Layer1 in N.NNN seconds.
Total: 321 tests, 0 failures, 0 errors in N.NNN seconds.
False

Passing arguments explicitly

In most of the examples here, we set up sys.argv; in normal usage, the test runner just uses sys.argv. It is also possible to pass arguments explicitly.

>>> import os.path
>>> directory_with_tests = os.path.join(this_directory, 'testrunner-ex')
>>> defaults = [
...     '--path', directory_with_tests,
...     '--tests-pattern', '^sampletestsf?$',
...     ]
>>> from zope import testrunner
>>> testrunner.run_internal(defaults, 'test --layer 111'.split())
Running samplelayers.Layer111 tests:
  Set up samplelayers.Layerx in N.NNN seconds.
  Set up samplelayers.Layer1 in N.NNN seconds.
  Set up samplelayers.Layer11 in N.NNN seconds.
  Set up samplelayers.Layer111 in N.NNN seconds.
  Ran 26 tests with 0 failures and 0 errors in N.NNN seconds.
Tearing down left over layers:
  Tear down samplelayers.Layer111 in N.NNN seconds.
  Tear down samplelayers.Layerx in N.NNN seconds.
  Tear down samplelayers.Layer11 in N.NNN seconds.
  Tear down samplelayers.Layer1 in N.NNN seconds.
False

If an option already has a default value, passing a different default overrides it.

For example, --list-tests defaults to being turned off, but if we pass in a different default, that one takes effect.

>>> defaults = [
...     '--list-tests',
...     '--path', directory_with_tests,
...     '--tests-pattern', '^sampletestsf?$',
...     ]
>>> from zope import testrunner
>>> testrunner.run_internal(defaults, 'test --layer 111'.split())
Listing samplelayers.Layer111 tests:
  test_x1 (sample1.sampletests.test111.TestA)
  test_y0 (sample1.sampletests.test111.TestA)
  test_z0 (sample1.sampletests.test111.TestA)
  test_x0 (sample1.sampletests.test111.TestB)
  test_y1 (sample1.sampletests.test111.TestB)
  test_z0 (sample1.sampletests.test111.TestB)
  test_1 (sample1.sampletests.test111.TestNotMuch)
  test_2 (sample1.sampletests.test111.TestNotMuch)
  test_3 (sample1.sampletests.test111.TestNotMuch)
  test_x0 (sample1.sampletests.test111)
  test_y0 (sample1.sampletests.test111)
  test_z1 (sample1.sampletests.test111)
  /home/benji/workspace/zope.testrunner/1/src/zope/testing/testrunner/testrunner-ex/sample1/sampletests/../../sampletestsl.txt
  test_x1 (sampletests.test111.TestA)
  test_y0 (sampletests.test111.TestA)
  test_z0 (sampletests.test111.TestA)
  test_x0 (sampletests.test111.TestB)
  test_y1 (sampletests.test111.TestB)
  test_z0 (sampletests.test111.TestB)
  test_1 (sampletests.test111.TestNotMuch)
  test_2 (sampletests.test111.TestNotMuch)
  test_3 (sampletests.test111.TestNotMuch)
  test_x0 (sampletests.test111)
  test_y0 (sampletests.test111)
  test_z1 (sampletests.test111)
  /home/benji/workspace/zope.testrunner/1/src/zope/testing/testrunner/testrunner-ex/sampletests/../sampletestsl.txt
False

Verbose Output

Normally, we just get a summary. The --verbose (-v) option can be repeated to get progressively more information.

If we use a single --verbose (-v) option, we get a dot printed for each test:

>>> import os.path, sys
>>> directory_with_tests = os.path.join(this_directory, 'testrunner-ex')
>>> defaults = [
...     '--path', directory_with_tests,
...     '--tests-pattern', '^sampletestsf?$',
...     ]
>>> sys.argv = 'test --layer 122 -v'.split()
>>> from zope import testrunner
>>> testrunner.run_internal(defaults)
Running tests at level 1
Running samplelayers.Layer122 tests:
  Set up samplelayers.Layer1 in 0.000 seconds.
  Set up samplelayers.Layer12 in 0.000 seconds.
  Set up samplelayers.Layer122 in 0.000 seconds.
  Running:
    ..................................
  Ran 26 tests with 0 failures and 0 errors in 0.007 seconds.
Tearing down left over layers:
  Tear down samplelayers.Layer122 in 0.000 seconds.
  Tear down samplelayers.Layer12 in 0.000 seconds.
  Tear down samplelayers.Layer1 in 0.000 seconds.
False

If there are more than 50 tests, the dots are printed in groups of 50:

>>> sys.argv = 'test -uv'.split()
>>> testrunner.run_internal(defaults)
Running tests at level 1
Running zope.testrunner.layer.UnitTests tests:
  Set up zope.testrunner.layer.UnitTests in N.NNN seconds.
  Running:
................................................................................................................................................................................................
  Ran 156 tests with 0 failures and 0 errors in 0.035 seconds.
Tearing down left over layers:
  Tear down zope.testrunner.layer.UnitTests in N.NNN seconds.
False

If the --verbose (-v) option is used twice, then the name and location of each test is printed as it is run:

>>> sys.argv = 'test --layer 122 -vv'.split()
>>> testrunner.run_internal(defaults)
Running tests at level 1
Running samplelayers.Layer122 tests:
  Set up samplelayers.Layer1 in 0.000 seconds.
  Set up samplelayers.Layer12 in 0.000 seconds.
  Set up samplelayers.Layer122 in 0.000 seconds.
  Running:
    test_x1 (sample1.sampletests.test122.TestA)
    test_y0 (sample1.sampletests.test122.TestA)
    test_z0 (sample1.sampletests.test122.TestA)
    test_x0 (sample1.sampletests.test122.TestB)
    test_y1 (sample1.sampletests.test122.TestB)
    test_z0 (sample1.sampletests.test122.TestB)
    test_1 (sample1.sampletests.test122.TestNotMuch)
    test_2 (sample1.sampletests.test122.TestNotMuch)
    test_3 (sample1.sampletests.test122.TestNotMuch)
    test_x0 (sample1.sampletests.test122)
    test_y0 (sample1.sampletests.test122)
    test_z1 (sample1.sampletests.test122)
    testrunner-ex/sample1/sampletests/../../sampletestsl.txt
    test_x1 (sampletests.test122.TestA)
    test_y0 (sampletests.test122.TestA)
    test_z0 (sampletests.test122.TestA)
    test_x0 (sampletests.test122.TestB)
    test_y1 (sampletests.test122.TestB)
    test_z0 (sampletests.test122.TestB)
    test_1 (sampletests.test122.TestNotMuch)
    test_2 (sampletests.test122.TestNotMuch)
    test_3 (sampletests.test122.TestNotMuch)
    test_x0 (sampletests.test122)
    test_y0 (sampletests.test122)
    test_z1 (sampletests.test122)
    testrunner-ex/sampletests/../sampletestsl.txt
  Ran 26 tests with 0 failures and 0 errors in 0.009 seconds.
Tearing down left over layers:
  Tear down samplelayers.Layer122 in 0.000 seconds.
  Tear down samplelayers.Layer12 in 0.000 seconds.
  Tear down samplelayers.Layer1 in 0.000 seconds.
False

If the --verbose (-v) option is used three times, then individual test-execution times are also printed:

>>> sys.argv = 'test --layer 122 -vvv'.split()
>>> testrunner.run_internal(defaults)
Running tests at level 1
Running samplelayers.Layer122 tests:
  Set up samplelayers.Layer1 in 0.000 seconds.
  Set up samplelayers.Layer12 in 0.000 seconds.
  Set up samplelayers.Layer122 in 0.000 seconds.
  Running:
    test_x1 (sample1.sampletests.test122.TestA) (0.000 s)
    test_y0 (sample1.sampletests.test122.TestA) (0.000 s)
    test_z0 (sample1.sampletests.test122.TestA) (0.000 s)
    test_x0 (sample1.sampletests.test122.TestB) (0.000 s)
    test_y1 (sample1.sampletests.test122.TestB) (0.000 s)
    test_z0 (sample1.sampletests.test122.TestB) (0.000 s)
    test_1 (sample1.sampletests.test122.TestNotMuch) (0.000 s)
    test_2 (sample1.sampletests.test122.TestNotMuch) (0.000 s)
    test_3 (sample1.sampletests.test122.TestNotMuch) (0.000 s)
    test_x0 (sample1.sampletests.test122) (0.001 s)
    test_y0 (sample1.sampletests.test122) (0.001 s)
    test_z1 (sample1.sampletests.test122) (0.001 s)
    testrunner-ex/sample1/sampletests/../../sampletestsl.txt (0.001 s)
    test_x1 (sampletests.test122.TestA) (0.000 s)
    test_y0 (sampletests.test122.TestA) (0.000 s)
    test_z0 (sampletests.test122.TestA) (0.000 s)
    test_x0 (sampletests.test122.TestB) (0.000 s)
    test_y1 (sampletests.test122.TestB) (0.000 s)
    test_z0 (sampletests.test122.TestB) (0.000 s)
    test_1 (sampletests.test122.TestNotMuch) (0.000 s)
    test_2 (sampletests.test122.TestNotMuch) (0.000 s)
    test_3 (sampletests.test122.TestNotMuch) (0.000 s)
    test_x0 (sampletests.test122) (0.001 s)
    test_y0 (sampletests.test122) (0.001 s)
    test_z1 (sampletests.test122) (0.001 s)
    testrunner-ex/sampletests/../sampletestsl.txt (0.001 s)
  Ran 26 tests with 0 failures and 0 errors in 0.009 seconds.
Tearing down left over layers:
  Tear down samplelayers.Layer122 in 0.000 seconds.
  Tear down samplelayers.Layer12 in 0.000 seconds.
  Tear down samplelayers.Layer1 in 0.000 seconds.
False

Quiet output

The --quiet (-q) option cancels all verbose options. It's useful when the default verbosity is non-zero:

>>> defaults = [
...     '--path', directory_with_tests,
...     '--tests-pattern', '^sampletestsf?$',
...     '-v'
...     ]
>>> sys.argv = 'test -q -u'.split()
>>> testrunner.run_internal(defaults)
Running zope.testrunner.layer.UnitTests tests:
  Set up zope.testrunner.layer.UnitTests in N.NNN seconds.
  Ran 156 tests with 0 failures and 0 errors in 0.034 seconds.
Tearing down left over layers:
  Tear down zope.testrunner.layer.UnitTests in N.NNN seconds.
False

Test Selection

We've already seen that we can select tests by layer. There are three other ways we can select tests. We can select tests by package:

>>> import os.path, sys
>>> directory_with_tests = os.path.join(this_directory, 'testrunner-ex')
>>> defaults = [
...     '--path', directory_with_tests,
...     '--tests-pattern', '^sampletestsf?$',
...     ]
>>> sys.argv = 'test --layer 122 -ssample1 -vv'.split()
>>> from zope import testrunner
>>> testrunner.run_internal(defaults)
Running tests at level 1
Running samplelayers.Layer122 tests:
  Set up samplelayers.Layer1 in 0.000 seconds.
  Set up samplelayers.Layer12 in 0.000 seconds.
  Set up samplelayers.Layer122 in 0.000 seconds.
  Running:
    test_x1 (sample1.sampletests.test122.TestA)
    test_y0 (sample1.sampletests.test122.TestA)
    test_z0 (sample1.sampletests.test122.TestA)
    test_x0 (sample1.sampletests.test122.TestB)
    test_y1 (sample1.sampletests.test122.TestB)
    test_z0 (sample1.sampletests.test122.TestB)
    test_1 (sample1.sampletests.test122.TestNotMuch)
    test_2 (sample1.sampletests.test122.TestNotMuch)
    test_3 (sample1.sampletests.test122.TestNotMuch)
    test_x0 (sample1.sampletests.test122)
    test_y0 (sample1.sampletests.test122)
    test_z1 (sample1.sampletests.test122)
    testrunner-ex/sample1/sampletests/../../sampletestsl.txt
  Ran 13 tests with 0 failures and 0 errors in 0.005 seconds.
Tearing down left over layers:
  Tear down samplelayers.Layer122 in 0.000 seconds.
  Tear down samplelayers.Layer12 in 0.000 seconds.
  Tear down samplelayers.Layer1 in 0.000 seconds.
False

You can specify multiple packages:

>>> sys.argv = 'test -u  -vv -ssample1 -ssample2'.split()
>>> testrunner.run_internal(defaults)
Running tests at level 1
Running zope.testrunner.layer.UnitTests tests:
  Set up zope.testrunner.layer.UnitTests in N.NNN seconds.
  Running:
 test_x1 (sample1.sampletestsf.TestA)
 test_y0 (sample1.sampletestsf.TestA)
 test_z0 (sample1.sampletestsf.TestA)
 test_x0 (sample1.sampletestsf.TestB)
 test_y1 (sample1.sampletestsf.TestB)
 test_z0 (sample1.sampletestsf.TestB)
 test_1 (sample1.sampletestsf.TestNotMuch)
 test_2 (sample1.sampletestsf.TestNotMuch)
 test_3 (sample1.sampletestsf.TestNotMuch)
 test_x0 (sample1.sampletestsf)
 test_y0 (sample1.sampletestsf)
 test_z1 (sample1.sampletestsf)
 testrunner-ex/sample1/../sampletests.txt
 test_x1 (sample1.sample11.sampletests.TestA)
 test_y0 (sample1.sample11.sampletests.TestA)
 test_z0 (sample1.sample11.sampletests.TestA)
 test_x0 (sample1.sample11.sampletests.TestB)
 test_y1 (sample1.sample11.sampletests.TestB)
 test_z0 (sample1.sample11.sampletests.TestB)
 test_1 (sample1.sample11.sampletests.TestNotMuch)
 test_2 (sample1.sample11.sampletests.TestNotMuch)
 test_3 (sample1.sample11.sampletests.TestNotMuch)
 test_x0 (sample1.sample11.sampletests)
 test_y0 (sample1.sample11.sampletests)
 test_z1 (sample1.sample11.sampletests)
 testrunner-ex/sample1/sample11/../../sampletests.txt
 test_x1 (sample1.sample13.sampletests.TestA)
 test_y0 (sample1.sample13.sampletests.TestA)
 test_z0 (sample1.sample13.sampletests.TestA)
 test_x0 (sample1.sample13.sampletests.TestB)
 test_y1 (sample1.sample13.sampletests.TestB)
 test_z0 (sample1.sample13.sampletests.TestB)
 test_1 (sample1.sample13.sampletests.TestNotMuch)
 test_2 (sample1.sample13.sampletests.TestNotMuch)
 test_3 (sample1.sample13.sampletests.TestNotMuch)
 test_x0 (sample1.sample13.sampletests)
 test_y0 (sample1.sample13.sampletests)
 test_z1 (sample1.sample13.sampletests)
 testrunner-ex/sample1/sample13/../../sampletests.txt
 test_x1 (sample1.sampletests.test1.TestA)
 test_y0 (sample1.sampletests.test1.TestA)
 test_z0 (sample1.sampletests.test1.TestA)
 test_x0 (sample1.sampletests.test1.TestB)
 test_y1 (sample1.sampletests.test1.TestB)
 test_z0 (sample1.sampletests.test1.TestB)
 test_1 (sample1.sampletests.test1.TestNotMuch)
 test_2 (sample1.sampletests.test1.TestNotMuch)
 test_3 (sample1.sampletests.test1.TestNotMuch)
 test_x0 (sample1.sampletests.test1)
 test_y0 (sample1.sampletests.test1)
 test_z1 (sample1.sampletests.test1)
 testrunner-ex/sample1/sampletests/../../sampletests.txt
 test_x1 (sample1.sampletests.test_one.TestA)
 test_y0 (sample1.sampletests.test_one.TestA)
 test_z0 (sample1.sampletests.test_one.TestA)
 test_x0 (sample1.sampletests.test_one.TestB)
 test_y1 (sample1.sampletests.test_one.TestB)
 test_z0 (sample1.sampletests.test_one.TestB)
 test_1 (sample1.sampletests.test_one.TestNotMuch)
 test_2 (sample1.sampletests.test_one.TestNotMuch)
 test_3 (sample1.sampletests.test_one.TestNotMuch)
 test_x0 (sample1.sampletests.test_one)
 test_y0 (sample1.sampletests.test_one)
 test_z1 (sample1.sampletests.test_one)
 testrunner-ex/sample1/sampletests/../../sampletests.txt
 test_x1 (sample2.sample21.sampletests.TestA)
 test_y0 (sample2.sample21.sampletests.TestA)
 test_z0 (sample2.sample21.sampletests.TestA)
 test_x0 (sample2.sample21.sampletests.TestB)
 test_y1 (sample2.sample21.sampletests.TestB)
 test_z0 (sample2.sample21.sampletests.TestB)
 test_1 (sample2.sample21.sampletests.TestNotMuch)
 test_2 (sample2.sample21.sampletests.TestNotMuch)
 test_3 (sample2.sample21.sampletests.TestNotMuch)
 test_x0 (sample2.sample21.sampletests)
 test_y0 (sample2.sample21.sampletests)
 test_z1 (sample2.sample21.sampletests)
 testrunner-ex/sample2/sample21/../../sampletests.txt
 test_x1 (sample2.sampletests.test_1.TestA)
 test_y0 (sample2.sampletests.test_1.TestA)
 test_z0 (sample2.sampletests.test_1.TestA)
 test_x0 (sample2.sampletests.test_1.TestB)
 test_y1 (sample2.sampletests.test_1.TestB)
 test_z0 (sample2.sampletests.test_1.TestB)
 test_1 (sample2.sampletests.test_1.TestNotMuch)
 test_2 (sample2.sampletests.test_1.TestNotMuch)
 test_3 (sample2.sampletests.test_1.TestNotMuch)
 test_x0 (sample2.sampletests.test_1)
 test_y0 (sample2.sampletests.test_1)
 test_z1 (sample2.sampletests.test_1)
 testrunner-ex/sample2/sampletests/../../sampletests.txt
 test_x1 (sample2.sampletests.testone.TestA)
 test_y0 (sample2.sampletests.testone.TestA)
 test_z0 (sample2.sampletests.testone.TestA)
 test_x0 (sample2.sampletests.testone.TestB)
 test_y1 (sample2.sampletests.testone.TestB)
 test_z0 (sample2.sampletests.testone.TestB)
 test_1 (sample2.sampletests.testone.TestNotMuch)
 test_2 (sample2.sampletests.testone.TestNotMuch)
 test_3 (sample2.sampletests.testone.TestNotMuch)
 test_x0 (sample2.sampletests.testone)
 test_y0 (sample2.sampletests.testone)
 test_z1 (sample2.sampletests.testone)
 testrunner-ex/sample2/sampletests/../../sampletests.txt
  Ran 104 tests with 0 failures and 0 errors in N.NNN seconds.
Tearing down left over layers:
  Tear down zope.testrunner.layer.UnitTests in N.NNN seconds.
False

You can specify directory names instead of packages (useful for tab-completion):

>>> subdir = os.path.join(directory_with_tests, 'sample1')
>>> sys.argv = ['test', '--layer', '122', '-s', subdir, '-vv']
>>> from zope import testrunner
>>> testrunner.run_internal(defaults)
Running tests at level 1
Running samplelayers.Layer122 tests:
  Set up samplelayers.Layer1 in 0.000 seconds.
  Set up samplelayers.Layer12 in 0.000 seconds.
  Set up samplelayers.Layer122 in 0.000 seconds.
  Running:
    test_x1 (sample1.sampletests.test122.TestA)
    test_y0 (sample1.sampletests.test122.TestA)
    test_z0 (sample1.sampletests.test122.TestA)
    test_x0 (sample1.sampletests.test122.TestB)
    test_y1 (sample1.sampletests.test122.TestB)
    test_z0 (sample1.sampletests.test122.TestB)
    test_1 (sample1.sampletests.test122.TestNotMuch)
    test_2 (sample1.sampletests.test122.TestNotMuch)
    test_3 (sample1.sampletests.test122.TestNotMuch)
    test_x0 (sample1.sampletests.test122)
    test_y0 (sample1.sampletests.test122)
    test_z1 (sample1.sampletests.test122)
    testrunner-ex/sample1/sampletests/../../sampletestsl.txt
  Ran 13 tests with 0 failures and 0 errors in 0.005 seconds.
Tearing down left over layers:
  Tear down samplelayers.Layer122 in 0.000 seconds.
  Tear down samplelayers.Layer12 in 0.000 seconds.
  Tear down samplelayers.Layer1 in 0.000 seconds.
False

We can select by test module name using the --module (-m) option:

>>> sys.argv = 'test -u  -vv -ssample1 -m_one -mtest1'.split()
>>> testrunner.run_internal(defaults)
Running tests at level 1
Running zope.testrunner.layer.UnitTests tests:
  Set up zope.testrunner.layer.UnitTests in N.NNN seconds.
  Running:
 test_x1 (sample1.sampletests.test1.TestA)
 test_y0 (sample1.sampletests.test1.TestA)
 test_z0 (sample1.sampletests.test1.TestA)
 test_x0 (sample1.sampletests.test1.TestB)
 test_y1 (sample1.sampletests.test1.TestB)
 test_z0 (sample1.sampletests.test1.TestB)
 test_1 (sample1.sampletests.test1.TestNotMuch)
 test_2 (sample1.sampletests.test1.TestNotMuch)
 test_3 (sample1.sampletests.test1.TestNotMuch)
 test_x0 (sample1.sampletests.test1)
 test_y0 (sample1.sampletests.test1)
 test_z1 (sample1.sampletests.test1)
 testrunner-ex/sample1/sampletests/../../sampletests.txt
 test_x1 (sample1.sampletests.test_one.TestA)
 test_y0 (sample1.sampletests.test_one.TestA)
 test_z0 (sample1.sampletests.test_one.TestA)
 test_x0 (sample1.sampletests.test_one.TestB)
 test_y1 (sample1.sampletests.test_one.TestB)
 test_z0 (sample1.sampletests.test_one.TestB)
 test_1 (sample1.sampletests.test_one.TestNotMuch)
 test_2 (sample1.sampletests.test_one.TestNotMuch)
 test_3 (sample1.sampletests.test_one.TestNotMuch)
 test_x0 (sample1.sampletests.test_one)
 test_y0 (sample1.sampletests.test_one)
 test_z1 (sample1.sampletests.test_one)
 testrunner-ex/sample1/sampletests/../../sampletests.txt
  Ran 26 tests with 0 failures and 0 errors in N.NNN seconds.
Tearing down left over layers:
  Tear down zope.testrunner.layer.UnitTests in N.NNN seconds.
False

and by test within the module using the --test (-t) option:

>>> sys.argv = 'test -u  -vv -ssample1 -m_one -mtest1 -tx0 -ty0'.split()
>>> testrunner.run_internal(defaults)
Running tests at level 1
Running zope.testrunner.layer.UnitTests tests:
  Set up zope.testrunner.layer.UnitTests in N.NNN seconds.
  Running:
 test_y0 (sample1.sampletests.test1.TestA)
 test_x0 (sample1.sampletests.test1.TestB)
 test_x0 (sample1.sampletests.test1)
 test_y0 (sample1.sampletests.test1)
 test_y0 (sample1.sampletests.test_one.TestA)
 test_x0 (sample1.sampletests.test_one.TestB)
 test_x0 (sample1.sampletests.test_one)
 test_y0 (sample1.sampletests.test_one)
  Ran 8 tests with 0 failures and 0 errors in N.NNN seconds.
Tearing down left over layers:
  Tear down zope.testrunner.layer.UnitTests in N.NNN seconds.
False
>>> sys.argv = 'test -u  -vv -ssample1 -ttxt'.split()
>>> testrunner.run_internal(defaults)
Running tests at level 1
Running zope.testrunner.layer.UnitTests tests:
  Set up zope.testrunner.layer.UnitTests in N.NNN seconds.
  Running:
 testrunner-ex/sample1/../sampletests.txt
 testrunner-ex/sample1/sample11/../../sampletests.txt
 testrunner-ex/sample1/sample13/../../sampletests.txt
 testrunner-ex/sample1/sampletests/../../sampletests.txt
 testrunner-ex/sample1/sampletests/../../sampletests.txt
  Ran 5 tests with 0 failures and 0 errors in N.NNN seconds.
Tearing down left over layers:
  Tear down zope.testrunner.layer.UnitTests in N.NNN seconds.
False

The --module and --test options take regular expressions. If a regular expression begins with '!', then tests that don't match the remainder of the expression are selected:

>>> sys.argv = 'test -u  -vv -ssample1 -m!sample1[.]sample1'.split()
>>> testrunner.run_internal(defaults)
Running tests at level 1
Running zope.testrunner.layer.UnitTests tests:
  Set up zope.testrunner.layer.UnitTests in N.NNN seconds.
  Running:
 test_x1 (sample1.sampletestsf.TestA)
 test_y0 (sample1.sampletestsf.TestA)
 test_z0 (sample1.sampletestsf.TestA)
 test_x0 (sample1.sampletestsf.TestB)
 test_y1 (sample1.sampletestsf.TestB)
 test_z0 (sample1.sampletestsf.TestB)
 test_1 (sample1.sampletestsf.TestNotMuch)
 test_2 (sample1.sampletestsf.TestNotMuch)
 test_3 (sample1.sampletestsf.TestNotMuch)
 test_x0 (sample1.sampletestsf)
 test_y0 (sample1.sampletestsf)
 test_z1 (sample1.sampletestsf)
 testrunner-ex/sample1/../sampletests.txt
 test_x1 (sample1.sampletests.test1.TestA)
 test_y0 (sample1.sampletests.test1.TestA)
 test_z0 (sample1.sampletests.test1.TestA)
 test_x0 (sample1.sampletests.test1.TestB)
 test_y1 (sample1.sampletests.test1.TestB)
 test_z0 (sample1.sampletests.test1.TestB)
 test_1 (sample1.sampletests.test1.TestNotMuch)
 test_2 (sample1.sampletests.test1.TestNotMuch)
 test_3 (sample1.sampletests.test1.TestNotMuch)
 test_x0 (sample1.sampletests.test1)
 test_y0 (sample1.sampletests.test1)
 test_z1 (sample1.sampletests.test1)
 testrunner-ex/sample1/sampletests/../../sampletests.txt
 test_x1 (sample1.sampletests.test_one.TestA)
 test_y0 (sample1.sampletests.test_one.TestA)
 test_z0 (sample1.sampletests.test_one.TestA)
 test_x0 (sample1.sampletests.test_one.TestB)
 test_y1 (sample1.sampletests.test_one.TestB)
 test_z0 (sample1.sampletests.test_one.TestB)
 test_1 (sample1.sampletests.test_one.TestNotMuch)
 test_2 (sample1.sampletests.test_one.TestNotMuch)
 test_3 (sample1.sampletests.test_one.TestNotMuch)
 test_x0 (sample1.sampletests.test_one)
 test_y0 (sample1.sampletests.test_one)
 test_z1 (sample1.sampletests.test_one)
 testrunner-ex/sample1/sampletests/../../sampletests.txt
  Ran 39 tests with 0 failures and 0 errors in N.NNN seconds.
Tearing down left over layers:
  Tear down zope.testrunner.layer.UnitTests in N.NNN seconds.
False
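The '!'-negation just demonstrated can be sketched as a small predicate. This is an illustrative approximation of the matching rule described above, not the testrunner's internal filter code:

```python
import re

def matches(pattern, name):
    """Return True if `name` is selected by `pattern`: a pattern
    beginning with '!' selects names that do NOT match the rest of
    the pattern; otherwise an ordinary regex search is used."""
    if pattern.startswith('!'):
        return re.search(pattern[1:], name) is None
    return re.search(pattern, name) is not None
```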

Module and test filters can also be given as positional arguments:

>>> sys.argv = 'test -u  -vv -ssample1 !sample1[.]sample1'.split()
>>> testrunner.run_internal(defaults)
Running tests at level 1
Running zope.testrunner.layer.UnitTests tests:
  Set up zope.testrunner.layer.UnitTests in N.NNN seconds.
  Running:
 test_x1 (sample1.sampletestsf.TestA)
 test_y0 (sample1.sampletestsf.TestA)
 test_z0 (sample1.sampletestsf.TestA)
 test_x0 (sample1.sampletestsf.TestB)
 test_y1 (sample1.sampletestsf.TestB)
 test_z0 (sample1.sampletestsf.TestB)
 test_1 (sample1.sampletestsf.TestNotMuch)
 test_2 (sample1.sampletestsf.TestNotMuch)
 test_3 (sample1.sampletestsf.TestNotMuch)
 test_x0 (sample1.sampletestsf)
 test_y0 (sample1.sampletestsf)
 test_z1 (sample1.sampletestsf)
 testrunner-ex/sample1/../sampletests.txt
 test_x1 (sample1.sampletests.test1.TestA)
 test_y0 (sample1.sampletests.test1.TestA)
 test_z0 (sample1.sampletests.test1.TestA)
 test_x0 (sample1.sampletests.test1.TestB)
 test_y1 (sample1.sampletests.test1.TestB)
 test_z0 (sample1.sampletests.test1.TestB)
 test_1 (sample1.sampletests.test1.TestNotMuch)
 test_2 (sample1.sampletests.test1.TestNotMuch)
 test_3 (sample1.sampletests.test1.TestNotMuch)
 test_x0 (sample1.sampletests.test1)
 test_y0 (sample1.sampletests.test1)
 test_z1 (sample1.sampletests.test1)
 testrunner-ex/sample1/sampletests/../../sampletests.txt
 test_x1 (sample1.sampletests.test_one.TestA)
 test_y0 (sample1.sampletests.test_one.TestA)
 test_z0 (sample1.sampletests.test_one.TestA)
 test_x0 (sample1.sampletests.test_one.TestB)
 test_y1 (sample1.sampletests.test_one.TestB)
 test_z0 (sample1.sampletests.test_one.TestB)
 test_1 (sample1.sampletests.test_one.TestNotMuch)
 test_2 (sample1.sampletests.test_one.TestNotMuch)
 test_3 (sample1.sampletests.test_one.TestNotMuch)
 test_x0 (sample1.sampletests.test_one)
 test_y0 (sample1.sampletests.test_one)
 test_z1 (sample1.sampletests.test_one)
 testrunner-ex/sample1/sampletests/../../sampletests.txt
  Ran 39 tests with 0 failures and 0 errors in N.NNN seconds.
Tearing down left over layers:
  Tear down zope.testrunner.layer.UnitTests in N.NNN seconds.
False
>>> sys.argv = 'test -u  -vv -ssample1 . txt'.split()
>>> testrunner.run_internal(defaults)
Running tests at level 1
Running zope.testrunner.layer.UnitTests tests:
  Set up zope.testrunner.layer.UnitTests in N.NNN seconds.
  Running:
 testrunner-ex/sample1/../sampletests.txt
 testrunner-ex/sample1/sample11/../../sampletests.txt
 testrunner-ex/sample1/sample13/../../sampletests.txt
 testrunner-ex/sample1/sampletests/../../sampletests.txt
 testrunner-ex/sample1/sampletests/../../sampletests.txt
  Ran 5 tests with 0 failures and 0 errors in N.NNN seconds.
Tearing down left over layers:
  Tear down zope.testrunner.layer.UnitTests in N.NNN seconds.
False

Sometimes there are tests that you don't want to run by default. For example, you might have tests that take a long time. Tests can have a level attribute. If no level is specified, a level of 1 is assumed and, by default, only tests at level 1 are run. To run tests at a higher level, use the --at-level (-a) option to specify a higher level. For example, with the following options:

>>> sys.argv = 'test -u  -vv -t test_y1 -t test_y0'.split()
>>> testrunner.run_internal(defaults)
Running tests at level 1
Running zope.testrunner.layer.UnitTests tests:
  Set up zope.testrunner.layer.UnitTests in N.NNN seconds.
  Running:
 test_y0 (sampletestsf.TestA)
 test_y1 (sampletestsf.TestB)
 test_y0 (sampletestsf)
 test_y0 (sample1.sampletestsf.TestA)
 test_y1 (sample1.sampletestsf.TestB)
 test_y0 (sample1.sampletestsf)
 test_y0 (sample1.sample11.sampletests.TestA)
 test_y1 (sample1.sample11.sampletests.TestB)
 test_y0 (sample1.sample11.sampletests)
 test_y0 (sample1.sample13.sampletests.TestA)
 test_y1 (sample1.sample13.sampletests.TestB)
 test_y0 (sample1.sample13.sampletests)
 test_y0 (sample1.sampletests.test1.TestA)
 test_y1 (sample1.sampletests.test1.TestB)
 test_y0 (sample1.sampletests.test1)
 test_y0 (sample1.sampletests.test_one.TestA)
 test_y1 (sample1.sampletests.test_one.TestB)
 test_y0 (sample1.sampletests.test_one)
 test_y0 (sample2.sample21.sampletests.TestA)
 test_y1 (sample2.sample21.sampletests.TestB)
 test_y0 (sample2.sample21.sampletests)
 test_y0 (sample2.sampletests.test_1.TestA)
 test_y1 (sample2.sampletests.test_1.TestB)
 test_y0 (sample2.sampletests.test_1)
 test_y0 (sample2.sampletests.testone.TestA)
 test_y1 (sample2.sampletests.testone.TestB)
 test_y0 (sample2.sampletests.testone)
 test_y0 (sample3.sampletests.TestA)
 test_y1 (sample3.sampletests.TestB)
 test_y0 (sample3.sampletests)
 test_y0 (sampletests.test1.TestA)
 test_y1 (sampletests.test1.TestB)
 test_y0 (sampletests.test1)
 test_y0 (sampletests.test_one.TestA)
 test_y1 (sampletests.test_one.TestB)
 test_y0 (sampletests.test_one)
  Ran 36 tests with 0 failures and 0 errors in N.NNN seconds.
Tearing down left over layers:
  Tear down zope.testrunner.layer.UnitTests in N.NNN seconds.
False

We ran 36 tests. If we specify a level of 2, we get some additional tests:

>>> sys.argv = 'test -u  -vv -a 2 -t test_y1 -t test_y0'.split()
>>> testrunner.run_internal(defaults)
Running tests at level 2
Running zope.testrunner.layer.UnitTests tests:
  Set up zope.testrunner.layer.UnitTests in N.NNN seconds.
  Running:
 test_y0 (sampletestsf.TestA)
 test_y0 (sampletestsf.TestA2)
 test_y1 (sampletestsf.TestB)
 test_y0 (sampletestsf)
 test_y0 (sample1.sampletestsf.TestA)
 test_y1 (sample1.sampletestsf.TestB)
 test_y0 (sample1.sampletestsf)
 test_y0 (sample1.sample11.sampletests.TestA)
 test_y1 (sample1.sample11.sampletests.TestB)
 test_y1 (sample1.sample11.sampletests.TestB2)
 test_y0 (sample1.sample11.sampletests)
 test_y0 (sample1.sample13.sampletests.TestA)
 test_y1 (sample1.sample13.sampletests.TestB)
 test_y0 (sample1.sample13.sampletests)
 test_y0 (sample1.sampletests.test1.TestA)
 test_y1 (sample1.sampletests.test1.TestB)
 test_y0 (sample1.sampletests.test1)
 test_y0 (sample1.sampletests.test_one.TestA)
 test_y1 (sample1.sampletests.test_one.TestB)
 test_y0 (sample1.sampletests.test_one)
 test_y0 (sample2.sample21.sampletests.TestA)
 test_y1 (sample2.sample21.sampletests.TestB)
 test_y0 (sample2.sample21.sampletests)
 test_y0 (sample2.sampletests.test_1.TestA)
 test_y1 (sample2.sampletests.test_1.TestB)
 test_y0 (sample2.sampletests.test_1)
 test_y0 (sample2.sampletests.testone.TestA)
 test_y1 (sample2.sampletests.testone.TestB)
 test_y0 (sample2.sampletests.testone)
 test_y0 (sample3.sampletests.TestA)
 test_y1 (sample3.sampletests.TestB)
 test_y0 (sample3.sampletests)
 test_y0 (sampletests.test1.TestA)
 test_y1 (sampletests.test1.TestB)
 test_y0 (sampletests.test1)
 test_y0 (sampletests.test_one.TestA)
 test_y1 (sampletests.test_one.TestB)
 test_y0 (sampletests.test_one)
  Ran 38 tests with 0 failures and 0 errors in N.NNN seconds.
Tearing down left over layers:
  Tear down zope.testrunner.layer.UnitTests in N.NNN seconds.
False
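A level is assigned by setting a level attribute on a test case (or suite). A minimal sketch, using a hypothetical SlowTests class; the sample packages in this document define their leveled tests the same way:

```python
import unittest

class SlowTests(unittest.TestCase):
    # The testrunner only collects these tests when run with
    # --at-level 2 (or higher), or with --all.
    level = 2

    def test_expensive(self):
        self.assertTrue(True)
```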

We can use the --all option to run tests at all levels:

>>> sys.argv = 'test -u  -vv --all -t test_y1 -t test_y0'.split()
>>> testrunner.run_internal(defaults)
Running tests at all levels
Running zope.testrunner.layer.UnitTests tests:
  Set up zope.testrunner.layer.UnitTests in N.NNN seconds.
  Running:
 test_y0 (sampletestsf.TestA)
 test_y0 (sampletestsf.TestA2)
 test_y1 (sampletestsf.TestB)
 test_y0 (sampletestsf)
 test_y0 (sample1.sampletestsf.TestA)
 test_y1 (sample1.sampletestsf.TestB)
 test_y0 (sample1.sampletestsf)
 test_y0 (sample1.sample11.sampletests.TestA)
 test_y0 (sample1.sample11.sampletests.TestA3)
 test_y1 (sample1.sample11.sampletests.TestB)
 test_y1 (sample1.sample11.sampletests.TestB2)
 test_y0 (sample1.sample11.sampletests)
 test_y0 (sample1.sample13.sampletests.TestA)
 test_y1 (sample1.sample13.sampletests.TestB)
 test_y0 (sample1.sample13.sampletests)
 test_y0 (sample1.sampletests.test1.TestA)
 test_y1 (sample1.sampletests.test1.TestB)
 test_y0 (sample1.sampletests.test1)
 test_y0 (sample1.sampletests.test_one.TestA)
 test_y1 (sample1.sampletests.test_one.TestB)
 test_y0 (sample1.sampletests.test_one)
 test_y0 (sample2.sample21.sampletests.TestA)
 test_y1 (sample2.sample21.sampletests.TestB)
 test_y0 (sample2.sample21.sampletests)
 test_y0 (sample2.sampletests.test_1.TestA)
 test_y1 (sample2.sampletests.test_1.TestB)
 test_y0 (sample2.sampletests.test_1)
 test_y0 (sample2.sampletests.testone.TestA)
 test_y1 (sample2.sampletests.testone.TestB)
 test_y0 (sample2.sampletests.testone)
 test_y0 (sample3.sampletests.TestA)
 test_y1 (sample3.sampletests.TestB)
 test_y0 (sample3.sampletests)
 test_y0 (sampletests.test1.TestA)
 test_y1 (sampletests.test1.TestB)
 test_y0 (sampletests.test1)
 test_y0 (sampletests.test_one.TestA)
 test_y1 (sampletests.test_one.TestB)
 test_y0 (sampletests.test_one)
  Ran 39 tests with 0 failures and 0 errors in N.NNN seconds.
Tearing down left over layers:
  Tear down zope.testrunner.layer.UnitTests in N.NNN seconds.
False

Listing Selected Tests

When you're trying to figure out why the test you want is not matched by the pattern you specified, it is convenient to list the tests that match your specifications:

>>> sys.argv = 'test --all -m sample1 -t test_y0 --list-tests'.split()
>>> testrunner.run_internal(defaults)
Listing samplelayers.Layer11 tests:
  test_y0 (sample1.sampletests.test11.TestA)
  test_y0 (sample1.sampletests.test11)
Listing samplelayers.Layer111 tests:
  test_y0 (sample1.sampletests.test111.TestA)
  test_y0 (sample1.sampletests.test111)
Listing samplelayers.Layer112 tests:
  test_y0 (sample1.sampletests.test112.TestA)
  test_y0 (sample1.sampletests.test112)
Listing samplelayers.Layer12 tests:
  test_y0 (sample1.sampletests.test12.TestA)
  test_y0 (sample1.sampletests.test12)
Listing samplelayers.Layer121 tests:
  test_y0 (sample1.sampletests.test121.TestA)
  test_y0 (sample1.sampletests.test121)
Listing samplelayers.Layer122 tests:
  test_y0 (sample1.sampletests.test122.TestA)
  test_y0 (sample1.sampletests.test122)
Listing zope.testrunner.layer.UnitTests tests:
  test_y0 (sample1.sampletestsf.TestA)
  test_y0 (sample1.sampletestsf)
  test_y0 (sample1.sample11.sampletests.TestA)
  test_y0 (sample1.sample11.sampletests.TestA3)
  test_y0 (sample1.sample11.sampletests)
  test_y0 (sample1.sample13.sampletests.TestA)
  test_y0 (sample1.sample13.sampletests)
  test_y0 (sample1.sampletests.test1.TestA)
  test_y0 (sample1.sampletests.test1)
  test_y0 (sample1.sampletests.test_one.TestA)
  test_y0 (sample1.sampletests.test_one)
False

Test Progress

If the --progress (-p) option is used, progress information is printed and a carriage return (rather than a new-line) is printed between detail lines. Let's look at the effect of --progress (-p) at different levels of verbosity.

>>> import os.path, sys
>>> directory_with_tests = os.path.join(this_directory, 'testrunner-ex')
>>> defaults = [
...     '--path', directory_with_tests,
...     '--tests-pattern', '^sampletestsf?$',
...     ]
>>> sys.argv = 'test --layer 122 -p'.split()
>>> from zope import testrunner
>>> testrunner.run_internal(defaults)
Running samplelayers.Layer122 tests:
  Set up samplelayers.Layer1 in N.NNN seconds.
  Set up samplelayers.Layer12 in N.NNN seconds.
  Set up samplelayers.Layer122 in N.NNN seconds.
  Running:
    1/26 (3.8%)##r##
               ##r##
    2/26 (7.7%)##r##
               ##r##
    3/26 (11.5%)##r##
                ##r##
    4/26 (15.4%)##r##
                ##r##
    5/26 (19.2%)##r##
                ##r##
    6/26 (23.1%)##r##
                ##r##
    7/26 (26.9%)##r##
                ##r##
    8/26 (30.8%)##r##
                ##r##
    9/26 (34.6%)##r##
                ##r##
    10/26 (38.5%)##r##
                 ##r##
    11/26 (42.3%)##r##
                 ##r##
    12/26 (46.2%)##r##
                 ##r##
    13/26 (50.0%)##r##
                 ##r##
    14/26 (53.8%)##r##
                 ##r##
    15/26 (57.7%)##r##
                 ##r##
    16/26 (61.5%)##r##
                 ##r##
    17/26 (65.4%)##r##
                 ##r##
    18/26 (69.2%)##r##
                 ##r##
    19/26 (73.1%)##r##
                 ##r##
    20/26 (76.9%)##r##
                 ##r##
    21/26 (80.8%)##r##
                 ##r##
    22/26 (84.6%)##r##
                 ##r##
    23/26 (88.5%)##r##
                 ##r##
    24/26 (92.3%)##r##
                 ##r##
    25/26 (96.2%)##r##
                 ##r##
    26/26 (100.0%)##r##
                  ##r##
  Ran 26 tests with 0 failures and 0 errors in N.NNN seconds.
Tearing down left over layers:
  Tear down samplelayers.Layer122 in N.NNN seconds.
  Tear down samplelayers.Layer12 in N.NNN seconds.
  Tear down samplelayers.Layer1 in N.NNN seconds.
False

(Note that, in the examples above and below, we show "##r##" followed by new lines where carriage returns would appear in actual output.)
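The "n/total (pct%)" counter shown in the progress output above can be reproduced with simple formatting. A minimal sketch of that computation (not the runner's own code):

```python
def progress_line(n, total):
    """Format a progress counter like the output above,
    e.g. '13/26 (50.0%)' for test 13 of 26."""
    return '%d/%d (%.1f%%)' % (n, total, n * 100.0 / total)
```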

Using a single level of verbosity causes test descriptions to be output, but only if they fit in the terminal width. The default width, when the terminal width can't be determined, is 80:

>>> sys.argv = 'test --layer 122 -pv'.split()
>>> testrunner.run_internal(defaults)
Running tests at level 1
Running samplelayers.Layer122 tests:
  Set up samplelayers.Layer1 in N.NNN seconds.
  Set up samplelayers.Layer12 in N.NNN seconds.
  Set up samplelayers.Layer122 in N.NNN seconds.
  Running:
    1/26 (3.8%) test_x1 (sample1.sampletests.test122.TestA)##r##
                                                           ##r##
    2/26 (7.7%) test_y0 (sample1.sampletests.test122.TestA)##r##
                                                           ##r##
    3/26 (11.5%) test_z0 (sample1.sampletests.test122.TestA)##r##
                                                            ##r##
    4/26 (15.4%) test_x0 (sample1.sampletests.test122.TestB)##r##
                                                            ##r##
    5/26 (19.2%) test_y1 (sample1.sampletests.test122.TestB)##r##
                                                            ##r##
    6/26 (23.1%) test_z0 (sample1.sampletests.test122.TestB)##r##
                                                            ##r##
    7/26 (26.9%) test_1 (sample1.sampletests.test122.TestNotMuch)##r##
                                                                 ##r##
    8/26 (30.8%) test_2 (sample1.sampletests.test122.TestNotMuch)##r##
                                                                 ##r##
    9/26 (34.6%) test_3 (sample1.sampletests.test122.TestNotMuch)##r##
                                                                 ##r##
    10/26 (38.5%) test_x0 (sample1.sampletests.test122)##r##
                                                       ##r##
    11/26 (42.3%) test_y0 (sample1.sampletests.test122)##r##
                                                       ##r##
    12/26 (46.2%) test_z1 (sample1.sampletests.test122)##r##
                                                       ##r##
    13/26 (50.0%) testrunner-ex/sample1/sampletests/../../sampletestsl.txt##r##
                                                                          ##r##
    14/26 (53.8%) test_x1 (sampletests.test122.TestA)##r##
                                                     ##r##
    15/26 (57.7%) test_y0 (sampletests.test122.TestA)##r##
                                                     ##r##
    16/26 (61.5%) test_z0 (sampletests.test122.TestA)##r##
                                                     ##r##
    17/26 (65.4%) test_x0 (sampletests.test122.TestB)##r##
                                                     ##r##
    18/26 (69.2%) test_y1 (sampletests.test122.TestB)##r##
                                                     ##r##
    19/26 (73.1%) test_z0 (sampletests.test122.TestB)##r##
                                                     ##r##
    20/26 (76.9%) test_1 (sampletests.test122.TestNotMuch)##r##
                                                          ##r##
    21/26 (80.8%) test_2 (sampletests.test122.TestNotMuch)##r##
                                                          ##r##
    22/26 (84.6%) test_3 (sampletests.test122.TestNotMuch)##r##
                                                          ##r##
    23/26 (88.5%) test_x0 (sampletests.test122)##r##
                                               ##r##
    24/26 (92.3%) test_y0 (sampletests.test122)##r##
                                               ##r##
    25/26 (96.2%) test_z1 (sampletests.test122)##r##
                                               ##r##
    26/26 (100.0%) testrunner-ex/sampletests/../sampletestsl.txt##r##
                                                                ##r##
  Ran 26 tests with 0 failures and 0 errors in N.NNN seconds.
Tearing down left over layers:
  Tear down samplelayers.Layer122 in N.NNN seconds.
  Tear down samplelayers.Layer12 in N.NNN seconds.
  Tear down samplelayers.Layer1 in N.NNN seconds.
False

The terminal width is determined using the curses module. To see that, we'll provide a fake curses module:

>>> class FakeCurses:
...     class error(Exception):
...         pass
...     def setupterm(self):
...         pass
...     def tigetnum(self, ignored):
...         return 60
>>> old_curses = sys.modules.get('curses')
>>> sys.modules['curses'] = FakeCurses()
>>> testrunner.run_internal(defaults)
Running tests at level 1
Running samplelayers.Layer122 tests:
  Set up samplelayers.Layer1 in N.NNN seconds.
  Set up samplelayers.Layer12 in N.NNN seconds.
  Set up samplelayers.Layer122 in N.NNN seconds.
  Running:
    1/26 (3.8%) test_x1 (sample1.sampletests.test122.TestA)##r##
                                                           ##r##
    2/26 (7.7%) test_y0 (sample1.sampletests.test122.TestA)##r##
                                                           ##r##
    3/26 (11.5%) test_z0 (...le1.sampletests.test122.TestA)##r##
                                                           ##r##
    4/26 (15.4%) test_x0 (...le1.sampletests.test122.TestB)##r##
                                                           ##r##
    5/26 (19.2%) test_y1 (...le1.sampletests.test122.TestB)##r##
                                                           ##r##
    6/26 (23.1%) test_z0 (...le1.sampletests.test122.TestB)##r##
                                                           ##r##
    7/26 (26.9%) test_1 (...ampletests.test122.TestNotMuch)##r##
                                                           ##r##
    8/26 (30.8%) test_2 (...ampletests.test122.TestNotMuch)##r##
                                                           ##r##
    9/26 (34.6%) test_3 (...ampletests.test122.TestNotMuch)##r##
                                                           ##r##
    10/26 (38.5%) test_x0 (sample1.sampletests.test122)##r##
                                                       ##r##
    11/26 (42.3%) test_y0 (sample1.sampletests.test122)##r##
                                                       ##r##
    12/26 (46.2%) test_z1 (sample1.sampletests.test122)##r##
                                                       ##r##
    13/26 (50.0%) ... e1/sampletests/../../sampletestsl.txt##r##
                                                           ##r##
    14/26 (53.8%) test_x1 (sampletests.test122.TestA)##r##
                                                     ##r##
    15/26 (57.7%) test_y0 (sampletests.test122.TestA)##r##
                                                     ##r##
    16/26 (61.5%) test_z0 (sampletests.test122.TestA)##r##
                                                     ##r##
    17/26 (65.4%) test_x0 (sampletests.test122.TestB)##r##
                                                     ##r##
    18/26 (69.2%) test_y1 (sampletests.test122.TestB)##r##
                                                     ##r##
    19/26 (73.1%) test_z0 (sampletests.test122.TestB)##r##
                                                     ##r##
    20/26 (76.9%) test_1 (sampletests.test122.TestNotMuch)##r##
                                                          ##r##
    21/26 (80.8%) test_2 (sampletests.test122.TestNotMuch)##r##
                                                          ##r##
    22/26 (84.6%) test_3 (sampletests.test122.TestNotMuch)##r##
                                                          ##r##
    23/26 (88.5%) test_x0 (sampletests.test122)##r##
                                               ##r##
    24/26 (92.3%) test_y0 (sampletests.test122)##r##
                                               ##r##
    25/26 (96.2%) test_z1 (sampletests.test122)##r##
                                               ##r##
    26/26 (100.0%) ... r-ex/sampletests/../sampletestsl.txt##r##
                                                           ##r##
  Ran 26 tests with 0 failures and 0 errors in N.NNN seconds.
Tearing down left over layers:
  Tear down samplelayers.Layer122 in N.NNN seconds.
  Tear down samplelayers.Layer12 in N.NNN seconds.
  Tear down samplelayers.Layer1 in N.NNN seconds.
False
>>> sys.modules['curses'] = old_curses
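
The truncated descriptions above keep only the rightmost characters of each test name, marking the elision on the left with "...". The idea can be sketched like this (a hypothetical helper, not zope.testrunner's actual code):

```python
def truncate_left(text, width):
    """Elide the left of a test description so it fits in `width`
    columns, marking the elision with '...'.

    This is only a sketch of the truncation idea shown above; the
    real runner's formatting code is more involved."""
    if len(text) <= width:
        return text
    # Keep the rightmost (width - 3) characters and prepend '...'.
    return '...' + text[len(text) - (width - 3):]
```

With a 60-column terminal and the progress prefix taken into account, a long description such as "test_z0 (sample1.sampletests.test122.TestA)" is reduced to its rightmost characters, which is why the class and method names stay readable.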

If a second or third level of verbosity is added, we get additional information:

>>> sys.argv = 'test --layer 122 -pvv -t !txt'.split()
>>> testrunner.run_internal(defaults)
Running tests at level 1
Running samplelayers.Layer122 tests:
  Set up samplelayers.Layer1 in 0.000 seconds.
  Set up samplelayers.Layer12 in 0.000 seconds.
  Set up samplelayers.Layer122 in 0.000 seconds.
  Running:
    1/24 (4.2%) test_x1 (sample1.sampletests.test122.TestA)##r##
                                                          ##r##
    2/24 (8.3%) test_y0 (sample1.sampletests.test122.TestA)##r##
                                                          ##r##
    3/24 (12.5%) test_z0 (sample1.sampletests.test122.TestA)##r##
                                                           ##r##
    4/24 (16.7%) test_x0 (sample1.sampletests.test122.TestB)##r##
                                                           ##r##
    5/24 (20.8%) test_y1 (sample1.sampletests.test122.TestB)##r##
                                                           ##r##
    6/24 (25.0%) test_z0 (sample1.sampletests.test122.TestB)##r##
                                                           ##r##
    7/24 (29.2%) test_1 (sample1.sampletests.test122.TestNotMuch)##r##
                                                                ##r##
    8/24 (33.3%) test_2 (sample1.sampletests.test122.TestNotMuch)##r##
                                                                ##r##
    9/24 (37.5%) test_3 (sample1.sampletests.test122.TestNotMuch)##r##
                                                                ##r##
    10/24 (41.7%) test_x0 (sample1.sampletests.test122)##r##
                                                      ##r##
    11/24 (45.8%) test_y0 (sample1.sampletests.test122)##r##
                                                      ##r##
    12/24 (50.0%) test_z1 (sample1.sampletests.test122)##r##
                                                      ##r##
    13/24 (54.2%) test_x1 (sampletests.test122.TestA)##r##
                                                    ##r##
    14/24 (58.3%) test_y0 (sampletests.test122.TestA)##r##
                                                    ##r##
    15/24 (62.5%) test_z0 (sampletests.test122.TestA)##r##
                                                    ##r##
    16/24 (66.7%) test_x0 (sampletests.test122.TestB)##r##
                                                    ##r##
    17/24 (70.8%) test_y1 (sampletests.test122.TestB)##r##
                                                    ##r##
    18/24 (75.0%) test_z0 (sampletests.test122.TestB)##r##
                                                    ##r##
    19/24 (79.2%) test_1 (sampletests.test122.TestNotMuch)##r##
                                                         ##r##
    20/24 (83.3%) test_2 (sampletests.test122.TestNotMuch)##r##
                                                         ##r##
    21/24 (87.5%) test_3 (sampletests.test122.TestNotMuch)##r##
                                                         ##r##
    22/24 (91.7%) test_x0 (sampletests.test122)##r##
                                              ##r##
    23/24 (95.8%) test_y0 (sampletests.test122)##r##
                                              ##r##
    24/24 (100.0%) test_z1 (sampletests.test122)##r##
                                               ##r##
  Ran 24 tests with 0 failures and 0 errors in 0.006 seconds.
Tearing down left over layers:
  Tear down samplelayers.Layer122 in 0.000 seconds.
  Tear down samplelayers.Layer12 in 0.000 seconds.
  Tear down samplelayers.Layer1 in 0.000 seconds.
False

Note that, in this example, we used a test-selection pattern starting with '!' to exclude tests containing the string "txt".

>>> sys.argv = 'test --layer 122 -pvvv -t!(txt|NotMuch)'.split()
>>> testrunner.run_internal(defaults)
Running tests at level 1
Running samplelayers.Layer122 tests:
  Set up samplelayers.Layer1 in 0.000 seconds.
  Set up samplelayers.Layer12 in 0.000 seconds.
  Set up samplelayers.Layer122 in 0.000 seconds.
  Running:
    1/18 (5.6%) test_x1 (sample1.sampletests.test122.TestA) (0.000 s)##r##
                                                                      ##r##
    2/18 (11.1%) test_y0 (sample1.sampletests.test122.TestA) (0.000 s)##r##
                                                                       ##r##
    3/18 (16.7%) test_z0 (sample1.sampletests.test122.TestA) (0.000 s)##r##
                                                                       ##r##
    4/18 (22.2%) test_x0 (sample1.sampletests.test122.TestB) (0.000 s)##r##
                                                                       ##r##
    5/18 (27.8%) test_y1 (sample1.sampletests.test122.TestB) (0.000 s)##r##
                                                                       ##r##
    6/18 (33.3%) test_z0 (sample1.sampletests.test122.TestB) (0.000 s)##r##
                                                                       ##r##
    7/18 (38.9%) test_x0 (sample1.sampletests.test122) (0.001 s)##r##
                                                                 ##r##
    8/18 (44.4%) test_y0 (sample1.sampletests.test122) (0.001 s)##r##
                                                                 ##r##
    9/18 (50.0%) test_z1 (sample1.sampletests.test122) (0.001 s)##r##
                                                                 ##r##
    10/18 (55.6%) test_x1 (sampletests.test122.TestA) (0.000 s)##r##
                                                                ##r##
    11/18 (61.1%) test_y0 (sampletests.test122.TestA) (0.000 s)##r##
                                                                ##r##
    12/18 (66.7%) test_z0 (sampletests.test122.TestA) (0.000 s)##r##
                                                                ##r##
    13/18 (72.2%) test_x0 (sampletests.test122.TestB) (0.000 s)##r##
                                                                ##r##
    14/18 (77.8%) test_y1 (sampletests.test122.TestB) (0.000 s)##r##
                                                                ##r##
    15/18 (83.3%) test_z0 (sampletests.test122.TestB) (0.000 s)##r##
                                                                ##r##
    16/18 (88.9%) test_x0 (sampletests.test122) (0.001 s)##r##
                                                          ##r##
    17/18 (94.4%) test_y0 (sampletests.test122) (0.001 s)##r##
                                                          ##r##
    18/18 (100.0%) test_z1 (sampletests.test122) (0.001 s)##r##
                                                           ##r##
  Ran 18 tests with 0 failures and 0 errors in 0.006 seconds.
Tearing down left over layers:
  Tear down samplelayers.Layer122 in 0.000 seconds.
  Tear down samplelayers.Layer12 in 0.000 seconds.
  Tear down samplelayers.Layer1 in 0.000 seconds.
False

In this example, we also excluded tests with "NotMuch" in their names.
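
The '!'-prefixed selection patterns used above behave like negated regular expressions. A rough sketch of the idea (an illustrative helper, not zope.testrunner's actual implementation):

```python
import re

def make_test_filter(pattern):
    # Sketch of how a '!'-prefixed -t pattern can be read: a leading
    # '!' inverts the regular-expression match against the test name.
    # (Illustrative only; the runner's real matching is more involved.)
    if pattern.startswith('!'):
        regex = re.compile(pattern[1:])
        return lambda name: not regex.search(name)
    regex = re.compile(pattern)
    return lambda name: regex.search(name) is not None

keep = make_test_filter('!(txt|NotMuch)')
```

Under this sketch, `keep` rejects any test whose name matches "txt" or "NotMuch" and accepts everything else, mirroring the selections in the two examples above.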

Unfortunately, the time data above doesn't buy us much because, in practice, the line is cleared before there is time to see the times. :/

Autodetecting progress

The --auto-progress option will determine if stdout is a terminal, and only enable progress output if so.

Let's pretend we have a terminal:

>>> class Terminal(object):
...     def __init__(self, stream):
...         self._stream = stream
...     def __getattr__(self, attr):
...         return getattr(self._stream, attr)
...     def isatty(self):
...         return True
>>> real_stdout = sys.stdout
>>> sys.stdout = Terminal(sys.stdout)
>>> sys.argv = 'test -u -t test_one.TestNotMuch --auto-progress'.split()
>>> testrunner.run_internal(defaults)
Running zope.testrunner.layer.UnitTests tests:
  Set up zope.testrunner.layer.UnitTests in N.NNN seconds.
  Running:
    1/6 (16.7%)##r##
               ##r##
    2/6 (33.3%)##r##
               ##r##
    3/6 (50.0%)##r##
               ##r##
    4/6 (66.7%)##r##
               ##r##
    5/6 (83.3%)##r##
               ##r##
    6/6 (100.0%)##r##
                ##r##
  Ran 6 tests with 0 failures and 0 errors in N.NNN seconds.
Tearing down left over layers:
  Tear down zope.testrunner.layer.UnitTests in N.NNN seconds.
False

Let's stop pretending:

>>> sys.stdout = real_stdout
>>> sys.argv = 'test -u -t test_one.TestNotMuch --auto-progress'.split()
>>> testrunner.run_internal(defaults)
Running zope.testrunner.layer.UnitTests tests:
  Set up zope.testrunner.layer.UnitTests in N.NNN seconds.
  Ran 6 tests with 0 failures and 0 errors in N.NNN seconds.
Tearing down left over layers:
  Tear down zope.testrunner.layer.UnitTests in N.NNN seconds.
False
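
The detection behind --auto-progress boils down to asking whether stdout is an interactive terminal, which is what the Terminal wrapper above fakes. A minimal sketch (hypothetical helper, not the runner's actual code):

```python
import sys

def progress_enabled(auto_progress=True):
    # Sketch of the check behind --auto-progress: emit carriage-return
    # progress output only when stdout is an interactive terminal.
    # (Illustrative; not zope.testrunner's actual implementation.)
    if not auto_progress:
        return False
    stdout = sys.stdout
    return hasattr(stdout, 'isatty') and stdout.isatty()
```

When stdout is redirected to a file or pipe, isatty() returns False and progress output stays off, as in the second example above.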

Disabling progress indication

If -p or --progress have been previously provided on the command line (perhaps by a wrapper script) but you do not desire progress indication, you can switch it off with --no-progress:

>>> sys.argv = 'test -u -t test_one.TestNotMuch -p --no-progress'.split()
>>> testrunner.run_internal(defaults)
Running zope.testrunner.layer.UnitTests tests:
  Set up zope.testrunner.layer.UnitTests in N.NNN seconds.
  Ran 6 tests with 0 failures and 0 errors in N.NNN seconds.
Tearing down left over layers:
  Tear down zope.testrunner.layer.UnitTests in N.NNN seconds.
False

Debugging

The testrunner module supports post-mortem debugging and debugging using pdb.set_trace. Let's look first at using pdb.set_trace. To demonstrate this, we'll provide input via helper Input objects:

>>> class Input:
...     def __init__(self, src):
...         self.lines = src.split('\n')
...     def readline(self):
...         line = self.lines.pop(0)
...         print(line)
...         return line+'\n'

If a test or code called by a test calls pdb.set_trace, then the runner will enter pdb at that point:

>>> import os.path, sys
>>> directory_with_tests = os.path.join(this_directory, 'testrunner-ex')
>>> from zope import testrunner
>>> defaults = [
...     '--path', directory_with_tests,
...     '--tests-pattern', '^sampletestsf?$',
...     ]
>>> real_stdin = sys.stdin
>>> sys.stdin = Input('p x\nc')
>>> sys.argv = ('test -ssample3 --tests-pattern ^sampletests_d$'
...             ' -t set_trace1').split()
>>> try: testrunner.run_internal(defaults)
... finally: sys.stdin = real_stdin
... # doctest: +ELLIPSIS
Running zope.testrunner.layer.UnitTests tests:
...
> testrunner-ex/sample3/sampletests_d.py(27)test_set_trace1()
-> y = x
(Pdb) p x
1
(Pdb) c
  Ran 1 tests with 0 failures and 0 errors in 0.001 seconds.
...
False

Note that, prior to Python 2.4, calling pdb.set_trace caused pdb to break in the pdb.set_trace function. It was necessary to use 'next' or 'up' to get to the application code that called pdb.set_trace. In Python 2.4, pdb.set_trace causes pdb to stop right after the call to pdb.set_trace.

You can also do post-mortem debugging, using the --post-mortem (-D) option:

>>> sys.stdin = Input('p x\nc')
>>> sys.argv = ('test -ssample3 --tests-pattern ^sampletests_d$'
...             ' -t post_mortem1 -D').split()
>>> try: testrunner.run_internal(defaults)
... finally: sys.stdin = real_stdin
... # doctest: +NORMALIZE_WHITESPACE +REPORT_NDIFF +ELLIPSIS
Running zope.testrunner.layer.UnitTests tests:
...
Error in test test_post_mortem1 (sample3.sampletests_d.TestSomething)
Traceback (most recent call last):
  File "testrunner-ex/sample3/sampletests_d.py",
          line 34, in test_post_mortem1
    raise ValueError
ValueError
<BLANKLINE>
...ValueError:
<BLANKLINE>
> testrunner-ex/sample3/sampletests_d.py(34)test_post_mortem1()
-> raise ValueError
(Pdb) p x
1
(Pdb) c
True

Note that the test runner exits after post-mortem debugging.
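
The mechanism behind --post-mortem can be sketched with the standard library's pdb.post_mortem: on an exception, the traceback is handed to the debugger and the run stops afterwards. This is an illustrative helper only, not zope.testrunner's actual implementation:

```python
import pdb
import sys

def run_with_post_mortem(test):
    # Sketch of the idea behind --post-mortem (-D): when a test raises,
    # hand the traceback to pdb.post_mortem, then stop instead of
    # continuing with further tests.  (Illustrative; the real runner
    # does this inside its result-reporting machinery.)
    try:
        test()
        return True
    except Exception:
        pdb.post_mortem(sys.exc_info()[2])
        return False
```

As in the doctests above, the debugger reads its commands from sys.stdin, which is why feeding it an Input object (or any file-like object) works.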

In the example above, we debugged an error. Failures are actually converted to errors and can be debugged the same way:

>>> sys.stdin = Input('p x\np y\nc')
>>> sys.argv = ('test -ssample3 --tests-pattern ^sampletests_d$'
...             ' -t post_mortem_failure1 -D').split()
>>> try: testrunner.run_internal(defaults)
... finally: sys.stdin = real_stdin
... # doctest: +NORMALIZE_WHITESPACE +REPORT_NDIFF +ELLIPSIS
Running zope.testrunner.layer.UnitTests tests:
...
Error in test test_post_mortem_failure1 (sample3.sampletests_d.TestSomething)
Traceback (most recent call last):
  File ".../unittest.py",  line 252, in debug
    getattr(self, self.__testMethodName)()
  File "testrunner-ex/sample3/sampletests_d.py",
    line 42, in test_post_mortem_failure1
    assert x == y
AssertionError
<BLANKLINE>
...AssertionError:
<BLANKLINE>
> testrunner-ex/sample3/sampletests_d.py(42)test_post_mortem_failure1()
-> assert x == y
(Pdb) p x
1
(Pdb) p y
2
(Pdb) c
True

Layers that can't be torn down

A layer can have a tearDown method that raises NotImplementedError. If this is the case and there are no remaining tests to run, the test runner will just note that the tear down couldn't be done:

>>> import os.path, sys
>>> directory_with_tests = os.path.join(this_directory, 'testrunner-ex')
>>> from zope import testrunner
>>> defaults = [
...     '--path', directory_with_tests,
...     '--tests-pattern', '^sampletestsf?$',
...     ]
>>> sys.argv = 'test -ssample2 --tests-pattern sampletests_ntd$'.split()
>>> testrunner.run_internal(defaults)
Running sample2.sampletests_ntd.Layer tests:
  Set up sample2.sampletests_ntd.Layer in 0.000 seconds.
  Ran 1 tests with 0 failures and 0 errors in 0.000 seconds.
Tearing down left over layers:
  Tear down sample2.sampletests_ntd.Layer ... not supported
False
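
A layer of the kind just described looks roughly like this (a sketch with an illustrative name; the real sample layers live in testrunner-ex):

```python
class NoTearDownLayer(object):
    # Sketch of a layer like sample2.sampletests_ntd.Layer: its
    # tearDown cannot be implemented, so the runner reports
    # "... not supported" instead of tearing it down.
    # (Class name is illustrative.)

    @classmethod
    def setUp(cls):
        pass

    @classmethod
    def tearDown(cls):
        raise NotImplementedError
```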

If the tearDown method raises NotImplementedError and there are remaining layers to run, the test runner will restart itself as a new process, resuming tests where it left off:

>>> sys.argv = [testrunner_script, '--tests-pattern', 'sampletests_ntd$']
>>> testrunner.run_internal(defaults)
Running sample1.sampletests_ntd.Layer tests:
  Set up sample1.sampletests_ntd.Layer in N.NNN seconds.
  Ran 1 tests with 0 failures and 0 errors in N.NNN seconds.
Running sample2.sampletests_ntd.Layer tests:
  Tear down sample1.sampletests_ntd.Layer ... not supported
  Running in a subprocess.
  Set up sample2.sampletests_ntd.Layer in N.NNN seconds.
  Ran 1 tests with 0 failures and 0 errors in N.NNN seconds.
  Tear down sample2.sampletests_ntd.Layer ... not supported
Running sample3.sampletests_ntd.Layer tests:
  Running in a subprocess.
  Set up sample3.sampletests_ntd.Layer in N.NNN seconds.
<BLANKLINE>
<BLANKLINE>
Error in test test_error1 (sample3.sampletests_ntd.TestSomething)
Traceback (most recent call last):
  File "testrunner-ex/sample3/sampletests_ntd.py", Line NNN, in test_error1
    raise TypeError("Can we see errors")
TypeError: Can we see errors
<BLANKLINE>
<BLANKLINE>
<BLANKLINE>
Error in test test_error2 (sample3.sampletests_ntd.TestSomething)
Traceback (most recent call last):
  File "testrunner-ex/sample3/sampletests_ntd.py", Line NNN, in test_error2
    raise TypeError("I hope so")
TypeError: I hope so
<BLANKLINE>
<BLANKLINE>
<BLANKLINE>
Failure in test test_fail1 (sample3.sampletests_ntd.TestSomething)
Traceback (most recent call last):
  File "testrunner-ex/sample3/sampletests_ntd.py", Line NNN, in test_fail1
    self.assertEqual(1, 2)
AssertionError: 1 != 2
<BLANKLINE>
<BLANKLINE>
<BLANKLINE>
Failure in test test_fail2 (sample3.sampletests_ntd.TestSomething)
Traceback (most recent call last):
  File "testrunner-ex/sample3/sampletests_ntd.py", Line NNN, in test_fail2
    self.assertEqual(1, 3)
AssertionError: 1 != 3
<BLANKLINE>
  Ran 6 tests with 2 failures and 2 errors in N.NNN seconds.
  Tear down sample3.sampletests_ntd.Layer ... not supported
Total: 8 tests, 2 failures, 2 errors in N.NNN seconds.
True

In the example above, some of the tests run in a subprocess had errors and failures. They were displayed as usual, and the failure and error statistics were updated accordingly.

Note that debugging doesn't work when running tests in a subprocess:

>>> sys.argv = [testrunner_script, '--tests-pattern', 'sampletests_ntd$',
...             '-D', ]
>>> testrunner.run_internal(defaults)
Running sample1.sampletests_ntd.Layer tests:
  Set up sample1.sampletests_ntd.Layer in N.NNN seconds.
  Ran 1 tests with 0 failures and 0 errors in N.NNN seconds.
Running sample2.sampletests_ntd.Layer tests:
  Tear down sample1.sampletests_ntd.Layer ... not supported
  Running in a subprocess.
  Set up sample2.sampletests_ntd.Layer in N.NNN seconds.
  Ran 1 tests with 0 failures and 0 errors in N.NNN seconds.
  Tear down sample2.sampletests_ntd.Layer ... not supported
Running sample3.sampletests_ntd.Layer tests:
  Running in a subprocess.
  Set up sample3.sampletests_ntd.Layer in N.NNN seconds.
<BLANKLINE>
<BLANKLINE>
Error in test test_error1 (sample3.sampletests_ntd.TestSomething)
Traceback (most recent call last):
  File "testrunner-ex/sample3/sampletests_ntd.py", Line NNN, in test_error1
    raise TypeError("Can we see errors")
TypeError: Can we see errors
<BLANKLINE>
<BLANKLINE>
**********************************************************************
Can't post-mortem debug when running a layer as a subprocess!
**********************************************************************
<BLANKLINE>
<BLANKLINE>
<BLANKLINE>
Error in test test_error2 (sample3.sampletests_ntd.TestSomething)
Traceback (most recent call last):
  File "testrunner-ex/sample3/sampletests_ntd.py", Line NNN, in test_error2
    raise TypeError("I hope so")
TypeError: I hope so
<BLANKLINE>
<BLANKLINE>
**********************************************************************
Can't post-mortem debug when running a layer as a subprocess!
**********************************************************************
<BLANKLINE>
<BLANKLINE>
<BLANKLINE>
Error in test test_fail1 (sample3.sampletests_ntd.TestSomething)
Traceback (most recent call last):
  File "testrunner-ex/sample3/sampletests_ntd.py", Line NNN, in test_fail1
    self.assertEqual(1, 2)
AssertionError: 1 != 2
<BLANKLINE>
<BLANKLINE>
**********************************************************************
Can't post-mortem debug when running a layer as a subprocess!
**********************************************************************
<BLANKLINE>
<BLANKLINE>
<BLANKLINE>
Error in test test_fail2 (sample3.sampletests_ntd.TestSomething)
Traceback (most recent call last):
  File "testrunner-ex/sample3/sampletests_ntd.py", Line NNN, in test_fail2
    self.assertEqual(1, 3)
AssertionError: 1 != 3
<BLANKLINE>
<BLANKLINE>
**********************************************************************
Can't post-mortem debug when running a layer as a subprocess!
**********************************************************************
<BLANKLINE>
  Ran 6 tests with 0 failures and 4 errors in N.NNN seconds.
  Tear down sample3.sampletests_ntd.Layer ... not supported
Total: 8 tests, 0 failures, 4 errors in N.NNN seconds.
True

Similarly, pdb.set_trace doesn't work when running tests in a layer that is run as a subprocess:

>>> sys.argv = [testrunner_script, '--tests-pattern', 'sampletests_ntds']
>>> testrunner.run_internal(defaults)
Running sample1.sampletests_ntds.Layer tests:
  Set up sample1.sampletests_ntds.Layer in 0.000 seconds.
  Ran 1 tests with 0 failures and 0 errors in 0.000 seconds.
Running sample2.sampletests_ntds.Layer tests:
  Tear down sample1.sampletests_ntds.Layer ... not supported
  Running in a subprocess.
  Set up sample2.sampletests_ntds.Layer in 0.000 seconds.
--Return--
> testrunner-ex/sample2/sampletests_ntds.py(37)test_something()->None
-> import pdb; pdb.set_trace()
(Pdb) c
<BLANKLINE>
**********************************************************************
Can't use pdb.set_trace when running a layer as a subprocess!
**********************************************************************
<BLANKLINE>
--Return--
> testrunner-ex/sample2/sampletests_ntds.py(40)test_something2()->None
-> import pdb; pdb.set_trace()
(Pdb) c
<BLANKLINE>
**********************************************************************
Can't use pdb.set_trace when running a layer as a subprocess!
**********************************************************************
<BLANKLINE>
--Return--
> testrunner-ex/sample2/sampletests_ntds.py(43)test_something3()->None
-> import pdb; pdb.set_trace()
(Pdb) c
<BLANKLINE>
**********************************************************************
Can't use pdb.set_trace when running a layer as a subprocess!
**********************************************************************
<BLANKLINE>
--Return--
> testrunner-ex/sample2/sampletests_ntds.py(46)test_something4()->None
-> import pdb; pdb.set_trace()
(Pdb) c
<BLANKLINE>
**********************************************************************
Can't use pdb.set_trace when running a layer as a subprocess!
**********************************************************************
<BLANKLINE>
--Return--
> testrunner-ex/sample2/sampletests_ntds.py(52)f()->None
-> import pdb; pdb.set_trace()
(Pdb) c
<BLANKLINE>
**********************************************************************
Can't use pdb.set_trace when running a layer as a subprocess!
**********************************************************************
<BLANKLINE>
--Return--
> doctest.py(351)set_trace()->None
-> Pdb().set_trace()
(Pdb) c
<BLANKLINE>
**********************************************************************
Can't use pdb.set_trace when running a layer as a subprocess!
**********************************************************************
<BLANKLINE>
--Return--
> doctest.py(351)set_trace()->None
-> Pdb().set_trace()
(Pdb) c
<BLANKLINE>
**********************************************************************
Can't use pdb.set_trace when running a layer as a subprocess!
**********************************************************************
<BLANKLINE>
  Ran 7 tests with 0 failures and 0 errors in 0.008 seconds.
  Tear down sample2.sampletests_ntds.Layer ... not supported
Total: 8 tests, 0 failures, 0 errors in N.NNN seconds.
False

If you want to use pdb from a test in a layer that is run as a subprocess, then rerun the test runner selecting just that layer so that it's not run as a subprocess.

If a test is run in a subprocess and it generates output on stderr (as stderrtest does), the output is ignored (but it doesn't cause a SubprocessError like it once did).

>>> from cStringIO import StringIO
>>> real_stderr = sys.stderr
>>> sys.stderr = StringIO()
>>> sys.argv = [testrunner_script, '-s', 'sample2', '--tests-pattern',
...     '(sampletests_ntd$|stderrtest)']
>>> testrunner.run_internal(defaults)
Running sample2.sampletests_ntd.Layer tests:
  Set up sample2.sampletests_ntd.Layer in 0.000 seconds.
  Ran 1 tests with 0 failures and 0 errors in 0.000 seconds.
Running sample2.stderrtest.Layer tests:
  Tear down sample2.sampletests_ntd.Layer ... not supported
  Running in a subprocess.
  Set up sample2.stderrtest.Layer in 0.000 seconds.
  Ran 1 tests with 0 failures and 0 errors in 0.002 seconds.
  Tear down sample2.stderrtest.Layer in 0.000 seconds.
Total: 2 tests, 0 failures, 0 errors in 0.197 seconds.
False
>>> print(sys.stderr.getvalue())
A message on stderr.  Please ignore (expected in test output).
>>> sys.stderr = real_stderr

Code Coverage

If the --coverage option is used, test coverage reports will be generated. The directory name given as the parameter will be used to hold the reports.

>>> import os.path, sys
>>> directory_with_tests = os.path.join(this_directory, 'testrunner-ex')
>>> defaults = [
...     '--path', directory_with_tests,
...     '--tests-pattern', '^sampletestsf?$',
...     ]
>>> sys.argv = 'test --coverage=coverage_dir'.split()
>>> from zope import testrunner
>>> testrunner.run_internal(defaults)
Running samplelayers.Layer1 tests:
  Set up samplelayers.Layer1 in 0.000 seconds.
  Ran 9 tests with 0 failures and 0 errors in 0.000 seconds.
Running samplelayers.Layer11 tests:
  Set up samplelayers.Layer11 in 0.000 seconds.
  Ran 26 tests with 0 failures and 0 errors in 0.125 seconds.
Running samplelayers.Layer111 tests:
  Set up samplelayers.Layerx in 0.000 seconds.
  Set up samplelayers.Layer111 in 0.000 seconds.
  Ran 26 tests with 0 failures and 0 errors in 0.125 seconds.
Running samplelayers.Layer112 tests:
  Tear down samplelayers.Layer111 in 0.000 seconds.
  Set up samplelayers.Layer112 in 0.000 seconds.
  Ran 26 tests with 0 failures and 0 errors in 0.125 seconds.
Running samplelayers.Layer12 tests:
  Tear down samplelayers.Layer112 in 0.000 seconds.
  Tear down samplelayers.Layerx in 0.000 seconds.
  Tear down samplelayers.Layer11 in 0.000 seconds.
  Set up samplelayers.Layer12 in 0.000 seconds.
  Ran 26 tests with 0 failures and 0 errors in 0.140 seconds.
Running samplelayers.Layer121 tests:
  Set up samplelayers.Layer121 in 0.000 seconds.
  Ran 26 tests with 0 failures and 0 errors in 0.125 seconds.
Running samplelayers.Layer122 tests:
  Tear down samplelayers.Layer121 in 0.000 seconds.
  Set up samplelayers.Layer122 in 0.000 seconds.
  Ran 26 tests with 0 failures and 0 errors in 0.125 seconds.
Running zope.testrunner.layer.UnitTests tests:
  Tear down samplelayers.Layer122 in 0.000 seconds.
  Tear down samplelayers.Layer12 in 0.000 seconds.
  Tear down samplelayers.Layer1 in 0.000 seconds.
  Set up zope.testrunner.layer.UnitTests in 0.000 seconds.
  Ran 156 tests with 0 failures and 0 errors in 0.687 seconds.
Tearing down left over layers:
  Tear down zope.testrunner.layer.UnitTests in 0.000 seconds.
lines   cov%   module   (path)
...
   48   100%   sampletests.test1   (testrunner-ex/sampletests/test1.py)
   74   100%   sampletests.test11   (testrunner-ex/sampletests/test11.py)
   74   100%   sampletests.test111   (testrunner-ex/sampletests/test111.py)
   76   100%   sampletests.test112   (testrunner-ex/sampletests/test112.py)
   74   100%   sampletests.test12   (testrunner-ex/sampletests/test12.py)
   74   100%   sampletests.test121   (testrunner-ex/sampletests/test121.py)
   74   100%   sampletests.test122   (testrunner-ex/sampletests/test122.py)
   48   100%   sampletests.test_one   (testrunner-ex/sampletests/test_one.py)
  112    95%   sampletestsf   (testrunner-ex/sampletestsf.py)
Total: 321 tests, 0 failures, 0 errors in 0.630 seconds.
False

The directory specified with the --coverage option will have been created and will hold the coverage reports.

>>> os.path.exists('coverage_dir')
True
>>> os.listdir('coverage_dir')
[...]

(We should clean up after ourselves.)

>>> import shutil
>>> shutil.rmtree('coverage_dir')

Ignoring Tests

The trace module supports ignoring directories and modules based on the test selection. Only directories selected for testing should report coverage. The test runner provides a custom implementation of the relevant API.

The TestIgnore class, which manages the ignoring, is initialized with the command-line options. It uses the options to determine the directories that should be covered.

>>> class FauxOptions(object):
...   package = None
...   test_path = [('/myproject/src/blah/foo', ''),
...                ('/myproject/src/blah/bar', '')]
>>> from zope.testrunner import coverage
>>> from zope.testrunner.find import test_dirs
>>> ignore = coverage.TestIgnore(test_dirs(FauxOptions(), {}))
>>> ignore._test_dirs
['/myproject/src/blah/foo/', '/myproject/src/blah/bar/']

We can now ask whether a particular module should be ignored:

>>> ignore.names('/myproject/src/blah/foo/baz.py', 'baz')
False
>>> ignore.names('/myproject/src/blah/bar/mine.py', 'mine')
False
>>> ignore.names('/myproject/src/blah/foo/__init__.py', 'foo')
False
>>> ignore.names('/myproject/src/blah/hello.py', 'hello')
True

When running the test runner, modules are sometimes created from text strings. Those should always be ignored:

>>> ignore.names('/myproject/src/blah/hello.txt', '<string>')
True

To make this check fast, the class maintains a cache. An early implementation cached the result by module name, which was a problem, since many modules share the same name (the file name, not the dotted Python name). Just because a module in an ignored directory has the same name as one in a tested directory does not mean both are ignored:

>>> ignore.names('/myproject/src/blah/module.py', 'module')
True
>>> ignore.names('/myproject/src/blah/foo/module.py', 'module')
False
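The path-keyed cache described above can be illustrated with a small sketch (a simplified stand-in, not TestIgnore's actual code; the class and method names are invented):

```python
class PathIgnoreCache(object):
    """Cache ignore decisions by full file path, not bare module name,
    so same-named modules in different directories don't collide."""

    def __init__(self, test_dirs):
        # Normalize to trailing-slash prefixes for prefix matching.
        self._test_dirs = [d.rstrip('/') + '/' for d in test_dirs]
        self._cache = {}

    def should_ignore(self, filename, modulename):
        # Modules compiled from text strings are always ignored.
        if modulename == '<string>':
            return True
        try:
            return self._cache[filename]
        except KeyError:
            ignore = not any(filename.startswith(d)
                             for d in self._test_dirs)
            self._cache[filename] = ignore
            return ignore

ignore = PathIgnoreCache(['/myproject/src/blah/foo'])
print(ignore.should_ignore('/myproject/src/blah/module.py', 'module'))  # True
print(ignore.should_ignore('/myproject/src/blah/foo/module.py', 'module'))  # False
```

Because the cache key is the full path rather than the module name, the two `module.py` files above get independent answers.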

Profiling

The testrunner supports the hotshot and cProfile profilers. hotshot support does not work with Python 2.6 or later.

>>> import os.path, sys
>>> profiler = '--profile=hotshot'
>>> if sys.hexversion >= 0x02060000:
...     profiler = '--profile=cProfile'

The testrunner includes the ability to profile the test execution via the --profile option; hotshot is used only on Python versions before 2.6.

>>> directory_with_tests = os.path.join(this_directory, 'testrunner-ex')
>>> sys.path.append(directory_with_tests)
>>> defaults = [
...     '--path', directory_with_tests,
...     '--tests-pattern', '^sampletestsf?$',
...     ]
>>> sys.argv = [testrunner_script, profiler]

When the tests are run, we get profiling output.

>>> from zope import testrunner
>>> testrunner.run_internal(defaults)
Running samplelayers.Layer1 tests:
...
Running samplelayers.Layer11 tests:
...
Running zope.testrunner.layer.UnitTests tests:
...
   ncalls  tottime  percall  cumtime  percall filename:lineno(function)
...
Total: ... tests, 0 failures, 0 errors in ... seconds.
False
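What --profile=cProfile amounts to can be sketched directly with the standard library (a simplified illustration; the actual runner also manages temporary profile data files):

```python
import cProfile
import io
import pstats

def run_tests():
    # Stand-in for the actual test execution.
    return sum(i * i for i in range(1000))

# Profile the run and format the familiar ncalls/tottime/cumtime table,
# which the runner prints after the test summary.
prof = cProfile.Profile()
prof.enable()
run_tests()
prof.disable()

out = io.StringIO()
pstats.Stats(prof, stream=out).sort_stats('cumulative').print_stats(5)
# out.getvalue() now holds the statistics table, headed by the
# "ncalls  tottime  percall  cumtime  percall" columns shown above.
```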

Profiling also works across layers.

>>> sys.argv = [testrunner_script, '-ssample2', profiler,
...             '--tests-pattern', 'sampletests_ntd']
>>> testrunner.run_internal(defaults)
Running...
  Tear down ... not supported...
   ncalls  tottime  percall  cumtime  percall filename:lineno(function)...

The testrunner creates temporary files containing profiler data:

>>> import glob
>>> files = list(glob.glob('tests_profile.*.prof'))
>>> files.sort()
>>> files
['tests_profile.cZj2jt.prof', 'tests_profile.yHD-so.prof']

It deletes these files when rerun. We'll delete them ourselves:

>>> import os
>>> for f in files:
...     os.unlink(f)

Running Without Source Code

The --usecompiled option allows running tests in a tree without .py source code, provided compiled .pyc or .pyo files exist (without --usecompiled, .py files are necessary).
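The mechanism this relies on is Python's support for sourceless imports: a .pyc placed where the .py would be can be imported on its own. A minimal sketch (the module name and temporary layout are invented for illustration):

```python
import importlib
import os
import py_compile
import sys
import tempfile

# Create a module, compile it to bytecode, then delete the source so
# only the .pyc remains -- the situation --usecompiled is for.
d = tempfile.mkdtemp()
src = os.path.join(d, 'bytecode_only.py')
with open(src, 'w') as f:
    f.write('ANSWER = 42\n')
py_compile.compile(src, cfile=os.path.join(d, 'bytecode_only.pyc'))
os.remove(src)                      # no source left, only bytecode

sys.path.insert(0, d)
mod = importlib.import_module('bytecode_only')
print(mod.ANSWER)                   # 42
```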

We have a very simple directory tree, under usecompiled/, to test this. Because we're going to delete its .py files, we want to work in a copy of that:

>>> import os.path, shutil, sys, tempfile
>>> directory_with_tests = tempfile.mkdtemp()
>>> NEWNAME = "unlikely_package_name"
>>> src = os.path.join(this_directory, 'testrunner-ex', 'usecompiled')
>>> os.path.isdir(src)
True
>>> dst = os.path.join(directory_with_tests, NEWNAME)
>>> os.path.isdir(dst)
False

We have to use our own copying code to avoid copying read-only SVN files that couldn't be deleted later.

>>> n = len(src) + 1
>>> for root, dirs, files in os.walk(src):
...     dirs[:] = [d for d in dirs if d == "package"] # prune cruft
...     os.mkdir(os.path.join(dst, root[n:]))
...     for f in files:
...         shutil.copy(os.path.join(root, f),
...                     os.path.join(dst, root[n:], f))

Now run the tests in the copy:

>>> from zope import testrunner
>>> mydefaults = [
...     '--path', directory_with_tests,
...     '--tests-pattern', '^compiletest$',
...     '--package', NEWNAME,
...     '-vv',
...     ]
>>> sys.argv = ['test']
>>> testrunner.run_internal(mydefaults)
Running tests at level 1
Running zope.testrunner.layer.UnitTests tests:
  Set up zope.testrunner.layer.UnitTests in N.NNN seconds.
  Running:
 test1 (unlikely_package_name.compiletest.Test)
 test2 (unlikely_package_name.compiletest.Test)
 test1 (unlikely_package_name.package.compiletest.Test)
 test2 (unlikely_package_name.package.compiletest.Test)
  Ran 4 tests with 0 failures and 0 errors in N.NNN seconds.
Tearing down left over layers:
  Tear down zope.testrunner.layer.UnitTests in N.NNN seconds.
False

If we delete the source files, it's normally a disaster: the test runner doesn't believe any test files, or even packages, exist. Note that we pass --keepbytecode this time, because otherwise the test runner would delete the compiled Python files too:

>>> for root, dirs, files in os.walk(dst):
...    for f in files:
...        if f.endswith(".py"):
...            os.remove(os.path.join(root, f))
>>> testrunner.run_internal(mydefaults, ["test", "--keepbytecode"])
Running tests at level 1
Total: 0 tests, 0 failures, 0 errors in N.NNN seconds.
False

Finally, passing --usecompiled asks the test runner to treat .pyc and .pyo files as adequate replacements for .py files. Note that the output is the same as when running with .py source above. The absence of "removing stale bytecode ..." messages shows that --usecompiled also implies --keepbytecode:

>>> testrunner.run_internal(mydefaults, ["test", "--usecompiled"])
Running tests at level 1
Running zope.testrunner.layer.UnitTests tests:
  Set up zope.testrunner.layer.UnitTests in N.NNN seconds.
  Running:
 test1 (unlikely_package_name.compiletest.Test)
 test2 (unlikely_package_name.compiletest.Test)
 test1 (unlikely_package_name.package.compiletest.Test)
 test2 (unlikely_package_name.package.compiletest.Test)
  Ran 4 tests with 0 failures and 0 errors in N.NNN seconds.
Tearing down left over layers:
  Tear down zope.testrunner.layer.UnitTests in N.NNN seconds.
False

Remove the temporary directory:

>>> shutil.rmtree(directory_with_tests)

Repeating Tests

The --repeat option can be used to repeat tests some number of times. Repeating tests is useful to help make sure that tests clean up after themselves.

>>> import os.path, sys
>>> directory_with_tests = os.path.join(this_directory, 'testrunner-ex')
>>> defaults = [
...     '--path', directory_with_tests,
...     '--tests-pattern', '^sampletestsf?$',
...     ]
>>> sys.argv = 'test --layer 112 --layer UnitTests --repeat 3'.split()
>>> from zope import testrunner
>>> testrunner.run_internal(defaults)
Running samplelayers.Layer112 tests:
  Set up samplelayers.Layerx in 0.000 seconds.
  Set up samplelayers.Layer1 in 0.000 seconds.
  Set up samplelayers.Layer11 in 0.000 seconds.
  Set up samplelayers.Layer112 in 0.000 seconds.
Iteration 1
  Ran 26 tests with 0 failures and 0 errors in 0.010 seconds.
Iteration 2
  Ran 26 tests with 0 failures and 0 errors in 0.010 seconds.
Iteration 3
  Ran 26 tests with 0 failures and 0 errors in 0.010 seconds.
Running zope.testrunner.layer.UnitTests tests:
  Tear down samplelayers.Layer112 in N.NNN seconds.
  Tear down samplelayers.Layerx in N.NNN seconds.
  Tear down samplelayers.Layer11 in N.NNN seconds.
  Tear down samplelayers.Layer1 in N.NNN seconds.
  Set up zope.testrunner.layer.UnitTests in N.NNN seconds.
Iteration 1
  Ran 156 tests with 0 failures and 0 errors in N.NNN seconds.
Iteration 2
  Ran 156 tests with 0 failures and 0 errors in N.NNN seconds.
Iteration 3
  Ran 156 tests with 0 failures and 0 errors in N.NNN seconds.
Tearing down left over layers:
  Tear down zope.testrunner.layer.UnitTests in N.NNN seconds.
Total: 182 tests, 0 failures, 0 errors in N.NNN seconds.
False

The tests are repeated by layer. Layers are set up and torn down only once.

Garbage Collection Control

When you have problems that seem to be caused by memory-management errors, it can be helpful to adjust Python's cyclic garbage collector or to gather garbage collection statistics. The --gc option can be used for this purpose.

If you think you are getting a test failure due to a garbage collection problem, you can try disabling garbage collection by using the --gc option with a value of zero.

>>> import os.path, sys
>>> directory_with_tests = os.path.join(this_directory, 'testrunner-ex')
>>> defaults = ['--path', directory_with_tests]
>>> from zope import testrunner
>>> sys.argv = 'test --tests-pattern ^gc0$ --gc 0 -vv'.split()
>>> _ = testrunner.run_internal(defaults)
Cyclic garbage collection is disabled.
Running tests at level 1
Running zope.testrunner.layer.UnitTests tests:
  Set up zope.testrunner.layer.UnitTests in N.NNN seconds.
  Running:
 make_sure_gc_is_disabled (gc0)
  Ran 1 tests with 0 failures and 0 errors in N.NNN seconds.
Tearing down left over layers:
  Tear down zope.testrunner.layer.UnitTests in N.NNN seconds.

Alternatively, if you think you are having a garbage collection related problem, you can cause garbage collection to happen more often by providing a low threshold:

>>> sys.argv = 'test --tests-pattern ^gc1$ --gc 1 -vv'.split()
>>> _ = testrunner.run_internal(defaults)
Cyclic garbage collection threshold set to: (1,)
Running tests at level 1
Running zope.testrunner.layer.UnitTests tests:
  Set up zope.testrunner.layer.UnitTests in N.NNN seconds.
  Running:
 make_sure_gc_threshold_is_one (gc1)
  Ran 1 tests with 0 failures and 0 errors in N.NNN seconds.
Tearing down left over layers:
  Tear down zope.testrunner.layer.UnitTests in N.NNN seconds.

You can specify up to 3 --gc options to set each of the 3 gc threshold values:

>>> sys.argv = ('test --tests-pattern ^gcset$ --gc 701 --gc 11 --gc 9 -vv'
...             .split())
>>> _ = testrunner.run_internal(defaults)
Cyclic garbage collection threshold set to: (701, 11, 9)
Running tests at level 1
Running zope.testrunner.layer.UnitTests tests:
  Set up zope.testrunner.layer.UnitTests in N.NNN seconds.
  Running:
 make_sure_gc_threshold_is_701_11_9 (gcset)
  Ran 1 tests with 0 failures and 0 errors in N.NNN seconds.
Tearing down left over layers:
  Tear down zope.testrunner.layer.UnitTests in N.NNN seconds.
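These options map directly onto the standard gc module; a sketch of roughly what the runner does with the collected values (assumed behavior; the helper name is invented):

```python
import gc

def apply_gc_options(values):
    # A single 0 disables cyclic collection entirely; otherwise the
    # one-to-three values become the per-generation thresholds.
    if values == [0]:
        gc.disable()
    else:
        gc.enable()
        gc.set_threshold(*values)

old = gc.get_threshold()
apply_gc_options([701, 11, 9])
print(gc.get_threshold())        # (701, 11, 9)
gc.set_threshold(*old)           # restore the previous thresholds
```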

Specifying more than 3 --gc options is not allowed:

>>> from StringIO import StringIO
>>> out = StringIO()
>>> stdout = sys.stdout
>>> sys.stdout = out
>>> sys.argv = ('test --tests-pattern ^gcset$ --gc 701 --gc 42 --gc 11 --gc 9 -vv'
...             .split())
>>> _ = testrunner.run_internal(defaults)
Traceback (most recent call last):
...
SystemExit: 1
>>> sys.stdout = stdout
>>> print out.getvalue()
Too many --gc options

Garbage Collection Statistics

You can enable gc debugging statistics using the --gc-options (-G) option. You should provide names of one or more of the flags described in the library documentation for the gc module.

The output statistics are written to standard error.
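The flag names are attributes of the gc module; roughly, the runner resolves each name and combines the results for gc.set_debug (a sketch of assumed behavior):

```python
import gc

# Each -G argument names a gc debug flag; resolve the names on the gc
# module and OR them together before enabling debugging.
names = ['DEBUG_STATS', 'DEBUG_COLLECTABLE']
flags = 0
for name in names:
    flags |= getattr(gc, name)
gc.set_debug(flags)          # collection statistics now go to stderr
gc.set_debug(0)              # switch them back off
```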

>>> from StringIO import StringIO
>>> err = StringIO()
>>> stderr = sys.stderr
>>> sys.stderr = err
>>> sys.argv = ('test --tests-pattern ^gcstats$ -G DEBUG_STATS'
...             ' -G DEBUG_COLLECTABLE -vv'
...             .split())
>>> _ = testrunner.run_internal(defaults)
Running tests at level 1
Running zope.testrunner.layer.UnitTests tests:
  Set up zope.testrunner.layer.UnitTests in N.NNN seconds.
  Running:
 generate_some_gc_statistics (gcstats)
  Ran 1 tests with 0 failures and 0 errors in N.NNN seconds.
Tearing down left over layers:
  Tear down zope.testrunner.layer.UnitTests in N.NNN seconds.
>>> sys.stderr = stderr
>>> print err.getvalue()        # doctest: +ELLIPSIS
gc: collecting generation ...

Debugging Memory Leaks

The --report-refcounts (-r) option can be used with the --repeat (-N) option to detect and diagnose memory leaks. To use this option, you must configure Python with the --with-pydebug option. (On Unix, pass this option to configure and then build Python.)

>>> import os.path, sys
>>> directory_with_tests = os.path.join(this_directory, 'testrunner-ex')
>>> defaults = [
...     '--path', directory_with_tests,
...     '--tests-pattern', '^sampletestsf?$',
...     ]
>>> from zope import testrunner
>>> sys.argv = 'test --layer Layer11$ --layer Layer12$ -N4 -r'.split()
>>> _ = testrunner.run_internal(defaults)
Running samplelayers.Layer11 tests:
  Set up samplelayers.Layer1 in 0.000 seconds.
  Set up samplelayers.Layer11 in 0.000 seconds.
Iteration 1
  Ran 26 tests with 0 failures and 0 errors in 0.013 seconds.
Iteration 2
  Ran 26 tests with 0 failures and 0 errors in 0.012 seconds.
  sys refcount=100401   change=0
Iteration 3
  Ran 26 tests with 0 failures and 0 errors in 0.012 seconds.
  sys refcount=100401   change=0
Iteration 4
  Ran 26 tests with 0 failures and 0 errors in 0.013 seconds.
  sys refcount=100401   change=0
Running samplelayers.Layer12 tests:
  Tear down samplelayers.Layer11 in 0.000 seconds.
  Set up samplelayers.Layer12 in 0.000 seconds.
Iteration 1
  Ran 26 tests with 0 failures and 0 errors in 0.013 seconds.
Iteration 2
  Ran 26 tests with 0 failures and 0 errors in 0.012 seconds.
  sys refcount=100411   change=0
Iteration 3
  Ran 26 tests with 0 failures and 0 errors in 0.012 seconds.
  sys refcount=100411   change=0
Iteration 4
  Ran 26 tests with 0 failures and 0 errors in 0.012 seconds.
  sys refcount=100411   change=0
Tearing down left over layers:
  Tear down samplelayers.Layer12 in 0.000 seconds.
  Tear down samplelayers.Layer1 in 0.000 seconds.
Total: 68 tests, 0 failures, 0 errors in N.NNN seconds.

Each layer is repeated the requested number of times. For each iteration after the first, the system refcount and the change in system refcount are shown. The system refcount is the total of all reference counts in the system. When the refcount on any object changes, the system refcount changes by the same amount. Tests that don't leak show zero change in system refcount.
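The underlying measurement is sys.gettotalrefcount, which exists only in a --with-pydebug build; a sketch of the comparison (the helper is invented for illustration and returns None on a normal build):

```python
import sys

def refcount_change(func):
    """Report the change in the interpreter-wide refcount across one run
    of func, after a warm-up run (mirroring why the first iteration above
    prints no change). Returns None on a normal build, where
    sys.gettotalrefcount does not exist."""
    get = getattr(sys, 'gettotalrefcount', None)
    if get is None:
        return None
    func()                 # warm-up: first-run caches would distort the delta
    before = get()
    func()
    return get() - before

leaked = []
def leaky():
    leaked.append(object())   # keeps one extra object alive per call

print(refcount_change(leaky))
```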

Let's look at an example test that leaks:

>>> sys.argv = 'test --tests-pattern leak -N4 -r'.split()
>>> _ = testrunner.run_internal(defaults)
Running zope.testrunner.layer.UnitTests tests:...
Iteration 1
  Ran 1 tests with 0 failures and 0 errors in 0.000 seconds.
Iteration 2
  Ran 1 tests with 0 failures and 0 errors in 0.000 seconds.
  sys refcount=92506    change=12
Iteration 3
  Ran 1 tests with 0 failures and 0 errors in 0.000 seconds.
  sys refcount=92513    change=12
Iteration 4
  Ran 1 tests with 0 failures and 0 errors in 0.000 seconds.
  sys refcount=92520    change=12
Tearing down left over layers:
  Tear down zope.testrunner.layer.UnitTests in N.NNN seconds.

Here we see that the system refcount is increasing. If we specify a verbosity greater than one, we can get details broken out by object type (or class):

>>> sys.argv = 'test --tests-pattern leak -N5 -r -v'.split()
>>> _ = testrunner.run_internal(defaults)
Running tests at level 1
Running zope.testrunner.layer.UnitTests tests:...
Iteration 1
  Running:
    .
  Ran 1 tests with 0 failures and 0 errors in 0.000 seconds.
Iteration 2
  Running:
    .
  Ran 1 tests with 0 failures and 0 errors in 0.000 seconds.
  sum detail refcount=95832    sys refcount=105668   change=16
    Leak details, changes in instances and refcounts by type/class:
    type/class                                               insts   refs
    -------------------------------------------------------  -----   ----
    classobj                                                     0      1
    dict                                                         2      2
    float                                                        1      1
    int                                                          2      2
    leak.ClassicLeakable                                         1      1
    leak.Leakable                                                1      1
    str                                                          0      4
    tuple                                                        1      1
    type                                                         0      3
    -------------------------------------------------------  -----   ----
    total                                                        8     16
Iteration 3
  Running:
    .
  Ran 1 tests with 0 failures and 0 errors in 0.000 seconds.
  sum detail refcount=95844    sys refcount=105680   change=12
    Leak details, changes in instances and refcounts by type/class:
    type/class                                               insts   refs
    -------------------------------------------------------  -----   ----
    classobj                                                     0      1
    dict                                                         2      2
    float                                                        1      1
    int                                                         -1      0
    leak.ClassicLeakable                                         1      1
    leak.Leakable                                                1      1
    str                                                          0      4
    tuple                                                        1      1
    type                                                         0      1
    -------------------------------------------------------  -----   ----
    total                                                        5     12
Iteration 4
  Running:
    .
  Ran 1 tests with 0 failures and 0 errors in 0.000 seconds.
  sum detail refcount=95856    sys refcount=105692   change=12
    Leak details, changes in instances and refcounts by type/class:
    type/class                                               insts   refs
    -------------------------------------------------------  -----   ----
    classobj                                                     0      1
    dict                                                         2      2
    float                                                        1      1
    leak.ClassicLeakable                                         1      1
    leak.Leakable                                                1      1
    str                                                          0      4
    tuple                                                        1      1
    type                                                         0      1
    -------------------------------------------------------  -----   ----
    total                                                        6     12
Iteration 5
  Running:
    .
  Ran 1 tests with 0 failures and 0 errors in 0.000 seconds.
  sum detail refcount=95868    sys refcount=105704   change=12
    Leak details, changes in instances and refcounts by type/class:
    type/class                                               insts   refs
    -------------------------------------------------------  -----   ----
    classobj                                                     0      1
    dict                                                         2      2
    float                                                        1      1
    leak.ClassicLeakable                                         1      1
    leak.Leakable                                                1      1
    str                                                          0      4
    tuple                                                        1      1
    type                                                         0      1
    -------------------------------------------------------  -----   ----
    total                                                        6     12
Tearing down left over layers:
  Tear down zope.testrunner.layer.UnitTests in N.NNN seconds.

It is instructive to analyze the results in some detail. The test being run was designed to intentionally leak:

import time
import unittest

class ClassicLeakable:
    def __init__(self):
        self.x = 'x'

class Leakable(object):
    def __init__(self):
        self.x = 'x'

leaked = []

class TestSomething(unittest.TestCase):

    def testleak(self):
        leaked.append((ClassicLeakable(), Leakable(), time.time()))

Let's go through this by type.

float, leak.ClassicLeakable, leak.Leakable, and tuple
We leak one of these every time. This is to be expected because we are adding one of these to the list every time.
str
We don't leak any instances, but we leak 4 references. These are due to the instance attributes and their values.
dict
We leak 2 of these, one for each ClassicLeakable and Leakable instance.
classobj
We increase the number of classobj references by one each time because each ClassicLeakable instance has a reference to its class. Each new instance increases the reference count of its class, which increases the total number of references to classic classes (classobj instances).
type
For most iterations, we increase the number of type references by one, for the same reason we increase the number of classobj references by one. The increase of 3 in the second iteration is puzzling, but illustrates that this sort of data is often puzzling.
int
The change in the number of int instances and references in this example is a side effect of the statistics being gathered. Lots of integers are created to keep the memory statistics used here.

The summary statistics include the sum of the detail refcounts. (Note that this sum is less than the system refcount. This is because the detailed analysis doesn't inspect every object. Not all objects in the system are returned by sys.getobjects.)

Knitting in extra package directories

Python packages have __path__ variables that can be manipulated to add extra directories containing software used in the packages. The testrunner needs to be given extra information about this sort of situation.

Let's look at an example. The testrunner-ex-knit-lib directory is a directory that we want to add to the Python path, but that we don't want to search for tests. It has a sample4 package and a products subpackage. The products subpackage adds the testrunner-ex-knit-products directory to its __path__. We want to run tests from the testrunner-ex-knit-products directory, and when we import those tests, we need to import them from the sample4.products package. We can't use the --path option to name testrunner-ex-knit-products: it isn't enough to add the containing directory to the test path, because then we wouldn't be able to determine the package name properly. We might be able to use the --package option to run the tests from the sample4/products package, but we also want to run tests in testrunner-ex that aren't in this package.

We can use the --package-path option in this case. The --package-path option is like the --test-path option in that it defines a path to be searched for tests without affecting the Python path. It differs in that it supplies a package name that is used as a prefix when importing any modules found. The --package-path option takes two arguments: a file path and a package name.
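The package-__path__ knitting itself is plain Python; a minimal sketch that builds a throwaway package whose __init__ extends __path__, mirroring what sample4.products does (all names here are invented):

```python
import os
import sys
import tempfile

# Lay out a package and a separate "knitted-in" directory.
root = tempfile.mkdtemp()
pkg = os.path.join(root, 'knitpkg')
extra = os.path.join(root, 'extra_products')
os.makedirs(pkg)
os.makedirs(extra)

# The package's __init__ appends the extra directory to its __path__,
# so submodules there import as knitpkg.<name>.
with open(os.path.join(pkg, '__init__.py'), 'w') as f:
    f.write('__path__.append(%r)\n' % extra)
with open(os.path.join(extra, 'plugmod.py'), 'w') as f:
    f.write("MARKER = 'found via knitted path'\n")

sys.path.insert(0, root)
from knitpkg import plugmod   # resolved through the extended __path__
print(plugmod.MARKER)
```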

>>> import os.path, sys
>>> directory_with_tests = os.path.join(this_directory, 'testrunner-ex')
>>> sys.path.append(os.path.join(this_directory, 'testrunner-ex-pp-lib'))
>>> defaults = [
...     '--path', directory_with_tests,
...     '--tests-pattern', '^sampletestsf?$',
...     '--package-path',
...     os.path.join(this_directory, 'testrunner-ex-pp-products'),
...     'sample4.products',
...     ]
>>> from zope import testrunner
>>> sys.argv = 'test --layer Layer111 -vv'.split()
>>> _ = testrunner.run_internal(defaults)
Running tests at level 1
Running samplelayers.Layer111 tests:
  Set up samplelayers.Layerx in 0.000 seconds.
  Set up samplelayers.Layer1 in 0.000 seconds.
  Set up samplelayers.Layer11 in 0.000 seconds.
  Set up samplelayers.Layer111 in 0.000 seconds.
  Running:
    test_x1 (sample1.sampletests.test111.TestA)
    test_y0 (sample1.sampletests.test111.TestA)
    ...
    test_y0 (sampletests.test111)
    test_z1 (sampletests.test111)
    testrunner-ex/sampletests/../sampletestsl.txt
    test_extra_test_in_products (sample4.products.sampletests.Test)
    test_another_test_in_products (sample4.products.more.sampletests.Test)
  Ran 28 tests with 0 failures and 0 errors in 0.008 seconds.
Tearing down left over layers:
  Tear down samplelayers.Layer111 in 0.000 seconds.
  Tear down samplelayers.Layerx in 0.000 seconds.
  Tear down samplelayers.Layer11 in 0.000 seconds.
  Tear down samplelayers.Layer1 in 0.000 seconds.

In the example, the last two tests, test_extra_test_in_products and test_another_test_in_products, came from the products directory. As usual, we can select the knit-in packages or individual packages within knit-in packages:

>>> sys.argv = 'test --package sample4.products -vv'.split()
>>> _ = testrunner.run_internal(defaults)
Running tests at level 1
Running samplelayers.Layer111 tests:
  Set up samplelayers.Layerx in 0.000 seconds.
  Set up samplelayers.Layer1 in 0.000 seconds.
  Set up samplelayers.Layer11 in 0.000 seconds.
  Set up samplelayers.Layer111 in 0.000 seconds.
  Running:
    test_extra_test_in_products (sample4.products.sampletests.Test)
    test_another_test_in_products (sample4.products.more.sampletests.Test)
  Ran 2 tests with 0 failures and 0 errors in 0.000 seconds.
Tearing down left over layers:
  Tear down samplelayers.Layer111 in 0.000 seconds.
  Tear down samplelayers.Layerx in 0.000 seconds.
  Tear down samplelayers.Layer11 in 0.000 seconds.
  Tear down samplelayers.Layer1 in 0.000 seconds.
>>> sys.argv = 'test --package sample4.products.more -vv'.split()
>>> _ = testrunner.run_internal(defaults)
Running tests at level 1
Running samplelayers.Layer111 tests:
  Set up samplelayers.Layerx in 0.000 seconds.
  Set up samplelayers.Layer1 in 0.000 seconds.
  Set up samplelayers.Layer11 in 0.000 seconds.
  Set up samplelayers.Layer111 in 0.000 seconds.
  Running:
    test_another_test_in_products (sample4.products.more.sampletests.Test)
  Ran 1 tests with 0 failures and 0 errors in 0.000 seconds.
Tearing down left over layers:
  Tear down samplelayers.Layer111 in 0.000 seconds.
  Tear down samplelayers.Layerx in 0.000 seconds.
  Tear down samplelayers.Layer11 in 0.000 seconds.
  Tear down samplelayers.Layer1 in 0.000 seconds.

testrunner Edge Cases

This document has some edge-case examples to test various aspects of the test runner.

Separating Python path and test directories

The --path option defines a directory to be searched for tests and a directory to be added to Python's search path. The --test-path option can be used when you want to set a test search path without also affecting the Python path:

>>> import os, sys
>>> directory_with_tests = os.path.join(this_directory, 'testrunner-ex')
>>> from zope import testrunner
>>> defaults = [
...     '--test-path', directory_with_tests,
...     '--tests-pattern', '^sampletestsf?$',
...     ]
>>> sys.argv = ['test']
>>> testrunner.run_internal(defaults)
... # doctest: +ELLIPSIS
Test-module import failures:
<BLANKLINE>
Module: sampletestsf
<BLANKLINE>
Traceback (most recent call last):
ImportError: No module named sampletestsf
...
>>> sys.path.append(directory_with_tests)
>>> sys.argv = ['test']
>>> testrunner.run_internal(defaults)
... # doctest: +ELLIPSIS
Running samplelayers.Layer1 tests:
  Set up samplelayers.Layer1 in 0.000 seconds.
  Ran 9 tests with 0 failures and 0 errors in 0.000 seconds.
...
Running zope.testrunner.layer.UnitTests tests:
  Tear down samplelayers.Layer122 in N.NNN seconds.
  Tear down samplelayers.Layer12 in N.NNN seconds.
  Tear down samplelayers.Layer1 in N.NNN seconds.
  Set up zope.testrunner.layer.UnitTests in N.NNN seconds.
  Ran 156 tests with 0 failures and 0 errors in N.NNN seconds.
Tearing down left over layers:
  Tear down zope.testrunner.layer.UnitTests in N.NNN seconds.
Total: 321 tests, 0 failures, 0 errors in N.NNN seconds.
False

Bug #251759: The test runner's protection against descending into non-package directories was ineffective, e.g. picking up tests from eggs that were stored close by:

>>> directory_with_tests = os.path.join(this_directory, 'testrunner-ex-251759')
>>> defaults = [
...     '--test-path', directory_with_tests,
...     ]
>>> testrunner.run_internal(defaults)
Total: 0 tests, 0 failures, 0 errors in 0.000 seconds.
False

Debugging Edge Cases

>>> directory_with_tests = os.path.join(this_directory, 'testrunner-ex')
>>> defaults = [
...     '--test-path', directory_with_tests,
...     '--tests-pattern', '^sampletestsf?$',
...     ]
>>> class Input:
...     def __init__(self, src):
...         self.lines = src.split('\n')
...     def readline(self):
...         line = self.lines.pop(0)
...         print line
...         return line+'\n'
>>> real_stdin = sys.stdin

Using pdb.set_trace in a function called by an ordinary test:

>>> sys.stdin = Input('p x\nc')
>>> sys.argv = ('test -ssample3 --tests-pattern ^sampletests_d$'
...             ' -t set_trace2').split()
>>> try: testrunner.run_internal(defaults)
... finally: sys.stdin = real_stdin
... # doctest: +ELLIPSIS
Running zope.testrunner.layer.UnitTests tests:...
> testrunner-ex/sample3/sampletests_d.py(47)f()
-> y = x
(Pdb) p x
1
(Pdb) c
  Ran 1 tests with 0 failures and 0 errors in 0.001 seconds.
...
False

Using pdb.set_trace in a function called by a doctest in a doc string:

>>> sys.stdin = Input('n\np x\nc')
>>> sys.argv = ('test -ssample3 --tests-pattern ^sampletests_d$'
...             ' -t set_trace4').split()
>>> try: testrunner.run_internal(defaults)
... finally: sys.stdin = real_stdin
Running zope.testrunner.layer.UnitTests tests:
  Set up zope.testrunner.layer.UnitTests in N.NNN seconds.
> testrunner-ex/sample3/sampletests_d.py(NNN)f()
-> y = x
(Pdb) n
--Return--
> ...->None
-> y = x
(Pdb) p x
1
(Pdb) c
  Ran 1 tests with 0 failures and 0 errors in N.NNN seconds.
Tearing down left over layers:
  Tear down zope.testrunner.layer.UnitTests in N.NNN seconds.
False

Using pdb in a docstring-based doctest

>>> sys.stdin = Input('n\np x\nc')
>>> sys.argv = ('test -ssample3 --tests-pattern ^sampletests_d$'
...             ' -t set_trace3').split()
>>> try: testrunner.run_internal(defaults)
... finally: sys.stdin = real_stdin
Running zope.testrunner.layer.UnitTests tests:
  Set up zope.testrunner.layer.UnitTests in N.NNN seconds.
> <doctest sample3.sampletests_d.set_trace3[1]>(3)?()
-> y = x
(Pdb) n
--Return--
> ...->None
-> y = x
(Pdb) p x
1
(Pdb) c
  Ran 1 tests with 0 failures and 0 errors in N.NNN seconds.
Tearing down left over layers:
  Tear down zope.testrunner.layer.UnitTests in N.NNN seconds.
False

Using pdb.set_trace in a doc file:

>>> sys.stdin = Input('n\np x\nc')
>>> sys.argv = ('test -ssample3 --tests-pattern ^sampletests_d$'
...             ' -t set_trace5').split()
>>> try: testrunner.run_internal(defaults)
... finally: sys.stdin = real_stdin
Running zope.testrunner.layer.UnitTests tests:
  Set up zope.testrunner.layer.UnitTests in N.NNN seconds.
> <doctest set_trace5.txt[1]>(3)?()
-> y = x
(Pdb) n
--Return--
> ...->None
-> y = x
(Pdb) p x
1
(Pdb) c
  Ran 1 tests with 0 failures and 0 errors in N.NNN seconds.
Tearing down left over layers:
  Tear down zope.testrunner.layer.UnitTests in N.NNN seconds.
False

Using pdb.set_trace in a function called by a doctest in a doc file:

>>> sys.stdin = Input('n\np x\nc')
>>> sys.argv = ('test -ssample3 --tests-pattern ^sampletests_d$'
...             ' -t set_trace6').split()
>>> try: testrunner.run_internal(defaults)
... finally: sys.stdin = real_stdin
Running zope.testrunner.layer.UnitTests tests:
  Set up zope.testrunner.layer.UnitTests in N.NNN seconds.
> testrunner-ex/sample3/sampletests_d.py(NNN)f()
-> y = x
(Pdb) n
--Return--
> ...->None
-> y = x
(Pdb) p x
1
(Pdb) c
  Ran 1 tests with 0 failures and 0 errors in N.NNN seconds.
Tearing down left over layers:
  Tear down zope.testrunner.layer.UnitTests in N.NNN seconds.
False
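
Outside the test runner, the same scripted-debugger effect can be had by handing pdb.Pdb explicit input and output streams instead of replacing sys.stdin. A minimal sketch with an invented function f (not the sample code above):

```python
import io
import pdb

def f():
    x = 1
    # Scripted command stream, mirroring the Input('p x\nc') helper used
    # in the examples above: print x, then continue.
    commands = io.StringIO("p x\nc\n")
    output = io.StringIO()
    debugger = pdb.Pdb(stdin=commands, stdout=output)
    debugger.set_trace()  # the debugger stops on the next line
    y = x
    return y, output.getvalue()

result, transcript = f()
# transcript holds the "(Pdb) " prompts and the printed value of x.
```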

Post-mortem debugging function called from ordinary test:

>>> sys.stdin = Input('p x\nc')
>>> sys.argv = ('test -ssample3 --tests-pattern ^sampletests_d$'
...             ' -t post_mortem2 -D').split()
>>> try: testrunner.run_internal(defaults)
... finally: sys.stdin = real_stdin
... # doctest: +NORMALIZE_WHITESPACE
Running zope.testrunner.layer.UnitTests tests:...
<BLANKLINE>
<BLANKLINE>
Error in test test_post_mortem2 (sample3.sampletests_d.TestSomething)
Traceback (most recent call last):
  File "testrunner-ex/sample3/sampletests_d.py",
       line 37, in test_post_mortem2
    g()
  File "testrunner-ex/sample3/sampletests_d.py", line 46, in g
    raise ValueError
ValueError
<BLANKLINE>
...ValueError:
<BLANKLINE>
> testrunner-ex/sample3/sampletests_d.py(46)g()
-> raise ValueError
(Pdb) p x
1
(Pdb) c
True

Post-mortem debugging docstring-based doctest:

>>> sys.stdin = Input('p x\nc')
>>> sys.argv = ('test -ssample3 --tests-pattern ^sampletests_d$'
...             ' -t post_mortem3 -D').split()
>>> try: testrunner.run_internal(defaults)
... finally: sys.stdin = real_stdin
... # doctest: +NORMALIZE_WHITESPACE
Running zope.testrunner.layer.UnitTests tests:
  Set up zope.testrunner.layer.UnitTests in N.NNN seconds.
<BLANKLINE>
<BLANKLINE>
Error in test post_mortem3 (sample3.sampletests_d)
Traceback (most recent call last):
...UnexpectedException: testrunner-ex/sample3/sampletests_d.py:NNN (2 examples)>
<BLANKLINE>
...ValueError:
<BLANKLINE>
> <doctest sample3.sampletests_d.post_mortem3[1]>(1)?()
(Pdb) p x
1
(Pdb) c
True

Post-mortem debugging function called from docstring-based doctest:

>>> sys.stdin = Input('p x\nc')
>>> sys.argv = ('test -ssample3 --tests-pattern ^sampletests_d$'
...             ' -t post_mortem4 -D').split()
>>> try: testrunner.run_internal(defaults)
... finally: sys.stdin = real_stdin
... # doctest: +NORMALIZE_WHITESPACE
Running zope.testrunner.layer.UnitTests tests:
  Set up zope.testrunner.layer.UnitTests in N.NNN seconds.
<BLANKLINE>
<BLANKLINE>
Error in test post_mortem4 (sample3.sampletests_d)
Traceback (most recent call last):
...UnexpectedException: testrunner-ex/sample3/sampletests_d.py:NNN (1 example)>
<BLANKLINE>
...ValueError:
<BLANKLINE>
> testrunner-ex/sample3/sampletests_d.py(NNN)g()
-> raise ValueError
(Pdb) p x
1
(Pdb) c
True

Post-mortem debugging file-based doctest:

>>> sys.stdin = Input('p x\nc')
>>> sys.argv = ('test -ssample3 --tests-pattern ^sampletests_d$'
...             ' -t post_mortem5 -D').split()
>>> try: testrunner.run_internal(defaults)
... finally: sys.stdin = real_stdin
... # doctest: +NORMALIZE_WHITESPACE
Running zope.testrunner.layer.UnitTests tests:
  Set up zope.testrunner.layer.UnitTests in N.NNN seconds.
<BLANKLINE>
<BLANKLINE>
Error testrunner-ex/sample3/post_mortem5.txt
Traceback (most recent call last):
...UnexpectedException: testrunner-ex/sample3/post_mortem5.txt:0 (2 examples)>
<BLANKLINE>
...ValueError:
<BLANKLINE>
> <doctest post_mortem5.txt[1]>(1)?()
(Pdb) p x
1
(Pdb) c
True

Post-mortem debugging function called from file-based doctest:

>>> sys.stdin = Input('p x\nc')
>>> sys.argv = ('test -ssample3 --tests-pattern ^sampletests_d$'
...             ' -t post_mortem6 -D').split()
>>> try: testrunner.run_internal(defaults)
... finally: sys.stdin = real_stdin
... # doctest: +NORMALIZE_WHITESPACE
Running zope.testrunner.layer.UnitTests tests:...
  Set up zope.testrunner.layer.UnitTests in N.NNN seconds.
<BLANKLINE>
<BLANKLINE>
Error testrunner-ex/sample3/post_mortem6.txt
Traceback (most recent call last):
  File ".../zope/testing/doctest/__init__.py", Line NNN, in debug
    runner.run(self._dt_test, clear_globs=False)
  File ".../zope/testing/doctest/__init__.py", Line NNN, in run
    r = DocTestRunner.run(self, test, compileflags, out, False)
  File ".../zope/testing/doctest/__init__.py", Line NNN, in run
    return self.__run(test, compileflags, out)
  File ".../zope/testing/doctest/__init__.py", Line NNN, in __run
    exc_info)
  File ".../zope/testing/doctest/__init__.py", Line NNN, in report_unexpected_exception
    raise UnexpectedException(test, example, exc_info)
...UnexpectedException: testrunner-ex/sample3/post_mortem6.txt:0 (2 examples)>
<BLANKLINE>
...ValueError:
<BLANKLINE>
> testrunner-ex/sample3/sampletests_d.py(NNN)g()
-> raise ValueError
(Pdb) p x
1
(Pdb) c
True

Post-mortem debugging of a docstring doctest failure:

>>> sys.stdin = Input('p x\nc')
>>> sys.argv = ('test -ssample3 --tests-pattern ^sampletests_d$'
...             ' -t post_mortem_failure2 -D').split()
>>> try: testrunner.run_internal(defaults)
... finally: sys.stdin = real_stdin
... # doctest: +NORMALIZE_WHITESPACE
Running zope.testrunner.layer.UnitTests tests:...
<BLANKLINE>
<BLANKLINE>
Error in test post_mortem_failure2 (sample3.sampletests_d)
<BLANKLINE>
File "testrunner-ex/sample3/sampletests_d.py",
               line 81, in sample3.sampletests_d.post_mortem_failure2
<BLANKLINE>
x
<BLANKLINE>
Want:
2
<BLANKLINE>
Got:
1
<BLANKLINE>
<BLANKLINE>
> testrunner-ex/sample3/sampletests_d.py(81)_()
...ValueError:
Expected and actual output are different
> <string>(1)...()
(Pdb) p x
1
(Pdb) c
True

Post-mortem debugging of a docfile doctest failure:

>>> sys.stdin = Input('p x\nc')
>>> sys.argv = ('test -ssample3 --tests-pattern ^sampletests_d$'
...             ' -t post_mortem_failure.txt -D').split()
>>> try: testrunner.run_internal(defaults)
... finally: sys.stdin = real_stdin
... # doctest: +NORMALIZE_WHITESPACE
Running zope.testrunner.layer.UnitTests tests:...
<BLANKLINE>
<BLANKLINE>
Error in test /home/jim/z3/zope.testrunner/src/zope/testing/testrunner-ex/sample3/post_mortem_failure.txt
<BLANKLINE>
File "testrunner-ex/sample3/post_mortem_failure.txt",
                                  line 2, in post_mortem_failure.txt
<BLANKLINE>
x
<BLANKLINE>
Want:
2
<BLANKLINE>
Got:
1
<BLANKLINE>
<BLANKLINE>
> testrunner-ex/sample3/post_mortem_failure.txt(2)_()
...ValueError:
Expected and actual output are different
> <string>(1)...()
(Pdb) p x
1
(Pdb) c
True
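
The -D behaviour boils down to pdb's post-mortem machinery: the traceback of the failing code is handed to the debugger after the fact. A sketch with an invented function g and scripted input; this mirrors what pdb.post_mortem does internally:

```python
import io
import pdb
import sys

def g():
    x = 1
    raise ValueError

try:
    g()
except ValueError:
    tb = sys.exc_info()[2]
    output = io.StringIO()
    # Scripted "p x" then "c", as Input('p x\nc') does in the examples above.
    debugger = pdb.Pdb(stdin=io.StringIO("p x\nc\n"), stdout=output)
    debugger.reset()
    # Post-mortem: inspect the frames of the traceback after the fact.
    debugger.interaction(None, tb)
    transcript = output.getvalue()
```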

Post-mortem debugging with triple verbosity:

>>> sys.argv = 'test --layer samplelayers.Layer1$ -vvv -D'.split()
>>> testrunner.run_internal(defaults)
Running tests at level 1
Running samplelayers.Layer1 tests:
  Set up samplelayers.Layer1 in 0.000 seconds.
  Running:
    test_x1 (sampletestsf.TestA1) (0.000 s)
    test_y0 (sampletestsf.TestA1) (0.000 s)
    test_z0 (sampletestsf.TestA1) (0.000 s)
    test_x0 (sampletestsf.TestB1) (0.000 s)
    test_y1 (sampletestsf.TestB1) (0.000 s)
    test_z0 (sampletestsf.TestB1) (0.000 s)
    test_1 (sampletestsf.TestNotMuch1) (0.000 s)
    test_2 (sampletestsf.TestNotMuch1) (0.000 s)
    test_3 (sampletestsf.TestNotMuch1) (0.000 s)
  Ran 9 tests with 0 failures and 0 errors in 0.001 seconds.
Tearing down left over layers:
  Tear down samplelayers.Layer1 in 0.000 seconds.
False

Test Suites with None for suites or tests

>>> sys.argv = ['test',
...             '--tests-pattern', '^sampletests_none_suite$',
...     ]
>>> testrunner.run_internal(defaults)
Test-module import failures:
<BLANKLINE>
Module: sample1.sampletests_none_suite
<BLANKLINE>
Traceback (most recent call last):
TypeError: Invalid test_suite, None, in sample1.sampletests_none_suite
<BLANKLINE>
<BLANKLINE>
<BLANKLINE>
Test-modules with import problems:
  sample1.sampletests_none_suite
Total: 0 tests, 0 failures, 0 errors in N.NNN seconds.
True
>>> sys.argv = ['test',
...             '--tests-pattern', '^sampletests_none_test$',
...     ]
>>> testrunner.run_internal(defaults)
Test-module import failures:
<BLANKLINE>
Module: sample1.sampletests_none_test
<BLANKLINE>
Traceback (most recent call last):
TypeError: ...
<BLANKLINE>
<BLANKLINE>
<BLANKLINE>
Test-modules with import problems:
  sample1.sampletests_none_test
Total: 0 tests, 0 failures, 0 errors in N.NNN seconds.
True
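
A module that the runner accepts defines a module-level test_suite callable returning an actual suite. A minimal sketch of that hook (the test class here is invented):

```python
import unittest

class TestSomething(unittest.TestCase):
    def test_trivial(self):
        self.assertTrue(True)

def test_suite():
    # zope.testrunner calls this module-level hook; returning None
    # (or a suite containing None) produces the TypeError shown above.
    suite = unittest.TestSuite()
    suite.addTest(
        unittest.defaultTestLoader.loadTestsFromTestCase(TestSomething))
    return suite
```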

You must use --repeat with --report-refcounts

It is an error to specify --report-refcounts (-r) without specifying a repeat count greater than 1:

>>> sys.argv = 'test -r'.split()
>>> testrunner.run_internal(defaults)
        You must use the --repeat (-N) option to specify a repeat
        count greater than 1 when using the --report_refcounts (-r)
        option.
<BLANKLINE>
True
>>> sys.argv = 'test -r -N1'.split()
>>> testrunner.run_internal(defaults)
        You must use the --repeat (-N) option to specify a repeat
        count greater than 1 when using the --report_refcounts (-r)
        option.
<BLANKLINE>
True

Errors and Failures

Let's look at tests that have errors and failures. First, we need to make a temporary copy of the entire testing directory (excluding .svn files, which may be read-only):

>>> import os.path, sys, tempfile, shutil
>>> tmpdir = tempfile.mkdtemp()
>>> directory_with_tests = os.path.join(tmpdir, 'testrunner-ex')
>>> source = os.path.join(this_directory, 'testrunner-ex')
>>> n = len(source) + 1
>>> for root, dirs, files in os.walk(source):
...     dirs[:] = [d for d in dirs if d != ".svn"] # prune cruft
...     os.mkdir(os.path.join(directory_with_tests, root[n:]))
...     for f in files:
...         shutil.copy(os.path.join(root, f),
...                     os.path.join(directory_with_tests, root[n:], f))
>>> from zope import testrunner
>>> defaults = [
...     '--path', directory_with_tests,
...     '--tests-pattern', '^sampletestsf?$',
...     ]
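
On Python 2.6 and later, the same pruning copy can be written with shutil.copytree and an ignore callback. A sketch using throwaway temporary paths rather than the real testrunner-ex tree:

```python
import os
import shutil
import tempfile

# Build a tiny source tree containing one test module and some .svn cruft.
src = tempfile.mkdtemp()
os.mkdir(os.path.join(src, ".svn"))
open(os.path.join(src, "sampletests.py"), "w").close()

# ignore_patterns returns a callable that copytree consults per directory.
dst = os.path.join(tempfile.mkdtemp(), "testrunner-ex")
shutil.copytree(src, dst, ignore=shutil.ignore_patterns(".svn"))

copied = sorted(os.listdir(dst))  # the .svn directory is pruned
```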
>>> sys.argv = 'test --tests-pattern ^sampletests(f|_e|_f)?$ '.split()
>>> testrunner.run_internal(defaults)
... # doctest: +NORMALIZE_WHITESPACE +ELLIPSIS
Running samplelayers.Layer1 tests:
...
Running zope.testrunner.layer.UnitTests tests:
...
<BLANKLINE>
<BLANKLINE>
Failure in test eek (sample2.sampletests_e)
Failed doctest test for sample2.sampletests_e.eek
  File "testrunner-ex/sample2/sampletests_e.py", line 28, in eek
<BLANKLINE>
----------------------------------------------------------------------
File "testrunner-ex/sample2/sampletests_e.py", line 30, in sample2.sampletests_e.eek
Failed example:
    f()
Exception raised:
    Traceback (most recent call last):
      File ".../doctest.py", line 1256, in __run
        compileflags, 1) in test.globs
      File "<doctest sample2.sampletests_e.eek[0]>", line 1, in ?
        f()
      File "testrunner-ex/sample2/sampletests_e.py", line 19, in f
        g()
      File "testrunner-ex/sample2/sampletests_e.py", line 24, in g
        x = y + 1
       - __traceback_info__: I don't know what Y should be.
    NameError: global name 'y' is not defined
<BLANKLINE>
<BLANKLINE>
<BLANKLINE>
Error in test test3 (sample2.sampletests_e.Test)
Traceback (most recent call last):
  File "testrunner-ex/sample2/sampletests_e.py", line 43, in test3
    f()
  File "testrunner-ex/sample2/sampletests_e.py", line 19, in f
    g()
  File "testrunner-ex/sample2/sampletests_e.py", line 24, in g
    x = y + 1
   - __traceback_info__: I don't know what Y should be.
NameError: global name 'y' is not defined
<BLANKLINE>
<BLANKLINE>
<BLANKLINE>
Failure in test testrunner-ex/sample2/e.txt
Failed doctest test for e.txt
  File "testrunner-ex/sample2/e.txt", line 0
<BLANKLINE>
----------------------------------------------------------------------
File "testrunner-ex/sample2/e.txt", line 4, in e.txt
Failed example:
    f()
Exception raised:
    Traceback (most recent call last):
      File ".../doctest.py", line 1256, in __run
        compileflags, 1) in test.globs
      File "<doctest e.txt[1]>", line 1, in ?
        f()
      File "<doctest e.txt[0]>", line 2, in f
        return x
    NameError: global name 'x' is not defined
<BLANKLINE>
<BLANKLINE>
<BLANKLINE>
Failure in test test (sample2.sampletests_f.Test)
Traceback (most recent call last):
  File "testrunner-ex/sample2/sampletests_f.py", line 21, in test
    self.assertEqual(1,0)
  File "/usr/local/python/2.3/lib/python2.3/unittest.py", line 302, in failUnlessEqual
    raise self.failureException, \
AssertionError: 1 != 0
<BLANKLINE>
  Ran 164 tests with 3 failures and 1 errors in N.NNN seconds.
Tearing down left over layers:
  Tear down zope.testrunner.layer.UnitTests in N.NNN seconds.
Total: 329 tests, 3 failures, 1 errors in N.NNN seconds.
True

We see that we get an error report and a traceback for the failing test. In addition, the test runner returned True, indicating that there was an error.

If we ask for verbosity, the dotted output will be interrupted, and there'll be a summary of the errors at the end of the test:

>>> sys.argv = 'test --tests-pattern ^sampletests(f|_e|_f)?$ -uv'.split()
>>> testrunner.run_internal(defaults)
... # doctest: +NORMALIZE_WHITESPACE +REPORT_NDIFF
Running tests at level 1
Running zope.testrunner.layer.UnitTests tests:
...
  Running:
.................................................................................................
<BLANKLINE>
Failure in test eek (sample2.sampletests_e)
Failed doctest test for sample2.sampletests_e.eek
  File "testrunner-ex/sample2/sampletests_e.py", line 28, in eek
<BLANKLINE>
----------------------------------------------------------------------
File "testrunner-ex/sample2/sampletests_e.py", line 30,
    in sample2.sampletests_e.eek
Failed example:
    f()
Exception raised:
    Traceback (most recent call last):
      File ".../doctest.py", line 1256, in __run
        compileflags, 1) in test.globs
      File "<doctest sample2.sampletests_e.eek[0]>", line 1, in ?
        f()
      File "testrunner-ex/sample2/sampletests_e.py", line 19, in f
        g()
      File "testrunner-ex/sample2/sampletests_e.py", line 24, in g
        x = y + 1
       - __traceback_info__: I don't know what Y should be.
    NameError: global name 'y' is not defined
<BLANKLINE>
...
<BLANKLINE>
<BLANKLINE>
Error in test test3 (sample2.sampletests_e.Test)
Traceback (most recent call last):
  File "testrunner-ex/sample2/sampletests_e.py", line 43, in test3
    f()
  File "testrunner-ex/sample2/sampletests_e.py", line 19, in f
    g()
  File "testrunner-ex/sample2/sampletests_e.py", line 24, in g
    x = y + 1
   - __traceback_info__: I don't know what Y should be.
NameError: global name 'y' is not defined
<BLANKLINE>
...
<BLANKLINE>
Failure in test testrunner-ex/sample2/e.txt
Failed doctest test for e.txt
  File "testrunner-ex/sample2/e.txt", line 0
<BLANKLINE>
----------------------------------------------------------------------
File "testrunner-ex/sample2/e.txt", line 4, in e.txt
Failed example:
    f()
Exception raised:
    Traceback (most recent call last):
      File ".../doctest.py", line 1256, in __run
        compileflags, 1) in test.globs
      File "<doctest e.txt[1]>", line 1, in ?
        f()
      File "<doctest e.txt[0]>", line 2, in f
        return x
    NameError: global name 'x' is not defined
<BLANKLINE>
.
<BLANKLINE>
Failure in test test (sample2.sampletests_f.Test)
Traceback (most recent call last):
  File "testrunner-ex/sample2/sampletests_f.py", line 21, in test
    self.assertEqual(1,0)
  File ".../unittest.py", line 302, in failUnlessEqual
    raise self.failureException, \
AssertionError: 1 != 0
<BLANKLINE>
................................................................................................
<BLANKLINE>
  Ran 164 tests with 3 failures and 1 errors in 0.040 seconds.
...
<BLANKLINE>
Tests with errors:
   test3 (sample2.sampletests_e.Test)
<BLANKLINE>
Tests with failures:
   eek (sample2.sampletests_e)
   testrunner-ex/sample2/e.txt
   test (sample2.sampletests_f.Test)
True

Similarly for progress output, the progress ticker will be interrupted:

>>> sys.argv = ('test --tests-pattern ^sampletests(f|_e|_f)?$ -u -ssample2'
...             ' -p').split()
>>> testrunner.run_internal(defaults)
... # doctest: +NORMALIZE_WHITESPACE +REPORT_NDIFF
Running zope.testrunner.layer.UnitTests tests:
  Set up zope.testrunner.layer.UnitTests in N.NNN seconds.
  Running:
    1/47 (2.1%)
<BLANKLINE>
Failure in test eek (sample2.sampletests_e)
Failed doctest test for sample2.sampletests_e.eek
  File "testrunner-ex/sample2/sampletests_e.py", line 28, in eek
<BLANKLINE>
----------------------------------------------------------------------
File "testrunner-ex/sample2/sampletests_e.py", line 30, in sample2.sampletests_e.eek
Failed example:
    f()
Exception raised:
    Traceback (most recent call last):
      File ".../doctest.py", line 1256, in __run
        compileflags, 1) in test.globs
      File "<doctest sample2.sampletests_e.eek[0]>", line 1, in ?
        f()
      File "testrunner-ex/sample2/sampletests_e.py", line 19, in f
        g()
      File "testrunner-ex/sample2/sampletests_e.py", line 24, in g
        x = y + 1
       - __traceback_info__: I don't know what Y should be.
    NameError: global name 'y' is not defined
<BLANKLINE>
    2/47 (4.3%)\r
               \r
    3/47 (6.4%)\r
               \r
    4/47 (8.5%)
<BLANKLINE>
Error in test test3 (sample2.sampletests_e.Test)
Traceback (most recent call last):
  File "testrunner-ex/sample2/sampletests_e.py", line 43, in test3
    f()
  File "testrunner-ex/sample2/sampletests_e.py", line 19, in f
    g()
  File "testrunner-ex/sample2/sampletests_e.py", line 24, in g
    x = y + 1
   - __traceback_info__: I don't know what Y should be.
NameError: global name 'y' is not defined
<BLANKLINE>
    5/47 (10.6%)\r
               \r
    6/47 (12.8%)\r
                \r
    7/47 (14.9%)
<BLANKLINE>
Failure in test testrunner-ex/sample2/e.txt
Failed doctest test for e.txt
  File "testrunner-ex/sample2/e.txt", line 0
<BLANKLINE>
----------------------------------------------------------------------
File "testrunner-ex/sample2/e.txt", line 4, in e.txt
Failed example:
    f()
Exception raised:
    Traceback (most recent call last):
      File ".../doctest.py", line 1256, in __run
        compileflags, 1) in test.globs
      File "<doctest e.txt[1]>", line 1, in ?
        f()
      File "<doctest e.txt[0]>", line 2, in f
        return x
    NameError: global name 'x' is not defined
<BLANKLINE>
    8/47 (17.0%)
<BLANKLINE>
Failure in test test (sample2.sampletests_f.Test)
Traceback (most recent call last):
  File "testrunner-ex/sample2/sampletests_f.py", line 21, in test
    self.assertEqual(1,0)
  File ".../unittest.py", line 302, in failUnlessEqual
    raise self.failureException, \
AssertionError: 1 != 0
<BLANKLINE>
    9/47 (19.1%)\r
                \r
    10/47 (21.3%)\r
                 \r
    11/47 (23.4%)\r
                 \r
    12/47 (25.5%)\r
                 \r
    13/47 (27.7%)\r
                 \r
    14/47 (29.8%)\r
                 \r
    15/47 (31.9%)\r
                 \r
    16/47 (34.0%)\r
                 \r
    17/47 (36.2%)\r
                 \r
    18/47 (38.3%)\r
                 \r
    19/47 (40.4%)\r
                 \r
    20/47 (42.6%)\r
                 \r
    21/47 (44.7%)\r
                 \r
    22/47 (46.8%)\r
                 \r
    23/47 (48.9%)\r
                 \r
    24/47 (51.1%)\r
                 \r
    25/47 (53.2%)\r
                 \r
    26/47 (55.3%)\r
                 \r
    27/47 (57.4%)\r
                 \r
    28/47 (59.6%)\r
                 \r
    29/47 (61.7%)\r
                 \r
    30/47 (63.8%)\r
                 \r
    31/47 (66.0%)\r
                 \r
    32/47 (68.1%)\r
                 \r
    33/47 (70.2%)\r
                 \r
    34/47 (72.3%)\r
                 \r
    35/47 (74.5%)\r
                 \r
    36/47 (76.6%)\r
                 \r
    37/47 (78.7%)\r
                 \r
    38/47 (80.9%)\r
                 \r
    39/47 (83.0%)\r
                 \r
    40/47 (85.1%)\r
                 \r
    41/47 (87.2%)\r
                 \r
    42/47 (89.4%)\r
                 \r
    43/47 (91.5%)\r
                 \r
    44/47 (93.6%)\r
                 \r
    45/47 (95.7%)\r
                 \r
    46/47 (97.9%)\r
                 \r
    47/47 (100.0%)\r
                  \r
<BLANKLINE>
  Ran 47 tests with 3 failures and 1 errors in 0.054 seconds.
Tearing down left over layers:
  Tear down zope.testrunner.layer.UnitTests in N.NNN seconds.
True

If you also want a summary of errors at the end, ask for verbosity as well as progress output.

Suppressing multiple doctest errors

Often, when a doctest example fails, the failure will cause later examples in the same test to fail. Each failure is reported:

>>> sys.argv = 'test --tests-pattern ^sampletests_1$'.split()
>>> testrunner.run_internal(defaults) # doctest: +NORMALIZE_WHITESPACE
Running zope.testrunner.layer.UnitTests tests:
  Set up zope.testrunner.layer.UnitTests in N.NNN seconds.
<BLANKLINE>
<BLANKLINE>
Failure in test eek (sample2.sampletests_1)
Failed doctest test for sample2.sampletests_1.eek
  File "testrunner-ex/sample2/sampletests_1.py", line 17, in eek
<BLANKLINE>
----------------------------------------------------------------------
File "testrunner-ex/sample2/sampletests_1.py", line 19,
     in sample2.sampletests_1.eek
Failed example:
    x = y
Exception raised:
    Traceback (most recent call last):
      File ".../doctest.py", line 1256, in __run
        compileflags, 1) in test.globs
      File "<doctest sample2.sampletests_1.eek[0]>", line 1, in ?
        x = y
    NameError: name 'y' is not defined
----------------------------------------------------------------------
File "testrunner-ex/sample2/sampletests_1.py", line 21,
     in sample2.sampletests_1.eek
Failed example:
    x
Exception raised:
    Traceback (most recent call last):
      File ".../doctest.py", line 1256, in __run
        compileflags, 1) in test.globs
      File "<doctest sample2.sampletests_1.eek[1]>", line 1, in ?
        x
    NameError: name 'x' is not defined
----------------------------------------------------------------------
File "testrunner-ex/sample2/sampletests_1.py", line 24,
     in sample2.sampletests_1.eek
Failed example:
    z = x + 1
Exception raised:
    Traceback (most recent call last):
      File ".../doctest.py", line 1256, in __run
        compileflags, 1) in test.globs
      File "<doctest sample2.sampletests_1.eek[2]>", line 1, in ?
        z = x + 1
    NameError: name 'x' is not defined
<BLANKLINE>
  Ran 1 tests with 1 failures and 0 errors in 0.002 seconds.
Tearing down left over layers:
  Tear down zope.testrunner.layer.UnitTests in N.NNN seconds.
True

This can be a bit confusing, especially when there are enough tests that they scroll off the screen. Often you just want to see the first failure. This can be accomplished with the -1 option (as in "just show me the first failed example in a doctest"):

>>> sys.argv = 'test --tests-pattern ^sampletests_1$ -1'.split()
>>> testrunner.run_internal(defaults) # doctest: +NORMALIZE_WHITESPACE
Running zope.testrunner.layer.UnitTests tests:
  Set up zope.testrunner.layer.UnitTests in N.NNN seconds.
<BLANKLINE>
<BLANKLINE>
Failure in test eek (sample2.sampletests_1)
Failed doctest test for sample2.sampletests_1.eek
  File "testrunner-ex/sample2/sampletests_1.py", line 17, in eek
<BLANKLINE>
----------------------------------------------------------------------
File "testrunner-ex/sample2/sampletests_1.py", line 19,
     in sample2.sampletests_1.eek
Failed example:
    x = y
Exception raised:
    Traceback (most recent call last):
      File ".../doctest.py", line 1256, in __run
        compileflags, 1) in test.globs
      File "<doctest sample2.sampletests_1.eek[0]>", line 1, in ?
        x = y
    NameError: name 'y' is not defined
<BLANKLINE>
  Ran 1 tests with 1 failures and 0 errors in 0.001 seconds.
Tearing down left over layers:
  Tear down zope.testrunner.layer.UnitTests in N.NNN seconds.
True
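
Under the hood, -1 corresponds to doctest's REPORT_ONLY_FIRST_FAILURE option flag (a stdlib doctest detail, not part of the examples above). A sketch with an invented two-example doctest, counting how many failures reach the report:

```python
import doctest

# Two examples that both fail with NameError.
SOURCE = "\n".join([">>> x = y", ">>> x"]) + "\n"

def failures_reported(flags):
    test = doctest.DocTestParser().get_doctest(
        SOURCE, {}, "demo", "demo.py", 0)
    chunks = []
    runner = doctest.DocTestRunner(optionflags=flags, verbose=False)
    runner.run(test, out=chunks.append)
    # Each reported failing example contributes one "Exception raised".
    return "".join(chunks).count("Exception raised")

both = failures_reported(0)
first_only = failures_reported(doctest.REPORT_ONLY_FIRST_FAILURE)
```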

The --hide-secondary-failures option is an alias for -1:

>>> sys.argv = (
...     'test --tests-pattern ^sampletests_1$'
...     ' --hide-secondary-failures'
...     ).split()
>>> testrunner.run_internal(defaults) # doctest: +NORMALIZE_WHITESPACE
Running zope.testrunner.layer.UnitTests tests:
  Set up zope.testrunner.layer.UnitTests in N.NNN seconds.
<BLANKLINE>
<BLANKLINE>
Failure in test eek (sample2.sampletests_1)
Failed doctest test for sample2.sampletests_1.eek
  File "testrunner-ex/sample2/sampletests_1.py", line 17, in eek
<BLANKLINE>
----------------------------------------------------------------------
File "testrunner-ex/sample2/sampletests_1.py", line 19, in sample2.sampletests_1.eek
Failed example:
    x = y
Exception raised:
    Traceback (most recent call last):
      File ".../doctest.py", line 1256, in __run
        compileflags, 1) in test.globs
      File "<doctest sample2.sampletests_1.eek[0]>", line 1, in ?
        x = y
    NameError: name 'y' is not defined
<BLANKLINE>
  Ran 1 tests with 1 failures and 0 errors in 0.001 seconds.
Tearing down left over layers:
  Tear down zope.testrunner.layer.UnitTests in N.NNN seconds.
True

The --show-secondary-failures option counters -1 (or its alias), causing the second and subsequent errors to be shown. This is useful when a test script supplies -1 by inserting it ahead of the command-line options in sys.argv.

>>> sys.argv = (
...     'test --tests-pattern ^sampletests_1$'
...     ' --hide-secondary-failures --show-secondary-failures'
...     ).split()
>>> testrunner.run_internal(defaults) # doctest: +NORMALIZE_WHITESPACE
Running zope.testrunner.layer.UnitTests tests:
  Set up zope.testrunner.layer.UnitTests in N.NNN seconds.
<BLANKLINE>
<BLANKLINE>
Failure in test eek (sample2.sampletests_1)
Failed doctest test for sample2.sampletests_1.eek
  File "testrunner-ex/sample2/sampletests_1.py", line 17, in eek
<BLANKLINE>
----------------------------------------------------------------------
File "testrunner-ex/sample2/sampletests_1.py", line 19, in sample2.sampletests_1.eek
Failed example:
    x = y
Exception raised:
    Traceback (most recent call last):
      File ".../doctest.py", line 1256, in __run
        compileflags, 1) in test.globs
      File "<doctest sample2.sampletests_1.eek[0]>", line 1, in ?
        x = y
    NameError: name 'y' is not defined
----------------------------------------------------------------------
File "testrunner-ex/sample2/sampletests_1.py", line 21, in sample2.sampletests_1.eek
Failed example:
    x
Exception raised:
    Traceback (most recent call last):
      File ".../doctest.py", line 1256, in __run
        compileflags, 1) in test.globs
      File "<doctest sample2.sampletests_1.eek[1]>", line 1, in ?
        x
    NameError: name 'x' is not defined
----------------------------------------------------------------------
File "testrunner-ex/sample2/sampletests_1.py", line 24, in sample2.sampletests_1.eek
Failed example:
    z = x + 1
Exception raised:
    Traceback (most recent call last):
      File ".../doctest.py", line 1256, in __run
        compileflags, 1) in test.globs
      File "<doctest sample2.sampletests_1.eek[2]>", line 1, in ?
        z = x + 1
    NameError: name 'x' is not defined
<BLANKLINE>
  Ran 1 tests with 1 failures and 0 errors in 0.002 seconds.
Tearing down left over layers:
  Tear down zope.testrunner.layer.UnitTests in N.NNN seconds.
True

Getting diff output for doctest failures

If a doctest has large expected and actual output, it can be hard to see differences when expected and actual output differ. The --ndiff, --udiff, and --cdiff options can be used to get diff output of various kinds.

>>> sys.argv = 'test --tests-pattern ^pledge$'.split()
>>> _ = testrunner.run_internal(defaults)
Running zope.testrunner.layer.UnitTests tests:
  Set up zope.testrunner.layer.UnitTests in N.NNN seconds.
<BLANKLINE>
<BLANKLINE>
Failure in test pledge (pledge)
Failed doctest test for pledge.pledge
  File "testrunner-ex/pledge.py", line 24, in pledge
<BLANKLINE>
----------------------------------------------------------------------
File "testrunner-ex/pledge.py", line 26, in pledge.pledge
Failed example:
    print_pledge()
Expected:
    I give my pledge, as an earthling,
    to save, and faithfully, to defend from waste,
    the natural resources of my planet.
    It's soils, minerals, forests, waters, and wildlife.
    <BLANKLINE>
Got:
    I give my pledge, as and earthling,
    to save, and faithfully, to defend from waste,
    the natural resources of my planet.
    It's soils, minerals, forests, waters, and wildlife.
    <BLANKLINE>
<BLANKLINE>
  Ran 1 tests with 1 failures and 0 errors in 0.002 seconds.
Tearing down left over layers:
  Tear down zope.testrunner.layer.UnitTests in N.NNN seconds.

Here, the actual output uses the word "and" rather than the word "an", but it's a bit hard to pick this out. We can use the various diff outputs to see this better. We could modify the test to ask for diff output, but it's easier to use one of the diff options.

The --ndiff option requests a diff using Python's ndiff utility. This is the only method that marks differences within lines as well as across lines. For example, if a line of expected output contains digit 1 where actual output contains letter l, a line is inserted with a caret marking the mismatching column positions.

>>> sys.argv = 'test --tests-pattern ^pledge$ --ndiff'.split()
>>> _ = testrunner.run_internal(defaults)
Running zope.testrunner.layer.UnitTests tests:
  Set up zope.testrunner.layer.UnitTests in N.NNN seconds.
<BLANKLINE>
<BLANKLINE>
Failure in test pledge (pledge)
Failed doctest test for pledge.pledge
  File "testrunner-ex/pledge.py", line 24, in pledge
<BLANKLINE>
----------------------------------------------------------------------
File "testrunner-ex/pledge.py", line 26, in pledge.pledge
Failed example:
    print_pledge()
Differences (ndiff with -expected +actual):
    - I give my pledge, as an earthling,
    + I give my pledge, as and earthling,
    ?                        +
      to save, and faithfully, to defend from waste,
      the natural resources of my planet.
      It's soils, minerals, forests, waters, and wildlife.
    <BLANKLINE>
<BLANKLINE>
  Ran 1 tests with 1 failures and 0 errors in 0.003 seconds.
Tearing down left over layers:
  Tear down zope.testrunner.layer.UnitTests in N.NNN seconds.
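
The ndiff output shown above comes from the stdlib difflib module; the "?" guide lines marking intraline differences can be reproduced directly. A sketch using the pledge lines quoted above:

```python
import difflib

expected = ["I give my pledge, as an earthling,",
            "to save, and faithfully, to defend from waste,"]
actual = ["I give my pledge, as and earthling,",
          "to save, and faithfully, to defend from waste,"]

# ndiff marks changed lines with -/+ and inserts "?" guide lines that
# point a caret or plus sign at the differing columns.
diff = [line.rstrip("\n") for line in difflib.ndiff(expected, actual)]
```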

The --udiff option requests a standard "unified" diff:

>>> sys.argv = 'test --tests-pattern ^pledge$ --udiff'.split()
>>> _ = testrunner.run_internal(defaults)
Running zope.testrunner.layer.UnitTests tests:
  Set up zope.testrunner.layer.UnitTests in N.NNN seconds.
<BLANKLINE>
<BLANKLINE>
Failure in test pledge (pledge)
Failed doctest test for pledge.pledge
  File "testrunner-ex/pledge.py", line 24, in pledge
<BLANKLINE>
----------------------------------------------------------------------
File "testrunner-ex/pledge.py", line 26, in pledge.pledge
Failed example:
    print_pledge()
Differences (unified diff with -expected +actual):
    @@ -1,3 +1,3 @@
    -I give my pledge, as an earthling,
    +I give my pledge, as and earthling,
     to save, and faithfully, to defend from waste,
     the natural resources of my planet.
<BLANKLINE>
  Ran 1 tests with 1 failures and 0 errors in 0.002 seconds.
Tearing down left over layers:
  Tear down zope.testrunner.layer.UnitTests in N.NNN seconds.
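
Similarly, the unified form is produced by difflib.unified_diff, with the default three lines of context. A sketch with the same pledge lines:

```python
import difflib

expected = ["I give my pledge, as an earthling,",
            "to save, and faithfully, to defend from waste,",
            "the natural resources of my planet."]
actual = ["I give my pledge, as and earthling,",
          "to save, and faithfully, to defend from waste,",
          "the natural resources of my planet."]

# lineterm="" suits line-oriented input that carries no trailing newlines.
diff = list(difflib.unified_diff(expected, actual, lineterm=""))
```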

The --cdiff option requests a standard "context" diff:

>>> sys.argv = 'test --tests-pattern ^pledge$ --cdiff'.split()
>>> _ = testrunner.run_internal(defaults)
Running zope.testrunner.layer.UnitTests tests:
  Set up zope.testrunner.layer.UnitTests in N.NNN seconds.
<BLANKLINE>
<BLANKLINE>
Failure in test pledge (pledge)
Failed doctest test for pledge.pledge
  File "testrunner-ex/pledge.py", line 24, in pledge
<BLANKLINE>
----------------------------------------------------------------------
File "testrunner-ex/pledge.py", line 26, in pledge.pledge
Failed example:
    print_pledge()
Differences (context diff with expected followed by actual):
    ***************
    *** 1,3 ****
    ! I give my pledge, as an earthling,
      to save, and faithfully, to defend from waste,
      the natural resources of my planet.
    --- 1,3 ----
    ! I give my pledge, as and earthling,
      to save, and faithfully, to defend from waste,
      the natural resources of my planet.
<BLANKLINE>
  Ran 1 tests with 1 failures and 0 errors in 0.002 seconds.
Tearing down left over layers:
  Tear down zope.testrunner.layer.UnitTests in N.NNN seconds.

Specifying more than one diff option at once causes an error:

>>> sys.argv = 'test --tests-pattern ^pledge$ --cdiff --udiff'.split()
>>> _ = testrunner.run_internal(defaults)
Traceback (most recent call last):
...
SystemExit: 1
>>> sys.argv = 'test --tests-pattern ^pledge$ --cdiff --ndiff'.split()
>>> _ = testrunner.run_internal(defaults)
Traceback (most recent call last):
...
SystemExit: 1
>>> sys.argv = 'test --tests-pattern ^pledge$ --udiff --ndiff'.split()
>>> _ = testrunner.run_internal(defaults)
Traceback (most recent call last):
...
SystemExit: 1
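
The same kind of mutual exclusion can be sketched with argparse's mutually exclusive groups (a hypothetical illustration only, not the testrunner's actual option parsing; note also that argparse exits with status 2 on an argument error, whereas the testrunner above exits with 1):

```python
import argparse

# Model --ndiff/--udiff/--cdiff as mutually exclusive flags.
parser = argparse.ArgumentParser(prog="test")
group = parser.add_mutually_exclusive_group()
group.add_argument("--ndiff", action="store_true")
group.add_argument("--udiff", action="store_true")
group.add_argument("--cdiff", action="store_true")

try:
    # Passing two of the flags together triggers an error and sys.exit().
    parser.parse_args(["--cdiff", "--udiff"])
except SystemExit as e:
    print("exited with code", e.code)  # argparse uses exit status 2
```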

Testing-Module Import Errors

If there are errors when importing a test module, those errors are reported. To illustrate, we need a module with a syntax error. Such a module used to be checked in to the project, but it was then included in distributions of projects using zope.testrunner, and distutils complained about the syntax error when it compiled Python files during installation of those projects. So instead we create the module with bad syntax on the fly:

>>> badsyntax_path = os.path.join(directory_with_tests,
...                               "sample2", "sampletests_i.py")
>>> f = open(badsyntax_path, "w")
>>> print >> f, "importx unittest"  # syntax error
>>> f.close()

Then run the tests:

>>> sys.argv = ('test --tests-pattern ^sampletests(f|_i)?$ --layer 1 '
...            ).split()
>>> testrunner.run_internal(defaults)
... # doctest: +NORMALIZE_WHITESPACE
Test-module import failures:
<BLANKLINE>
Module: sample2.sampletests_i
<BLANKLINE>
Traceback (most recent call last):
  File "testrunner-ex/sample2/sampletests_i.py", line 1
    importx unittest
                   ^
SyntaxError: invalid syntax
<BLANKLINE>
<BLANKLINE>
Module: sample2.sample21.sampletests_i
<BLANKLINE>
Traceback (most recent call last):
  File "testrunner-ex/sample2/sample21/sampletests_i.py", line 15, in ?
    import zope.testrunner.huh
ImportError: No module named huh
<BLANKLINE>
<BLANKLINE>
Module: sample2.sample23.sampletests_i
<BLANKLINE>
Traceback (most recent call last):
  File "testrunner-ex/sample2/sample23/sampletests_i.py", line 18, in ?
    class Test(unittest.TestCase):
  File "testrunner-ex/sample2/sample23/sampletests_i.py", line 23, in Test
    raise TypeError('eek')
TypeError: eek
<BLANKLINE>
<BLANKLINE>
Running samplelayers.Layer1 tests:
  Set up samplelayers.Layer1 in 0.000 seconds.
  Ran 9 tests with 0 failures and 0 errors in 0.000 seconds.
Running samplelayers.Layer11 tests:
  Set up samplelayers.Layer11 in 0.000 seconds.
  Ran 26 tests with 0 failures and 0 errors in 0.007 seconds.
Running samplelayers.Layer111 tests:
  Set up samplelayers.Layerx in 0.000 seconds.
  Set up samplelayers.Layer111 in 0.000 seconds.
  Ran 26 tests with 0 failures and 0 errors in 0.007 seconds.
Running samplelayers.Layer112 tests:
  Tear down samplelayers.Layer111 in 0.000 seconds.
  Set up samplelayers.Layer112 in 0.000 seconds.
  Ran 26 tests with 0 failures and 0 errors in 0.007 seconds.
Running samplelayers.Layer12 tests:
  Tear down samplelayers.Layer112 in 0.000 seconds.
  Tear down samplelayers.Layerx in 0.000 seconds.
  Tear down samplelayers.Layer11 in 0.000 seconds.
  Set up samplelayers.Layer12 in 0.000 seconds.
  Ran 26 tests with 0 failures and 0 errors in 0.007 seconds.
Running samplelayers.Layer121 tests:
  Set up samplelayers.Layer121 in 0.000 seconds.
  Ran 26 tests with 0 failures and 0 errors in 0.007 seconds.
Running samplelayers.Layer122 tests:
  Tear down samplelayers.Layer121 in 0.000 seconds.
  Set up samplelayers.Layer122 in 0.000 seconds.
  Ran 26 tests with 0 failures and 0 errors in 0.006 seconds.
Tearing down left over layers:
  Tear down samplelayers.Layer122 in 0.000 seconds.
  Tear down samplelayers.Layer12 in 0.000 seconds.
  Tear down samplelayers.Layer1 in 0.000 seconds.
<BLANKLINE>
Test-modules with import problems:
  sample2.sampletests_i
  sample2.sample21.sampletests_i
  sample2.sample23.sampletests_i
Total: 165 tests, 0 failures, 0 errors in N.NNN seconds.
True
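
The first import failure above comes from Python's own SyntaxError. It can be reproduced in isolation with compile(), so no file on disk is needed; the filename argument here only labels the error report:

```python
# Compile the same bad-syntax line the testrunner tripped over.
try:
    compile("importx unittest", "sampletests_i.py", "exec")
except SyntaxError as e:
    # SyntaxError carries the filename and line number shown in the report.
    print("SyntaxError in", e.filename, "line", e.lineno)
```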

Reporting Errors to Calling Processes

The testrunner exits with an error status when tests fail. This is useful for an invoking process that wants to monitor the result of a test run.

This happens when the testrunner is invoked via the run() function, which calls sys.exit(), instead of run_internal():

>>> sys.argv = (
...     'test --tests-pattern ^sampletests_1$'.split())
>>> try:
...     testrunner.run(defaults)
... except SystemExit, e:
...     print 'exited with code', e.code
... else:
...     print 'sys.exit was not called'
... # doctest: +ELLIPSIS
Running zope.testrunner.layer.UnitTests tests:
  Set up zope.testrunner.layer.UnitTests in N.NNN seconds.
...
  Ran 1 tests with 1 failures and 0 errors in 0.002 seconds.
Tearing down left over layers:
  Tear down zope.testrunner.layer.UnitTests in N.NNN seconds.
exited with code 1

Following UNIX conventions, a passing test run exits with code 0:

>>> sys.argv = (
...     'test --tests-pattern ^sampletests$'.split())
>>> try:
...     testrunner.run(defaults)
... except SystemExit, e2:
...     print 'exited with code', e2.code
... else:
...     print 'sys.exit was not called'
... # doctest: +ELLIPSIS
Running samplelayers.Layer11 tests:
...
Running zope.testrunner.layer.UnitTests tests:
  Tear down samplelayers.Layer122 in N.NNN seconds.
  Tear down samplelayers.Layer12 in N.NNN seconds.
  Tear down samplelayers.Layer1 in N.NNN seconds.
  Set up zope.testrunner.layer.UnitTests in N.NNN seconds.
  Ran 130 tests with 0 failures and 0 errors in N.NNN seconds.
Tearing down left over layers:
  Tear down zope.testrunner.layer.UnitTests in N.NNN seconds.
Total: 286 tests, 0 failures, 0 errors in N.NNN seconds.
exited with code 0
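
A calling process, such as a CI script, can branch on this status directly. A minimal sketch, using a child that exits nonzero to stand in for a failing bin/test run:

```python
import subprocess
import sys

# Run a child process that fails; this simulates `bin/test` exiting with 1.
result = subprocess.run([sys.executable, "-c", "import sys; sys.exit(1)"])

# The parent inspects the exit status, just as a CI wrapper would.
if result.returncode == 0:
    print("tests passed")
else:
    print("tests failed with code", result.returncode)
```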

And remove the temporary directory:

>>> shutil.rmtree(tmpdir)
 