author    Robert Collins <robertc@robertcollins.net>  2009-03-11 00:05:11 +1100
committer Robert Collins <robertc@robertcollins.net>  2009-03-11 00:05:11 +1100
commit    2934942897b593aa74451e4642b4e540d7594b30 (patch)
tree      5f4fa4401978fcc89fcd307e50f087cea358ca04
parent    9a44909bfe18cb1ad888bab684913fe3f8dee046 (diff)
parent    c68c7ccf8643b684e36f2086001fca8cb3b49cf8 (diff)
download  testscenarios-git-2934942897b593aa74451e4642b4e540d7594b30.tar.gz
Merge Martin Pool's improvements to README, making it doctestable, more ReST compliant, and a bit easier to read.
-rw-r--r--  Makefile              3
-rw-r--r--  NEWS                  5
-rw-r--r--  README              176
-rw-r--r--  doc/__init__.py       1
-rw-r--r--  doc/test_sample.py    6
-rwxr-xr-x  test_all.py           6
6 files changed, 135 insertions, 62 deletions
diff --git a/Makefile b/Makefile
index 73a862b..ec83908 100644
--- a/Makefile
+++ b/Makefile
@@ -1,9 +1,10 @@
PYTHONPATH:=$(shell pwd)/lib:${PYTHONPATH}
+PYTHON ?= python
all:
check:
- PYTHONPATH=$(PYTHONPATH) python ./test_all.py $(TESTRULE)
+ PYTHONPATH=$(PYTHONPATH) $(PYTHON) ./test_all.py $(TESTRULE)
clean:
find . -name '*.pyc' -print0 | xargs -0 rm -f
diff --git a/NEWS b/NEWS
index 2769ea1..55a6cdb 100644
--- a/NEWS
+++ b/NEWS
@@ -14,6 +14,11 @@ IN DEVELOPMENT
README.
(Robert Collins)
+ * Make the README documentation doctest compatible, to be sure it works.
+ Also various presentation and language touchups. (Martin Pool)
+ (Adjusted to use doctest directly, and to not print the demo runner's
+ output to stderr during make check - Robert Collins)
+
IMPROVEMENTS:
BUG FIXES:
diff --git a/README b/README
index f18a027..c94fc7e 100644
--- a/README
+++ b/README
@@ -1,5 +1,6 @@
-testscenarios: extensions to python unittest to allow declarative
-dependency injection ('scenarios') by tests.
+*****************************************************************
+testscenarios: extensions to python unittest to support scenarios
+*****************************************************************
Copyright (C) 2009 Robert Collins <robertc@robertcollins.net>
@@ -24,11 +25,11 @@ a single test suite) or for classic dependency injection (provide tests with
dependencies externally to the test code itself, allowing easy testing in
different situations).
-Dependencies:
-=============
+Dependencies
+============
* Python 2.4+
-* testtools
+* testtools <https://launchpad.net/testtools>
Why TestScenarios
@@ -51,19 +52,33 @@ It is the intent of testscenarios to make dynamically running a single test
in multiple scenarios clear, easy to debug and work with even when the list
of scenarios is dynamically generated.
-Getting Scenarios applied:
-==========================
+
+Defining Scenarios
+==================
+
+A **scenario** is a tuple of a string name for the scenario, and a dict of
+parameters describing the scenario. The name is appended to the test name, and
+the parameters are made available to the test instance when it's run.
+
+Scenarios are presented in **scenario lists** which are typically Python lists
+but may be any iterable.
+
+
+Getting Scenarios applied
+=========================
At its heart the concept is simple. For a given test object with a list of
scenarios we prepare a new test object for each scenario. This involves:
- * Clone the test to a new test with a new id uniquely distinguishing it.
- * Apply the scenario to the test by setting each key, value in the scenario
- as attributes on the test object.
+
+* Clone the test to a new test with a new id uniquely distinguishing it.
+* Apply the scenario to the test by setting each key, value in the scenario
+ as attributes on the test object.
There are some complicating factors around making this happen seamlessly. These
factors are in two areas:
- * Choosing what scenarios to use. (See Setting Scenarios For A Test).
- * Getting the multiplication to happen.
+
+* Choosing what scenarios to use. (See Setting Scenarios For A Test).
+* Getting the multiplication to happen.
Subclassing
++++++++++++
@@ -84,10 +99,22 @@ TwistedTestCase, or TestCaseWithResources, or any one of a number of other
useful test base classes, or need to override run() or __call__ yourself) then
you can cause scenario application to happen later by calling
``testscenarios.generate_scenarios()``. For instance::
- >>> mytests = loader.loadTestsFromNames([...])
- >>> test_suite = TestSuite()
+
+ >>> import unittest
+ >>> import StringIO
+ >>> from testscenarios.scenarios import generate_scenarios
+
+This can work with loaders and runners from the standard library, or possibly other
+implementations::
+
+ >>> loader = unittest.TestLoader()
+ >>> test_suite = unittest.TestSuite()
+ >>> runner = unittest.TextTestRunner(stream=StringIO.StringIO())
+
+ >>> mytests = loader.loadTestsFromNames(['doc.test_sample'])
>>> test_suite.addTests(generate_scenarios(mytests))
>>> runner.run(test_suite)
+ <unittest._TextTestResult run=1 errors=0 failures=0>
Testloaders
+++++++++++
@@ -100,35 +127,37 @@ course, if you are using the subclassing approach this is already a surety).
With ``load_tests``::
>>> def load_tests(standard_tests, module, loader):
- >>> result = loader.suiteClass()
- >>> result.addTests(generate_scenarios(standard_tests))
- >>> return result
+ ... result = loader.suiteClass()
+ ... result.addTests(generate_scenarios(standard_tests))
+ ... return result
With ``test_suite``::
>>> def test_suite():
- >>> loader = TestLoader()
- >>> tests = loader.loadTestsFromName(__name__)
- >>> result = loader.suiteClass()
- >>> result.addTests(generate_scenarios(tests))
- >>> return result
+ ... loader = TestLoader()
+ ... tests = loader.loadTestsFromName(__name__)
+ ... result = loader.suiteClass()
+ ... result.addTests(generate_scenarios(tests))
+ ... return result
-Setting Scenarios for a test:
-=============================
+Setting Scenarios for a test
+============================
A sample test using scenarios can be found in the doc/ folder.
-See pydoc testscenarios for details.
+See ``pydoc testscenarios`` for details.
On the TestCase
+++++++++++++++
You can set a scenarios attribute on the test case::
- >>> class MyTest(TestCase):
- >>>
- >>> scenarios = [scenario1, scenario2, ...]
+ >>> class MyTest(unittest.TestCase):
+ ...
+ ... scenarios = [
+ ... ('scenario1', dict(param=1)),
+ ... ('scenario2', dict(param=2)),]
This provides the main interface by which scenarios are found for a given test.
Subclasses will inherit the scenarios (unless they override the attribute).
@@ -139,23 +168,42 @@ After loading
Test scenarios can also be generated arbitrarily later, as long as the test has
not yet run. Simply replace (or alter, but be aware that many tests may share a
single scenarios attribute) the scenarios attribute. For instance in this
-example some third party tests are extended to run with a custom scenario.
-
- >>> for test in iterate_tests(stock_library_tests):
- >>> if isinstance(test, TestVFS):
- >>> test.scenarios = test.scenarios + [my_vfs_scenario]
- >>> ...
+example some third party tests are extended to run with a custom scenario. ::
+
+ >>> import testtools
+ >>> class TestTransport:
+ ... """Hypothetical test case for bzrlib transport tests"""
+ ... pass
+ ...
+ >>> stock_library_tests = unittest.TestLoader().loadTestsFromNames(
+ ... ['doc.test_sample'])
+ ...
+ >>> for test in testtools.iterate_tests(stock_library_tests):
+ ... if isinstance(test, TestTransport):
+ ... test.scenarios = test.scenarios + [my_vfs_scenario]
+ ...
+ >>> suite = unittest.TestSuite()
>>> suite.addTests(generate_scenarios(stock_library_tests))
-Note that adding scenarios to a test that has already been parameterised via
-generate_scenarios generates a cross product::
- >>> class CrossProductDemo(TestCase):
- >>> scenarios = [scenario_0_0, scenario_0_1]
- >>> def test_foo(self):
- >>> return
+Generated tests don't have a ``scenarios`` list, because they don't normally
+require any more expansion. However, you can add a ``scenarios`` list back on
+to them, and then run them through ``generate_scenarios`` again to generate the
+cross product of tests. ::
+
+ >>> class CrossProductDemo(unittest.TestCase):
+ ... scenarios = [('scenario_0_0', {}),
+ ... ('scenario_0_1', {})]
+ ... def test_foo(self):
+ ... return
+ ...
+ >>> suite = unittest.TestSuite()
>>> suite.addTests(generate_scenarios(CrossProductDemo("test_foo")))
- >>> for test in iterate_tests(suite):
- >>> test.scenarios = test.scenarios + [scenario_1_0, scenario_1_1]
+ >>> for test in testtools.iterate_tests(suite):
+ ... test.scenarios = [
+ ... ('scenario_1_0', {}),
+ ... ('scenario_1_1', {})]
+ ...
+ >>> suite2 = unittest.TestSuite()
>>> suite2.addTests(generate_scenarios(suite))
>>> print suite2.countTestCases()
4
@@ -168,27 +216,28 @@ and available libraries. An easy way to do this is to provide a global scope
scenarios somewhere relevant to the tests that will use it, and then that can
be customised, or dynamically populate your scenarios from a registry etc.
For instance::
+
>>> hash_scenarios = []
>>> try:
- >>> import md5
- >>> except ImportError:
- >>> pass
- >>> else:
- >>> hash_scenarios.append(("md5", "hash": md5.new))
+ ... from hashlib import md5
+ ... except ImportError:
+ ... pass
+ ... else:
+ ... hash_scenarios.append(("md5", dict(hash=md5)))
>>> try:
- >>> import sha1
- >>> except ImportError:
- >>> pass
- >>> else:
- >>> hash_scenarios.append(("sha1", "hash": sha1.new))
- >>>
- >>> class TestHashContract(TestCase):
- >>>
- >>> scenarios = hash_scenarios
- >>>
- >>> class TestHashPerformance(TestCase):
- >>>
- >>> scenarios = hash_scenarios
+ ... from hashlib import sha1
+ ... except ImportError:
+ ... pass
+ ... else:
+ ... hash_scenarios.append(("sha1", dict(hash=sha1)))
+ ...
+ >>> class TestHashContract(unittest.TestCase):
+ ...
+ ... scenarios = hash_scenarios
+ ...
+ >>> class TestHashPerformance(unittest.TestCase):
+ ...
+ ... scenarios = hash_scenarios
Forcing Scenarios
@@ -201,3 +250,10 @@ introspecting the test object to determine the scenarios. The
``apply_scenarios`` function does not reset the test scenarios attribute,
allowing it to be used to layer scenarios without affecting existing scenario
selection.
+
+
+Advice on Writing Scenarios
+===========================
+
+If, because of a bug, a parameterised test is run without being parameterised,
+it should fail rather than running with default values, because defaults can
+hide bugs.
diff --git a/doc/__init__.py b/doc/__init__.py
new file mode 100644
index 0000000..16da333
--- /dev/null
+++ b/doc/__init__.py
@@ -0,0 +1 @@
+# contractual obligation
diff --git a/doc/test_sample.py b/doc/test_sample.py
new file mode 100644
index 0000000..5254ccb
--- /dev/null
+++ b/doc/test_sample.py
@@ -0,0 +1,6 @@
+import unittest
+
+class TestSample(unittest.TestCase):
+
+ def test_so_easy(self):
+ pass
diff --git a/test_all.py b/test_all.py
index 0838c00..f5330ea 100755
--- a/test_all.py
+++ b/test_all.py
@@ -19,6 +19,7 @@
# Foundation, Inc., 59 Temple Place, Suite 330, Boston, MA 02111-1307 USA
#
+import doctest
import unittest
import sys
import os
@@ -77,7 +78,10 @@ def earlyStopFactory(*args, **kwargs):
def test_suite():
import testscenarios
- return testscenarios.test_suite()
+ result = testscenarios.test_suite()
+ doctest.set_unittest_reportflags(doctest.REPORT_ONLY_FIRST_FAILURE)
+ result.addTest(doctest.DocFileSuite("README"))
+ return result
def main(argv):
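The ``DocFileSuite`` wiring added to ``test_all.py`` above can be exercised standalone; the sketch below builds a throwaway doctest file rather than assuming the project README is on disk:

```python
import doctest
import io
import os
import tempfile
import unittest

# A tiny doctest-bearing text file standing in for the README.
with tempfile.NamedTemporaryFile('w', suffix='.txt', delete=False) as f:
    f.write("A worked example::\n\n    >>> 2 + 2\n    4\n")
    path = f.name

# Mirror test_all.py: report only the first failing example per file,
# and run the file's doctests through the unittest machinery.
doctest.set_unittest_reportflags(doctest.REPORT_ONLY_FIRST_FAILURE)
suite = doctest.DocFileSuite(path, module_relative=False)
result = unittest.TextTestRunner(stream=io.StringIO()).run(suite)
os.unlink(path)
```

Each file handed to ``DocFileSuite`` becomes a single test case, which is why making the README doctestable lets ``make check`` verify every example in it at once.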