Advanced usage

This section contains some advanced examples for using pytest-dependency.

Dynamic compilation of marked parameters

Sometimes, the parameter values for parametrized tests cannot easily be typed as a simple list. They may need to be compiled at run time, depending on a set of test data. This also works together with marking dependencies in the individual test instances.

Consider the following example test module:

import pytest

# Test data
# Consider a bunch of Nodes, some of them are parents and some are children.

class Node(object):
    NodeMap = {}
    def __init__(self, name, parent=None):
        self.name = name
        self.children = []
        self.NodeMap[self.name] = self
        if parent:
            self.parent = self.NodeMap[parent]
            self.parent.children.append(self)
        else:
            self.parent = None
    def __str__(self):
        return self.name

parents = [ Node("a"),  Node("b"),  Node("c"),  Node("d"), ]
childs =  [ Node("e", "a"), Node("f", "a"), Node("g", "a"), 
            Node("h", "b"), Node("i", "c"), Node("j", "c"), 
            Node("k", "d"), Node("l", "d"), Node("m", "d"), ]

# The test for the parent shall depend on the test of all its children.
# Create enriched parameter lists, decorated with the dependency marker.

childparam = [ 
    pytest.param(c, marks=pytest.mark.dependency(name="test_child[%s]" % c)) 
    for c in childs
]
parentparam = [
    pytest.param(p, marks=pytest.mark.dependency(
        name="test_parent[%s]" % p, 
        depends=["test_child[%s]" % c for c in p.children]
    )) for p in parents
]

@pytest.mark.parametrize("c", childparam)
def test_child(c):
    if c.name == "l":
        pytest.xfail("deliberate fail")

@pytest.mark.parametrize("p", parentparam)
def test_parent(p):
    pass

In principle, this example works the same way as the basic example for Parametrized tests. The only difference is that the lists of parameters are compiled dynamically beforehand. The test for child l deliberately fails, just to show the effect. As a consequence, the test for its parent d will be skipped.
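The compiled marker arguments can also be inspected outside of pytest. The following standalone sketch (marker_args is just an illustrative name, not part of the example) recomputes the name and depends arguments that the pytest.mark.dependency() markers receive for each parent:

```python
# Recompute the marker arguments from the test data above, without pytest.

class Node(object):
    NodeMap = {}
    def __init__(self, name, parent=None):
        self.name = name
        self.children = []
        self.NodeMap[self.name] = self
        if parent:
            self.parent = self.NodeMap[parent]
            self.parent.children.append(self)
        else:
            self.parent = None
    def __str__(self):
        return self.name

parents = [ Node("a"),  Node("b"),  Node("c"),  Node("d"), ]
childs =  [ Node("e", "a"), Node("f", "a"), Node("g", "a"),
            Node("h", "b"), Node("i", "c"), Node("j", "c"),
            Node("k", "d"), Node("l", "d"), Node("m", "d"), ]

# name and depends arguments of the dependency marker for each parent:
marker_args = {
    "test_parent[%s]" % p: ["test_child[%s]" % c for c in p.children]
    for p in parents
}
```

For parent d, this yields the dependencies test_child[k], test_child[l], and test_child[m]; since test_child[l] fails, test_parent[d] is skipped.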

Grouping tests using fixtures

pytest features the automatic grouping of tests by fixture instances. This is particularly useful if there is a set of test cases and a series of tests shall be run for each of these test cases in turn.

Consider the following example:

import pytest
from pytest_dependency import depends

@pytest.fixture(scope="module", params=range(1,10))
def testcase(request):
    param = request.param
    return param

@pytest.mark.dependency()
def test_a(testcase):
    if testcase % 7 == 0:
        pytest.xfail("deliberate fail")

@pytest.mark.dependency()
def test_b(request, testcase):
    depends(request, ["test_a[%d]" % testcase])

The test instances of test_b depend on test_a for the same parameter value. The test test_a[7] deliberately fails; as a consequence, test_b[7] will be skipped. Note that we need to call pytest_dependency.depends() to mark the dependencies, because there is no way to use the pytest.mark.dependency() marker on the parameter values here.

If many tests in the series depend on a single test, it might be an option to move the call to pytest_dependency.depends() into a fixture of its own. Consider:

import pytest
from pytest_dependency import depends

@pytest.fixture(scope="module", params=range(1,10))
def testcase(request):
    param = request.param
    return param

@pytest.fixture(scope="module")
def dep_testcase(request, testcase):
    depends(request, ["test_a[%d]" % testcase])
    return testcase

@pytest.mark.dependency()
def test_a(testcase):
    if testcase % 7 == 0:
        pytest.xfail("deliberate fail")

@pytest.mark.dependency()
def test_b(dep_testcase):
    pass

@pytest.mark.dependency()
def test_c(dep_testcase):
    pass

In this example, both test_b[7] and test_c[7] are skipped, because test_a[7] deliberately fails.

Depend on all instances of a parametrized test at once

If a test depends on all instances of a parametrized test at once, listing all of them explicitly in the pytest.mark.dependency() marker might not be the best solution. But you can compile these lists dynamically from the parameter values, as in the following example:

import pytest

def instances(name, params):
    def vstr(val):
        if isinstance(val, (list, tuple)):
            return "-".join([str(v) for v in val])
        else:
            return str(val)
    return ["%s[%s]" % (name, vstr(v)) for v in params]


params_a = range(17)

@pytest.mark.parametrize("x", params_a)
@pytest.mark.dependency()
def test_a(x):
    if x == 13:
        pytest.xfail("deliberate fail")

@pytest.mark.dependency(depends=instances("test_a", params_a))
def test_b():
    pass

params_c = list(zip(range(0,8,2), range(2,6)))

@pytest.mark.parametrize("x,y", params_c)
@pytest.mark.dependency()
def test_c(x, y):
    if x > y:
        pytest.xfail("deliberate fail")

@pytest.mark.dependency(depends=instances("test_c", params_c))
def test_d():
    pass

params_e = ['abc', 'def']

@pytest.mark.parametrize("s", params_e)
@pytest.mark.dependency()
def test_e(s):
    if 'e' in s:
        pytest.xfail("deliberate fail")

@pytest.mark.dependency(depends=instances("test_e", params_e))
def test_f():
    pass

Here, test_b, test_d, and test_f will be skipped because they depend on all instances of test_a, test_c, and test_e respectively, but test_a[13], test_c[6-5], and test_e[def] fail. The lists of test instance names are compiled by the helper function instances().

Unfortunately, you need to know how pytest encodes parameter values in test instance names in order to write this helper function. Note in particular how the tuples of parameter values are combined into one single string in the case of the multi-parameter test. Also note that this version of the instances() helper only works for simple cases. It requires the parameter values to be scalars that can easily be converted to strings, and it will fail if the same parameter value is passed to the same test more than once, because pytest will then add an index to the name to disambiguate the test instances.
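To see concretely what instances() produces, the helper can be run standalone on the parameter lists from the example; the names it returns match pytest's default test IDs for these simple values:

```python
# instances() as defined in the example above, run outside of pytest.

def instances(name, params):
    def vstr(val):
        if isinstance(val, (list, tuple)):
            return "-".join([str(v) for v in val])
        else:
            return str(val)
    return ["%s[%s]" % (name, vstr(v)) for v in params]

params_c = list(zip(range(0, 8, 2), range(2, 6)))
names_c = instances("test_c", params_c)
# names_c == ["test_c[0-2]", "test_c[2-3]", "test_c[4-4]", "test_c[6-5]"]

params_e = ['abc', 'def']
names_e = instances("test_e", params_e)
# names_e == ["test_e[abc]", "test_e[def]"]
```

The tuple (6, 5) becomes the single ID string "6-5", which is why the failing instance above is reported as test_c[6-5].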

Logical combinations of dependencies

The dependencies passed in the depends argument to the pytest.mark.dependency() marker are combined in an and-like manner: the current test is skipped unless all of its dependencies succeeded. Sometimes one may want to combine the dependencies in a different way. This is not supported by pytest-dependency out of the box, but it is not difficult to implement. Consider the following example:

import pytest
from pytest_dependency import depends

def depends_or(request, other, scope='module'):
    item = request.node
    for o in other:
        try:
            depends(request, [o], scope)
        except pytest.skip.Exception:
            continue
        else:
            return
    pytest.skip("%s depends on any of %s" % (item.name, ", ".join(other)))


@pytest.mark.dependency()
def test_ap():
    pass

@pytest.mark.dependency()
@pytest.mark.xfail(reason="deliberate fail")
def test_ax():
    assert False

@pytest.mark.dependency()
def test_bp():
    pass

@pytest.mark.dependency()
@pytest.mark.xfail(reason="deliberate fail")
def test_bx():
    assert False

@pytest.mark.dependency()
def test_c(request):
    depends_or(request, ["test_ax", "test_bx"])

@pytest.mark.dependency()
def test_d(request):
    depends_or(request, ["test_ax", "test_bp"])

@pytest.mark.dependency()
def test_e(request):
    depends_or(request, ["test_ap", "test_bx"])

@pytest.mark.dependency()
def test_f(request):
    depends_or(request, ["test_ap", "test_bp"])

The helper function depends_or() is similar to pytest_dependency.depends() and takes the same arguments. The only difference is that it combines the dependencies passed in the other argument in an or-like manner: the current test will be run if at least one of the other tests succeeded.

The tests test_c, test_d, test_e, and test_f in this example all depend on two other tests. Only test_c will be skipped, because all tests in its dependency list fail. The other ones are run, because they have at least one succeeding test in their dependency list.

Other logical combinations of dependencies are conceivable and may be implemented in a similar way, according to the use case at hand.
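For instance, an at-least-n combination could follow the same pattern as depends_or(). The sketch below shows only the counting logic, as a plain function, so that it stands on its own; in a full helper (say, a hypothetical depends_n()), each entry of outcomes would be True if and only if the corresponding depends(request, [o], scope) call did not raise pytest.skip.Exception, and the test would be skipped via pytest.skip() when the function returns False:

```python
def at_least_n_succeeded(outcomes, n):
    """Return True iff at least n dependencies succeeded.

    outcomes: one boolean per entry of the other list; in a real
    helper, each value would be computed by calling depends() and
    catching pytest.skip.Exception, as depends_or() does above.
    """
    return sum(1 for ok in outcomes if ok) >= n

# The combinations discussed so far are special cases:
# and-like (depends):    at_least_n_succeeded(outcomes, len(outcomes))
# or-like  (depends_or): at_least_n_succeeded(outcomes, 1)
```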

Note

The depends_or() helper function above is based on pytest internals: skipping of tests works by raising an exception and the exception class is exposed as pytest.skip.Exception. This is not documented in pytest. It has been tested to work for pytest versions 3.7.0 through 6.2.5, but it is not guaranteed to be stable for future pytest versions.
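The mechanism can be observed directly in a plain script: pytest.skip() raises an instance of that exception class, which is what depends_or() catches. A minimal sketch, run outside of any test session:

```python
import pytest

# pytest.skip() works by raising an exception; the class is exposed
# as pytest.skip.Exception.  Catching it here merely demonstrates the
# mechanism that depends_or() relies on.
try:
    pytest.skip("demonstration")
except pytest.skip.Exception as exc:
    caught = type(exc).__name__
```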