Sometimes you want to change the behavior of a function call in a Python test. Let's assume you have the following code:
```python
# a.py
from b import subfunc

def func():
    # do something
    subfunc(1, 2)
    # do something else

# b.py
def subfunc(a, b=1):
    # step1
    # step2
    # step3
```
You are testing the func function and would like to change the behavior of step2 in subfunc without affecting step1 or step3.
Mocking: Replacement Function
One way to solve this would be to mock the entire subfunc:
```python
def test_func(monkeypatch):
    def _mocked(a, b=1):
        # step1
        # step3
    monkeypatch.setattr('b.subfunc', _mocked)
    # do testing of func()
```
(Note, all example code assumes that you're using pytest with the monkeypatch fixture. But you can also use other testing frameworks and the mock library instead.)
But that would require you to copy the body of the function and adjust it as desired. This violates the DRY principle and could be a source of bugs (e.g. if step1 and step3 change later on).
A cleaner way to make subfunc more dynamic is dependency injection. We simply add a new argument with a default value, and act depending on that value:
```python
# b.py
def subfunc(a, b=1, do_step2=True):
    # step1
    if do_step2 is True:
        # step2
    # step3
```
Now we can simply manipulate the value of the do_step2 parameter to change the behavior. But how do we actually do that? Two methods come to mind:
Monkey patching __defaults__
Every Python function with default arguments has a __defaults__ attribute. It contains a tuple with all default argument values:
```python
>>> def subfunc(a, b=1, do_step2=True):
...     print(a, b, do_step2)
...
>>> subfunc.__defaults__
(1, True)
```
We can manipulate that attribute to change the defaults:
```python
def test_func(monkeypatch):
    monkeypatch.setattr('b.subfunc.__defaults__', (1, False))
    # do testing of func()
```
This works nicely, but there are two downsides. First of all, it's hacky due to the use of dunder attributes. But even worse, we have to re-specify the default value for the first keyword argument (b=1) too, even though we only want to change do_step2! That violates the DRY principle and could be a source of bugs. Sounds familiar, right?
Of course, we could try to retrieve the initial defaults, manipulate them and then monkey patch the __defaults__ attribute again. But that's even more hacky...
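To make the hackiness concrete, that retrieve-and-rebuild approach could look like the following self-contained sketch. The stand-in subfunc just returns its arguments instead of running steps, and we still have to know that do_step2 is the last defaulted parameter:

```python
def subfunc(a, b=1, do_step2=True):
    return (a, b, do_step2)

# Copy the existing defaults and replace only the last entry (do_step2)
new_defaults = subfunc.__defaults__[:-1] + (False,)
subfunc.__defaults__ = new_defaults

assert subfunc(0) == (0, 1, False)  # b keeps its original default of 1
```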
Using a partial function
A much nicer way is to use partial function application, a technique from functional programming. You can use it to pre-fill some arguments and/or keyword arguments of a function, yielding a new function that takes the remaining arguments.
As a short example, let's create a function that adds 2 to an input value:
```python
from functools import partial

# Regular add function
def add(a, b):
    return a + b

assert add(40, 2) == 42

# Partial function that overrides b
add_two = partial(add, b=2)
assert add_two(40) == 42
```
Now that you know how partial functions work, let's use them to override the default argument of our subfunc:
```python
from functools import partial

def test_func(monkeypatch):
    from b import subfunc
    monkeypatch.setattr('b.subfunc', partial(subfunc, do_step2=False))
    # do testing of func()
```
Now we have a clean way to modify function behavior from tests through dependency injection, without having to resort to Python internals hackery :)