
Is it possible/advisable to combine unit testing and integration testing?

Software Engineering – Asked on November 19, 2021

I’ve built a Python script consisting of about 20 functions that extract data from MySQL via .sql files (I’m currently trying to learn the SQLAlchemy ORM), transform it and then put it back into MySQL. The process changes the same data multiple times, so later steps rely on earlier ones.

I’ve built a unittest test case per function that boots up a test database, populates it with data from a fixture (in this case an Excel file – I will probably move this to a .sql dump file in future) and then tests the output against data also held in a fixture. I’m not sure whether these count as unit tests (they each test only one function) or integration tests (they simulate interaction with other system components). Apparently it doesn’t matter much what you call them, though.
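For illustration, one of these per-function tests currently looks roughly like this (the function, table and data below are simplified stand-ins, and I’ve used an in-memory SQLite connection just to keep the example self-contained; the real tests run against the test MySQL database):

    import sqlite3
    import unittest

    def mark_inactive_customers(conn):
        # Stand-in for one of the ~20 real transformation functions.
        conn.execute("UPDATE customers SET active = 0 WHERE orders = 0")

    class MarkInactiveCustomersTest(unittest.TestCase):
        def setUp(self):
            # Boot up the test database (in-memory here, MySQL in reality).
            self.conn = sqlite3.connect(":memory:")
            self.conn.execute(
                "CREATE TABLE customers (id INTEGER, orders INTEGER, active INTEGER)"
            )
            # Populate it from a fixture (an Excel file in my case).
            self.conn.executemany(
                "INSERT INTO customers VALUES (?, ?, ?)",
                [(1, 5, 1), (2, 0, 1)],
            )

        def tearDown(self):
            self.conn.close()

        def test_customers_without_orders_are_marked_inactive(self):
            mark_inactive_customers(self.conn)
            rows = self.conn.execute(
                "SELECT id, active FROM customers ORDER BY id"
            ).fetchall()
            # The expected output is also held in a fixture in the real tests.
            self.assertEqual(rows, [(1, 1), (2, 0)])

    if __name__ == "__main__":
        unittest.main()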

Currently each test is self-contained and its data doesn’t simulate a full process flow – it uses simplistic inputs that don’t relate to the previous steps. I now want to test the end-to-end flow of the process, and I currently foresee this as a single test case containing every step (function) of the process: I insert a dataset at the beginning and test the output at the very end. It’s worth mentioning that I’ve taken to thinking of the current tests as unit tests and the end-to-end test as an integration test, hence the title of this post.
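Roughly what I have in mind for that end-to-end test, with made-up stand-in steps (the real ones read from and write back to MySQL) and a single assertion on the final output:

    import unittest

    # Hypothetical stand-ins for three of the script's ~20 steps.
    def extract_step(rows):
        return [row for row in rows if row["orders"] is not None]

    def transform_step(rows):
        return [{**row, "orders": row["orders"] * 2} for row in rows]

    def load_step(rows):
        return {row["id"]: row["orders"] for row in rows}

    class EndToEndFlowTest(unittest.TestCase):
        def test_full_pipeline(self):
            # One realistic dataset inserted at the beginning...
            fixture = [{"id": 1, "orders": 3}, {"id": 2, "orders": None}]
            result = load_step(transform_step(extract_step(fixture)))
            # ...and the output tested only at the very end.
            self.assertEqual(result, {1: 6})

    if __name__ == "__main__":
        unittest.main()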

My question is this… should I not simply build my current fixture data so that it follows a logical pattern from start to finish? That way I get the benefit of discretely testing the output of every step whilst also getting the end-to-end view. It seems like a good solution, but something is nagging me that this could be a pain to maintain over the long term.

Any advice for a Python and testing newbie?

One Answer

Is it possible/advisable to combine unit testing and integration testing?

Doing both unit testing and integration testing? Overwhelmingly yes.

Mashing them together in a single test suite? Not advisable.

Based on your comment, it seems you already understand the purpose of both, so I won't repeat that here. But I do want to address your suggestion that they can be combined into a single test sequence.

I guess I'm hypothesising that if the data is exactly aligned then you get an integration test from a series of unit tests

To achieve this, your unit tests need to store state that another unit test can then rely on. This means the test is no longer a unit test, as it relies on an additional dependency (the state store).

What you're describing here is a series of integration tests, not a series of unit and integration tests.
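To make that concrete, here is a sketch of the anti-pattern (hypothetical data, plain unittest): as soon as one test method leaves state behind for the next one to consume, you no longer have independent unit tests, you have a chain of steps:

    import unittest

    class ChainedStepTests(unittest.TestCase):
        # Shared mutable state: the "state store" the tests now depend on.
        state = {}

        def test_1_extract(self):
            # unittest runs methods in alphabetical order, so this runs first
            # and leaves its result behind for the next method.
            type(self).state["rows"] = [{"id": 1, "value": "raw"}]
            self.assertEqual(len(self.state["rows"]), 1)

        def test_2_transform(self):
            # Silently relies on test_1 having run and passed. If it didn't,
            # this fails for the wrong reason: it is not a unit test any more,
            # it is one step of an integration test.
            rows = self.state["rows"]
            cleaned = [{**row, "value": row["value"].upper()} for row in rows]
            self.assertEqual(cleaned[0]["value"], "RAW")

    if __name__ == "__main__":
        unittest.main()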

Also, it's completely normal for a single integration test to make assertions at every step, e.g.:

  • Arrange: set up fixture
  • Act: create new Foo
  • Assert: was the Foo created?
  • Act: update Foo with new data
  • Assert: was the Foo updated?
  • Act: delete the Foo
  • Assert: was the Foo deleted?

Those aren't three separate tests; they're a single integration test composed of multiple steps. If you had only asserted everything at the end, you would've been unable to distinguish between cases where the Foo was never created (and the rest of the logic happened not to throw exceptions) and cases where the Foo was correctly created and then deleted.
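In unittest terms, such a multi-step integration test might look roughly like this (a plain dict stands in for whatever actually stores Foos, purely to keep the sketch self-contained):

    import unittest

    class FooLifecycleIntegrationTest(unittest.TestCase):
        def test_create_update_delete_foo(self):
            foos = {}                                    # Arrange: set up fixture

            foos[1] = {"name": "first"}                  # Act: create new Foo
            self.assertIn(1, foos)                       # Assert: was the Foo created?

            foos[1]["name"] = "second"                   # Act: update Foo with new data
            self.assertEqual(foos[1]["name"], "second")  # Assert: was the Foo updated?

            del foos[1]                                  # Act: delete the Foo
            self.assertNotIn(1, foos)                    # Assert: was the Foo deleted?

    if __name__ == "__main__":
        unittest.main()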

What you describe as unit tests that follow on from each other sounds like what you really need: a single combined integration test composed of all of those steps.

should I not simply build my current fixture data so that it follows a logical pattern from start to finish? That way I get the benefit of discretely testing the output of every step whilst also getting the end-to-end view.

Using the example integration test I mentioned, consider a bug whereby a created Foo can be deleted, but an updated Foo cannot. For some reason (chosen arbitrarily for the purpose of this example), updating a Foo renders it undeletable.

When you run your suggested test suite:

  • "Unit" test createFoo => PASS
  • "Unit" test updateFoo => PASS
  • "Unit" test deleteFoo => FAIL

Your sequence of "unit" tests (quotes because they're really mini integration tests) would flag the wrong step (i.e. delete) as the problematic one. The create would succeed, the update would succeed, but the delete would not.

Based on that test report alone, you would conclude that something is wrong in the delete logic. If you were a betting man, you'd bet money that the bug was being caused there.

However, if you had created actual unit tests and some multi-step integration tests, you would've narrowed down the issue better:

  • Unit test createFoo => PASS
  • Unit test updateFoo => PASS
  • Unit test deleteFoo => PASS
  • Integration test createAndDeleteFoo => PASS
  • Integration test createAndUpdateAndDeleteFoo => FAIL (failure reported on the delete step)

Now you wouldn't conclude that the bug is in the delete logic, since the delete unit test passes. As a betting man, you wouldn't put money on the delete logic being broken; the test report strongly suggests that it's the updating of a Foo which causes the issue in deleting that same Foo.
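Here's a runnable sketch of that situation, with the bug deliberately planted in a toy FooStore (a stand-in invented for this example). All three unit tests pass, the create-and-delete integration test passes, and only the create-update-delete integration test fails, on its delete assertion:

    import unittest

    class FooStore:
        # Toy stand-in for the system under test, with the bug from the
        # example planted on purpose: updating a Foo makes it undeletable.
        def __init__(self):
            self._foos = {}
            self._locked = set()

        def create(self, foo_id, data):
            self._foos[foo_id] = data

        def update(self, foo_id, data):
            self._foos[foo_id] = data
            self._locked.add(foo_id)  # the hidden side effect

        def delete(self, foo_id):
            if foo_id in self._locked:
                return  # silently refuses to delete an updated Foo
            del self._foos[foo_id]

        def exists(self, foo_id):
            return foo_id in self._foos

    class FooUnitTests(unittest.TestCase):
        def setUp(self):
            self.store = FooStore()

        def test_create_foo(self):
            self.store.create(1, "a")
            self.assertTrue(self.store.exists(1))    # PASS

        def test_update_foo(self):
            self.store.create(1, "a")
            self.store.update(1, "b")
            self.assertTrue(self.store.exists(1))    # PASS

        def test_delete_foo(self):
            self.store.create(1, "a")
            self.store.delete(1)
            self.assertFalse(self.store.exists(1))   # PASS: no update happened first

    class FooIntegrationTests(unittest.TestCase):
        def test_create_and_delete_foo(self):
            store = FooStore()
            store.create(1, "a")
            self.assertTrue(store.exists(1))
            store.delete(1)
            self.assertFalse(store.exists(1))        # PASS

        def test_create_update_and_delete_foo(self):
            store = FooStore()
            store.create(1, "a")
            self.assertTrue(store.exists(1))
            store.update(1, "b")
            self.assertTrue(store.exists(1))
            store.delete(1)
            self.assertFalse(store.exists(1))        # FAIL: the update locked the Foo

    if __name__ == "__main__":
        unittest.main()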

This is of course a cherry-picked example, but it highlights the purpose of having separate unit tests and integration tests: if the unit tests pass and an integration test fails, you have narrowed the problem down to an interaction between two components, rather than a component itself.

These sorts of issues are very hard to spot by eye, as each block of code you look at will individually appear correct (after all, each unit test passes). The bug is most likely a side effect whose cause (in the update logic) and effect (in the delete logic) live in different places, which makes it especially hard to find in a sufficiently large codebase.

That's still not an easy bug to fix, but at least now you know that you're looking for a side effect and can put the update/delete logic side by side when hunting down the bug.


As a general rule, when people suggest taking a shortcut in writing tests (as you have), the intention is good (achieving the same thing with less effort), but in almost all cases it leads to less informative test failure reports, e.g. not being able to pinpoint the location of a bug.

Your suggested testing strategy still catches the vast majority of bugs that you're going to encounter during development, but some types of failures will become harder to troubleshoot.

I'm not going to tell you you're not allowed to employ your testing strategy. I'm just pointing out the difference between your strategy and the "normal" (very much mind the quotes) testing strategy.

Answered by Flater on November 19, 2021
