By: Brian Meraz, Backend Team Lead
Testing code that interacts with AWS has its share of challenges. Combining the pytest framework with Moto, a Python library, gives us the ability to test units of code that interact with AWS while mocking the AWS responses.
What is a unit test?
A unit test isolates and tests a single piece of code from within a code base.
Some of the benefits of a unit test are (there are many more):
● Change the technical implementation while making sure you don’t change the behavior (refactoring).
● Serve as documentation for the code.
● Find bugs early and simplify the debugging process.
● Maintain the quality of the code.
Pytest is a framework that makes it easy to conduct small, scalable tests. These tests can vary from simplistic and straightforward to complex and comprehensive.
Getting started with pytest is extremely simple. By default, pytest only identifies files whose names start with “test_” or end with “_test” as test files. Pytest also requires test function names to start with “test”; this is not optional.
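The original post showed the test as an image; a minimal reconstruction of the test_inc() example it describes, following pytest’s own getting-started example, might look like this:

```python
# content of test_sample.py
def inc(x):
    """Return its argument incremented by one."""
    return x + 1


def test_inc():
    # pytest collects this function because its name starts with "test"
    assert inc(3) == 4
```

Running `pytest test_sample.py` will collect and execute `test_inc()`.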
You’ll notice the use of the keyword “assert” within the test_inc() function above. An assertion is a check that evaluates to either True or False, verifying an expectation. In pytest, if an assertion fails in a test method, execution of that method stops there: the remaining code in the method is not executed, and pytest continues with the next test method.
If this assertion fails, pytest’s failure report will show you the return value of the function call.
The purpose of test fixtures is to provide a fixed baseline upon which tests can reliably and repeatedly execute.
Pytest fixtures offer improvements over the classic xUnit style of setup/teardown functions. Here are a few examples of what fixtures have to offer:
● fixtures have explicit names and are activated by declaring their use from test functions, modules, classes or whole projects.
● fixtures are implemented in a modular manner, as each fixture name triggers a fixture function which can itself use other fixtures.
● fixture management scales from simple unit testing to complex functional testing, allowing you to parametrize fixtures and tests according to configuration and component options, or to reuse fixtures across class, module, or whole test-session scopes.
To use a fixture within your test function, pass the fixture name as a parameter to make it available.
You’ll notice the use of a scope within the pytest.fixture() decorator. This causes the fixture’s instance to be created only once per test module (more applicable in a conftest file, covered below), which avoids repeating potentially expensive work, such as a fixture that makes a network request, for every test that uses it.
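The fixture in the post was shown as an image; a sketch of a module-scoped fixture (the fixture name and the resource it builds here are hypothetical stand-ins) could look like:

```python
import pytest


def make_connection():
    # Hypothetical stand-in for an expensive resource, e.g. a database handle.
    return {"connected": True}


@pytest.fixture(scope="module")
def connection():
    # scope="module": this body runs once per test module, not once per test.
    conn = make_connection()
    yield conn                  # everything after the yield is teardown
    conn["connected"] = False


def test_uses_connection(connection):
    # The fixture is requested simply by naming it as a parameter.
    assert connection["connected"]
```

Every test in the module that names `connection` as a parameter receives the same instance.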
A convenient way to give multiple test modules access to the same fixtures is to create a file called conftest.py. This file must reside within your test folder (the folder that houses all your test files).
Let’s say, for example, our test files (test_groups.py and test_individuals.py) both need to leverage the same fixtures; by keeping all of our fixtures in one file, we can share them with as many test modules as we’d like.
One important thing to note: as we discussed before, when we define the scope of a fixture as “module”, the instance created within the fixture function is instantiated only once per module; all other references to that fixture reuse the same instance. This is convenient if the fixture makes a request to a database service, for example; we wouldn’t want to call the database service multiple times to create a whole new instance for every unit test. (more on this later)
Moto is a Python library that allows the user to mock AWS services. This is convenient for obvious reasons, but when you combine the ability to mock AWS service interactions with pytest, specifically a conftest file housing fixtures, you get a powerful unit-testing tool.
How does moto work?
Botocore, the foundation of boto3 (the AWS SDK for Python — more on this later), maintains a global list of handlers that moto hooks into. All handlers in this global list are registered every time a session is instantiated, and once an internal event is emitted, the handlers registered for that kind of event are called.
An important note here: the before-send handler is executed before the actual HTTP request is made to AWS. The before-send handler may produce a response; if it does, that response is used for further processing and the HTTP request is never sent.
Moto’s before-send handler is appended to botocore’s handlers (BUILTIN_HANDLERS), and it is this handler that returns mocked responses from any mocked backend that has been registered. This appending happens implicitly when you import moto in your test code, but importing alone does not mock anything by default. Mocking is enabled by using moto’s decorators (or other moto initializations), which are available for most AWS resources. (Check moto’s documentation for the supported services.)
The moto decorator registers a mock backend for the scope of the test function; the appended before-send handler uses that backend to return mock responses. Keep in mind that the decorator enables mocking only for the scope of your test function. After your test finishes, moto resets the mock backends and test credentials.
Some important notes before using moto:
● Establish test credentials before initializing any mock AWS service; this is an important step in keeping your tests away from actual AWS services (more on this later).
● Make sure you apply the moto decorator to your test function. Without this decorator, the mocked backend is never initialized, which could lead to interaction with a live AWS service (especially if you did not set mock credentials).
You’ll notice the import of boto3, the AWS SDK for Python, which allows applications to integrate with AWS services.
You can see in the example above the use of the moto decorator, “@mock_ec2”. As explained earlier, the decorator registers the handler for the AWS service via botocore/boto3; in this case it’s for the AWS service EC2 (Elastic Compute Cloud). There are a couple of ways to structure this type of test. In the example above, we are not only using a mocked client for EC2, we are also exercising the real function, add_servers().
If we have taken precautions (test credentials, etc.), then we can exercise a function without fear of touching a “live” version of an AWS service. We don’t have to exercise a real application function in our test, either; it is perfectly fine to create a test instance, interact with it, and assert against it directly.
Combine Pytest Fixtures with Moto
Now let’s combine everything we’ve covered so far to create unit tests that are modular, scalable, and, most important for our use case, able to interact with AWS services safely.
First, the focus is on Pytest fixtures, specifically the conftest.py file.
In this conftest file you can see the use of:
- Pytest fixtures with module level scope (singular instance per test module)
- Mock credentials to safeguard from interacting with live AWS services
- Passing the mock credentials fixture into every single mock AWS service at the function parameter level
- Functions with names specific for AWS services we want to mock (S3 and DynamoDB)
- You’ll notice the use of “with” statements combined with “yield”. This is a convenient way to ensure setup (open) and teardown (close) of the connection to the mock service while returning the client instance to the caller.
- Note the need to import the specific mock AWS service (mock_s3 & mock_dynamodb2) from the moto library.
Next is an example of how our test files can leverage the conftest file and reduce the need to rewrite code.
In the example above you’ll note there is no need to import the conftest.py file in our S3 test file. As long as the conftest file is within the same directory as the test files, pytest will automatically discover it. All that is needed is to reference the fixture function name and pass it as a parameter in our test functions.
In TestS3Class, we use the Python standard-library module contextlib, which provides the @contextmanager decorator. For our use case, we create an S3 bucket once and then use that same bucket throughout the test class. This both tests creating a bucket in the AWS S3 service and lets us reuse the same mock bucket to conduct other S3 service tests (PUT).
Here is an example of conducting a test class for the AWS service DynamoDB:
You’ll notice the test class above is set up in the same manner as the S3 test class. First we use a context manager to create a DynamoDB table and make that table available to the rest of the methods in the class. Then we can write methods that assert against other interactions with the same table.
There are some limitations as to which AWS services you can mock, and to what extent the available ones are supported (review the moto link in the references for more details), but overall, moto is a great tool for mocking responses when interacting with AWS services. Combining a unit-test framework like pytest, the power of its fixtures, and the fantastic Python library moto gives developers the ability to write modular, scalable, and clean unit tests.