 

How do I test Fabric tasks?

Has anyone managed to test their Fabric tasks? Is there a library out there that can help with this?

I'm quite familiar with patching/mocking, but it's pretty difficult with Fabric. I've also had a look through Fabric's own test suite, which unfortunately was of no use, and there don't seem to be any topics on it in the Fabric docs.

These are the tasks I'm trying to test... I'd like to avoid bringing up a VM if possible.

Any help is appreciated. Thanks in advance.

asked Aug 04 '14 by farridav


1 Answer

Disclaimer: below, Functional Testing is used synonymously with System Testing; the lack of a formalized spec for most Fabric projects renders the distinction moot. I may also use the terms Functional Testing and Integration Testing loosely, since the border between them blurs with any configuration management software.

Local Functional Testing for Fabric is Hard (or Impossible)

I'm pretty sure that it is not possible to do functional testing without either bringing up a VM (which you rule out as one of your constraints) or doing extremely extensive mocking, which will make your test suite inherently fragile.

Consider the following simple function:

from fabric.api import run, settings, sudo

def agnostic_install_lsb():
    def install_helper(installer_command):
        # warn_only stops Fabric from aborting the whole task when
        # `which` exits non-zero (i.e. the installer isn't present)
        with settings(warn_only=True):
            ret = run('which %s' % installer_command)
        if ret.return_code == 0:
            sudo('%s install -y lsb-release' % installer_command)
            return True
        return False

    install_commands = ['apt-get', 'yum', 'zypper']
    for cmd in install_commands:
        if install_helper(cmd):
            return True
    return False

If you have a task that invokes agnostic_install_lsb, how can you do functional testing on a local box?

You can do unit testing by mocking the calls to run, local and sudo, but not much in terms of higher level integration tests. If you're willing to be satisfied with simple unit tests, there's not really much call for a testing framework beyond mock and nose, since all of your unit tests operate in tightly controlled conditions.

How You Would Do The Mocking

You could mock the sudo, local, and run functions to log their commands to a set of StringIOs or files, but, unless there's something clever that I'm missing, you would also have to mock their return values very carefully. To continue stating the things that you probably already know, your mocks would either have to be aware of the Fabric context managers (hard), or you would have to mock all of the context managers that you use (still hard, but not as bad).

If you do want to go down this path, I think it is safer and easier to build a test class whose setup instantiates mocks for all of the context managers, run, sudo, and any other parts of Fabric that you are using, rather than trying to do a more minimal amount of mocking on a per-test basis. At that point, you will have built a somewhat generic testing framework for Fabric, and you should probably share it on PyPI as... "mabric"?
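A sketch of what that base class might look like, using the stdlib unittest and mock. FabricTestCase, set_hostname, and the stand-in run/sudo are all hypothetical names for illustration; the patch target would be your fabfile module rather than this module.

```python
import sys
import unittest
from unittest import mock

# Stand-ins so the sketch runs without Fabric; real code would patch
# the names as imported into your fabfile.
def run(cmd):
    raise NotImplementedError

def sudo(cmd):
    raise NotImplementedError

def set_hostname(name):
    # Hypothetical task under test.
    sudo('echo "%s" > /etc/hostname' % name)

class FabricTestCase(unittest.TestCase):
    """Hypothetical base class: mocks the Fabric surface once, in setUp."""
    fabric_names = ('run', 'sudo')          # extend with local, cd, ...
    target_module = sys.modules[__name__]   # in practice: your fabfile

    def setUp(self):
        for name in self.fabric_names:
            patcher = mock.patch.object(self.target_module, name)
            setattr(self, 'mock_' + name, patcher.start())
            self.addCleanup(patcher.stop)

class TestSetHostname(FabricTestCase):
    def test_uses_sudo(self):
        set_hostname('cthulhu')
        self.mock_sudo.assert_called_once_with(
            'echo "cthulhu" > /etc/hostname')
```

Individual tests then only configure return values and make assertions; the patching lives in one place.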

I contend that this wouldn't be much use in most cases, since your tests end up caring about how a run is done, rather than just what has been done by the end of it. Switching from run('echo "cthulhu" | sudo tee /etc/hostname') to sudo('echo "cthulhu" > /etc/hostname') shouldn't break the tests, and it's hard to see how to achieve that with simple mocks. This is because we've started to blur the line between functional and unit testing, and this kind of basic mocking is an attempt to apply unit testing methodologies to functional tests.


Testing Configuration Management Software on VMs is an Established Practice

I would urge you to reconsider how badly you want to avoid spinning up VMs for your functional tests. This is the commonly accepted practice for Chef testing, which faces many of the same challenges.

If you are concerned about the automation for this, Vagrant does a very good job of simplifying the creation of VMs from a template. I've even heard that there's good Vagrant/Docker integration, if you're a Docker fan. The only downside is that if you are a VMware fan, Vagrant needs VMware Workstation ($$$). Alternatively, just use Vagrant with VirtualBox for free.

If you're working in a cloud environment like AWS, you even get the option of spinning up new VMs with the same base images as your production servers for the sole purpose of doing your tests. Of course, a notable downside is that this costs money. However, it's not a significant fraction of your costs if you are already running your full software stack in a public cloud because the testing servers are only up for a few hours total out of a month.

In short, there are a bunch of ways of tackling the problem of doing full, functional testing on VMs, and this is a tried and true technique for other configuration management software.

If Not Using Vagrant (or similar), Keep a Suite of Locally Executable Unit Tests

One of the obvious problems with making your tests depend upon running a VM is that it makes testing difficult for developers. This is especially true for iterated testing against a local code version, as some projects (e.g. web UI development) may require.

If you are using Vagrant + Virtualbox, Docker (or raw LXC), or a similar solution for your testing virtualization, then local testing is not tremendously expensive. These solutions make spinning up fresh VMs doable on cheap laptop hardware in under ten minutes. For particularly fast iterations, you may be able to test multiple times against the same VM (and then replace it with a fresh one for a final test run).

However, if you are doing your virtualization in a public cloud or a similar environment where too much testing on your VMs is costly, you should separate your tests into an extensive unit test suite which can run locally, and integration or system tests which require the VM. This split allows development to proceed against the unit tests alone; then, before merging, shipping, or signing off on changes, run the functional tests on a VM.
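One way to enforce that split (an illustration, not something prescribed above) is to gate the VM-backed functional tests behind an environment variable that CI sets only when a test VM is available; the unit suite then runs everywhere. The variable name RUN_VM_TESTS and both test classes are assumptions for the sketch.

```python
import os
import unittest

# Functional tests run only when the environment says a test VM is up.
RUN_VM_TESTS = bool(os.environ.get('RUN_VM_TESTS'))

class UnitTests(unittest.TestCase):
    def test_pure_logic_runs_everywhere(self):
        # Pure-Python logic extracted from your tasks needs no VM.
        self.assertEqual(sorted(['yum', 'apt-get', 'zypper'])[0], 'apt-get')

@unittest.skipUnless(RUN_VM_TESTS, 'set RUN_VM_TESTS=1 with a test VM up')
class FunctionalTests(unittest.TestCase):
    def test_real_tasks_against_vm(self):
        pass  # would exercise the real Fabric tasks against the VM
```

Developers run the fast suite constantly; the skipped functional tests still show up in the output as a reminder of what has not been exercised.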

Ultimately, nothing should make its way into your codebase that hasn't passed the functional tests, but it would behoove you to try to achieve as near to full code coverage for such a suite of unit tests as you can. The more that you can do to enhance the confidence that your unit tests give you, the better, since it reduces the number of spurious (and potentially costly) runs of your system tests.

answered Oct 19 '22 by sirosen