When developing software, there are two critical goals: first, that the code does what it's supposed to do; and second, that it does so with as few bugs as possible. One of the best ways to help ensure that these goals are met is to maintain a comprehensive suite of automated functional tests.
By breaking the software down into its component parts, specifying a set of possible inputs and expected outputs for each part, and then verifying that each one behaves as expected, developers can gauge how well their code meets its specifications and whether it runs correctly.
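As a sketch of this idea, suppose (hypothetically) an application includes a function that totals an order and applies a discount. A functional test simply pairs known inputs with the outputs the specification calls for:

```python
# Hypothetical example: one small piece of an application under test.
def order_total(prices, discount=0.0):
    """Sum item prices and apply a fractional discount (e.g. 0.25 = 25% off)."""
    if not 0.0 <= discount < 1.0:
        raise ValueError("discount must be in [0.0, 1.0)")
    return round(sum(prices) * (1.0 - discount), 2)

# Functional tests: each case specifies inputs and the expected output.
# (These can be collected by a runner like pytest, or run as plain assertions.)
def test_no_discount():
    assert order_total([10.00, 5.50]) == 15.50

def test_with_discount():
    assert order_total([100.00], discount=0.25) == 75.00

def test_invalid_discount_is_rejected():
    try:
        order_total([10.00], discount=1.5)
    except ValueError:
        pass  # expected: out-of-range discounts are refused
    else:
        raise AssertionError("expected a ValueError for discount >= 1.0")

# Running the cases directly:
test_no_discount()
test_with_discount()
test_invalid_discount_is_rejected()
```

When every case passes, that piece of the application demonstrably meets its specified behavior for those inputs.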
This means that functional tests provide excellent milestones. Once an application's specifications and design have been fully outlined with comprehensive use cases, and each test suite corresponding to those use cases passes, it can be said with reasonable confidence that the application meets its base specifications. (When used in this way, these tests are also known as "acceptance tests.")
Of course, this does not necessarily mean that the job is done. It is possible (and even likely) that bugs may arise – even if all tests succeed, there may still be unexpected or incorrect behavior hiding "between the cracks." In these cases, the process is iterated: a new or modified use case is written to more clearly specify the expected behavior (as opposed to the observed behavior), the test suite is updated, and future runs will now check for the bug as a matter of course. Once these tests pass, we can presume that the bugs they've exposed are fixed.
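That iteration can be sketched with a hypothetical bug report: say negative quantities were silently accepted, producing negative line totals. A test is written that captures the expected (rather than the observed) behavior, the code is fixed so the test passes, and the test stays in the suite permanently:

```python
# Hypothetical fix: reject bad input instead of returning a negative total.
def line_total(quantity, unit_price):
    """Return quantity * unit_price; quantities must be non-negative."""
    if quantity < 0:
        raise ValueError("quantity must be non-negative")
    return round(quantity * unit_price, 2)

def test_negative_quantity_is_rejected():
    # This test failed before the fix above existed; it now passes, and it
    # will catch the bug immediately if it ever resurfaces.
    try:
        line_total(-3, 4.99)
    except ValueError:
        pass  # expected
    else:
        raise AssertionError("negative quantities should be rejected")

def test_normal_quantities_still_work():
    # The fix must not break the existing, correct behavior.
    assert line_total(3, 4.99) == 14.97

test_negative_quantity_is_rejected()
test_normal_quantities_still_work()
```

The second case matters as much as the first: a bug fix is only complete when the new test passes and all of the old ones still do.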
This more formalized method of testing and maintenance helps avoid "regressions," in which the fix for a bug is unknowingly broken or undone by later work, causing the bug to unexpectedly resurface. Because the full suite can be run on every commit, each change is verified against every previously-fixed bug before it lands.
We also use Jenkins automation to run tests on a regular basis. This can happen on a schedule (e.g. every night) or in response to certain events – such as when a developer commits new or updated code. This strengthens our testing process by providing near-immediate feedback when things aren't working as expected. It also lets us "look back" and review our progress: we can see when and where things worked (or didn't), giving valuable insight into the development process.
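A minimal sketch of what such a setup can look like, as a declarative Jenkinsfile – the script name and report path are illustrative assumptions, not taken from any particular project:

```groovy
// Minimal declarative pipeline sketch; names below are hypothetical.
pipeline {
    agent any
    triggers {
        cron('H 2 * * *')       // run the full suite nightly
        pollSCM('H/5 * * * *')  // also run when new commits are detected
    }
    stages {
        stage('Test') {
            steps {
                sh './run_tests.sh'  // hypothetical script invoking the suite
            }
        }
    }
    post {
        always {
            // Archive results so past runs can be reviewed later,
            // assuming the suite emits JUnit-style XML reports.
            junit 'reports/**/*.xml'
        }
    }
}
```

The archived results are what make the "look back" possible: Jenkins keeps a per-run history of which tests passed and failed, and when.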
Test Early, Test Often
The use of automated functional tests does require a certain commitment – the time and effort needed to build and maintain these tests is a significant investment. It is, however, an investment that pays off. Relying on manual testing alone is time-consuming and inconsistent: bugs can – and will – fall through the cracks, for your end users to stumble upon and struggle with. Smart Software is ready and willing to help you avoid those pitfalls by developing your application with a robust suite of automated functional tests.