Automated API Testing — A Developer's Perspective

Sarada Sastri
Sep 27, 2020


Agile methodology is put into practice through shift-left testing. The shift-left paradigm says "Test early, test often", and Test Driven Development is the development approach that puts this thought process into action in Agile teams.

The onus of quality does not rest with the Quality Assurance team alone. The QA team is only a gatekeeper of quality, ensuring that only a quality product goes out of the door. The onus of quality belongs to developers. Quality has several shades: functional correctness and completeness, non-functional performance compliance, readability, maintainability, extensibility and so on, to mention a few.

In this article, I share my ideas on functional testing based on my experience, detailing the pitfalls, learnings and tips along the way. This should help developers decide the strategy that best fits their business problems. A few of the questions I have tried to answer are below.

Myths around tests

Manual testing versus Automated testing? When to use what?

What is the difference between unit tests and other kinds of tests?

How good are Mocks? When to use them?

What are the different flavors of integration testing?

What are component tests, service tests and system tests?

What to expect in Test Driven Development?


Myths around tests

Myth 1 — The name leads many to believe that JUnit tests are always unit tests.

Reality 1 — JUnit, like TestNG, is a testing framework. It can be used to build unit tests, integration tests and more. These frameworks help organizations drive the TDD (Test Driven Development) approach to implementation.

Myth 2 — Test coverage. If the test coverage metric shows 90% or more, your system is very healthy.

Reality 2 — 80% test coverage is a decent, achievable target. Beyond that, the effort for each incremental percentage point gets steeper, so the ROI needs to be evaluated. It needs to be thought through, because a system with 100% test coverage can still be unhealthy. The cost of maintaining the last 20% is very high, and it is quite possible that over time those cases become invalid and paint a rosy picture even though the real situation has already become grim.

Learning 2 — Don't aim for the last 20% of test coverage; instead, focus on tests that are also easy to maintain. And whatever tests are defined had better do what they are supposed to do, else consider eliminating them.

Myth 3 — Flaky tests can be retained, since they fail only some of the time.

Flaky tests are tests that pass, then suddenly fail, and then pass again when re-run, with no change in code or configuration.

Reality 3 — These are harmful tests. They harm developers and the product for the reasons below:

  1. Developers have to invest significant time just to find out that the test passes now. They are unknowingly conditioned to start ignoring test results.
  2. When the test fails for a real reason, developers conditioned to ignore it, ignore it.
  3. Such flaky tests give a false sense of the system being healthy.

Learning 3 — When tests fail, find out from the logs why they fail, then fix the test or fix the code. If the logs are not sufficient and you find that the test passes the next time you run it, you know it has become flaky. Add more logs and keep an eye on it, so that the next time it fails you can find out from the logs what went wrong. This way you get to fix the code in good time, which is the very objective of tests. If you don't have the time to fix the test or the code right now, for whatever priorities, stop the flaky test from running and bring it back into regular runs only after you fix it. Do not inculcate the habit of ignoring test failures.
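As an illustration, here is a minimal sketch of quarantining such a flaky test instead of silently ignoring it, assuming JUnit 5; the OrderSync class and the reason string are placeholders, not code from any real project.

```java
import org.junit.jupiter.api.Disabled;
import org.junit.jupiter.api.Test;

import static org.junit.jupiter.api.Assertions.assertTrue;

class OrderSyncTest {

    // Hypothetical unit under test, nested here only to keep the example self-contained.
    static class OrderSync {
        boolean syncAll() {
            return true;
        }
    }

    // Quarantined: the test fails intermittently with no change in code or configuration.
    // The reason string keeps the context visible; the test comes back into regular runs
    // only once the root cause is found and fixed.
    @Disabled("Flaky: intermittent failure under investigation")
    @Test
    void syncCompletesSuccessfully() {
        assertTrue(new OrderSync().syncAll());
    }
}
```

A disabled test still shows up in reports as skipped, so it stays visible rather than forgotten.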

Automated Testing

  1. There is an initial investment involved in defining the tests.
  2. It is well suited to regression testing, where the same piece of code has to be tested again and again in the same manner.
  3. It enables the CI (Continuous Integration) practice in the team, where the code in the develop branch is always production-ready. Whether you can embrace the CD (Continuous Deployment) aspect depends on your organization, but aligning to CI is always a possibility.
  4. It reduces the overall cost of testing.
  5. It also increases confidence in the solution, both for the product owner and for the developer.
  6. As a side effect, it serves as well-maintained documentation of the system's usage for new developers.

Manual Testing

Manual testing is costly compared to automated testing, both in terms of execution time and in the cost of the people setting it up and running it. But manual testing has its own place. A few scenarios:

— For initial testing of the solution, a second pair of eyes tells you how usable the solution is, and this is important for the initial acceptance testing of a new feature.

— Sometimes, in an end-to-end use case, more than one team or solution plays its own small role in the big picture. For end-to-end service testing, you need coordination, collaboration and conversation across teams, and eyes to tell you that the end-to-end feature functionality is all good.

— TDD can take you to 80%, but there is still more coverage to be achieved. And of course, you need the final sign-off for the release.

Unit Tests, Integration Tests, Service Tests, System Tests

Within testing, tests are classified into different flavors depending on factors like

  1. Infrastructure dependency
  2. Coverage scope

Due to the above, they end up having different confidence levels, and each of them has its own place and value, explained below in detail.

Unit Tests (Basic)

These are tests that can run with just the run-time engine (e.g. the JRE). They are self-contained and do not make any calls to infrastructure elements like databases, caches, message buses, etc.

Coverage — Small per test. However, a suite of several such unit tests put together helps you get good coverage. This kind of testing makes it possible to exercise both the happy (positive) and exceptional (negative) code paths, which sometimes cannot be covered by integration tests.

Execution Time — Fastest among all flavors. Therefore unit tests can be part of the build process itself.

Confidence Level — Least. The test only tells you whether the unit is good or not, and a good unit does not mean the solution is good; how the units are stitched together matters very much.
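For concreteness, here is a minimal sketch of such a self-contained unit test, assuming JUnit 5; DiscountCalculator is a made-up class nested inside the test so that nothing beyond the runtime is needed.

```java
import org.junit.jupiter.api.Test;

import static org.junit.jupiter.api.Assertions.assertEquals;
import static org.junit.jupiter.api.Assertions.assertThrows;

class DiscountCalculatorTest {

    // A tiny unit under test, included here only to keep the example self-contained.
    static class DiscountCalculator {
        double apply(double price, double percent) {
            if (percent < 0 || percent > 100) {
                throw new IllegalArgumentException("percent must be between 0 and 100");
            }
            return price - (price * percent / 100);
        }
    }

    @Test
    void appliesDiscountOnHappyPath() {          // positive code path
        assertEquals(90.0, new DiscountCalculator().apply(100.0, 10.0));
    }

    @Test
    void rejectsInvalidPercentage() {            // negative/exceptional code path
        assertThrows(IllegalArgumentException.class,
                () -> new DiscountCalculator().apply(100.0, 150.0));
    }
}
```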

Unit Testing using Mocks

Sometimes tests have to reach out to other modules or infrastructure elements that are not available. Mocks are useful, and have their own place, in the scenarios below:

  1. The infrastructure is not yet provisioned.
  2. The dependent modules are also under development.
  3. The dependent modules have heavy memory needs that you cannot meet locally.
  4. The latency of existing infrastructure elements is such that the tests cannot finish in a reasonable time.
  5. The test has a dependency on another service whose availability is beyond your control. To prevent tests from being flaky, you may have to mock that interface.

Advantage — It allows several systems to be developed in parallel without waiting for each other.

Risk

  1. If mocks are introduced for reasons #1, #2 or #3, then with the efflux of time the mocks become invalid with respect to the real systems, their maintenance is forgotten, and they give a false sense of a healthy system, defeating the very purpose they exist for; issues are finally uncovered directly in production.
  2. Many mock objects are provided by mocking frameworks, not by the infrastructure solution teams, so there is no guarantee that the API behavior will be identical between the mock objects and the real elements. This can lead to incorrect tests.

Learning — If mocks are introduced for reasons #1, #2 or #3, we incur an operational cost of maintaining the mock objects. When the underlying reason goes away with the efflux of time, the mock tests should evolve into integration tests that use the real infrastructure.

Execution Time — Low. Comparable to a basic unit test, or slightly more.

Confidence Level — Better than basic unit tests

Execution Environment — The build pipeline. Care should be taken that the 3rd-party mock objects are kept in sync with the behavior of the real components they stand in for.
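As an illustration, the sketch below replaces the infrastructure-facing collaborator with a mock, assuming JUnit 5 and Mockito; OrderRepository and OrderService are hypothetical and nested here only to keep the example self-contained.

```java
import org.junit.jupiter.api.Test;

import static org.junit.jupiter.api.Assertions.assertTrue;
import static org.mockito.Mockito.mock;
import static org.mockito.Mockito.verify;
import static org.mockito.Mockito.when;

class OrderServiceTest {

    // Hypothetical collaborator that would normally hit the database.
    interface OrderRepository {
        int countOrders(String customerId);
    }

    // Hypothetical unit under test.
    static class OrderService {
        private final OrderRepository repository;
        OrderService(OrderRepository repository) { this.repository = repository; }
        boolean isFrequentBuyer(String customerId) {
            return repository.countOrders(customerId) >= 5;
        }
    }

    @Test
    void flagsFrequentBuyersWithoutTouchingTheDatabase() {
        OrderRepository repository = mock(OrderRepository.class);
        when(repository.countOrders("c-42")).thenReturn(7);   // stub the infrastructure call

        OrderService service = new OrderService(repository);

        assertTrue(service.isFrequentBuyer("c-42"));
        verify(repository).countOrders("c-42");               // the interaction we expected
    }
}
```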

Integration Tests

Integration tests are tests that expect the dependent modules or infrastructure to be in place and reach out to them during the test run. The test coverage therefore includes:

  1. Creating the input to the test and feeding it to the test program.
  2. Testing your actual code.
  3. Testing that your code talks correctly to the infrastructure.
  4. Testing that the infrastructure is set up correctly for your test.
  5. Testing that the infrastructure is available and responding.
  6. This can span several components and infrastructure elements.
  7. Testing that the output is received by your code.
  8. Testing that the output matches your expectation.

Confidence Level — High. How high depends on what is under test: a component interface, a service interface or a system interface, in increasing order of confidence.

Execution Time — High. Network latency also gets baked in, so the tests take longer to finish than unit tests.

Execution Environment — Should not be part of the build pipeline. Including these tests in the build pipeline can cause longer build times due to their high execution time, and the build may fail because of the dependency on the availability of external infrastructure elements.

Tip: It is recommended to run the integration tests as part of daily cron jobs instead. They should also be available to run on demand.
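As a hedged sketch of this setup, the test below talks to a real database over JDBC and is tagged so that the build pipeline can exclude it while the daily or on-demand run includes it; the environment variable, the table name and the "IT" naming convention are assumptions, not prescriptions.

```java
import org.junit.jupiter.api.Tag;
import org.junit.jupiter.api.Test;

import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.ResultSet;
import java.sql.Statement;

import static org.junit.jupiter.api.Assertions.assertTrue;

// Tagged so the build pipeline can exclude it and the nightly/on-demand job can include it.
@Tag("integration")
class CustomerRepositoryIT {

    // Placeholder: a real setup would read the JDBC URL (and credentials) from
    // environment-specific configuration, with the driver on the classpath.
    private static final String JDBC_URL = System.getenv("APP_DB_URL");

    @Test
    void readsFromTheRealDatabase() throws Exception {
        try (Connection connection = DriverManager.getConnection(JDBC_URL);
             Statement statement = connection.createStatement();
             ResultSet rs = statement.executeQuery("SELECT COUNT(*) FROM customers")) {
            assertTrue(rs.next());
            assertTrue(rs.getInt(1) >= 0, "table should exist and be queryable");
        }
    }
}
```

How the tag gets filtered (Maven Surefire/Failsafe groups, Gradle tag filters, or simply a separate job) is left to the pipeline.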

Service Tests/System Tests

These two terms are used interchangeably depending on which bounded context you are referring to. If you are testing the system interfaces, it is called a system test; if you are testing a service interface, it is called a service test. The following are their characteristics:

  1. They are integration tests that use infrastructure elements.
  2. They are end-to-end tests of the service or the system, treating them as black boxes.
  3. They have the maximum ROI: with minimum investment in tests, you can go from 0 to 50% coverage just by covering the system interfaces.
  4. They are mandatory for any system aiming for 80% coverage.
  5. A lot of the time, though JUnit is a good place to start, JUnit alone is not sufficient to test them. You have to see whether there is a tool that can be put to use; in most cases a proprietary, hybrid testing framework that combines data-driven tests with other testing flavors is employed.

Confidence Level — Highest

Execution Time — Highest. More components and more network calls mean more latency.
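The sketch below treats a deployed service as a black box and exercises its public HTTP interface with the JDK's built-in HttpClient (Java 11+); the base URL, the /health path and the "UP" response body are illustrative assumptions, not part of any particular product.

```java
import org.junit.jupiter.api.Tag;
import org.junit.jupiter.api.Test;

import java.net.URI;
import java.net.http.HttpClient;
import java.net.http.HttpRequest;
import java.net.http.HttpResponse;

import static org.junit.jupiter.api.Assertions.assertEquals;
import static org.junit.jupiter.api.Assertions.assertTrue;

// Black-box system test: call the deployed service's public interface and assert on the response.
@Tag("system")
class OrderApiSystemTest {

    // Placeholder: a real suite would point at a dedicated test environment.
    private static final String BASE_URL = System.getenv("ORDER_API_BASE_URL");

    @Test
    void healthEndpointRespondsSuccessfully() throws Exception {
        HttpClient client = HttpClient.newHttpClient();
        HttpRequest request = HttpRequest.newBuilder(URI.create(BASE_URL + "/health")).GET().build();

        HttpResponse<String> response = client.send(request, HttpResponse.BodyHandlers.ofString());

        assertEquals(200, response.statusCode());
        assertTrue(response.body().contains("UP"), "service should report itself healthy");
    }
}
```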

Test Driven Development

The tests are the first thing written during feature development. The characteristics are:

  1. TDD is the concrete implementation of the true Agile spirit and the shift-left paradigm: testing is no longer done at the end of development.
  2. It results in reduced cost of development and testing, early detection and resolution of bugs, and early recognition and communication of concerns.
  3. First write the test: what you expect out of the system. Define the input and the output of the test.
  4. Then build the functionality to satisfy the test: how you implement the feature to deliver the "what" above (a small sketch follows this list).
  5. Tests are written in an all-encompassing manner: prerequisite data is prepared as part of the test, and after the test the data is cleaned up.
  6. These tests can be used for regression testing.
  7. Frameworks like JUnit and TestNG are widely used for this in the Java world.
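As referenced in the list above, here is a small sketch of that test-first rhythm, assuming JUnit 5; ShoppingCart is a made-up class, nested below the test only to keep the example self-contained, and in real TDD it would be written only after the test has failed once.

```java
import org.junit.jupiter.api.Test;

import java.util.ArrayList;
import java.util.List;

import static org.junit.jupiter.api.Assertions.assertEquals;

class ShoppingCartTest {

    // Step 1 (red): this test is written first and fails because ShoppingCart does not exist yet.
    @Test
    void totalsTheItemsAddedToTheCart() {
        ShoppingCart cart = new ShoppingCart();
        cart.addItem("book", 2, 10.0);
        cart.addItem("pen", 1, 2.5);

        assertEquals(22.5, cart.total());
    }

    // Step 2 (green): just enough implementation to satisfy the test, added afterwards.
    static class ShoppingCart {
        private final List<Double> lines = new ArrayList<>();

        void addItem(String name, int quantity, double unitPrice) {
            lines.add(quantity * unitPrice);
        }

        double total() {
            return lines.stream().mapToDouble(Double::doubleValue).sum();
        }
    }
}
```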

A few frameworks

There are lots of frameworks in the market today. I will touch upon a few of the most popular ones.

JUnit/TestNG — These are the favorite test frameworks used for TDD.

Cucumber — In some organizations there is little interaction between the product owner who defines the requirement and the developer who has to understand that requirement in that little interaction and implement it. The chance of getting the requirement wrong is a real possibility. This is where Cucumber finds its place: the product owner defines the requirement in Cucumber's language, which is then used by developers for TDD.
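A hedged sketch of how that hand-off can look with Cucumber-JVM: the product owner writes the Gherkin scenario (reproduced in the comment) and the developer implements step definitions against it for TDD; the feature wording, file path and pricing rule are illustrative only.

```java
// Feature file written by the product owner (e.g. src/test/resources/features/discount.feature):
//
//   Feature: Loyalty discount
//     Scenario: Gold customers get 10% off
//       Given a gold customer with a cart worth 100.0
//       When the discount is applied
//       Then the final price is 90.0

import io.cucumber.java.en.Given;
import io.cucumber.java.en.Then;
import io.cucumber.java.en.When;

import static org.junit.jupiter.api.Assertions.assertEquals;

public class DiscountSteps {

    private double cartValue;
    private double finalPrice;

    @Given("a gold customer with a cart worth {double}")
    public void aGoldCustomerWithACartWorth(double value) {
        this.cartValue = value;
    }

    @When("the discount is applied")
    public void theDiscountIsApplied() {
        finalPrice = cartValue * 0.9;   // placeholder for the real pricing logic under test
    }

    @Then("the final price is {double}")
    public void theFinalPriceIs(double expected) {
        assertEquals(expected, finalPrice);
    }
}
```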

I encourage you to explore the various tools in the market that satisfy your business needs.

Conclusion

The ultimate objective of testing is to create a quality product, give end users a smooth experience, give developers a great working environment, give product owners production-ready code, be production-ready always, and thereby become Agile in the true sense. Quality is not built in a day; it's a habit worth embracing.

Watch out for my next article on Non-Functional testing.
