This is the fourth part of an article covering Service Virtualization.
In this part of the article, we will go through some concrete examples of how to implement Service Virtualization, along with related use cases. However, the concept of Service Virtualization can be applied to solve many other issues, and some of them will be covered in the next post.
Use Case #1: Test Environments
This is the most straightforward use case! How often do you lack proper integrations in your test environment? Is there some component or system that you can't access during unit tests because of some constraint or restriction?
Typical constraints can be:
- Availability. A system that lacks a test environment. This is common for legacy systems and mainframes.
- Cost. You can't integrate costly external APIs or integrations in the test environment, as some providers will charge you for transactions against their test/sandbox environments, or limit the number of transactions you may send.
- Capacity. Core business systems, such as an ERP, CRM, sales, or SCM system, may not be able to accept test runs during peak hours, forcing you to perform all integration testing during evenings, nights, and weekends.
- Data Synchronization. It's common to move transactional data to a System-of-Record (SoR) periodically. But in a test environment, the SoR is not always available, making data validation tests very difficult. Also, in a distributed architecture such as SOA, it takes a lot of work to make sure that all the independent databases are in the correct state, with the correct data load, for the tests to be successful and meaningful.
Using Service Virtualization you can create Hybrid Environments, where components usually unavailable in the test system can be fully integrated, even as early as during unit testing. Or you can even provide such integrations to each and every developer and tester! Issues that usually wouldn't be identified until Integration or UAT testing can now be handled much earlier in the software life-cycle.
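To make this concrete, here is a minimal sketch of such a virtual service in Python, using only the standard library. The endpoint, field names, and values are all hypothetical - a real virtualization tool would record or model the actual system - but the principle is the same: the consumer gets a realistic response even though the real backend is unreachable.

```python
import json
import threading
from http.server import BaseHTTPRequestHandler, HTTPServer
from urllib.request import urlopen

# Canned response for a hypothetical /accounts/123 endpoint on a
# mainframe system that has no test environment of its own.
CANNED = {"accountId": "123", "balance": 100.0, "currency": "EUR"}

class VirtualService(BaseHTTPRequestHandler):
    def do_GET(self):
        # Always answer with the canned payload, regardless of backend state.
        body = json.dumps(CANNED).encode()
        self.send_response(200)
        self.send_header("Content-Type", "application/json")
        self.send_header("Content-Length", str(len(body)))
        self.end_headers()
        self.wfile.write(body)

    def log_message(self, *args):
        pass  # keep test output quiet

# Port 0 asks the OS for any free port, so the stub never collides.
server = HTTPServer(("127.0.0.1", 0), VirtualService)
threading.Thread(target=server.serve_forever, daemon=True).start()

url = f"http://127.0.0.1:{server.server_port}/accounts/123"
response = json.loads(urlopen(url).read())
print(response["balance"])  # the simulated mainframe answers instantly
server.shutdown()
```

A developer or tester can point their integration code at this local URL instead of the unavailable system, which is exactly what a hybrid environment does at larger scale.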
Use Case #2: Test Data Management
Handling the data set in test environments is a headache. Test cases that don't run to completion have a tendency to leave the data set unrestored, leading to duplicate entries and records in an inconsistent state. On top of this, restoring test data is time-consuming, either because the data sets can be rather large and take a long time to import, or because the department responsible for data loading also has other, higher-priority tasks. And while the QAs wait for the Operations department to restore the test data, the whole test process comes to a grinding halt.
The key concept of working against a simulation instead of a physical test platform is that there is no database behind it, and thus no data set to corrupt with invalid test cases! Still, the simulated service will always give you a proper response so that you can verify your integration, which is what most tests aim to achieve.
However, this does not replace full-scale testing in a proper physical environment, where the tests reach all the way down to the database layer. But using a simulation, developers and QA can start doing integration tests much earlier in the development life-cycle, so that once the deep-layer database tests start, most of the integration issues are already solved.
And, hand on heart, how many of your software bugs stem from the database server not reading/writing the correct data, compared to those that stem from issues in the communication and system integration? Do you really need all that database reading and writing to isolate your most common bugs?
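To illustrate this statelessness, the Python sketch below (all rules and field names are hypothetical) matches requests against fixed rules instead of querying a database. Running the same test any number of times can never leave the data set in an inconsistent state, because there is no data set.

```python
# A minimal in-process stub for a hypothetical customer-lookup service.
# Requests are matched against fixed rules rather than a database, so no
# test run can ever leave data behind in an inconsistent state.
RULES = {
    ("GET", "/customers/42"): {"id": 42, "name": "Test Customer", "status": "active"},
    ("GET", "/customers/99"): {"error": "not found"},
}

def virtual_service(method, path):
    """Return the canned response for a request, like a stub engine would."""
    return RULES.get((method, path), {"error": "no rule matched"})

def lookup_status(service, customer_id):
    """The integration code under test: calls the service and reads a field."""
    reply = service("GET", f"/customers/{customer_id}")
    return reply.get("status", "unknown")

# Run the same "test" twice: the stub answers identically both times,
# because there is no underlying data set the first run could corrupt.
first = lookup_status(virtual_service, 42)
second = lookup_status(virtual_service, 42)
print(first, second)
```

The negative case (customer 99) is just another rule, so error-path tests need no careful data preparation either.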
Use Case #3: Developers and QA can work in parallel (And Shift Left in the SDLC)
Traditionally, software development has been a rather sequential process. Developers wrote bits of code, and when they thought it was mature enough, it was deployed to a test environment for QAs to evaluate. Any bugs found would be sent back to the development team for patching, and the process was repeated until QA was satisfied with the quality. This process caused developers and QAs to spend a lot of time waiting for each other to complete their tasks.
In the last couple of years, Continuous Integration (CI) has become a staple of software development. By validating the code base as soon as new code has been produced, developers become aware immediately if they have provided a faulty artifact. As soon as new code is committed to the repository, a set of automated tests is executed. The automated CI validation usually relies heavily on unit tests and other forms of module testing. If the new code artifact doesn't pass the automated tests, the developer is notified - usually by e-mail or by a status screen. As developers are now aware of code issues almost instantly, and don't have to wait for QAs to discover them later in the testing phase, the development process is shifted left, that is, moved to an earlier stage of the Software Development Life-Cycle (SDLC).
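As a sketch of the kind of check a CI server could run on every commit, the Python example below uses a stubbed service reply in place of a real integration; the module under test and the reply fields are hypothetical, but no external system needs to be reachable from the build server.

```python
import unittest

# A toy module under test: formats an order confirmation from a service reply.
def build_confirmation(reply):
    return f"Order {reply['orderId']} confirmed for {reply['customer']}"

# Stubbed reply from a hypothetical order service, standing in for the
# real integration that the CI build server cannot reach.
STUB_REPLY = {"orderId": "A-1", "customer": "ACME"}

class ConfirmationTests(unittest.TestCase):
    def test_confirmation_uses_order_id(self):
        self.assertIn("A-1", build_confirmation(STUB_REPLY))

    def test_confirmation_uses_customer(self):
        self.assertIn("ACME", build_confirmation(STUB_REPLY))

# A CI job would run this suite on every commit and notify the committer
# on failure; here we run it in-process to show the mechanism.
suite = unittest.TestLoader().loadTestsFromTestCase(ConfirmationTests)
result = unittest.TextTestRunner(verbosity=0).run(suite)
print(result.wasSuccessful())
```

Because the stub removes the dependency on the real system, this suite can run on every single commit rather than only in a scheduled integration window.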
So what does all this have to do with Service Virtualization? Well, using virtualized services, QA can start testing long before the developers have handed over the code artifact to the testing department! As long as the contract, that is, the rules of communication, has been established in the requirement/design phase, virtual services can be created early on in the project. This means that QA can start performing integration tests long before the developers have completed the new module, with both teams working in parallel - and no more waiting for the other department to complete its artifact!
If QA can start developing tests in parallel with the development team, you can alleviate the common issue where the code artifact is handed over to the testers with far too little time before it is supposed to go into production. In fact, when the development department hands over the artifact, a majority of the tests can already have been performed!
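To sketch the contract-first idea in Python (the service, its fields, and its values are all hypothetical): once the contract is agreed, QA can derive a virtual service from it and validate replies against the contract, before the real implementation exists.

```python
# The agreed contract (hypothetical): a payment service replies with these
# fields and types. QA can build a virtual service from the contract alone,
# before the developers have written a single line of the real service.
CONTRACT = {"paymentId": str, "amount": float, "approved": bool}

def virtual_payment_service(request):
    """A stub derived from the contract: the shape is right even though
    the values are canned."""
    return {"paymentId": "P-001", "amount": float(request["amount"]), "approved": True}

def contract_check(reply, contract):
    """Verify that a reply carries every contracted field with the agreed type."""
    return all(isinstance(reply.get(field), ftype) for field, ftype in contract.items())

reply = virtual_payment_service({"amount": 19.90})
print(contract_check(reply, CONTRACT))
```

When the real service is eventually delivered, the same `contract_check` can be pointed at it, so the tests QA wrote against the stub carry over unchanged.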
In the fifth part of the article I will cover some more advanced use cases, such as performance testing, compliance issues and legacy systems.