Use Case #1: Test Environments
Typical constraints include:
- Availability. The system lacks a test environment altogether. This is common for legacy systems and mainframes.
- Cost. You can't exercise costly external APIs or integrations in the test environment, as some providers charge per transaction against their test/sandbox environments, or limit the number of transactions you may send.
- Capacity. If you have a core business system, such as an ERP, CRM, sales, or SCM system, it may not be able to accept test runs during peak hours, forcing you to perform all integration testing during evenings, nights, and weekends.
- Data Synchronization. It's common to move transactional data to a System of Record (SoR) periodically, but in a test environment the SoR is not always available, making data validation tests very difficult. Also, in a distributed architecture such as SOA, it takes a lot of work to ensure that every independent database is in the correct state, with the correct data load, for the tests to be successful and meaningful.
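When any of these constraints blocks access to a real test environment, a lightweight simulated service can stand in for the missing system. As a minimal sketch (the endpoint, field names, and canned response are all hypothetical, not part of any particular product), a stub HTTP service in Python might look like this:

```python
# A stand-in for an unavailable or costly backend: an in-process HTTP stub
# that answers with a canned JSON response instead of hitting the real system.
# The /orders/<id> endpoint and the response fields are hypothetical examples.
import json
import threading
from http.server import BaseHTTPRequestHandler, HTTPServer
from urllib.request import urlopen

class StubHandler(BaseHTTPRequestHandler):
    def do_GET(self):
        # Canned, deterministic response for any order id in the path.
        body = json.dumps({"orderId": self.path.rsplit("/", 1)[-1],
                           "status": "CONFIRMED"}).encode()
        self.send_response(200)
        self.send_header("Content-Type", "application/json")
        self.send_header("Content-Length", str(len(body)))
        self.end_headers()
        self.wfile.write(body)

    def log_message(self, *args):  # keep test output quiet
        pass

# Port 0 asks the OS for any free port; run the stub on a background thread.
server = HTTPServer(("127.0.0.1", 0), StubHandler)
threading.Thread(target=server.serve_forever, daemon=True).start()

# The integration under test calls the stub exactly as it would the real API.
with urlopen(f"http://127.0.0.1:{server.server_port}/orders/42") as resp:
    payload = json.loads(resp.read())

server.shutdown()
```

Dedicated service-virtualization tools offer far richer request matching and protocol support, but even a stub this small lets integration code run when the real system cannot be reached.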
Use Case #2: Test Data Management
The key concept of working against a simulation instead of a physical test platform is that there is no database behind it, and thus no data set to corrupt with invalid test cases. Still, the simulated service will always give you a proper response, so you can verify your integration, which is what most tests aim to achieve.
However, this does not replace full testing against a proper physical environment, where tests reach all the way down to the database layer. But with a simulation, developers and QA can start integration testing much earlier in the development life-cycle, so that by the time the deep-layer database tests start, most integration issues are already solved.
And, hand on heart: how many of your software bugs stem from the database server reading or writing the wrong data, compared to those that stem from issues in communication and system integration? Do you really need all that database reading and writing to isolate your most common bugs?
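To make the idea concrete, here is a minimal sketch of a database-free simulated operation in Python. The operation name, required fields, and response shape are all hypothetical; the point is that the simulation validates the request and answers deterministically without persisting anything, so no data set exists to corrupt:

```python
# A stateless simulated service operation: it validates input and always
# returns a well-formed response, but stores nothing. Invalid test cases
# get a realistic error back without corrupting any data set.
# The operation and its fields are hypothetical examples.
def simulated_create_customer(request: dict) -> dict:
    required = {"name", "email"}
    missing = required - request.keys()
    if missing:
        # The real system would reject this too; the simulation mirrors that.
        return {"status": 400, "error": f"missing fields: {sorted(missing)}"}
    # Deterministic, database-free success response for the caller to verify.
    return {"status": 201, "customerId": "SIM-0001", "name": request["name"]}

ok = simulated_create_customer({"name": "Ada", "email": "ada@example.com"})
bad = simulated_create_customer({"name": "Ada"})
```

Because every call is self-contained, tests can run in any order and any number of times without a data reset between runs.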
Use Case #3: Developers and QA can work in parallel (And Shift Left in the SDLC)
In the last couple of years, Continuous Integration (CI) has become a staple of software development. As soon as new code is committed to the repository, a set of automated tests is executed; this validation usually relies heavily on unit tests and other forms of module testing. If the new code artifact doesn't pass the automated tests, the developer is notified, usually by e-mail or a status screen, and is aware almost directly that they have produced a faulty artifact. Because developers learn of code issues almost instantly, instead of waiting for QA to discover them later in the testing phase, the development process is shifted left, that is, moved to an earlier stage of the Software Development Life-Cycle (SDLC).
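As an illustration of the kind of automated check a CI server runs on every commit, here is a small, self-contained unit test example in Python. The function under test, parse_amount, is invented for the example; in a real pipeline the CI server fails the build (and notifies the developer) whenever such a test fails:

```python
# A sketch of a CI-style automated check: unit tests that gate each commit.
# parse_amount is a hypothetical function under test.
import unittest

def parse_amount(text: str) -> int:
    """Parse a decimal currency amount into whole cents."""
    value = float(text)
    if value < 0:
        raise ValueError("amount must be non-negative")
    return round(value * 100)

class ParseAmountTests(unittest.TestCase):
    def test_whole_amount(self):
        self.assertEqual(parse_amount("12.34"), 1234)

    def test_negative_rejected(self):
        with self.assertRaises(ValueError):
            parse_amount("-1")

# A CI server would run these via its test runner; here we run them directly.
suite = unittest.TestLoader().loadTestsFromTestCase(ParseAmountTests)
outcome = unittest.TextTestRunner(verbosity=0).run(suite)
```

A failing assertion makes the runner report an unsuccessful result, which a CI server translates into a non-zero exit code and a red build.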
If QA can develop tests in parallel with the development team, you can alleviate the common issue where the code artifact is handed over to the testers with far too little time before it is supposed to go into production. In fact, by the time the development department hands over the artifact, a majority of the tests can already have been performed!