Araneo: DevOps & Automation

A Case for Service Virtualization - Key Concepts

3/3/2016

This is the third part of an article covering Service Virtualization.

Introduction

If you go back to the first two articles, you should have a feel for the problems that Service Virtualization can solve. It's all about breaking constraints; enabling developers, QA and Platform Engineers to model services which are not available.

Services may be unavailable for a number of reasons, for instance because a module is still in the development phase, or because a third-party service is too expensive to make available to QA in early tests. Other times test environments may be available, but the test data is constantly corrupted, so the tests don't create any actual value. You have probably heard the phrase "the application usually behaves like this in the test environment, but it should work when we get the artifact to production" a million times.

In this article, we will cover the key concepts of Service Virtualization; what it is, how it works and what it can do for you.

Key Concepts of Service Virtualization

What is a Virtualized Service?
In short - it's an intelligent simulation of a service. It's an API, or some other form of end-point, which acts in accordance with the rules of some sort of contract. The virtualized service is aware of parameters and data types, and knows which requests are valid and which aren't - and responds accordingly. So when you request the simulated service, you get a response which is valid according to the specified contract, just as if it were a non-simulated service.
Why should I Virtualize Services?
Well, there are several reasons for this. But as stated earlier, it's all about breaking constraints, no matter if they are technological or financial. Common problems which can be resolved by simulating services include:
  • Test Environments - How much time do developers and QA spend just waiting for access to the test system? And when Dev gets its turn, QA has to wait - and the other way around. It is usually the Operations department that is responsible for deploying the correct artifacts and loading the proper test data, and when they are occupied with important production issues instead, everyone has to wait for the test environment to become available or restored.
  • Test Data Management - The data set in test environments is easily corrupted. A single test case which fails to restore the test data once it's completed will cause errors relating to duplicate data the next time it's executed. And again, there is usually a dependency on the Ops team to take the time to restore a corrupt test environment. 
  • Identify Issues early in the SDLC - Software issues should be found as early as possible in the Software Development Life-Cycle (SDLC), as mitigation costs increase rapidly the later you discover them. If you can give developers and QAs access to systems which usually are not available until integration testing, or even the production system, how many more issues can you then find earlier in the SDLC?
  • Integration Costs - External providers may charge not only for using their production system, but also for the UAT/Sandbox environment. Even if the per-transaction fee may be low, it can add up to quite a cost if you want to run a load test on your system. 
  • The Problem with Mocks - The issues relating to mocked services, or "stubs", are covered in the next section. Mocking a service may seem like an easy way to validate integrations, but there are some problems associated with it.
Why not just write a Mock instead?
To mock a service means to develop a stand-in, or stub, for the service. When you request the mock, it will respond with some sort of data. This sounds similar to a virtualized service, so why not go for the mock instead? Well, virtualizing the service actually has some key benefits:
  • Automatic Creation - A mock needs to be programmed by a developer, whereas a simulation can be created by anyone.
  • Rely on the Simulation - A mock needs to be maintained by the developer as well. Running tests against a mock integration which is not up to date with the live integration may cause false positives in testing, which will not be detected until the artifact has reached production. With simulations, anyone can update the simulation to match changes in the integration - or it can even happen automatically.
  • Let Developers Develop - Software Developers should be doing what they do best - transforming business requirements into software solutions. Writing and maintaining mock integrations consumes valuable resources and lowers overall output.
  • Rich Test Data - Mocks are usually quite stupid. They may respond with "everything is OK!" no matter what data you provide to them, which is useless for negative test cases. It's not uncommon that a mocked service responds with the same data, no matter which data you request it with - making request/response and parameter validation almost impossible.
  • Test for Performance - Also, mocks are usually too good at what they do! They respond almost immediately, since there isn't any processing of business data going on in the background. So how do you know how your application behaves when the service you are integrating with isn't behaving perfectly?
Which Services can be Virtualized?
As a concept, Service Virtualization is not bound to any specific set of services. All communication which is based on requests and responses, and carried over the standard Internet protocols TCP or UDP, can be simulated.

It's common to simulate Web Services wrapped in HTTP, such as SOAP and REST, but other Internet protocols such as SMTP, IMAP or SNMP work just as well. This means that you can simulate anything from a Web API to a Mail Server or even the performance reports from a Network Router. Is your SMS Provider charging you for sending text messages in their sandbox environment? Then you can set up a simulation of their SMPP-based end-point instead! You can create simulations of media protocols as well, such as RTP, to create a virtualized voice or video service.

Proprietary protocols work just as well. If you have a business solution from SAP or Oracle, or maybe an internal legacy business system, you can simulate those integrations too. Actually, legacy systems are usually very beneficial to simulate, as their test environments are often limited, or completely unavailable.

Furthermore, the concept is not bound to external modules: you can simulate an internal Java component, or even a SQL database. Any sort of constraint, and any sort of unavailable dependency, can and should be simulated!
How do you Create a Virtualized Service?
You can create a simulation in many different ways, but the most common ones are:
  • Using an API Specification - This is very useful when simulating Web Services where WSDL or XSD files may be available to describe the rules of the communication.
  • Using Live Data - If you haven't got an API Specification, you can provide the simulation with sample data from a test or live system. The system will listen in on the requests and responses in the communication, and start building a set of known functions, and their associated parameters.
  • Using R/R Data - If you for some reason can't listen in on the communication, you can feed the simulation from known Request & Response Pairs. These data pairs can usually be extracted from a test environment, or from the logs of a live system.
  • Manually - Of course you can create the simulation data from scratch, but this is usually very time-consuming. 
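As a toy illustration of the R/R-pair approach, here is a shell sketch where recorded responses are stored in files keyed by request. The keys and JSON payloads are invented for this example; a real Service Virtualization tool would of course record and match pairs far more intelligently:

```shell
#!/bin/sh
# Store two recorded Request/Response pairs as files (made-up data).
mkdir -p rr-pairs
printf '%s\n' '{"balance": 125.50}'          > rr-pairs/GET_account_1001
printf '%s\n' '{"error": "unknown account"}' > rr-pairs/GET_account_9999

# "Simulate" a service: return the recorded response for a request key,
# or a default error when no pair has been recorded.
simulate() {
  if [ -f "rr-pairs/$1" ]; then
    cat "rr-pairs/$1"
  else
    printf '%s\n' '{"error": "no recorded response"}'
  fi
}

simulate GET_account_1001    # prints {"balance": 125.50}
simulate GET_account_5555    # prints {"error": "no recorded response"}
```

The point is only the lookup model: a recorded request maps to a recorded, contract-valid response.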
So what is Service Virtualization not?
It's important to remember that a simulation is not a database. So when you add a data record using a virtualized API, the record will not be available to retrieve from the virtualized API later on, as the simulated service will not keep it.

But is this really an issue? When you perform testing of a module, you are usually not interested in whether some external service could add a record to its internal database. Instead, you want to verify that your module-under-test can integrate properly with the service. And this is exactly what a simulated service can provide for you!

In the next article, I will provide examples of typical use-cases where Service Virtualization can help you out, including Test Data Management, External Integration, Compliance, Performance Testing and Verification Environments.

A Case for Service Virtualization - Introduction

2/19/2016

This is the first part of an article covering Service Virtualization.
What is Service Virtualization and why is it needed?
Service Virtualization is a concept where services are virtualized - or as I prefer to call it - simulated. In short, it's a way to replace an integration point with an intelligent recording of the service. It's a simulation which you can request, just like any integration point, and you will retrieve a valid response.

So why would you want to simulate a service? It's quite straightforward: it's all about breaking constraints. It doesn't matter if you're a developer or in QA, you have probably - just like me - been limited in your work because you don't have access to some specific resource. It could be a testing environment where you're waiting for the correct test data to become available, or access to some third-party API.

And hand on heart, how often have you become aware of integration issues only after the artifact has been deployed to your live system? Then your options are either to roll back the release, or roll forward with patches. Both options will cause you downtime and a dent in your SLAs. The main benefit of Service Virtualization is that developers as well as QA can get access to environments much earlier in the Software Development Life-Cycle (SDLC). Problems which otherwise would be found in a late stage of testing - usually integration, system or acceptance testing - can now be found during development or initial testing!
[Image: Find issues earlier in the SDLC!]
It's all about SOA!
About a decade ago, Service Oriented Architecture (SOA) became the accepted way to replace older monolithic systems. The core concept of SOA was to decouple and isolate individual services, so that the system as a whole can continue to work even if a single service fails. This technique led to methods we today take for granted, such as load balancing and other forms of scalability, where you can increase the capacity of a service by just adding another instance.

But as a consequence, SOA created systems with a high number of dependencies. As each service was decoupled, it depended on a multitude of other specific services to work. At this point, system integration became a headache for engineers and architects alike. And as platform design became more complex, so did the maintenance of the systems. Most developers and QAs are familiar with faulty test environments, and with not being able to develop against third-party integration points - you might have a half-decent mock, at best.

Today, the API Economy has really taken off, and modern applications rely not only on internal dependencies, but also on one or several external service providers. This means that developers as well as QAs have to struggle with both internal and external dependencies, and the Operations department has to spend time restoring test systems - time they could otherwise have spent on what they do best: maintaining production systems to keep SLAs!
[Image: SOA Architecture and Constraints.]
What can Service Virtualization do about this?
By simulating specific dependencies, either web services or other forms of communication, you can give developers and QA access to systems earlier in the software development life-cycle. Developers can work against third-party APIs which otherwise would not be available, and QAs can start working against APIs which have not been developed yet. Platform engineers can create test and UAT environments with fully functioning integrations against third parties.

When you are working against a virtualized service, you're not working against a physical platform with a core database, where the test data can (or rather will) be corrupted. This is the whole point - when you test your module against an external system, you are not interested in whether the recipient was able to add your record to its data set. You are only interested in whether your module can integrate correctly with the external system, in accordance with the system specification. And that's the beauty of Service Virtualization - no faulty test data, no waiting for test environments to become available, no test environments holding the wrong software version. Instead, you always have accurate integrations at your fingertips!

And there are many other ways Service Virtualization can be used in your organization, I will get back on that in later posts. 
Which services can be simulated?
So what kind of services can you simulate? Well, basically anything carried over TCP/IP. The most common services are of course web services such as SOAP and REST, but inter-platform communication such as JMS, MQ and database connectivity over ODBC/JDBC can be simulated just as well. Have you got a business suite with a proprietary protocol, such as SAP or an Oracle system? That works too!
I will write a six-piece article, making a case for service virtualization - showing why and how to use it.
In Part 2 of the article, I cover common issues in test data and testing environments.

4 Quick Linux Command Line Tips

4/23/2015

Today's small blog post covers four handy tips which make your life on a Linux/bash terminal easier and faster!

Search your BASH History on-the-fly

One of the most time-saving features of bash is the "search bash history" shortcut Ctrl-r. Open a terminal and see what happens when you press it. The prompt will say "(reverse-i-search)". Type in the beginning of a command you have used recently which had some hard-to-remember extra parameters. This could be an scp command with that awkward flag for pointing out which certificate to use for the connection, or just finding the path to a script deep in the file system, which you edited the other day but can't remember where it's located.

Press Ctrl-r and type the first one or two letters of the command that you are looking for, and it will find the last executed command for your user which matches. Type more letters to qualify the search even further, or press Ctrl-r again to find the next match.

If you are not already familiar with this, you will soon ask yourself how you managed to live without it!

Find and Kill a Background Job the Fast Way

You probably know how to put a process in the background by using "&" after a command. Use the command "jobs" to list the processes that you have put in the background, where each job has its own unique numerical identifier. This ID can be used to kill the process, using "kill %<id>". See image below.
[Image: Find and kill background jobs.]
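In a script (as opposed to an interactive shell), you can sketch the same workflow with the special parameter $!, which holds the PID of the most recent background job:

```shell
#!/bin/bash
sleep 100 &            # put a long-running command in the background
jobs                   # lists it, e.g. "[1]+  Running  sleep 100 &"
kill "$!"              # $! is the PID of the last background job;
                       # interactively you would type: kill %1
wait "$!" 2>/dev/null  # reap the job; wait returns its termination status
echo "background job terminated"
```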

Show Last Exit Status

Scripts and programs in the unix-styled world have a tendency not to send unnecessary (or even necessary) information back to the command line. However, the exit code of a program can tell a lot about why the program did not execute as expected. Being able to check for specific exit codes is also very useful in scripts and automation.

To check the exit code of the last executed program, use "echo $?". Example:

[[email protected] ~]$ ./my_script.sh --option=invalidValue
[[email protected] ~]$ echo $?
4

It's customary in the unix-world that 0 means "program executed successfully", and everything else means that the program failed in some way.
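A quick way to see this convention in action is with the true and false built-ins, which always exit with 0 and 1 respectively:

```shell
#!/bin/sh
true                          # always exits with 0
echo "true  exited with $?"   # prints: true  exited with 0
false                         # always exits with 1
echo "false exited with $?"   # prints: false exited with 1

# In scripts, you typically branch on the exit code directly:
if grep -q '^root:' /etc/passwd; then
    echo "root entry found"
fi
```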

Go to Last Directory

You know what it's like. You're deep into the file-system, in some obscure configuration directory of an application, and just have to go to your home directory to check a config file. Then you need to find your way back, but was it /opt/app/version/config or /var/app/version/config, or....? 

Just use the command "cd -" to go to the directory you were in last!

[[email protected] tmp]$ cd -
/var/www/demoapp/html/api
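For example (the directories below are just placeholders):

```shell
cd /tmp        # go somewhere...
cd /usr        # ...and then somewhere else
cd -           # back to the previous directory; prints it: /tmp
pwd            # prints: /tmp
```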

List files of a Certain Size

4/21/2015

Just a short tech-tip today. I needed to list all the files with the size 64 bytes in a directory, because a specific GPG-encrypted text string always ended up at exactly this size.

[[email protected] ~]$ find . -size 64c -ls
  3018    4 -r--r-----   1 johan    johan         64 Apr 21  2015 20150421-data.gpg

Very handy, and a new feature of find, for me.
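If you want to try it yourself, you can generate test files of known sizes with head (the file names below are made up for the demo):

```shell
mkdir -p /tmp/size-demo && cd /tmp/size-demo
head -c 64  /dev/zero > exactly64.bin   # create a 64-byte file
head -c 100 /dev/zero > other.bin       # create a 100-byte file
find . -size 64c -type f                # 'c' means bytes: match exactly 64
# prints: ./exactly64.bin
```

With GNU find, the size argument also takes + and - prefixes for "larger/smaller than", e.g. "find . -size +1M".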


Installing Atlassian Stash on CentOS

4/16/2015

If you, just like me, haven't taken the leap from RHEL/CentOS 6 to version 7 yet, you're probably stuck on git version 1.7.1. This version does not work very well with Atlassian Stash, my favorite GUI wrapper for git.

On this version you will get this error message when installing Stash:
Unsupported Git version found [[Ljava.lang.String;@1c1f3cd3]. Please upgrade
Git to a supported version before installing Stash.

Start by checking which version of git you're currently on (if any):
[[email protected] atlassian]# yum list installed | grep -i '^git'
git.x86_64           1.7.1-3.el6_4.1    @base

Not good. Remove this package:
[[email protected] atlassian]# yum remove git

Download and install a suitable rpmforge yum RPM:
[[email protected] atlassian]# wget http://pkgs.repoforge.org/rpmforge-release/rpmforge-release-0.5.3-1.el6.rf.x86_64.rpm
[[email protected] atlassian]# yum install rpmforge-release-0.5.3-1.el6.rf.x86_64.rpm

Check what version is available in the new repos:
[[email protected] atlassian]# yum --disablerepo=base,updates --enablerepo=rpmforge-extras info git

Installed Packages
Name        : git
Arch        : x86_64
Version     : 1.7.1

Available Packages
Name        : git
Arch        : x86_64
Version     : 1.7.12.4

Install the new version:
[[email protected] atlassian]# yum --disablerepo=base,updates --enablerepo=rpmforge-extras install git

If you get the error
Error: Package: subversion-1.7.4-0.1.el6.rfx.x86_64 (rpmforge-extras)
           Requires: libneon.so.27()(64bit)

Find out which package is needed and install it:
[[email protected] atlassian]# yum whatprovides "*/neon"
[[email protected] atlassian]# yum install neon-devel-0.29.3-3.el6_4.x86_64

After installation, check the version of git:
[[email protected] atlassian]# git --version
git version 1.7.12.4

Restart the Stash installation:
[[email protected] atlassian]# ./atlassian-stash-3.8.0-x64.bin

You shouldn't get any error notification regarding the version now!

    Author

    Hi, I'm Johan, I've been working as a consultant and entrepreneur in the IT-sector since 1999.

    I blog about ideas, tricks and tech tips from my daily work life as solution architect.


(cc) Araneo 1999-2016
