Mar 28 2012

Guest Blogger: Peter HJ van Eijk, Cloud Computing Master - Hello Again

To have Cloud Scalability, or not to have Cloud Scalability?

That is the question.


The premise of the cloud is that it gives us scalable resources. As performance and load testers, this should have our attention. If the cloud were infinitely scalable, load testing would not make much sense: we would never hit its limits, no matter how many resources we threw at it (within the limits imposed by our credit cards, of course). So, as load testers, we are looking for the places where this illusion of infinite resources breaks down.
In my previous post about Systematic Load Testing, I emphasized that we should test from hypotheses about where things will break. In this post I will show you a number of the 'weak points' where things might actually break in practice.

It is important to go back to the number one reason why we have cloud: scalability.

As was pointed out in a panel session that I attended at CMG 11, the most important thing you should be able to do in a cloud application is scale back. Otherwise you would not be saving money. We will return to how this breaks things later.

So, how can applications scale?

Take for example the back end of a website written in Java. One way to scale it is to run multiple threads across multiple cores within a single Java Virtual Machine (JVM). If the application is thread-safe, each core can run a different thread. The obvious limit is the number of physical cores on the processor that is running the JVM. The less obvious limit is the amount of memory available to that JVM.

But, there are two problems with this.

Most programs don't speed up much when they become multithreaded, because there is always a part of the code that has to run sequentially (think database locking). Furthermore, taking physical memory away from a running Java program is not simple to do. So the desired scaling down does not work.
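This limit on multithreaded speedup is captured by Amdahl's law: if a fraction s of the work is inherently sequential, no number of cores can push the speedup past 1/s. A quick sketch (the 10% sequential fraction is just an assumed example, not a measurement):

```java
public class Amdahl {
    // Speedup on n cores when a fraction 'sequential' of the work
    // cannot be parallelized (Amdahl's law).
    static double speedup(double sequential, int cores) {
        return 1.0 / (sequential + (1.0 - sequential) / cores);
    }

    public static void main(String[] args) {
        double s = 0.10; // assume 10% of the code runs sequentially (e.g. database locking)
        for (int cores : new int[]{1, 2, 4, 8, 16, 64}) {
            System.out.printf("%2d cores -> %.2fx speedup%n", cores, speedup(s, cores));
        }
        // However many cores we add, the speedup can never exceed 1/s = 10x.
    }
}
```

With even a modest sequential fraction, throwing more cores at a single JVM quickly stops paying off.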

We can also scale the number of JVMs, which lets us add more processors. A load balancer spreads the web traffic (the workload) over those processors. But now we have probably introduced the need for some synchronization between the various JVMs: there is data they need to share, such as a common database. In practice, keeping that data consistent across independent JVMs is an additional processing burden that does not always scale linearly with demand.
Then, if we are lucky, we can actually reduce the number of JVMs when the load drops.
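The spreading step can be sketched as a simple round-robin load balancer cycling requests over the JVM instances (the backend names here are made up for illustration):

```java
import java.util.List;
import java.util.concurrent.atomic.AtomicLong;

// Minimal round-robin load balancer: each incoming request
// is handed to the next JVM in the list, in turn.
public class RoundRobinBalancer {
    private final List<String> backends;
    private final AtomicLong counter = new AtomicLong();

    public RoundRobinBalancer(List<String> backends) {
        this.backends = backends;
    }

    public String pick() {
        // AtomicLong keeps the rotation safe under concurrent requests.
        int index = (int) (counter.getAndIncrement() % backends.size());
        return backends.get(index);
    }
}
```

Note what the balancer does not do: it knows nothing about shared state, so the synchronization burden between JVMs is still entirely the application's problem.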

But wait, there's more.

If web sessions have their own state (such as 'what's in my shopping cart'), then that state lives on one of those JVMs. You cannot just turn a JVM off, because if you do you will break every session it holds.
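A common way to make this work at all is session affinity ('sticky sessions'): route every request carrying the same session ID back to the JVM that holds its state. A minimal sketch, assuming a simple hash-based routing scheme (the names are illustrative):

```java
import java.util.List;

// Sticky routing: the same session ID always maps to the same backend JVM,
// so in-memory session state (like a shopping cart) stays reachable.
public class StickyRouter {
    private final List<String> backends;

    public StickyRouter(List<String> backends) {
        this.backends = backends;
    }

    public String route(String sessionId) {
        // Mask the sign bit rather than using Math.abs, which can
        // overflow for Integer.MIN_VALUE.
        int bucket = (sessionId.hashCode() & 0x7fffffff) % backends.size();
        return backends.get(bucket);
    }
}
```

The catch remains: if autoscaling removes a backend, every session pinned to it breaks, which is exactly the failure mode a scale-down test should expose.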

These weak points give us a number of specific hypotheses about what can break when load testing in the cloud. They lead to a simple scenario to test: ramp up the load until the autoscaling kicks in, then ramp down, but keep some sessions alive, then ramp up again. And even if nothing breaks at a functional level, there can be the issue that resources are not returned, so that running the application is more expensive than it should be.
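That scenario can be written down as a staged load profile of (minute, concurrent users) pairs. The peak, floor, and step values below are assumed examples, not recommendations:

```java
import java.util.ArrayList;
import java.util.List;

// Ramp up past the autoscaling threshold, ramp down while keeping
// some sessions alive, then ramp up again to see what broke.
public class RampProfile {
    public static List<int[]> build(int peakUsers, int floorUsers, int stepMinutes) {
        List<int[]> profile = new ArrayList<>();
        int step = peakUsers / stepMinutes;
        int minute = 0;
        // Phase 1: ramp up to the peak to trigger autoscaling.
        for (int u = 0; u <= peakUsers; u += step) {
            profile.add(new int[]{minute++, u});
        }
        // Phase 2: ramp down, but never below floorUsers live sessions.
        for (int u = peakUsers; u >= floorUsers; u -= step) {
            profile.add(new int[]{minute++, u});
        }
        // Phase 3: ramp up again to check that scale-down returned resources
        // without breaking the surviving sessions.
        for (int u = floorUsers; u <= peakUsers; u += step) {
            profile.add(new int[]{minute++, u});
        }
        return profile;
    }
}
```

Feeding a profile like this to your load tool, while watching both session errors and the cloud bill, tests the scale-down hypothesis directly.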

Note to the reader: some of the issues reported here have been discussed at the Application Performance Management workshop at the recent ceCMG conference.

For more developments in cloud computing follow my blog at http://blog.clubcloudcomputing.com.

Peter HJ van Eijk is a trainer, writer, consultant and speaker on Cloud Computing and other digital infrastructures, based in the Netherlands. He is master trainer for the Cloud Essentials course (www.cloudessentials.net) and a Cloud Credential Council Certified Trainer.
