Nov 20 2014

Web-Scale App Developers Need to Think About Scale and Load: 5 Guidelines, Part 2

In my previous blog post, I covered three important ways to ensure web scalability: proper memory allocation, controlling log sprawl, and managing open user sessions. In this post, I will continue my exploration of web scalability and focus on what to do if your database can't hold all of your application's request connections, as well as how to handle a high volume of HTTP requests per second or minute.

 

So, without further ado, here are the two additional guidelines you should follow to enable and maintain efficient web-scalability:

 

1. Database Connections

 

Let's say you run a load test on your application. If the application opens a separate database connection for each request it handles, at some point you may find that the database cannot hold all of these connections at once.

 

While this might sound like an IT issue, in most situations it cannot be solved without changing the application. Every database has a limited number of connections it can hold, and sometimes that number is simply lower than what your application needs. Of course, as a developer, you should close any connections you no longer need and try to stay within your database's limits.
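As a rough illustration of staying within those limits, here is a minimal sketch, assuming a Node.js service using the node-postgres (`pg`) library; the pool size, timeout, and table name are illustrative assumptions, not values from this post:

```typescript
// Minimal sketch: cap the application's connection usage below the database's
// own limit and always return connections when they are no longer needed.
import { Pool } from "pg";

// Keep the pool ceiling under the DB's max_connections setting (value assumed).
const pool = new Pool({ max: 20, idleTimeoutMillis: 10_000 });

async function countOrders(): Promise<number> {
  const client = await pool.connect(); // borrow a connection from the pool
  try {
    const { rows } = await client.query("SELECT COUNT(*) AS n FROM orders");
    return Number(rows[0].n);
  } finally {
    client.release(); // give the connection back, even if the query failed
  }
}
```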

 

So, what's the solution?


First, try to combine several requests into one query that does all of the work. This is a good idea in itself, but where database connections are concerned it can cut connection usage dramatically, sometimes by tens of percentage points, which can be crucial to application performance. It also shortens the entire process: when response times are low, users get a better experience and the server runs fewer simultaneous processes.
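For example, here is a hedged sketch of that idea in a Node.js/TypeScript service (again using `pg`; the `profiles` table and query shapes are illustrative): instead of issuing one query per item, a single query answers them all.

```typescript
import { Pool } from "pg"; // connection settings assumed to come from the environment

const pool = new Pool();

// Naive version: one query (and potentially one connection) per user id.
async function getProfilesOneByOne(ids: number[]) {
  const results = await Promise.all(
    ids.map((id) => pool.query("SELECT * FROM profiles WHERE id = $1", [id]))
  );
  return results.map((r) => r.rows[0]);
}

// Batched version: a single round trip fetches every profile at once.
async function getProfilesBatched(ids: number[]) {
  const { rows } = await pool.query(
    "SELECT * FROM profiles WHERE id = ANY($1::int[])",
    [ids]
  );
  return rows;
}
```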

 

Additionally, in some applications, instead of letting the database hold the queue of requests and work through them one by one, you can hold the queue in the application. Rather than forwarding every request straight to the database, the application sends only a limited number of requests at a time; when one request is fulfilled, the next one is sent. Depending on the size of your IT environment (compute, network and storage), you can make sure the database never has more than a certain number of requests to process at once. While this is not a very common solution, it is possible and might be the only one in some cases.
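A minimal sketch of such an application-side queue might look like the following (the cap of 20 in-flight requests and the helper names are illustrative assumptions):

```typescript
// The database never sees more than MAX_IN_FLIGHT requests at once;
// everything else waits in the application's own queue.
const MAX_IN_FLIGHT = 20;
let inFlight = 0;
const waiting: Array<() => void> = [];

async function withDbSlot<T>(runQuery: () => Promise<T>): Promise<T> {
  while (inFlight >= MAX_IN_FLIGHT) {
    // Park this request until a slot frees up.
    await new Promise<void>((resolve) => waiting.push(resolve));
  }
  inFlight++;
  try {
    return await runQuery();
  } finally {
    inFlight--;
    waiting.shift()?.(); // wake the next queued request, if any
  }
}

// Usage (runQuery is any function that talks to the database):
//   const user = await withDbSlot(() => pool.query("SELECT ...", [id]));
```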

 

Learn the Real Secret to Building a Database Test Plan With JMeter

 

2. Number of HTTP Requests per Minute

 

We often want to give users information that is constantly changing. A news website, for example, uses HTTP requests to refresh its data on a regular basis: if an event occurs, you want your users to know about it as soon as you do. However, having every client send an HTTP request every 10 seconds may be more than your server can handle. You therefore need to decide how many requests you actually need, and at what frequency. If that frequency really is required, you will have to scale your resources, adding more servers or possibly a stronger database server, and you could even keep some of the data in memory to preserve a positive user experience.

 

In other cases, a trade-off may be preferable. If you don't really need the information every 10 seconds, maybe once every 30 seconds will suffice, which can dramatically reduce the server's resource usage and load. And if you think a feature has the potential to become very popular, consider giving it special treatment, such as load testing it specifically with tens or even hundreds of thousands of concurrent users, not just 10% of your total user base.
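To make the arithmetic concrete, here is a quick hedged sketch of client-side polling (the endpoint and numbers are illustrative): with 100,000 active users, a 10-second interval averages roughly 10,000 requests per second, while a 30-second interval averages about 3,300, a threefold reduction for the same feature.

```typescript
// Poll a (hypothetical) headlines endpoint at a configurable interval.
const POLL_INTERVAL_MS = 30_000; // relaxed from 10_000; ~3x fewer requests

async function pollHeadlines(): Promise<void> {
  const res = await fetch("/api/headlines"); // endpoint name is assumed
  if (res.ok) {
    render(await res.json());
  }
}

function render(headlines: unknown): void {
  console.log(headlines); // placeholder for real rendering logic
}

pollHeadlines(); // fetch once immediately
setInterval(pollHeadlines, POLL_INTERVAL_MS);
```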

 

Another popular option is to use WebSockets. They support full-duplex communication, keeping an open connection between the browser and the server so that information can be pushed to clients immediately, even multiple events per second. For example, if your application needs to receive notifications several times per second (e.g. Facebook chat and notifications), you can't poll with a request 10 times per second; that would kill the application. In such cases, WebSockets are a very good solution. While WebSockets are only supported by modern browsers, some libraries, like Socket.IO, fall back to other transports and work with older browsers as well. Consider all of these options and decide which solution best fits your company's needs and available resources.
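For illustration, here is a minimal Socket.IO sketch (the port, CORS setting and "news:update" event name are assumptions, not details from this post): the server pushes each new item to every open connection instead of waiting for clients to poll.

```typescript
import { Server } from "socket.io";

// Socket.IO creates its own HTTP server on port 3000 here (port assumed).
const io = new Server(3000, { cors: { origin: "*" } });

io.on("connection", (socket) => {
  console.log(`client connected: ${socket.id}`);
});

// Call this whenever new data arrives; one emit fans out to all clients.
function broadcastHeadline(headline: { title: string; url: string }): void {
  io.emit("news:update", headline);
}

// Client side (socket.io-client falls back gracefully on older browsers):
//   import { io } from "socket.io-client";
//   const socket = io("http://localhost:3000");
//   socket.on("news:update", (headline) => console.log(headline));
```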

 

Learn How to Run JMeter Tests for 500,000+ Users

 

All in all, you need to align your user base and forecasted demand with your application's capacity and behavior. To maintain efficiency and reduce the risks associated with rapid growth, take the guidelines above into consideration and develop your platform with load and scale in mind.

 


BlazeMeter is one solution for all of your performance testing needs. Now you can enjoy more agility in your performance testing. Use the same scripts to rapidly run load and performance tests on demand, and at any scale. Our dynamic platform meets all your testing needs at every stage of the lifecycle, from development through to operations. Check out BlazeMeter's features or enterprise offering.

 
