Challenges to Mobile App Performance Testing
February 8, 2022


Mobile devices are a vital part of modern business and daily life. In its 2021 Connectivity and Mobile Trends survey, Deloitte found that mobile application usage grew significantly during 2020 as consumers turned to apps to access products and services during the COVID-19 pandemic. The findings also indicate that this increased usage is likely to continue for the majority of users, even after the pandemic. This is a major reason why mobile app performance testing is so vital.

The accessibility of handheld devices, as opposed to larger computers or even laptops, allows users instantaneous access to information, products, and services. Mobile applications provide a gateway where users and service providers meet. Given the fierce competition in the mobile application industry, the importance of a good user experience cannot be overstated. 

Mobile application testing is crucial to producing a satisfying end-user experience and to ensuring the success of your mobile application. This article will explore the challenges that may arise in performing mobile application performance testing, as well as key performance indicators used to measure and evaluate application behavior.


What is Mobile App Performance Testing?

Mobile app performance testing is the process of testing a mobile application to identify issues and bottlenecks when compared to the team's key performance indicators (KPIs).

One commonly cited statistic holds that app crashes are responsible for 71% of uninstalls. In such a competitive industry, mobile application uninstalls and end-user frustration from poor app performance can harm the reputation of the company or provider, and ultimately affect their online presence.

During performance testing, the tester is interested in determining the responsiveness of the mobile application and the effect of the application on the mobile device. Mobile performance testing is conducted on both the server-side and client-side. This article will focus on the client-side of performance testing, which involves how the application runs on the mobile device, as well as the resulting performance due to network connectivity.  

Mobile applications come in the following types: 

  • Web-Based Applications 
    • Mobile web-based applications load information from the network to the mobile browser. 
    • These applications do not require installation on the device, but rely on mobile browsers to access data from a network.
  • Native Applications
    • Applications that require download from an application store such as the Apple App Store or Google Play
    • The application is installed directly on the device and is able to store information locally.
    • Native applications have heavier hardware and system requirements than web-based applications.
  • Hybrid Applications
    • Hybrid applications can run in mobile web browsers and can also be installed on the mobile device through an application store.

Proper mobile application performance testing requires the application to be tested across each application type in which it is delivered, as each type interacts with both the server and the device differently.

Additionally, applications impose varying usage requirements depending on device specifications. Testing must take mobile device consumer trends into account to ensure a sufficiently representative sample of devices for performance testing.


Mobile App Testing: Key Performance Indicators

In the process of verifying the performance of an application, the tester must define benchmarks that are used to evaluate the application, otherwise known as key performance indicators (KPIs). While a variety of conditions are considered for testing mobile applications, in general, the following key metrics are used to measure the performance of the application: 

Latency/Response Time

The time between a user sending a request and the application's response is latency or response time, measured in seconds. For example, if a user finalizes an in-app purchase, the response time is the time from when the user confirms their payment, through the request being sent and processed, until a confirmation is returned to their device.

Response time increases above a certain threshold of concurrent users; depending on how severe the increase is, it may or may not be an issue you want to address. That said, response time is one of the most critical metrics to test, as a slow response time creates a negative user experience that drives users to competitors. Make sure the response time for your app is no more than two to three seconds.
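As a concrete illustration, the short Python sketch below times repeated requests against a placeholder endpoint and reports the average and an approximate 95th-percentile response time. The URL, payload, and sample count are illustrative assumptions, not references to any particular tool:

```python
import statistics
import time

import requests  # third-party HTTP client: pip install requests

# Hypothetical endpoint standing in for the app's backend API.
URL = "https://api.example.com/checkout/confirm"
SAMPLES = 50

latencies = []
for _ in range(SAMPLES):
    start = time.perf_counter()
    requests.post(URL, json={"order_id": "test-123"}, timeout=10)
    latencies.append(time.perf_counter() - start)

latencies.sort()
p95 = latencies[int(len(latencies) * 0.95) - 1]  # approximate 95th percentile
print(f"avg: {statistics.mean(latencies):.2f}s  p95: {p95:.2f}s")

# Flag results against the two-to-three-second guideline discussed above.
if p95 > 3.0:
    print("WARNING: p95 response time exceeds 3 seconds")
```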

Load Speed

Load speed is the time in seconds taken for the application to fully initialize and load on the client user interface, and it should be tracked under the following conditions:

  • Expected usage - Performance testing needs to simulate the actual conditions the application faces in real time. As a baseline, the load speed should be verified at the expected number of users or requests for the application. 
  • Maximum concurrent users - The load speed of the application at the maximum number of users accessing the application at the same time, or at the maximum number of requests. Note that concurrent users are not simultaneously accessing the same information, but are accessing different facets of the application. 
  • Critical conditions - Load speed must also be tracked under conditions where the application is expected to hit its peak number of simultaneous requests. Testing for critical conditions is similar to stress testing, where the application is pushed to the limits of its capacity. 

To ensure the best user experience, make sure your page load times are as fast as possible. Google found that over half of mobile site visits are abandoned when a page takes longer than three seconds to load.
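To make the three conditions above concrete, here is a hedged Python sketch that replays the same simulated app-load measurement at expected, maximum, and critical user counts. The endpoint and user numbers are placeholders, and a dedicated load-testing tool would manage this concurrency far more efficiently than raw threads:

```python
import time
from concurrent.futures import ThreadPoolExecutor

import requests

URL = "https://app.example.com/startup-bundle"  # hypothetical initial-content request

def load_once() -> float:
    """Time one simulated app load (the initial content fetch)."""
    start = time.perf_counter()
    requests.get(URL, timeout=30)
    return time.perf_counter() - start

# Illustrative user counts for the three conditions described above.
scenarios = {"expected_usage": 100, "max_concurrent": 500, "critical": 1000}

for name, users in scenarios.items():
    with ThreadPoolExecutor(max_workers=users) as pool:
        times = list(pool.map(lambda _: load_once(), range(users)))
    print(f"{name}: mean {sum(times) / len(times):.2f}s, worst {max(times):.2f}s")
```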

Screen Rendering

Screen rendering time, sometimes listed as page ready time, is the time it takes for the application to load content into the interface and become usable. This is a frontend measurement spanning from when a user's browser first begins downloading content received from a server to when all elements on the page are not only visible but also interactive.

For example, if you load a resource-heavy recipe blog with many high-resolution images, it may not take long for initial content to load such as the banner, title, and text. However, once you begin scrolling, it may appear “sticky”, with the page movement not matching your hand gestures as images, ads, and videos continue to load. Screen rendering is the time it takes for all elements to be fully interactive.

One source cites an ideal screen rendering time of one to three seconds, depending on the size of the app.

Throughput

Throughput describes the number of transactions or requests an application can handle comfortably without holding requests in a queue. It is expressed in transactions per second (TPS) and can be verified during performance testing.

If the number of requests exceeds an application's sustainable TPS, then end-users may be left waiting for the application to respond.

Tracking throughput is useful for indicating which aspects of an application lead to bottlenecks. For instance, in one example cited by TestGuild, transactions were being queued normally on the web-server side (yellow), yet throughput in bytes per second plummeted (see the buckle point in the chart below). This drop in throughput corresponded with an increase in overall transaction response time; by evaluating each element of the request individually, the bottleneck was traced to the database side (red and blue), which could then be targeted independently.

[Figure: Bad throughput chart with HP Diagnostics. Source: Test Guild]
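To establish that TPS figure in the first place, a sketch along these lines can saturate a placeholder endpoint for a fixed window and count completed transactions. The URL, window length, and worker count are all assumptions for illustration:

```python
import time
from concurrent.futures import ThreadPoolExecutor

import requests

URL = "https://api.example.com/search"  # placeholder endpoint
DURATION = 30                           # measurement window, seconds
WORKERS = 50                            # concurrent request loops

def worker(deadline: float) -> int:
    """Issue requests back-to-back until the deadline; return the count completed."""
    completed = 0
    while time.perf_counter() < deadline:
        requests.get(URL, timeout=10)
        completed += 1
    return completed

deadline = time.perf_counter() + DURATION
with ThreadPoolExecutor(max_workers=WORKERS) as pool:
    counts = pool.map(worker, [deadline] * WORKERS)

print(f"throughput: {sum(counts) / DURATION:.1f} transactions per second")
```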

Error Rate

The error rate can be measured as the percentage of requests that ended in errors relative to the total number of requests transmitted, or simply as the number of errors measured per second.

The error rate is an essential metric for monitoring performance. While error handling measures can be used to mitigate the effect of certain errors, tracking an application’s error rates can indicate areas for improvement that affect overall performance and consequently user satisfaction. After all, slow or inadequate performance can easily cause users to uninstall an app.
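As a minimal sketch of the calculation, the following Python snippet counts both HTTP error responses and transport-level failures against a placeholder endpoint; the URL and request total are assumptions:

```python
import requests

URL = "https://api.example.com/orders"  # placeholder endpoint
TOTAL = 200

errors = 0
for _ in range(TOTAL):
    try:
        response = requests.get(URL, timeout=10)
        if response.status_code >= 400:   # HTTP error responses (4xx/5xx)
            errors += 1
    except requests.RequestException:     # timeouts and connection failures
        errors += 1

print(f"error rate: {errors / TOTAL:.1%} ({errors}/{TOTAL} requests)")
```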

App Crashes

Another important KPI is the rate at which the application crashes per app load. Applications are expected to crash under certain conditions; however, a high crash rate can significantly degrade the user experience and lead to uninstallation.

In general, users will tolerate a crash rate of around 1-2%. Performance testing should monitor the app crash rate to predict the user experience and improve the application further.
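The calculation itself is simple; the sketch below uses made-up counts standing in for figures from a crash-reporting dashboard:

```python
# Hypothetical counts, as pulled from a crash-reporting dashboard.
app_loads = 120_000  # total app launches in the measurement window
crashes = 1_450      # crashes recorded in the same window

crash_rate = crashes / app_loads
print(f"crash rate: {crash_rate:.2%}")  # -> crash rate: 1.21%
if crash_rate > 0.02:
    print("WARNING: crash rate above the commonly tolerated 2% ceiling")
```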

Device Performance

Assessing device memory usage, CPU usage, and battery life together while the application runs is important for evaluating device performance. This aspect of client-side performance testing can be challenging, as there is a wide range of device capabilities. That said, high CPU usage combined with rapid battery drain, for example, could indicate an app that places excessive demands on the device. Ultimately, excessive CPU usage that slows the device, or excessive battery consumption, can negatively impact the app user's experience and lead to uninstallation.
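On Android, for instance, device-side metrics can be sampled from a connected device or emulator with adb. The sketch below polls dumpsys for the battery level and a per-process CPU snapshot; the package name is a hypothetical placeholder:

```python
import subprocess
import time

PACKAGE = "com.example.myapp"  # hypothetical package under test

def adb_shell(*args: str) -> str:
    """Run an adb shell command against the connected device and return its output."""
    return subprocess.run(["adb", "shell", *args],
                          capture_output=True, text=True, check=True).stdout

# Sample battery level and CPU usage every 30 seconds while the app runs.
for _ in range(10):
    battery = adb_shell("dumpsys", "battery")  # output includes a "level: NN" line
    cpu = adb_shell("dumpsys", "cpuinfo")      # per-process CPU usage snapshot
    level = next((line.strip() for line in battery.splitlines() if "level" in line),
                 "level: ?")
    app_cpu = next((line.strip() for line in cpu.splitlines() if PACKAGE in line),
                   "app not in snapshot")
    print(level, "|", app_cpu)
    time.sleep(30)
```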

The above KPIs are a short list of the most commonly used metrics for evaluating application performance. Additional metrics can be tracked and recorded to evaluate specific aspects of an application's performance.


Requirements for Mobile App Performance Testing

For device performance testing to be valuable and successful for a specific mobile application, testers should consider the following starting requirements:

  1. Determine the scope of devices for your testing: Depending on the mobile app’s intended audience, testers can determine what kind of tool capabilities are required as well as if the testing protocols can be simplified. For example, an app being released only for Android devices does not need to be tested on other operating systems, giving testers more information on their tool requirements and allowing them to accelerate the testing process. 
  2. Determine which specific app functions need to be evaluated: Depending on the specific functions of the mobile app, this could include evaluating app-startup times, battery and CPU usage information, operation and retrieval from the background, and more.
  3. Select a testing tool: Based on the scope of testing established above, testers will be able to determine which tool best suits their needs. This includes matching the requirements for mobile operating systems and application types with a tool that can accommodate them, as well as accounting for other preferences, such as the ability to perform cloud-based testing. More information on tool selection for mobile performance testing will be covered in an upcoming article.

While these requirements can be considered similar across a range of testing scenarios, there are several challenges that are unique to the performance testing of applications on mobile devices.


Challenges in Conducting Mobile App Performance Testing

Mobile application testing must consider the overall end-user experience, so testing must closely resemble the conditions the user might face. Testing needs to account for an application's performance across each application type, across the range of mobile devices available on the market, and under varying network connectivity. These considerations add to the complexity of mobile app performance testing.

1. Range of Mobile Devices

Mobile devices come with a variety of hardware specifications (RAM, CPU, and other components), software versions, operating systems (iOS, Android, Windows, etc.), and screen resolutions. The performance of the application must be checked across a variety of mobile devices so that, for example, the app performs consistently for an Android user and an iPhone user alike.

With varying devices come different screen sizes and resolutions. To load an app successfully on a mobile device, performance testing needs to verify that the application adapts to multiple screen sizes. An obvious example is the iOS ecosystem, where iPhones come in varying sizes with each iteration. The application must load consistently across all sizes without sacrificing usability, graphics quality, or other aspects of visual performance.

However, performance testing on real devices can be lengthy and costly. As an alternative, the tester may specify minimum hardware requirements for the application, limiting the number of mobile devices that must be tested.
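One lightweight way to bound the matrix is to encode candidate devices and filter them against the minimum requirements. The device specs and the minimum bar below are illustrative assumptions:

```python
# Illustrative device matrix; specs and the minimum bar are assumptions.
devices = [
    {"name": "Pixel 6",        "os": "Android 13", "ram_gb": 8, "width_px": 1080},
    {"name": "Galaxy A12",     "os": "Android 11", "ram_gb": 3, "width_px": 720},
    {"name": "iPhone 13",      "os": "iOS 16",     "ram_gb": 4, "width_px": 1170},
    {"name": "iPhone SE 2020", "os": "iOS 15",     "ram_gb": 3, "width_px": 750},
]

MIN_RAM_GB = 3  # example minimum hardware requirement bounding the matrix

for device in (d for d in devices if d["ram_gb"] >= MIN_RAM_GB):
    print(f"test on {device['name']} ({device['os']}, {device['width_px']}px wide)")
```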

2. Testing Different Application Types

Another consideration specific to mobile devices is testing performance across the various application types. Native and web-based mobile applications must be tested independently: native applications run on a platform installed directly on the device, which behaves differently from mobile browser-based applications.

For browser-based applications, different mobile browsers must also be considered when testing performance, since performance relies on the server and the network connection. Where native applications store information directly on the device, browser-based applications depend on connectivity.

Each application type also behaves differently when multiple applications are running in parallel on the device. As a result, different client-server response times, device usage, and overall performance will need to be tested.

3. Addressing Different Networks and Connectivities

The mobility of a handheld device allows for quick access to information; however, network conditions can vary in terms of service provider, speed (2G, 3G, 4G, LTE), bandwidth, and stability. As such, the mobile application must be tested under various network conditions to determine the resulting load and response times.

As an additional consideration, mobile devices may run some applications with an intermittent connection, or even offline, especially during transit. The stability of the network connection affects client-server communication and, as a result, impacts data transmission and the overall performance of an application. Overall, applications must be tested under varied network conditions to verify that the latency the application experiences is acceptable.
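On the Android emulator, network profiles can be switched from the console while a measurement runs. The hedged sketch below cold-starts a hypothetical app under several profiles and reads the launch time reported by `am start -W`; it assumes a single running emulator and a placeholder package name:

```python
import subprocess
import time

PACKAGE = "com.example.myapp"          # hypothetical package under test
ACTIVITY = f"{PACKAGE}/.MainActivity"  # hypothetical launch activity

# Emulator console network profiles, applied via `adb emu` (emulator only).
profiles = ["gsm", "edge", "umts", "lte", "full"]

for profile in profiles:
    subprocess.run(["adb", "emu", "network", "speed", profile], check=True)
    subprocess.run(["adb", "shell", "am", "force-stop", PACKAGE], check=True)
    time.sleep(2)  # give the emulator a moment to apply the profile
    # `am start -W` waits for the launch and prints TotalTime in milliseconds,
    # so this cold start reflects the throttled network profile.
    result = subprocess.run(["adb", "shell", "am", "start", "-W", "-n", ACTIVITY],
                            capture_output=True, text=True, check=True)
    total = next((line.strip() for line in result.stdout.splitlines()
                  if line.startswith("TotalTime")), "TotalTime: ?")
    print(f"{profile}: {total}")
```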


Bottom Line

Mobile application performance testing is useful in ensuring the user experience is consistent and satisfying across the various platforms, devices, and networks used to access the application. With the ever-increasing rise in the popularity of mobile apps for critical industries, commerce applications, and service providers, performance testing is more crucial than ever to ensure the success of an application.
