Performance as a Feature – Addendum
Modern users expect highly responsive applications and websites; therefore, when building new applications, performance should be treated as a feature of the product.
Latency as the enemy of performance
The key enemy of performance is latency, which, in its most basic terms, is the time delay between the initiation of an action and the system's response. That delay is created by the combination of multiple layers of delay found within the application and its infrastructure [1].
- The first is system layer latency. System layer latency consists of delays generated by hardware such as the CPU, RAM, and hard disks, as well as the operating system and other lower-level software like virtualization or anti-virus software.
- The second is application layer latency. This comes from non-system-level software, including the application itself as well as external software dependencies such as databases, ERP applications, and external services.
- The third is infrastructure layer latency. This is primarily the amount of time data spends traversing the network. These delays manifest in server-to-server scenarios as well as between the application and the user.
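To see where that latency accumulates, it helps to measure the layers separately. Below is a minimal sketch, in TypeScript, that compares the round-trip time observed by the client with the processing time the server reports, so the difference roughly attributes delay to the network and other layers. The endpoint URL and the "app" Server-Timing entry name are assumptions about how the server is instrumented.

```typescript
async function measureLatencyBreakdown(url: string): Promise<void> {
  const start = performance.now();
  const response = await fetch(url);
  await response.text();                      // include the time to read the body
  const totalMs = performance.now() - start;  // client-observed round trip

  // Assumes the server reports its own processing time via the standard
  // Server-Timing header, e.g. "Server-Timing: app;dur=123"; the "app" entry
  // name is an assumption for this sketch.
  const serverTiming = response.headers.get("Server-Timing") ?? "";
  const appMs = Number(/app;dur=([\d.]+)/.exec(serverTiming)?.[1] ?? 0);

  console.log(`total round trip:           ${totalMs.toFixed(1)} ms`);
  console.log(`application layer (server): ${appMs.toFixed(1)} ms`);
  console.log(`network and other layers:   ${(totalMs - appMs).toFixed(1)} ms`);
}

// Hypothetical endpoint, used purely for illustration.
measureLatencyBreakdown("https://example.com/api/orders");
```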
Performance’s impact on user behavior
It was originally believed that users could not perceive delays under 200 ms. Studies by both Bing and Google have proven this untrue [2]. These studies determined that a user's behavior is affected by delays as small as 50 ms. They also found that reducing an application's latency from 1 second to 50 ms resulted in much higher user engagement, measured by additional time spent on the site, increased clicks, increased page views, and faster responses from the user. Users would make their next click almost 2 seconds sooner when the page rendered at 50 ms than at 1 second. This ultimately resulted in a 2.8% increase in revenue per user. Conversely, in a study performed by the Aberdeen Group [3], a one-second increase in delay resulted in an 11% drop in page views, a 16% decrease in customer satisfaction, and a 7% loss in conversion rates.
Techniques for improving performance
Knowing that user engagement and satisfaction are highly dependent on a responsive application, what can be done to reduce latency? The first step is to make a concerted effort during the design of the application to incorporate patterns and practices that will aid in building a more responsive application.
One technique is to implement progressive rendering. By quickly returning a response with some baseline visual elements, the application keeps the user engaged and gives a sense of a result even though the primary content has not yet arrived. The remaining content can then be built out in phases.
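As a sketch of this idea, the browser can render lightweight placeholder markup immediately and swap in the real content once it arrives. The "/api/article" endpoint and the "content" element are assumptions for illustration.

```typescript
function renderSkeleton(container: HTMLElement): void {
  // Phase 1: baseline visual elements shown immediately, before any data arrives.
  container.innerHTML = `
    <h1 class="skeleton-title"></h1>
    <p class="skeleton-line"></p>
    <p class="skeleton-line"></p>`;
}

async function renderArticle(container: HTMLElement): Promise<void> {
  renderSkeleton(container);

  // Phase 2: fetch the primary content while the placeholder keeps the user engaged.
  const response = await fetch("/api/article");
  const article: { title: string; body: string } = await response.json();

  // Phase 3: replace the placeholder once the real content arrives.
  container.innerHTML = `
    <h1>${article.title}</h1>
    <p>${article.body}</p>`;
}

renderArticle(document.getElementById("content") as HTMLElement);
```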
Another technique is to reduce the amount of disk I/O the application performs. Disk I/O is one of the most expensive and highest-latency operations that can occur. It happens most often when interacting with a database or the file system, so keeping key data elements in RAM will increase performance. Utilizing caching and in-memory indexing can help reduce the need for disk reads.
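A minimal sketch of this approach is an in-memory cache with a time-to-live placed in front of a slow data source; loadCustomerFromDatabase below is a hypothetical stand-in for any disk-bound query.

```typescript
interface CacheEntry<T> {
  value: T;
  expiresAt: number; // epoch milliseconds
}

class TtlCache<T> {
  private entries = new Map<string, CacheEntry<T>>();

  constructor(private readonly ttlMs: number) {}

  // Return a cached value from RAM when possible; otherwise load it once
  // (paying the disk/database cost) and cache it for subsequent calls.
  async getOrLoad(key: string, load: () => Promise<T>): Promise<T> {
    const hit = this.entries.get(key);
    if (hit && hit.expiresAt > Date.now()) {
      return hit.value; // served from memory, no disk I/O
    }
    const value = await load();
    this.entries.set(key, { value, expiresAt: Date.now() + this.ttlMs });
    return value;
  }
}

// Hypothetical disk-bound lookup; any database or file-system read fits here.
declare function loadCustomerFromDatabase(id: string): Promise<{ name: string }>;

const customerCache = new TtlCache<{ name: string }>(5 * 60 * 1000); // five-minute TTL

async function getCustomer(id: string): Promise<{ name: string }> {
  return customerCache.getOrLoad(id, () => loadCustomerFromDatabase(id));
}
```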
A third option is to utilize higher-performance hardware. Standard spinning disks are much slower than solid state drives, and most data will at some point need to be retrieved from disk. Evaluating the cost-benefit of higher-performance hardware can make a significant impact on the overall latency of an application.
Responsibility for ensuring performance
Lastly, the burden of ensuring the application is responsive falls on the development team, which must commit to monitoring the application's performance as it is being developed. The development team members are the ones who can identify early on where the bottlenecks are. They can then work to identify an alternate solution or a means to mitigate the impact of the issue. Developers should also leverage tools like Mini Profiler or Yahoo's YSlow to aid in identifying issues.
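The sketch below illustrates the kind of lightweight step timing such tools provide; the step helper and the handler functions are illustrative assumptions, not the actual Mini Profiler API.

```typescript
// Illustrative timing helper (not the Mini Profiler API): it wraps a unit of
// work and logs how long that step took.
async function step<T>(label: string, work: () => Promise<T>): Promise<T> {
  const start = performance.now();
  try {
    return await work();
  } finally {
    console.log(`${label}: ${(performance.now() - start).toFixed(1)} ms`);
  }
}

// Hypothetical request handler: wrapping each stage makes it obvious whether
// the database query or the page rendering is the bottleneck.
declare function queryOrders(userId: string): Promise<unknown[]>;
declare function renderOrdersPage(orders: unknown[]): Promise<string>;

async function handleRequest(userId: string): Promise<string> {
  const orders = await step("query orders", () => queryOrders(userId));
  return step("render page", () => renderOrdersPage(orders));
}
```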
It is important for developers to be aware that the performance they witness during development is a best-case scenario. Latency will only increase when additional factors like an external network or bandwidth limitations are introduced.
Ultimately, incorporating performance as a feature into an application requires all aspects of the system to be considered from the inception of a project through delivery. Buy-in is required from the whole project team, as well as project support staff, to ensure that the application can be delivered successfully.
[1] Erik Ottem, Latency Matters, January 24, 2014.
[2] Slides of Google and Bing findings.
[3] Aberdeen Group findings, sourced from Tag-Man, March 14, 2014.