
Solving CPU Bottlenecks in a Mobile-First World


When identifying these bottlenecks, it’s also important to keep the user experience in mind. We have all felt frustrated when a page does not load in a timely manner. Such frustration can lead to rage clicking, repeatedly refreshing the page, or, worse, leaving for a competitor’s site when a business fails to meet user experience expectations. In fact, 53% of mobile users abandon sites that take longer than 3 seconds to load, according to an Akamai Developer blog post by Tammy Everts.

If the user was serious about buying on your site to begin with, abandoning the app or website means that short-term revenue, at the very least, has been lost. There’s even the potential for longer-term brand reputation damage that could impact future buying decisions.

Fixing CPU Bottlenecks

Ridding a digital property of CPU bottlenecks can be quite a challenge. Often, site owners will have to make the difficult decision of whether to optimize their loading strategy or to set performance budgets that minimize the amount of work required of the browser. However, one thing all experts can agree on is that measuring performance via real user monitoring (RUM) and analytics is vital to understanding where the time is actually going.
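For teams looking to gather this kind of field data, the Long Tasks API offers one way to quantify main-thread CPU work from real users. The snippet below is a minimal, illustrative sketch rather than a complete RUM solution: the Long Tasks API is currently limited to Chromium-based browsers, and the /rum-beacon collection endpoint is hypothetical.

```js
// Minimal RUM sketch: accumulate main-thread "long tasks" (more than 50 ms of
// blocking work) and report the total when the page is hidden.
// The /rum-beacon endpoint is hypothetical.
let totalBlockingMs = 0;

if ('PerformanceObserver' in window) {
  const observer = new PerformanceObserver((list) => {
    for (const entry of list.getEntries()) {
      // Each long task blocked the main thread for entry.duration milliseconds.
      totalBlockingMs += entry.duration;
    }
  });
  observer.observe({ type: 'longtask', buffered: true });
}

// Send the measurement when the tab is hidden (navigation, tab close, etc.).
document.addEventListener('visibilitychange', () => {
  if (document.visibilityState === 'hidden') {
    navigator.sendBeacon('/rum-beacon', JSON.stringify({
      url: location.pathname,
      totalBlockingMs: Math.round(totalBlockingMs),
    }));
  }
});
```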

As mentioned, given the increasing reliance on it, JavaScript is one of the primary causes of CPU bottlenecks. To address this, as Osmani points out, companies may also want to explore the following techniques where possible:

Code Splitting

The concept of code splitting is to only deliver the JavaScript that the page requires. For example, if you have functions unique to your homepage and different unique functions for your search page, you would only load those that are necessary for the page you are on. Shared functions would continue to be loaded in a single file.
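As a rough illustration of the idea, the sketch below assumes a hypothetical site whose homepage and search page have their own modules; bundlers such as webpack emit a separate chunk for each dynamic import() call, so each page downloads only its own code plus the shared bundle.

```js
// Code-splitting sketch: shared code ships in the main bundle, while
// page-specific code is loaded on demand via dynamic import().
// Module paths and the data-page attribute are hypothetical.
import { initAnalytics, initNavigation } from './shared.js'; // shared bundle

initAnalytics();
initNavigation();

// Load only the code the current page actually needs.
if (document.body.dataset.page === 'home') {
  import('./homepage.js').then(({ initHomepage }) => initHomepage());
} else if (document.body.dataset.page === 'search') {
  import('./search.js').then(({ initSearch }) => initSearch());
}
```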

As technology has moved forward, sites have started to consider the difference between a first-time visit and a returning user. Using service workers, it is possible to check whether a user already has the large, single JavaScript file in their cache. If they do, it can be served with no new requests to the network. If they don’t, the user can be treated as new, and only the smaller JavaScript files required by the current page are requested so it loads as quickly as possible. The service worker script can then fetch the large, full JavaScript file later, when the browser is not as busy.
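A simplified sketch of this pattern follows; the bundle name, cache name, and file layout are hypothetical, and a real service worker would also need install/activate handlers and cache-versioning logic.

```js
// sw.js -- service worker (sketch)
self.addEventListener('fetch', (event) => {
  const url = new URL(event.request.url);

  // Returning users: serve the single full bundle straight from the cache,
  // with no network request. New users fall through to the network and get
  // only the smaller per-page files referenced by the HTML.
  if (url.pathname.endsWith('/app-full.js')) {
    event.respondWith(
      caches.match(event.request).then((cached) => cached || fetch(event.request))
    );
  }
});

// page.js -- main thread (sketch)
// Once the page has loaded and the browser is idle, warm the cache with the
// full bundle so the next visit can skip the network entirely.
// (requestIdleCallback is not yet available in every browser.)
window.addEventListener('load', () => {
  requestIdleCallback(() => {
    caches.open('js-cache-v1').then((cache) => cache.add('/app-full.js'));
  });
});
```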

Lazy Loading

The concept of lazy loading builds on the theory of only giving the browser what it needs when it needs it. Some JavaScript will not be essential to make the page look ready, so the goal is to get the page into a visually ready state as fast as possible and, only once that is complete, to start bringing in the remaining code.
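A common way to apply this is to fetch a widget’s code only when it is about to be seen. The sketch below uses hypothetical module and element names.

```js
// Lazy-loading sketch: the code behind a below-the-fold widget is not
// downloaded, parsed, or executed until the widget scrolls into view.
const placeholder = document.querySelector('#reviews');

if (placeholder && 'IntersectionObserver' in window) {
  const observer = new IntersectionObserver((entries, obs) => {
    if (entries.some((entry) => entry.isIntersecting)) {
      obs.disconnect();
      // The browser only does this work if and when the widget is needed.
      import('./reviews-widget.js').then(({ renderReviews }) => {
        renderReviews(placeholder);
      });
    }
  });
  observer.observe(placeholder);
}
```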

Deferring JavaScript

The concept of deferring JavaScript is similar to lazy loading, but where lazy loading requires additional JavaScript to make it work, deferring is supported natively by browsers and requires only the simple defer attribute added to your script tag.

With the defer attribute, the script is downloaded in parallel and does not block the parsing of the page, but it is not executed until the HTML has been fully parsed. And, more importantly, deferred scripts preserve the order in which they run.
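In practice, this is a one-line change per script tag; the snippet below is illustrative, with hypothetical file names.

```html
<!-- Defer sketch: both scripts download in parallel while the HTML is parsed,
     run only after parsing finishes, and run in the order listed here.
     Without defer, each script would block parsing while it downloads and runs. -->
<script src="/js/framework.js" defer></script>
<script src="/js/app.js" defer></script>
```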


All of these techniques can reduce the amount of JavaScript being executed and help ensure the browser is busy with the important task of getting the page visually ready first.

In addition to these efforts, loading what is needed, when it is needed, is crucial, as hinted above. If businesses want to improve in this area, there are tools to help, such as the free, open source module bundler webpack (https://webpack.js.org) for compiling JavaScript modules. Businesses can also prune unused code through tree shaking, a process that bundlers such as webpack largely automate when code is written as ES modules.
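To give a sense of what this looks like in practice, the following is a minimal, hypothetical webpack configuration: in production mode, webpack tree-shakes unused exports from ES modules and minifies the output, while splitChunks extracts code shared between the page bundles.

```js
// webpack.config.js -- minimal sketch; entry names and paths are hypothetical.
const path = require('path');

module.exports = {
  mode: 'production',             // enables tree shaking and minification
  entry: {
    home: './src/homepage.js',    // one bundle per page type
    search: './src/search.js',
  },
  output: {
    filename: '[name].[contenthash].js',
    path: path.resolve(__dirname, 'dist'),
  },
  optimization: {
    splitChunks: { chunks: 'all' }, // extract code shared between entries
  },
};
```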

One common problem to keep in mind when addressing these bottlenecks is setting the proper expectations, an important step that is often skipped. For example, if a website’s performance is limited primarily by a CPU bottleneck, then changes aimed at other constraints might not show a benefit: a site weighed down by numerous third-party scripts might not see as much improvement in load times from such changes as a site that is bound by a bandwidth constraint. Properly analyzing the situation and painting an accurate picture of the issue can set the right tone for not only how to solve the problem of CPU bottlenecks but also what to expect in doing so.

What’s Ahead

As consumers become more mobile, more digitally savvy, and more impatient, they also become more demanding, with higher expectations and a lower tolerance for failure or delays.

Time is valuable, and customers are keenly aware that they can quickly and easily switch to a competitor’s service at a moment’s notice. That’s why placing an emphasis on web performance and investing in the resources and processes to remove pesky bottlenecks is a must for organizations looking to boost the customer experience as well as the bottom line.

Web performance decision makers must invest in guidelines that set appropriate performance budgets while prioritizing code splitting, loading only what is needed when it is needed, and, perhaps most importantly of all, understanding how to truly recognize the issue in the first place.
