Keeping It 100, Web Performance
The quality of experience your website delivers is crucial to achieving your business goals and satisfying your visitors.
Adobe Experience Manager (AEM) is optimized to deliver excellent experiences and optimal web performance. Its built-in real-user monitoring (RUM) data collection gathers information about your website and offers a way to iterate on real user performance measurements without having to wait for CrUX data to show the effects of code and deployment changes.
The Google PageSpeed Insights Service has proven to be a good lab measurement tool. It can be used to avoid the slow deterioration of your website's performance and experience score.
If you start your project with the Boilerplate, as in the Developer Tutorial, you will get a very stable Lighthouse score of 100. On every component of the Lighthouse score there is some buffer for the project code to use and still stay within the boundaries of a perfect 100.
Testing Your Pull Requests
It turns out that it is very hard to improve your Lighthouse score once it is low, but it is not hard to keep it at 100 if you test continuously.
When you open a pull request (PR) on a project, the PageSpeed Insights Service is run against the test URLs in the description of your project. The AEM GitHub bot will automatically fail your PR if the score is below 100, with a small buffer to account for some volatility in the results.
The results are for the mobile Lighthouse score, as it tends to be harder to achieve than the desktop score.
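The same check can be reproduced locally against the public PageSpeed Insights v5 API. The sketch below only builds the request URL; the helper name and the page URL are illustrative, not part of any project tooling.

```javascript
// Build a PageSpeed Insights v5 API request URL for a given test URL.
// `psiRequestUrl` is a hypothetical helper; the endpoint and the
// `url`/`strategy` query parameters are part of the public PSI API.
function psiRequestUrl(pageUrl, strategy = 'mobile') {
  const api = 'https://www.googleapis.com/pagespeedonline/v5/runPagespeed';
  const params = new URLSearchParams({ url: pageUrl, strategy });
  return `${api}?${params}`;
}

// Example: request the mobile score for a (hypothetical) preview URL.
const requestUrl = psiRequestUrl('https://main--repo--owner.aem.page/');
```

Fetching that URL returns a JSON response; the overall performance score is found under `lighthouseResult.categories.performance.score` as a value between 0 and 1.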
Why Google PageSpeed Insights?
Many teams and individuals use their own configurations for measuring Lighthouse scores. Different teams have developed their own test harnesses and use their own test tools with configurations that have been set up as part of their continuous monitoring and performance reporting practices.
The performance of a website impacts its ranking in search results, which is reflected by the Core Web Vitals in the CrUX report. Google has a great handle on the relevant average combinations of device information (e.g. screen size) and the network performance of those devices. But in the end, SEO is the ultimate arbiter of what good vs. bad web performance is. As the specific configuration is a moving target, performance practices should be aligned with the current average device and network characteristics globally.
So instead of using a project specific configuration for Lighthouse testing, we use the continuously-updated configurations seen as part of the mobile and desktop strategies referenced in the latest versions of the Google PageSpeed Insights API.
While some users feel they can gather additional insight from other ways of measuring Lighthouse scores, a meaningful and comparable performance conversation across projects requires a universal way to measure performance. The default PageSpeed Insights Service is the most authoritative and most widely accepted lab test when it comes to measuring your performance.
However, it is important to remember that the recommendations you get from PageSpeed Insights do not necessarily lead to better results, especially the closer you get to a Lighthouse score of 100.
Core Web Vitals (CWV) collected by the built-in RUM data collection play an important role in validating results. For minor changes, however, the variance of the results and the lack of sufficient data points (traffic) over a short period of time make it impractical to get statistically relevant results in most cases.
Three-Phase Loading (E-L-D)
Dissecting the payload of a web page into three phases makes it relatively straightforward to achieve a clean Lighthouse score and thereby set a baseline for a great customer experience.
The three-phase loading approach divides the payload and execution of the page into three phases:
- Phase E (Eager): This contains everything that's needed to get to the largest contentful paint (LCP).
- Phase L (Lazy): This contains everything that is controlled by the project and largely served from the same origin.
- Phase D (Delayed): This contains everything else, such as third-party tags or assets that are not material to the experience.
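As a rough sketch of how the three phases can be sequenced, the snippet below mirrors the `loadEager`/`loadLazy`/`loadDelayed` split used in the AEM boilerplate's `scripts.js`; the phase bodies here are placeholders that only record the order, not the actual boilerplate code.

```javascript
// Order in which the three phases run; the real boilerplate awaits DOM work
// in each phase, while these placeholders only record the sequence.
const loadOrder = [];

function loadEager() {
  // Phase E: decorate the first section and load whatever the LCP needs.
  loadOrder.push('eager');
}

function loadLazy() {
  // Phase L: load the remaining sections, header, footer, and lazy styles.
  loadOrder.push('lazy');
}

function loadDelayed() {
  // Phase D: third-party tags, deferred well after the LCP event.
  loadOrder.push('delayed');
}

function loadPage() {
  loadEager();
  loadLazy();
  // The boilerplate defers this phase, e.g. with a timeout of a few seconds;
  // it runs synchronously here so the sketch stays self-contained.
  loadDelayed();
}

loadPage();
```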
Phase E: Eager
In the eager phase, everything that's needed for the true LCP to be displayed is loaded. In many cases the LCP element is contained in a block (often created by auto blocking), in which case the block's .js and .css also have to be loaded.

The block loader unhides sections progressively, which means that all the blocks of the first section have to be loaded for the LCP to become visible. For this reason, it might make sense to have a smaller section containing as little as needed at the top of a page.
It is a good rule of thumb to keep the aggregate payload before the LCP is displayed below 100KB, which usually results in an LCP event quicker than 1,560ms (LCP scoring at 100 in PSI). Especially on mobile, the network tends to be bandwidth constrained, so changing the loading sequence before the LCP has minimal to no impact.
Loading from or connecting to a second origin before the LCP has occurred is strongly discouraged, as establishing a second connection (TLS, DNS, etc.) adds a significant delay to the LCP.
Phase L: Lazy
In the lazy phase, the portion of the payload is loaded that doesn't affect total blocking time (TBT) and ultimately first input delay (FID). This includes the remaining sections of the page and the blocks you are going to create to cover the project needs.

In this phase it is still advisable that the bulk of the payload come from the same origin and be controlled by the first party, so that changes can be made if needed to avoid a negative impact on TBT and FID.
Phase D: Delayed
In the delayed phase, the parts of the payload are loaded that don't have an immediate impact on the experience and/or are not controlled by the project and come from third parties. Think of marketing tooling, consent management, extended analytics, chat/interaction modules, etc., which are often deployed through tag management solutions.
It is important to understand that for the impact on the overall customer experience to be minimized, the start of this phase needs to be significantly delayed. The delayed phase should be at least three seconds after the LCP event to leave enough time for the rest of the experience to get settled.
The delayed phase is usually handled in delayed.js, which serves as an initial catch-all for scripts that cause TBT. Ideally, the TBT problems are removed from the scripts in question, either by loading them outside of the main thread (in a web worker) or by removing the actual blocking time from the code. Once the problems are fixed, those libraries can easily be added to the lazy phase and loaded earlier.
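One way to express that deferral is sketched below. This is not the boilerplate's actual delayed.js; the names are hypothetical, and the scheduler is injectable so the sequencing logic can be exercised without a browser.

```javascript
// Run third-party loaders no earlier than DELAY_MS after this is called,
// i.e. well after the LCP event in the page-loading flow.
const DELAY_MS = 3000;

function deferThirdParties(loaders, schedule = setTimeout) {
  // `schedule` defaults to setTimeout; tests can inject a synchronous stub.
  schedule(() => loaders.forEach((load) => load()), DELAY_MS);
}

// Hypothetical third-party hooks that would otherwise cause TBT.
const started = [];
deferThirdParties(
  [() => started.push('analytics'), () => started.push('chat')],
  (fn, ms) => fn(ms), // synchronous stub so the example completes immediately
);
```

Keeping the loaders behind a single entry point like this also makes it easy to promote an individual library to the lazy phase once its blocking time has been fixed.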
Header and Footer
The header and specifically the footer of the page are not in the critical path to the LCP, which is why they are loaded asynchronously in their respective blocks. Generally, resources that do not share the same life cycle (meaning that they are updated by authoring changes at different times) should be kept in separate documents, to make the caching chain between the origins and the browser simpler and more effective. Keeping those resources separate increases cache hit ratios and reduces cache invalidation and cache management complexity.
Fonts

Since web fonts are often a strain on bandwidth and are loaded from a different origin via a font service like https://fonts.adobe.com or https://fonts.google.com, it is largely impossible to load fonts before the LCP, which is why they are usually added to lazy-styles.css and loaded after the LCP is displayed.
There are situations where the actual LCP element is not included in the markup that is transmitted to the client. This happens when there is an indirection or lookup (for example, a service that's called, a fragment that's loaded, or a lookup that needs to happen in a .json file) for the LCP element.

In those situations, it is important that page loading waits to guess the LCP candidate (currently the first image on the page) until the first block has made the necessary changes to the DOM.
To identify which blocks to wait for before blocking for the LCP load, you can add the blocks that contain the LCP element to the LCP_BLOCKS array in scripts.js.
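In the boilerplate, LCP_BLOCKS is a plain array of block names; the entries below are examples for illustration, not required values, and the helper function is a sketch of the check the loader can perform, not boilerplate code.

```javascript
// Blocks that may contain the LCP element; page loading waits for the
// first of these blocks to finish before settling on the LCP candidate.
const LCP_BLOCKS = ['hero', 'carousel']; // example block names

// Sketch: decide whether the first block on the page needs to be awaited
// before the LCP candidate is chosen.
function mayContainLcp(blockName) {
  return LCP_BLOCKS.includes(blockName);
}
```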
Bonus: Speed is Green
Building websites that are fast, small, and quick to render is not just a good way to deliver exceptional experiences that convert better; it is also a good way to reduce carbon emissions.