Optimizing WordPress with the Right Data: Lab and Field Metrics

In this session, Varun Raman (Customer Support & Content Lead, GTmetrix) explains how to optimize WordPress performance using the right data—specifically lab vs field metrics. He breaks down how each data type works, why they differ, and how to use both together to accurately diagnose and improve real-world website performance.

Hey everyone. I took a short break and I'm back in the hosting seat. The reason I'm hosting this coming session is that, as a student of performance and as a person with a technical background, I use GTmetrix a lot to test my websites, my client websites, and my business websites, and GTmetrix has helped me a lot to understand optimization techniques. So I was trying to catch up with someone from the GTmetrix team to give a session at the Cloud Bootcamp, because it is highly relevant for our audience, and luckily I found Varun Raman. So the next session is on how you can optimize a WordPress website with the right data, and we have Varun Raman to present it. Varun Raman is the customer support and content lead at GTmetrix and a performance expert. Hey Varun, how are you? Good. How are you? I am also good, and it is a real pleasure to have you on Cloud Bootcamp. I'm really excited for this session, and I hope the viewers will get a lot of insight from it. So without any delay, we can start the session. Thank you, great to be here, and let's get started. Are you able to see my presentation? Yes, I can see the presentation. I think you need to share the screen again. Okay, let me try again. Sorry, just give me a sec here; for some reason it keeps popping back out. Okay, you're able to see now? Yes. Okay. Great to be here, and thanks everyone for joining us. So, as Danish mentioned, my name is Varun and I'm the customer support and content lead at GTmetrix. I've been with GTmetrix for about six years now, previously based out of Vancouver in Canada but currently in New Zealand.
I'm here to talk about optimizing WordPress with the right data, specifically focusing on lab and field metrics: first of all, what lab and field metrics are, then what the differences between them are, and how you can use them to optimize your WordPress site's performance. Talking a little about what we will specifically be looking at today: I'll take you through the technical differences between lab and field data, including how they're measured and what the methodology differences between them are, and how to interpret each data set, because they do give you two slightly different perspectives on your page performance. Then, which one is technically "better" (better in quotes; you'll see why I say that). And then we'll also look at some real-world examples using web performance tools. In this particular case, I'll be showing you a GTmetrix example of what lab and field data look like when we test a page, how they can be aligned or misaligned in some cases, and what you can do if that's the case. Okay. So, why does web performance matter? Well, I'm sure you've all been talking about this over the past couple of days. As you all know, good web performance is critical for a good user experience. Fast, responsive websites always get better engagement, higher conversions, and rank better on Google. Even in this AI-driven age, this holds true: better performance generally means better engagement and visibility. Users increasingly have lower attention spans, so it is imperative that sites are quick and responsive to keep users engaged. If you look at the makeup of the internet today, more than half of all websites use some form of a content management system.
A CMS like WordPress, Wix, Squarespace, Shopify, etc. If you look at the popularity of different CMSs, WordPress is undoubtedly the most popular. It makes up over two-thirds of all CMS-based websites. For comparison, Shopify is something like 7% of all CMS sites, Wix is around 5%, and Squarespace around 3%. So you can see WordPress is much more popular than the others; you could almost say it's the default infrastructure for the web now. Yet only 45% of WordPress sites pass all three Core Web Vitals. If you compare with other CMSs, they tend to do better: for Shopify it's something like 65% of all sites passing Web Vitals, and for Wix it's around 74%. Everybody knows the Web Vitals are critical to a delightful user experience and to ranking well on Google; Google themselves have pushed Web Vitals in the last few years, trying to get everyone to do better there. So, given that only 45% of WordPress sites pass all three Core Web Vitals, there is definitely room for improvement for WordPress sites. If you look at the Lighthouse performance score (Lighthouse is Google's platform for measuring web performance), the median Lighthouse score for a WordPress site is only 41 on mobile and 63 on desktop. That is quite abysmal. Again, if you compare with other CMSs, they generally do better: Shopify is around 52 on mobile and 68 on desktop; Wix is even better at 64 on mobile and 87 on desktop; even Drupal is slightly higher with 45 on mobile and 68 on desktop. Sorry if anybody out there likes Drupal; I didn't mean to offend anyone. WordPress sites, as you know, have a lot of flexibility. You can self-host or go with many other hosting options: shared hosting, dedicated hosting, VPS hosting, managed hosting. So there's a lot of flexibility there.
And then on the front-end side, many WordPress sites have complex themes and plugins. All of this makes performance optimization essential. If you look at other CMSs like Shopify or Squarespace, you could perhaps think that the higher median scores reflect there being less control over hosting and certain application infrastructure. But performance is not just about the back end; the front end matters a lot too. So how do we actually measure web performance? This is where we need data to tell us what is actually happening with our websites, and generally speaking we can broadly categorize web performance data into two data sets: lab data, aka synthetic data, and field data, also known as real user data. Lab data is the kind of data you get from web performance tools like GTmetrix or PageSpeed Insights. It comes from a controlled test environment, so results tend to be reproducible and stable. It is a powerful data source because the results can be very detailed and very specific to the conditions, aka the analysis options. These are the metrics you get from lab data: you have the Web Vitals (Largest Contentful Paint, Cumulative Layout Shift, and Total Blocking Time), and then you have Time to First Byte (TTFB), which in a nutshell just tells you how quick your page's initial response was. The quicker your TTFB, the less blank space any potential visitor sees before content starts loading on your page. First Contentful Paint, again, is a milestone that tells you how quickly your page starts to populate content. Time to Interactive and Speed Index are other metrics from Lighthouse, and then you also have various browser timings that may be useful, probably more for developers, but it is still something you can measure in the lab.
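For context, Google publishes "good / needs improvement / poor" thresholds for several of these metrics. Here's a minimal Python sketch of classifying values against those thresholds; the helper itself is hypothetical (it's not part of any tool mentioned here), but the threshold numbers are Google's published ones:

```python
# Hypothetical helper: classify metric values against Google's
# published "good / needs improvement / poor" thresholds.
THRESHOLDS = {
    "LCP":  (2.5, 4.0),   # Largest Contentful Paint, seconds
    "FCP":  (1.8, 3.0),   # First Contentful Paint, seconds
    "TTFB": (0.8, 1.8),   # Time to First Byte, seconds
    "CLS":  (0.10, 0.25), # Cumulative Layout Shift, unitless
}

def rate(metric: str, value: float) -> str:
    good, poor = THRESHOLDS[metric]
    if value <= good:
        return "good"
    if value <= poor:
        return "needs improvement"
    return "poor"

print(rate("LCP", 2.0))   # good
print(rate("TTFB", 2.1))  # poor
```

So, for example, a 2.0-second LCP passes, while a 2.1-second TTFB clearly needs work.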
And then on the other hand you have field data, which is real user data. Just for context, the field data I'll be referring to in this session is specifically what you get from Google's Chrome User Experience Report (CrUX). There are of course other types of field data, for example what you can get from real user monitoring tools, but that is outside the scope of this session. So field data reflects actual user experiences. Basically, whenever a real individual opens their device (whether it's a phone, a tablet, or a desktop or laptop) and loads your page, whatever experience they get is the outcome that gets pulled into this data set. What this also means is that this data is an aggregate across various devices, browsers, locations, and connections. So it is a comprehensive data set, but it is still aggregate data. And then the metrics available here: you have the same Core Web Vitals. You'll notice there is a slightly different metric here called Interaction to Next Paint (INP). This is a recent metric introduced by Google that aims to measure responsiveness, and it is only available in the field data; there isn't really a direct equivalent in the lab. Basically, what it does is measure user interaction on your page, so it points to how responsive your page is beyond the initial load. The other metrics are similar to what you get in lab data, so you can have some form of comparison there. Now, to explain the difference between lab and field data using a metaphor: lab data is like training on a treadmill. You have full control over the environment. Typically you're indoors; you can set the speed of the treadmill and the incline, and you can choose the time of day to train. You can be in optimal conditions to train for the big event, and you can stop whenever you want.
And you can adjust your diet and various other factors as you train indoors. Field data is like running the actual marathon. You just have to turn up on the day and work with whatever the weather is outside. If it's hot, cold, rainy, or slippery, or if you haven't had the best sleep or the best meals, whatever it is, you just have to turn up and deal with the actual conditions. This, in a nutshell, is the main difference between lab and field data. But of course, you don't usually just run a marathon without training, so that's something to think about. Before I move forward, I just wanted to run a quick poll. Okay, I just added a poll basically asking how many of you have heard of GTmetrix; I just wanted to know, and then we can move on. So let's first look at lab data, aka synthetic data. What are the pros and cons of lab data? Lab data, as I said, is synthetic data that we get from web performance testing tools. The main advantage here is that you can control the environment, so you can test the page repeatedly and ensure that your results are consistent and repeatable. In other words, you can test your page multiple times, either within minutes or every hour, and ensure that you get stable results for the same page state in the same environment. If there are drastic variations, that is indicative of something you probably have to go and fix. Second, you have flexibility in your testing. This means you can usually test your page in a variety of different conditions: you can test with specific devices to see how your page loads at certain screen resolutions, or with different connection speeds, particularly when you're testing on mobile.
So you can do iPhone versus Samsung versus OnePlus comparisons, or look at how your page loads differently on a tablet, say iPad versus Samsung Galaxy Tab, or on a 4G connection versus a 5G connection versus Wi-Fi. And of course you can test across different regions, even different continents. If your audience is global, you can ensure that you optimize your page to load the same way in London as it does in, say, Sydney or San Francisco. You can also see how content differences in different locations affect your results and optimize for that. It can help you troubleshoot very specific issues. Say your page has good results on desktop but poor results on mobile, or it has a layout shift issue at certain screen resolutions. Maybe there's a certain image that doesn't appear in the viewport on some devices but does on others, which ends up pushing your content down. Or you have a situation where certain third-party trackers or JavaScript only fire in certain conditions. You can test for these very specific scenarios or edge cases to ensure that you catch things like this before you push your site or any updates live, or, if it's live already, you can quickly go and fix it. Finally, another big advantage of lab testing is that you get instant feedback on your optimization efforts. Say you're working on fixing performance issues and you've just installed a caching plugin and configured it with various best practices. You can see the results immediately; you don't have to wait a while to see what impact those changes have had. For example, if you've applied a certain optimization, you can see whether it's giving you real benefits or not, and you can play around with that to see which efforts are actually yielding the best results.
And you don't have to wait for real user data to come in to see those effects. Coming to the cons: it does not capture what visitors "actually" experience (actually in air quotes). What do I mean by that? It just means that the results don't come from real users, so you can't treat them as what a user actually experienced. It's still a good representation of a real user experience, but ultimately it is still a synthetic result. A key differentiation there, again, is browser versions and state differences. What I mean by this is that real users often have populated caches, or they may have certain extensions in their local browser that could modify how the page behaves on their device, for example a password manager or some other Chrome extension. They may also have outdated browser versions, or there could be other outdated-state situations that have an impact. Whereas in the lab, we tend to have clean browser profiles with no cache, no cookies, and no extensions. So it likely won't be a direct one-to-one comparison between your local browser and the browser instance used in a web performance tool. Just to clarify, many tools like GTmetrix do use real browsers, not headless browsers, so the browser instance would be similar to what you have locally, but still a bit different because we don't have the caching or the cookies in the same way. An extension of that is the fact that real-world variations and user interactions sometimes can't be reproduced easily in the lab. For example, if your page has a modal that shows up on initial load, and what happens on the next load depends on what the user clicks, then depending on how that is configured, or if it only fires in certain situations, that may not be something you can reproduce in the lab.
So then, with lab data, how do you actually measure it? Well, a variety of free and paid web performance tools exist. The most popular tools, which I'm sure you're aware of, include PageSpeed Insights, GTmetrix, and WebPageTest. Now that I asked about GTmetrix, I wonder if the poll results are in. Oh, 100%. That's great. Fantastic. So, as you can see, GTmetrix is clearly popular. PageSpeed Insights, as you know, is a great tool for spot testing, especially if you want a quick check, but you can't dig deep into the data. If you do want more comprehensive data and actionable insights, consider using tools like GTmetrix or WebPageTest; I'm sure a lot of you already do. Basically, when I say comprehensive data, I mean look for tools that offer more features, for example a waterfall chart, as you can see here. This basically shows you a request-by-request load of your page, so you can see specifically which requests might be delaying things, figure out what's happening, and formulate a plan to address them. Whatever your tool of choice may be, remember not to simply test using the default test options, because that might not actually represent your user base. Look at your analytics so that you can get an idea of where your visitors are located, what sort of devices they're using, and what screen resolutions they may be using, so that you can load this visitor profile into your test tool of choice. You can then change your location, device, or connection speed to match, so that it's representative of what your visitors actually experience. And it's always recommended to test when you launch new pages or push updates to existing pages, so that you can evaluate their performance before those changes go live. It's always better to know how things perform beforehand rather than waking up to a 2 a.m.
email saying something has drastically gone wrong. Now we come to field data, pros first. As mentioned previously, this is real user data. Millions of users around the world are using their phones, laptops, and tablets to browse, and all of those browsing sessions result in real user metrics that are captured by Google through the Chrome User Experience Report (CrUX). This data accounts for real-world fluctuations: server load changes, bandwidth limitations, caching issues, CDN misses, weird routing, real traffic patterns. It may be the case that a user in San Francisco is trying to load your page with the origin server also in California, so you would expect a lightning-quick page, but for whatever reason the network routing is weird and goes through Southeast Asia, adding unnecessary latency to the connection. It's happened; we've seen these cases in the wild. That's why we need field data to validate what the user is actually experiencing, so that we can investigate those scenarios. It's also useful to see whether your fixes and optimizations are actually having an effect in the real world. Basically, with lab testing you've done everything you can, and your lab data shows an A grade and a performance score of 100%; but to actually see that reflected in CrUX data is the ultimate validation. And last point: we know that Web Vitals influence Google search rankings, and the CrUX report is the source for the page experience signal used in Google's search ranking algorithm. Google wants faster web pages. They have not shied away from saying that, and they ensure that you can know how well your page is doing in the real world by including the Web Vitals in the CrUX data set. Now we come to the cons, and there are a few. If your page has insufficient traffic, you may only have origin-level data, or you may have no data at all. What is origin-level data? I'll get into it a little later, but it's basically just aggregate data for your entire website.
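As an aside, this same CrUX field data can also be queried programmatically through Google's CrUX API (a POST to its `records:queryRecord` endpoint, authenticated with an API key). Below is a hedged Python sketch of how a query body might be assembled; the endpoint is real, but treat the exact request shape as an assumption to verify against Google's documentation, and `example.com` is just a placeholder:

```python
import json

# Real CrUX API endpoint; requests to it need an API key (not shown here).
CRUX_ENDPOINT = "https://chromeuxreport.googleapis.com/v1/records:queryRecord"

def build_crux_query(origin: str = "", url: str = "", form_factor: str = "") -> dict:
    """Assemble a CrUX API query body (sketch).

    Pass exactly one of origin= (site-wide, origin-level aggregate)
    or url= (page-level data). form_factor, if given, narrows results
    to PHONE, DESKTOP, or TABLET.
    """
    if bool(origin) == bool(url):
        raise ValueError("pass exactly one of origin or url")
    body = {"origin": origin} if origin else {"url": url}
    if form_factor:
        body["formFactor"] = form_factor
    return body

# Origin-level query for a placeholder site, phone traffic only:
print(json.dumps(build_crux_query(origin="https://example.com",
                                  form_factor="PHONE")))
```

The origin/url distinction in the sketch mirrors the origin-level versus page-level data just described.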
You will have to wait until your page satisfies Google's eligibility criteria to start seeing results; without that, you may have no field data or insufficient data. Field data also indicates problems at a high level, but you can't do a deep dive to troubleshoot specific issues. In other words, field data will tell you that your LCP is slow, but it won't tell you exactly why it is slow. And because it's aggregate data, you can't drill down to the specific scenarios that may be causing it, for example whether it's happening on a specific device or in a specific location; this can't be diagnosed with field data. It also needs 28 days for updates to be reflected. So if your page is slow but you just pushed an update today, that won't be reflected immediately; you'll have to wait for the next set of CrUX data to come in, which happens on a 28-day rolling basis. Moving on: how do we measure field data? Well, similar to lab data, a variety of free and paid web performance tools provide field data. The tools I mentioned before, PageSpeed Insights, GTmetrix, and WebPageTest, all provide field data, as does Google Search Console. Again, I reiterate that field data is aggregate data, combined across a whole bunch of devices, browsers, locations, and connection speeds. It's not a snapshot in time of a single page load; it's a big-picture view of multiple experiences over multiple devices and scenarios. You cannot do direct comparisons between lab and field data. You can still compare, of course, but just know that they have different methodologies, so they may not always line up exactly the way you want them to. Use the data to guide your optimization efforts rather than treating it as gospel. So which data set is, quote unquote, better? Well, the exciting answer is neither.
Both data sets have different sources and different methodologies, but at the end of the day they have the same goal, which is to guide your optimization efforts. Lab data has its benefits in that you get a consistent performance benchmark for various scenarios; it helps you pinpoint specific bottlenecks and optimize for different audiences. This is generally not possible with field data alone. But field data reflects what your visitors are actually experiencing in the real world. It basically validates all of your efforts, and it accounts for real-world variables like hardware differences, internet speeds, and server loads. It is worth noting that many people we've talked to believe field data is the only data set that actually matters, because of its influence on search rankings. That's an understandable take, but getting there actually requires lab testing, because, like I've said, field data can tell you that something is slow, but it can't tell you exactly why. So you do need lab data to fill in those gaps. Long story short: use both data sets to create a complete picture of your website's performance. I just wanted to create another poll at this time, basically to know whether you have used both lab and field data to assess your web performance. Now, while that poll is collecting, let's talk about how well lab and field data can be aligned or misaligned when you look at them side by side. I'll be showing you some real-world examples soon, but let me spend a quick minute setting some context here. You could have major misalignment between your lab and field data simply because your website is too new or has insufficient traffic, so it may not have met Google's eligibility criteria. You might then have no data, or origin-level data alone, which makes direct comparisons nearly impossible. So what is origin data?
Origin data is basically an aggregate of all of your site's pages. So if your website is, say, https://gtmetrix.com, the origin data includes results from all of its pages: the homepage, dashboard, about page, contact page, landing pages, marketing pages, every other URL under gtmetrix.com. What you get, basically, is an aggregate of some fast-loading pages and some slow-loading pages, so it gets averaged out. It's not really fair to compare that averaged data with a single page, which may have good or poor metrics, and your field data likely won't correlate with a single snapshot from a specific test profile. Say you have 35% of your visitors on desktop and 65% on mobile; your field results are probably skewed towards mobile performance. But if you compare that with a desktop result at a screen resolution that is not very common among your visitor base, and with a gigabit fiber connection speed that not a lot of your visitors are using, then there probably won't be any correlation at all. In this case, you'll need to adjust your test conditions to ensure they are closer to your real-world visitor profiles, so that you have a better frame for comparison. Apart from these differences, of course, field data is 28-day rolling data, so immediate changes to your page won't be reflected. Now, let's look at some real-world examples. All of the examples I'm about to show you are real examples from GTmetrix; these are actual websites we've tested. The screenshots and the data you're seeing come from the CrUX tab of the GTmetrix report, where we provide this field data, which comes directly from Google. So this number you're seeing is what Google calls the 75th percentile.
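To make the 75th-percentile idea concrete, here's a small Python sketch that computes a p75 from raw samples using the nearest-rank method. This is purely illustrative: Google's actual aggregation pipeline differs, and the LCP samples below are made up.

```python
def p75(samples: list[float]) -> float:
    """75th percentile via the nearest-rank method (illustrative;
    not how CrUX computes it internally)."""
    if not samples:
        raise ValueError("no samples")
    ordered = sorted(samples)
    # Smallest value such that at least 75% of samples are <= it:
    # that's the ceil(0.75 * n)-th smallest value (1-based rank).
    rank = -(-75 * len(ordered) // 100)  # ceiling division
    return ordered[rank - 1]

# Ten hypothetical LCP samples, in seconds:
lcps = [1.1, 1.3, 1.4, 1.6, 1.8, 2.0, 2.1, 2.4, 3.0, 4.2]
print(p75(lcps))  # 2.4: at least 75% of these visits had LCP <= 2.4 s
```

So even though a couple of visits in this made-up sample were slow, the p75 summarizes what the bulk of visitors experienced.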
In a nutshell, it means that at least 75% of your page's visits were as quick as, or quicker than, this value. So if your p75 LCP is 2 seconds, the majority of your visitors are experiencing an LCP of 2 seconds or better in the real world. The goal, of course, is to get these values as low as possible, meaning as quick as possible. And then we have the lab timings for whatever lab test result you have right underneath, so you can compare and see how aligned or misaligned they are. We have the five main metrics from the CrUX data set here. These are the Core Web Vitals, the ones that determine your search rankings, and then these two are also important metrics that help you get there, because you generally need a fast TTFB and a fast First Contentful Paint to be able to have a fast Largest Contentful Paint. So it's useful to see where your bottlenecks are and what you need to do to fix them. And of course, over here we have Interaction to Next Paint, and in the lab we compare it with Total Blocking Time. As I mentioned before, there is no direct equivalent of INP in the lab, so we compare it with TBT; that's because, from Google's own assessments, INP and TBT generally show good correlation. A lot of the issues that affect INP are the same issues that affect TBT, so fixing your TBT in the lab can help you achieve a good INP in the field. So, back to this example: here's a real website that we tested on GTmetrix. This is a large business with their visitors primarily located in Asia. They do use a CDN, so you would expect to see similar results all over the world. In this case, the lab results were obtained from testing in a North American location, whereas the field data probably reflects most of the visitors coming from Asia.
So straight away you can see that the lab timings are super quick, with the exception of TBT. You can see FCP, TTFB, and LCP are all really quick, on the order of milliseconds, whereas in the real world they are considerably slower: a 1.3-second TTFB and a 3.4-second LCP. As I mentioned, the TBT is bad in the lab, but we'll see if that changes later on. The only good correlation here is that CLS is fairly similar, around 0.0, with some negligible differences. We can surmise in this case that the test conditions don't accurately represent what visitors experience. So looking at analytics and adjusting the test conditions to better match visitor profiles could potentially close the gap between lab and field results. Let's see what happens when we do that. Here's what happened after we made changes: we changed the location, the device, and the connection speed, from a fiber connection previously to a slow 4G connection, based on the analytics data for the site. Now you can see that the TTFB and FCP are much closer: TTFB is almost a perfect match, and FCP is also much closer. TBT and INP also show better correlation, probably indicating that, with the right device and the kind of JavaScript being served, this is a better reflection of what visitors are experiencing in the real world. LCP still doesn't match that well, but you can see it's much closer than before: half a second off compared to 2.2 seconds previously. So it certainly shows that we're trending in the right direction, and we can make further adjustments and use that as a platform to improve things. This was an example of having good lab data but bad field data. So what happens when we flip the scenario, where your lab data is bad but your field data is good? This is a much less common scenario.
Usually we see better results in the lab versus the field, but it does happen. So how come it's happening here? Again, this is likely because the test conditions are not representative of what visitors are experiencing. This example is from a real website, a photography studio based out of London. I mentioned previously that your analytics will tell you where your visitors are located, what devices they use, and what screen resolutions are common; that's going to play a huge role here. This page was initially tested by the user on desktop in Sydney, which probably explains why the results are so much worse in the lab compared to the field data: the increased latency results in much slower timings. So if we adjust the test conditions to match what real users are seeing, changing the location to London and the device to an iPhone 13, then we can immediately see that lab and field data have much better alignment. TTFB is much quicker, and the FCP and especially the LCP are almost perfect matches. TBT also correlates well. The CLS doesn't match, but where it was previously 0.85, it's now 0.33, which is much closer to the field result we're seeing here. What this points to is that the device's viewport or screen resolution, basically what the user sees immediately when the page loads, is affecting the CLS. So we can use that as a compass and run more experiments, checking different devices and screen resolutions, and from there further work on identifying the elements causing the layout shift and eliminating them. Now that we've seen two cases of the data being misaligned, and how we had to work to get them to match, let's look at an example where field and lab data are well aligned.
This here is also a real website, and you can straight away see that all of the metrics are pretty much perfect matches: 1.1 versus 1.2, 1 versus 1.1, 473 milliseconds versus 427. INP and TBT also show excellent correlation. This page got a GTmetrix grade of A, and its Core Web Vitals assessment is "passed" according to Google. FYI, full disclosure: this is the GTmetrix homepage, so, you know, a pat on the back for ourselves. Now we come to some WordPress web performance best practices. I'm sure a lot of these have already been covered by many of the presenters over the past few days, so let me quickly summarize them. It all starts with hosting; that is the back-end component of performance. WordPress websites, especially e-commerce ones, tend to be complex, with a host of plugins and themes. So ensure that you don't pair them with low-cost or low-powered hosting packages; they don't generally mix well. You want a hosting solution that is powerful enough to handle the demands a complex website puts on the servers. Other optimizations here include updating to the latest PHP versions, as they tend to have better performance. Then, when it comes to themes and plugins, ensure that you use performance-focused or lightweight themes. Many theme publishers actually provide sample templates that you can test yourself with a web performance tool to see how they perform, and that gives you a benchmark to aim for. Also audit your plugin usage and remove the ones you don't need, so that you can reduce page bloat. It's often the case that you need some plugins firing only on certain pages, but because of the way sites are built, they get added to every single page, even where they're not needed. So do an audit and review your site, and remove the plugin requests from the pages where they're not needed.
That way you don't have to load them unnecessarily. Then when it comes to caching, there are various forms of it. On the backend side, you have server-side caching, like Varnish or NGINX, which gives you a faster TTFB; object caching for faster database query processing; and opcode caching for faster PHP code compilation. Then on the front end, you can use caching plugins to cache your entire site and apply well-known best practices like lazy loading images, CSS and JavaScript optimizations, and so on. For example, WP Rocket is a great caching plugin that does this really well. Then, optimizing media: when creating images in the first place, get them sized properly, use the right formats, and compress them. Consider modern formats like AVIF or WebP, as they tend to offer much better file sizes. There are plugins that help you do this as well, such as Imagify or EWWW Image Optimizer. For videos, consider not using autoplay where possible, and use a lightbox instead, where the user has to manually start playback. This gives you a quicker initial load, and loading the video is delayed until the user actually needs to view it. Then, optimizing CSS and JavaScript: defer anything non-critical, preload anything critical, and reduce unused code where possible. For third-party requests, you can't do much except review the third-party embeds and ads and remove the ones that are not adding value to your website. CDN usage: review whether a CDN is needed for your site. If you are a local site with a very local audience, you probably don't need one, but it can be very helpful for boosting performance if your audience is in multiple regions, and it reduces your server load in the process. And finally, always test before pushing out new pages or updates to existing pages, which brings us to the idea that performance optimization is not a one-time job.
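The modern-image-formats advice usually relies on content negotiation: the browser advertises support in its `Accept` header, and the server or CDN picks AVIF or WebP accordingly. A minimal sketch of that selection logic, assuming the negotiation is done per request (in practice an optimizer plugin or CDN handles this for you):

```python
# Pick the best image format the browser says it supports,
# preferring AVIF, then WebP, falling back to JPEG.

def pick_format(accept_header: str) -> str:
    accept = accept_header.lower()
    if "image/avif" in accept:
        return "avif"
    if "image/webp" in accept:
        return "webp"
    return "jpeg"

print(pick_format("image/avif,image/webp,image/*"))  # avif
print(pick_format("image/webp,image/*"))             # webp
print(pick_format("image/*"))                        # jpeg
```

The same fallback idea applies at the markup level via the `<picture>` element, where the browser itself picks the first source it can decode.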
You may have optimized your site and pushed it live and thought, job well done. But it really only starts from there. Your website is constantly evolving: there are going to be updates to various plugins and themes, there are going to be content changes, and all of this is going to impact your site's performance. You may run various marketing campaigns, ad campaigns, and other events from time to time, so trackers, third-party requests, network conditions, and additional images can all affect performance at any time. It's always recommended to keep a regular eye on your website so that these changes don't negatively impact your revenue and operations. Like I said before, it's better to have things sorted beforehand than to wake up to a 2 a.m. email in a panic. Consider monitoring tools; they can act as a second set of eyes on your page and alert you if urgent action is needed. So that brings me to my final poll, which is: have you used any monitoring tool to keep an eye on your website's performance? Just to get an idea of how popular these are. And that's basically it. In conclusion, think of lab and field data as the dynamic duo of web performance. Apart they are useful, but together they are really strong. Use them in tandem to keep things fast and responsive, and your visitors will certainly thank you. And that is it for my presentation. Thank you so much, Varun, for the insightful presentation on GTmetrix and lab versus field data. I hope there are multiple key takeaways for the viewers as well. There are multiple questions for you in the Q&A and in the chat; I'm bringing up some of them, and right after that I would request you to stay around in the chat and Q&A sections to answer more of the questions, as the next two guest speakers are already lined up for the next session. So let me pin it on the screen.
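The monitoring idea boils down to comparing fresh measurements against a performance budget and alerting when a budget is blown. A minimal sketch of that check, with illustrative thresholds (these are not GTmetrix defaults):

```python
# Compare the latest measurements against a performance budget and
# report every metric that is over its limit. Units: LCP in seconds,
# TBT in milliseconds, CLS unitless.

def check_budget(latest: dict, budget: dict) -> list:
    """Return a human-readable alert for every metric over budget."""
    alerts = []
    for metric, limit in budget.items():
        value = latest.get(metric)
        if value is not None and value > limit:
            alerts.append(f"{metric}: {value} exceeds budget {limit}")
    return alerts

budget = {"LCP": 2.5, "CLS": 0.1, "TBT": 300}
latest = {"LCP": 3.4, "CLS": 0.05, "TBT": 510}

for alert in check_budget(latest, budget):
    print(alert)
```

A real monitoring service runs this kind of check on a schedule and emails or pages you on failures, so a regression surfaces before that 2 a.m. panic.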
So the question is: for teams that don't have deep performance expertise, what are the few field metrics you believe matter the most when monitoring WordPress performance? Well, I would say the Core Web Vitals. Google created the Web Vitals mainly because they know that web performance can get very technical and very complicated, especially if you're not well versed in this world. The idea behind the Web Vitals is to have three simple metrics that you can focus on that give you a good representation of whether your page is doing well. The three Core Web Vitals are LCP, INP, and CLS. LCP measures how quickly the biggest content element on your page loads, which typically tends to be the hero image or some big photo or background video. That's a good indicator of when your page has loaded most of its content and users can see everything. INP measures responsiveness, because users typically start interacting with the page the moment they see some content get populated. And CLS, layout shift, is visual stability: you don't want elements bouncing around and moving as things load, because that just annoys the user. So these are the three main metrics to focus on. Even from a lab data perspective, together these three metrics make up something like 80% of the performance score, so just focusing on them would probably have the biggest impact on your performance. Perfect. Thank you so much, Varun, for your valuable time, and I would again request you to stay around in the chat and Q&A sections. That was Varun from the GTmetrix team. We have just one remaining session before we announce the winners, so don't go anywhere. We have a session with the Cloudflare team, and the session is on the global website hack:
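The three Core Web Vitals mentioned here have published thresholds from Google: LCP is good at or under 2.5 s and poor above 4 s, INP is good at or under 200 ms and poor above 500 ms, and CLS is good at or under 0.10 and poor above 0.25. A small sketch applying those thresholds (the sample values below are hypothetical):

```python
# Rate each Core Web Vital using Google's published thresholds.
THRESHOLDS = {
    "LCP": (2.5, 4.0),    # seconds
    "INP": (200, 500),    # milliseconds
    "CLS": (0.10, 0.25),  # unitless score
}

def rate(metric: str, value: float) -> str:
    """Return 'good', 'needs improvement', or 'poor' for one metric."""
    good, poor = THRESHOLDS[metric]
    if value <= good:
        return "good"
    if value <= poor:
        return "needs improvement"
    return "poor"

print(rate("LCP", 1.8))   # good
print(rate("INP", 350))   # needs improvement
print(rate("CLS", 0.33))  # poor
```

A page passes the Core Web Vitals assessment when all three metrics rate "good" at the 75th percentile of field data.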
How a CDN Makes You Local Everywhere. We have Tan and Trevor Jackson from the Cloudflare team, and this one is the last session of this cloud boot camp. Right after, we will announce the winners of the activities and the leaderboard. So stay tuned. Thanks. Thanks.