⚡️🧵 here’s something important to know about the recently announced core web vitals.

#webperf #perfmatters
the core web vitals as of now (subject to change in the future) are:

🔹 LCP (largest contentful paint)
🔹 CLS (cumulative layout shift)
🔹 FID (first input delay)

these are good indicators of core aspects of experience. but, there’s a bit of a caveat...
FID can only be reported by field tools (real user monitoring). both @____lighthouse and @ChromeDevTools are synthetic, so you can’t get FID from them.
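
to make that concrete, here’s a minimal sketch (not from the thread) of how FID is observed in the field via the Event Timing API: a 'first-input' entry only exists after a real person interacts with the page, which is exactly why a lab run with no user can’t produce one.

```ts
// runs in the browser; assumes Event Timing API support ('first-input' entries)
const observer = new PerformanceObserver((list) => {
  for (const entry of list.getEntries()) {
    // FID = delay between the user's first interaction and the browser
    // being able to start processing its event handlers
    const firstInput = entry as PerformanceEventTiming;
    const fid = firstInput.processingStart - firstInput.startTime;
    console.log(`FID: ${fid.toFixed(1)} ms`);
  }
});

// buffered: true picks up an interaction that happened before this script ran
observer.observe({ type: 'first-input', buffered: true });
```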

Lighthouse does report Max Potential FID, but that metric is not recommended; TBT (total blocking time) is. (confused yet?)
if you’re using synthetic testing, you have to replace FID with TBT.
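
for example, a rough sketch of pulling TBT out of a synthetic run with the Lighthouse node module. the package names, option shape, and the 'total-blocking-time' audit id here reflect the Lighthouse programmatic API as I know it, so treat this as a starting point rather than gospel.

```ts
// node script; assumes `npm i lighthouse chrome-launcher`
import lighthouse from 'lighthouse';
import * as chromeLauncher from 'chrome-launcher';

async function tbtFor(url: string) {
  const chrome = await chromeLauncher.launch({ chromeFlags: ['--headless'] });
  try {
    const result = await lighthouse(url, {
      port: chrome.port,
      onlyCategories: ['performance'],
      output: 'json',
    });
    if (!result) throw new Error('lighthouse returned no result');
    // TBT stands in for FID in lab data; value is in milliseconds
    return result.lhr.audits['total-blocking-time'].numericValue;
  } finally {
    await chrome.kill();
  }
}

tbtFor('https://example.com').then((ms) => console.log(`TBT: ${ms} ms`));
```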

if you’re using RUM, you might be able to track FID, or use a library to collect it yourself on a continuous basis (I’m not even going to get into reliably collecting & reporting perf data here).
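
for the “library” route, a sketch using Google’s web-vitals package. the getCLS/getFID/getLCP names are the v1/v2 API (newer releases rename them to onCLS/onFID/onLCP), and the '/analytics' endpoint is just a placeholder for wherever your field data actually goes.

```ts
// assumes `npm i web-vitals` (v1/v2 API shown)
import { getCLS, getFID, getLCP } from 'web-vitals';

function sendToAnalytics(metric: { name: string; value: number }) {
  // '/analytics' is a placeholder endpoint; sendBeacon survives page unload
  const body = JSON.stringify({ name: metric.name, value: metric.value });
  navigator.sendBeacon('/analytics', body);
}

// each callback fires when its metric is ready to report
getCLS(sendToAnalytics);
getFID(sendToAnalytics);
getLCP(sendToAnalytics);
```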
if you’re eager to start tracking core web vitals, that’s great! but be aware of the limitations, use cases and metric replacements when trying to create a set of core metrics.

fin.