```html
  <path d="m5 11 9 9"/>
  <path d="M22 21H7"/>
</svg>
```
Unlike raster image formats like gif/jpg, SVGs are specified in an XML format, just like HTML tags! In fact, we can directly embed this code in our HTML (or in our JSX). This allows us to manipulate specific portions of our icon!
A full explanation of SVGs is well beyond the scope of this blog post, although it’s something we’ll cover in depth in the course. To quickly summarize what’s going on here: our eraser icon consists of 3 <path> tags. Each <path> is a set of drawing instructions. When we layer these three instructions together, we get our eraser icon:
Path A and B are the eraser itself, and Path C is the surface being erased. For our purposes, we want the eraser to move back and forth without affecting the surface:
We can accomplish this by wrapping the first two paths in a <g> tag, which stands for “group”. Then, we can apply a CSS transform to that group, sliding those two <path> tags along!
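Here’s a minimal sketch of that idea. The class name, keyframe values, and the first path’s data are placeholders of mine, not the actual landing-page code:
```html
<svg viewBox="0 0 24 24" fill="none" stroke="currentColor">
  <!-- Paths A and B: the eraser itself, grouped so they move together -->
  <g class="eraser">
    <path d="…"/>
    <path d="m5 11 9 9"/>
  </g>
  <!-- Path C: the surface being erased (stays put) -->
  <path d="M22 21H7"/>
</svg>

<style>
  .eraser {
    animation: scrub 1000ms ease-in-out infinite alternate;
  }
  @keyframes scrub {
    to {
      /* Slide the eraser group back and forth along the surface: */
      transform: translate(4px, -4px);
    }
  }
</style>
```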
I’m doing a similar trick on the “Bomb” icon. At first glance, it appears to be a standard transform: rotate(), but there’s a bit more to it than that.
Play along with this slider to exaggerate the effect, to make it clearer:
To explain what’s going on here: the whole bomb rotates by 10 degrees. Then, on a <path> within the bomb’s SVG, I’m applying a nested rotation to the little fuse. It’s affected by both the parent rotation on the bomb and an additional rotation on the fuse.
The trick to making this work is to make use of transform-origin to make sure each rotation pivots correctly. The parent rotation is anchored on the center of the bomb’s circle, while the fuse rotation is anchored to the tip of the bomb:
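In CSS terms, the structure looks something like this. The selectors and origin values are illustrative guesses, not the real icon’s numbers:
```css
.bomb {
  transform: rotate(10deg);
  /* Pivot around the center of the bomb’s circle: */
  transform-origin: 50% 60%;
}

.bomb .fuse {
  /* Nested rotation, layered on top of the parent’s rotation: */
  transform: rotate(20deg);
  /* Pivot around the tip of the bomb, where the fuse attaches: */
  transform-origin: 70% 20%;
}
```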
SVG animation is one of the most important tools in my toolbox, and we’ll be covering it in depth in the course. In the meantime, you can learn more about transform-origin in my blog post, “The World of CSS Transforms”.
The bomb tool in the Chaos Toolbar has its own little secret feature: if you click and hold, something happens:
Check it out on the Whimsical Animations landing page.
The fourth tool in the Chaos Toolbar, the magic wand, is by far the most elaborate. Several elements on the page can be transformed, with unpredictable results. For example, the main heading can swap between different styles:
Whenever an element is transformed, the wand cursor emits a few stars. This is an example of a particle effect, and it’s one of my favourite “genres” of effects.
You might notice that the particles don’t fire in a completely random direction. They all wind up within a 45° cone:
Each particle is positioned right under the cursor using absolute positioning and top/left, with transform: translate() used to fling them up and to the left. But how do we come up with the specific values for each particle?
The key is to think in terms of polar coordinates. This stuff gets so much easier to reason about with the right coordinate system.
On the web, we’re used to thinking in terms of cartesian coordinates: we specify things in terms of their X/Y displacement. transform: translate(-30px, 10px) will move the element 30 pixels to the left and 10 pixels down.
With polar coordinates, we don’t think in terms of X and Y. We think in terms of angle and distance.
This will be easier to explain with a demo. Click or tap around inside each graph to see how the coordinates are calculated. If you don’t use a pointer device, you can also use the keyboard by focusing the handle and using the arrow keys:
With cartesian coordinates, it’s not really clear how to come up with valid X/Y values for my wand effect. But with polar coordinates, it’s pretty straightforward; I can generate random values within a specified range:
```js
import { random } from '@/utils';

function generateParticle() {
  // Generate a random angle between 200° and 240°:
  const angle = random(200, 240);
  // Same thing for distance, between 30px and 60px:
  const distance = random(30, 60);

  return { angle, distance };
}
```
(random is a small utility function that picks a random number between two values.)
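In case it’s useful, here’s one way that utility might be written (my own sketch; the post doesn’t show its implementation):
```js
// utils.js
export function random(min, max) {
  // Pick a random number between `min` and `max`:
  return min + Math.random() * (max - min);
}
```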
Now, we can’t actually apply a CSS transformation using polar coordinates; we need to convert them back to cartesian values before we can use them. This can be accomplished with trigonometry. I’ll spare you the math and give you the formula:
```js
function convertPolarToCartesian([angle, radius]) {
  const angleInRadians = convertDegreesToRadians(angle);

  const x = radius * Math.cos(angleInRadians);
  const y = radius * Math.sin(angleInRadians);

  return [x, y];
}

const convertDegreesToRadians = (angle) => (angle * Math.PI) / 180;
```
I used to do all this logic in JavaScript and apply the final value in CSS, but these days, CSS has trigonometric functions built in! By combining them with CSS variables, we can set up a keyframe animation like this:
```css
@keyframes flingAway {
  to {
    transform: translate(
      calc(cos(var(--angle)) * var(--distance)),
      calc(sin(var(--angle)) * var(--distance))
    );
  }
}

.particle {
  animation: flingAway 1000ms ease-out;
}
```
Then, when we render our particles, we define --angle and --distance for each one. Here’s what that looks like in JSX:
```jsx
import React from 'react';
import { random } from '@/utils';

function Particle() {
  // Keep the angle in degrees, since we hand it to CSS with a `deg` unit:
  const angle = random(200, 240);
  const distance = random(30, 60);

  return (
    <div
      className="particle"
      style={{
        '--angle': `${angle}deg`,
        '--distance': `${distance}px`,
      }}
    />
  );
}

export default React.memo(Particle);
```
This is the core strategy I’ve been using for particles, and it works great. There’s a bunch of other stuff we can do to make it even better, like:
- Giving each particle a bit of spin with transform: rotate().
- Randomizing the animation-duration and animation-delay, to make it feel less choreographed/robotic.
- Using a custom easing curve with linear().
Unless you’re a math enthusiast, this “polar coordinates” stuff probably doesn’t send a thrill up your leg, but honestly, it’s a critical concept for the sorts of things I build, one of the secret little keys that I rely on all the time.
For example, the interactive rainbow on this blog’s homepage relies on polar coordinates:
So does this “angle” control I created for my Gradient Generator:
And this absolutely-ridiculous effect in Tinkersynth, my generative art toy, relies entirely on shifting between cartesian and polar coordinates:
These are the first three examples that came to mind, but the list goes on and on. We’ll see more examples in the course. 😄
The “Whimsical Animations” landing page is littered with random shapes: tubes and octahedrons and eggs, all sorts of stuff.
I made these shapes myself using Blender, which is 3D modeling software. After creating 22 of these lil’ shapes, I realized I had a problem. 😬
All of the optimization tools I use (like next/image, cwebp, tinypng, etc) strip out color profile information. They flatten my beautiful wide-gamut images into the sRGB color space, losing a ton of richness and vibrance in the process:
If the two images look the same to you, it’s likely because you’re not using a display that supports the P3 color space.
When I keep them in their native P3 color space, each image is between 50kb and 150kb. With 22 individual images, I’d be sending almost two megabytes of assets, which feels like way too much for decorative images like this!
It would also mean that each image would blink into existence whenever it finished loading, on its own schedule, creating a distracting flurry with no rhyme or reason.
To solve these problems, I used a sprite. ✨
A sprite is a single image that contains all of the individual shapes packed together. Here’s a shrunk-down version:
In my markup, I create individual <img> tags for each shape, using the object-position property to pan around inside the image and show a single shape. The code looks something like this:
```html
<style>
  .decoration {
    object-fit: none;
    object-position: var(--x) var(--y);
    /*
      Support high-DPR screens by rendering at 50%
      of the image’s true size:
    */
    transform: scale(0.5);
  }
</style>

<img
  alt=""
  src="/images/shape-sprite.png"
  class="decoration"
  style="--x: -387px; --y: -125px; width: 120px; height: 240px"
/>
<img
  alt=""
  src="/images/shape-sprite.png"
  class="decoration"
  style="--x: -42px; --y: -201px; width: 456px; height: 80px"
/>
<!-- ...and so on, for all 22 shapes -->
```
This is pretty tedious work: using image-editing software, I go through the shapes one by one, measuring each one’s distance from the top/left corner, as well as its width/height. I hardcode all of this data in a big JSON object, and then map over it and render an <img> tag for each one.
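A rough sketch of what that data and rendering loop might look like (the field names and component are mine, not the actual site’s code):
```jsx
const SHAPES = [
  { x: -387, y: -125, width: 120, height: 240 },
  { x: -42, y: -201, width: 456, height: 80 },
  // ...and so on, for all 22 shapes
];

function Decorations() {
  return SHAPES.map((shape, index) => (
    <img
      key={index}
      alt=""
      src="/images/shape-sprite.png"
      className="decoration"
      style={{
        '--x': `${shape.x}px`,
        '--y': `${shape.y}px`,
        width: shape.width,
        height: shape.height,
      }}
    />
  ));
}
```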
In order for images to look crisp on high-DPR displays like Apple’s Retina displays, the image is actually twice as big as its displayed size. I use transform: scale(0.5) to shrink it down to its intended size. Ideally, I should have two or three different versions of the spritesheet and swap between them based on the monitor’s display pixel ratio, but ultimately this’ll still look fine on standard displays.
By using a sprite, we also solve the problem of each image popping in whenever it finishes loading. Instead, I set it up so that the images would fade in sequence, starting from the center and moving outwards. Here’s what that looks like, at half-speed:
This fade animation uses a keyframe animation:
```css
@keyframes fadeFromTransparent {
  from {
    opacity: 0;
  }
}
```
Then, I use animation-duration and animation-delay to create the staggered swelling effect:
```html
<img
  alt=""
  src="/images/shape-sprite.png"
  class="decoration"
  style="
    --x: -42px;
    --y: -201px;
    width: 456px;
    height: 80px;
    animation-duration: 800ms;
    animation-delay: 200ms;
  "
/>
```
Each <img> element is given custom values for both animation-duration and animation-delay, based on its perceived distance from the center of the screen.*
*I’m oversimplifying a bit here; I actually gave each element a “fadeScale” value between 0 and 1, and then normalized that value based on min/max values I could tweak to come up with the perfect sequence.
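Here’s a loose sketch of that kind of normalization. The helper and all of the numbers are hypothetical knobs, not the values used on the real page:
```js
// `fadeScale` is 0 for shapes near the center, 1 for shapes at the edges.
const MIN_DELAY = 0;
const MAX_DELAY = 600;
const MIN_DURATION = 600;
const MAX_DURATION = 1000;

function getFadeTiming(fadeScale) {
  return {
    animationDelay: `${MIN_DELAY + fadeScale * (MAX_DELAY - MIN_DELAY)}ms`,
    animationDuration: `${
      MIN_DURATION + fadeScale * (MAX_DURATION - MIN_DURATION)
    }ms`,
  };
}
```
Shapes near the center start right away; shapes near the edges wait a bit longer, producing the outward ripple.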
This works great on localhost, but it doesn’t work in production: keyframe animations start immediately, the moment the <img> element is created. It doesn’t wait for the image to be loaded!
Here’s how I solved that in React:
```jsx
function ShapeLayer() {
  const [hasLoaded, setHasLoaded] = React.useState(false);

  React.useEffect(() => {
    const img = new Image();
    img.src = "/images/shape-sprite.png";

    img.onload = () => {
      setHasLoaded(true);
    };
  }, []);

  if (!hasLoaded) {
    return null;
  }

  // Once `hasLoaded` is true, render all of the shapes...
}
```
On first render, this component doesn’t specify any UI. It creates a detached dummy image and registers an onload handler. When the image has finished downloading, I change a state variable, which causes all of the <img> tags to be created. This way, the fade sequence only starts when the image is available.
One last little trick: Despite my best optimization efforts, this image still wound up being pretty hefty (474kb). I saved some space by consolidating everything in a single image, but png compression can only do so much.
On slower connections, it might take several seconds for the image to download, and I didn’t want to disrupt the user’s experience by randomly introducing a bunch of images long after the page has loaded! I wanted something akin to font-display: optional — if the image doesn’t load within the first 5 seconds, don’t even bother showing it.
Here’s how I set that up:
```jsx
React.useEffect(() => {
  const start = Date.now();

  const img = new Image();
  img.src = "/images/shape-sprite.png";

  img.onload = () => {
    const loadTime = Date.now() - start;

    if (loadTime <= 5_000) {
      setHasLoaded(true);
    }
  };
}, []);
```
I measure the time when the image-loading process starts, and then get the difference when the image has finished loading. If it took more than 5 seconds, I don’t do anything, and this component continues to return null.
So, a ~500kb image used exclusively for decorative purposes does feel a bit indulgent. Ideally, users would be able to opt out of receiving it if they have a limited amount of bandwidth.
This may be possible in the future; the prefers-reduced-data media query would allow a user to specify this preference in the same way they can currently specify motion preferences. Unfortunately, it’s not implemented in any browsers.
If/when support is added, I plan on updating my code to avoid downloading this large sprite image when this media query is specified by the user.
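If/when that day comes, the tweak could look something like this. This is purely a sketch, and remember that no current browser recognizes this media feature yet:
```js
React.useEffect(() => {
  // Hypothetical future check — today this feature isn’t recognized,
  // so `matches` is always false and the sprite loads as usual:
  const prefersReducedData = window.matchMedia(
    '(prefers-reduced-data: reduce)'
  ).matches;

  if (prefersReducedData) {
    return;
  }

  const img = new Image();
  img.src = '/images/shape-sprite.png';
  img.onload = () => setHasLoaded(true);
}, []);
```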
Two of the assorted shapes are intended to be translucent, made of glass. I thought it would be fun if they also blurred anything that moved behind them. Using the bomb, you can reposition the glass shapes to sit in front of stuff, like this:
This was surprisingly tricky. Blender does include the transparency as part of its export, but it was too clear. It didn’t look realistic. Plus, the png compression added some weird artifacts:
I recently wrote about the backdrop-filter property, which allows us to apply a blurring algorithm to everything behind an element, but things didn’t quite work out:
backdrop-filter works based on the shape of the <img> DOM node. It’s not smart enough to only apply the blurring to the stuff behind the opaque pixels within the image!
To solve this problem, I used the clip-path property to draw a polygon in the shape of the glass pane, fiddling with the points until it looked right. Here’s the shape of that polygon:
The polygon() function doesn’t allow us to specify a corner radius for rounded corners, so our clipped area isn’t perfect, but it’s close enough to work well in this situation. 😄
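The general shape of that CSS looks something like this — the polygon points below are placeholders, not the carefully-fiddled real ones:
```css
.glass-shape {
  backdrop-filter: blur(8px);
  /* Trim the blur region down to (roughly) the glass pane itself: */
  clip-path: polygon(
    12% 4%,
    88% 10%,
    94% 90%,
    8% 96%
  );
}
```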
In an earlier version of this landing page, I used pieces of candy rather than the random shapes you see now:
I spent a few days creating all of these assets by hand in Blender, and then realized that it didn’t feel right. I was focused on making each type of candy look realistic, like adding small sugar crystals on the gummy worms. When I put them all together on a landing page, though, it didn’t feel cohesive. I also wasn’t sure I wanted to use candy as a metaphor for the course. So in the end, the candy got thrown out.
I am a perfectionist when it comes to the work I release, which means I spend a lot of time on stuff that y’all never see. This landing page took many weeks of full-time work, with lots of iteration and revision. Please don’t think that I whipped this up over a weekend, or that you’re seeing my very first attempt. 😅
So this was totally unnecessary, but I built a fully-functional synthesizer. 😅
The synthesizer is revealed by transforming the signup form using the “wand” tool. It’s exclusive to the desktop experience.
The synthesizer is played either by clicking the keys with your mouse, by pressing keys on a QWERTY keyboard, or with a MIDI controller. All of the sound it makes is generated live in-browser; no pre-recorded audio is used.* I built it using the Web Audio API. Most stuff was built from scratch, though I did use tuna for some of the effects.
*A single audio file of a long echo is used for the convolution effect, enabled with the “Reverb” slider.
For most of the bells and whistles on this landing page, I tried to pick things strategically, showing off the things you’ll actually learn to build in the course. For this one, though, it was purely an exercise in self-indulgence 😅. We won’t cover the Web Audio API in the course.
That said, there are some pretty interesting UI details here too. For example: aside from the nameplate in the top-left corner, zero images are used. The UI was created entirely using layered gradients and shadows!
Doing this sort of “CSS art” can seem really intimidating, but it’s honestly not as scary as I expected. It’s actually pretty remarkable how good things look almost by default when you start layering gradients!
Like all good easter eggs, the synthesizer has 3 hidden features of its own. I won’t spoil them here, but I’ll give you some hints:
If you’ve poked around with the landing page, you’ve likely discovered that just about everything has a sound effect.
This is a bit controversial; people generally don’t expect websites to make noise! But our devices do have volume controls, so it’s easy for people to opt out of sound. I think as long as our sound effects are tasteful and not too loud, we can get away with it.
Lots of folks have told me that they’d love to start adding sound effects to their projects, but they don’t know where to find high-quality sound effects.
For years, my main source was freesound.org. As the name implies, freesound is a huge database of free sound effects. They’re free in both senses of the word: you don’t pay anything to download them, and you’re free to use them however you wish, without restriction.
That said, browsing freesound often feels like a “needle in a haystack” situation. There are some real gems in there, but you need to sift through a lot of rocks to find them.
Alternatively, there are paid options. I used Splice to find this “industrial machinery” sample I used for the marble cannon on the confirmation page:
And finally, the thing I’ve been doing the most recently is recording my own sound effects! Most of the examples we’ll explore in this section were recorded by me, using a Zoom handheld recorder, experimenting with random objects in my environment. Not only is this incredibly fun, but it tends to produce the best results.
Let’s talk about some of the sound-related tricks I used on this page.
One of my favourite techniques is to have multiple versions of each sound, to get a bit of natural variation.
This’ll be easier to explain with a demo. Try to drag the slider, with sound enabled. Flip between the two modes to hear the difference.
When using the slider, you should hear a quiet ticking sound whenever its value changes. In some environments (like iOS), it’s a bit of a gamble as to whether you’ll actually hear these sounds or not. 😅
If you don’t hear anything and have confirmed that your device isn’t muted, it may help to play the video just above; that video has sound, and it seems to sometimes enable sound for the rest of the page.
The Single sample mode plays the exact same sound every time the slider’s value changes, while the Multiple samples mode randomly picks one of five sounds I recorded each time. It’s a subtle difference, but the multi-sample approach feels a bit less robotic to me, a bit more natural. Especially when dragging the slider quickly.
This is why I’m such a big fan of recording my own sounds. Most existing soundbanks will give you a single “version” of a sound, but when we record our own, we can collect a palette of samples.
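Boiled down, the multi-sample approach is just “pick one of the recordings at random each time”. A sketch, with made-up file names and volume:
```js
const TICK_SAMPLES = [
  '/sounds/tick-01.mp3',
  '/sounds/tick-02.mp3',
  '/sounds/tick-03.mp3',
  '/sounds/tick-04.mp3',
  '/sounds/tick-05.mp3',
];

function playRandomTick() {
  // Pick one of the recorded samples at random:
  const index = Math.floor(Math.random() * TICK_SAMPLES.length);
  const audio = new Audio(TICK_SAMPLES[index]);
  audio.volume = 0.3;
  audio.play();
}
```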
For buttons, I play one sound when the button is pushed, and a separate sound when it’s released:
To make these samples, I tried pressing a bunch of buttons on various devices I had lying around the house. When I found something that matched the UI, I recorded myself pressing it a bunch of times, and selected the nicest samples.
Like in the previous example, I’m not playing the exact same sound each time. I have 6 total samples (3 pushing, 3 releasing).
In a similar vein, the magic wand uses a plunger sample, and I broke the sample up so that it plays the first half of the sound on mouse-down, and the second half on mouse-up:
One of the easter eggs in the synthesizer is the ability to “pull up” a secret button:
To make this feel more tactile, I recorded a series of ascending clicks. Specifically, I dragged a pen along the plastic fins of my humidifier, which naturally rose in pitch since the plastic fins get shorter towards the top.
If you’re a React developer and you’d like to start adding sound effects to your projects, I have a lil’ library that can help! A few years ago, I open-sourced the custom hook I use, use-sound.
Under the hood, it uses Howler.js, a longtime battle-tested JavaScript library for playing sounds. So I’m delegating all the hard audio stuff to them.
To set expectations: it’s not a project I’m actively maintaining, in the traditional sense. I don’t really look at the issues or PRs. But I use it in my own projects and it works well for me, so I figured I’d make it available for anyone else who wants to use it!
When someone signs up for the waitlist, celebratory fireworks are released:
By default, this effect is pretty tame, but things can get pretty wild using the FIREWORKS PER SECOND slider at the bottom of the screen. 😄
I created these fireworks from scratch using 2D Canvas. No additional libraries were used. The code honestly isn’t too scary; it’s a bunch of smaller ideas (like polar coordinates!) combined to create something that feels complex.
We’ll build this effect in the course. And in the process, you’ll learn the underlying techniques that can be used to build all sorts of celebratory effects.
As I mentioned earlier, my main goal with this course is to give you the tools you need to create your own interactions and animations. The web is full of generic NPM-installed confetti and formulaic ChatGPT-generated effects, and they fail to spark joy because we’ve all seen them 100 times before.
A crucial ingredient for whimsy is novelty. A charming, delightful effect becomes mundane and annoying surprisingly quickly! So I’m not really interested in giving y’all a handful of “whimsy formulas”, or snippets that you can copy/paste. I want to teach you the core building blocks you can use to design and build effects that are unique to you. ✨
If you’ve registered for one of my existing courses, you’ve probably seen this firework effect before; I use it in my course platform’s onboarding flow. I try not to “recycle” effects like this, but I really do like my fireworks. 😅
That said, I did add one new hidden feature, exclusively for this landing page. I won’t spoil the surprise, but something happens when you use the “wand” tool to transform the FIREWORKS PER SECOND slider.
You can see it for yourself by joining the waitlist. If you’ve already signed up, you can enter the same email to go through the flow again (you won’t receive a second confirmation email, and it won’t affect anything).
You might’ve noticed that the fireworks don’t make any noise. This might seem like a curious absence, given how lively everything else is!
All of the sound effects I use on this landing page are short and in direct response to a user action, like pushing a button or dragging a slider. The fireworks happen automatically, and they run forever. Sound effects might’ve been charming for the first few seconds, but they would quickly become irritating.
I thought about adding a mute button, but some users might not notice it, especially if they’re distracted. I think the ideal solution would be an “unmute” button, so that the fireworks start silent but can have their sound turned on… but making realistic firework sounds seems hard, and not worth the trouble for an opt-in feature.
Ultimately, we don’t have to add sound effects to everything!
There’s so much more I could share, like the physics of explodable content or the dozens of people who submitted translations for the main tagline, but this blog post is way too long already. 😅
If you have any questions about this landing page, or my upcoming course, you can shoot me a message, or hit me up on Bluesky.
And if this course sounds worthwhile to you, the best way to stay in the loop is to join the waitlist on the landing page:
Thanks for reading! ❤️
February 24th, 2025
The most exciting thing about container queries, in my opinion, is that they expand what’s possible in terms of user interface design. They give us new options when it comes to responsive design, creating UIs that would be impractical or impossible using traditional media queries.
In this post, I’ll share the most useful pattern I’ve discovered, so that you can start taking full advantage of container queries in your own work.
This blog post assumes that you understand the fundamentals of CSS container queries. If you’re still getting the hang of them, you may wish to start with my other post, “A Friendly Introduction to Container Queries”.
There’s one pattern in particular that I find myself using over and over again. Let’s look at an example, from this blog:
This layout is used to display newsletter issues.
On desktop, it’s a two-column layout. The email metadata on the left, the content on the right. On mobile/tablet, it collapses to a single column:
This is a pretty common design pattern, and it’s easily solved using media queries, but it leads to a curious side-effect: the width of each column actually increases when the viewport shrinks below the mobile threshold.
Keep your eye on the width of the left-hand column as you shrink the (virtual) window, using the slider:
When we reach the mobile threshold, our two-column layout becomes a one-column layout, which means the metadata column actually gets bigger, expanding to fill the entire width of its container.
Now, here’s where it gets tricky. I have two different layouts for the metadata column, depending on the available space:
When there’s enough room, I want to show the key/value pairs in a single row. Otherwise, the values should move to a new line.
But we want to do this based on the container’s size, not the viewport’s size! There isn’t a clear linear relationship between the two.
Here’s the ideal behaviour. Notice how the metadata layout changes back-and-forth as the window changes size:
We’re using media queries to control the “top-level” layout, flipping from two columns to one column, but we can’t really use media queries to describe how the stuff within those columns should respond dynamically.
Or, well, technically we can, but it’s messy and fragile. We could do something like this:
```css
.metadata-column {
  /* Condensed styles here */

  @media (min-width: 35rem) and (max-width: 42rem),
    (min-width: 60rem) {
    /* Sparse styles here */
  }
}
```
This approach combines multiple media conditions using a comma, which acts like an OR operator. We apply the “Sparse” styles if our viewport is between 35rem and 42rem, or at least 60rem.
But where did these numbers come from? 35rem, 42rem, and 60rem aren’t breakpoints in our design system, they’re magic numbers. They’re the arbitrary values that happen to work given the current layout. When I’ve gone with this approach in the past, I’ve literally measured the width of the viewport at the points where I wanted it to flip.
This is extremely fragile. It’ll work as long as none of the styles change, but even minor tweaks like adjusting the padding on one of the columns can cause problems.
It’s easy to imagine another developer coming along in a few months and tweaking, say, the gap between the columns. All of a sudden, our calculated values are wrong, and the content starts to overflow near those arbitrary breakpoints. The developer probably won’t even notice, since the issue only happens at a narrow range of viewport widths, but some of our users will definitely notice!
Check out how much nicer the solution is with container queries:
```html
<style>
  .metadata-column {
    container-type: inline-size;
  }

  .metadata {
    /* Condensed styles here */

    @container (min-width: 19rem) {
      /* Sparse styles here */
    }
  }
</style>

<div class="metadata-column">
  <div class="metadata">
    <!-- Stuff here -->
  </div>
</div>
```
As we cover in “A Friendly Introduction to Container Queries”, the @container at-rule works just like @media, except it uses the size of a defined container element. We specify which element should act as the container with the new container-type property.
Instead of 3 arbitrary numbers, we have 1 intentional number; 19rem, in this example, is the actual size that we’ve chosen for the flip, because the “sparse” layout would feel too cramped below that threshold.
You might feel like it’s still a bit of a code smell that we have to define any breakpoints at all. Shouldn’t the ideal solution flip automatically between layouts based on whether the text can comfortably fit on 1 line or not?
This sort of “numberless” adaptability is known as fluid design. Some parts of CSS do work like this, like the flex-wrap property or the auto-fit keyword in CSS Grid.
Fluid design is wonderful, but it really only works in a narrow set of circumstances. In cases like this, where we have multiple children that all need to flip at the same point, I don’t believe it’s possible to solve this problem with fluid design.
And honestly, I’m not bothered about having an explicit breakpoint in this sort of situation. It doesn’t feel fragile to me at all; it won’t break if we change unrelated things, like the other column or the parent row, since we’re measuring the metadata column itself.
You might be wondering: what happens when someone visits using an older browser, one which doesn’t support container queries?
The great thing about this strategy is that it fails gracefully. For folks on older browsers, the CSS within our @container at-rule will never get applied. The “condensed” styles will never get overwritten, no matter what the container size is.
This means that our UI won’t stretch out to take advantage of the extra real estate in larger containers, which isn’t ideal, but it also isn’t really a problem. By using min-width container queries, we avoid the real problem of trying to cram too much stuff into a tiny container, leading to overflows and broken UI.
Containers are defined using the container-type CSS property. This is how we create the boxes that our container queries will measure.
One of the lesser-known features of this API is that we can choose which container to use, if multiple ancestors establish themselves as containers.
By default, the nearest ancestor will be used. Try to resize the “RESULT” pane below by dragging the middle divider (or focusing and using left/right arrow keys):
The RESULT pane can only be resized on larger screens. If you’re on a desktop device, please increase the window size until the RESULT pane sits beside the code editor.
[Code playground]
The setup here is that our .child element has two container ancestors, its parent <section> and grandparent <main>. By default, the nearest ancestor will be used, and indeed, we can see that the element’s parent <section> is currently being used.
We can manually select a different container, though! Check this out:
[Code playground]
When we define our container query with @container, we can optionally specify a container-name, which allows us to override the default behaviour and specify a different container!
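As a minimal sketch (the names and breakpoint are mine, not the playground’s exact code):
```css
main {
  container-name: outer;
  container-type: inline-size;
}

section {
  container-name: inner;
  container-type: inline-size;
}

/* Measure the <main> grandparent, rather than the nearest <section>: */
@container outer (min-width: 40rem) {
  .child {
    /* Styles based on the outer container’s size */
  }
}
```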
We can simplify our code a bit by using the container shorthand property:
```css
main {
  container: outer / inline-size;

  /* Equivalent to: */
  container-name: outer;
  container-type: inline-size;
}
```
The slash character (/) is a modern convention in CSS as a way to separate groups of values. It has nothing to do with division.
So, I’ve been using container queries in my own work for a few months now, and it still feels to me like we’re just scratching the surface of what’s possible. ✨
In this blog post, we explored the killer use case I’ve discovered so far: creating sub-layouts within our media queries that expand as the viewport shrinks. I’ve used this trick all over this blog, from the Gradient Generator to my About page:
I’ve also seen other developers discover the same pattern; Ahmad Shadeed uses it for adjusting the caption on “feature images”, and has a wonderful blog post on the subject:
Once you start using this pattern, you’ll see opportunities for it everywhere. 😄
And this is just the tip of the iceberg. The ability to select which container is used could unlock some really interesting possibilities, allowing us to create multi-layered UIs, layouts within layouts.
Whether anyone will be clever enough to use this stuff to its full potential remains to be seen, but it makes me excited for the future of web UIs!
I’m going to continue experimenting with container queries, and I’d encourage you to as well. Browser engineers have given us some incredible new tech, and now it’s up to us to show them that their time was well spent. 😄
Did you know I have a comprehensive CSS course? It’s like a supercharged version of this blog: in addition to interactive articles like this one, my course features bite-sized videos, challenging exercises, real-world-inspired projects, and even a few minigames. 😄
I made it specifically for folks who work with a JS framework like React, Angular, or Vue. Most of the course is focused on CSS principles, but we learn about them in a component-driven context.
You can learn more here:
January 27th, 2025
One of my all-time favourite CSS tricks is using backdrop-filter: blur() to create a frosted glass effect. I use it in just about every project I work on, including this blog!
Here’s a quick demo, to show what I’m talking about:
This effect helps us add depth and realism to our projects. It’s lovely.
But when I see this effect in the wild, it’s almost always missing some crucial optimizations. A couple of small changes can make our frosted glass so much more lush and realistic!
In this post, you’ll learn how to make the slickest frosted glass ever ✨. We’ll also learn quite a bit about CSS filters along the way!
I learned about this effect from Artur Bien’s incredible demos. He credits Jamie Gray for the original concept.
To briefly explain the underlying concept: CSS gives us quick and easy access to SVG filters via the filter property.
For example, we can give elements a Gaussian blur with filter: blur():
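For instance, a sketch of the sort of CSS behind that demo:
```css
img {
  /* Soften the image with a 6px Gaussian blur: */
  filter: blur(6px);
}
```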
There are lots of fun filter options, the sorts of things you’d find in image-editing software. Like, rotating the hue of all the colors:
In these examples, I’m applying the filters to an <img> tag, but we can apply them to standard DOM nodes as well:
Pretty neat, right?
Things get even cooler with backdrop-filter. This property lets us apply these same filters to the stuff behind a given element.
For example:
In this demo, the .magic-ring element sits in front of a photo (source). It uses the backdrop-filter property to apply some filtering to everything behind it, which can be used for some pretty artistic effects.
In practice, I pretty much only use backdrop-filter for one use case: blurring everything behind an element, usually a header, to create the “frosted glass” effect I mentioned earlier:
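The typical recipe looks something like this (representative values, not a specific site’s implementation):
```css
.site-header {
  position: sticky;
  top: 0;
  /* A translucent wash, so the header reads as “glass”: */
  background: hsl(0deg 0% 100% / 0.75);
  /* Blur whatever scrolls behind it: */
  backdrop-filter: blur(16px);
}
```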
Alright. Let’s talk about the thing most developers miss.
Here’s the problem: The backdrop-filter algorithm only considers the pixels that are directly behind the element.
For filters like brightness or hue-rotate, that makes perfect sense. With blur, though, we actually want to consider pixels that are near the element too.
This is one of those things where a demo is worth a thousand words. Check out the difference:
By default, the gaussian blur algorithm is applied to all of the pixels behind the element. This means that if a big colorful element is near the element, it won’t have any effect.
That’s not really how frosted glass works in real life though. Light bounces off of objects and then goes through the glass. It looks so much better when the blurring algorithm includes nearby content.
Unfortunately, this isn’t something we can configure directly. Instead, we need to be a bit crafty.
Here’s the code:
[Code playground]
It looks complicated, but the principle isn’t too scary.
If we want the blur to consider elements nearby, we need to extend that element so that it covers those elements. Then, using a mask, we trim the excess away, so that it’s visually the same size as we originally intended.
Let’s walk through it step by step. First, we have a header with a backdrop blur:
[Code playground]
Because the red ball isn’t behind the header at all, it isn’t being considered by the blurring algorithm, and so we don’t get that soft red glow. We need to extend the header so that it covers at least some of the ball.
Rather than give the <header> an explicit height (which would lock us into a specific size, rather than a dynamically-calculated one), let’s move the backdrop-filter to a child element, and set that child element to be twice as large as its parent:
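At this step, the structure looks roughly like this (a sketch, not the playground’s exact code):
```html
<header>
  <div class="backdrop"></div>
  <nav><!-- Site navigation… --></nav>
</header>

<style>
  header {
    position: relative;
  }
  .backdrop {
    position: absolute;
    inset: 0;
    /* Twice as tall as the header, so nearby content gets blurred too: */
    height: 200%;
    backdrop-filter: blur(16px);
  }
</style>
```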
[Code playground]
Alright, now we’re getting somewhere! The .backdrop child grows to cover most of the red ball, blurring it correctly.
Now, we don’t actually want to see all of this excess backdrop. We need to trim it back to the size of the <header> parent element.
Maybe we can solve this with overflow: hidden?
[Code playground]
What you see here depends on your browser. On Firefox and Safari, this works great! But sadly, it doesn’t work on Chrome. There’s no soft red glow.
I think it’s an order-of-operations issue. In Chrome, the overflow trimming occurs before the filters are applied, so when the blurring algorithm is executed, the content has already been hidden.
For the same reasons, we can’t use overflow: clip or clip-path, but fortunately, we can use mask-image. The masking algorithm happens after the filters, in all browsers. ✨
Masking is a huge topic which is well beyond the scope of this tutorial, but the basic idea is that we can specify how transparent parts of an element should be. For example, if our mask is an opaque circle in a transparent box, that opaque shape can be applied to any other element:
Most commonly, masks are images in a format that supports transparency (like .png or .gif), but we can also use gradients as masks. For example, we can fade an image from opaque to transparent:
[Code playground]
For our glassy header optimization, we’re using mask-image to make the original header size fully opaque, and everything past that point fully transparent. Essentially our mask looks like this:
The relevant code looks like this:
```css
.backdrop {
  height: 200%;
  mask-image: linear-gradient(
    to bottom,
    black 0% 50%,
    transparent 50% 100%
  );
}
```
Our mask doesn’t look like a gradient, does it? I typically picture gradients fading smoothly from one color to the next.
It might feel like an “illegal building technique” (a term from the LEGO world, referring to assembling LEGO bricks in a way that the manufacturer did not intend), but this is what we need in this case. Our gradient is solid black from 0% to 50%, then it instantly becomes transparent for the final 50%.
Why 50%? We set height to 200%, so that .backdrop will always be twice as tall as its container. The percentages inside mask-image’s gradient are relative to the current element’s size.
For example, if our <header> is 200px tall, our .backdrop will grow to 400px (200% of its parent). Then, our mask will show the first 50% of this element (0px to 200px), and hide the rest (200px to 400px).
Here’s the code again. Feel free to experiment with it, to develop your intuition for what’s happening:
[Code playground]
This is the basic idea behind this solution, but there’s a bug we need to fix, and a couple more optimizations we can consider.
Our current implementation has a pretty big issue: nearby elements become unclickable and unselectable.
Try to select the text just below the header:
[Code playground]
Here’s what happens when I try on desktop:
Here’s the problem: the mask-image property will visually hide parts of an element, but the element is still there. We’re not able to click on the text because that .backdrop is extending out and covering it!
Fortunately, it’s an easy fix:
```css
.backdrop {
  position: absolute;
  inset: 0;
  height: 200%;
  backdrop-filter: blur(16px);
  mask-image: linear-gradient(
    to bottom,
    black 0% 50%,
    transparent 50% 100%
  );
  pointer-events: none;
}
```
The pointer-events property allows us to specify that an element should be ignored when resolving click/touch events. mask-image makes the backdrop invisible, and pointer-events: none makes the backdrop incorporeal (something that can be seen but not felt, like a mirage or a ghost).
This is another reason why .backdrop needs to be a child element. We don’t want the <header> itself to ignore clicks, since it typically has navigation links. We want to target the frosted glass element specifically.
By extending the glassy backdrop below the header, we can ensure that the blurring algorithm takes nearby elements into consideration even before they reach the header.
But what about when things leave the top of the viewport?
Things aren’t quite so nice. Scroll down slowly in this demo:
Notice that weird goopy flickering, at the very top of the viewport?
It’s the same issue as before. The gaussian blur algorithm is only considering the pixels directly underneath it. When a yellow longboard is scrolled out of view, for example, that data is no longer factoring into the blur algorithm, causing those unnatural color flickers.
Unfortunately, we can’t re-use our solution here. As far as I can tell, elements outside the viewport are never considered by backdrop-filter, even if the elements are layered correctly.
The best solution I’ve found for this problem is to add a gradient that covers the flickering:
Here’s the code:
```css
.backdrop {
  position: absolute;
  inset: 0;
  height: 200%;
  background: linear-gradient(
    to bottom,
    /*
      Replace this with your site’s
      actual background color:
    */
    hsl(0deg 0% 0%) 0%,
    transparent 50%
  );
  backdrop-filter: blur(16px);
  mask-image: linear-gradient(
    to bottom,
    black 0% 50%,
    transparent 50% 100%
  );
  pointer-events: none;
}
```
Until now, the .backdrop element has been fully transparent; we haven’t applied a background at all. This gradient makes it opaque at the very top, blocking the flickering colors from view, but fading to transparent, to show the frosted glass effect.
In some circumstances, the frosted glass effect can be a bit distracting:
This feels too “busy” to me; the blurry text sitting behind the header makes the site name and navigation too hard to read. It all feels a bit messy, and not as subtle as I want.
There are two main ways to fix this. We could increase the blur radius:
Or, we could add a background-color to the parent <header>, making it semi-opaque:
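For example, roughly like this (the specific values here are just a sketch):
```css
header {
  /* A semi-opaque wash on the header itself: */
  background: hsl(0deg 0% 100% / 0.6);
}

header .backdrop {
  /* Or, crank up the blur radius instead: */
  backdrop-filter: blur(24px);
}
```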
(We could also tweak the gradient we added in the previous section, making it fade from fully-opaque to semi-opaque, but I prefer to keep the two things separate, so that I can tweak them independently.)
backdrop-filter has been around in all major browsers for a number of years now; according to caniuse, it’s above 97% support as I write this in December 2024. For our main optimization, we also need mask-image, which is almost as well supported, sitting at 96.3%.
Both properties require a -webkit prefix for some browsers, but most CSS tooling will add this for you automatically.
At the bottom of this blog post, I’ll include the full copy-ready code, which uses feature queries to make sure that older browsers still have a usable experience. They won’t get the frosted glass effect, but everything will still be readable and usable.
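For a rough idea of the shape of those feature queries (my own sketch, not the copy-ready code from the final playground):
```css
/* Fallback: a mostly-opaque header for browsers without backdrop-filter. */
header {
  background: hsl(0deg 0% 100% / 0.9);
}

@supports (backdrop-filter: blur(16px)) or
  (-webkit-backdrop-filter: blur(16px)) {
  header {
    background: hsl(0deg 0% 100% / 0.4);
  }
  header .backdrop {
    -webkit-backdrop-filter: blur(16px);
    backdrop-filter: blur(16px);
  }
}
```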
While working on this blog post, I ran into a known bug: backdrop-filter stops working in Firefox on a position: sticky element if an ancestor has both overflow and border-radius set.
It’s niche enough that you’re not likely to run into it, but if you do, it’s completely baffling 😅. Hopefully this saves you some time!
As if this stuff wasn’t complicated enough already, Artur Bien came up with an extra twist; we can create the illusion of a 3D piece of glass by adding a second blurred element with different filter settings:
Isn’t that lovely?!
Here’s how this works: The bottom edge is a separate DOM node with its own backdrop-filter. I find it looks better with a smaller blur radius (eg. 8px in the bottom edge, 16px in the main backdrop), and with an extra brightness filter to really make it pop. ✨
The code for this is a bit gnarly 😅. I’ve done my best to explain it in the comments below:
```html
<style>
  .backdrop {
    position: absolute;
    inset: 0;
    height: 200%;
    border-radius: 4px;
    background: hsl(0deg 0% 100% / 0.1);
    pointer-events: none;
    backdrop-filter: blur(16px);
    mask-image: linear-gradient(
      to bottom,
      black 0,
      black 50%,
      transparent 50%
    );
  }

  .backdrop-edge {
    /* Set this to whatever you want for the edge thickness: */
    --thickness: 6px;

    position: absolute;
    inset: 0;
    /*
      Only a few pixels will be visible, but we set the
      height to 100% to include nearby elements.
    */
    height: 100%;
    /*
      Shift down by 100% of its own height, so that the
      edge stacks underneath the main <header>:
    */
    transform: translateY(100%);
    background: hsl(0deg 0% 100% / 0.1);
    backdrop-filter: blur(8px) brightness(120%);
    pointer-events: none;
    /*
      We mask out everything aside from the first few
      pixels, specified by the --thickness variable:
    */
    mask-image: linear-gradient(
      to bottom,
      black 0,
      black var(--thickness),
      transparent var(--thickness)
    );
  }
</style>

<header>
  <div class="backdrop"></div>
  <div class="backdrop-edge"></div>
</header>
```
In the example above, I increased the brightness of everything behind the glassy edge by 20% with a secondary filter, brightness(1.2).
And in my original cupcake example, I tweaked both the brightness and saturation:
```css
.backdrop {
  backdrop-filter: blur(8px) brightness(90%) saturate(140%);
}
.backdrop-edge {
  backdrop-filter: blur(6px) brightness(110%) saturate(120%);
}
```
I’ve heard that increased saturation can help reduce the muddiness that comes from blurring, but really, this is more art than science. I recommend experimenting with various filter effects to come up with something that works for your particular use case!
Phew! We covered a lot of ground in this one.
Here’s the final code, with all of the optimizations we’ve discussed. I’ve also included feature queries, to make sure that our website remains legible on older browsers.
Feel free to copy this code, and make it your own! This is intended to be a starting point, not a complete solution. For example, you may wish to tweak the size of the backdrop’s overlap for your particular circumstances.
[Code playground]
There’s one big drawback with this approach: we lose the ability to have rounded corners.
Our strategy works by extending the bottom edge of the backdrop downwards, and then hiding that overflow with mask-image. When we apply a border-radius, we’re rounding that hidden bottom edge!
The only workaround I can think of would be to create an image of a rounded rectangle in image-editing software and specify that as the mask, but this would only really work if our element has a known aspect ratio. Otherwise, the corners would get stretched and look funny.
If anyone comes up with a solution to this problem, please do reach out and let me know!
If you’ve enjoyed this blog post, you might like to know that I have an entire course about CSS!
It’s called CSS for JavaScript Developers, and it’s built using the same tech stack as this blog: it’s chock full of interactive articles, demos, and opportunities to experiment. There are also bite-sized videos, exercises, workshops, and even a few mini-games!
You can learn all about my course here:
If you currently work as a software developer, your employer may be able to cover the cost of registration for my CSS course! Many companies offer an “education stipend”, a budget you can use to attend conferences or purchase books and courses. If your employer offers this perk, don’t let it go to waste!
I also created a template letter you can use to help convince your manager that this course would be a worthwhile investment for the business.
February 17th, 2025
According to caniuse, container queries are supported for almost 93% of users (as of November 2024). That sounds pretty good! My mom would have been thrilled if I came home with 93% on my report card. But is it actually sufficient when we’re talking about browser support levels?
Like so much in software development, the answer is it depends. In this blog post, I’m going to share the framework I use when deciding whether it’s appropriate to use a new CSS feature. We’ll look at the 3 individual factors I consider, and at the end, we’ll assemble them into a formula you can use to help you decide whether it’s OK to use a feature or not.
We’re focusing on CSS in this blog post, but the same framework can be used to evaluate whether a modern JavaScript or HTML feature can be used.
This blog post is written primarily for developers who work for companies and are trying to figure out where to draw the line in terms of browser support.
Have you heard of text-wrap: pretty?
It’s one of my favourite lil’ CSS features. It tweaks the line-wrapping algorithm so that it produces nicer-looking paragraphs. It avoids things like this:
I often end my sentences with an emoji, and the default line-wrapping algorithm will sometimes lead to it being stranded awkwardly on its own line. This is known as an “orphan” in typography.
The text-wrap: pretty declaration switches to a more sophisticated line-wrapping algorithm which avoids orphans, and generally makes paragraphs feel more symmetrical and balanced:
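Using it is a one-liner; for example:
```css
p {
  text-wrap: pretty;
}
```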
As I write this in November 2024, text-wrap: pretty doesn’t have very good browser support, about 72%. Probably too low to use in production, right?
Well, let’s consider what happens when someone visits from an unsupported browser. The text-wrap: pretty declaration has no effect. Those users don’t get the benefit of this nice little enhancement, but there also isn’t any downside.
In my mind, this is a perfect example of a progressive enhancement. It’s a bonus, a nice little extra for folks using modern browsers. For features like this, I don’t really care what the browser support is; even something low like 20% would be fine.
Other features fail a bit less gracefully. For example, I’ve become a fan of overflow: clip. It’s like overflow: hidden, but it doesn’t create a scroll container.
This means that we can finally clip things in one axis but allow them to overflow in the other axis:
This clipping behaviour is how most of us expect overflow-x: hidden to work. It’s great that we finally have a straightforward way to do this. But is it safe to use in production?
According to caniuse, the clip value is supported for ~93% of users. If someone visits using an older browser, the property won’t have any effect, which means the content will overflow normally (like the visible option shows).
If we’re using overflow-x: clip for purely cosmetic purposes, this might be an acceptable trade-off. ~7% of users will have a slightly jankier experience, but we’re not really interfering with their ability to access and use our website/webapp.
We do have to be careful though. In other scenarios, this property can cause bigger problems for folks in unsupported browsers:
In this situation, we’re using overflow-y: clip to hide the overflow, but when this property isn’t supported, the overflow winds up blocking the text in the subsequent paragraph!
This is the first factor in our framework. When deciding whether or not a CSS feature can be used, we should consider what the fallback experience is like.
Some properties, like text-wrap: pretty, are purely progressive enhancements and can be used regardless of browser support. But it gets more complicated with properties like overflow: clip; we need to evaluate these properties on a case-by-case basis to determine whether the fallback experience is acceptable or not. Other features, like subgrid, will very likely cause layout issues in unsupported browsers.
If the property you’re testing has 90%+ browser support, it can actually be kinda tough to find an unsupported browser to test with! Fortunately, we don’t actually need to run our website/webapp in an unsupported browser to check out the fallback experience. We can simulate it.
When the browser encounters a CSS key/value pair it doesn’t recognize, like overflow: clip, it gets ignored. It has no effect at all. So to test the fallback experience, we can temporarily delete this CSS declaration and see what happens in a modern browser.
I’ve made this a part of my standard workflow, so that I can check these instances one at a time, as they come up, by commenting out the potentially-unsupported CSS.
It’s a bit more difficult if you haven’t been testing along the way and have a bunch of CSS to check, but it’s not so bad. Here’s how I’d do it:
- Use find-and-replace to find the declaration in question (eg. overflow: clip;) and replace it with nothing ('').
- Check how things look and behave without it.
- Restore the original code (eg. with git stash).
).Some developers/designers have a bit of resistance to the idea of progressive enhancement and fallback experiences, since it means that every user has a slightly different experience. Don’t we want to ensure a consistent experience for all users, no matter what device they’re on?
The thing is, the experience can never be 100% consistent. That’s the whole idea behind responsive design; the websites/webapps we build should adapt dynamically to the user’s device, to provide the best experience possible for them.
We’re already used to thinking this way in terms of screen size, but that’s not the only dimension of responsiveness. Another dimension, for example, is text size: folks with low vision will need the text to be much larger, and so our website should still render correctly even at 200%+ font scaling.
Browser support is just another dimension, something for us to adapt to so that we can provide the best experience for each person based on their unique circumstances.
If the default fallback experience is unacceptable, we can often improve it by providing alternative CSS.
The simplest way to do this is to provide multiple values for the same property. For example:
```css
.thing {
  overflow: hidden;
  overflow: clip;
}
```
In CSS, declarations are evaluated from top to bottom, with later values overwriting earlier ones. In a supported browser, overflow: hidden will be overwritten by overflow: clip.
In unsupported browsers, however, overflow: clip will not be recognized as a valid value, and will be ignored. As a result, overflow: hidden will be applied instead.
In other situations, we might want to apply an alternative set of styles when a feature is unsupported. We can use the @supports at-rule for this:
```css
.parent {
  display: grid;
  gap: 1rem;
}

@supports not (gap: 1rem) {
  /*
    Any CSS in here will be applied only if the “gap”
    property is unsupported.
  */
  .child {
    margin-inline: 0.5rem;
  }
}
```
I’ve had mixed results using @supports in practice; it doesn’t always work the way I expect, since sometimes a feature is recognized by the browser but not fully supported. That said, it can still be a very handy tool!
And remember: the goal isn’t to produce exactly the same UI for all users. The goal is to provide a reasonable fallback experience.
Have you ever wondered where caniuse gets its data from? How, exactly, does it know that 92.77% of people are using browsers which support container queries?

caniuse gets its data from statcounter, a web analytics tool which is used on ~1.5 million websites. Whenever a person visits one of these 1.5 million websites, data about their browser is sent to statcounter, which is aggregated and released publicly(opens in new tab).

Because these 1.5 million websites are spread across all sorts of industries in hundreds of different languages, we wind up with a pretty good worldwide sample of internet usage. But your product’s audience might look very different!

For example: this table shows the difference between statcounter’s global sample and the traffic that visits this blog:
| Browser | Global | joshwcomeau.com | Delta |
| --- | --- | --- | --- |
| Chrome | 66.7% | 74.5% | 7.8% |
| Safari | 18.1% | 12.2% | 5.9% |
| Edge | 5.3% | 5.0% | 0.3% |
| Firefox | 2.7% | 6.8% | 4.1% |
| Opera | 2.2% | 0.9% | 1.3% |
| IE | 0.13% | 0.0% | 0.13% |
| Other | 4.87% | 0.6% | 4.27% |
There are some interesting differences between these two data sets!
Globally, Firefox has been on the decline for a while, falling to 2.7%. Readers like you, however, are 2.5x more likely to use Firefox than the worldwide average! Solid Firefox support is therefore especially important for me.

And while Internet Explorer (IE) has fallen to low levels of usage globally, it’s even less popular amongst my audience; in the 3-month sample I looked at, only 1 visitor used Internet Explorer, which works out to be 0.0000018% of my traffic!

But when I worked at Khan Academy, the numbers told a different story. Khan Academy, if you’re not familiar, is an education platform that mainly covers elementary/high-school topics. It’s frequently visited by students using computers in their school’s computer lab, machines that tend to be older and less-frequently-updated than personal computers. As a result, Internet Explorer was a surprisingly big percentage of overall traffic.

So, this is an important thing to consider in our framework. The global values supplied by caniuse may or may not represent the people who use our websites and web applications, and it’s worth figuring out which browsers are over-represented or under-represented.

caniuse has a pretty neat feature: you can connect your Google Analytics account and use your own data instead of its statcounter worldwide sample.
I haven’t tried this myself, since I don’t use Google Analytics, but if your company does, you can give it a spin on caniuse’s import page(opens in new tab).
So let’s suppose there’s a CSS feature you really want to use. Your best estimate is that 99% of your product’s audience uses a supported browser, but the fallback experience is totally broken. You’re wondering if you should spend time trying to fix it.
Is it OK to break the user experience for 1% of users? I think it depends on what service your website/webapp offers.

For example, let’s suppose that you work on a yacht rental service. Wealthy clients can use your app to rent luxury boats for weekend getaways.

I don’t believe there’s a moral imperative to provide this service, and so I think it’s fine to say “we are intentionally deciding not to support X browser”. This becomes a purely financial calculation, and you can weigh the lost revenue from 1% of users against the increased cost of supporting legacy browsers.

On the other end of the spectrum, let’s suppose you work for ClicSanté, a tool developed by the Quebec government to allow people to book vaccines, blood tests, and other medical appointments. For something like this, it’s super important that as many people as possible can access the service. We don’t want people to skip getting a vaccine because our webapp didn’t work for them!

This is the final consideration in my formula: if the fallback experience for legacy browsers is unusable, what is the harm caused by this lack of support? What are the real-world consequences if people can’t access your service?

You might be tempted to make a similar calculation when it comes to accessibility. If you don’t provide an essential service, is it really important that your website be accessible for people who navigate with a keyboard, or use a screen reader?
The difference is that people can switch browsers. A person using an outdated browser has the ability (most of the time) to update it. A blind person, however, does not have the ability to switch from a screen reader to a screen.
Even if we aren’t providing an essential service, it’s important that our websites and web applications are accessible.
Alright, so I’ve gone over the things I consider when evaluating browser support. Let’s put these factors to the test with a real-world example: is it OK to start using container queries on this blog?
According to caniuse, container queries are at ~93% browser support, but this is based on its worldwide sample. I dug into my analytics software, and based on some rough calculations, I think this number is closer to 97% for this blog’s audience.*

*This is a very rough estimate; the privacy-focused analytics tool I use, Fathom, doesn’t break down browser usage by version number, so I calculated this by extrapolating from less-common browsers like Opera and UC Browser.
Is 97% sufficient? Well, let’s think about the other factors.

First, what is the fallback experience? This will depend on what specifically we’re looking to do with container queries. As an example, let’s use the newsletter issue page, where I use container queries quite a bit.

To test the fallback experience, I deleted all of the CSS within container queries. This is the result:

Click the “Toggle” button to switch between the default UI (with container queries fully supported) and the fallback UI (with all of the CSS within container queries deleted, simulating an unsupported browser).

The fallback experience is definitely worse. The sender email is overflowing its container, obscuring the end of the domain. And my 3D mascot is taking up way too much space, making the text feel super cramped (I’m using container queries to hide the mascot when the container is too small, to prevent this exact issue).

In my opinion, though, these issues aren’t showstoppers. I don’t think they affect the user’s ability to read the newsletter issue, which is the primary purpose of this page. So I think this fallback experience is adequate.

But this is just one particular case. I’ve started using container queries in lots of places on my blog. Do I need to check the fallback experience of each one? After all, container queries often change layout properties, which could lead to some very broken UI.

Well, to answer this question, let’s consider the final factor in my formula: what is the potential harm caused by a broken experience?

My blog provides tutorials for web developers. The information is useful (at least, I hope it is 😅), but I don’t think anybody would say that it’s an essential service.
So, to summarize:

- Browser support is roughly 97% for this blog’s audience.
- The fallback experience is a bit worse, but still perfectly usable.
- This blog isn’t an essential service, so the real-world harm of a degraded experience is low.

Given all of that, I’m comfortable making the call that I can use container queries on this blog. 👍
But if I was working on a different product, I’d very likely come up with a different answer. I am definitely not saying that container queries should always get the green light!

For container queries specifically, there’s one thing we can do to dramatically increase the likelihood that our fallback experience is usable: we should use min-width container queries rather than max-width ones.
In classical Responsive Design, we call this strategy “mobile first”; our baseline set of styles is for the mobile view, and then we add/overwrite stuff inside min-width media queries for tablet and desktop:
```css
.main {
  /* Mobile styles here */
}

@media (min-width: 37.5rem) {
  .main {
    /* Tablet styles here */
  }
}

@media (min-width: 55rem) {
  .main {
    /* Desktop styles here */
  }
}
```
If we erased all of the code inside our media queries, we’d be left with only the mobile styles. This wouldn’t produce the best user experience, but it would probably still be usable. It would be a heck of a lot better than the inverse: trying to cram the desktop layout into a 5-inch phone screen.
With container queries, we don’t really think in terms of mobile/tablet/desktop, but the same sort of idea applies: if our baseline styles are for when the container is small, those styles will probably still work alright in larger containers when container queries aren’t supported. But if our baseline styles are the “big container” styles, it will probably create quite a mess in unsupported browsers.
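Here’s a rough sketch of what this “small container first” approach looks like in practice (the class names are illustrative, not taken from any real code):

```css
/* An ancestor needs to be registered as a container, eg.
   .card-wrapper { container-type: inline-size; } */

.card {
  /* Baseline styles: the small-container layout.
     In unsupported browsers, this is all that gets applied. */
  display: flex;
  flex-direction: column;
  gap: 1rem;
}

@container (min-width: 24rem) {
  .card {
    /* Larger-container layout, applied only when container queries
       are supported and the container is wide enough. */
    flex-direction: row;
    align-items: center;
  }
}
```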
For example, here’s what my Shadow Palette generator is supposed to look like on desktop:
And here’s what it looks like when I erase all of the code from inside container queries:
That “Light Position” control is too big, which is awkward, but really the experience is still totally usable. Because I’m using min-width container queries, the same styles will work for all screen sizes.
As engineers, we tend to appreciate simple pass/fail tests. Ideally, we’d have some number we could use as a benchmark: “if browser support is above 97%, use Feature X, otherwise, use Feature Y”.
But the real world is messier than this! I’ve done my best to distill my typical thought process into a repeatable framework, but honestly, it’s always going to be a bit murky, the sort of thing that requires real consideration to figure out.

Hopefully, though, the ideas in this blog post give you some confidence when it comes to making these decisions!

Speaking of confidence: did you know I have an interactive CSS course that helps you build a mental model for the entire CSS language? 😄

CSS for JavaScript Developers(opens in new tab) is designed for people who are sick of feeling like they just don’t have a solid grasp on CSS. I felt like this for years; I knew the basics, but I would frequently run into funky situations where the CSS snippets I had used over and over would inexplicably do something totally different. I was constantly getting tripped up by unexpected behaviour.

I spent years deepening my understanding of CSS, and it has been an absolute gamechanger. Things that used to seem inconsistent and arbitrary to me now make perfect sense. It turns out that CSS is actually a pretty great language when you get the hang of it!

You can learn more here:
November 26th, 2024

For a very long time, the most-requested CSS feature has been container queries. That’s been our holy grail, the biggest missing piece in the CSS toolkit.

Well, container queries have finally arrived. They’ve been supported in all major browsers for almost two years. Our prayers have been answered!

We can now apply conditional CSS based on an element’s container, using familiar syntax:

```css
@container (min-width: 40rem) {
  .some-elem {
    font-size: 1.5rem;
  }
}
```
Curiously, though, very few of us have actually been using container queries. Most of the developers I’ve spoken with have only done a few brief experiments, if they’ve tried them at all. We finally have the tool we’ve been asking for, but we haven’t adopted it.
There are lots of reasons for this, but I think one of the biggest is that there’s been a lot of confusion around how they work. Container queries are not as straightforward as media queries. In order to use them effectively, we need to understand what the constraints are, and how to work within them.

I’ve been using container queries for a few months now, and they really are quite lovely once you have the right mental model. In this blog post, we’ll unpack all of this stuff so that you can start using them in your work!
So for the past couple of decades, our main tool for doing responsive design has been the media query. Most commonly, we use the width of the viewport to conditionally apply some CSS:
```css
@media (min-width: 40rem) {
  .mobile-only {
    display: none;
  }
}
```
Most developers use pixels for media queries, but this tends to produce a worse user experience for people who increase their browser’s default font size. In this blog post, I’ll be using rem units exclusively for all media/container queries.
You can learn more in my blog post, “The Surprising Truth About Pixels and Accessibility”.
Media queries are great, but they’re only concerned with global properties, things like the viewport dimensions or the operating system’s color theme. Sometimes, we want to apply CSS conditionally based on something local, like the size of the element’s container.
For example, suppose we have a ProfileCard component, to display critical info about a user’s profile:

In this particular circumstance, each ProfileCard is pretty narrow, and so the information stacks vertically in 1 tall column.
In other circumstances, though, we might have a bit more breathing room. Wouldn’t it be cool if our ProfileCard could automatically shift between layouts, depending on the available space?
Maybe something like this:
In some cases, we can use media queries for this, if our ProfileCard scales with the size of the viewport… But this won’t always be the case.
For example, maybe we’re arranging these cards in a flex grid like this:
With dynamic layouts like this, each ProfileCard will use whichever layout makes the most sense given the amount of space available. It has nothing to do with the size of the viewport!
Clearly, media queries aren’t the right tool for this job. Instead, we can use container queries to solve this problem. Here’s what it looks like:
```css
.child-wrapper {
  container-type: inline-size;
}

.child {
  /* Narrow layout stuff here */

  @container (min-width: 15rem) {
    /* Wide layout stuff here */
  }
}
```
Pretty cool, right? I’m using native CSS nesting to place the @container at-rule right inside the .child block so that all of the CSS declarations for this element are in the same chunk of CSS.

But wait, what’s the deal with .child-wrapper? What is that container-type property doing??
Well, this is where things get a bit tricky. In order to use a container query, we first need to explicitly define its container. This can have some unintended consequences.
It’s worth spending a few minutes digging into this. Understanding this core mechanism will save us hours of frustration down the line. Let’s talk about the “impossible problem” with container queries.

The ProfileCard component uses several container queries to shift things around. If you’d like to see the full code for this component, it’s available in the Sources tab in your browser’s developer tools. I use sourcemaps to ensure that the original unminified code can be viewed.
If you haven’t used the Sources tab before, it can be a bit intimidating, but it’s really not so scary! Expand this sidenote to learn how to do it:
For something like 20 years now, ever since “responsive design” became a thing, developers have been asking for container queries. So why are we only getting them now??
Well, for something like 20 years, the CSS Working Group has been saying the same thing: It’s impossible to implement container queries. It can’t be done.

This’ll be much easier to understand with an example. Consider this scenario:
Code Playground

If you’re not familiar with the fit-content keyword, it’s a dynamic value that grows/shrinks based on the element’s content. If you add/remove some words to the paragraph, you’ll notice the paragraph change size:
Now, let’s suppose we want to bump up the font-size of that bold text, depending on the size of its container. We can imagine doing something like this:

```css
p {
  width: fit-content;

  @container (max-width: 10rem) {
    strong {
      font-size: 3rem;
    }
  }
}
```
This seems to make sense… When our parent <p> tag is 10rem or smaller, we apply font-size: 3rem to the <strong> tag within.

But let’s really think about this. When we change an element’s font-size, it doesn’t just affect the height of the characters. It also affects the element’s width:
When our container is 10rem or smaller, we apply styles that cause the container to grow beyond 10rem. The CSS that we apply conditionally causes the condition to no longer be met!
This next demo shows what would happen if this sort of thing were allowed. Reduce the number of characters until the container is less than 10rem, and notice what happens:
\\nInteracting with this demo will cause the UI to flicker. The rate of flickering is roughly 2x per second (so, relatively slow).
This is mindbending stuff, and it took me a minute to really understand the problem here.
When our <section> is less than 10rem wide, our condition is met, and so we apply some CSS, bumping up the font size. But this causes our <p> tag to expand, which causes the parent <section> to grow beyond the 10rem threshold! The CSS we write inside a container query can affect the container itself, leading to these infinite loops of flickering UI.
This is the core problem that the CSS Working Group said was unsolvable. This is why we haven’t had container queries until now.
We don’t run into this problem with media queries because their conditions are based on immutable global states. CSS does not give us the power to change things like the width of the viewport or the user’s motion preferences. So there’s no way for us to invalidate a media query from within it.

I’m using the fit-content keyword to demonstrate the issue here, but the problem is much broader than this one niche property. Lots of things in CSS work this way, with parents dynamically responding to their children.
The solution to this unsolvable problem appeared suddenly, with the introduction of a completely unrelated API.

The Containment API, released a few years ago, allows us to specify that certain slices of the DOM are self-contained, and won’t leak out and affect other parts of the DOM.

I don’t want to go on too much of a tangent here, but here’s a quick demonstration that shows how this API works:
By default, our red box will grow and shrink to contain its children. This is exactly the sort of dynamic behaviour that causes problems for container queries.

By setting contain: size on the parent, we sever this connection. As a result, the height of the container no longer depends on its content. If we don’t specify an explicit height, the container will collapse down to 0px (plus padding).
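Here’s a minimal sketch of that idea (the class name and padding value are illustrative):

```css
.red-box {
  /* Tell the browser that this element’s size doesn’t depend
     on its children. */
  contain: size;

  /* With no explicit height, the box now collapses down to 0px tall
     (plus this padding), no matter how much content is inside it. */
  padding: 16px;
}
```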
The Containment API was designed with performance optimizations in mind. CSS is a very dynamic language, and this means the browser often has to do a lot of work when things change. For example: when we tweak the height of the axolotl image, it affects not only the elements within that demo, but everything that follows in this article. Paragraphs like this one get shifted up and down on every size change, causing a layout recalculation and a repaint.

And so, if we know that an element is self-contained and won’t affect anything else, we can use the contain property to let the browser know that it can skip certain calculations. A helpful analogy for React devs: it’s a bit like React.memo(). We can use contain to opt out of recalculations that we know are unnecessary.
Now, truthfully, I haven’t found myself using contain on a regular basis. Modern browsers are already heavily optimized and will skip calculations that are obviously unnecessary. I get the impression that contain is mostly intended for edge-cases, or for situations where every last drop of performance is critical.
But this API has provided the final foundational piece for container queries! This is how we solve the impossible problem. This API gives us the ability to “short-circuit” the infinite loop by specifying that a parent shouldn’t respond dynamically to its content.
We’re really only scratching the surface of the Containment API here. If you’d like to go deeper, I recommend checking out this article from Rachel Andrew: Helping Browsers Optimize With The CSS Contain Property(opens in new tab).
Honestly, I don’t think most front-end developers need to use this API, but if you are curious about it, this is the best introductory resource I’ve found.
With all of that context in mind, let’s write a “hello world” container query:
Code Playground

First, we declare that the <section> element is a container. This will allow any of its descendants to use it as a measuring stick, to apply CSS when certain conditions are met.
Next, we create a container query, selecting the <p> within our container and tweaking its cosmetic styles when the container is 12rem wide or less. When that condition is met, the CSS within that block will be applied, and the text will become bold and red.
If you’re viewing this on a device with a large screen, you can see this for yourself: resize the RESULT pane by clicking and dragging the divider, or focusing it and using the left/right arrow keys.
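For reference, here’s a rough sketch of the sort of code this playground contains, reconstructed from the description above (it uses container-type: size, which is the value the next section digs into):

```css
section {
  /* Register the <section> as a container. */
  container-type: size;
}

@container (max-width: 12rem) {
  p {
    /* Applied whenever the nearest ancestor container
       is 12rem wide or less. */
    font-weight: bold;
    color: red;
  }
}
```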
There’s a problem with this implementation, though. It becomes apparent when we give our container some cosmetic styles:
Code Playground

Like we saw with the axolotl example, the parent element is no longer responding dynamically to its children. Instead of growing to fit the paragraphs within, it collapses down to nothing; the only reason we can see the background color at all is because this element happens to have some padding!
When we set container-type: size, we tell the browser that this element’s layout doesn’t depend on its children. This prevents the infinite loop we saw earlier, but it also breaks one of our core assumptions about how CSS works!
We don’t often think about it, but there’s a fundamental difference between width and height on the web:
Consider an empty <div> with no CSS applied to it. It will be 0px tall, but it won’t be 0px wide. It’ll grow to fill the entire horizontal space, regardless of whether it has any content or not.

When we set container-type: size, we tell CSS to ignore its content, which means it reverts to the default behaviour of collapsing down to zero!
Fortunately, there’s another value we can use for the container-type property, inline-size:
Code Playground

The term inline-size here refers to the inline dimension, which is typically width.
Essentially what we’re saying here is that the width of the element does not depend on its content. As a result, it can be used as a measuring stick by its descendants. The element’s height, by contrast, retains its default behaviour of growing/shrinking based on its content.
The golden rule with container queries is that we can’t change what we measure. container-type: inline-size lets us use min-width/max-width conditions in our container queries, but not min-height/max-height.
(Credit to Miriam Suzanne(opens in new tab) for coining this golden rule. Miriam is also the person who solved the impossible problem with container queries, and the main reason we have them today. She’s the best.)
You might be wondering why the CSS specification authors are making our lives more difficult by using fancy terms like inline-size. Why not make it something more intuitive, like container-type: width?
The answer has to do with CSS logical properties. The idea with logical properties is that they’re abstracted so that they can change dynamically based on the user’s language. margin-inline-start will apply some left-side margin in English, but flip to right-side margin in a right-to-left language like Urdu.
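A quick illustration of that idea (the class name is illustrative):

```css
.label {
  /* In a left-to-right language like English, this adds margin on
     the left. In a right-to-left language like Urdu, it flips to
     the right. */
  margin-inline-start: 1rem;
}
```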
Traditional properties like width or margin-left probably won’t be deprecated or removed, but when it comes to brand-new language features like container queries, they’ll use logical properties exclusively.

For container-type: inline-size specifically, I think it’s fair to think of it as width. Even languages that have traditionally been written vertically (eg. Chinese) are usually presented horizontally on the web:
Container queries are supported in all 4 major browsers, starting from:
As I write this in November 2024, container queries are at ~93%. Here’s a live embed with up-to-date values:
\\n\\nI should also note that my examples in this blog post use CSS Nesting(opens in new tab). This recently became a native CSS feature, though it’s been a standard feature in just about every CSS preprocessor / framework out there. If you’re not using any CSS tooling, you should also check out the native CSS nesting browser support(opens in new tab).
\\nIs 93% acceptable? That depends on a number of factors. I recently published a blog post that shares the mental model I use when deciding where the threshold should be:
As I said in the introduction, container queries have been surprisingly underutilized. Very few of the devs I’ve spoken with have actually started using them regularly in their work.
One of the core reasons for this is that container queries are complicated, and my goal with this blog post is to help clarify them. But I think there’s another big reason why container queries haven’t been adopted, something we haven’t talked about yet.

As developers, we implement the mockups that designers prepare for us. This has always been a back-and-forth, a negotiation between what the designers want and what the developers can implement. And for almost 20 years now, we’ve made it clear that “responsive design” was limited to the viewport.

I don’t think most designers are even aware that they have this exciting new capability. It’s our job to share these developments with them, so that they can use them in their designs!

For the projects I work on (this blog and my course platform), I’m both the developer and the designer. I have no excuse. When I redesigned my blog over the summer, I made a conscious effort to use container queries. And once I started thinking in terms of containers, I kept seeing opportunities to use them!

I recently published another blog post about container queries, which is a sort of spiritual successor to this one. It goes beyond the fundamentals, and shares some of the exciting new design patterns that are made possible by container queries. You can check it out here:
Did you know I have a comprehensive CSS course? It’s like a supercharged version of this blog: in addition to interactive articles like this one, my course features bite-sized videos, challenging exercises, real-world-inspired projects, and even a few minigames. 😄
I made it specifically for folks who work with a JS framework like React, Angular, or Vue. Most of the course is focused on CSS principles, but we learn about them in a component-driven context.
You can learn more here:
November 4th, 2024
Over the past few months, I’ve been working on a brand new version of this blog. A couple of weeks ago, I flipped the switch! Here’s a quick side-by-side:
From a design perspective, it hasn’t changed too much; I like to think that it’s a bit more refined, but the same general idea. Most of the interesting changes are under-the-hood, or hidden in the details. In this blog post, I want to share what the new stack looks like, and dig into some of those details!

Over the years, my blog has become a surprisingly complex application. It’s over 100,000 lines of code, not counting the content. Migrating everything over was a big project, but super educational. I’ll share my honest thoughts on all of the new technology I used for this blog.

If you’re planning on starting a blog yourself, or are thinking about using some of the technologies I’m using, this post will hopefully be quite helpful!

Let’s start with a quick list of the major technologies used by my blog:
- Next.js v15.0.0 (beta)
- React v19.0.0 (beta)
- MDX v3.0.1
- Linaria v6.1.0
- Shiki v1.17.7
- Sandpack v2.13.8
- React Spring v9.7.3
- Framer Motion v11.2.10
- MongoDB v6.5.0
- TypeScript v5.6.2
- PartyKit v0.0.108
This list probably seems like overkill for a blog, and a few people have asked me why I didn’t opt for a more “lightweight” alternative. There are a few reasons:

If it wasn’t for reasons 2 and 3, I probably would have given Astro(opens in new tab) a shot. I’ve also been curious about Remix(opens in new tab) for a long time! I think both are likely fantastic options.
As you may know, Next.js recently introduced a brand new routing system, the App Router. It’s a fundamental reimagining of how routing and rendering can work in React, the result of years of work from both the React and Next.js core teams.
The previous version of this blog was also built with Next.js, but it used the older Pages Router. For my new blog, I worked exclusively with the App Router.
It’s been really interesting getting to compare and contrast the two approaches. Later in this post, I’ll share my thoughts about this new direction, and whether it’s been worth migrating or not.
I write blog posts using MDX. It’s probably the most critical part of the tech stack for me.
If you’re not familiar with MDX, it’s essentially a combination of Markdown and JSX. You can think of it as a superset of Markdown that provides an additional superpower: the ability to include custom React elements within the content.

With MDX, I can create interactive widgets and drop ‘em right in the middle of a blog post, like this:

Taken from my blog post, An Interactive Guide to Flexbox.

This ability is crucial for the sorts of content I create. I didn’t want to be limited by the standard set of Markdown elements (links, tables, lists…). With MDX, I can create my own elements! It feels so much more powerful than traditional Markdown, or rich-text content stored in a CMS.
You might be wondering: Why not go “full React”, and skip the Markdown part altogether? When I built the very first version of this blog, way back in 2017, that’s exactly what I did. Each blog post was a React component. There were two problems with this: wrapping every paragraph in a <p>, for example, gets old really fast.

MDX solves both of these problems, and without really sacrificing anything. I still have the full power of React when I’m writing blog posts!
In terms of workflow, I edit my MDX files directly in VS Code and commit them as code. Article metadata (eg. title, publish date) is set in frontmatter at the top of the file. There are some drawbacks to this method (eg. I have to re-deploy the whole app to fix a typo), but I’ve found it’s the simplest option for me.
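To make that concrete, here’s a hedged sketch of what one of these files might look like (the frontmatter fields and component name are made up for illustration, not copied from my actual posts):

```mdx
---
title: 'Some Blog Post'
publishedOn: '2024-11-04'
---

Regular Markdown prose goes here, just like in any other Markdown file.

{/* ...and custom React components can be dropped right into the content: */}
<InteractiveDemo defaultValue={42} />
```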
There are several ways to use MDX with Next.js. I’m using next-mdx-remote(opens in new tab), mostly because it’s what I use on my course platform and I want the two projects to be as similar as possible. If you’re building a brand-new blog using Next.js, it’s probably worth giving the built-in MDX support(opens in new tab) a shot; it seems a lot more straightforward.

The old version of this blog used MDX as well, but it used version 1. As part of the blog rebuild, I took the opportunity to update to the latest version, v3.
The migration was honestly pretty frustrating at times 😅. Things that “just worked” in v1 became unsupported in v2/v3. In some cases, I was able to restore the old behaviour by installing a remark/rehype plugin, but most of the time I either had to hack together my own solution, or manually edit my MDX to match the new format.
It wound up being a pretty big task, and it was demotivating at times because I didn’t feel like the new version was an improvement. I preferred the trade-offs made in v1. 😬
That said, certain things are definitely better in the newer versions, and most of my gripes are subjective / a matter of taste. MDX remains the best solution I’m aware of for creating interactive content. If I was starting a brand-new project today, I would still choose MDX v3.*

*I might prefer the semantics of v1, but it’s unmaintained and hasn’t been updated since 2020, so it’s more prudent to use the latest version.
The old version of my blog used styled-components, a CSS-in-JS library. As I’ve written about previously, styled-components isn’t fully compatible with React Server Components. So, for this new blog, I’ve switched to Linaria(opens in new tab), via the next-with-linaria integration(opens in new tab).

Here’s what it looks like:

```js
import { styled } from '@linaria/react';

const Wrapper = styled.div`
  background: red;
`;
```
Linaria is an awesome tool. It offers a familiar styled API, but instead of working its magic at runtime, it compiles to CSS modules. This means that there is no JS runtime involved, and as a result, it’s fully compatible with React Server Components!
Now, getting Linaria to work with Next has been an uphill battle. I ran into a few weird issues. For example, when I import React in a file without actually using it, I get this bewildering error:
\\nEvalError: TextEncoder is not defined
/node_modules/.pnpm/@wyw-in-js+transform@0.4.1_typescript@5.4.5/node_modules/@wyw-in-js/transform/lib/module.js:223
throw new EvalError(e.message, this.callstack.join(\'\\\\n| \'));
The error messages / stack traces didn’t really help, so I solved most issues by walking backwards through my changes and/or deleting random things until the error disappeared. Fortunately, all of the issues I’ve found are consistent and predictable; it’s not one of those things where the error happens sometimes, or only in production.
Once I learned all of its idiosyncrasies, it’s been pretty smooth sailing, though there is one significant remaining issue. And it doesn’t have to do with Linaria at all, it has to do with how Next.js handles CSS modules.

It’s too much of a detour to cover properly in this post, but to quickly summarize: Next.js “optimistically” bundles a bunch of CSS from unrelated routes, to improve subsequent navigation speed and guarantee the correct CSS order. This blog post, for example, loads 245kb of CSS, but it only uses 47kb.*

*Both of these numbers are the full uncompressed values. The actual amount of data sent over the wire is smaller. There is an active discussion on Github(opens in new tab) about this, and it sounds like some upcoming config options could improve the situation.
Given all of this, I can’t really recommend Linaria. It’s a wonderful tool, but it just isn’t battle-tested enough for it to be a prudent decision for most people/teams.

I’m currently most excited about Pigment CSS(opens in new tab), a zero-runtime CSS-in-JS tool being developed by the team behind Material UI. In the future, it will be the CSS library used by their popular MUI component library, which means it will quickly become one of the most battle-tested CSS libraries out there.

It’s still early days, but once they release their version 1.0, I plan on trying to switch. Hopefully by then, Next.js has fixed the bundling issue with CSS Modules. 🤞
Over the past couple of years, Tailwind has become the most popular styling tool for React applications. Surely I could avoid all of these baffling issues by switching to Tailwind?
There are a few reasons why this wasn’t the right solution for me:
Code snippets look very different on the new blog, thanks to a custom-designed syntax theme! Here’s a before/after:
If you’d like to use this theme in your IDE, you can download the JSON files (dark, light). I haven’t tested it, but it uses the same grammar as VSCode and other editors, so it should work.

You might’ve noticed that the coding font has changed between the old blog and this one!
After experimenting with a dozen different options, I landed on Connary Fagen’s Cartograph CF(opens in new tab). It’s a wonderfully whimsical font. I especially like the cursive italics:
```js
// Check out these sweet italics!
```
Cartograph CF is a paid font. You can buy Cartograph CF directly from its foundry, but it may be cheaper to purchase through Font Bros(opens in new tab).
I’m using Shiki(opens in new tab) for managing the syntax highlighting. While not specifically built for React, Shiki is designed to work at compile-time, making it a perfect fit for React Server Components. This is surprisingly exciting.
In my old blog, I was using Prism, a typical client-side syntax highlighting library. Because all of the code gets included in the JavaScript bundle, several sacrifices have to be made:

With the minimal set of built-in languages, Prism winds up being 26kb minified and gzipped(opens in new tab), which is incredibly small for a syntax highlighter, but still a substantial addition to the bundle.

With Shiki, it adds 0kb to the JavaScript bundle, it uses the same industry-standard TextMate grammar as VS Code, and it can support dozens of languages at no additional cost.

This means that when I want to include a Haskell snippet, as I did in a random blog post I wrote years ago, it will be fully syntax-highlighted:

```haskell
pe58 = n
  where
    a p q = scanl (+) p $ iterate (+ 8) q
    b = [[x,y,z] | (x,(y,z)) <- zip (a 3 10) $ zip (a 5 12) (a 7 14)]
    c = zip (scanl1 (+) . map (length . filter isPrime) $ b) (iterate (+ 4) 5)
    [(n,_)] = take 1 $ dropWhile (\(_,(a,b)) -> 10*a > b) $ zip [3,5..] c
```
Shiki is a joy to work with as a developer. It’s incredibly flexible and extensible. For example, I created my own “annotation” logic, so that I can highlight specific lines of code:
\\nfunction someRandomFunction() {\\n // These two lines are highlighted! You can tell by the\\n // background color, and the little bump on the left.\\n\\n return 42;\\n}
On my old blog, syntax highlighting didn\'t work properly for CSS-in-JS. My template strings would be treated as a standard string, rather than a bit of injected CSS within JS:
\\nWith Shiki, I was able to reuse the syntax-highlighting logic that the styled-components VSCode Extension(opens in new tab) provides. And so now, my styled-components are highlighted correctly:
\\nconst FunkyButton = styled.button`\\n position: absolute;\\n background: linear-gradient(\\n to bottom,\\n red,\\n gold\\n );\\n\\n @media (min-width: 24rem) {\\n &:focus {\\n background: gold;\\n }\\n }\\n`;\\n\\nexport default FunkyButton;
As much as I love Shiki, it does have some tradeoffs.
Because it uses a more powerful syntax-highlighting engine, it’s not as fast as other options. I was originally rendering these blog posts “on demand”, using standard Server Side Rendering rather than static compile-time HTML generation, but found that Shiki was slowing things down quite a bit, especially on pages with multiple snippets. This problem can be solved either by switching to static generation or with HTTP caching.

Shiki is also memory-hungry; I ran into an issue with Node running out of memory(opens in new tab), and had to refactor to make sure I wasn’t spawning multiple Shiki instances.

The biggest issue, however, is that sometimes I need syntax highlighting on the client. For example, in my Shadow Palette generator tool, the snippet changes based on how the user edits the shadows:

There’s no way to generate this at compile-time, since the code is dynamic!

For these cases, I have a second Shiki highlighter. This one is lighter, only supporting a small handful of languages. And it isn’t included in my standard bundles; I’m lazy-loading it with next/dynamic(opens in new tab). Since the syntax highlighting itself is slower, I’m using useDeferredValue to keep the rest of the app fast.
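Roughly speaking, that client-side setup looks something like this (a sketch with made-up component names, not my actual code):

```tsx
'use client';

import dynamic from 'next/dynamic';
import { useDeferredValue } from 'react';

// Lazy-load the lightweight client-side highlighter so it stays
// out of the standard JavaScript bundles.
const ClientHighlighter = dynamic(
  () => import('./ClientHighlighter'),
  { ssr: false }
);

function GeneratedSnippet({ code }: { code: string }) {
  // Highlighting is relatively slow, so let it lag behind the latest
  // keystrokes/drags instead of blocking them.
  const deferredCode = useDeferredValue(code);

  return <ClientHighlighter code={deferredCode} lang="css" />;
}

export default GeneratedSnippet;
```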
The trickiest part was that I needed both a static Server Component as well as a dynamic Client Component, in order for SSR to work correctly. I secretly swap between them on the client, after everything has loaded.
In addition to code snippets, I also have code playgrounds, little Codepen-style editors:

Code Playground

```jsx
import React from 'react';
import range from 'lodash.range';

import styles from './PrideFlag.module.css';
import { COLORS } from './constants';

function PrideFlag({
  variant = 'rainbow', // rainbow | rainbow-original | trans | pan
  width = 200,
  numOfColumns = 10,
  staggeredDelay = 100,
  billow = 2,
}) {
  const colors = COLORS[variant];

  const friendlyWidth =
    Math.round(width / numOfColumns) * numOfColumns;

  const firstColumnDelay = numOfColumns * staggeredDelay * -1;

  return (
    <div className={styles.flag} style={{ width: friendlyWidth }}>
      {range(numOfColumns).map((index) => (
        <div
          key={index}
          className={styles.column}
          style={{
            '--billow': index * billow + 'px',
            background: generateGradientString(colors),
            animationDelay:
              firstColumnDelay + index * staggeredDelay + 'ms',
          }}
        />
      ))}
    </div>
  );
}

function generateGradientString(colors) {
  const numOfColors = colors.length;
  const segmentHeight = 100 / numOfColors;

  const gradientStops = colors.map((color, index) => {
    const from = index * segmentHeight;
    const to = (index + 1) * segmentHeight;

    return `${color} ${from}% ${to}%`;
  });

  return `linear-gradient(to bottom, ${gradientStops.join(', ')})`;
}

export default PrideFlag;
```
Taken from my blog post, Animated Pride Flags.
For React playgrounds, I use Sandpack(opens in new tab), a wonderful editor created by the folks at CodeSandbox. I’ve previously written about how I make use of Sandpack, and all of that stuff is still relevant.

For static HTML/CSS playgrounds, I’m using my own fork of agneym’s Playground(opens in new tab). Sandpack does support static templates, but they rely on Service Workers, which are sometimes blocked by browser privacy settings, leading to broken user experiences.
Lots of folks have asked me how I build the interactive demos in my posts like this:

This action cannot be undone.

Taken from my blog post, Designing Beautiful Shadows in CSS.
I never quite know how to answer this question 😅. I don’t use any specific libraries or packages for this, it’s all standard web development stuff. I built my own reusable <Demo> component which provides the shell and a suite of controls, and I compose it for each individual widget.
That said, there are a couple of generic tools that help. I use React Spring(opens in new tab) to smoothly interpolate between values in a fluid, organic fashion. And I use Framer Motion(opens in new tab) for layout animations.
It feels indulgent to have two separate animation libraries, especially since neither is tiny (19.4kb(opens in new tab) and 44.6kb(opens in new tab), respectively). I include React Spring as a core library and dynamically import Framer Motion when needed.

Truthfully, though, Framer Motion should be able to do everything that React Spring can do, so if I had to pick a “desert island” animation library, it would probably be Framer Motion.
If you’re reading this on desktop, you might’ve seen this little fella off to the side:

It’s a like button! Which is kind of silly… social networks use like buttons to inform their algorithm about which pieces of content to surface. This blog has no discovery algorithm, so it serves no purpose other than being cute.

Each visitor can click the button up to 16 times, and the data is stored in MongoDB. The database record looks something like:
```json
{
  "slug": "promises",
  "categorySlug": "javascript",
  "hits": 123456,
  "likesByUser": {
    "abc123": 16,
    "def456": 4,
    "ghi789": 16,
    // ...
  }
}
```
The IDs are generated based on the user’s IP address, hashed using a secret salt to preserve anonymity. This blog is deployed on Vercel, and Vercel provides the user’s IP through a header.
Originally I used IDs generated on the client and stored in localStorage, but legendary sleuth Jane Manchun Wong showed me why that was a bad idea by spamming the API endpoint and generating tens of thousands of likes. 😅
One of my favourite things about Next.js is that you don’t need a separate Node.js backend. The logic for liking posts is dealt with in a Route Handler(opens in new tab), which functions almost exactly like an Express endpoint.
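Here’s a rough sketch of what that kind of Route Handler could look like (the file path, header name, and helper function are illustrative assumptions, not my actual implementation):

```ts
// app/api/like/route.ts (hypothetical path)
import { createHash } from 'node:crypto';
import { NextResponse } from 'next/server';
import { incrementLike } from '@/lib/likes'; // hypothetical MongoDB helper

export async function POST(request: Request) {
  const { slug } = await request.json();

  // Vercel forwards the visitor's IP in a header. Hash it with a secret
  // salt so that raw IP addresses are never stored.
  const ip = request.headers.get('x-forwarded-for') ?? 'unknown';
  const userId = createHash('sha256')
    .update((process.env.LIKE_SALT ?? '') + ip)
    .digest('hex');

  // Bump `likesByUser[userId]` for this post, capped at 16 per visitor.
  await incrementLike(slug, userId);

  return NextResponse.json({ ok: true });
}
```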
The even-more-modern way to solve this would be with Server Actions(opens in new tab). I experimented with them and honestly, I thought it was more trouble than it was worth. 😅

To be fair, I’m really happy with the fetch + Route Handler solution, so I did have sort of a “don’t fix what isn’t broken” mindset. It’s very possible that if I spent more time using them, I would see the light.

It’s also still very early days for Server Actions, so I’m going to wait and see how the community makes use of them, and how they evolve.
I spent an unreasonable amount of time on contextual styles, making sure that my generic “LEGO brick” components composed nicely together.
For example, I have an <Aside> component for sidenotes, and a <CodeSnippet> component (discussed earlier). Check out what happens when we put a <CodeSnippet> inside an <Aside>:

Here is some random code inside an aside:

```ts
function findLargestNum(nums: Array<number>) {
  if (nums.length === 1) {
    return nums[0];
  }

  return Math.max(...nums);
}
```
Compare it to a code snippet not inside a sidenote:
```ts
function findLargestNum(nums: Array<number>) {
  if (nums.length === 1) {
    return nums[0];
  }

  return Math.max(...nums);
}
```
Instead of having a transparent background and gray outline, the CodeSnippet inside the Aside gets a brown background. Other details, like the annotations and the “Copy to Clipboard” button, also have custom colors.
I created custom colors for all four Aside variants (info, success, warning, error), for each color theme (light, dark). Code snippets also receive different margin/padding when they’re within an Aside, and this changes based on the viewport size, as well as whether or not they’re the final child in the container. It gets quite complicated, considering all of the possible combinations!
This is just one example, too. Lots of other components have “adaptive” styles that change depending on their context, to make sure everything feels cohesive. It was a ton of work, but I find the result super satisfying. 😄
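One common way to implement this sort of contextual styling (a sketch of the general pattern, not necessarily how my components are written) is to have the child read CSS custom properties that each parent context can override:

```css
/* The child defines sensible defaults… */
.code-snippet {
  background: var(--snippet-bg, transparent);
  border: 1px solid var(--snippet-border, gray);
}

/* …and each context overrides the variables, so the child doesn't
   need to know about every possible parent. */
.aside-warning .code-snippet {
  --snippet-bg: hsl(35deg 100% 90%);
  --snippet-border: hsl(35deg 80% 60%);
}
```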
Like my previous blog, this blog is closed-source. There are lots of reasons for this, enumerated in “Why My Blog Is Closed-Source”.
That said, if you’d like to spelunk through the code, there’s still hope! I’ve enabled sourcemaps on this project, meaning that you can browse through the original unminified code through your browser devtool’s Sources pane:
On the desktop homepage, you might’ve noticed that there’s a big new rainbow:
This rainbow responds to your cursor, segments bending towards it like iron shavings reacting to a magnet.

There’s an extra little easter egg as well: if you hover over the rainbow for a few seconds, a little “edit” button appears. Clicking it opens the 🌈 Rainbow Configurator.

Here’s the twist: you’re not just changing the rainbow on your device, you’re changing it for everybody. Each change is immediately broadcast around the world, rainbows shooting through network cables and wifi signals so that we can all enjoy the rainbow you’ve designed. 💖

This is made possible by PartyKit(opens in new tab), a fabulous modern tool created by the illustrious Sunil Pai. It uses WebSockets so that the changes are lightning-fast. I can’t say enough good things about PartyKit. The developer experience is world-class.

One thing I failed to consider is how chaotic it would be with hundreds of people trying to edit the rainbow at the same time 😅. When I first launched the new blog, I received several bug reports from people thinking that the rainbow was glitching out, not aware that other people were wrestling over the controls. Things have calmed down now, but I should still find a way to make this clearer.

If you’re curious how I built the rainbow interactive, you might want to check out my appearance on Alex Trost’s wonderful charity fundraiser(opens in new tab). Over the course of ~20 minutes, we build a simpler version of this effect from scratch in React.
When navigating between pages, there should be a subtle cross-fade animation. If the header is in a new location, it should slide into place:
This uses the very-powerful View Transitions API(opens in new tab). It isn’t yet supported in all browsers, but I think it’s a neat little progressive enhancement.

This API works by capturing virtual screenshots of the UI right before a transition, and manipulating that screenshot and the real UI, sliding and fading things around to create the illusion that two separate elements on two separate pages are the same.

It’s honestly pretty tricky to work with; I think the API design is great, but the underlying problem space is just so complicated, there’s no way to avoid some complexity. Expect to run into little quirks, like things not maintaining their aspect ratio, or text being glitchy.

I’ve found Jake Archibald’s content super helpful for wrapping my mind around View Transitions. For example, his article on handling aspect-ratio changes(opens in new tab).
Getting it to work within the Next.js App Router was a bit of a challenge. I used the use-view-transitions(opens in new tab) package, and created a low-level Link component that wraps around next/link. You can check it out in the Sources pane if you’re curious!
Framer Motion’s big superpower is the ability to do “layout transitions”. At first glance, the View Transitions API seems to solve the same problem! Does that mean we shouldn’t bother learning Framer Motion?
In my experience, View Transitions are great for page transitions, but they’re not as good at handling smaller animations like micro-interactions. They don’t handle interrupts gracefully; if a new transition starts before a previous transition has ended, the element teleports to a new location.

View Transitions are a great new tool to keep in the toolbox, but I don’t really think they replace anything.
My blog finally has a search feature! You can access it by clicking the magnifying glass in the header.
I’m using Algolia(opens in new tab) to do all the hard stuff, like fuzzy matching. At some point, I may feed all of my blog post data to an AI agent and make a chatbot, but for now, basic search seems to do the trick.

One cute little detail: clicking the “trash” icon will clear the search term, but I set it up so that it isn’t instantaneous. I wanted it to seem like the trash can was gobbling up each character. 😄
At first glance, the icons on this site seem pretty much like the old icons, but they’ve been refined. Many of them have new micro-interactions!

My process for this involves starting with the icons from Feather Icons(opens in new tab), since they fit my aesthetic well. Then, I either pick apart or reconstruct their SVG so that I can animate independent parts.

For example, I have an arrow bullet that stretches out on hover:
I started by grabbing the SVG code for Feather Icons’ ArrowRight, and turning it into JSX. The final code looks something like this:

```tsx
import { useSpring, animated } from 'react-spring';

const SPRING_CONFIG = {
  tension: 300,
  friction: 16,
};

function IconArrowBullet({
  size = 20,
  isBooped = false,
}: Props) {
  const shaftProps = useSpring({
    x2: isBooped ? 23 : 18,
    config: SPRING_CONFIG,
  });
  const tipProps = useSpring({
    points: isBooped
      ? '17 6 24 12 17 18'
      : '12 5 19 12 12 19',
    config: SPRING_CONFIG,
  });

  return (
    <svg
      fill="none"
      width={size / 16 + 'rem'}
      height={size / 16 + 'rem'}
      viewBox="0 0 24 24"
      stroke="currentColor"
      strokeWidth="2"
      strokeLinecap="round"
      strokeLinejoin="round"
      xmlns="http://www.w3.org/2000/svg"
    >
      <animated.line
        x1="5"
        y1="12"
        y2="12"
        {...shaftProps}
      />
      <animated.polyline {...tipProps} />
    </svg>
  );
}

export default IconArrowBullet;
```
Like a real arrow, this icon is composed of a shaft and a tip, made with an SVG line and polyline. Using React Spring, I change the x/y values for some of the points when it’s booped. This was a process of trial and error, moving individual points until it felt right.
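For context, the isBooped flag gets flipped by a parent on hover. A minimal usage sketch (the wrapper is mine, not taken from the actual codebase) might look like:

```tsx
import { useState } from 'react';
import IconArrowBullet from './IconArrowBullet'; // the component shown above

function ArrowBullet() {
  const [isBooped, setIsBooped] = useState(false);

  return (
    <span
      onMouseEnter={() => setIsBooped(true)}
      onMouseLeave={() => setIsBooped(false)}
    >
      <IconArrowBullet isBooped={isBooped} />
    </span>
  );
}

export default ArrowBullet;
```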
Lots of the icons on this site are given similar micro-interactions. I even have one more special easter egg planned for one of the icons, something I didn’t quite finish in time for the launch. 😮
I’m in the middle of working on my third course, Whimsical Animations, which covers these sorts of SVG icon animations in depth! It’s very early days, but you can join the waitlist(opens in new tab) to follow its development and be the first to know when it’s available!
In “The Surprising Truth About Pixels and Accessibility”, I show how using the rem unit for media queries is more accessible. It ensures that our layout adapts gracefully if the user cranks up their browser’s default font size.
Every now and then, a reader would notice that my actual blog used pixel-based media queries. I wasn’t even practicing what I was preaching! What a hypocrite!
When I first built the previous version of my blog, I wasn’t aware that rem-based media queries were more accessible; I discovered it while building my course platform. Retrofitting my blog to use rem-based media queries was a big job, and I didn’t want to wait until that was done to share what I had learned!

And so, whenever someone emailed me about this, I would share this rationale, but I would still feel quite embarrassed about it. 😅

Needless to say, this new blog uses rem-based media queries throughout. I’ve learned a lot about accessibility over the years (including through my own short-term disability), and I’ve applied everything I’ve learned to this new blog.

Of course, I’m always still learning, so if you spot anything inaccessible on this blog, please do let me know!
As I mentioned earlier, one of the biggest changes with the new blog was switching from the Pages Router to the App Router. I know lots of folks are considering making the same switch, so I wanted to share my experience, to help inform your decision.

Honestly, my experience was a bit of a mixed bag 😅. Let’s start with the good stuff.

The mental model is wonderful. The “Server Components” paradigm feels much more natural than getServerSideProps. There’s definitely a learning curve, but I got the hang of it pretty quickly. In addition to the improved ergonomics, the new system is more powerful. For example: in the Pages router, only the top-level route component could do backend work, whereas now, any Server Component can.
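As a quick illustration of what that enables (the component and helper names here are hypothetical):

```tsx
// Any Server Component, anywhere in the tree, can fetch its own data.
import { getHitCount } from '@/lib/database'; // hypothetical helper

async function HitCounter({ slug }: { slug: string }) {
  const hits = await getHitCount(slug);

  return <span>{hits.toLocaleString()} hits</span>;
}

export default HitCounter;
```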
Another benefit with Server Components is that we no longer need to include each and every React component in our client-side bundles. This means that “static” components are omitted entirely from the bundles. It also means we can use more-powerful server-exclusive libraries like Shiki, knowing that we don’t have to worry about bundle bloat.
In theory, that should lead to some pretty significant performance benefits, but that hasn’t really been my experience. In fact, the performance of my new blog is slightly worse than my old blog:

There are a ton of caveats to this though:

It’s easy to get disheartened looking at numbers, but when I throttle my CPU/network and do side-by-side comparisons, I can’t really tell the difference. I’m a bit concerned about the SEO impact of a lower Lighthouse score, but I think if the Next team addresses the CSS bundling issue, it should wind up being roughly equivalent.
While we’re on the topic of slow performance, the development server is much slower with the App Router. It’s gotten worse and worse as my blog has grown. Here are the current stats:

It’s pretty painful 😬. When I switch gears to work on my course platform (which still uses the Pages Router), it feels like a breath of fresh air.

I should note: because I’m using Linaria, I’ve had to opt out of Turbopack, their modern Rust-based alternative to Webpack. It’s possible that dev performance is not an issue with Turbopack enabled. But I suspect that lots of us will be in the same situation, where we need Webpack for some package or other, and it shouldn’t be this slow; the Pages router used Webpack, and it was zippy!

The good news is that the Next.js team is aware of these sorts of issues and have made dev performance a priority. The App Router is still in its infancy, and there are bound to be some growing pains. I have a lot of confidence that the Next.js team will fix this stuff (the team is awesome and they’ve already addressed many of the issues I’ve brought up!). The App Router may have been marked as “stable”, but honestly it still feels pretty nascent to me.

The vision behind React Server Components and the App Router is inspiring. For all of the jokes about the React community “reinventing” PHP, I really do think that Meta/Vercel have done something truly remarkable, and once they work out all of the kinks, it will definitively become the best way to build React applications. But today, it feels like we’re firmly in “early adopter” territory.

I’m glad to have migrated my blog to the App Router (and I’ll feel even better about it when the CSS issues are resolved 😅), but I’m also in no rush to migrate my course platform.
\\nI’ve been teaching React for something like 7 years now. I started teaching at a local coding bootcamp, developing their React curriculum and working with students one-on-one. I’ve published 22 articles on React through this blog. And I’ve created The Joy of React(opens in new tab), a comprehensive online course that digs into how React works and how to use it effectively.
\\nThis course is focused on the core mechanisms of React, but the final module is all about the Next.js App Router and React Server Components. In fact, the final project has you build an interactive MDX-based blog. 😄
\\nIt looks like this:
\\n\\nIt’s not the most complex thing we cover in the course, but it is one of the most practical. And the best part is that you can use it as the foundation for your actual blog! This isn’t just a contrived course project, it can become your own real-world home base on the internet. 😄
\\nYou can learn more here:
\\nBack in 2021, I wrote the original “How I Built My Blog” post, and it contains a bunch of odds and ends that didn’t make it to this new edition (this post became too long as-is 😅).
If you’re curious how a particular component works, you can dig into its source! I’ve enabled sourcemaps in production, so you can browse through all of the unminified client-side code through the Sources pane in your browser’s devtools.
And finally, you’re welcome to reach out and ask me your questions directly! I always love hearing from readers ❤️. Though I can’t promise a super-in-depth response.
September 24th, 2024
I don’t know if you’ve noticed, but the CSS world has been on fire recently. 🔥
\\nBehind the scenes, all major browser vendors and the CSS specification authors have been working together to deliver tons of highly-requested CSS features. Things like container queries, native CSS nesting, relative color syntax, balanced text, and so much more.
\\nOne of these new features is the :has
pseudo-class. And, honestly, I wasn’t sure how useful it would be for me. I mostly build webapps using React, which means I tend not to use complex selectors. Would the :has
pseudo-class really offer much benefit in this context?
Well, I’ve spent the past few months rebuilding this blog, using all of the modern CSS bells and whistles. And my goodness, I was wrong about :has
. It’s an incredibly handy utility, even in a CSS-in-JS context!
In this blog post, I\'ll introduce you to :has
and share some of the most interesting real-world use cases I’ve found so far, along with some truly mindblowing experiments.
This blog post is intended for developers who are already comfortable with the fundamentals of CSS, but no prior experience with :has
is expected.
Parts of this blog post are specifically written for fellow JavaScript developers who use a framework like React/Vue/Angular, but this blog post should still be very useful even if you’ve never written any JS.
Historically, CSS selectors have worked in a “top down” fashion.
\\nFor example, by separating multiple selectors with a space, we can selectively style a child based on its parent:
\\nCode Playground
The :has pseudo-selector works in a “bottom up” fashion; it allows us to style a parent based on its children:
Code Playground
This might not seem like a big deal, but it opens so many interesting new doors. Over the past few months, I’ve had one epiphany after another, moments where I went “Woah, that means I can do this??”
Before we get to all the cool demos, we should briefly talk about browser support. :has is supported in all 4 major browsers, starting from Chrome 105, Edge 105, Safari 15.4, and Firefox 121.
As I write this in September 2024, :has is at ~92% browser support. Here’s a live embed with up-to-date values:
Honestly, 92% isn’t great when it comes to browser support… That means roughly 1 in 12 people are using an unsupported browser!
\\nFortunately, most of the use cases I’ve found for :has
are optional “nice-to-have” bonuses, so it’s not really a big deal if they don’t show up for everyone. And in other cases, we can use feature detection to provide fallback CSS.
The @supports
at-rule allows us to apply CSS conditionally, based on whether or not it’s supported by the user’s browser. Here’s what it looks like:
p {
  /* Fallback styles here */
}

@supports selector(p:has(a)) {
  p:has(a) {
    /* Fancy modern styles here */
  }
}
If the selector passed to the selector()
function isn’t understood by the current browser, everything within is ignored. And if the user’s browser is even older, and doesn’t recognize the @supports
at-rule, then the whole block is ignored. Either way, it works out.
Now, the thing is, there is no way to “mimic” :has
using older CSS. Our fallback styles won’t really be able to reproduce the same effect. Instead, we should think of it as having two sets of styles that accomplish the same goal in different ways. I\'ll include an example in the next section.
On this blog’s new “About Josh” page, I use a “bento box” layout containing a bunch of little cards. Some of these cards have clickable children:
\\n\\nFor folks who navigate with a keyboard, however, the experience was a bit more funky. Some of the children dynamically change size, leading to curious focus outlines like this:
\\n\\nTo solve this problem, I moved the focus outline to the parent container. Here’s what it looks like now:
\\n\\nThis solves our problem, and I think it also looks pretty nice!
\\nLet’s dig into how this works. Here’s roughly what the HTML looks like:
<div class="bento-card">
  <p>
    I'm
    <button>188cm</button>
    tall.
  </p>
</div>
In the past, I might’ve solved this by making the whole .bento-card
container a <button>
, but this isn’t a good idea. Cramming so much stuff into a button would introduce several usability and accessibility issues; for example, users can\'t click-and-drag to select text inside buttons!
Fortunately, we can keep our nice semantic markup and accomplish our goals with :has
:
.bento-card:has(button:focus-visible) {
  outline: 2px solid var(--color-primary);
}

/* Remove the default button focus outline */
.bento-card button {
  outline: none;
}
When .bento-card
contains a focused button, we add an outline to it. The outline is applied to the parent .bento-card
, rather than to the button itself.
If you’re not familiar with the :focus-visible
pseudo-class, it works exactly like :focus
, but it only applies when the browser detects that the user is using the keyboard (or other non-pointer device) to navigate. When a mouse-wielding user focuses the button by clicking it, :focus-visible
won’t be triggered, and no focus outline will be shown.
I\'m also removing the default focus outline from the button, to prevent double focus indicators. This is something we should be very cautious about. In fact, our solution isn’t yet complete, since we also need to provide a fallback experience for folks using older browsers.
\\nHere’s what that looks like:
@supports selector(:has(*)) {
  .bento-card:has(button:focus-visible) {
    outline: 2px solid var(--color-primary);
  }

  .bento-card button {
    outline: none;
  }
}
In this updated version, the outline modifications will only be applied for folks who visit using modern browsers. If someone is using a legacy browser, none of this stuff will apply, and they’ll see the standard focus outlines. Even though it’s a little funky, I think it’s a reasonable fallback experience.
I'm also taking a little shortcut here: rather than test for the specific selector I'm using (.bento-card:has(button:focus-visible)), I'm instead using the smallest valid :has selector, :has(*). The browser won't actually try and resolve the selector we supply, so it doesn't matter which elements are selected. @supports works by looking at the syntax and establishing whether it's valid or not.
What about :focus-within?
:focus-within is a pseudo-class that selects an element which contains a focused descendant. It allows us to do something quite similar:
.bento-card:focus-within {
  outline: 2px solid var(--color-primary);
}
The :focus-within
pseudo-class has been around much longer than :has
, and so it has significantly better browser support(opens in new tab). Seems like a better approach, no?
There are two reasons why I prefer :has
in this situation:
1. :focus-within matches the :focus state, not the :focus-visible state. This means that the outline will show even for users who click the button using a mouse. (There is no :focus-visible-within.)
2. If I used :focus-within, it wouldn't be clear to the user which interactive child is actually focused!
Ultimately, :focus-within
can be useful, but it’s a pretty coarse tool. We have much finer control using :has
.
CSS has dozens and dozens of pseudo-classes beyond :focus-visible
, and we can use any of them to apply CSS conditionally with :has
!
Let’s look at another example from this blog. Here’s a custom form control I use in a couple of places. I call it an “X/Y Pad”:
\\n(This is an interactive element! You can click and drag the handle to change the X/Y values. For keyboard users, you can focus the handle and use the arrow keys.)
\\nNotice that while you drag/adjust the handle, the container changes color! The code looks something like this:
<style>
  .xy-pad {
    --dot-color: gray;
  }
  .xy-pad:has(.handle:active),
  .xy-pad:has(.handle:focus-visible) {
    --dot-color: var(--color-primary);
  }
</style>

<div class="xy-pad">
  <svg>
    <!-- Dotted background here -->
  </svg>

  <button class="handle"></button>
</div>
If you’re not familiar, the :active
pseudo-class is applied when a button is being clicked and held. While the user is dragging the handle, our :has
selector matches, and we change the value of a CSS variable, --dot-color
.
Additionally, I\'ve added a secondary selector with :focus-visible
, so that keyboard users get the same treatment.
The --dot-color
CSS variable is used in several places, for the borders and lines and dots. The dots themselves are dynamically generated as a bunch of SVG circles:
<circle fill=\\"var(--dot-color)\\">
If you’d like to learn more about how custom controls like my “X/Y Pad” work, I\'m planning on covering stuff like this in my upcoming animations course! You can join my newsletter to learn more about it over the coming months.
This is maybe the coolest use-case I\'ve found so far. We can use :has
as a sort of global event listener.
For example, suppose we’re building a modal/dialog component. When the modal is open, we want to disable scrolling on the page. We can do this by applying some CSS to the <html>
tag:
/* Scrolling disabled while this is set: */
html {
  overflow: hidden;
}
Here’s how I would have solved this in the past, using a JS framework like React:
// Register a side-effect that runs whenever `isOpen` changes:
React.useEffect(() => {
  if (isOpen) {
    // Save the current value for `overflow`,
    // so that we can restore it later:
    const { overflow } =
      window.getComputedStyle(document.documentElement);

    // Apply the new value to disable scrolling:
    document.documentElement.style.overflow = "hidden";

    // Register a cleanup function that undoes this work,
    // when `isOpen` flips back to `false`:
    return () => {
      document.documentElement.style.overflow = overflow;
    };
  }
}, [isOpen]);
Don’t worry if you’re not familiar with React. The point here is that this is a really clunky way to solve this problem!
\\nWe can solve this in a much nicer way with :has
:
html:has([data-disable-document-scroll="true"]) {
  overflow: hidden;
}
If the HTML contains an element that sets this data attribute, no matter where it is in the DOM, we’ll apply overflow: hidden
.
Inside our Modal
component, we’ll trigger it by conditionally setting the data attribute:
function Modal({ isOpen, children }) {
  return (
    <div
      data-disable-document-scroll={isOpen}
    >
      {/* Modal stuff here */}
    </div>
  );
}
How friggin’ cool is that?? The instant our modal opens, this data attribute gets flipped to \\"true\\"
, which means our :has
selector becomes fulfilled, and scrolling becomes disabled. If this data attribute flips back to \\"false\\"
, or if the element itself is removed from the DOM, scrolling will automatically be restored. ✨
This example uses React, but we can leverage the same trick in a vanilla JavaScript context. Here’s a quick sketch:
function toggleModal(isOpen) {
  const element = document.querySelector('...');
  element.dataset.disableDocumentScroll = isOpen;
}
You might be wondering about the performance implications of this strategy. With :has
on the root HTML tag, doesn\'t that mean that the browser would have to inspect the entire DOM in order to tell if the condition is met or not?
I decided to test this using the new “Selector Stats” feature(opens in new tab) in the Chrome devtools.
On this blog post, with more than 2500 DOM nodes, this selector took an average of 0.1 milliseconds (0.0001 seconds) to resolve. I performed this test on one of the slowest computers I\'ve ever used in my life, a $100 Intel Celeron laptop that struggles with things like displaying images. The result was more than 10x faster on my 2021 MacBook Pro.
Browsers are really good at doing style recalculation. This isn\'t something we need to worry about. 😄
Jen Simmons discovered that we can use this trick to create a JavaScript-free “Dark Mode” toggle. Here’s an example:
<style>
  /* Default (light mode) colors: */
  body {
    --color-text: black;
    --color-background: white;
  }

  /* Dark mode colors: */
  body:has(#dark-mode-toggle:checked) {
    --color-text: white;
    --color-background: black;
  }
</style>

<!-- Somewhere in the DOM: -->
<input id="dark-mode-toggle" type="checkbox">
<label for="dark-mode-toggle">
  Enable Dark Mode
</label>
When the user clicks the checkbox, the :checked
pseudo-class is applied to it, which causes our :has
selector to match. We overwrite the baseline CSS variables with new dark-mode ones, and the theme is effectively swapped!
To be clear, Dark Mode is a surprisingly complicated thing, and this approach isn’t really a complete implementation (for example, it doesn’t save/restore the user’s preferred option, or inherit the default theme from the operating system). Plus, I wouldn’t want a core piece of functionality to depend on a CSS feature with only ~92% support. But still, it’s friggin’ cool that we can add a “Dark Mode” toggle with only a single CSS rule and no JS!
\\nYou can read more about this approach, and see lots of other cool examples, in Jen’s wonderful blog post(opens in new tab).
\\nSo far, all of the examples we’ve looked at involve styling the parent based on one of its descendants. This is very cool, but it’s only the tip of the iceberg.
\\nCheck this out:
\\nCode Playground
In this scenario, I'm selecting all paragraphs that come right before a <figure> tag. The big difference here is that there's no parent/child relationship; the paragraphs and figures are siblings!
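A selector that does this looks something like the following; a minimal sketch, since the playground code isn't reproduced here, and the declaration is just a placeholder:

/* Select any paragraph that is immediately followed by a <figure>: */
p:has(+ figure) {
  font-weight: bold;
}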
Now, to be clear, we’ve been able to do similar things in CSS for quite a while, using the “next-sibling combinator”, +
. This little fella allows us to select an element that comes after a given selector:
Code Playground
On its own, the + combinator can only be used to select elements that come after a given selector in the DOM. It only works in one direction. With :has, we can flip the order, which means that together, we can select elements in either direction!
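Putting the two directions side by side (another minimal sketch; the declarations are only illustrative):

/* Next-sibling combinator: selects a paragraph that comes *after* a figure */
figure + p {
  margin-top: 0;
}

/* :has flips the relationship: selects a paragraph that comes *before* a figure */
p:has(+ figure) {
  margin-bottom: 0;
}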
Code Playground
We’re not limited to direct siblings, either. With :has, we can style one element based on another element in a totally different container!
Here’s a wild example, adapted from Ahmad’s comprehensive blog post on :has(opens in new tab). Try hovering over the category buttons and/or the books:
\\nCode Playground
If you’re reading this blog post on a mobile/tablet device and don't have a mouse to hover with, you can alternatively tap the category buttons to trigger the same effect. And if you’re a keyboard user, you can focus the category buttons.
The CSS for these alternative control schemes can be found at the bottom of the "CSS" tab.
Hovering over one of the category buttons will add a hover state to the buttons themselves, as well as any books that match the selected category! Likewise, hovering over one of the books highlights the matching category.
\\nIt’s hard to parse the CSS in the constrained space within the playground, so here’s the core CSS logic in a more spacious box:
html:has([data-category="sci-fi"]:hover) [data-category="sci-fi"] {
  background: var(--highlight-color);
}
The first part of this selector uses the same “global detection” logic we saw earlier. We’re checking to see if the DOM contains a node that:
- Has its category data attribute set to "sci-fi", and
- Is currently being hovered.
Instead of applying styles directly to the <html> tag, though, we’re looking for any descendants that have the category data attribute set to "sci-fi".
To paraphrase the logic here, I'm essentially saying: “If the HTML document contains at least 1 hovered element with category set to "sci-fi", apply the following CSS to all elements with that category”. In this particular case, the CSS I'm applying is a lilac background color, but it could be anything!
The wild thing about this example is that the actual DOM structure doesn’t matter. The category buttons are in a totally different part of the DOM from the book elements. There’s no parent/child relationship, or even a sibling relationship! The only thing they have in common is that they’re both descendants of the root <html>
tag, same as any other node in the document.
It kinda feels like :has
is the “missing selector” in CSS. Historically, there have been a bunch of relationships we just couldn’t express in CSS. With :has
, we can select any element based on the properties/status of any other element. No limits!
As we’ve seen, the :has
selector is incredibly powerful. Things that used to require JavaScript can now be accomplished exclusively using CSS!
But just because we can solve problems like this, does that mean we should?
\\nI\'m a big fan of using whichever tool can solve the problem with the least amount of complexity. And when a problem can be solved either with CSS or JavaScript, the CSS solution tends to be much simpler.
\\nWith :has
, however, things can get pretty complicated. Here’s a “final” version of the snippet we just saw, including alternative controls for mobile/keyboard:
html:where(
  :has([data-category="sci-fi"]:hover),
  :has([data-category="sci-fi"]:focus-visible),
  :has([data-category="sci-fi"]:active)
) [data-category="sci-fi"],
html:where(
  :has([data-category="fantasy"]:hover),
  :has([data-category="fantasy"]:focus-visible),
  :has([data-category="fantasy"]:active)
) [data-category="fantasy"],
html:where(
  :has([data-category="romance"]:hover),
  :has([data-category="romance"]:focus-visible),
  :has([data-category="romance"]:active)
) [data-category="romance"] {
  background: var(--highlight-color);
}
(The :where
pseudo-class allows us to “group” related selectors. It’s equivalent to writing each clause out as a separate selector.)
If I was building this UI using a framework like React, I think it would actually be simpler to create a state variable that tracks which category is currently active. It would also be more flexible; we could have dynamic categories, rather than hardcoded ones. And books could belong to multiple categories. And it would work in Internet Explorer.
\\nI included this example because it really is an incredible demonstration of what :has
can do, but if I was building this particular UI for real, I would implement this logic in JavaScript.
In practice, I find myself using :has
in less grandiose ways, like the focus outlines on the “About” page, or for disabling scroll on mobile. It’s a super handy selector in these circumstances, and works very well in the context of a React application!
As I mentioned earlier, I recently rebuilt this blog, using a bunch of modern CSS. This is the first of several blog posts I plan to write 😄. If you’d like to be notified when I publish new content, you can join my newsletter:
\\nAnd if you\'d like to learn more about :has
, there are tons of amazing resources out there. Here are some of my favourites:
September 9th, 2024
There are a lot of speed bumps and potholes on the road to JavaScript proficiency. One of the biggest and most daunting is Promises.
\\nIn order to understand Promises, we need a surprisingly deep understanding of how JavaScript works and what its limitations are. Without that context, Promises won’t really make much sense.
\\nIt can be frustrating because the Promises API is so important nowadays. It’s become the de facto way of working with asynchronous code. Modern web APIs are built on top of Promises. There’s no getting around it: if we want to be productive with JavaScript, it really helps to understand Promises.
\\nSo, in this tutorial, we’re going to learn about Promises, but we’ll start at the beginning. I’ll share all of the critical bits of context that took me years to understand. And by the end, hopefully, you’ll have a much deeper understanding of what Promises are and how to use them effectively. ✨
\\nThis blog post is intended for beginner-to-intermediate JavaScript developers. Some knowledge of basic JavaScript syntax is assumed.
Suppose we wanted to build a Happy New Year! countdown, something like this:
\\n––
Start CountdownIf JavaScript was like most other programming languages, we could solve the problem like this:
function newYearsCountdown() {
  print("3");
  sleep(1000);

  print("2");
  sleep(1000);

  print("1");
  sleep(1000);

  print("Happy New Year! 🎉");
}
In this hypothetical code snippet, the program would pause when it hits a sleep()
call, and then resume after the specified amount of time has passed.
Unfortunately, there is no sleep
function in JavaScript, because it’s a single-threaded language. (Technically, modern JavaScript has access to multiple threads via Web Workers, but those extra threads don't have access to the DOM, so they can’t really be used in most situations.) A “thread” is a long-running process that executes code. JavaScript only has one thread, and so it can only do one thing at a time. It can’t multitask. This is a problem because if our lone JavaScript thread is busy managing this countdown timer, it can’t do anything else.
When I was first learning about this stuff, it wasn’t immediately obvious to me why this was a problem. If the countdown timer is the only thing happening right now, isn’t it fine if the JS thread was fully occupied during that time??
\\nWell, even though JavaScript doesn’t have a sleep
function, it does have some other functions that occupy the main thread for an extended amount of time. We can use those other methods to get a glimpse into what it would be like if JavaScript had a sleep
function.
For example, window.prompt()
. This function is used to gather information from the user, and it halts execution of our code much like our hypothetical sleep()
function would.
Click the button in this playground, and then try to interact with the page while the prompt is open:
\\nCode Playground
Notice that while the prompt is open, the page is totally unresponsive? You can't scroll, click any links, or select any text! The JavaScript thread is busy waiting for us to provide a value so that it can finish running that code. While it’s waiting, it can’t do anything else, and so the browser locks down the UI.
\\nOther languages have multiple threads, and so it\'s no big deal if one of them gets preoccupied for a while. In JavaScript, though, we only have the one, and it’s used for everything: handling events, managing network requests, updating the UI, etc.
\\nIf we want to create a countdown, we need to find a way to do it without blocking the thread.
\\nIn the example above with window.prompt()
, the entire UI becomes unresponsive while the browser waits for us to provide a value.
This is kinda strange… the browser doesn’t rely on JavaScript to scroll the page, or to select text. So why can’t we do any of those things?
I think browsers work this way to prevent bugs. Scrolling the page, for example, triggers “scroll” events which can be caught and handled with JavaScript. If the JS thread is occupied while the scroll event happens, that code never runs, which could lead to bugs if the developer assumed that scroll events would always be handled.
It could also be a UX thing; maybe the browser disables the UI so that the user can’t ignore the prompt. Either way, though, I suspect a native sleep
function would need to work the same way to prevent bugs.
The main tool in our toolbox for solving these sorts of problems is setTimeout
. setTimeout is a function which accepts two arguments: a callback function containing the chunk of work to run, and the number of milliseconds to wait before running it.
Here's an example:
console.log('Start');

setTimeout(
  () => {
    console.log('After one second');
  },
  1000
);
The chunk of work is passed in through a function. This pattern is known as a callback.
\\nThe hypothetical sleep()
function we saw before is like calling a company and waiting on hold for the next available representative. setTimeout()
is like pressing 1 to have them call you back when the representative is available. You can hang up the phone and get on with your life.
setTimeout()
is known as an asynchronous function. This means that it doesn’t block the thread. By contrast, window.prompt()
is synchronous, because the JavaScript thread can\'t do anything else while it’s waiting.
The big downside with asynchronous code is that it means our code won\'t always run in a linear order. Consider the following setup:
console.log('1. Before setTimeout');

setTimeout(() => {
  console.log('2. Inside setTimeout');
}, 500);

console.log('3. After setTimeout');
You might expect these logs to fire in order from top to bottom: 1
> 2
> 3
. But remember, the whole idea with callbacks is that we’re scheduling a call back. The JavaScript thread doesn’t sit around and wait, it keeps running.
Imagine if we gave the JavaScript thread a journal and asked it to keep track of all the things it does while it runs this code. After running, the journal would look something like this:
\\n00:000
: Log "1. Before setTimeout".
00:001: Register a timeout.
00:002: Log "3. After setTimeout".
00:501: Log "2. Inside setTimeout".
setTimeout() registers the callback, like scheduling a meeting on a calendar. It only takes a tiny fraction of a second to register the callback, and once that’s done, it moves right along, executing the rest of the program.
Callbacks are used all over JavaScript, not just for timers. For example, here’s how we listen for pointer events (the term “pointer” is an umbrella category for all UI input methods that involve pointing at something, including the mouse, tapping a finger on a touchscreen, a stylus, etc.):
\\nCode Playground
window.addEventListener('pointermove', (event) => {
  const container = document.querySelector('#data');

  container.innerText = `${event.clientX} • ${event.clientY}`;
});
window.addEventListener()
registers a callback that will be called whenever a certain event is detected. In this case, we’re listening for pointer movements. Whenever the user moves the mouse or drags their finger along a touchscreen, we’re running a chunk of code in response.
Like with setTimeout
, the JavaScript thread doesn’t focus exclusively on watching and waiting for mouse events. It tells the browser “hey, let me know when the user moves the mouse”. When the event fires, the JS thread will circle back and run our callback.
But OK, we’ve wandered pretty far from our original problem. If we want to set up a 3-second countdown, how do we do it?
\\nBack in the day, the most common solution was to set up nested callbacks, something like this:
console.log("3…");

setTimeout(() => {
  console.log("2…");

  setTimeout(() => {
    console.log("1…");

    setTimeout(() => {
      console.log("Happy New Year!!");
    }, 1000);
  }, 1000);
}, 1000);
This is wild, right? Our setTimeout
callbacks create their own setTimeout
callbacks!
When I started tinkering with JavaScript in the early 2000s, this sort of pattern was pretty common, though we all sorta recognized how not-ideal it was. We referred to this pattern as Callback Hell.
\\nPromises were developed to solve some of the problems of Callback Hell.
\\nThe setTimeout
API takes a callback function and a duration. After the specified amount of time has passed, the callback function is called.
But how?? If the JavaScript thread isn\'t babysitting the timeout, watching it like a hawk, how does it know when it’s time to call the callback?
It’s beyond the scope of this tutorial, but JavaScript has something called an event loop. When we call setTimeout
, a little message is added to a queue. Whenever the JS thread isn\'t executing code, it’s watching the event loop, checking for messages.
When the timeout expires, a metaphorical notification light comes on in the event loop, like an answering machine with a new message. If the JS thread wasn’t doing anything, it\'ll jump on it right away and execute the callback passed to setTimeout()
.
This does mean that timeouts aren\'t 100% accurate. There’s only one JavaScript thread, and it might be busy doing something else when that notification light comes on, like handling a scroll event or waiting on a window.prompt()
. If we specify a 1000ms timeout, we can trust that at least 1000 milliseconds have passed, but it might be slightly longer.
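If you want to see this for yourself, here's a quick sketch that measures the actual delay; the exact numbers will vary depending on the device and how busy the thread is:

const start = performance.now();

setTimeout(() => {
  const elapsed = performance.now() - start;

  // Always at least 1000, typically a few milliseconds more,
  // and much more if the thread was busy when the timer expired.
  console.log(`Fired after ${Math.round(elapsed)}ms`);
}, 1000);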
You can learn more about the event loop on MDN(opens in new tab).
So, as discussed, we can\'t simply tell JavaScript to stop and wait before executing the next line of code, since it would block the thread. We’re going to need some way of separating the work into asynchronous chunks.
\\nInstead of nesting, though, what if we could chain them together? To tell JavaScript to do this, then this, then this?
\\nJust for fun, let’s pretend that we had a magic wand, and we could change the setTimeout
function to work however we wanted. What if we did something like this:
console.log('3');

setTimeout(1000)
  .then(() => {
    console.log('2');

    return setTimeout(1000);
  })
  .then(() => {
    console.log('1');

    return setTimeout(1000);
  })
  .then(() => {
    console.log('Happy New Year!!');
  });
Instead of passing the callback directly to setTimeout
, which leads to nesting and Callback Hell, what if we could chain them together with a special .then()
method?
This is the core idea behind Promises. A Promise is a special construct, added to JavaScript in 2015 as part of a big language update.
\\nUnfortunately, setTimeout
still uses the older callback style, since setTimeout
was implemented long before Promises; changing how it works would break older websites. Backwards compatibility is a great thing, but it means that things are sometimes a bit messy.
But modern web APIs are built on top of Promises. Let\'s look at an example.
\\nThe fetch()
function allows us to make network requests, typically to retrieve some data from the server.
Consider this code:
const fetchValue = fetch('/api/get-data');

console.log(fetchValue);
// -> Promise {<pending>}
When we call fetch()
, it starts the network request. This is an asynchronous operation, and so the JavaScript thread doesn\'t stop and wait. The code keeps on running.
But then, what does the fetch()
function actually produce? It can’t be the actual data from the server, since we just started the request and it’ll be a while until it’s resolved. Instead, it’s sort of like an IOU (a note that acknowledges a debt, pronounced like “I Owe You”): a note from the browser that says “Hey, I don’t have your data yet, but I promise I'll have it soon!”.
More concretely, Promises are JavaScript objects. Internally, Promises are always in one of three states:
\\npending
— the work is in-progress, and hasn't yet completed.
fulfilled — the work has successfully completed.
rejected — something has gone wrong, and the Promise could not be fulfilled.
While a Promise is in the pending state, it’s said to be unresolved. When it finishes its work, it becomes resolved. This is true whether the promise was fulfilled or rejected.
Typically, we want to register some sort of work to happen when the Promise has been fulfilled. We can do this using the .then()
method:
fetch('/api/get-data')
  .then((response) => {
    console.log(response);
    // Response { type: 'basic', status: 200, ...}
  });
fetch()
produces a Promise, and we call .then()
to attach a callback. When the browser receives a response, this callback will be called, and the response object will be passed through.
If you’ve worked with the Fetch API before, you’ve probably noticed that a second step is required, to actually get the data we need as JSON:
fetch('/api/get-data')
  .then((response) => {
    return response.json();
  })
  .then((json) => {
    console.log(json);
    // { data: { ... } }
  });
response.json()
produces a brand new Promise, one which is fulfilled when the response is fully available as JSON.
But wait, why is response.json()
asynchronous?? We already waited for the response, shouldn’t it already exist as JSON?
Not necessarily. A core part of web infrastructure is the ability for servers to stream the response, sending it in batches. This is commonly done for media (eg. videos on YouTube), but it can also be done for larger JSON payloads.
The fetch()
Promise is resolved when the browser receives the first byte of data from the server. The response.json()
Promise is resolved when the browser has received the last byte of data.
In practical terms, it\'s pretty rare for JSON to be sent in chunks, and so both Promises will resolve at the same time, but the Fetch API is designed to support streamed responses, and so this extra little dance is necessary.
When we use the Fetch API, the Promises are created behind the scenes, by the fetch()
function. But what if the API we want to work with doesn’t support Promises?
For example, setTimeout
was created before Promises existed. If we want to avoid Callback Hell when working with timeouts, we’ll need to create our own Promises.
Here’s what the syntax looks like:
const demoPromise = new Promise((resolve) => {
  // Do some sort of asynchronous work, and then
  // call `resolve()` to fulfill the Promise.
});

demoPromise.then(() => {
  // This callback will be called when
  // the Promise is fulfilled!
});
Promises are generic. They don’t “do” anything on their own. When we create a new Promise instance with new Promise()
, we also supply a function with the specific asynchronous work we want to do. This can be anything: performing a network request, setting a timeout, whatever.
When that work is finished, we call resolve()
, which signals to the Promise that everything went well and resolves the Promise.
Let’s circle back to our original challenge, creating a countdown timer. In that case, the asynchronous work is waiting for a setTimeout
to expire.
We can create our own little Promise-based helper, which wraps around setTimeout
, like this:
function wait(duration) {
  return new Promise((resolve) => {
    setTimeout(resolve, duration);
  });
}

const timeoutPromise = wait(1000);

timeoutPromise.then(() => {
  console.log('1 second later!');
});
This code looks super intimidating. Let’s see if we can break it down:
1. We define a function called wait. This function takes a single parameter, duration. Our goal is to use this function as a sort of sleep function, but one that works fully asynchronously.
2. Inside wait, we’re creating and returning a new Promise. Promises don’t do anything on their own; we need to call the resolve function when the async work is completed.
3. The asynchronous work here is handled by setTimeout. We’re feeding it the resolve function we got from the Promise, as well as the duration supplied by the user.
4. When the timeout expires, setTimeout calls resolve, which signals that the Promise is fulfilled, which causes the .then() callback to be fired as well.
It’s OK if this code still hurts your brain 😅. We’re combining a lot of hard concepts here! Hopefully the general strategy is clear, even if all the pieces are still a bit fuzzy.
\\nOne thing that might help clarify this stuff: in the code above, we’re passing the resolve
function directly to setTimeout
. Alternatively, we could create an inline function, like we were doing earlier, which invokes the resolve
function:
function wait(duration) {
  return new Promise((resolve) => {
    setTimeout(
      () => resolve(),
      duration
    );
  });
}
JavaScript has “first class functions”, which means that functions can be passed around like any other data type (strings, numbers, etc). This is a lovely feature, but it can take a while for this to feel intuitive. This alternative form is a bit less direct, but it works exactly the same way, so if this is clearer to you, you can absolutely structure things this way!
\\nOne important thing to understand about Promises is that they can only be resolved once. Once a Promise has been fulfilled or rejected, it stays that way forever.
\\nThis means that Promises aren’t really suitable for certain things. For example, event listeners:
window.addEventListener('mousemove', (event) => {
  console.log(event.clientX);
});
This callback will be fired whenever the user moves their mouse, potentially hundreds or even thousands of times. Promises aren’t a good fit for this sort of thing.
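To make that concrete, here's a quick sketch of what happens if we try to wrap an event listener in a Promise:

// A Promise can only settle once, so this only captures the *first*
// mousemove. Calling resolve() again on later events has no effect.
const firstMove = new Promise((resolve) => {
  window.addEventListener('mousemove', (event) => {
    resolve(event.clientX);
  });
});

firstMove.then((x) => {
  console.log(`First horizontal mouse position: ${x}`);
});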
\\nHow about our “countdown” timer scenario? While we can’t re-trigger the same wait
Promise, we can chain multiple Promises together:
wait(1000)
  .then(() => {
    console.log('2');
    return wait(1000);
  })
  .then(() => {
    console.log('1');
    return wait(1000);
  })
  .then(() => {
    console.log('Happy New Year!!');
  });
When our original Promise is fulfilled, the .then()
callback is called. It creates and returns a new Promise, and the process repeats.
So far, we’ve been calling the resolve
function without arguments, using it purely to signal that the asynchronous work has completed. In some cases, though, we’ll have some data that we want to pass along!
Here’s an example using a hypothetical database library that uses callbacks:
function getUser(userId) {
  return new Promise((resolve) => {
    // The asynchronous work, in this case, is
    // looking up a user from their ID
    db.get({ id: userId }, (user) => {
      // Now that we have the full user object,
      // we can pass it in here...
      resolve(user);
    });
  });
}

getUser('abc123').then((user) => {
  // ...and pluck it out here!
  console.log(user);
  // { name: 'Josh', ... }
});
Unfortunately, when it comes to JavaScript, Promises aren’t always kept. Sometimes, they’re broken.
\\nFor example, with the Fetch API, there is no guarantee that our network requests will succeed! Maybe the internet connection is flaky, or maybe the server is down. In these cases, the Promise will be rejected instead of fulfilled.
\\nWe can handle it with the .catch()
method:
fetch('/api/get-data')
  .then((response) => {
    // ...
  })
  .catch((error) => {
    console.error(error);
  });
When a Promise is fulfilled, the .then()
method is called. When it is rejected, .catch()
is called instead. We can think of it like two separate paths, chosen based on the Promise’s state.
So, let’s say the server sends back an error. Maybe a 404 Not Found, or a 500 Internal Server Error. That would cause the Promise to be rejected, right?
Surprisingly no. In that case, the Promise would still be fulfilled, and the Response
object would have information about the error:
Response {
  ok: false,
  status: 404,
  statusText: 'Not Found',
}
This can be a bit surprising, but it does sorta make sense: we were able to fulfill our Promise and receive a response from the server! We may not have gotten the response we wanted, but we still got a response.
It checks out, at least by genie-with-three-wishes(opens in new tab) logic.
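Because of this, a common pattern (not specific to this tutorial) is to check the response status ourselves and throw, so that HTTP errors end up in the .catch() path along with network failures:

fetch('/api/get-data')
  .then((response) => {
    if (!response.ok) {
      // Turn HTTP errors (404, 500, etc.) into rejected Promises:
      throw new Error(`HTTP error: ${response.status}`);
    }
    return response.json();
  })
  .then((json) => {
    console.log(json);
  })
  .catch((error) => {
    // Network failures *and* thrown HTTP errors both land here.
    console.error(error);
  });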
When it comes to hand-crafted Promises, we can reject them using a 2nd callback parameter, reject
:
new Promise((resolve, reject) => {\\n someAsynchronousWork((result, error) => {\\n if (error) {\\n reject(error);\\n return;\\n }\\n\\n resolve(result);\\n });\\n});
If we run into problems inside our Promise, we can call the reject()
function to mark the promise as rejected. The argument(s) we pass through — typically an error — will be passed along to the .catch()
callback.
As we saw earlier, promises are always in one of three possible states: pending
, fulfilled
, and rejected
. Whether a Promise is “resolved” or not is a separate thing. So, shouldn\'t my parameters be named “fulfill” and “reject”?
Here’s the deal: the resolve()
function will usually mark the promise as fulfilled
, but that\'s not an iron-clad guarantee. If we resolve our promise with another promise, things get pretty funky. Our original Promise gets “locked on” to this subsequent Promise. Even though our original Promise is still in a pending
state, it is said to be resolved at this point, since the JavaScript thread has moved onto the next Promise.
This is something I only learned after publishing this blog post (thanks to the readers who reached out and let me know!), and honestly, I don’t think it\'s something 99% of us need to worry about. If you do want to dig even deeper, you can learn more in this document: States and Fates(opens in new tab).
One of the really great parts of modern JavaScript is the async
/ await
syntax. Using this syntax, we can get pretty darn close to our ideal countdown structure:
async function countdown() {
  console.log("5…");
  await wait(1000);

  console.log("4…");
  await wait(1000);

  console.log("3…");
  await wait(1000);

  console.log("2…");
  await wait(1000);

  console.log("1…");
  await wait(1000);

  console.log("Happy New Year!");
}
But wait, I thought this was impossible! We can’t pause a JavaScript function while it’s halfway through, since that blocks the thread from doing anything else!
\\nThis new syntax is secretly powered by Promises. If we put on our detective hat, we can see how this works:
async function addNums(a, b) {
  return a + b;
}

const result = addNums(1, 1);

console.log(result);
// -> Promise {<fulfilled>: 2}
We’d expect the returned value to be a number, 2
, but it\'s actually a Promise that resolves to the number 2
. The moment we slap that async
keyword on a function, we guarantee that it returns a Promise, even if the function doesn’t do any sort of asynchronous work.
The code above is essentially syntactic sugar for this:
function addNums(a, b) {
  return new Promise((resolve) => {
    resolve(a + b);
  });
}
Similarly, the await
keyword is syntactic sugar for the .then()
callback:
// This code...
async function pingEndpoint(endpoint) {
  const response = await fetch(endpoint);
  return response.status;
}

// ...is equivalent to this:
function pingEndpoint(endpoint) {
  return fetch(endpoint)
    .then((response) => {
      return response.status;
    });
}
Promises give JavaScript the underlying infrastructure it needed in order to provide syntax that looks and feels synchronous, while actually being asynchronous under the hood.
\\nIt’s pretty friggin’ great.
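Error handling gets the same treatment: with async/await, a plain try/catch block plays the role that .catch() plays in the chaining style. A quick sketch:

async function getData() {
  try {
    const response = await fetch('/api/get-data');
    const json = await response.json();
    return json;
  } catch (error) {
    // Equivalent to a .catch() at the end of the Promise chain:
    console.error(error);
  }
}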
\\nFor the past couple of years, my full-time job has been building and sharing educational resources like this blog post. I also have a CSS course(opens in new tab) and a React course(opens in new tab).
\\nOne of the most popular requests from students has been for me to make a course on vanilla JavaScript, and it\'s something I\'ve been thinking a lot about. I’ll likely publish a few more posts on vanilla JavaScript topics in the months ahead.
\\nIf you’d like to be notified when I publish something new, the best way is to join my newsletter. I\'ll shoot you an email whenever I release any new blog posts, and keep you updated with my courses. ❤️
July 11th, 2024
Over the years, React has given us a number of tools for optimizing the performance of our applications. One of the most powerful hidden gems is useDeferredValue
. It can have a tremendous impact on user experience in certain situations! ⚡
I recently used this hook to fix a gnarly performance issue on this blog, and it sorta blew my mind. The improvement on low-end devices felt illegal, like black magic.
\\nuseDeferredValue
has a bit of an intimidating reputation, and it is a pretty sophisticated tool, but it isn’t too scary with the right mental model. In this tutorial, I’ll show you exactly how it works, and how you can use it to dramatically improve the performance of your applications.
A couple of years ago, I released Shadow Palette Generator, a tool for generating realistic shadows:
\\n\\nBy experimenting with sliders and other controls, you can design your own set of shadows. The CSS code is provided for you to copy/paste it into your own application.
\\nHere’s the problem: the controls in this UI are designed to provide immediate feedback; as the user slides the “Oomph” slider, for example, they see the effect of that change right away. This means that the UI is re-rendered dozens of times a second while one of these inputs is being dragged.
\\nNow, React is fast, and most of this UI is pretty easy to update. The problem is the syntax-highlighted code snippet at the bottom:
\\nSyntax highlighting is a surprisingly complex operation. First, the raw code has to be “tokenized”, a process which splits the code into a set of labeled pieces. Each token can be given a different color, and so each token needs to be wrapped in its own <span>
.
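To give a rough, hypothetical sense of what that tokenized output looks like (the class names here are invented), a single declaration might become:

<!-- One tokenized declaration (illustrative only): -->
<span class="token property">box-shadow</span><span class="token punctuation">:</span>
  <span class="token number">0.5px</span> <span class="token number">1px</span> <span class="token number">1px</span>
  <span class="token function">hsl</span><span class="token punctuation">(</span><span class="token number">220deg</span> <span class="token number">60%</span> <span class="token number">50%</span> <span class="token operator">/</span> <span class="token number">0.7</span><span class="token punctuation">)</span><span class="token punctuation">;</span>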
Here’s the amount of markup required for a single line from this snippet:
\\nWithout any optimizations, we’re asking React to re-calculate all of this markup dozens of times per second. On most devices, the browser just won’t be able to do this quickly enough, and things will get pretty choppy:
\\n\\nThe change
events are firing up to 60 times per second, but the UI can only process a handful of updates per second. The result is a UI that feels janky and unresponsive.
It’s an interesting problem: the most important part of this UI is the set of figures on the left showing what the shadows look like. We want this part to update immediately in response to the user’s tweaks, so that they can understand the effect of their changes. We also want the controls themselves to feel snappy and responsive.
\\nThe code snippet, on the other hand, doesn’t really need to be updated dozens of times a second; the user only cares about the code at the end, when they’re ready to copy it over to their application. By recalculating it on every change, the entire user experience is degraded.
\\nPut another way, this UI has high-priority and low-priority areas:
\\nWe want the high-priority stuff to update in real-time, as quickly as possible. But the low-priority stuff should be put on the back burner.
\\nMy original solution to this problem used a technique known as “throttling”. Essentially, I restricted this component so that it could only re-render once every 200 milliseconds.
\\nHere’s what this looked like:
\\n\\nNotice that the code snippet updates much less frequently than the other parts of the UI? It will only update every 200 milliseconds, 5 times per second, while the rest of the UI can re-render as often as necessary.
\\nThis is better, but it\'s far from a perfect solution.
\\nIt still feels a bit laggy / janky; users won’t understand that we’re intentionally slowing down part of the UI!
\\nMore importantly, people use a wide variety of devices, from super-powerful modern computers to ancient low-end Android phones. If the user’s device is fast enough, the throttle is unnecessary, and we’re just slowing things down for no reason. On the other hand, if the device is really slow, 200ms might not be sufficient, and the important parts of the UI will still get janked up.
\\nThis is exactly the sort of problem that useDeferredValue
can help with.
useDeferredValue
is a React hook that allows us to split our UI into high-priority and low-priority areas. It works by allowing React to interrupt itself when something important happens.
To help us understand how this works, let’s start with a simpler example. Consider this code:
function App() {
  const [count, setCount] = React.useState(0);

  return (
    <>
      <ImportantStuff count={count} />
      <SlowStuff count={count} />

      <button onClick={() => setCount(count + 1)}>
        Increment
      </button>
    </>
  );
}
Our piece of state is count
, a number which can be incremented by clicking a button. ImportantStuff
represents our high-priority part of the UI. We want this to update right away, whenever count
changes. SlowStuff
represents the less-important part of the UI.
Whenever the user clicks the button to increment count
, React has to re-render both of these child components before the UI can be updated.
Let’s analyze this. Click the button below to see this render in action:
\\nThe UI in this demo is a video, showing a recorded interaction. You can scrub through this timeline to see exactly what the UI looked like at any moment in time. Notice that the render starts when the button is clicked, but the UI doesn’t update until the render has completed.
\\nThis render represents the entire chunk of work that React has to do, rendering both ImportantStuff
and SlowStuff
. Click/tap this render snapshot to peek inside:
In this hypothetical example, ImportantStuff
renders super quickly. The bulk of the time is spent rendering SlowStuff
.
If the user clicks the button too quickly, our renders will “pile up”, since React isn\'t able to finish the job before the next update happens. This leads to janky UI:
\\nBefore that first render (count: 1
) has finished, the user clicks the button again, setting count
to 2
. React abandons that render and starts a new one with the correct value for count
. The UI only gets updated when a render successfully completes.
Now, given all that context, let’s see how useDeferredValue
helps us solve this problem.
Here’s the code:
function App() {
  const [count, setCount] = React.useState(0);
  const deferredCount = React.useDeferredValue(count);

  return (
    <>
      <ImportantStuff count={count} />
      <SlowStuff count={deferredCount} />

      <button onClick={() => setCount(count + 1)}>
        Increment
      </button>
    </>
  );
}
For the initial render, count
and deferredCount
are the exact same value (0
). When the user clicks the “Increment” button, though, something interesting happens:
Each render now shows the value for count
as well as the deferred value we pass to <SlowStuff>
. If there isn’t enough space to include their labels, the timeline instead shows count
and deferredCount
separated by a line.
In a moment, I’ll explain exactly what’s going on here, but first I’d encourage you to spend a few moments poking at this. Can you make sense of what React is doing, and why it might be helpful?
The timeline is your friend. You can scrub through the video by clicking/pressing and dragging the yellow time indicator. Alternatively, you can focus it and use the left/right arrow keys, to move one frame at a time.
Alright, let’s unpack this. When the count
state changes, the App
component re-renders immediately. count
is now equal to 1
, but interestingly, deferredCount
hasn’t changed. It still resolves to the previous value of 0
.
This means that SlowStuff
receives the exact same props that it did in the previous render. If it’s been memoized with React.memo()
, it won’t bother re-rendering, since React already knows what would be produced. It’s able to re-use the stuff from the first render.
Right after that render finishes, a second re-render is started, except now, deferredCount
has been updated to match count
’s value of 1
. This means that SlowStuff
will re-render this time. When all is said and done, the UI has been fully updated.
Why go through all that song and dance?? You might be thinking that this seems unnecessarily complicated, that it\'s a lot of work to wind up in the same place as before.
\\nHere’s why this is so clever: If React gets interrupted by another state change, the important stuff has already been updated. React can abandon the less-important second render, and start work immediately on the more-important part.
\\nThis is hard to describe in words, but hopefully this recording will make it clearer:
\\nLike we saw earlier, the user is clicking too fast for React to finish updating everything in time. But, because each re-render is split into high-priority and low-priority parts, React is still able to update the important part of the UI between clicks. When those extra clicks happen, React abandons its work-in-progress, but that’s fine since that work was low-priority.
\\nThis is tricky business. If you’re feeling a bit overwhelmed, the next section should help, as we explore the underlying mechanism that allows this to work.
\\nAn important thing to note: useDeferredValue
only works when the slow / low-priority component has been wrapped with React.memo()
:
import React from 'react';

function SlowComponent({ count }) {
  // Component stuff here
}

export default React.memo(SlowComponent);
React.memo()
instructs React to only re-render this component when its props/state changes. Without React.memo()
, SlowComponent
would re-render whenever its parent component re-renders, regardless of whether the count
prop has changed or not.
This is a really important thing to understand, so let’s go over it in a bit more depth. As a reminder, this is the relevant code:
function App() {
  const [count, setCount] = React.useState(0);
  const deferredCount = React.useDeferredValue(count);

  return (
    <>
      <ImportantStuff count={count} />
      <SlowStuff count={deferredCount} />

      <button onClick={() => setCount(count + 1)}>
        Increment
      </button>
    </>
  );
}
When the user clicks the button for the first time, the count
state will increment from 0
to 1
. The App
component will re-render, but the useDeferredValue
hook will re-use the previous value. deferredCount
will be assigned to 0
, not 1
.
The default behaviour in React is for all child components to be re-rendered, regardless of whether their props have changed or not. Without React.memo()
, both ImportantStuff
and SlowStuff
would re-render, and we wouldn\'t get any benefit from useDeferredValue
.
When we wrap SlowStuff
with React.memo()
, React will check to see if a re-render is actually necessary by comparing the current props with the previous ones. And since deferredCount
is still 0
, React says “Ok, nothing new here. This chunk of the UI doesn’t have to be recalculated”.
This was the lightbulb moment for me. useDeferredValue
allows us to postpone rendering the low-priority parts of our UI, pushing that work down the road like a really boring homework assignment. Eventually, that work will be done, and the UI will be fully updated. But it\'s on the back burner; whenever the state changes, React abandons that work and focuses on the more important stuff.
I recognize that I\'m assuming a lot of knowledge here about how React renders work. If your head is spinning, I have another blog post that should help a lot!
So many React developers have misconceptions around how rendering in React works, so hopefully this post clarifies some things, and provides the necessary context for understanding useDeferredValue
.
So, we’ve seen how useDeferredValue
works with a single primitive value like count
. But things in the real world are rarely so simple.
In my “Shadow Palette Generator”, I have several pieces of relevant state:
\\nfunction ShadowPaletteGenerator() {\\n const [oomph, setOomph] = React.useState(0.5);\\n const [crispy, setCrispy] = React.useState(0.5);\\n const [background, setBackground] = React.useState(\'#F00\')\\n const [tint, setTint] = React.useState(true);\\n const [resolution, setResolution] = React.useState(0.75);\\n const [lightPosition, setLightPosition] = React.useState({\\n x: -0.2,\\n y: -0.5,\\n });\\n\\n const cssCode = generateShadows(oomph, crispy, background, tint, resolution, lightPosition);\\n\\n return (\\n <>\\n {/* Other stuff omitted for brevity */}\\n\\n <CodeSnippet lang=\\"css\\" code={cssCode} />\\n </>\\n );\\n}
My initial thought was that I\'d need to create a deferred value for each one:
\\nconst deferredOomph = React.useDeferredValue(oomph);\\nconst deferredCrispy = React.useDeferredValue(crispy);\\nconst deferredBg = React.useDeferredValue(background);\\nconst deferredTint = React.useDeferredValue(tint);\\nconst deferredResolution = React.useDeferredValue(resolution);\\nconst deferredLight = React.useDeferredValue(lightPosition);
I could do this, but there’s a simpler option. I can defer the derived value, the chunk of CSS code generated within the render:
\\nconst [oomph, setOomph] = React.useState(0.5);\\nconst [crispy, setCrispy] = React.useState(0.5);\\nconst [background, setBackground] = React.useState(\'#F00\')\\nconst [tint, setTint] = React.useState(true);\\nconst [resolution, setResolution] = React.useState(0.75);\\nconst [lightPosition, setLightPosition] = React.useState({\\n  x: -0.2,\\n  y: -0.5,\\n});\\n\\nconst cssCode = generateShadows(oomph, crispy, background, tint, resolution, lightPosition);\\n\\nconst deferredCssCode = React.useDeferredValue(cssCode);\\n\\nreturn (\\n  <>\\n    {/* Other stuff omitted for brevity */}\\n\\n    <CodeSnippet lang=\\"css\\" code={deferredCssCode} />\\n  </>\\n);
The hook is called useDeferredValue
, not useDeferredState
. There’s no rule that says the value has to be a state variable!
This is why it’s so important to understand the underlying mechanism here. The critical thing is that our low-priority component (CodeSnippet
in this case) doesn’t receive new values for any of its props during the high-priority render.
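Tying this back to the earlier point: the low-priority component still needs to be memoized for any of this to pay off. Here's a minimal sketch, assuming CodeSnippet is a component we control (the implementation details are made up):

import React from 'react';

function CodeSnippet({ lang, code }) {
  // Pretend the expensive work happens here (eg. syntax highlighting)
  return <pre className={`language-${lang}`}>{code}</pre>;
}

// Memoized, so it only re-renders when `lang` or `code` actually change.
// During the high-priority render, `deferredCssCode` hasn't changed,
// so React can skip this component entirely.
export default React.memo(CodeSnippet);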
In some cases, we might want to make it clear to the user when parts of the UI are stale, so that they know that a re-calculation is in progress.
\\nFor example, maybe we could do something like this:
\\n\\nWhile <SlowStuff>
is out of date, we make it semi-transparent and include a little spinner. That way, the user knows that this part of the UI is recalculating.
But hm, how can we tell whether part of the UI is stale or not? It turns out that we already have all the tools we need for this!
\\nHere’s the code:
\\nfunction App() {\\n const [count, setCount] = React.useState(0);\\n const deferredCount = React.useDeferredValue(count);\\n\\n const isBusyRecalculating = count !== deferredCount;\\n\\n return (\\n <>\\n <ImportantStuff count={count} />\\n <SlowWrapper\\n style={{ opacity: isBusyRecalculating ? 0.5 : 1 }}\\n >\\n <SlowStuff count={deferredCount} />\\n\\n {isBusyRecalculating && <Spinner />}\\n </SlowWrapper>\\n\\n <button onClick={() => setCount(count + 1)}>\\n Increment\\n </button>\\n </>\\n );\\n}
We can tell whether the UI is stale or not by comparing count
and deferredCount
.
When I first saw this, I thought it was suspiciously simple. But when I really thought about it, it made sense:
- During the first, high-priority re-render, deferredCount reuses the previous value. count gets updated to 1, but deferredCount is still 0. The values are different.
- During the second, low-priority re-render, deferredCount is updated to the current value, 1. Both count and deferredCount point to the same value.

The same mechanism that allows us to skip rendering <SlowStuff>
on the first render also allows us to tell that the UI isn\'t fully in sync yet. Pretty cool, right?
Now, whether we actually want to do this is another matter. I tested it out on my Shadow Palette Generator:
\\n\\nPersonally, I don’t think that this is an improvement in this case. It draws the user’s attention to the code snippet when it should stay fixed on the shadow figures.
\\nDepending on the context, though, this could be a really useful way to make sure users know that part of the UI is stale!
\\nA couple of weeks ago, React 19 entered beta. This upcoming major version is tweaking a bunch of stuff, and useDeferredValue
is getting a nice new lil’ superpower!
Before React 19, useDeferredValue
would get initialized to the supplied value:
function App() {\\n const [count, setCount] = React.useState(0);\\n const deferredCount = React.useDeferredValue(count);\\n\\n // On the initial render:\\n console.log(deferredCount); // 0\\n console.log(count === deferredCount); // true\\n}
React doesn’t do the \\"double render\\" thing we’ve been talking about because React doesn’t have a previous value it can use. And so, effectively, useDeferredValue
has no effect for the first render.
Starting in React 19, we can specify an initial value:
\\nconst deferredCount = React.useDeferredValue(count, initialValue);
Why would we want to do this?? This pattern will allow us to potentially speed up the initial render.
\\nFor example, in our Shadow Palette Generator example, I could do something like this:
\\nconst cssCode = generateShadows(oomph, crispy, background, tint, resolution, lightPosition);\\n\\nconst deferredCssCode = React.useDeferredValue(\\n  cssCode,\\n  null\\n);\\n\\nreturn (\\n  <>\\n    {/* Other stuff omitted for brevity */}\\n\\n    {deferredCssCode !== null && (\\n      <CodeSnippet lang=\\"css\\" code={deferredCssCode} />\\n    )}\\n  </>\\n);
During the quick high-priority render, deferredCssCode
will be null
, and so we won’t even render <CodeSnippet>
. Immediately after that quick render, however, this component automatically re-renders, filling in that slot with the code.
This should allow the application as a whole to become responsive more quickly, since we don’t have to wait for less-important parts of the UI.
\\nIn this tutorial, we’ve touched on one of the main use cases for useDeferredValue
, but it can also be useful in some other situations, like when working with Suspense-enabled data-fetching libraries. You can learn more in the official React docs(opens in new tab).
Alright, so with the useDeferredValue
hook in place, check out what the end result looks like:
So good! Everything is butter smooth. 💯
\\nBut hang on, I\'m testing this on a high-end MacBook Pro. What is the experience like on a lower-end device?
\\nA few years ago, I went into my local computer store and asked to buy the cheapest new Windows laptop they had. They dug up a US$110 Intel Celeron Acer laptop. Here’s how it runs on this machine, with useDeferredValue
implemented:
It’s not as smooth, but for a machine that struggles to open its own Start menu, this is pretty great! Notice that the code snippet doesn’t update until I\'ve finished interacting with the controls. useDeferredValue
is helping us a ton here.
Like so much in React, useDeferredValue
seems really complex unless you have the right mental model. Over the years, React has become a very sophisticated tool, and if we want to use it effectively, we need to develop an intuition for how it works.
I spent nearly two years creating the ultimate resource for learning React. It\'s called The Joy of React(opens in new tab). It covers everything I’ve learned after nearly a decade of professional React experience.
\\nIf you found this blog post helpful, you’ll get so much out of my course. The course is optimized for “lightbulb moments”, building a robust mental model for how React works, and how you can use it to build rich, dynamic web applications.
\\nThanks so much for reading! 💖
May 13th, 2024
On May 4th, 2023, Vercel announced the stable release of Next 13.4, becoming the first React framework to be built on top of React Server Components.
\\nThis is a big deal! RSC (React Server Components) gives us an official way to write server-exclusive code in React. It opens a lot of interesting new doors, as I wrote about in my blog post, Making Sense of RSC.
\\nBut you can\'t make an omelette without cracking a few eggs. RSC is a fundamental change to how React works, and some of the libraries and tools we\'ve been using have gotten… scrambled 😅. For those of us who use libraries like styled-components/Emotion, there hasn’t been a clear path forward.
\\nOver the past few months, I’ve been digging into this, building an understanding of the compatibility issues, and learning about what the options are. At this point, I feel like I have a pretty solid grasp on the whole situation. I’ve also discovered some pretty exciting developments that have been flying under the radar. ✨
\\nIf you use a CSS-in-JS library, my hope is that this blog post will help clear away a lot of confusion, and give you some practical options for what to do.
\\nWhen this discussion comes up online, one of the most common suggestions is to switch to a different CSS tool. After all, there are no shortage of options in the React ecosystem!
For many of us, though, this isn\'t a practical suggestion. I have more than 5,000 styled components across my blog and course platform. Migrating to an entirely different tool is much easier said than done.
And honestly, even if I could snap my fingers and swap in a totally different library, I wouldn\'t want to. I really like the styled
API!
Later in this blog post, we will discuss some alternative CSS libraries, but we’ll focus on options with similar APIs to styled-components.
In order to understand the compatibility issue, we need to understand React Server Components. Before we can talk about that, though, we need to make sure that we understand Server Side Rendering (SSR).
\\nSSR is an umbrella term that comprises several different strategies and implementations, but the most typical version of it looks like this:
\\nReact needs to run on the user’s device to handle interactivity. The HTML generated by the server is totally static; it won\'t include any event handlers we’ve written (eg. onClick
), or capture any refs we’ve specified (with the ref
attribute).
OK, but why does it have to re-do all of the exact same work?? When React boots up on the user’s device, it’ll discover a bunch of pre-existing UI, but it won\'t have any context about it, such as which component owns each HTML element. React needs to perform the exact same work in order to reconstruct the component tree, so that it can wire up the existing HTML correctly, attaching event handlers and refs to the correct elements. React needs to draw itself a map so that it can pick up where the server left off.
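To make that flow a bit more concrete, here's a rough sketch of the typical SSR plumbing, assuming a root App component (in a real framework, the server and client halves live in separate entry points):

import { renderToString } from 'react-dom/server';
import { hydrateRoot } from 'react-dom/client';
import App from './App'; // hypothetical root component

// 1. On the server: generate static HTML for the incoming request.
const html = renderToString(<App />);

// 2. On the client: the exact same component code runs again.
// React rebuilds its component tree, "adopts" the existing HTML,
// and attaches event handlers and refs along the way.
hydrateRoot(document.querySelector('#root'), <App />);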
\\nThere\'s one big limitation with this model. All of the code we write will be executed on the server and the client. There\'s no way to create components that render exclusively on the server.
\\nLet’s suppose we\'re building a full-stack web application, with data in a database. If you were coming from a language like PHP, you might expect to be able to do something like this:
\\nfunction Home() {\\n const data = db.query(\'SELECT * FROM SNEAKERS\');\\n\\n return (\\n <main>\\n {data.map(item => (\\n <Sneaker key={item.id} item={item} />\\n ))}\\n </main>\\n );\\n}
In theory, this code could work just fine on the server, but that exact same code will be re-executed on the user’s device, which is a problem, since client-side React won’t have access to our database. There’s no way to tell React “Run this code only on the server, and re-use the resulting data on the client”.
\\nMeta-frameworks built on top of React have come up with their own solutions. For example, in Next.js, we can do this:
\\nexport async function getServerSideProps() {\\n const data = await db.query(\'SELECT * FROM SNEAKERS\');\\n\\n return {\\n props: {\\n data,\\n },\\n };\\n}\\n\\nfunction Home({ data }) {\\n return (\\n <main>\\n {data.map(item => (\\n <Sneaker key={item.id} item={item} />\\n ))}\\n </main>\\n );\\n}
The Next.js team said “Alright, so the exact same React code has to run on the server and client… but we can add some extra code, outside of React, that only runs on the server!”.
\\nWhen the Next.js server receives a request, it\'ll first invoke the getServerSideProps
function, and whatever it returns will be fed in as props to the React code. The exact same React code runs on the server and client, so there’s no problem. Pretty clever, right?
I\'m honestly a pretty big fan of this approach, even today. But it does feel a bit like a workaround, an API created out of necessity because of a React limitation. It also only works at the page level, at the very top of each route; we can\'t pop a getServerSideProps
function anywhere we want.
React Server Components provides a more intuitive solution to this problem. With RSC, we can do database calls and other server-exclusive work right in our React components:
\\nasync function Home() {\\n const data = await db.query(\'SELECT * FROM SNEAKERS\');\\n\\n return (\\n <main>\\n {data.map(item => (\\n <Sneaker key={item.id} item={item} />\\n ))}\\n </main>\\n );\\n}
In the “React Server Components” paradigm, components are Server Components by default. A Server Component runs exclusively on the server. This code will not re-run on the user\'s device; the code won\'t even be included in the JavaScript bundle!
\\nThis new paradigm also includes Client Components. A Client Component is a component that runs on both the server and client. Every React component you\'ve ever written in “traditional” (pre-RSC) React is a Client Component. It\'s a new name for an old thing.
\\nWe opt in to Client Components with a new \\"use client\\"
directive at the top of the file:
\'use client\';\\n\\nfunction Counter() {\\n const [count, setCount] = React.useState(0);\\n\\n return (\\n <button onClick={() => setCount(count + 1)}>\\n Count: {count}\\n </button>\\n );\\n}
This directive creates a “client boundary”; all components in this file, as well as any components they import, will be treated as Client Components, running first on the server and again on the client.
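To make that boundary a bit more concrete, here's a sketch of a Server Component rendering that Counter (the file names and db module are made up for illustration):

// ProductPage.js: no 'use client' directive, so this is a Server Component.
import db from './database';     // hypothetical server-only module
import Counter from './Counter'; // the file above, which starts with 'use client'

export default async function ProductPage() {
  // Runs only on the server; this code is never shipped to the browser.
  const product = await db.query('SELECT * FROM PRODUCTS LIMIT 1');

  return (
    <article>
      <h1>{product.name}</h1>
      {/* Crossing the client boundary: Counter renders on the server
          for the initial HTML, then again on the client. */}
      <Counter />
    </article>
  );
}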
\\nUnlike other React features (eg. hooks), React Server Components requires deep integration with the bundler. As I write this in April 2024, the only practical way to use React Server Components is with Next.js, though I expect this to change in the future.
\\nThe key thing to understand about Server Components is that they don\'t provide the “full” React experience. Most React APIs don\'t work in Server Components.
\\nFor example, useState
. When a state variable changes, the component re-renders, but Server Components can't re-render; their code is never even sent to the browser, and so React would have no idea how to process a state change. From React's perspective, any markup generated by Server Components is set in stone and cannot be changed on the client. (To be clear, the DOM itself isn't immutable; we can still modify it with plain JavaScript. But React won't be able to re-render the parts of the application generated by Server Components.)
Similarly, we can\'t use useEffect
inside Server Components because effects don’t run on the server, they only run after renders on the client. And since Server Components are excluded from our JavaScript bundles, client-side React would never know that we’d written a useEffect
hook.
Even the useContext
hook can\'t be used inside Server Components, because the React team hasn\'t yet solved the problem of how React Context can be shared across both Server Components and Client Components.
Here’s how I look at it: Server Components aren\'t really React components, at least not as we\'ve traditionally understood them. They\'re much more like PHP templates, rendered by a server to create the original HTML. The real innovation is that Server Components and Client Components can co-exist in the same application!
\\nIn this blog post, I want to focus on the most pertinent details of React Server Components, the stuff we need to know in order to understand the compatibility issues with CSS-in-JS frameworks.
If you\'d like to learn more about React Server Components, though, I have a separate blog post that explores this new world in much more depth:
Alright, so we\'ve covered the fundamentals of React Server Components. Now let\'s talk about the fundamentals of CSS-in-JS libraries like 💅 styled-components!
\\nHere\'s a quick example:
\\nimport styled from \'styled-components\';\\n\\nexport default function Homepage() {\\n return (\\n <BigRedButton>\\n Click me!\\n </BigRedButton>\\n );\\n}\\n\\nconst BigRedButton = styled.button`\\n font-size: 2rem;\\n color: red;\\n`;
Instead of putting our CSS in a class like .red-btn
, we instead attach that CSS to a freshly-generated React component. This is what makes styled-components special; components are the reusable primitive, not classes.
styled.button
is a function that dynamically generates a new React component for us, and we’re assigning that component to a variable called BigRedButton
. We can then use that React component the same way we’d use any other React component. It\'ll render a <button>
tag that has big red text.
But how exactly does the library apply this CSS to this element? We have three main options (styles can also be added dynamically through the CSS Object Model (CSSOM), but this isn't an option during SSR, so I'm not including it here):

- Inline, using the style attribute.
- In an external stylesheet, loaded with a <link> tag.
- In a <style> tag, typically in the <head> of the current HTML document.

If we run this code and inspect the DOM, the answer is revealed:
\\n<html>\\n <head>\\n <style data-styled=\\"active\\">\\n .abc123 {\\n font-size: 2rem;\\n color: red;\\n }\\n </style>\\n </head>\\n\\n <body>\\n <button className=\\"abc123\\">\\n Click me!\\n </button>\\n </body>\\n</html>
styled-components will write the provided styles to a <style>
tag that the library manages. In order to connect those styles to this particular <button>
, it generates a unique class name, \\"abc123\\"
.
All of this work first happens during the initial React render:
- The <style> tag is dynamically generated on the user's device, just like all of the DOM nodes that React creates.
- As each styled component renders, its styles are injected into that <style> tag.

As the user interacts with our application, certain styles might need to be created, modified, or destroyed. For example, suppose we have a conditionally-rendered styled-component:
\\nfunction Header() {\\n const user = useUser();\\n\\n return (\\n <>\\n {user && (\\n <SignOutButton onClick={user.signOut}>\\n Sign Out\\n </SignOutButton>\\n )}\\n </>\\n );\\n}\\n\\nconst SignOutButton = styled.button`\\n color: white;\\n background: red;\\n`;
Initially, if user
is undefined, <SignOutButton>
won't be rendered, and so none of these styles will exist. Later, if the user signs in, our application will re-render, and styled-components kicks into gear, injecting these styles into the <style>
tag.
Essentially, every styled component is a regular React component, but with an extra lil’ side effect: they also render their styles to a <style>
tag.
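Here's a heavily simplified sketch of that side effect. This is not the library's actual code, just the general shape of the idea:

// Not styled-components' real implementation, just a conceptual sketch.
let styleTag = null;

function injectRule(className, css) {
  // Lazily create the <style> tag that we manage:
  if (!styleTag) {
    styleTag = document.createElement('style');
    styleTag.setAttribute('data-styled', 'active');
    document.head.appendChild(styleTag);
  }

  // Append a rule scoped to the generated class, eg. ".abc123 { … }"
  styleTag.appendChild(
    document.createTextNode(`.${className} { ${css} }`)
  );
}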
For our purposes today, this is the most important takeaway, but if you\'d like to drill deeper into the inner workings of the library, I wrote a blog post all about it called “Demystifying styled-components”.
\\nTo summarize what we\'ve learned so far:
- Server Components run exclusively on the server; their code never reaches the browser, and they can't use hooks like useState, useEffect, or useContext.
- styled-components does its work at runtime, injecting styles into a <style> tag that gets updated as components re-render.

The fundamental incompatibility is that styled-components are designed to run in-browser, whereas Server Components never touch the browser.
\\nInternally, styled-components makes heavy use of the useContext
hook. It\'s meant to be tied into the React lifecycle, but there is no React lifecycle for Server Components. And so, if we want to use styled-components in this new “React Server Components” world, every React component that renders even a single styled-component needs to become a Client Component.
I don’t know about you, but it\'s pretty rare for me to have a React component that doesn’t include any styling. I\'d estimate that 90%+ of my component files use styled-components. Most of these components are otherwise totally static; they don\'t use state or any other Client Component features.
\\nThis is a bummer, since it means we’re not able to take full advantage of this new paradigm… but it’s not actually the end of the world.
\\nIf I could change one thing about React Server Components, it would be the name “Client Component”. This name implies that these components only render on the client, but that’s not true. Remember, “Client Component” is a new name for an old thing. Every React component in every React application created before May 2023 is a Client Component.
\\nAnd so, even if only 10% of the components in a styled-components application can become Server Components, that’s still an improvement! Our applications will still become a bit lighter and faster than they were in a pre-RSC world. We still get all the benefits of SSR. That hasn’t changed.
\\nYou might be wondering why the maintainers of styled-components / Emotion haven\'t updated their libraries to become compatible with React Server Components. We’ve known this was coming for over a year, why haven\'t they found a solution yet??
The styled-components maintainers are currently blocked by missing APIs in React. Specifically, React hasn\'t provided a RSC-friendly alternative to Context, and styled-components needs some way to share data between components, in order to correctly apply all of the styles during Server Side Rendering.
A few weeks ago, I did some pretty deep exploration(opens in new tab), and honestly, I have a hard time imagining how this could ever work without React Context. As far as I can tell, the only solution would be to completely rewrite the whole library to use an entirely different approach. Not only would this cause significant breaking changes, it\'s also a completely unreasonable thing to expect a team of volunteer open-source maintainers to do.
If you\'re curious to learn more, there\'s a styled-components Github issue(opens in new tab) which explains what the blockers are. I’ve seen similar discussions in the Emotion repo as well.
So far, the story has been kinda grim. There is a fundamental incompatibility between React Server Components and styled-components, and the library maintainers haven’t been given the tools they’d need to add support.
\\nFortunately, the React community hasn\'t been sleeping on this issue! Several libraries are being developed which offer a styled-components-like API, but with full compatibility with React Server Components! ✨
\\nInstead of being tied into the React lifecycle, these tools have taken a different approach; all of the processing is done at compile-time.
\\nModern React applications have a build step, where we turn TypeScript/JSX into JavaScript, and package thousands of individual files into a handful of bundles. This work is done when our application is deployed, before it starts running in production. Why not process our styled components during this step, instead of at runtime?
\\nThis is the core idea behind all of the libraries we’ll discuss in this section. Let\'s dive in!
\\nLinaria(opens in new tab) was created all the way back in 2017. It\'s almost as old as styled-components!
\\nThe API looks identical to styled-components:
\\nimport { styled } from \'@linaria/react\';\\n\\nexport default function Homepage() {\\n  return (\\n    <BigRedButton>\\n      Click me!\\n    </BigRedButton>\\n  );\\n}\\n\\nconst BigRedButton = styled.button`\\n  font-size: 2rem;\\n  color: red;\\n`;
Here’s the really clever bit: during the compile step, Linaria transforms this code, and moves all of the styles into CSS Modules(opens in new tab).
\\nAfter running Linaria, the code would look something like this:
\\n/* /components/Home.module.css */\\n.BigRedButton {\\n font-size: 2rem;\\n color: red;\\n}
/* /components/Home.js */\\nimport styles from \'./Home.module.css\';\\n\\nexport default function Homepage() {\\n return (\\n <button className={styles.BigRedButton}>\\n Click me!\\n </button>\\n );\\n}
If you’re not already familiar with CSS Modules, it\'s a lightweight abstraction over CSS. You can pretty much treat it as plain CSS, but you don\'t have to worry about globally-unique names. During the compile step, right after Linaria works its magic, generic names like .BigRedButton
are transformed into unique ones like .abc123
.
The important thing is that CSS Modules are already widely-supported. It\'s one of the most popular options out there. Meta-frameworks like Next.js already have first-class support for CSS Modules.
\\nAnd so, rather than reinvent the wheel and spend years building a robust production-ready CSS solution, the Linaria team decided to take a shortcut. We can write styled-components, and Linaria will pre-process them into CSS Modules, which will then be processed into plain CSS. All of this happens at compile-time.
\\nLong before RSC was a thing, the community has been building compile-time libraries like Linaria. The performance advantages are compelling: styled-components adds 11 kilobytes (gzip) to our JavaScript bundle, but Linaria adds 0kb, since all the the work is done ahead of time. Additionally, server-side rendering gets a bit quicker, since we don\'t have to spend any time collecting and applying styles.
That said, the styled-components runtime isn’t just dead weight. We can do things in styled-components that just aren’t possible at compile-time. For example, styled-components can dynamically update the CSS when some React state changes.
Fortunately, CSS has gotten a lot more powerful in the near-decade since styled-components was first created. We can use CSS variables to handle most dynamic use cases. These days, having a runtime can offer a slightly-nicer developer experience in some situations, but in my opinion, it isn’t really necessary anymore.
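Here's a sketch of what that looks like in practice (the component and prop names are made up, and I'm using a styled-components-style import, but the same pattern carries over to the compile-time libraries): instead of interpolating a prop into the CSS at runtime, we pass the dynamic bit through a CSS variable, so the CSS itself stays static:

import styled from 'styled-components';

// Runtime interpolation: needs a runtime, since the CSS changes per render.
const RuntimeBar = styled.div`
  width: ${(props) => props.progress}%;
`;

// CSS-variable version: the CSS is fully static, so it can be extracted
// at compile time. Only the variable's value changes.
const StaticBar = styled.div`
  width: var(--progress);
`;

// React happily accepts custom properties in inline styles:
function Example({ progress }) {
  return <StaticBar style={{ '--progress': `${progress}%` }} />;
}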
This does mean that Linaria and other compile-time CSS-in-JS libraries won\'t truly be drop-in replacements for styled-components/Emotion. We’ll have to spend some time reworking dynamic components. But this is a tiny fraction of the work compared to switching to an entirely different CSS tool.
So, should we all migrate our styled-components applications to Linaria?
\\nUnfortunately, there’s a problem. While Linaria itself is actively maintained, there are no official bindings for Next.js, and getting Linaria to work with Next.js is non-trivial.
\\nThe most popular integration, next-linaria(opens in new tab), hasn’t been updated in 3 years, and isn’t compatible with the App Router / React Server Components. There is another option, next-with-linaria(opens in new tab), but it has a big warning about not using it in production. 😅
\\nSo, while this might be an option for adventurous developers, it’s not really something I feel comfortable recommending.
\\nPanda CSS(opens in new tab) is a modern CSS-in-JS library developed by the folks who built Chakra UI, a popular component library.
\\nPanda CSS comes with many different interfaces. You can use it like Tailwind, specifying shorthand classes like mb-5
. You can use it like Stitches, using variants and cva. Or, you can use it like styled-components.
Here’s what it looks like with the styled
API:
import { styled } from \'../styled-system/jsx\'\\n\\nexport default function Homepage() {\\n return (\\n <BigRedButton>\\n Click me!\\n </BigRedButton>\\n );\\n}\\n\\nconst BigRedButton = styled.button`\\n font-size: 2rem;\\n color: red;\\n`;
Like Linaria, Panda CSS compiles away, but it instead compiles to Tailwind-style utility classes. The end result would look something like this:
\\n/* /styles.css */\\n.font-size_2rem {\\n font-size: 2rem;\\n}\\n.color_red {\\n color: red;\\n}
/* /components/Home.js */\\nexport default function Homepage() {\\n return (\\n <button className=\\"font-size_2rem color_red\\">\\n Click me!\\n </button>\\n );\\n}
For every unique CSS declaration like color: red
, Panda CSS will create a new utility class in one central CSS file. This file would then be loaded on every route in our React application.
I really want to like Panda CSS. It\'s being developed by a solid team with lots of relevant experience, it offers a familiar API, and they even have a cute skateboarding panda mascot!
\\nAfter experimenting with it, though, I’ve discovered that it’s just not for me. Some of my issues are frivolous/superficial; for example, Panda CSS generates a bunch of stuff that clutters up the project files. This feels a bit messy to me, but ultimately it’s not a significant problem.
\\nThe bigger issue for me is that Panda CSS is missing a critical feature. We can’t cross-reference components.
\\nThis’ll be easier to explain with an example. On this blog, I have a TextLink
component, a styled wrapper around Next.js’s Link
component. By default, it looks like this:
This same component, however, has certain contextual styles. For example, when TextLink
is within Aside
, it looks like this:
I use this Aside
component for tangential / bonus bits of information. I found that the default TextLink
styles didn’t quite work in this context, and so I wanted to apply some overrides.
Here’s how we can express this relationship in styled-components:
\\nimport Link from \'next/link\';\\n\\nimport { AsideWrapper } from \'@/components/Aside\';\\n\\nconst TextLink = styled(Link)`\\n /* Default styles */\\n color: var(--color-primary);\\n text-decoration: none;\\n\\n /* Overrides, when TextLink is within a Aside */\\n ${AsideWrapper} & {\\n color: inherit;\\n text-decoration: underline;\\n }\\n`;
The ampersand character (&
) was recently added to the CSS language as part of the official nesting syntax(opens in new tab), but it’s been a convention in CSS preprocessors and tools for many years. In styled-components, it evaluates to the current selector.
Upon rendering this code, the generated CSS would look something like this:
\\n.textlink_abc123 {\\n color: var(--color-primary);\\n text-decoration: none;\\n}\\n\\n.aside_def456 .textlink_abc123 {\\n color: inherit;\\n text-decoration: underline;\\n}
When I work with CSS, I try to follow a rule: all of the styles for a particular component should be written in one place. I shouldn’t have to go on a scavenger hunt across the application to find all of the different CSS that could apply to a given element!
\\nThis is one of the things that is so powerful about Tailwind; all of the possible styles are colocated together, on the element itself. We don’t have to worry about some other component “reaching in” and styling an element it doesn’t own.
\\nThis pattern is like a supercharged version of that idea. Not only are we listing out all of the styles that apply to TextLink
by default, we’re also listing out the styles that apply contextually. And they’re all grouped together in one spot, between the backticks.
Sadly, this pattern doesn't work in Panda CSS. In Panda CSS, we uniquely identify the CSS declarations (a CSS declaration is a single property/value pair, like color: red;), not the elements themselves, and so there's no way to express these sorts of relationships.
If you’re not interested in this pattern, then Panda CSS might be a good option for your application! But for me, this is a dealbreaker.
\\nIf you’d like to learn more about this “contextual styles” pattern, I cover it in depth in my blog post, “The styled-components Happy Path”. It\'s a collection of patterns and tips I’ve learned after many years with styled-components.
One of the most popular React component libraries, Material UI(opens in new tab), is built on top of Emotion (the library also gives you the option to switch to styled-components). Their dev team has been grappling with all the same issues around RSC compatibility, and they’ve decided to do something about it.
\\nThey recently open-sourced a new library. It’s called Pigment CSS(opens in new tab). Its API should look pretty familiar at this point:
\\nimport { styled } from \'@pigment-css/react\';\\n\\nexport default function Homepage() {\\n return (\\n <BigRedButton>\\n Click me!\\n </BigRedButton>\\n );\\n}\\n\\nconst BigRedButton = styled.button`\\n font-size: 2rem;\\n color: red;\\n`;
Pigment CSS runs at compile-time, and it uses the same strategy as Linaria, compiling to CSS Modules. There are plugins for both Next.js and Vite.
\\nIn fact, it uses a low-level tool called WyW-in-JS(opens in new tab) (“What you Want in JS”). This tool evolved out of the Linaria codebase, isolating the “compile to CSS Modules” business logic and making it generic, so that libraries like Pigment CSS can build their own APIs on top of it.
\\nHonestly, this feels like the perfect solution to me. CSS Modules are already so battle-tested and heavily optimized. And from what I\'ve seen so far, Pigment CSS is awesome, with great performance and DX.
\\nThe next major version of Material UI will support Pigment CSS, with plans to eventually drop support for Emotion/styled-components altogether. As a result, Pigment CSS will become one of the most widely-used CSS-in-JS libraries. Material UI is downloaded ~5 million times per week on NPM, ~1/5th as much as React itself!
\\nIt\'s still very early; Pigment CSS was only open-sourced in March 2024. But the team is putting significant resources behind this project. I can’t wait to see how things develop!
\\nIn addition to the libraries we’ve covered so far, there are many more projects in the ecosystem that are doing interesting things. Here are some more projects I’m keeping an eye on:
\\nAlright, so we’ve explored a whole bunch of options, but the question remains: if you have a production application that uses a “legacy” CSS-in-JS library, what should you actually do?
\\nIt’s a bit counter-intuitive, but in many cases, I don’t actually think you need to do anything. 😅
\\nA lot of the discussion online makes it sound like you can’t use styled-components in a modern React / Next.js application, or that there\'s a huge performance penalty for doing so. But that’s not really true.
\\nI think a lot of folks are getting RSC (React Server Components) and SSR (Server Side Rendering) mixed up. Server Side Rendering still works exactly the same way as it always has, it isn’t affected by any of this stuff. Your application shouldn’t get slower if you migrate to Next’s App Router or another RSC implementation. In fact, it’ll probably get a bit faster!
\\nFrom a performance perspective, the main benefit with RSC and zero-runtime CSS libraries is TTI, “Time To Interactive”. This is the delay between the UI being shown to the user and the UI being fully interactive. If neglected, it can produce bad user experiences, where the user clicks a button and nothing happens because, behind the scenes, the application hasn\'t hydrated yet.
\\nAnd so, if your application takes a long time to hydrate right now, there’s a strong case to be made for migrating to a zero-runtime CSS library. But, if your app already has a solid TTI, your users likely won’t see any benefit from this migration.
\\nI feel like the biggest issue, in many cases, is FOMO. As developers, we want to use the latest and greatest tools. It\'s no fun adding a bunch of \\"use client\\"
directives, knowing we’re not benefitting as much from a new optimization. But is this really a compelling reason to undergo a big migration?
I maintain two primary production applications: this blog, and a course platform I use for my interactive courses (CSS for JavaScript Developers(opens in new tab) and The Joy of React(opens in new tab)).
\\nMy course platform is still using the Next.js Pages Router with styled-components, and I have no plans to migrate it any time soon. I\'m happy with the user experience it offers, and I don\'t believe there would be a significant performance benefit to migrating.
\\nMy blog also currently uses the Next.js Pages Router with styled-components, though I am in the process of migrating it to use the Next.js App Router. I\'ve chosen to use Linaria(opens in new tab) + next-with-linaria(opens in new tab), at least for now. When Pigment CSS is a bit more mature, I plan on switching over.
\\nReact Server Components is super cool. The React/Vercel teams have done an incredible job rethinking how React works on the server. But honestly, having embarked on one of these migrations myself, I’m not sure I would really recommend it for most production applications. Even though App Router has been marked as \\"stable\\", it\'s still nowhere near as mature as the Pages Router, and there are still some rough edges.
\\nIf you’re happy with your application’s performance, I don\'t think you should feel any urgency around updating/migrating ❤️. I\'d suggest giving it a year or two, to let the ecosystem mature a little bit.
\\nI’ve been using React for almost a decade now, and it\'s truly been my favourite way to build dynamic web applications. I spent a couple of years compiling everything I know about React into an interactive self-paced course called The Joy of React(opens in new tab).
In this course, we build a mental model of how React works, and I share the “happy practices” that have helped me become so productive with React. It even covers React Server Components and the Next.js App Router in depth, along with other modern features like Suspense and Streaming Server Side Rendering.
You can learn more here:
April 15th, 2024
For a long time, centering an element within its parent was a surprisingly tricky thing to do. As CSS has evolved, we\'ve been granted more and more tools we can use to solve this problem. These days, we\'re spoiled for choice!
\\nI decided to create this tutorial to help you understand the trade-offs between different approaches, and to give you an arsenal of strategies you can use, to handle centering in all sorts of scenarios.
\\nHonestly, this turned out to be way more interesting than I initially thought 😅. Even if you\'ve been using CSS for a while, I bet you\'ll learn at least 1 new strategy!
\\nThe first strategy we\'ll look at is one of the oldest. If we want to center an element horizontally, we can do so using margins set to the special value auto
:
.element {\\n max-width: fit-content;\\n margin-left: auto;\\n margin-right: auto;\\n}
First, we need to constrain the element\'s width; by default, elements in Flow layout will expand horizontally to fill the available space, and we can\'t really center something that is full-width.
\\nI could constrain the width with a fixed value (eg. 200px
), but really what I want in this case is for the element to shrinkwrap around its content. fit-content
is a magical value that does exactly this. Essentially, it makes “width” behave like “height”, so that the element’s size is determined by its contents.
Why am I setting max-width
instead of width
? Well, my goal is to stop the element from expanding horizontally. I want to clamp its maximum size. If I used width
instead, it would lock it to that size, and the element would overflow when the container is really narrow. If you drag that “Container Width” slider all the way to the left, you can see that the element shrinks with its container.
Now that our element is constrained, we can center it with auto margins.
\\nI like to think of auto margins like Hungry Hungry Hippos. Each auto margin will try to gobble up as much space as possible. For example, check out what happens if we only set margin-left: auto
:
.element {\\n max-width: fit-content;\\n margin-left: auto;\\n}
When margin-left
is the only side with auto margins, all of the extra space gets applied as margin to that side. When we set both margin-left: auto
and margin-right: auto
, the two hippos each gobble up an equal amount of space. This forces the element to the center.
Also: I\'ve been using margin-left
and margin-right
because they\'re familiar, but there\'s a better, more-modern way to do this:
.element {\\n max-width: fit-content;\\n margin-inline: auto;\\n}
margin-inline
will set both margin-left
and margin-right
to the same value (auto
). It has very good browser support(opens in new tab), having landed in all major browsers several years ago.
margin-inline
is more than just a convenient shorthand for margin-left
+ margin-right
. It\'s part of a collection of logical properties, designed to make it easier to internationalize the web.
In English, characters are written in a horizontal line from left to right. Those characters are composed into words and sentences, and assembled into “blocks” (paragraphs, headings, lists, etc). Blocks are stacked vertically, from top to bottom. We can think of this as the orientation of English-language websites.
This isn't universal, though! Some languages, like Arabic and Hebrew, are written from right to left. Other languages, like Chinese, have historically been written vertically, with characters running from top to bottom, and blocks running side to side. (Interestingly, this has been changing over the past few decades, and most East Asian languages these days are written horizontally from left to right, especially on the web.)
The primary goal of logical properties is to create an abstraction that sits above these differences. Instead of setting margin-left
for left-to-right languages and flipping it to margin-right
for right-to-left languages, we can instead use margin-inline-start
. The margin will automatically be applied to the correct side, depending on the page\'s language.
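For instance, a minimal sketch of the difference:

/* Physical property: always the left-hand side */
.element {
  margin-left: 16px;
}

/* Logical property: the "start" side of the inline axis.
   That's the left in English, but the right in Arabic or Hebrew. */
.element {
  margin-inline-start: 16px;
}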
Even though this centering method has been around forever, I still find myself reaching for it on a regular basis! It\'s particularly useful when we want to center a single child, without affecting any of its siblings (for example, an image in-between paragraphs in a blog post).
\\nLet\'s continue on our centering journey.
\\nFlexbox is designed to give us a ton of control when it comes to distributing a group of items along a primary axis. It offers some really powerful tools for centering!
\\nLet\'s start by centering a single element, both horizontally and vertically:
\\n.container {\\n display: flex;\\n justify-content: center;\\n align-items: center;\\n}
The really cool thing about Flexbox centering is that it works even when the children don’t fit in their container! Try shrinking the width/height, and notice that the element overflows symmetrically.
\\nIt also works for multiple children. We can control how they stack with the flex-direction
property:
.container {\\n display: flex;\\n flex-direction: row;\\n justify-content: center;\\n align-items: center;\\n gap: 4px;\\n}
Out of all the centering patterns we\'ll explore in this tutorial, this is probably the one I use the most. It\'s a great jack-of-all-trades, a great default option.
\\nSo far, we\'ve been looking at how to center an element within its parent container. But what if we want to center an element in a different context? Certain elements like dialogs, prompts, and GDPR banners need to be centered within the viewport.
\\nThis is the domain of positioned layout, a layout mode used when we want to take something out of flow and anchor it to something else.
\\nHere\'s what this looks like:
\\n.element {\\n position: fixed;\\n inset: 0px;\\n width: 12rem;\\n height: 5rem;\\n max-width: 100vw;\\n max-height: 100dvh;\\n margin: auto;\\n}
Of all the strategies we\'ll discuss, this one is probably the most complex. Let\'s break it down.
\\nWe\'re using position: fixed
, which anchors this element to the viewport. I like to think of the viewport like a pane of glass that sits in front of the website, like the window of a train that shows the landscape scrolling by. An element with position: fixed
is like a ladybug that lands on the window.
Next, we\'re setting inset: 0px
, which is a shorthand that sets top
, left
, right
, and bottom
all to the same value, 0px
.
With only these two properties, the element would stretch to fill the entire viewport, growing so that it\'s 0px from each edge. This can be useful in some contexts, but it\'s not what we\'re going for here. We need to constrain it.
\\nThe exact values we pick will vary on the specifics of each situation, but in general we want to set default values (with width
and height
), as well as max values (max-width
and max-height
), so that the element doesn\'t overflow on smaller viewports.
There\'s something interesting here: we\'ve set up an impossible condition. Our element can\'t be 0px from the left and 0px from the right and only 12rem wide (assuming the viewport is wider than 12rem). We can only pick 2:
\\n.element {\\n position: fixed;\\n width: 12rem;\\n}
The CSS rendering engine resolves this tension by prioritizing. It will listen to the width
constraint, since that seems important. And if it can\'t anchor to the left and the right, it\'ll pick an option based on the page\'s language; so, in a left-to-right language like English, it\'ll sit along the left edge.
But! When we bring our old friend margin: auto
into the equation, something interesting happens. It changes how the browser resolves the impossible condition; instead of anchoring to the left edge, it centers it.
And, unlike auto margins in Flow layout, we can use this trick to center an element both horizontally and vertically.
\\n.element {\\n position: fixed;\\n inset: 0px;\\n width: 12rem;\\n height: 5rem;\\n max-width: 100vw;\\n max-height: 100dvh;\\n margin: auto;\\n}
It\'s a lot to remember, but there are 4 key ingredients for this trick.
- position: fixed, to anchor the element to the viewport
- inset: 0px, to set up the "impossible condition"
- A constrained size (width and height, plus max values so the element doesn't overflow on small viewports)
- margin: auto, to distribute the leftover space evenly

We can use the same trick to center something in a single direction. For example, we can build a GDPR cookie banner that is horizontally centered, but anchored near the bottom of the viewport:
\\n.element {\\n position: fixed;\\n left: 0px;\\n right: 0px;\\n bottom: 8px;\\n width: 12rem;\\n max-width: calc(\\n 100vw - 8px * 2\\n );\\n margin-inline: auto;\\n}
We value your privacy data.
We use cookies to enhance your browser experience by selling this data to advertisers. This is extremely valuable.
By omitting top: 0px
, we remove the impossible condition in the vertical direction, and our banner is anchored to the bottom edge. As a nice touch, I used the calc
function to clamp the max width, so that there\'s always a bit of buffer around the element.
I also swapped margin: auto
for margin-inline: auto
, which isn\'t strictly necessary, but feels more precise.
The approach described above requires that we give our element a specific size, but what about when we don\'t know how big it should be?
\\nIn the past, we had to resort to transform hacks to accomplish this, but fortunately, our friend fit-content
can help here as well!
.element {\\n position: fixed;\\n inset: 0;\\n width: fit-content;\\n height: fit-content;\\n margin: auto;\\n}
This will cause the element to shrink around its contents. We can still set a max-width
if we\'d like to constrain it (eg. max-width: 60vw
), but we don\'t need to set a max-width; the element will automatically stay contained within the viewport.
The most terse way I know to center something both horizontally and vertically is with CSS Grid:
\\n.container {\\n display: grid;\\n place-content: center;\\n}
The place-content
property is a shorthand for both justify-content
and align-content
, applying the same value to both rows and columns. The result is a 1×1 grid with a cell right in the middle of the parent container.
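In other words, that shorthand expands to something like this:

.container {
  display: grid;
  justify-content: center; /* center the column track(s) horizontally */
  align-content: center;   /* center the row track(s) vertically */
}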
This solution looks quite a bit like our Flexbox solution, but it\'s important to keep in mind that it uses a totally different layout algorithm. In my own work, I\'ve found that the CSS Grid solution isn\'t as universally effective as the Flexbox one.
\\nFor example, consider the following setup:
\\n.container {\\n display: flex;\\n justify-content: center;\\n align-items: center;\\n}\\n\\n.element {\\n width: 50%;\\n height: 50%;\\n}
Weird, right? Why does the CSS Grid version get so teensy-tiny?!
\\nHere\'s the deal: the child element is given width: 50%
and height: 50%
. In Flexbox, these percentages are calculated based on the parent element, .container
, which is what we want.
In CSS Grid, however, the percentages are relative to the grid cell. We\'re saying that the child element should be 50% as wide as its column, and 50% as tall as its row.
\\nNow, we haven\'t actually given the row/column an explicit size; we haven\'t defined grid-template-columns
or grid-template-rows
. When we omit this information, the grid tracks will calculate their size based on their contents, shrinkwrapping around whatever is in each row/column.
The end result is that our grid cell is the same size as .element
’s original size, and then the element shrinks to 50% of that grid cell:
.container {\\n display: grid;\\n place-content: center;\\n}\\n\\n.element {\\n width: 50%;\\n height: 50%;\\n}
This is a whole rabbithole, and I don\'t want to get too far off track; my point is that CSS Grid is a sophisticated layout algorithm, and sometimes, the extra complexity gets in the way. We could add some more CSS to fix this code, but I think it\'s simpler to use Flexbox instead.
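For the curious, here's one way the Grid version could be patched, assuming the container has a definite height: explicitly sizing the lone row and column to fill the container makes the child's percentages resolve against the full area again.

.container {
  display: grid;
  /* Make the single cell fill the whole container,
     so percentages on the child resolve against it: */
  grid-template-columns: 100%;
  grid-template-rows: 100%;
  place-items: center;
}

.element {
  width: 50%;
  height: 50%;
}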
\\nCSS Grid gives us one more centering super-power. With CSS Grid, we can assign multiple elements to the same cell:
\\n.container {\\n display: grid;\\n place-content: center;\\n}\\n\\n.element {\\n grid-row: 1;\\n grid-column: 1;\\n}
We still have a 1×1 grid, except now we\'re cramming multiple children to sit in that cell with grid-row
/ grid-column
.
In case it\'s not clear, here\'s a quick sketch of the HTML for this kind of setup:
\\n<div class=\\"container\\">\\n <img class=\\"element\\" />\\n <img class=\\"element\\" />\\n <img class=\\"element\\" />\\n <img class=\\"element\\" />\\n</div>
In other layout modes, the elements would stack horizontally or vertically, but with this CSS Grid setup, the elements stack back-to-front, since they\'re all told to share the same grid space. Pretty cool, right?
\\nIncredibly, this can work even when the child elements are different sizes! Check this out:
\\n.container {\\n display: grid;\\n place-content: center;\\n place-items: center;\\n}\\n\\n.element {\\n grid-row: 1;\\n grid-column: 1;\\n}
In this demo, dashed red lines are added to show the grid row and column. Notice that they expand to contain the largest child; with all the elements added, the resulting cell is as wide as the pink skyline image, and as tall as the colourful space image!
\\nWe do need one more property to make this work: place-items: center
. place-items
is a shorthand for justify-items
and align-items
, and these properties control the alignment of the images within the grid cell.
Without this property, the grid cell would still be centered, but the images within that cell would all stack in the top-left corner:
\\n.container {\\n display: grid;\\n place-content: center;\\n place-items: center;\\n}\\n\\n.element {\\n grid-row: 1;\\n grid-column: 1;\\n}
This is pretty advanced stuff! You can learn more about how the CSS Grid layout mode works in a recent tutorial I published, An Interactive Guide to CSS Grid.
\\nText is its own special thing in CSS. We can\'t influence individual characters using the techniques explored in this post.
\\nFor example, if we try to center a paragraph with Flexbox, we\'ll center the block of text, not the text itself:
\\n.container {\\n display: flex;\\n justify-content: center;\\n align-items: center;\\n}
Lorem Ipsum is simply dummy text of the printing and typesetting industry. Lorem Ipsum has been the industry\'s standard dummy text ever since the 1500s.
Flexbox is centering the paragraph within the viewport, but it doesn\'t affect the individual characters. They remain left-aligned.
\\nWe need to use text-align
to center the text:
.container {\\n display: flex;\\n justify-content: center;\\n align-items: center;\\n text-align: center;\\n}
Lorem Ipsum is simply dummy text of the printing and typesetting industry. Lorem Ipsum has been the industry\'s standard dummy text ever since the 1500s.
Earlier, we saw how we can use auto margins to center an element horizontally in Flow layout. If we want that element to be centered vertically as well, we need to switch to a different layout mode, like Flexbox or Grid.
\\n…or do we?
\\nCheck this out:
\\n.container {\\n align-content: center;\\n}\\n\\n.element {\\n max-width: fit-content;\\n margin-inline: auto;\\n}
What the heck?? align-content
is a CSS Grid thing, but we aren\'t setting display: grid
here. How is this working?
One of the biggest epiphanies I\'ve ever had about CSS is that it\'s a collection of layout algorithms. The properties we write are inputs to those algorithms. align-content
was first implemented in Flexbox, and took on an even bigger role in CSS Grid, but it wasn\'t implemented in the default layout algorithm, Flow layout. Until now.
As I write this in early 2024, browser vendors are in the process of implementing align-content
in Flow layout, so that it controls the “block” direction alignment of content. It\'s still early days; this new behaviour is only available in Chrome Canary (behind a flag) and Safari Technical Preview.
(I should note, the demo above is fake. I got a feel for the new align-content
support in Chrome Canary and Safari TP, and then recreated the exact same behaviour using Flexbox. Sorry for the deception!)
As far as I can tell, this new option doesn\'t unlock any new doors, in terms of what sorts of UIs we can create. We can already reproduce the same behaviour using the techniques explored in this tutorial.
Still, I\'m looking forward to this becoming widely-available. It\'s always felt a bit silly to me that we had to flip to an entirely separate layout mode just to center something.
So, for many years, I treated CSS like a collection of patterns. I had a bunch of memorized snippets that I would paste from my brain, to solve whatever problem I was currently facing.
\\nThis worked alright, but it did feel pretty limiting. And every now and then, things would inexplicably break; a snippet I’d used hundreds of times would suddenly behave differently.
\\nWhen I took the time to learn CSS at a deeper level, my experience with the language completely changed. So many things clicked into place. Instead of relying on memorized snippets, I could instead rely on my intuition! ✨
\\nIn this tutorial, we’ve explored a handful of useful centering patterns, and I hope they’ll come in handy the next time you need to center something. Truthfully, though, we\'ve only scratched the surface here; there are so many ways we can use modern CSS to center stuff! Instead of memorizing even more snippets, I think it\'s better to build a robust mental model of how CSS works, so that we can come up with solutions on-the-fly!
\\nI spent 2 years of my life creating the ultimate resource for developing a deep understanding of CSS. It\'s called CSS for JavaScript Developers.
\\nIf you found this tutorial helpful, you’ll get so much out of my course. We take a similar approach to the entire CSS language, building an intuition for how all of the different layout algorithms work.
\\nIt includes interactive text content like this blog post, but also videos, exercises, real-world-inspired workshops, and even a few minigames. It\'s unlike any other course you’ve taken.
\\nYou can learn more here:
\\nBefore we wrap up, let\'s summarize what we\'ve learned by building a sort of decision tree, so that we can figure out when to use which method.
\\nLike a carpenter’s workshop, we\'ve assembled quite a lot of helpful tools in this tutorial, each with its own specialized purpose. I hope that you’ve learned some new strategies here! Happy centering. ❤️
November 25th, 2024
CSS Grid is one of the most amazing parts of the CSS language. It gives us a ton of new tools we can use to create sophisticated and fluid layouts.
\\nIt\'s also surprisingly complex. It took me quite a while to truly become comfortable with CSS Grid!
\\nIn this tutorial, I\'m going to share the biggest 💡 lightbulb moments I\'ve had in my own journey with CSS Grid. You\'ll learn the fundamentals of this layout mode, and see how to do some pretty cool stuff with it. ✨
\\nCSS Grid is the most modern tool for building layouts in CSS, but it isn\'t exactly “bleeding edge”. It\'s been supported by all major browsers since 2017!
According to caniuse, CSS Grid is supported for 97.8% of users(opens in new tab). This is excellent browser support; Flexbox is only about 0.5% more supported!
CSS is composed of several different layout algorithms, each designed for different types of user interfaces. The default layout algorithm, Flow layout, is designed for digital documents. Table layout is designed for tabular data. Flexbox is designed for distributing items along a single axis.
CSS Grid is the latest and greatest layout algorithm. It’s incredibly powerful: we can use it to build complex layouts that fluidly adapt based on a number of constraints.
The most unusual part of CSS Grid, in my opinion, is that the grid structure, the rows and columns, is defined purely in CSS:
With CSS Grid, a single DOM node is sub-divided into rows and columns. In this tutorial, we’re highlighting the rows/columns with dashed lines, but in reality, they’re invisible.
This is super weird! In every other layout mode, the only way to create compartments like this is by adding more DOM nodes. In Table layout, for example, each row is created with a <tr>, and each cell within that row with a <td> or <th>:
<table>
  <tbody>
    <!-- First row -->
    <tr>
      <!-- Cells in the first row -->
      <td></td>
      <td></td>
      <td></td>
    </tr>

    <!-- Second row -->
    <tr>
      <!-- Cells in the second row -->
      <td></td>
      <td></td>
      <td></td>
    </tr>
  </tbody>
</table>
Unlike Table layout, CSS Grid lets us manage the layout entirely from within CSS. We can slice up the container however we wish, creating compartments that our grid children can use as anchors.
We opt in to the Grid layout mode with the display property:

.wrapper {
  display: grid;
}
By default, CSS Grid uses a single column, and will create rows as needed, based on the number of children. This is known as an implicit grid, since we aren’t explicitly defining any structure.
Here’s how this works:
Implicit grids are dynamic; rows will be added and removed based on the number of children. Each child gets its own row.
By default, the height of the grid parent is determined by its children. It grows and shrinks dynamically. Interestingly, this isn’t even a “CSS Grid” thing; the grid parent is still using Flow layout, and block elements in Flow layout grow vertically to contain their content. Only the children are arranged using Grid layout.
But what if we give the grid a fixed height? In that case, the total surface area is divided into equally-sized rows:
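In code, that scenario looks something like this (a minimal sketch; the 300px height is just an arbitrary example value):

.wrapper {
  display: grid; /* implicit grid: one column, one row per child */
  height: 300px; /* with a fixed height, the rows split this space equally */
}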
By default, CSS Grid will create a single-column layout. We can specify columns using the grid-template-columns property:
Code Playground
In the playground above, and throughout this tutorial, I’m using dashed lines to show the divisions between columns and rows. In CSS Grid, these lines are invisible, and can’t be made visible.
I’m using some ✨ blog magic ✨ here, faking it with pseudo-elements. You can pop over to the CSS tab if you’d like to see how.
By passing two values to grid-template-columns (25% and 75%), I’m telling the CSS Grid algorithm to slice the element up into two columns.
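Here’s the relevant CSS for that two-column split (reusing the .wrapper class from earlier):

.wrapper {
  display: grid;
  grid-template-columns: 25% 75%; /* two columns: a quarter and three quarters of the container */
}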
Columns can be defined using any valid CSS <length-percentage> value, including pixels, rems, viewport units, and so on. We also gain access to a new unit, the fr unit:
Code Playground
fr stands for “fraction”. In this example, we’re saying that the first column should consume 1 unit of space, while the second column consumes 3 units of space. That means there are 4 total units of space, and this becomes the denominator. The first column eats up ¼ of the available space, while the second column consumes ¾.
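For reference, the declaration driving that example looks like this:

.wrapper {
  display: grid;
  grid-template-columns: 1fr 3fr; /* 1 + 3 = 4 units: ¼ and ¾ of the free space */
}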
The fr unit brings Flexbox-style flexibility to CSS Grid. Percentages and <length> values create hard constraints, while fr columns are free to grow and shrink as required, to contain their contents.
Try shrinking this container to see the difference:
In this scenario, our first column has a cuddly ghost that has been given an explicit width of 55px. But what if the column is too small to contain it?
fr-based columns are flexible, and so the column won’t shrink below its minimum content size, even if that means breaking the proportions.
To be more precise: the fr unit distributes extra space. First, column widths will be calculated based on their contents. If there’s any leftover space, it’ll be distributed based on the fr values. This is very similar to flex-grow, as discussed in my Interactive Guide to Flexbox.
In general, this flexibility is a good thing. Percentages are too strict.
We can see a perfect example of this with gap. gap is a magical CSS property that adds a fixed amount of space between all of the columns and rows within our grid.
Check out what happens when we toggle between percentages and fractions:
Notice how the contents spill outside the grid parent when using percentage-based columns? This happens because percentages are calculated using the total grid area. The two columns consume 100% of the parent’s content area, and they aren’t allowed to shrink. When we add 16px of gap, the columns have no choice but to spill beyond the container.
The fr unit, by contrast, is calculated based on the extra space. In this case, the extra space has been reduced by 16px, for the gap. The CSS Grid algorithm distributes the remaining space between the two grid columns.
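To make the comparison concrete, here’s a sketch of the two versions (I’m assuming two equal columns; the 16px gap matches the demo above):

/* Percentage-based columns: 100% of the content area, plus 16px of gap = overflow */
.wrapper {
  display: grid;
  grid-template-columns: 50% 50%;
  gap: 16px;
}

/* fr-based columns: the 16px gap is carved out first, then the rest is split */
.wrapper {
  display: grid;
  grid-template-columns: 1fr 1fr;
  gap: 16px;
}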
gap vs. grid-gap
When CSS Grid was first introduced, the grid-gap property was used to add space between columns and rows. Pretty quickly, however, the community realized that this feature would be awesome to have in Flexbox as well. And so, the property was given a more-generic name, gap.
These days, grid-gap has been marked as deprecated, and browsers have aliased it to gap. Both properties do the exact same thing. And they both have nearly-identical browser support, around 96%. (You have to go back to 2017-2018 to find browsers that have implemented grid-gap but not gap. We’re talking less than 0.1% of current users.)
And so, I recommend using gap rather than grid-gap, whether you’re working with Flexbox or CSS Grid. But there’s also no urgency when it comes to converting existing grid-gap declarations.
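In other words, the same property now covers both layout modes (the class names here are just placeholders):

/* gap works the same way in both Flexbox and Grid: */
.flex-parent {
  display: flex;
  gap: 16px;
}
.grid-parent {
  display: grid;
  gap: 16px;
}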
What happens if we add more than two children to a two-column grid?
Well, let’s give it a shot:
Code Playground
Interesting! Our grid gains a second row. The grid algorithm wants to ensure that every child has its own grid cell. It’ll spawn new rows as needed to fulfill this goal. This is handy in situations where we have a variable number of items (e.g. a photo grid), and we want the grid to expand automatically.
In other situations, though, we want to define the rows explicitly, to create a specific layout. We can do that with the grid-template-rows property:
Code Playground
By defining both grid-template-rows and grid-template-columns, we’ve created an explicit grid. This is perfect for building page layouts, like the “Holy Grail” layout at the top of this tutorial. (“Holy Grail” was the name given to the most common layout in the days of the early web: a header, sidebar, main content area, and footer.)
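As a rough sketch, a “Holy Grail”-style explicit grid might look something like this (the specific track sizes are made up for illustration):

.holy-grail {
  display: grid;
  grid-template-columns: 250px 1fr;  /* sidebar, main content */
  grid-template-rows: 80px 1fr 80px; /* header, content area, footer */
}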
Let’s suppose we’re building a calendar:
CSS Grid is a wonderful tool for this sort of thing. We can structure it as a 7-column grid, with each column consuming 1 unit of space:

.calendar {
  display: grid;
  grid-template-columns: 1fr 1fr 1fr 1fr 1fr 1fr 1fr;
}
This works, but it’s a bit annoying to have to count each of those 1fr’s. Imagine if we had 50 columns!
Fortunately, there’s a nicer way to solve this:

.calendar {
  display: grid;
  grid-template-columns: repeat(7, 1fr);
}
The repeat function will do the copy/pasting for us. We’re saying we want 7 columns that are each 1fr wide.
Here’s the playground showing the full code, if you’re curious:
Code Playground
This calendar is a quick-and-dirty example of how we can use CSS Grid to create a specific layout. It isn’t intended to be production-ready.
If you’re looking to build a date-picker, you should absolutely use an accessibility-focused library like React Aria.
By default, the CSS Grid algorithm will assign each child to the first unoccupied grid cell, much like how a tradesperson might lay tiles on a bathroom floor.
Here’s the cool thing though: we can assign our items to whichever cells we want! Children can even span across multiple rows/columns.
Here’s an interactive demo that shows how this works. Click/press and drag to place a child in the grid. (If you’re not using a pointer device like a mouse or touchscreen, keyboard-based controls have also been provided. Check out the “Help” screen below for more information.)

.parent {
  display: grid;
  grid-template-columns: repeat(4, 1fr);
  grid-template-rows: repeat(4, 1fr);
}

.child {
}
The grid-row and grid-column properties allow us to specify which track(s) our grid child should occupy.
If we want the child to occupy a single row or column, we can specify it by its number. grid-column: 3 will set the child to sit in the third column.
Grid children can also stretch across multiple rows/columns. The syntax for this uses a slash to delineate start and end:
.child {
  grid-column: 1 / 4;
}

At first glance, this looks like a fraction, ¼. In CSS, though, the slash character is not used for division; it’s used to separate groups of values. In this case, it allows us to set the start and end columns in a single declaration.
It’s essentially a shorthand for this:

.child {
  grid-column-start: 1;
  grid-column-end: 4;
}
There’s a sneaky gotcha here: the numbers we’re providing are based on the column lines, not the column indexes.
It’ll be easiest to understand this gotcha with a diagram:
Confusingly, a 4-column grid actually has 5 column lines. When we assign a child to our grid, we anchor them using these lines. If we want our child to span the first 3 columns, it needs to start on the 1st line and end on the 4th line.
In a left-to-right language like English, we count the columns from left to right. With negative line numbers, however, we can also count in the opposite direction, from right to left.

.child {
  /* Sit in the 2nd column from the right: */
  grid-column: -2;
}
The really cool thing is that we can mix positive and negative numbers. Check this out:
Notice that the child spans the full width of the grid, even though we aren’t changing the grid-column assignment at all!
We’re saying here that our child should span from the first column line to the last column line. No matter how many columns there are, this handy declaration will work as intended.
You can see a practical use case for this trick in my blog post, “Full-Bleed Layout Using CSS Grid”.
Alright, time to talk about one of the coolest parts of CSS Grid. 😄
Let’s suppose we’re building this layout:
Using what we’ve learned so far, we could structure it like this:

.grid {
  display: grid;
  grid-template-columns: 2fr 5fr;
  grid-template-rows: 50px 1fr;
}

.sidebar {
  grid-column: 1;
  grid-row: 1 / 3;
}
.header {
  grid-column: 2;
  grid-row: 1;
}
.main {
  grid-column: 2;
  grid-row: 2;
}
This works, but there’s a more ergonomic way to do this: grid areas.
Here’s what it looks like:
\\nLike before, we\'re defining the grid structure with grid-template-columns
and grid-template-rows
. But then, we have this curious declaration:
.parent {\\n grid-template-areas:\\n \'sidebar header\'\\n \'sidebar main\';\\n}
Here’s how this works: we’re drawing out the grid we want to create, almost as if we were making ASCII art (art made out of typographical characters, popular in the days of command-line computing). Each line represents a row, and each word is a name we’re giving to a particular slice of the grid. See how it sorta looks like the grid, visually?
Then, instead of assigning a child with grid-column and grid-row, we assign it with grid-area!
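The assignments themselves are refreshingly short (reusing the class names from the earlier snippet):

.sidebar {
  grid-area: sidebar;
}
.header {
  grid-area: header;
}
.main {
  grid-area: main;
}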
When we want a particular area to span multiple rows or columns, we can repeat the name of that area in our template. In this example, the “sidebar” area spans both rows, and so we write sidebar for both cells in the first column.
Should we use areas, or rows/columns? When building explicit layouts like this, I really like using areas. It allows me to give semantic meaning to my grid assignments, instead of using inscrutable row/column numbers. That said, areas work best when the grid has a fixed number of rows and columns. grid-column and grid-row can be useful for implicit grids.
There’s a big gotcha when it comes to grid assignments: tab order will still be based on DOM position, not grid position.
It’ll be easier to explain with an example. In this playground, I’ve set up a group of buttons, and arranged them with CSS Grid:
\\nCode Playground
Result
Refresh results paneIn the “RESULT” pane, the buttons appear to be in order. By reading from left to right, and from top to bottom, we go from one to six.
If you’re using a device with a keyboard, try to tab through these buttons. You can do this by clicking the first button in the top left (“One”), and then pressing Tab to move through the buttons one at a time.
You should see something like this:
The focus outline jumps around the page without rhyme or reason, from the user’s perspective. This happens because the buttons are being focused based on the order they appear in the DOM.
To fix this, we should re-order the grid children in the DOM so that they match the visual order, letting the user tab through from left to right, and from top to bottom. (This will even work correctly for right-to-left languages like Arabic and Hebrew; CSS Grid columns are mirrored in these languages, with column 1 being on the right instead of the left. And so, the same DOM order works for all languages.)
The reading-order CSS property
The CSS Working Group is aware that this is kind of a flaw with CSS Grid. Having to match the DOM order with the visual order sorta defeats the purpose of being able to assign children to specific grid cells!
A fix is in the works. In the future, the reading-order CSS property should allow us to instruct the browser to ignore the DOM order, and focus elements in the order they appear onscreen.
Unfortunately, this property is not yet implemented in any browsers, as of September 2024. You can learn more about it in Rachel Andrew’s post, Solving the CSS layout and source order disconnect.
In all the examples we’ve seen so far, our columns and rows stretch to fill the entire grid container. This doesn’t need to be the case, however!
For example, let’s suppose we define two columns that are each 90px wide. As long as the grid parent is larger than 180px, there will be some dead space at the end:
\\nWe can control the distribution of the columns using the justify-content
property:
If you’re familiar with the Flexbox layout algorithm, this probably feels familiar. CSS Grid builds on the alignment properties first introduced with Flexbox, taking them even further.
The big difference is that we’re aligning the columns, not the items themselves. Essentially, justify-content lets us arrange the compartments of our grid, distributing them however we wish.
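For example, using the 90px columns from above, we could push the dead space between the two columns (space-between is just one of the available values):

.wrapper {
  display: grid;
  grid-template-columns: 90px 90px; /* fixed-width columns leave some dead space */
  justify-content: space-between;   /* distribute that space between the columns */
}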
If we want to align the items themselves within their columns, we can use the justify-items property:
When we plop a DOM node into a grid parent, the default behaviour is for it to stretch across that entire column, just like how a <div> in Flow layout will stretch horizontally to fill its container. With justify-items, however, we can tweak that behaviour.
This is useful because it allows us to break free from the rigid symmetry of columns. When we set justify-items to something other than stretch, the children will shrink down to their default width, as determined by their contents. As a result, items in the same column can be different widths.
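Here’s a quick sketch of that (center is just one of the possible values):

.wrapper {
  display: grid;
  grid-template-columns: 1fr 1fr;
  justify-items: center; /* children shrink to their content width and sit centered in their column */
}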
We can even control the alignment of a specific grid child using the justify-self property:
Unlike justify-items, which is set on the grid parent and controls the alignment of all grid children, justify-self is set on the child. We can think of justify-items as a way to set a default value for justify-self on all grid children.
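For a single rebellious child, that might look like this (end is just an example value):

.child {
  justify-self: end; /* overrides the parent's justify-items, for this one child only */
}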
So far, we’ve been talking about how to align stuff in the horizontal direction. CSS Grid provides an additional set of properties to align stuff in the vertical direction:
align-content is like justify-content, but it affects rows instead of columns. Similarly, align-items is like justify-items, but it handles the vertical alignment of items inside their grid area, rather than horizontal.
To break things down even further:
justify — deals with columns.
align — deals with rows.
content — deals with the grid structure.
items — deals with the DOM nodes within the grid structure.
Finally, in addition to justify-self, we also have align-self. This property controls the vertical position of a single grid item within its cell.
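Putting that mnemonic into code, the whole family looks like this (the values are chosen arbitrarily, just to show which property targets what):

.wrapper {
  display: grid;
  justify-content: center; /* the columns, within the grid container */
  align-content: center;   /* the rows, within the grid container */
  justify-items: start;    /* every item, horizontally, within its cell */
  align-items: start;      /* every item, vertically, within its cell */
}

.special-child {
  justify-self: end; /* one item, horizontally */
  align-self: end;   /* one item, vertically */
}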
There’s one last thing I want to show you. It’s one of my favourite little tricks with CSS Grid.
Using only two CSS properties, we can center a child within a container, both horizontally and vertically:
The place-content property is a shorthand. It’s syntactic sugar for this:
.parent {
  justify-content: center;
  align-content: center;
}
As we’ve learned, justify-content controls the position of columns. align-content controls the position of rows. In this situation, we have an implicit grid with a single child, and so we wind up with a 1×1 grid. place-content: center pushes both the row and column to the center.
There are lots of ways to center a div in modern CSS, but this is the only way I know of that only requires two CSS declarations!
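For the record, here are those two declarations:

.parent {
  display: grid;
  place-content: center; /* centers the lone child both horizontally and vertically */
}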
In this tutorial, we’ve covered some of the most fundamental parts of the CSS Grid layout algorithm, but honestly, there’s so much more we haven’t talked about!
If you found this blog post helpful, you might be interested to know that I’ve created a comprehensive learning resource that goes way deeper. It’s called CSS for JavaScript Developers.
The course uses the same technologies as my blog, and so it’s chock full of interactive explanations. But there are also bite-sized videos, practice exercises, real-world-inspired projects, and even a few mini-games.
If that sounds appealing, you’ll love the course. It follows a similar approach, but for the entire CSS language, and with hands-on practice to make sure you’re actually developing new skills.
It’s specifically built for folks who use a JS framework like React/Angular/Vue. 80% of the course focuses on CSS fundamentals, but we also see how to integrate those fundamentals into a modern JS application, how to structure our CSS, stuff like that.
If you struggle with CSS, I hope you’ll check it out. Gaining confidence with CSS is game-changing, especially if you’re already comfortable with HTML and JS. When you complete the holy trinity, it becomes so much easier to stay in flow, to truly enjoy developing web applications.
You can learn more here:
I hope you found this tutorial useful. ❤️
November 25th, 2024